
Breast cancer is the second most common cancer in women after skin cancer [1]. According to WHO statistics, about 7.8 million women had been diagnosed with breast cancer by the end of 2020 [2], and the disease is becoming more common with each passing year, driving up mortality worldwide; it has become the second greatest cause of death among women [3]. This threatening trend can be countered if lumps are detected early [4]. Mammography is used extensively in early breast cancer screening because it is comparatively inexpensive [5]. During the diagnostic process, factors such as the radiologist's experience and attentiveness play a vital role [6,7]. Breast cancer is caused by the irregular growth of cells in the breast; it starts when cells begin to develop in an unbalanced fashion [8]. Cancer-free breast tissue may grow unusually, but it does not spread beyond the breast. A healthcare provider must examine any breast lump or change to determine whether it is benign or malignant. A tumor can be benign (non-cancerous) or malignant (cancerous). Benign lumps are common; they do not infiltrate neighboring tissues or spread to other parts of the body. Malignant tumors, on the other hand, are cancerous: if left untreated, malignant cells ultimately spread beyond the initial tumor site to other parts of the body [9].
A plethora of imaging systems has been developed to diagnose and treat breast cancer, and experts have suggested many models for its early detection. Many studies have also been carried out [10,11] to improve diagnostic accuracy. Data mining has proved very useful for extracting important information from large datasets [12], and its techniques have been extensively exploited in the detection of many diseases. Techniques such as machine learning (ML), statistical analysis, data warehousing, fuzzy systems, databases, and neural networks have been employed in the prediction and diagnosis of various types of cancer [13]. Additionally, deep models have helped academics and practitioners grapple with the intricacies of real-world training [14,15,16]. In training ML and deep learning (DL) models, the gradient descent optimization method is crucial, and in recent years many variant algorithms [17,18,19,20] have been developed to improve it further. Machine learning is a branch of computer science that enables computers to carry out tasks without being explicitly programmed to do so [21,22,23]. Introducing a cost function into machine learning and data mining lets machines discover appropriate weights for their outputs [24]. Optimization then finds the function parameters that minimize this cost, a step on which many machine learning methods depend [25,26].
Gradient descent is applied to optimize multiple loss functions, such as those of the Support Vector Machine (SVM) and Logistic Regression (LR) methods [27]. Gradient descent optimization-based techniques for binary classification have produced better accuracy for disease detection; such diseases are characterized by certain parameters that provide the necessary basis for breast cancer detection research [28]. Deep learning is an offshoot of ML whose primary thrust is learning data representations rather than job-specific algorithms [29]. Common classical diagnosis methods are not delivering what the current era needs, so more accurate and reliable breast cancer diagnosis techniques are required to curb the rising number of deaths among women [30]. The objective of this study is to build a deep learning-based model for breast cancer diagnosis that has a better prediction rate with minimum complexity. To develop such a diagnostic system, a deep extreme learning machine with gradient descent optimization (DEGDO) is proposed in this work. The DEGDO-based model consists of two major phases: training and validation. There are three major layers in the training phase. The data acquisition layer collects the data used for training, taken from some source or from test reports, and stores it in the object layer in raw form. Data in the object layer can contain noise, which must be removed, so the preprocessing layer processes the raw data to remove noise and handle missing values. The application layer is the main part of the training phase and consists of two sub-layers: the prediction layer and the evaluation layer. The prediction layer holds the actual DEGDO model that predicts the disease, and the result is then assessed by the evaluation layer using certain evaluation parameters. The measured accuracy is compared with the required accuracy: if it meets the training criterion, the training process ends and the trained model is stored on the server; otherwise, the model is retrained, and this process continues until the required accuracy is obtained. In the validation phase, data is provided to the trained model imported from the server and the diagnostic result is predicted. To carry out the experiments, a dataset has been taken from the UCI Machine Learning Repository; it consists of 569 instances with 32 attributes per record [31]. The proposed model predicts the diagnostic status as positive or negative based on the provided set of parameters.
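To make this layered workflow concrete, the following sketch outlines a train-until-criterion loop of the kind described above. It is only a minimal illustration, not the authors' implementation: an sklearn MLPClassifier stands in for the DEGDO predictor, and the accuracy threshold, file name, and round limit are assumptions.

```python
import joblib
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

REQUIRED_ACCURACY = 0.98  # assumed training criterion

def train_until_criterion(X, y, max_rounds=10):
    """Training phase: fit, evaluate, and retrain until the criterion is met."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    model = None
    for _ in range(max_rounds):
        model = MLPClassifier(hidden_layer_sizes=(16,) * 6, max_iter=1000)
        model.fit(X_tr, y_tr)                                # prediction layer
        acc = accuracy_score(y_te, model.predict(X_te))      # evaluation layer
        if acc >= REQUIRED_ACCURACY:
            break
    joblib.dump(model, "degdo_model.pkl")                    # store trained model
    return model

def diagnose(sample):
    """Validation phase: import the stored model and predict one record."""
    model = joblib.load("degdo_model.pkl")
    return "Positive" if model.predict([sample])[0] == 1 else "Negative"
```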
The current study is geared towards enhancing the accuracy of breast cancer detection. It introduces a more reliable classification method based on a deep extreme learning machine with gradient descent optimization, resulting in a higher diagnostic accuracy rate, and it also presents a comparative study of state-of-the-art methods on the same dataset. Ten different evaluation metrics have been employed to demonstrate the utility, effectiveness, and authenticity of the proposed work: accuracy, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), false-positive rate (FPR), false-negative rate (FNR), false discovery rate (FDR), F-score (F1), and Matthews correlation coefficient (MCC). The area under the curve (AUC) of the receiver operating characteristic (ROC) curve has also been used to evaluate the proposed model, and both split and K-fold cross-validation have been applied.
The rest of the article is organized as follows. Section 2 summarizes the relevant literature. Section 3 describes the proposed model and the procedure for classifying breast cancer as malignant or benign. Details of the dataset used in this research are given in Section 4. Section 5 presents the experimental results, discussion, and findings. Finally, Section 6 concludes the paper.
The conventional approach to cancer diagnosis rests on what is colloquially called the "gold standard", which entails three screenings: clinical assessment, radiographic imaging, and pathology [32]. The traditional approach, rooted in regression, indicates the presence of malignancy, whereas state-of-the-art machine learning methods are based on model development: a model is created to predict previously unseen data, and, given a list of parameters that depends on the nature of the problem, it generates the required outcome through the twin processes of training and testing [33]. Pre-processing, feature selection or extraction, and classification are the three major processes used in machine learning [34].
Breast cancer is the second leading cause of cancer death after lung cancer, and about 8% of women are diagnosed with breast cancer during their lifetime. Machine learning techniques have been frequently employed to categorize breast cancer. Research on diagnosing breast cancer with a KNN classifier reported better accuracy (97.5%) and a lower error rate than its Naive Bayes counterpart (96.19%) [35]. Algorithmic detection of breast cancer has become a prominent medical problem in the current era, and early diagnosis is, no doubt, key to a patient's survival. One study showed how a decision tree algorithm, along with some other approaches, was used to construct a practical breast cancer diagnosis model for clinical and systematic treatment; the experimental results demonstrated the viability and feasibility of the proposed concept, and the effectiveness of the decision tree technique for breast cancer detection was shown through experiments [36]. Besides, a combination of K-means and K-Support Vector Machine (K-SVM) algorithms was developed to extract information useful for diagnosing tumors. The K-means algorithm was used to detect hidden patterns of the cancerous cells; each tumor's membership in these patterns was measured and treated as a new feature, and a support vector machine was then trained as a classifier to distinguish between benign and malignant tumors. The proposed technique raised the accuracy on the Wisconsin diagnostic breast cancer data to 97.38%. The findings in [37] not only reflected the potential of the recommended solution for breast cancer diagnosis but also indicated a lower time cost during the preparation phase.
To identify hematoxylin- and eosin-stained breast biopsy images using a convolutional neural network, researchers applied a deep-learning-based method and obtained 83.30% accuracy [38]. Moreover, the Sequential Minimal Optimization (SMO) and K Nearest Neighbor (IBk) classification algorithms were used for breast cancer estimation with certain ensemble techniques. The dataset used for these experiments consisted of 683 records with 9 parameters per record, and the Weka data mining tool with K-fold cross-validation was used to determine the accuracy of the proposed method: SMO achieved 96.19% accuracy while IBk reached 95.90% for breast cancer detection [39]. Breast cancer data was also collected from the Iranian Centre on Breast Cancer (ICBC), and the results of DT (C4.5), ANN, and SVM-based models were evaluated through an array of validation metrics such as specificity and sensitivity; the results showed that SVM, with an accuracy of 95.70%, was the best predictive algorithm for breast cancer screening [40]. Apart from that, using the ADTree, J48, and CART algorithms on digital files in DICOM format, the breast cancer dataset of the Indian Breast Cancer Center Adyar, Chennai was examined to build a diagnostic model. The dataset used by the researchers was in CSV format, and three different data mining (DM) algorithms were used to investigate prediction accuracy. The findings revealed that the CART algorithm was the most suitable, achieving an accuracy of 98.5%, while ADTree and DT J48 achieved accuracies of 97.7 and 98.1%, respectively [41]. Moreover, researchers performed a comparative study of NB, RF, LR, ANN-MLP, and KNN-based models for breast cancer diagnosis, using UCI data and the top ten parameters from the dataset. Each algorithm was applied to the dataset to test its performance: the accuracies of KNN, NB, and RF were 72.3, 71.6, and 69.5%, respectively, while LR and ANN-MLP achieved just 68.8 and 64.6% [42]. In Nigeria, breast cancer is a very common disease, yet no diagnostic facility is available for such a heterogeneous disease. A dataset with 17 attributes was taken from LASUTH, the Nigerian Cancer Registry. The NB probabilistic method was applied for controlling the dependent group count in the probabilistic model, and top-to-bottom greedy checks on the training data were implemented in the decision tree J48. J48 turned out to be the most suitable procedure for predicting and diagnosing breast cancer, with an accuracy of 94.20%, while NB obtained an accuracy of just 82.60% [43].
Deep learning approaches including multiple kernel/activation functions such as maxout, tanh, and ExpRectifier were applied to diagnose breast cancer on infected cells, and a comparison of different ML techniques such as NB, DT, SVM, and RF was carried out on the Wisconsin dataset. The highest accuracy, 96.99%, was obtained by the Exponential Rectifier Linear Unit activation function, a deep learning approach, for diagnosing breast cancer [44]. For the breast cancer diagnosis dataset, DT, NB, and KNN were also used to build a diagnostic model; using the original Wisconsin dataset, the results indicated that the accuracy of the NB classifier reached 95.99%, higher than that of the DT and KNN algorithms [45]. Moreover, comparative analyses of different nonlinear supervised learning models such as MLP, KNN, SVM, CART, and Gaussian NB have been carried out for breast cancer detection. Efficient breast cancer identification through comparison of these methods was the main theme of that research: the prediction accuracy of every algorithm was calculated independently on the Wisconsin breast cancer dataset, a K-fold cross-validation approach was applied for performance analysis, and MLP produced the highest accuracy of 96.70% [46]. To predict the recurrence of breast cancer, a data mining-based model was developed with two main components: the Extreme Learning Machine (ELM) and the Bat algorithm. The biases and random weights were generated using the Bat algorithm, and MATLAB was used to carry out experiments on the Wisconsin breast cancer dataset with attributes selected via the correlation coefficient approach. ELM and the Bat algorithm were employed to predict whether the breast cancer was recurring or non-recurring, and to verify consistency at various training levels, tanh and sigmoid activation functions were applied; with tanh as the activation function, 93.75% accuracy was recorded with a minimum error rate (RMSE = 0.30) [47]. A greedy search algorithm was also proposed to build a diagnostic system for predicting breast cancer, as described next.
To select the important features from the broad set of attributes and discard the trivial ones, SVM with Constrained Search Sequential Floating Forward Search (CSSFFS) was used. For this experiment, the dataset was compiled from the WDBC machine learning database, and K-fold cross-validation was used to establish the results for CSSFFS with SVM; the main purpose of using SVM was to eliminate irrelevant features. Using the CSSFFS method with a subset of top attributes, accuracy was enhanced up to 98.25%, while an RBF network produced a decent accuracy of 93.60% when all attributes were considered [48]. Besides, a deep learning-based automated mammography processing technique was employed to estimate patients' risk of developing breast cancer [49]. Automatic classification was conducted for regions containing the cancerous part in breast images; the grasshopper optimization algorithm and CNNs were used, and the model achieved 93% accuracy [50]. Moreover, breast cancer detection from histology images using a deep learning CNN showed reasonable results with an 86.60% detection accuracy [51]. In yet another work, a deep feature-based model employing CNNs and decision trees was used for breast mass categorization [52]. A patch-based LeNet, a U-Net, and transfer learning with a pre-trained FCN-AlexNet were employed to identify lesions in breast ultrasound images [53]. A pre-trained CNN whose learned parameters were transferred to another CNN for mitosis classification obtained an F-score of 0.80 [54]. In another work, ELM was used to categorize breast tumor characteristics and its results were compared to an SVM classifier [55]. By training a CNN with a large quantity of time-series data, [56] used CNNs for breast cancer risk prediction, and based on 420 mammography time series, [57] utilized deep neural networks to forecast the probability of breast cancer in the near future. Moreover, [58] abstracted breast tumor representations using a CNN and subsequently categorized the tumors as malignant or benign. Because features may influence both the efficacy and efficiency of a breast CAD system, [59] proposed an image retrieval system utilizing Zernike Moments (ZMs) to obtain the required features. Apart from that, machine learning and deep learning optimization are popular techniques with potential in diverse fields such as price control in the medical domain, agriculture, and business intelligence; Chun-Hui He's iteration algorithms can also be used for optimization [60,61].
The deep extreme gradient descent optimization-based model comprises two stages: training and validation. The data acquisition layer, the pre-processing layer, and the application layer are the three levels that make up the training phase. The data acquisition layer collects data from a source and stores it in raw form for future use as a database. This raw data may contain noisy values, since it is transmitted through an online link from the source to the acquisition layer. The pre-processing layer deals with missing values and eliminates noise from the given data: the moving average technique is employed to approximate missing data, and normalization addresses the problem of noise. Once pre-processing is completed, the application layer starts its work. The application layer comprises two sub-layers: prediction and performance evaluation. The deep extreme gradient descent optimization method is used in the prediction layer, and the performance evaluation layer assesses the predictive model in terms of validation metrics such as accuracy, sensitivity, specificity, precision, and miss rate. When the required learning criterion is met, the trained model is stored on the server for use in the later phase. In the validation phase, the data acquisition layer provides data as input to the trained model, which is imported from the server to predict the disease. Figure 1 demonstrates the methodological diagram of the proposed model based on deep extreme gradient descent optimization.
The deep extreme learning machine has been applied in varied fields. A conventional artificial neural network needs more samples, consumes more time for learning, and can produce over-fitted results [62]. The deep extreme learning machine is extensively used for regression and classification problems in different fields; its learning rate is better and its computational complexity is much lower than that of traditional artificial neural networks. The structure of a deep extreme learning machine model consists of three parts: the input layer, multiple hidden layers, and the output layer [63]. The extreme learning machine was first proposed in [64].
Figure 2 shows the diagrammatical model for the proposed system based on DEGDO, where ip denotes the nodes of the input layer; hidden layer nodes are represented by h, and ODP shows the output layer node.
A mathematical representation of the moving average filter is given in Eq (1) [77].
$$P[x] = \frac{1}{G}\sum_{T=0}^{G-1} u(x+T) \qquad (1)$$
Here u represents the input, P denotes the output, and G is the number of points in the moving average. To increase the predictive ability and improve the training process of the machine learning model, the dataset is normalized to the interval [0, 1] with the help of Eq (2) [77,78].
$$C = \frac{u_x - u_{\min}}{u_{\max} - u_{\min}}, \qquad x = 1, 2, 3, \dots, N \qquad (2)$$
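As a concrete illustration of Eqs (1) and (2), the short NumPy sketch below applies a G-point moving average and min-max normalization to a feature column; the window length and sample values are illustrative assumptions, not values from the dataset.

```python
import numpy as np

def moving_average(u, G=3):
    """Eq (1): G-point moving average of the input series u."""
    u = np.asarray(u, dtype=float)
    return np.convolve(u, np.ones(G) / G, mode="valid")

def min_max_normalize(u):
    """Eq (2): rescale a feature column to the interval [0, 1]."""
    u = np.asarray(u, dtype=float)
    return (u - u.min()) / (u.max() - u.min())

# Illustrative column with one missing value
radius_mean = np.array([14.2, np.nan, 20.6, 12.4, 18.1])
filled = np.where(np.isnan(radius_mean), np.nanmean(radius_mean), radius_mean)
print(moving_average(filled), min_max_normalize(filled))
```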
First, training samples $[A, B] = [a_v, b_v]$ $(v = 1, 2, \dots, R)$ are taken, with inputs $A = [a_{v1}, a_{v2}, a_{v3}, \dots, a_{vr}]$ and target matrix $B = [b_{11}, b_{12}, b_{13}, \dots, b_{1r}]$; the matrices A and B can then be written as in Eqs (3) and (4), respectively [77,78,79], where A and B carry the dimensions of the input and output matrices. The extreme learning machine adjusts the weights between the input and hidden layers. Considering the $c$-th input layer node and the $l$-th hidden layer node, the weights between them are collected in the matrix given in Eq (5) [80]. In summary, matrix A holds the input features, B the targets, C the weights between the input and hidden layers, and D (Eq (6)) the weights between the hidden neurons and the output layer neurons.
$$A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1v}\\ a_{21} & a_{22} & \cdots & a_{2v}\\ a_{31} & a_{32} & \cdots & a_{3v}\\ \vdots & & & \vdots\\ a_{E1} & a_{E2} & \cdots & a_{Ev}\end{bmatrix} \qquad (3)$$
$$B=\begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1v}\\ b_{21} & b_{22} & \cdots & b_{2v}\\ b_{31} & b_{32} & \cdots & b_{3v}\\ \vdots & & & \vdots\\ b_{E1} & b_{E2} & \cdots & b_{Ev}\end{bmatrix} \qquad (4)$$
$$C=\begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1v}\\ c_{21} & c_{22} & \cdots & c_{2v}\\ c_{31} & c_{32} & \cdots & c_{3v}\\ \vdots & & & \vdots\\ c_{E1} & c_{E2} & \cdots & c_{Ev}\end{bmatrix} \qquad (5)$$
$$D=\begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1v}\\ d_{21} & d_{22} & \cdots & d_{2v}\\ d_{31} & d_{32} & \cdots & d_{3v}\\ \vdots & & & \vdots\\ d_{E1} & d_{E2} & \cdots & d_{Ev}\end{bmatrix} \qquad (6)$$
Furthermore, Eq (7) shows the biases of the hidden layer nodes, which are selected randomly by the extreme learning machine [81,82]. The function f(x) is the network activation function chosen by the extreme learning machine. Eq (8) shows the resulting matrix formed from the data supplied by the data acquisition layer, and Eq (9) gives one column vector of this resulting matrix [60,77,80].
$$B=[b_1, b_2, b_3, \dots, b_E]^{\prime} \qquad (7)$$
$$H=[h_1, h_2, h_3, \dots, h_z]_{x\times y} \qquad (8)$$
$$h_j=\begin{bmatrix} h_{1j}\\ h_{2j}\\ h_{3j}\\ \vdots\\ h_{xj}\end{bmatrix}=\begin{bmatrix} \sum_{q=1}^{\propto} M_{l1}\, f(\omega_l \alpha_q + b_l)\\ \sum_{q=1}^{\propto} M_{l2}\, f(\omega_l \alpha_q + b_l)\\ \sum_{q=1}^{\propto} M_{l3}\, f(\omega_l \alpha_q + b_l)\\ \vdots\\ \sum_{q=1}^{\propto} M_{ly}\, f(\omega_l \alpha_q + b_l)\end{bmatrix}, \quad (q = 1, 2, 3, \dots, y) \qquad (9)$$
Here Z is the hidden layer output and H′ denotes the transpose of H. Equation (11) shows how the values of the weight matrix β are calculated using the least squares method [79,81].
$$Z\beta = H^{\prime} \qquad (10)$$
$$\beta = Z^{+} H^{\prime} \qquad (11)$$
To increase the overall stability of the network, a regularization term on β has been utilized [65].
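The following sketch illustrates the ELM step just described, in the spirit of Eqs (10) and (11): random input weights and biases, a sigmoid hidden layer, and output weights β obtained from the hidden-layer matrix with a small ridge term for the stability mentioned above. The variable names, hidden size, and regularization constant are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(A, B, n_hidden=16, reg=1e-3, seed=0):
    """Fit an ELM: random hidden weights, then beta from least squares (Eq (11))."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((A.shape[1], n_hidden))   # random input weights (Eq (5))
    b = rng.standard_normal(n_hidden)                 # random biases (Eq (7))
    Z = sigmoid(A @ W + b)                            # hidden-layer output matrix
    # Ridge-regularized least squares instead of the bare pseudo-inverse
    beta = np.linalg.solve(Z.T @ Z + reg * np.eye(n_hidden), Z.T @ B)
    return W, b, beta

def elm_predict(A, W, b, beta):
    return sigmoid(A @ W + b) @ beta
```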
Deep learning has become a top research niche for scientists owing to its remarkable capabilities. At least four layers, including one input and one output layer, are needed for a system to qualify as a deep learning system [64]. Neurons in the different layers of a deep learning network are trained with different parameters based on the result of the previous layer, and a deep learning network holds great promise for processing extensive datasets. To capture the positive and outstanding features of both ELM and DL, the proposed work uses a deep extreme gradient descent optimization-based approach. The proposed model is a deep extreme learning machine with gradient descent that consists of one input layer, six hidden layers, and one output layer. The input layer contains sixteen (16) neurons, each hidden layer also consists of sixteen (16) neurons, and the output layer consists of a single neuron. The number of hidden layer nodes was selected by a trial-and-error scheme. The output of the 2nd hidden layer is obtained as [77,81]:
$$Z_g = H\beta^{+}, \qquad g = 1, 2, 3, 4, \dots, 6 \qquad (12)$$
where β⁺ represents the generalized inverse of the β matrix. Equation (11) can be used to obtain the values for the 2nd hidden layer [77,80].
$$f(W_g Z + B_g) = Z_g \qquad (13)$$
Four parameters are present in Eq (13): $W_g$ represents the weights between the first two hidden layers, $Z$ denotes the outputs of the first hidden layer's neurons, $B_g$ is the corresponding bias term, and $Z_g$ is the projected output of the second hidden layer [78,79].
$$W_{FG} = f^{-1}(g)\, F_G^{+} \qquad (14)$$
Here $F_G^{+}$ represents the inverse of $F_G$. Moreover, f(x) is used as an activation function in evaluating Eq (5) [80,81]. Therefore, the activation function f(x) is applied to revise the second hidden layer's outcome, as given below.
$$Z_{g+1} = f(W_{FG} Z_G)$$
such that $W_{FG} Z_G = Qh_{g+1}$, giving
$$Z_{g+1} = f(Qh_{g+1}) \qquad (15)$$
As per Eq (16), the weight matrix β between the 2nd and 3rd layers is updated [80], where $Z_{g+1}^{+}$ is the generalized inverse of $Z_{g+1}$. Equation (17) gives the activation function used to compute the estimated layer's output [80,81].
$$\beta_{g+1} = Z_{g+1}^{+} H \qquad (16)$$
$H^{+}\beta$ is the inverse of the weight matrix $\mu_{g+1}$. The matrix $W_{FG} = [B_{g+1}, W_{g+1}]$ is then set by the deep extreme learning machine, and Eqs (10) and (11) are used to obtain the output of the subsequent layers [77,78,79].
$$f(x) = \frac{1}{1 + e^{-x}} \qquad (17)$$
The back-propagation algorithm comprises weight initialization, feed-forward computation, error back-propagation, and updating of the weights and biases. The sigmoid f(x) is the activation function of every hidden-layer neuron. The back-propagated error of the DELM is composed as follows [80,81]:
$$Error_{BP} = \frac{1}{2}\sum_{n} (ao_n - to_n)^2 \qquad (18)$$
where $to$ is the desired (target) output and $ao$ is the measured or calculated output.
Equation (18) shows the error's back-propagation. Adjustment of weights is required to minimize the overall error [77,78,79,80,81]. Equation (19) presents the output layer's rate of weight change [77,78,79,80,81].
$$\Delta Z_{m,n}^{hd=6} \propto \frac{\partial E}{\partial Z^{hd=6}} \qquad (19)$$
where m = 1, 2, 3, …, 10 (neurons) and n indexes the output layer.
$$\Delta Z_{m,n}^{hd=6} = -\,\mathrm{constant}\cdot\frac{\partial E}{\partial Z^{hd=6}} \qquad (20)$$
Applying the chain rule on Eq (20) generates Eq (21) [79,80].
$$\Delta Z_{m,n}^{hd=6} = -\,\mathrm{constant}\cdot\frac{\partial E}{\partial ao_n^{hd}}\times\frac{\partial ao_n^{hd}}{\partial QhZ_n^{hd}}\times\frac{\partial QhZ_n^{hd}}{\partial Z_n^{hd}} \qquad (21)$$
After simplification of Eq (21) it can be written as [80,82]:
$$\Delta Z_{m,n}^{hd=6} = \mathrm{constant}\,(to_n - ao_n)\times\big(ao_n^{hd}(1 - ao_n^{hd})\times ao_n^{hd}\big)$$
Passing from $ao$ to $Z_6$,
$$\Delta Z_{m,n}^{hd=6} = \mathrm{constant}\,\rho_n\, ao_n^{hd} \qquad (22)$$
The following derivation shows how the proper weight change for the hidden-layer weights is calculated [80,82]. This case is more complex because every node's error propagates back through its weighted links.
Passing from $Z_6$ to $Z_1$ (or $Z_k$), where k = 5, 4, 3, 2, 1:
$$\Delta Z_{m,k}^{hd} \propto -\Big[\sum_{n}\frac{\partial E}{\partial ao_n^{hd}}\times\frac{\partial ao_n^{hd}}{\partial QhZ_n^{hd}}\times\frac{\partial Z_n^{hd}}{\partial Z_k^{hd}}\Big]\times\frac{\partial ao_k^{hd}}{\partial QhZ_k^{hd}}\times\frac{\partial QhZ_k^{hd}}{\partial Z_{m,k}^{hd}}$$
$$\Delta Z_{m,k}^{hd} = -E\Big[\sum_{n}\frac{\partial E}{\partial ao_n^{hd}}\times\frac{\partial ao_n^{hd}}{\partial QhZ_n^{hd}}\times\frac{\partial Z_k^{hd}}{\partial ao_k^{hd}}\Big]\times\frac{\partial ao_k^{hd}}{\partial Z_k^{hd}}\times\frac{\partial QhZ_k^{hd}}{\partial Z_{m,k}^{hd}}$$
$$\Delta Z_{m,k}^{hd} = E\Big[\sum_{n}(to_n - ao_n^{hd})\times ao_k^{hd}(1 - ao_n^{hd})\times Z_{k,n}\Big]\times ao_n^{hd}(1 - ao_n^{hd})\times (L_{m,k})$$
$$\Delta Z_{m,k}^{hd} = E\Big[\sum_{n}(to_n - ao_n^{hd})\times ao_k^{hd}(1 - ao_n^{hd})\times Z_{k,n}\Big]\times ao_k^{hd}(1 - ao_k^{hd})\times (L_{m,k})$$
$$\Delta Z_{m,k}^{hd} = E\big[\rho_k (L_{m,k})\big]$$
where
$$\rho_k = \Big[\sum_{n}\rho_n\, Z_{k,n}^{hd}\Big]\times ao_k^{hd}(1 - ao_k^{hd})$$
The update of the weights and biases between the output and hidden layers is given in Eq (23), where $\nabla Z_{m,n}$ represents the gradient with respect to $Z_{m,n}$ [77,79,80,81,82].
$$\Delta Z_{m,n}^{hd=6}(u+1) = \Delta Z_{m,n}^{hd=6}(u) + \tau\,\nabla Z_{m,n}^{hd=6} \qquad (23)$$
The update of the weights and biases between the input and hidden layers is given in Eq (24), where $\nabla Z_{m,k}$ is the gradient with respect to $Z_{m,k}$ [77,79,80,81,82].
$$\Delta Z_{m,k}^{hd}(u+1) = \Delta Z_{m,k}^{hd}(u) + \tau\,\nabla Z_{m,k}^{hd} \qquad (24)$$
Here τ is the learning rate: it sets the step size taken while searching for a local minimum.
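The update rules of Eqs (23) and (24) amount to a gradient descent step scaled by τ. The sketch below shows the corresponding update for a single sigmoid output layer, using the error signal ρ of Eq (22); the array shapes and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradient_step(W, Z_prev, ao, to, tau=0.05):
    """One gradient descent update for a sigmoid output layer.

    E = 1/2 * sum (ao - to)^2 (Eq (18)); rho = (to - ao) * ao * (1 - ao)
    mirrors Eq (22), and the weights move by tau times the gradient as in
    Eqs (23)-(24)."""
    rho = (to - ao) * ao * (1.0 - ao)   # error signal at the output
    grad = Z_prev.T @ rho               # gradient with respect to the weights
    return W + tau * grad               # step of size tau (learning rate)

# Tiny illustrative example
rng = np.random.default_rng(1)
Z_prev = rng.random((4, 16))                  # outputs of the previous layer
W = rng.standard_normal((16, 1))
to = np.array([[1.0], [0.0], [1.0], [0.0]])  # desired outputs
ao = sigmoid(Z_prev @ W)                      # calculated outputs
W = gradient_step(W, Z_prev, ao, to)
```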
Materials and methods applied in this study are described below.
In this research, one dataset has been used for experimentation; it is accessible from the UCI Machine Learning Repository. The Cleveland data package was used for training, testing, and validating the prediction of breast cancer. Besides, the Wisconsin Breast Cancer Diagnostic (WBCD) dataset [66] is open to the public for analysis and research and includes 32 human and biological characteristics. The selection of features plays a vital role in classification outcomes [67]: an increase in performance and a decrease in the time complexity of machine learning can be achieved through appropriate feature selection [68]. The top 16 features have been selected using univariate and recursive feature selection strategies. The data is distributed between two classes (Positive and Negative), with 355 healthy (Negative) samples and 214 diseased (Positive) samples. The selected features are listed in Table 1; a hedged feature-selection sketch follows the table.
Sr. No. | Attributes | Symbol | Type |
1 | Mean of the concave sections of the contour's severity | concavity_mean | Numeric |
2 | The average of distances between the center and the peripheral points | radius_mean | Numeric |
3 | The mean value for the severity of concave sections of the contour that is the worst or the greatest | concavity_worst | Numeric |
4 | Area | area_se | Numeric |
5 | Gray-scale standard deviation | Texture | Numeric |
6 | Worst symmetry | symmetry_worst | Numeric |
7 | arithmetic mean of the regional variance in radius lengths | smoothness_mean | Numeric |
8 | The standard error for the severity of concave contour segments | concavity_se | Numeric |
9 | The mean value that is the worst or the greatest for local variance in radius lengths | smoothness_worst | Numeric |
10 | The worst or biggest number for the mean of "coastline approximation"-1 | fractal_dimension_worst | Numeric |
11 | The standard error for approximating the coastline-1 | fractal_dimension_se | Numeric |
12 | Symmetry mean | symmetry_ mean | Numeric |
13 | arithmetic mean for "coastline approximation"-1 | fractal_dimension_mean | Numeric |
14 | Symmetry se | symmetry_se | Numeric |
15 | standard inaccuracy in radius lengths due to local variation | smoothness_se | Numeric |
16 | standard inaccuracy for the standard deviation of gray-scale values | texture_se | Numeric |
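The 16 attributes in Table 1 result from univariate and recursive feature selection. A sketch of such a selection on the WDBC data with scikit-learn is shown below; only k = 16 follows the text, while the scoring function and estimator are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, RFE, f_classif
from sklearn.linear_model import LogisticRegression

# WDBC: 569 samples, 30 numeric features (plus ID and diagnosis in the raw file)
X, y = load_breast_cancer(return_X_y=True)

# Univariate selection: keep the 16 features with the highest ANOVA F-score
univariate = SelectKBest(score_func=f_classif, k=16).fit(X, y)

# Recursive feature elimination down to 16 features with a simple estimator
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=16).fit(X, y)

print("Univariate picks:", univariate.get_support(indices=True))
print("RFE picks:       ", rfe.get_support(indices=True))
```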
An array of performance evaluation metrics has been developed to evaluate the performance of machine learning algorithms. The frequently used metrics are accuracy (Acc), specificity (Sp), precision (Pres), sensitivity (Sn), F-measure, negative predictive value (NPV), false discovery rate (FDR), false positive rate (FPR), false negative rate (FNR), and the Matthews correlation coefficient (MCC), which assesses classifier consistency using the true positive, true negative, false positive, and false negative counts. These criteria are as follows:
Using Eq (25), accuracy is calculated as [83]:
$$\mathrm{Accuracy\ (Acc)} = \frac{TrP + TrN}{TrP + TrN + FaP + FaN} \qquad (25)$$
Using Eq (26), sensitivity/recall is calculated as [83,84,85]:
$$\mathrm{Sensitivity/Recall\ (Sn)} = \frac{TrP}{TrP + FaN} \qquad (26)$$
Using Eq (27), specificity is calculated as [83,84]:
$$\mathrm{Specificity\ (Sp)} = \frac{TrN}{TrN + FaP} \qquad (27)$$
Using Eq (28), precision (PPV) is calculated as [85]:
$$\mathrm{Precision\ (PPV)} = \frac{TrP}{TrP + FaP} \qquad (28)$$
Using Eq (29), NPV is calculated as [85]:
$$\mathrm{Negative\ Predictive\ Value\ (NPV)} = \frac{TrN}{TrN + FaN} \qquad (29)$$
Using Eq (30), FPR is calculated as [84,85]:
$$\mathrm{False\ Positive\ Rate\ (FPR)} = \frac{FaP}{TrN + FaP} \qquad (30)$$
Using Eq (31), FDR is calculated as [83,84]:
$$\mathrm{False\ Discovery\ Rate\ (FDR)} = \frac{FaP}{TrP + FaP} \qquad (31)$$
Using Eq (32), FNR is calculated as [83,84]:
$$\mathrm{False\ Negative\ Rate\ (FNR)} = \frac{FaN}{TrP + FaN} \qquad (32)$$
Using Eq (33), the F1-score is calculated as [83,84]:
$$F1 = \frac{2\times TrP}{2\times TrP + FaN + FaP} \qquad (33)$$
Using Eq (34), MCC is calculated as:
$$\mathrm{MCC} = \frac{TrP\times TrN - FaP\times FaN}{\sqrt{(TrP + FaP)(TrP + FaN)(TrN + FaP)(TrN + FaN)}} \qquad (34)$$
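For reference, the sketch below evaluates Eqs (25)–(34) directly from the four confusion-matrix counts; it is a generic helper written for this explanation, not the authors' evaluation code, and the counts in the example are illustrative.

```python
import math

def binary_metrics(TrP, TrN, FaP, FaN):
    """Compute the evaluation metrics of Eqs (25)-(34) from confusion counts."""
    acc = (TrP + TrN) / (TrP + TrN + FaP + FaN)            # Eq (25)
    sn  = TrP / (TrP + FaN)                                 # Eq (26) sensitivity/recall
    sp  = TrN / (TrN + FaP)                                 # Eq (27) specificity
    ppv = TrP / (TrP + FaP)                                 # Eq (28) precision
    npv = TrN / (TrN + FaN)                                 # Eq (29)
    fpr = FaP / (TrN + FaP)                                 # Eq (30)
    fdr = FaP / (TrP + FaP)                                 # Eq (31)
    fnr = FaN / (TrP + FaN)                                 # Eq (32)
    f1  = 2 * TrP / (2 * TrP + FaN + FaP)                   # Eq (33)
    mcc = ((TrP * TrN - FaP * FaN) /
           math.sqrt((TrP + FaP) * (TrP + FaN) * (TrN + FaP) * (TrN + FaN)))  # Eq (34)
    return dict(Acc=acc, Sn=sn, Sp=sp, PPV=ppv, NPV=npv,
                FPR=fpr, FDR=fdr, FNR=fnr, F1=f1, MCC=mcc)

print(binary_metrics(TrP=105, TrN=176, FaP=2, FaN=1))  # illustrative counts only
```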
The receiver operating characteristic (ROC) curve is used to quantify and analyze the connection between a binary classifier's sensitivity and specificity. Sensitivity quantifies the percentage of correctly classified positives, while specificity quantifies the percentage of correctly classified negatives [69,83,84].
AUC measures the area under the ROC curve and varies between 0 and 1. If a classification model produces a 100% accuracy rate, its AUC is 1; if it gives 100% wrong classifications, its AUC is 0.
In split validation, the data is divided according to a given train-test ratio. In the proposed research, the dataset has been divided using several different train-test ratios.
K-fold cross-validation has also been employed to test the proposed model with different values of K. We used K = 2 to K = 10 to compute the average values, and adopted 10-fold cross-validation as the benchmark for the proposed model.
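A short sketch of both validation schemes used here, split ratios and K = 2…10 cross-validation, is given below; the stand-in classifier and scoring choice are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(16,) * 6, max_iter=2000)  # stand-in for DEGDO

# Split validation at the ratios used in the paper: 50-50, 60-40, 70-30, 80-20
for test_size in (0.5, 0.4, 0.3, 0.2):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              stratify=y, random_state=0)
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"train-test {int((1 - test_size) * 100)}-{int(test_size * 100)}: acc={acc:.3f}")

# K-fold cross-validation for K = 2 ... 10
for k in range(2, 11):
    scores = cross_val_score(clf, X, y, cv=k, scoring="accuracy")
    print(f"K={k}: mean accuracy={np.mean(scores):.3f}")
```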
The MSE analysis shows the extent to which the model has learned and how strongly the remaining error affects the outcome; improving the machine's efficiency requires minimizing this error. The mean square error measures the discrepancy between the desired and the actual output. MSE, RMSE, and MAE values for the training, testing, and validation phases were recorded at different epochs.
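The three error measures tracked per epoch can be computed as in this minimal sketch; the target and output vectors are illustrative only.

```python
import numpy as np

def error_measures(target, output):
    """Return MSE, RMSE, and MAE between desired and calculated outputs."""
    target, output = np.asarray(target, float), np.asarray(output, float)
    mse = np.mean((target - output) ** 2)
    return {"MSE": mse, "RMSE": np.sqrt(mse), "MAE": np.mean(np.abs(target - output))}

print(error_measures([1, 0, 1, 1, 0], [0.92, 0.08, 0.81, 0.96, 0.20]))
```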
Varied experiments have been carried out to demonstrate the classification performance of the proposed model. The WBCD dataset has been used to train, test, and validate the model, and both split and K-fold cross-validation techniques have been employed to validate the DEGDO-based model. Different train-test ratio groups have been set up, namely 50-50, 60-40, 70-30, and 80-20; the performance results produced with these ratios are given in Table 2. K-fold cross-validation has also been carried out using various values of K, from K = 2 to K = 10, and the average performance produced by the different numbers of folds is shown in Figure 3.
Multiple classifiers have been used to compare the performance of the proposed DEGDO model with other state-of-the-art methods, and multiple performance evaluation metrics have been used to check the proposed model against different classifiers, namely AUC-ROC, MSE, RMSE, MAE, Acc, Sp, PPV, Sn, F-measure, NPV, FDR, FPR, FNR, and the Matthews correlation coefficient (MCC). Table 2 shows the performance assessment of the Intelligent Breast Cancer Diagnostic System empowered by deep extreme gradient descent optimization with different train-test ratios. Experiments were conducted both on the selected features and on all available features: results produced with the selected features are shown in Figures 3 and 4, whereas using the complete set of attributes lowered the classification performance compared with the selected features; results without feature selection are reported in Table 2.
Train-Test Ratio | Acc (%) | Sn (%) | Sp (%) | PPV (%) | NPV (%) | FPR | FDR | FNR | F1 | MCC |
50-50 | 96.66 | 95.28 | 97.48 | 95.73 | 97.21 | 0.0252 | 0.0427 | 0.0472 | 0.9551 | 0.9285 |
60-40 | 97.72 | 97.64 | 97.76 | 96.28 | 98.59 | 0.0224 | 0.0372 | 0.0236 | 0.9696 | 0.9513 |
70-30 | 98.95 | 98.58 | 99.16 | 98.58 | 99.16 | 0.0084 | 0.0142 | 0.0142 | 0.9858 | 0.9774 |
80-20 | 99.12 | 99.06 | 99.16 | 98.59 | 99.44 | 0.0084 | 0.0141 | 0.0094 | 0.9882 | 0.9812 |
The accuracy, sensitivity, and specificity of the Intelligent Breast Cancer Diagnostic System empowered by DEGDO were measured and compared with classifiers such as NB, SVM, K-NN, RF, and ANN. The performance results of the proposed system are compared with these state-of-the-art methods in Figure 3.
The classification performed better with the deep extreme gradient descent optimization-based method. With the selected attributes for binary classification, the Intelligent Breast Cancer Diagnostic System empowered by DEGDO achieved a maximum accuracy of 98.73%. RF achieved the next-best accuracy of 94.62%, while Naive Bayes achieved 87.58%, which is respectable but falls short of the target. SVM achieved 90.25% accuracy, better than several other algorithms such as K-NN, and ANN and K-NN achieved accuracies of 85.29 and 83.81%, respectively. Given these results, the proposed model's accuracy was improved by applying the feature selection technique. Figure 4 illustrates a schematic comparison of the proposed DEGDO with various state-of-the-art machine learning techniques.
The ROC curves generated by the different classifiers used in this research are shown in Figure 4; the figure demonstrates that the proposed model rendered better results. The AUC scores for DEGDO, KNN, ANN, SVM, RF, and NB are 0.989, 0.838, 0.867, 0.927, 0.948, and 0.876, respectively. The X-axis represents the false positive rate (FPR) and the Y-axis the true positive rate (TPR).
Mean square error results for the training, testing, and validation phases, measured against the number of epochs, are displayed in Table 3. As the training iterations increase, a steady decrease in MSE, RMSE, and MAE is observed. The lowest training MSE for the proposed model, 0.0599, is attained at 873 training epochs, while the lowest testing MSE, 0.0699, is likewise reached after 873 epochs.
Epochs Count | 0 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 873 |
Training Phase | ||||||||||
Mean Square Error (MSE) | 1.665 | 0.2849 | 0.1724 | 0.1298 | 0.1012 | 0.0999 | 0.0842 | 0.0712 | 0.0645 | 0.0599 |
Root Mean Square Error (RMSE) | 1.294 | 0.5464 | 0.4256 | 0.3652 | 0.3199 | 0.2994 | 0.2845 | 0.2689 | 0.2512 | 0.2502 |
Mean Absolute Error (MAE) | 0.943 | 0.3815 | 0.3264 | 0.3054 | 0.2814 | 0.1985 | 0.1845 | 0.1542 | 0.1725 | 0.1311 |
Testing Phase | ||||||||||
Mean Square Error (MSE) | 1.754 | 0.3031 | 0.1852 | 0.1198 | 0.1426 | 0.1287 | 0.0984 | 0.0954 | 0.07425 | 0.0699 |
Root Mean Square Error (RMSE) | 1.2854 | 0.5421 | 0.4157 | 0.3655 | 0.3451 | 0.3356 | 0.3158 | 0.2847 | 0.2485 | 0.0745 |
Mean Absolute Error (MAE) | 0.948 | 0.3548 | 0.2954 | 0.2817 | 0.2465 | 0.2258 | 0.1956 | 0.1785 | 0.1688 | 0.1465 |
Validation Phase | ||||||||||
MSE | 1.841 | 0.3514 | 0.2785 | 0.1545 | 0.1348 | 0.1254 | 0.0998 | 0.0871 | 0.0785 | 0.0721 |
Root Mean Square Error (RMSE) | 1.451 | 0.5812 | 0.5266 | 0.3863 | 0.3598 | 0.3421 | 0.3125 | 0.2706 | 0.2632 | 0.2415 |
Mean Absolute Error (MAE) | 0.987 | 0.4487 | 0.4123 | 0.3458 | 0.2785 | 0.1859 | 0.1481 | 0.1399 | 0.1302 | 0.1298 |
In terms of performance, the proposed approach has been evaluated by comparing it with previously published experimental research models, and it proves more accurate than the models published in the past. Table 4 compares the accuracy of the proposed model with that of other published works.
Reference Method | Model | Accuracy Results (%)a |
[39] | Deep learning SMO, IBK | 96.19, 95.90 |
[43] | J48, Probabilistic NB | 94.20, 82.60 |
[44] | Deep learning ELU, Maxout, Tanh, ReLU Vote (NB + DT + SVM) | 96.99, 96.56, 96.27, 96.55, 96.13 |
[37] | K-SVM | 97.38 |
[48] | CSSFFS (10-FOLD), RBF Network | 98.25, 93.60 |
[70] | BIG-F | 97.10 |
[71] | DLA, EABA | 97.20 |
[72] | LDA & AE-DL | 98.27 |
[73] | DenseNet121 CNN | 98.07 |
[74] | EBL-RBFNN | 98.40 |
[75] | DL-CNN | 95.00 |
[76] | Boosting CN | 98.27 |
Proposed Model | 98.73 |
This article demonstrated a marked rise in breast cancer detection rates. A range of performance evaluation metrics, namely AUC-ROC, MSE, RMSE, MAE, Acc, Sp, PPV, Sn, F-measure, NPV, FDR, FPR, FNR, and the Matthews correlation coefficient (MCC), has been employed to evaluate the proposed model against different classifiers. The proposed model's accuracy, precision, sensitivity, and specificity are much better than many of those published in the literature, which makes this study notable. Both split and K-fold cross-validation have been used to evaluate performance, with 10-fold cross-validation taken as the benchmark for the results. The proposed model's accuracy, precision, sensitivity, and specificity came out to be 98.73, 99.48, 99.43, and 99.60%, respectively, and the model achieved an AUC score of 0.989. Numerous classifiers such as ANN, KNN, NB, RF, and SVM were also applied to the same dataset, but the proposed method outperformed all of them in terms of accuracy, precision, sensitivity, and specificity.
Nevertheless, some limitations affect the proposed model. Firstly, the model is trained and validated on a small dataset. Secondly, the diagnostic process consists of multiple stages, from collecting the relevant features from medical laboratory test reports to feeding them to the proposed model in CSV format, which delays the overall diagnosis.
In the future, this model can be applied to additional datasets, such as the TCGA or NCBI GEO databases, for better results. Moreover, a fusion technique can be used to make the proposed model more reliable. Lastly, this model can be combined with other feature selection methods to boost its performance.
This research received no external funding.
The authors declare there is no conflict of interest.
[1] | Boole G (1853) The Laws of Thought. In: The Mathematical Theories of Logic and Probabilities. Project Gutenberg (EBook #15114). |
[2] | Calderone J (2014) 10 Big Ideas in 10 Years of Brain Science. Scientific American MIND, November 6. |
[3] | Chalmers DJ (1996) The Conscious Mind: In Search for a Fundamental Theory. New York: Oxford University Press. |
[4] | Dehaene S (2014) Consciousness and the Brain. In: Deciphering How the Brain Codes Our Thoughts. New York: Penguin Publishers. |
[5] | Edelman G, Tononi G (2000) A universe of consciousness: How matter becomes imagination. Basic books. |
[6] | LeDoux JE (2003) Synaptic self: How our brains become who we are. New York: Penguin Publishers. |
[7] |
LeDoux JE (2012) Evolution of human emotion: A view through fear. Prog Brain Res 195:431-442. doi: 10.1016/B978-0-444-53860-4.00021-0
![]() |
[8] | Penrose R (1989) The Emperor's New Mind. New York: Oxford University Press. |
[9] | Shiffman D, Fry S, Marsh Z (2012) The nature of code. D. Shiffman. |
[10] | Bostrom N (2014) Superintelligence: Paths, dangers, strategies. OUP Oxford. |
[11] | Zimme (2014) The New Science of the Brain. National Geographic. |
[12] | McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. B Math Biol 5: 115-133. |
[13] | Turing AM (1950) Computing machinery and intelligence. Mind 59: 433-460. |
[14] | Graves A, Wayne G, Danihelka I (2014) Neural turing machines. arXiv preprint arXiv:1410.5401 |
[15] | Von Neumann J (2012) The computer and the brain. 3rd ed. New Haven: Yale University Press: 66. |
[16] | Jeffress LA (1951) Cerebral mechanisms in behavior; the Hixon Symposium. a. von Neumann J. The general and logical theory of automata: 1-41. b. McCullogh WS. Why the mind is in the head: 42-57. |
[17] | Arbib MA (1987) Brains, Machines and Mathematics. In: Neural Nets and Finite Automata. 2nd ed. Berlin: Springer US: 15-29. |
[18] | Franklin S (1995) Artificial Minds. Cambridge, MA: MIT Press. |
[19] | Longuet-Higgins HC (1981) Artificial intelligence-a new theoretical psychology? Cognition 10: 197-201. |
[20] |
Neisser U (1963) The imitation of man by machine. Science 139: 193-197. doi: 10.1126/science.139.3551.193
![]() |
[21] | Sloman A (1979) Epistemology and Artificial Intelligence: Expert Systems in the Microelectronic Age. Edinburgh: Edinburgh University Press. |
[22] | Gardner H (1985) The Mind's New Science. New York: Basic Books. |
[23] | Garland A (2015) Ex Machina – Movie. |
[24] | Sejnowski TJ, Koch C, Churchland PS (1988) Computational neuroscience. Science 24: 1299-1330. |
[25] | Russel S, Norvig P (2009) Artificial Intelligence: A Modern Approach. 3rd ed. NY: Pearson Publishers. |
[26] | Aho AV (2012) Computation and computational thinking. Comput J 55: 832-835. |
[27] |
Guidolin D, Albertin G, Guescini M, et al. (2011) Central nervous system and computation. Quart Rev Biol 86: 265-85 doi: 10.1086/662456
![]() |
[28] | Howard N (2012) Brain Language: The fundamental code unit. Brain Sci 1: 6-34. |
[29] | Howard N, Guidere M (2012) LXIO: The mood detection Robopsych Brain Sci 1: 71-77. |
[30] | Pockett S (2014) Problems with theories of consciousness. Front Syst Neurosci: 225. |
[31] |
Hirschberg J, Manning CD (2015) Advances in natural language processing. Science 349: 261-266. doi: 10.1126/science.aaa8685
![]() |
[32] |
Parkes DC, Wellman MP (2015) Economic reasoning and artificial intelligence. Science 349: 267-272. doi: 10.1126/science.aaa8403
![]() |
[33] |
Gershman SJ, Horvitz EJ, Tenenbaum JB (2015) Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349: 273-278. doi: 10.1126/science.aac6076
![]() |
[34] |
Jordan M, Mitchell TM (2015) Machine learning: Trends, perspectives, and prospects. Science 349: 255-260. doi: 10.1126/science.aaa8415
![]() |
[35] | Wu Y, Schuster M, Chen Z, et al. (2016) Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv 1609.08144. |
[36] | y Cajal SR (1995) Histology of the nervous system of man and vertebrates. USA: Oxford University Press. |
[37] | Garcia-Lopez P, Garcia-Marin V, FreireM (2010) The histological slides and drawing s of Cajal. Front Neuroanat 4: 9 |
[38] | Hebb DO (1949). The Organization of Behavior. New York: Wiley. |
[39] | Kandel ER, Schwartz JH, Jessell TM, et al. (2013) Principles of Neural Science. New York: MacGraw-Hill. |
[40] |
Arshavsky YI (2006) "The seven sins" of the Hebbian synapse: can the hypothesis of synaptic plasticity explain long-term memory consolidation?. Prog Neurobiol 80: 99-113. doi: 10.1016/j.pneurobio.2006.09.004
![]() |
[41] | Gallistel CR, KingA (2009) Memory and the Computational Brain. New York: Wiley Blackwell. |
[42] | Hawkins J, Blakeslee S (2005) On Intelligence. New York: St Martin's Press. |
[43] |
Zador A, Koch C, Brown TH (1990) Biophysical model of a Hebbian synapse. Proc Natl Acad Sci USA 87: 6718-6722. doi: 10.1073/pnas.87.17.6718
![]() |
[44] |
Vizi ES, Fekete A, Karoly R, et al (2010) Non-synaptic receptors and transporters involved in brain functions and targets of drug treatment. Br J Pharmacol 160: 785-809. doi: 10.1111/j.1476-5381.2009.00624.x
![]() |
[45] | Vizi E (2013) Role of high-affinity receptors and membrane transporters in non-synaptic communication and drug action in the central nervous system. Pharmacol Rev 52: 63-89. |
[46] | Schmitt FO, Samson FE, Irwin LN, et al. (1969) Brain cell micro-environment. NRP Bulletin: 7. |
[47] | Cserr HF (1986) The Neuronal Environment. Ann NY Acad Sci: 481. |
[48] |
Juliano RI, Haskill S (1993) Signal transduction from extracellular matrix. J Cell Biol 120: 577-585. doi: 10.1083/jcb.120.3.577
![]() |
[49] |
Vargova L, Sykova E (2014) Astrocytes and extracellular matrix in extrasynaptic volume transmission. Phil Trans R Soc B 369: 20130608. doi: 10.1098/rstb.2013.0608
![]() |
[50] |
Giaume C, Oliet S (2016) Introduction to the special issue: Dynamic and metabolic interactions between astrocytes and neurons. Neuroscience 323: 1-2. doi: 10.1016/j.neuroscience.2016.02.062
![]() |
[51] | Hrabětová S, Nicholson C (2007) Biophysical properties of brain extracellular space explored with ion-selective microelectrodes, integrative optical imaging and related techniques. In: Michael AC, Borland LM, editors. Electrochemical Methods for Neuroscience. Boca Raton, FL: CRC Press: chapter 10. |
[52] | Kuhn TS (1970) The Structure of Scientific Revolutions. 2nd ed. Chicago, IL: University of Chicago Press. |
[53] |
Landauer R (1996) The physical nature of information. Physics Letters A 217: 188-193. doi: 10.1016/0375-9601(96)00453-7
![]() |
[54] | Chua LO (2011) Resistance switching memories are memristors. Applied Physics A 102: 765-783. |
[55] |
Di Ventra M, Pershin YY (2011) Memory materials: A unifying description. Mater Today 14: 584-591. doi: 10.1016/S1369-7021(11)70299-1
![]() |
[56] | Kamalanathan D, Akhavan A, Kozicki MN (2011) Low voltage cycling of programmable metallization cell memory devices. Nanotechnology 22: 254017. |
[57] |
Tian F, Jiao D, Biedermann F, et al. (2012) Orthogonal switching of a single supramolecular complex. Nat Commun 3: 1207. doi: 10.1038/ncomms2198
![]() |
[58] |
Sakata Y, Furukawa S, Kondo M, et al. (2013) Shape-memory nanopores induced in coordination frameworks by crystal downsizing. Science 339: 193-196. doi: 10.1126/science.1231451
![]() |
[59] |
Lin WP, Liu S, Gong T, et al. (2014) Polymer-based resistive memory materials and devices. Adv Mater 26: 570-606. doi: 10.1002/adma.201302637
![]() |
[60] |
Zhou X, Xia M, Rao F, et al. (2014) Understanding phase-change behaviors of carbon-doped Ge₂Sb₂Te₅ for phase-change memory application. ACS Appl Mater Interfaces 6: 14207-14214. doi: 10.1021/am503502q
![]() |
[61] | Agnati LF, Fuxe K (2014) Extracellular-vesicle type of volume transmission and tunnelling-nanotube type of wiring transmission add a new dimension to brain neuro-glial networks. Philos Trans R Soc Lond B Biol Sci 369: pii: 20130505. |
[62] |
Fuxe K, Borroto-Escuela DO, Romero-Fernandez W, et al. (2013) Volume transmission and its different forms in the central nervous system. Chin J Integr Med 19: 323-329. doi: 10.1007/s11655-013-1455-1
![]() |
[63] |
Fuxe L, Borroto-Escuela DO (2016) Volume transmission and receptor-receptor interactions in heteroreceptor complexes: Understanding the role of new concepts for brain communication. Neural Regen Res 11: 1220-1223. doi: 10.4103/1673-5374.189168
![]() |
[64] | Fodor JA (1975) The Language of Thought. Boston, MA: Harvard University Press. |
[65] | Sloman A, Croucher M (1981) Why robots will have emotions. Sussex University. |
[66] | Mayer JD (1986) How mood influences cognition. Advances in Cognitive Science 1: 290-314. |
[67] | Salovey P, Mayer JD (1990) Emotional intelligence. Imagination, Cognition and Personality 9: 285-311. |
[68] | Goleman D (1995) Emotional Intelligence. New York: Bantam Books. |
[69] | Valverdu J, Casacuberta D (2009) Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. Information Science Reference. New York: Hershey. |
[70] | Meyer JJ, Dastani MM (2010) The logical structure of emotions. Dutch Companion project grant nr: IS053013. SIKS Dissertation Series No. 2010-2. |
[71] | Hasson C (2011) Modelisation des mecanismes emotionnels pour un robot autonome: perspective developpementale et sociale. PhD Thesis, Universite de Cergy Pontoise, France. |
[72] | Meshulam M, Winter E, Ben-Shakhar G, et al. (2011). Rational emotions. Soc Neurosci 1: 1-7. |
[73] | Steunebrink BR. The Logical structure of emotions. Utrecht University, 2010. |
[74] |
Steunebrink BR, Dastani M, Meyer JJC (2012) A formal model of emotion triggers: an approach for BDI agents. Synthese 185: 83-129. doi: 10.1007/s11229-011-0004-8
![]() |
[75] | Bosse T, Broekens J, Dias J, et al. (2014) Emotion Modeling. Towards Pragmatic Computational Models of Affective Processes. New York: Springer. |
[76] | a. Hudlicka E. From habits to standards: Towards systematic design of emotion models and affective architecture: 3-23. |
[77] | b. Dastani M, Floor C. Meyer JJ. Programming agents with emotions: 57-75. |
[78] | c. Lowe R, Kiryazov K, Utilizing emotions in autonomous robots: An enactive approach: 76-98. |
[79] | 76. Smailes D, Moseley P, Wilkinson S (2015) A commentary on: Affective coding: the emotional dimension of agency. Front Hum Neurosci 9: 142. |
[80] | 77. Lewis M (2016) The Undoing Project: A Friendship That Changed Our Minds. New York:W.W. Norton & Co. |
[81] | 78. Binet A, Simon T (1916) The Intelligence of the Feeble Minded. Baltimore, MD: Williams and Wilkins. |
[82] | 79. Griffin D (2001) Animal Minds: Beyond Cognition to Consciousness. Chicago, IL: University Chicago Press. |
[83] |
80. Albuquerque N, Guo K, Wilkinson A, et al. (2016) Dogs recognize dog and human emotions. Biol Lett 12: 201508 doi: 10.1098/rsbl.2015.0883
![]() |
[84] |
81. Andics A, Gábor A, Gácsi M, et al. (2016) Neural mechanisms for lexical processing in dogs. Science 353:1030-1032. doi: 10.1126/science.aaf3777
![]() |
[85] | 82. De Waal F (2016) Are We Smart Enough To Know How Smart Animals Are? New York: WW Norton & Co. |
[86] | 83. Kekecs Z, Szollosi A, Palfi B, et al. (2016). Commentary: Oxytocin-gaze positive loop and the coevolution of human-dog bond. Front Neurosci 10:155. |
[87] |
84. Kovács K, Kis A, Kanizsár O, et al. (2016) The effect of oxytocin on biological motion perception in dogs (Canis familiaris). Animal Cogn 19: 513-522. doi: 10.1007/s10071-015-0951-4
![]() |
[88] | 85. Wasserman EA (2016). Thinking abstractly like a duck(ling). Science 353: 222-223. |
[89] | 86. Wynne CDL (2004) Do Animals Think? New Jersey: Princeton University Press. |
[90] | 87. Levy S (1993) Artificial Life. New York: Random House. |
[91] |
88. Marx G, Gilon C (2012) The molecular basis of memory. ACS Chem Neurosci 3: 633-642. doi: 10.1021/cn300097b
![]() |
[92] | 89. Marx G, Gilon C (2013) The molecular basis of memory. MBM Pt 2: The chemistry of the tripartite mechanism. ACS Chem Neurosci 4: 983–993. |
[93] | 90. Marx G, Gilon C (2014) The molecular basis of memory. MBM Pt 3: Tagging with neurotransmitters (NTs). Front Neurosci 3: 58. |
[94] | 91. Marx G, Gilon C (2016) The molecular basis of neural memory. MBM Pt 4: The brain is not a computer. "Binary" computation versus "multinary" mentation. Neuroscience and Biomedical Engineering 4: 14-24. |
[95] | 92. Marx G, Gilon C (2016) The Molecular Basis of Neural Memory Part 5: Chemograhic notations from alchemy to psycho-chemistry. In Press. |
[96] | 93. Marx G, Gilon C (2016) The molecular basis of neural memory. MBM Pt 6: Chemical coding of logical and emotive modes. Int J Neurology Res 2: 259-268. |
[97] | 94. Asimov I (1950) I, Robot. New York: Gnome Press. |
[98] | 95. Waldrop MM (1992) Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Viking Penguin Group. |
[99] | 96. Freedman DH (1) Brainmakers. New York: Touchstone Press. |
[100] | 97. Maass W, Joshi P, Sontag ED (2007) Computational aspects of feedback in neural circuits. PLoS Comput Biol 3: e165. |
[101] | 98. Picard RW, Vyzas E, Healey J (2001) Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence 23: 1175-1191. doi: 10.1109/34.954607 |
[102] | 99. Hirschberg J, Manning CD (2015) Advances in natural language processing. Science 349: 261-266. |
[103] | 100. Kant I (1781) Critique of Pure Reason. Translated by Norman Kemp Smith. London: Macmillan, 1934. |
[104] | 101. Sloman A (2008) Kantian philosophy of mathematics and young robots. In: Proceedings 7th International Conference on Mathematical Knowledge Management Birmingham, UK, July 28-30. Available from: http://events.cs.bham.ac.uk/cicm08/mkm08/. |
[105] | 102. Berto F (2010) There's Something about Gödel: The Complete Guide to the Incompleteness Theorem. New York: John Wiley and Sons. |
[106] | 103. Ryle G (1949) The Concept of Mind. UK: Penguin Books. |
Sr. No. | Attributes | Symbol | Type |
1 | Mean severity of the concave sections of the contour | concavity_mean | Numeric
2 | Mean of the distances from the center to points on the perimeter | radius_mean | Numeric
3 | Worst (largest) value for the severity of concave sections of the contour | concavity_worst | Numeric
4 | Standard error of the area | area_se | Numeric
5 | Standard deviation of gray-scale values | Texture | Numeric
6 | Worst (largest) value for symmetry | symmetry_worst | Numeric
7 | Mean of the local variation in radius lengths | smoothness_mean | Numeric
8 | Standard error of the severity of concave sections of the contour | concavity_se | Numeric
9 | Worst (largest) value for the local variation in radius lengths | smoothness_worst | Numeric
10 | Worst (largest) value for "coastline approximation" - 1 | fractal_dimension_worst | Numeric
11 | Standard error of "coastline approximation" - 1 | fractal_dimension_se | Numeric
12 | Mean symmetry | symmetry_mean | Numeric
13 | Mean of "coastline approximation" - 1 | fractal_dimension_mean | Numeric
14 | Standard error of symmetry | symmetry_se | Numeric
15 | Standard error of the local variation in radius lengths | smoothness_se | Numeric
16 | Standard error of the standard deviation of gray-scale values | texture_se | Numeric
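The 16 attributes listed above are computed cell-nucleus features of the Wisconsin Diagnostic Breast Cancer (WDBC) data. As a minimal illustrative sketch, and not the study's own pipeline, the snippet below selects a comparable feature subset from scikit-learn's bundled copy of the dataset; the mapping of the symbols above to scikit-learn column names and the 80-20 split are assumptions made for illustration.

```python
# Minimal sketch: load the WDBC data and select a 16-feature subset similar
# to the attributes listed above. Column names and the split ratio are
# illustrative assumptions, not the study's actual preprocessing pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target  # in scikit-learn: 0 = malignant, 1 = benign

# Assumed mapping from the table's symbols to scikit-learn column names.
selected = [
    "mean concavity", "mean radius", "worst concavity", "area error",
    "mean texture", "worst symmetry", "mean smoothness", "concavity error",
    "worst smoothness", "worst fractal dimension", "fractal dimension error",
    "mean symmetry", "mean fractal dimension", "symmetry error",
    "smoothness error", "texture error",
]
X = X[selected]

# Example 80-20 split, matching one of the ratios reported below.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(X_train.shape, X_test.shape)
```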
Train-Test Ratio | Acc (%) | Sn (%) | Sp (%) | PPV (%) | NPV (%) | FPR | FDR | FNR | F1 | MCC |
50-50 | 96.66 | 95.28 | 97.48 | 95.73 | 97.21 | 0.0252 | 0.0427 | 0.0472 | 0.9551 | 0.9285 |
60-40 | 97.72 | 97.64 | 97.76 | 96.28 | 98.59 | 0.0224 | 0.0372 | 0.0236 | 0.9696 | 0.9513 |
70-30 | 98.95 | 98.58 | 99.16 | 98.58 | 99.16 | 0.0084 | 0.0142 | 0.0142 | 0.9858 | 0.9774 |
80-20 | 99.12 | 99.06 | 99.16 | 98.59 | 99.44 | 0.0084 | 0.0141 | 0.0094 | 0.9882 | 0.9812 |
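The columns of the table above are standard confusion-matrix statistics. For reference, a minimal sketch of how they can be computed from true/false positive and negative counts is given below; the function name and the example counts are illustrative and not taken from the study.

```python
import math

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard confusion-matrix statistics as reported in the table above."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    sn  = tp / (tp + fn)                    # sensitivity (recall)
    sp  = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                    # positive predictive value (precision)
    npv = tn / (tn + fn)                    # negative predictive value
    fpr = fp / (fp + tn)                    # false positive rate = 1 - Sp
    fdr = fp / (fp + tp)                    # false discovery rate = 1 - PPV
    fnr = fn / (fn + tp)                    # false negative rate = 1 - Sn
    f1  = 2 * ppv * sn / (ppv + sn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"Acc": acc, "Sn": sn, "Sp": sp, "PPV": ppv, "NPV": npv,
            "FPR": fpr, "FDR": fdr, "FNR": fnr, "F1": f1, "MCC": mcc}

# Illustrative counts only, not the study's confusion matrix.
print(classification_metrics(tp=105, tn=236, fp=2, fn=1))
```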
Epochs Count | 0 | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 873 |
Training Phase | ||||||||||
Mean Square Error (MSE) | 1.665 | 0.2849 | 0.1724 | 0.1298 | 0.1012 | 0.0999 | 0.0842 | 0.0712 | 0.0645 | 0.0599 |
Root Mean Square Error (RMSE) | 1.294 | 0.5464 | 0.4256 | 0.3652 | 0.3199 | 0.2994 | 0.2845 | 0.2689 | 0.2512 | 0.2502 |
Mean Absolute Error (MAE) | 0.943 | 0.3815 | 0.3264 | 0.3054 | 0.2814 | 0.1985 | 0.1845 | 0.1542 | 0.1725 | 0.1311 |
Testing Phase | ||||||||||
Mean Square Error (MSE) | 1.754 | 0.3031 | 0.1852 | 0.1198 | 0.1426 | 0.1287 | 0.0984 | 0.0954 | 0.07425 | 0.0699 |
Root Mean Square Error (RMSE) | 1.2854 | 0.5421 | 0.4157 | 0.3655 | 0.3451 | 0.3356 | 0.3158 | 0.2847 | 0.2485 | 0.0745 |
Mean Absolute Error (MAE) | 0.948 | 0.3548 | 0.2954 | 0.2817 | 0.2465 | 0.2258 | 0.1956 | 0.1785 | 0.1688 | 0.1465 |
Validation Phase | ||||||||||
Mean Square Error (MSE) | 1.841 | 0.3514 | 0.2785 | 0.1545 | 0.1348 | 0.1254 | 0.0998 | 0.0871 | 0.0785 | 0.0721
Root Mean Square Error (RMSE) | 1.451 | 0.5812 | 0.5266 | 0.3863 | 0.3598 | 0.3421 | 0.3125 | 0.2706 | 0.2632 | 0.2415 |
Mean Absolute Error (MAE) | 0.987 | 0.4487 | 0.4123 | 0.3458 | 0.2785 | 0.1859 | 0.1481 | 0.1399 | 0.1302 | 0.1298 |
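The error measures tracked across epochs above are the usual regression-style losses, with RMSE simply the square root of MSE. A minimal illustrative sketch of their computation from target and predicted values follows; the arrays shown are placeholder values, not the study's data.

```python
import numpy as np

def error_measures(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MSE, RMSE, and MAE as tracked per epoch in the table above."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))           # RMSE is the square root of MSE
    mae = float(np.mean(np.abs(err)))
    return {"MSE": mse, "RMSE": rmse, "MAE": mae}

# Illustrative values only.
y_true = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.7, 1.0, 0.1])
print(error_measures(y_true, y_pred))
```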
Reference | Model | Accuracy Results (%)
[39] | Deep learning SMO, IBK | 96.19, 95.90 |
[43] | J48, Probabilistic NB | 94.20, 82.60 |
[44] | Deep learning ELU, Maxout, Tanh, ReLU Vote (NB + DT + SVM) | 96.99, 96.56, 96.27, 96.55, 96.13 |
[37] | K-SVM | 97.38 |
[48] | CSSFFS (10-FOLD), RBF Network | 98.25, 93.60 |
[70] | BIG-F | 97.10 |
[71] | DLA, EABA | 97.20 |
[72] | LDA & AE-DL | 98.27 |
[73] | DenseNet121 CNN | 98.07
[74] | EBL-RBFNN | 98.40 |
[75] | DL-CNN | 95.00 |
[76] | Boosting CN | 98.27 |
Proposed Model | 98.73 |