Review

Functional Foods for Type 2 Diabetes

  • A number of studies have suggested that functional foods such as tea, wheat, nuts, and sweet potatoes have beneficial effects on glycemic control; however, the effectiveness of consuming functional foods for the management of diabetes remains unclear. The aim of this review is to summarize the evidence for functional foods in diabetes management. A search of PubMed and the Cochrane Database of Systematic Reviews was performed using the indexing terms “functional food” and “diabetes”. A total of 11 randomized controlled trials (RCTs) met the criteria. Resveratrol, wheat albumin, ginger, and wine grape pomace flour all improved glycemic control, insulin sensitivity, blood pressure, and lipid profiles. On the other hand, a citrus flavonoid-enriched product, arabinogalactan, low-fat milk, raw red onion, and a functional yogurt appear to have no effect on diabetes. As for resveratrol, the results are controversial. Although the underlying mechanism is not clear, differences in the characteristics of study participants may influence the effectiveness of functional foods. Overall, the effects of functional foods on diabetes remain inconclusive due to the small numbers of subjects, short durations, and methodological heterogeneity among the RCTs. Beyond the current evidence, further RCTs investigating the effects of functional foods on diabetic complications, mortality, and cost-effectiveness, as well as glycemic control, are required.

    Citation: Hidetaka Hamasaki. Functional Foods for Type 2 Diabetes[J]. AIMS Medical Science, 2016, 3(3): 278-297. doi: 10.3934/medsci.2016.3.278




    In recent years, large-scale knowledge bases such as FreeBase [1], DBpedia [2], YAGO [3], and WordNet [4] have been widely used in various domains, including information retrieval [5], question answering [6], and recommender systems [7]. Their powerful abilities for semantic processing and knowledge generalization have paved a new way for data management and mining. A knowledge graph (KG) is a graph representation of a knowledge base in the form of a collection of fact triples (s,r,o), each of which denotes a relation edge r between a head entity node s and a tail entity node o. The task of knowledge graph completion (KGC) is to improve the integrity of a KG, which typically suffers from incompleteness, by predicting missing triples from the known facts. KGC can be performed either by extracting new facts from external sources, such as online encyclopedias and news wires [8], or by inferring them from the facts already in the KG. The latter approach, typically called Link Prediction (LP), is the focus of this paper.

    Knowledge graph embedding (KGE) models have been shown to achieve the best performance for the task of link prediction in KGs among all existing methods [9]. Many KGE models have been proposed to learn low-dimensional vector or matrix representations of entities and relations in KGs. Specifically, the classic triple-based embedding models are mainly divided into translation-based models (e.g., TransE [10], TransH [11], TransAH [12], TransR [13], TransD [14]), bilinear and tensor models (e.g., DistMult [15], SimplE [16], TuckER [17]), neural network models (e.g., NTN [18], ER-MLP [19], ConvE [20], ConvKB [21], Conv-TransE [22], InteractE [23], DMACM [24], KMAE [25], JointE [26], CTKGC [27]), and complex vector models (e.g., ComplEx [28], RotatE [29], QuatE [30]).

    Among all of the above models, convolutional neural network (CNN) based KGE models are attracting increasing research interest because they benefit from the rapid progress of deep learning techniques and exhibit strong expressive and generalization abilities. The performance of CNN-based KGE models depends on the interactions between entities and relations, and considerable research effort has been devoted to increasing these interactions. InteractE [23] increases interactions through embedding permutations. A directional multi-dimensional attention mechanism is designed to explore the deep expressive characteristics of the triple in DMACM [24]. Gaussian kernel and multi-attention techniques are adopted to expand the entity and relation embeddings in KMAE [25]. JointE [26] jointly utilizes 1D and 2D convolution operations to extract surface and explicit knowledge and to increase the interactions between entities and relations. However, these performance gains come at the cost of complex models with large numbers of parameters; how to balance the efficiency and effectiveness of CNN-based KGE models is still an open problem.

    In this paper, we propose a lightweight CNN-based KGE model named IntSE with channel attention to improve the prediction accuracy for LP in KGs. Inspired by InteractE, checkered feature reshaping and circular convolution are adopted in IntSE to increase the interactions between the components of entity and relation embeddings. A channel attention module is designed to enhance useful interactions while suppressing useless ones. IntSE is light and flexible: it greatly reduces the number of parameters while retaining competitive prediction accuracy for LP. The main contributions of this paper are summarized as follows:

    1) We propose a lightweight CNN-based KGE model named IntSE to improve the prediction accuracy for LP in KGs. IntSE is efficient and effective for LP through three key components: checkered feature reshaping, circular convolution, and channel attention mechanism. Checkered feature reshaping and circular convolution operations are very effective for increasing interactions between entities and relations. Moreover, the channel attention mechanism further enhances the beneficial interactions.

    2) In order to further improve the performance of link prediction, we propose a channel attention block called LPSENet, which improves the gating mechanism of the SENet prototype. LPSENet improves the expressive ability of IntSE while reducing overfitting.

    3) We conduct extensive experiments on publicly available datasets to evaluate the performance of IntSE. The results show that IntSE achieves significantly higher accuracy for LP in KGs than the mainstream KGE models.

    Paper organization: The rest of this paper is organized as follows. The existing literature related to this work is discussed in Section 2. Our proposed IntSE models for LP are presented in Section 3. The experimental setup and results are described in Section 4. The ablation studies which investigate how the checkered feature reshaping, circular convolution, and channel attention mechanism affect the performance of IntSE are discussed in Section 5. Finally, the whole paper is concluded in Section 6.

    KGE learns an embedding representation of the entities and relations of a KG in a continuous low-dimensional vector space. Wang et al. [31], Nguyen [9], and Liu et al. [32] systematically reviewed KGE models for knowledge graph completion (KGC). Moreover, Rossi et al. [33] and Akrami et al. [34] conducted experimental studies to assess the effectiveness of different link prediction (LP) methods on real-world data. According to the different ways of expressing the semantics between entities and relations, there are four families of KGE models: (a) TransE [10] and its family (such as TransH [11], TransAH [12], TransR [13], and TransD [14]) used translation invariance to express the association between entities and relations; (b) DistMult [15] and its improved versions [16,17] utilized a diagonal matrix to express relations for model simplification; (c) ComplEx [28] and its variants [29,30] further extended DistMult to the complex field, which can better model the asymmetric relations in KGs; and (d) neural network (NN) based models (such as ER-MLP [19], NTN [18], ConvE [20], Conv-TransE [22], ConvKB [21], InteractE [23], DMACM [24], KMAE [25], JointE [26]) used neural networks to model the semantics between entity embeddings and relation embeddings. Among all the above methods, NN-based models demonstrated outstanding prediction accuracy due to their excellent feature extraction ability. NN-based models for KGE, especially CNN-based models, have received more attention in recent years due to the advantages of the convolution operation, such as parameter sharing, generalization, overfitting reduction, and robustness.

    ConvE [20] is the first CNN-based KGE model. It reshaped the head entity embedding and the relation embedding separately, stacked them, and fed them into a 2D convolutional layer to extract semantic features of entities and relations. The generated feature maps were vectorized and projected into the embedding space of tail entities; all candidate tail entity embeddings were then matched through inner products. Conv-TransE [22] considered that the reshaping of entity and relation embeddings in ConvE destroyed the translation invariance of the embedding vectors. Thus, it removed the reshape operation of ConvE and directly put the entity and relation embeddings into the convolutional layer; the remaining procedure was the same as that of ConvE, so Conv-TransE can be regarded as a variant of ConvE. ConvKB [21] was another CNN-based KGE model, which stacked the head entity embedding, relation embedding, and tail entity embedding into a matrix M and applied a convolution operation with filters $\omega \in \mathbb{R}^{1\times 3}$ to each dimension of the transposed M. ConvKB is controversial since it obtained a competitive result on the FB15k-237 [35] dataset but a disappointing one on WN18RR [20]; as its performance varies with datasets and implementations, we do not compare against it in our experiments. InteractE [23] found that the stacked feature reshaping of ConvE (see Figure 1) limited the feature interaction between entities and relations, and thus increased the feature interaction between entity and relation embeddings to improve the performance for LP. Specifically, it reshaped the entity and relation embeddings into a checkered structure (see Figure 1) and used circular convolution to enhance the interaction of edge features. However, the feature permutation in InteractE is costly. DMACM [24] found that existing CNN-based KGE models ignored the directional relation characteristic and implicit fine-grained features in a triple, and was proposed to explore the directional information and the inherent deep expressive characteristic of the triple via directional multi-dimensional attention; the output of the directional multi-dimensional attention is then fed into a convolutional neural network with filters $\omega \in \mathbb{R}^{1\times 3}$. KMAE [25] first expanded the entity embedding and relation embedding into an entity kernel and a relation kernel using the Gaussian kernel function; the entity kernel (or relation kernel) is combined with the relation (or entity) embedding and input into a 2D convolutional layer, after which two groups of channel attention and spatial attention capture high-quality feature information. JointE [26] consists of two feed-forward paths: path 1 uses 1D convolution filters over the input entity and relation embeddings to extract surface and explicit knowledge, while path 2, to reduce the number of parameters, employs different 2D convolution filters to extract deep features from the reshaped entity and relation embeddings. The final features of the entity and relation are obtained by element-wise addition of the outputs of the two paths.

    Figure 1.  Different types of reshaping methods in KGE models [23].

    In conclusion, CNN-based KGE models should expand their feature extraction capability and increase the interactions between entities and relations to improve the performance of LP. The key components of existing methods include the initial embedding of entities and relations, the reshaping scheme for entity and relation embeddings, the convolution operations that extract features, and the attention-based feature calibration. Although many complex models can improve the performance for LP, they introduce too many parameters, which hinders their efficiency. Thus, our goal is to design a CNN-based KGE model that strikes a better balance between performance and cost.

    In this section, we introduce the proposed model IntSE. We first introduce the related notations and problem definition and then describe the architecture of IntSE in detail, as shown in Figure 2.

    Figure 2.  The architecture of IntSE.

    Let $\mathcal{E}$ and $\mathcal{R}$ denote the sets of entities and relation types, respectively. A knowledge graph (KG) $\mathcal{G}$ consists of a set of fact triples $(s,r,o)$, formally expressed as follows:

    $$\mathcal{G}=\{(s,r,o)\mid s,o\in\mathcal{E},\ r\in\mathcal{R}\}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}$$

    Link Prediction (LP) is to predict the missing entity in a triple, such as predicting $o$ for a given $(s,r,?)$ or $s$ for a given $(?,r,o)$. LP can be formalized as a learning-to-rank problem over single samples. A KGE-based LP method has two key components, namely encoding and scoring. The encoding component maps the head entity $s$, relation $r$, and tail entity $o$ to the $d$-dimensional vector representations $e_s, e_r, e_o \in \mathbb{R}^d$. Then, the scoring component measures the authenticity of triples. The goal of LP is to learn a scoring function $\psi$ over entity and relation embeddings so that the score $\psi(s,r,o)$ of a fact triple $(s,r,o)$ is higher than the score $\psi(s',r',o')$ of a non-fact triple $(s',r',o')$.
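    To make the ranking formulation concrete, the following is a minimal PyTorch sketch (not the authors' released code) of scoring one query $(s,r,?)$ against every candidate tail via an inner product and reading off the rank of the true tail; the toy sizes and the placeholder encoder are assumptions for illustration.

```python
# Minimal link-prediction-as-ranking sketch (assumed toy setup, not the
# authors' code): embed s and r, score every candidate tail o with an inner
# product, and read off the rank of the true tail from the scores.
import torch

num_entities, num_relations, d = 1000, 50, 200
ent_emb = torch.nn.Embedding(num_entities, d)
rel_emb = torch.nn.Embedding(num_relations, d)

def score_all_tails(s: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """psi(s, r, o) for all candidate tails o. A real model such as IntSE
    would first transform e_s and e_r with its convolutional encoder; the
    sum below is only a placeholder for that encoder."""
    hidden = ent_emb(s) + rel_emb(r)          # placeholder encoder output
    return hidden @ ent_emb.weight.t()        # [batch, num_entities] scores

s, r, true_o = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
scores = score_all_tails(s, r)
# rank = 1 + number of candidates scored strictly higher than the true tail
rank = 1 + (scores > scores.gather(1, true_o.view(-1, 1))).sum(dim=1)
```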

    The architecture of our IntSE model is shown in Figure 2. The key components of IntSE are checkered feature reshaping, the circular convolution operation, and the channel attention mechanism. Given a $d$-dimensional entity embedding vector $e_s$ and a $d$-dimensional relation embedding vector $e_r$, IntSE first reshapes $e_s$ and $e_r$ into a checkered matrix $\phi_{chk}(e_s,e_r)\in\mathbb{R}^{m\times n}$, where $m\times n=2d$, and then sends it to a 2D circular convolutional layer. The convolutional layer outputs a feature map tensor $X=[x_1,x_2,\ldots,x_C]\in\mathbb{R}^{H\times W\times C}$, which is sent to the channel attention block for feature calibration to generate the calibrated feature map tensor $\bar{X}\in\mathbb{R}^{H\times W\times C}$. Next, $\bar{X}$ is vectorized into $\mathrm{vec}(\bar{X})\in\mathbb{R}^{HWC}$, projected to the $d$-dimensional vector space using a linear transformation with parameter matrix $W\in\mathbb{R}^{HWC\times d}$, and finally matched with the tail entity embedding $e_o$ via the inner product operation.

    The checkered feature reshaping increases the feature interaction between the components of entity and relation embeddings, and thus improves the expressive ability of CNNs. Different from the stacked feature reshaping (see Figure 1), a checkered structure (see Figure 1) arranges the entity and relation embeddings such that no two adjacent cells are occupied by components of the same embedding, increasing the feature interaction between entity and relation embeddings. The components of the entity and relation embeddings are randomly assigned to the cells of the checkered structure in IntSE. Then, a 2D circular convolutional layer with filters $\omega\in\mathbb{R}^{3\times 3}$ is adopted to extract deep and latent knowledge from the checkered features. Circular convolution induces more interactions than standard convolution and thus further expands the expressive ability of IntSE.
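    As an illustration, here is a minimal PyTorch sketch of the two operations; the in-order (rather than random) checkerboard assignment is a simplification for clarity, and the filter count and kernel size follow the tuned values reported in the experiments (96 filters of size 9×9 on FB15k-237).

```python
# Sketch of checkered reshaping followed by circular convolution (an
# illustrative reconstruction, not the released implementation).
import torch
import torch.nn as nn

def checkered_reshape(e_s: torch.Tensor, e_r: torch.Tensor, m: int, n: int) -> torch.Tensor:
    """Place entity/relation components on a checkerboard so that no two
    adjacent cells hold components of the same embedding (m * n == 2d).
    IntSE assigns components to cells randomly; in-order here for clarity."""
    grid = torch.empty(e_s.size(0), m, n, dtype=e_s.dtype)
    color = (torch.arange(m).view(-1, 1) + torch.arange(n).view(1, -1)) % 2
    grid[:, color == 0] = e_s   # "white" cells: entity components
    grid[:, color == 1] = e_r   # "black" cells: relation components
    return grid.unsqueeze(1)    # [batch, 1, m, n], ready for a 2D conv

# Circular convolution as a standard Conv2d with circular padding: the
# feature map wraps around at the borders, so edge cells get as many
# interactions as interior ones.
circ_conv = nn.Conv2d(1, 96, kernel_size=9, padding=4, padding_mode='circular')

d = 200
e_s, e_r = torch.randn(8, d), torch.randn(8, d)
feat = circ_conv(checkered_reshape(e_s, e_r, m=20, n=20))  # [8, 96, 20, 20]
```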

    In this paper, we are inspired by InteractE and borrow its checkered feature reshaping and circular convolution components for IntSE. Unlike InteractE, whose multiple feature permutations require a large number of parameters, IntSE uses only one checkered feature reshaping and one circular convolutional layer, improving the efficiency of the model.

    SENet [36] explicitly models the association between feature channels and filters channel information by self-learning the weight of each channel, so as to enhance useful features and suppress useless ones. Inspired by SENet and related works, we first added SENet to IntSE to enhance useful interactions and improve the performance of LP. However, the improvement was quite limited. Therefore, we propose a new SENet variant called LPSENet, as shown in Figure 3, to enhance the useful features for LP.

    Figure 3.  The diagram of improved SENet module: LPSENet.

    Let $X=[x_1,x_2,\ldots,x_C]\in\mathbb{R}^{H\times W\times C}$ be a feature map tensor, where $x_i$ represents the feature map of the $i$-th channel of size $H\times W$, and $C$ denotes the total number of channels. The operation flow of LPSENet is as follows:

    1) Squeeze: LPSENet performs Global Average Pooling (GAP) on each feature channel $x_c$ to obtain the statistic $l_c$ as follows:

    $$l_c=p(x_c)=\frac{1}{W\times H}\sum_{i=1}^{W}\sum_{j=1}^{H}x_c(i,j). \quad (3.1)$$

    The total $C$ statistical values are aggregated into a compressed feature tensor $L=[l_1,l_2,\ldots,l_C]\in\mathbb{R}^{1\times 1\times C}$, which contains the global information of all features.

    2) Excitation: To capture the dependence between channels, a gating mechanism is used to generate the weight matrix $A=[a_1,a_2,\ldots,a_C]\in\mathbb{R}^{1\times 1\times C}$ corresponding to the feature channels. Unlike the gating mechanism of SENet, we design a lighter and more efficient one. SENet uses a gating mechanism consisting of a fully connected dimension-reduction layer, a ReLU [37] activation layer, and a fully connected dimension-increasing layer; the fully connected layers have a huge number of parameters and are prone to overfitting. Thus, in LPSENet, the fully connected layers are replaced by convolutional layers with filters $\omega\in\mathbb{R}^{1\times 1}$, and Dropout is added to avoid overfitting. Equations (3.2) and (3.3) formalize the gating mechanisms of SENet and LPSENet, respectively.

    $$A=\sigma(W_2 f(W_1 L)), \quad (3.2)$$
    $$A=\sigma(f(f(L*\omega_1)*\omega_2)), \quad (3.3)$$

    where $f$ is the ReLU function and $\sigma$ is the Sigmoid function, $W_1\in\mathbb{R}^{\frac{C}{q}\times C}$ and $W_2\in\mathbb{R}^{C\times\frac{C}{q}}$ are learned to explicitly model the correlation between feature channels, $q$ is the reduction ratio, and $\omega_1$ and $\omega_2$ are the convolution filters.

    Compared with SENet, LPSENet introduces more non-linearity. Thus, it has a higher expressive ability and is more robust to overfitting. The convolutional layer has the same output effect as the fully connected layer but reduces the number of parameters and improves computational efficiency. The experimental results in Section 5 show that LPSENet is more suitable for LP than SENet.

    3) Scale: The output weight $A$ of the excitation operation represents the importance of each feature channel after feature selection, and the final output of the LPSENet block $\bar{X}=[\bar{x}_1,\bar{x}_2,\ldots,\bar{x}_C]\in\mathbb{R}^{H\times W\times C}$ is obtained by rescaling the input feature map $X$ with the activations:

    $$\bar{x}_c=a_c\cdot x_c, \quad (3.4)$$

    where $\bar{x}_c$ refers to the channel-wise multiplication between the scalar $a_c$ and the feature map $x_c$.
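    A minimal PyTorch sketch of the LPSENet block described by Eqs (3.1)-(3.4) is given below; the dropout rate and its placement after the first activation are assumptions, since the text does not fix them.

```python
import torch
import torch.nn as nn

class LPSENet(nn.Module):
    """Sketch of LPSENet: squeeze by global average pooling (Eq 3.1),
    excitation through two 1x1 convolutions in place of SENet's fully
    connected layers plus Dropout (Eq 3.3), then channel-wise rescaling
    (Eq 3.4). The dropout rate and placement are assumed."""
    def __init__(self, channels: int, q: int = 4, p_drop: float = 0.2):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # l_c, Eq (3.1)
        self.excite = nn.Sequential(            # A, Eq (3.3)
            nn.Conv2d(channels, channels // q, kernel_size=1),  # omega_1
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),                 # assumed placement
            nn.Conv2d(channels // q, channels, kernel_size=1),  # omega_2
            nn.ReLU(inplace=True),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.excite(self.squeeze(x))        # [B, C, 1, 1] channel weights
        return x * a                            # x_bar_c = a_c * x_c, Eq (3.4)

att = LPSENet(channels=96, q=4)
y = att(torch.randn(8, 96, 20, 20))             # calibrated feature maps
```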

    The goal of LP is to learn a scoring function $\psi$ over entity and relation embeddings so that the score $\psi(s,r,o)$ of a fact triple $(s,r,o)$ is higher than the score $\psi(s',r',o')$ of a non-fact triple $(s',r',o')$. Table 1 summarizes the scoring functions of CNN-based KGE models, where $\bar{e}_s$ and $\bar{e}_r$ denote the 2D stacked reshapings of the entity embedding $e_s$ and relation embedding $e_r$, $[;]$ represents the concatenation operation, $*$ denotes the standard convolution operation, $\omega$ denotes the set of convolutional filters, $W$ and $w$ are learned weight matrices, $\phi_{chk}(P_k(e_s,e_r))$ denotes $k$ different checkered reshapings of $e_s$ and $e_r$, $\circledast$ denotes the circular convolution, $\mathrm{vec}(\cdot)$ represents the vectorization function, $f$ and $g$ are non-linear functions, $DM(\cdot)$ represents the directional multi-dimensional attention function of DMACM [24], $\mathrm{kernel}(\cdot)$ and $\mathrm{Mul}(\cdot)$ represent the Gaussian kernel function and multi-attention in KMAE [25], and $ca(\cdot)$ represents the channel attention function of IntSE.

    Table 1.  Scoring functions of CNN-based KGE models.
    Model             Scoring function $\psi(e_s,e_r,e_o)$
    ConvE [20]        $f(\mathrm{vec}(f([\bar{e}_s;\bar{e}_r]*\omega))\,W)\,e_o$
    Conv-TransE [22]  $g(\mathrm{vec}(f([e_s;e_r]*\omega))\,W)\,e_o$
    ConvKB [21]       $\mathrm{concat}(g([e_s;e_r;e_o]*\omega))\cdot w$
    InteractE [23]    $g(\mathrm{vec}(f(\phi_{chk}(P_k(e_s,e_r))\circledast\omega))\,W)\,e_o$
    DMACM [24]        $\mathrm{concat}(f(DM(e_s,e_r,e_o))*\omega)\,W$
    KMAE [25]         $g(\mathrm{Mul}([\mathrm{kernel}(\bar{e}_s);\bar{e}_r]\,\|\,[\bar{e}_s;\mathrm{kernel}(\bar{e}_r)]))\,e_o$
    JointE [26]       $f([e_s;e_r]*\omega^1_{1D}*\omega^2_{1D}+\mathrm{vec}([\bar{e}_s*\omega^r;\bar{e}_r*\omega^s])\,W)\,e_o$
    IntSE             $g(\mathrm{vec}(ca(f(\phi_{chk}(e_s,e_r)\circledast\omega)))\,W)\,e_o$


    In order to train the model parameters, we use binary cross entropy with label smoothing as the loss function [20], as shown in Eq (3.5):

    $$\Gamma(p,t)=-\frac{1}{N}\sum_{i}\left(t_i\log(p_i)+(1-t_i)\log(1-p_i)\right), \quad (3.5)$$

    where $p=\sigma(\psi(s,r,o))$ indicates the probability of a correct link prediction, obtained by applying the logistic sigmoid function $\sigma(\cdot)$ to the score $\psi(s,r,o)$, and $t$ is the smoothed label. In this paper, we use Adam [38] as the optimizer and label smoothing to reduce the overfitting caused by the saturation of the output non-linearity at the labels [39].
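    A sketch of Eq (3.5) in PyTorch follows, assuming ConvE-style 1-N training in which one (s, r) query is scored against every candidate tail; the smoothing scheme and the value of eps are assumptions.

```python
import torch
import torch.nn.functional as F

def lp_loss(scores: torch.Tensor, true_o: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Binary cross entropy with label smoothing (Eq 3.5). `scores` holds
    psi(s, r, .) with shape [batch, num_entities]; the smoothing scheme is
    assumed to follow ConvE-style 1-N training."""
    t = torch.zeros_like(scores)
    t.scatter_(1, true_o.view(-1, 1), 1.0)       # one-hot true tails
    t = (1.0 - eps) * t + eps / scores.size(1)   # smoothed labels t_i
    p = torch.sigmoid(scores)                    # p = sigma(psi(s, r, o))
    return F.binary_cross_entropy(p, t)

# One Adam step, mirroring the optimizer choice above (`model` is assumed):
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# lp_loss(model(s, r), true_o).backward(); opt.step(); opt.zero_grad()
```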

    In this section, we first describe the setup of our experiments. Then, we present the detailed experimental results and provide thorough analyses.

    Dataset: We use the two most widely used knowledge graph datasets for link prediction, namely FB15k-237 [35] and WN18RR [20], in our experiments. Specifically, FB15k-237 is an improved version of the FB15k [10] dataset derived from FreeBase [1], in which all reversal relations of FB15k are deleted to prevent test triples from being directly deduced from reversed training triples. WN18RR is a subset of the WN18 [10] dataset derived from WordNet [4], with inverse relations likewise deleted, as in FB15k-237. The characteristics of the two datasets are shown in Table 2.

    Table 2.  Statistics of the FB15k-237 and WN18RR datasets.
    Dataset    #entities  #relations  #train   #valid  #test
    FB15k-237  14,541     237         272,115  17,535  20,466
    WN18RR     40,943     11          86,835   3,034   3,134


    Evaluation metrics: In this paper, we use four commonly used evaluation criteria for link prediction, namely MR, MRR, Hits@1, and Hits@10, to measure the performance of different models. The mean rank (MR) is the mean of the test triples' ranks, i.e.,

    $$MR=\frac{1}{2|T|}\sum_{(s,r,o)\in T}(rank_s+rank_o),$$

    where $|T|$ is the size of the test set. The mean reciprocal rank (MRR) is the mean of the reciprocals of the test triples' ranks, i.e.,

    $$MRR=\frac{1}{2|T|}\sum_{(s,r,o)\in T}\left(\frac{1}{rank_s}+\frac{1}{rank_o}\right).$$

    Hits@k is the percentage of test triples whose rank is at most $k$. Among the four indicators, a lower MR and higher MRR, Hits@1, and Hits@10 indicate better performance.
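    Given the ranks collected over the test set (both $rank_s$ and $rank_o$ for every triple, so $2|T|$ values in total), the four metrics reduce to a few lines, as in this sketch.

```python
import torch

def lp_metrics(ranks: torch.Tensor) -> dict:
    """MR, MRR, and Hits@k from the ranks of the true entities; `ranks`
    contains rank_s and rank_o for every test triple (length 2|T|), so the
    plain means below match the MR/MRR definitions above."""
    ranks = ranks.float()
    return {
        'MR': ranks.mean().item(),
        'MRR': (1.0 / ranks).mean().item(),
        'Hits@1': (ranks <= 1).float().mean().item(),
        'Hits@10': (ranks <= 10).float().mean().item(),
    }

# e.g., lp_metrics(torch.tensor([1, 3, 120, 2])) -> MR 31.5, MRR ~0.460, ...
```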

    Hyperparameter setting and implementation: The ranges of hyperparameters are set as follows: the learning rate $\gamma\in\{0.01,0.005,0.001,0.0001\}$, the entity and relation embedding dimension $d\in\{100,200\}$, the kernel size of the 2D convolutional layer $k\in\{5,7,9,11\}$, and the reduction ratio of LPSENet $1/q$ with $q\in\{2,4,8,16\}$. In addition, batch normalization and Dropout are used to avoid overfitting. To decide the values of the hyperparameters, each model is trained for 500 epochs via grid search, and the hyperparameters with the best performance are selected according to the MRR on the validation set [20,23]. For the FB15k-237 dataset, the selected settings are $\gamma=0.0001$, $d=200$, $k=9$, $q=4$, and a batch size of 128. For the WN18RR dataset, the selected settings are $\gamma=0.001$, $d=200$, $k=11$, $q=8$, and a batch size of 256. We implemented our models with PyTorch in Python 3. All experiments were conducted on a server with 4 Intel Xeon W-2104 CPUs @ 3.20 GHz, 32 GB DDR4 RAM, and 1 NVIDIA GeForce RTX 2060 GPU with 8 GB GDDR6 RAM.
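    The selection procedure can be sketched as follows; `train_and_eval` is a hypothetical helper (not part of any released code) that trains one configuration for 500 epochs and returns its validation MRR.

```python
from itertools import product

# Hedged sketch of the grid search described above. `train_and_eval` is a
# hypothetical helper returning the validation-set MRR of one configuration.
grid = {
    'lr':     [0.01, 0.005, 0.001, 0.0001],  # learning rate gamma
    'dim':    [100, 200],                    # embedding dimension d
    'kernel': [5, 7, 9, 11],                 # 2D conv kernel size k
    'q':      [2, 4, 8, 16],                 # LPSENet reduction ratio
}
best_cfg, best_mrr = None, -1.0
for lr, dim, kernel, q in product(*grid.values()):
    mrr = train_and_eval(lr=lr, dim=dim, kernel=kernel, q=q, epochs=500)
    if mrr > best_mrr:
        best_cfg, best_mrr = {'lr': lr, 'dim': dim, 'kernel': kernel, 'q': q}, mrr
```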

    In the experiments, we compare IntSE with six baseline models, namely ConvE [20], Conv-TransE [22], InteractE [23], DMACM [24], KMAE [25], and JointE [26].

    The prediction accuracy of the different models on the FB15k-237 and WN18RR datasets is shown in Table 3. The scores of all the baselines are taken directly from the values reported in the original papers. We observe that IntSE achieves high accuracy on both datasets: it achieves competitive performance on all evaluation metrics and outperforms all the baselines in terms of MRR and Hits@1 on FB15k-237. Specifically, we draw the following conclusions from the experimental results:

    Table 3.  Prediction accuracy on the FB15k-237 and WN18RR datasets.
                      FB15k-237                        WN18RR
    Model             MRR    MR   Hits@10  Hits@1      MRR    MR    Hits@10  Hits@1
    ConvE [20]        0.325  244  0.501    0.237       0.430  4187  0.520    0.400
    Conv-TransE [22]  0.330  -    0.510    0.240       0.460  -     0.520    0.430
    InteractE [23]    0.354  172  0.535    0.263       0.463  5202  0.528    0.430
    DMACM [24]        0.270  244  0.440    -           0.230  552   0.540    -
    KMAE [25]         0.326  235  0.502    0.240       0.448  4441  0.524    0.415
    JointE [26]       0.356  177  0.543    0.262       0.471  4655  0.537    0.438
    IntSE             0.359  179  0.540    0.267       0.469  5007  0.532    0.439


    1) IntSE outperforms InteractE in terms of MRR, Hits@10, and Hits@1 on both datasets, which indicates that channel attention effectively improves the performance of CNN-based KGE models for LP. Furthermore, IntSE obtains improvements of 2.5% and 3% over InteractE in terms of MRR and Hits@1 on the FB15k-237 dataset, while on the WN18RR dataset the improvement rates of IntSE over InteractE on MRR and Hits@1 are 1.7% and 2.3%. These results confirm that IntSE not only retains the advantages of InteractE but also further enhances its useful feature interactions.

    2) Both IntSE and InteractE outperform KMAE in terms of MRR, Hits@10, and Hits@1 on both datasets, which indicates that the checkered feature reshaping and circular convolution operations, which increase the feature interactions between entities and relations, are effective. The ablation studies of IntSE in Section 5 further demonstrate the effectiveness of these two operations.

    3) It is unexpected that DMACM outperforms all the other models in MR and Hits@10 on the WN18RR dataset. However, DMACM is inferior to the other models in the MRR metric on both datasets. The reason may be the distinguishing feature of WN18RR, which has fewer relations but more entities.

    4) Compared to JointE, the state-of-the-art CNN-based KGE model, IntSE performs better in terms of Hits@1 on both datasets. Moreover, they are well-matched in other metrics.

    To further verify the effectiveness of IntSE, we also evaluate the performance of the models for LP on different categories of relations on the FB15k-237 dataset. We use FB15k-237 in this set of experiments because its relations are more diverse. Based on the average number of tail entities per head entity and the average number of head entities per tail entity, relations are divided into four categories: 1-to-1, 1-to-n, n-to-1, and n-to-m; an average number of less than 1.5 is marked as "1" and "n" otherwise. Among the 224 distinct relations in the test set of FB15k-237, 5.8% are 1-to-1, 11.6% are 1-to-n, 33.9% are n-to-1, and 48.7% are n-to-m relations, whereas the 11 distinct relations in the test set of WN18RR are distributed as 2, 4, 3, and 2 across these four classes [34]. Because the other baselines do not report the performance of LP on different relation categories in their papers, ConvE [20] and InteractE [23], two classical CNN-based KGE models, are used for comparison in these experiments. MRR and Hits@10 are used as evaluation criteria [14]. Table 4 presents the experimental results of the different models for LP on the FB15k-237 dataset, and a sketch of how relations can be bucketed into these categories follows the table. From Table 4, we find that IntSE achieves better performance than ConvE and InteractE in all four relation categories, whether it deals with simple relation categories (e.g., 1-to-1) or more complex ones (e.g., 1-to-n and n-to-m). This again verifies that IntSE is robust and well suited to link prediction tasks with various relation categories.

    Table 4.  Prediction accuracy by relation category on the FB15k-237 dataset.
                          ConvE [20]       InteractE [23]   IntSE
                          MRR    Hits@10   MRR    Hits@10   MRR    Hits@10
    Head Pred  1-to-1     0.374  0.505     0.386  0.547     0.391  0.561
               1-to-n     0.091  0.170     0.106  0.192     0.112  0.198
               n-to-1     0.444  0.644     0.466  0.647     0.472  0.652
               n-to-m     0.261  0.459     0.276  0.476     0.285  0.479
    Tail Pred  1-to-1     0.366  0.510     0.368  0.547     0.372  0.550
               1-to-n     0.762  0.878     0.777  0.881     0.780  0.888
               n-to-1     0.069  0.150     0.074  0.141     0.079  0.152
               n-to-m     0.375  0.603     0.395  0.617     0.400  0.621

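    For reference, the sketch below buckets relations into the four categories from raw (s, r, o) triples using the 1.5 threshold described above; the counting convention (triples of r per distinct head, and per distinct tail) is assumed to follow the standard Bordes-et-al. style.

```python
from collections import defaultdict

def relation_categories(triples):
    """Assign each relation to 1-to-1 / 1-to-n / n-to-1 / n-to-m using the
    1.5 threshold on the average tails-per-head (tph) and heads-per-tail
    (hpt); `triples` is an iterable of (s, r, o). The counting convention
    is an assumption (standard Bordes-et-al. style)."""
    count = defaultdict(int)
    heads, tails = defaultdict(set), defaultdict(set)
    for s, r, o in triples:
        count[r] += 1
        heads[r].add(s)
        tails[r].add(o)
    cats = {}
    for r in count:
        tph = count[r] / len(heads[r])   # avg tail entities per head entity
        hpt = count[r] / len(tails[r])   # avg head entities per tail entity
        side_h = '1' if hpt < 1.5 else 'n'
        side_t = '1' if tph < 1.5 else 'n'
        cats[r] = f'{side_h}-to-{side_t}'.replace('n-to-n', 'n-to-m')
    return cats
```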

    IntSE uses only one checkered feature reshaping and one circular convolutional layer; that is, the number of feature permutations perm in IntSE is one. We conduct experiments in the same setting to further evaluate the impact of feature permutation on the prediction accuracy of InteractE and IntSE. Table 5 presents the experimental results. We can see that the prediction accuracy of both InteractE and IntSE drops slowly as perm increases on the FB15k-237 dataset; both work best when perm is one. This result is consistent with that reported in the InteractE paper [23]. Nevertheless, IntSE remains superior to InteractE in all cases under all metrics. From Table 5, we can further conclude that increasing the beneficial feature interactions is crucial for improving the performance of CNN-based KGE models.

    Table 5.  Prediction accuracy with varying feature permutations on the FB15k-237 dataset.
                InteractE [23]                  IntSE
                perm = 1  perm = 2  perm = 3    perm = 1  perm = 2  perm = 3
    MRR         0.353     0.353     0.349       0.354     0.353     0.350
    Hits@10     0.537     0.534     0.533       0.539     0.537     0.535
    Hits@20     0.619     0.616     0.613       0.620     0.615     0.614
    Hits@40     0.694     0.692     0.688       0.695     0.691     0.688
    Hits@80     0.764     0.761     0.759       0.766     0.763     0.758
    Hits@160    0.829     0.824     0.822       0.830     0.827     0.822


    We also evaluate the parameter efficiency of different models on the FB15k-237 dataset. Table 6 presents the experimental results, and a small utility for reproducing such counts is sketched after the table. The number of parameters in IntSE is larger than that of ConvE because IntSE adopts 96 convolution filters of size 9×9 while ConvE adopts 32 convolution filters of size 3×3, and the fully-connected layer of IntSE therefore has many more parameters than that of ConvE. The number of parameters in IntSE is nearly equal to that of InteractE when the feature permutation perm is one: the channel attention mechanism of IntSE is lightweight, adding only a few thousand parameters. The number of parameters in InteractE increases significantly with perm because the number of parameters in its convolution and fully-connected layers is positively correlated with perm.

    Table 6.  Parameter efficiency of different models on the FB15k-237 dataset.
    Model                       Parameters
    ConvE [20]                  4.96M
    InteractE [23], perm = 1    10.7M
    InteractE [23], perm = 2    18.38M
    InteractE [23], perm = 3    26.07M
    InteractE [23], perm = 4    33.75M
    IntSE                       10.7M

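    Counts like those in Table 6 can be reproduced from a model instance with a generic PyTorch utility of this form (not tied to any released code):

```python
import torch

def count_parameters(model: torch.nn.Module) -> str:
    """Trainable-parameter count in millions, the quantity reported in Table 6."""
    n = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return f'{n / 1e6:.2f}M'
```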

    There are three key components in IntSE: checkered feature reshaping, the circular convolution operation, and the channel attention mechanism. Based on the public FB15k-237 dataset, ablation studies are conducted to evaluate the effects of these key components on the performance of IntSE. Specifically, we also evaluate the effects of three different channel attention mechanisms on the performance of IntSE.

    The effects of the key components on the performance of IntSE are shown in Table 7. Here sr, cr, sc, cc, and ca denote the stacked feature reshaping, checkered feature reshaping, standard convolution, circular convolution, and channel attention components, respectively. From Table 7, we can see that checkered feature reshaping, circular convolution, and channel attention all contribute significantly to the performance of IntSE, and the channel attention mechanism boosts performance in all cases. Although cr+cc outperforms most variants of IntSE with the help of checkered feature reshaping and circular convolution, it is inferior to IntSE: with the help of the channel attention mechanism, IntSE obtains improvements of 4% and 3.5% over cr+cc in terms of MRR and Hits@1. These findings further demonstrate the effectiveness of IntSE and the importance of the channel attention mechanism.

    Table 7.  Effect of key components on the performance of IntSE.
    Model             MRR    MR   Hits@10  Hits@1
    sr+sc (ConvE)     0.325  244  0.501    0.237
    sr+sc+ca          0.342  181  0.525    0.251
    cr+sc             0.338  185  0.519    0.249
    cr+sc+ca          0.350  193  0.536    0.262
    cr+cc             0.346  175  0.532    0.258
    cr+cc+ca (IntSE)  0.359  179  0.540    0.267


    The channel attention mechanism is the key component of IntSE. To further confirm its necessity, we evaluate the effects of three different channel attention mechanisms on the performance of IntSE. Specifically, SENet [36] is the first work to boost the representation power of a CNN by modeling the relationships between channels; SKNet [40] is the first to explicitly focus on the adaptive receptive field size of neurons by introducing an attention mechanism; and ECA-Net [41] is among the most effective channel attention modules for deep CNNs in computer vision applications. Table 8 presents the detailed results of the IntSE variants with different channel attention mechanisms under the four evaluation indicators: MR, MRR, Hits@1, and Hits@10.

    Table 8.  Performance of IntSE with different channel attention mechanisms.
    Model          MRR    MR   Hits@10  Hits@1
    IntSE-SENet    0.358  183  0.538    0.266
    IntSE-SKNet    0.350  193  0.527    0.262
    IntSE-ECANet   0.346  223  0.529    0.250
    IntSE-ECANet'  0.355  184  0.536    0.263


    IntSE-SENet performs the best among all the compared models. To our surprise, ECA-Net brings side effects to IntSE: the IntSE-ECANet model, which integrates checkered feature reshaping, circular convolution, and ECA-Net, has the lowest performance. The main reason is that a KG has different data characteristics from images and videos: the IntSE model is simple, with only one convolutional layer and a small input size, while deep CNNs in computer vision applications often have very large input sizes. Detailed analyses of the experimental results lead to the following views on CNN-based KGE models for LP:

    1) Channel attention is more important than spatial attention. SKNet uses soft attention to fuse the features of multiple convolution branches with different kernel sizes. The fact that IntSE outperforms IntSE-SKNet for LP indicates that the convolution kernel size of IntSE is already appropriate. Moreover, since IntSE-SENet outperforms IntSE-SKNet, we again confirm that channel attention is more important than spatial attention. The experimental results in [42] also show that the channel-first order is slightly better than the spatial-first order. Therefore, we should pay more attention to explicitly modeling the association between feature channels so as to improve the performance for LP.

    2) Global cross-channel interaction is more important than local cross-channel interaction. Although ECA-Net is an efficient and effective channel attention mechanism for deep CNNs, the performance of IntSE-ECANet is lower than that of IntSE-SENet. The possible reason is that the fast 1D convolution with kernel size k in ECA-Net only captures local cross-channel interaction rather than global cross-channel interaction. To verify this hypothesis, we conduct additional experiments on a model named IntSE-ECANet', which adopts a variant of ECA-Net with only a single fully connected layer. Since IntSE-ECANet' exhibits better performance than IntSE-ECANet, we further confirm that global cross-channel interaction is more important than local cross-channel interaction in CNN-based KGE models for LP. We also observe that IntSE-SENet performs the best among all the compared models, which indicates that the two fully connected layers designed in SENet to capture non-linear global cross-channel interactions are effective, and that the performance of a CNN-based KGE model on LP can be improved by introducing SENet.

    The channel attention module of IntSE is LPSENet, which changes the gating mechanism of SENet: the fully connected layers are replaced by convolutional layers with filters $\omega\in\mathbb{R}^{1\times 1}$, and Dropout is added to avoid overfitting. We also evaluate the performance of IntSE with the two gating mechanisms; the results are shown in Table 9. We can see that IntSE achieves better performance with the gating mechanism of LPSENet, as it has a higher expressive ability and is more robust to overfitting.

    Table 9.  Performance of IntSE with different gating mechanisms in channel attention.
    Model        MRR    MR    Hits@10  Hits@1
    IntSE-SENet  0.466  5123  0.530    0.434
    IntSE        0.469  5007  0.532    0.439


    In this paper, we proposed a lightweight CNN-based knowledge graph embedding (KGE) model with channel attention called IntSE. Although CNN-based KGE models have attracted much attention from the research community and achieve higher LP accuracy than other KGE models, they often contain too many parameters and have low efficiency. We explored the balance between the efficiency and effectiveness of CNN-based KGE models and developed IntSE through extensive experiments. The key idea of IntSE is to increase the favorable feature interactions between the entities and relations in a KG triple: checkered feature reshaping and circular convolution increase the feature interactions, and the channel attention further enhances the useful ones. Ablation studies were carried out to explore the contribution of each component of IntSE to LP; in particular, the impacts of different channel attention mechanisms (i.e., SENet, SKNet, and ECA-Net) on IntSE were investigated. Compared with the state-of-the-art CNN-based KGE models, IntSE mostly achieved the best performance under various evaluation criteria on public datasets. These extensive experimental results substantiate the efficiency and effectiveness of our model.

    In future work, we would like to explore whether other improved versions of SENet can further improve the performance of IntSE and investigate whether incorporating a spatial attention module into IntSE can boost its performance. In addition, it would also be interesting to consider how to improve the performance of KGE models for other downstream tasks, e.g., question answering [6], recommendation [7], and entity resolution [31].

    This work was supported by the National Natural Science Foundation of China under Grants No. 61976032 and No. 62002039, and by the Fundamental Research Funds for the Central Universities under Grants No. 3132022261 and No. 3132022634.

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
