

    Printed script letter classification has tremendous commercial and pedagogical importance for book publishers, online Optical Character Recognition (OCR) tools, bank officers, postal officers, video makers, and others [1,2,3]. Postal mail sorting by zip code, signature verification, and check processing are commonly performed using grapheme classification [4,5,6]. Some sample images of its applications are shown in Figure 1.

    Figure 1.  Example application areas of Bangla handwritten letter classification in real-life.

    Statistically, the importance of printed script letter classification is enormous when a large population uses a specific language. For example, with nearly 230 million native speakers, Bengali (also called Bangla) ranks as the fifth most spoken language in the world [7]. It is the official language of Bangladesh and the second most spoken language in India [8] after Hindi.

    Handwritten character classification or recognition is particularly challenging for the Bengali language, as it has 49 letters and 18 potential diacritics (or accents). Moreover, the language supports complex letter structures created from its basic letters and diacritics. In total, Bangla letter variations are estimated at around 13,000, which is 52 times more than the English letter variations [9]. Although grapheme classification for several languages has received much attention [10,11,12], Bangla remains relatively unexplored, with most work done on the detection of vowels and consonants. The limited progress in exploring the Bengali language motivated us to classify Bengali handwritten letters into three constituent elements: root, vowel, and consonant.

    Previously, several machine learning models have been used for grapheme recognition in different languages [13]. Several studies using deep convolutional neural networks have succeeded in detecting handwritten characters in Latin, Chinese, and English [14,15]. In other words, successful feature extraction became possible using different types of layers in neural networks. A convolutional neural network trained on augmented handwriting images can produce a better model through deep learning [16,17]. However, previous research in this area largely avoided the challenges that deep learning most needs to address for classification with a high number of labels: wide image variation, massive data usage, and appropriate model creation [19,20].

    This paper proposes a deep convolutional neural network with an encoder-decoder to facilitate accurate classification of Bangla handwritten letters from images. We use 200,840 handwritten images to train and test the proposed deep learning model. We train in 4 steps with four subsets of 50,210 images each. In doing so, we create three different labels (root, vowel, and consonant) from each handwritten image. The results show that the proposed model classifies handwritten Bangla letters with accuracies of 93% for roots, 96% for vowels, and 97% for consonants, which is much better than previous work given the Bangla grapheme variations and dataset size.

    The paper is organized as follows: Section 2 briefly reviews existing research in this area. Section 3 presents the Bangla handwritten letter dataset, discussing its structure and format and the augmentation used to create many variations. Section 4 describes the architecture of the models for Bangla handwritten grapheme classification, and Section 5 discusses the experimental process, tools, and programming language used. Section 6 presents the detailed results of the training process and validation. Finally, Section 7 concludes by discussing the contributions and applicability of this work to the classification of Bangla handwritten letters.

    Optical Character Recognition is one of the most popular research topics in computer graphics and linguistics research. This section briefly discusses two areas: first, how deep learning is used in character recognition; second, how deep learning is used in Bangla handwritten digit and letter classification.

    Many approaches to optical character recognition have been proposed. Théodore et al. showed that learning features with a CNN works much better for handwritten word recognition [21]. However, such a model needs longer processing time when classifying a word compared to a letter. Zhuravlev et al. addressed this issue and experimented with a differential classification technique on grapheme images using multiple neural networks [22,23]. However, their model works well only on a small dataset and fails when augmented images are provided. Jiayue et al. discussed this issue and showed that proper augmentation before feeding images into a CNN can make grapheme classification more efficient [24].

    Regarding Bangla character and digit recognition, relatively little research is available. A study by Shopon et al. presented unsupervised learning using an auto-encoder with a deep ConvNet to recognize Bangla digits [25]. A similar study by Akhand et al. proposed a simple CNN for Bangla handwritten numeral recognition [26,27]. These methods achieved 99% accuracy in detecting Bangla digits, and classifying only ten digit labels poses fewer challenges than the larger label space of character recognition.

    A study on Bangla character recognition by Rabby et al. used CMATERdb and BanglaLekha as datasets and a CNN to recognize handwritten characters. The resulting accuracies were 98% and 95.71% on the two datasets, respectively [28]. Another study by Alif et al. showed that a ResNet-18 architecture gives similar accuracy on CMATERdb3, a relatively larger dataset than those used previously [29,30]. The limitation of these two studies is their limited image variation and dataset size.

    This paper addresses the limitations of previous research regarding image augmentation, dataset size, proper model creation, and classification with a high number of labels.

    This section describes raw data and data augmentation to ensure better data preparation for our proposed model.

    This study uses the dataset from the Bengali.AI community, which provides it as an open-source dataset for research. The dataset was prepared from Bangla letters handwritten by a group of participants. The images of these letters are provided in parquet format, each 137×236 pixels. Some sample handwritten images are shown in Figure 2.

    Figure 2.  Sample handwritten Bangla grapheme images.

    The Bengali language has 49 letters, comprising 11 vowels and 38 consonants, plus 18 potential diacritics. The handwritten Bengali characters consist of three components: grapheme-root, vowel-diacritic, and consonant-diacritic. To simplify the data for machine learning, we organize 168 grapheme-root, 11 vowel-diacritic, and 7 consonant-diacritic classes as unique labels. Each label is then encoded as a unique integer. Table 1 summarizes the metadata information of the training dataset.
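    The label-to-integer encoding described above can be sketched as follows; the class names here are illustrative placeholders, not the actual dataset labels, and the real dataset has 168 roots, 11 vowel diacritics, and 7 consonant diacritics.

```python
# Hypothetical label lists; names are illustrative only.
grapheme_roots = ["ka", "kha", "ga"]        # 168 classes in the real dataset
vowel_diacritics = ["a-kar", "i-kar"]       # 11 classes in the real dataset
consonant_diacritics = ["ref", "ya-phala"]  # 7 classes in the real dataset

def encode(labels):
    """Map each label name to a unique integer (sorted for determinism)."""
    return {name: i for i, name in enumerate(sorted(labels))}

root_to_id = encode(grapheme_roots)
vowel_to_id = encode(vowel_diacritics)
consonant_to_id = encode(consonant_diacritics)
```

Each handwritten image is then associated with three integers, one per component, which become the three classification targets.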

    Table 1.  Metadata information of training dataset.
    Features Description
    image_id Sample ID number for each handwritten image
    grapheme_root Unique number of vowels, consonants, or conjuncts
    vowel_diacritic Nasalization of vowels, and suppression of the inherent vowels
    consonant_diacritic Nasalization of consonants, and suppression of the inherent consonants
    grapheme Target variable


    The raw training dataset has a total of 200,840 observations with almost 10,000 possible handwritten image variations. The raw testing dataset is created separately to distinguish it from the training dataset. Table 2 summarizes the metadata information of the test data.

    Table 2.  Metadata information of the testing dataset.
    Features Description
    image_id A unique image ID for each testing image
    row_id Test id of grapheme root, consonant diacritic, vowel diacritic
    component Grapheme root, consonant diacritic, vowel diacritic


    The raw dataset is a set of images in parquet format, as discussed in the previous section. The images cover a possible set of grapheme writings, but not all writing variations. In reality, there are around 13,000 possible letter variations, which makes this problem harder than grapheme classification for any other language. To create more variations (beyond the roughly 10,000 present), dataset augmentation becomes a necessary pre-processing step.

    We apply the following data augmentation techniques: (1) shifting, (2) rotating, (3) changing brightness, and (4) applying zoom. In all cases, precautions are taken so that augmented handwritten images are well formed; for example, too much shifting or too much brightness can destroy image pixels [31]. We also avoid applying random values in these operations during pre-processing of the dataset.

    For shifting, we apply width and height shifts to our images. For rotation, each image is rotated by (1) 8 degrees, (2) 16 degrees, and (3) 24 degrees in both the positive and negative directions. For zoom, we apply zoom-in factors of (1) 0.15 and (2) 0.3. Some sample outputs of Bangla handwritten letter augmentation are shown in Figure 3.

    Figure 3.  Bengali handwritten letter augmentation samples.

    We limited augmentation to these four operations to minimize the risk of generating invalid images. Because our dataset consists of characters, we must verify that augmentation does not produce false samples. For example, applying the horizontal flip option to our dataset would create images that are not valid Bangla handwriting, and the proposed model could then learn to classify the handwritten letters incorrectly.
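    The four operations above can be sketched in plain numpy as follows. This is an illustrative sketch, not the paper's actual pipeline: the white fill value (255) and nearest-neighbour resampling are assumptions.

```python
import numpy as np

def shift(img, dx, dy):
    """Shift the image by (dx, dy) pixels, filling exposed edges with white."""
    out = np.full_like(img, 255)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def rotate(img, degrees):
    """Rotate about the image centre using nearest-neighbour inverse mapping."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    theta = np.deg2rad(degrees)
    ys, xs = np.indices((h, w))
    # map each output pixel back to its source location
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx, sy = np.rint(sx).astype(int), np.rint(sy).astype(int)
    valid = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out = np.full((h, w), 255, dtype=img.dtype)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def brightness(img, delta):
    """Add a brightness offset and clip to the valid 8-bit range."""
    return np.clip(img.astype(int) + delta, 0, 255).astype(np.uint8)

def zoom(img, factor):
    """Zoom in by `factor` (e.g. 0.15), sampling a central crop at full size."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.indices((h, w))
    sy = np.rint((ys - cy) / (1 + factor) + cy).astype(int)
    sx = np.rint((xs - cx) / (1 + factor) + cx).astype(int)
    return img[sy.clip(0, h - 1), sx.clip(0, w - 1)]
```

All four functions preserve the 137×236 image shape, so augmented images can be fed to the same model input layer as the originals.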

    This section describes the architecture of the models that we build for Bangla grapheme image classification. We also discuss the neural network layers needed to fit the data to the model.

    A neural network is a series of algorithms that process the data through several layers that mimic how the human brain operates. A neural network for any input x can be expressed as:

    y = f(Σ_j w_j x_j + b) (4.1)

    where w_j are the network weights, b is a bias term, and f is a specified activation function. For a better approximation, multiple neurons are used in the form of hidden layers as follows:

    y_i = f(Σ_j w_i,j x_j + b_i) (4.2)

    To obtain the final output with higher accuracy, multiple hidden layers are used. The final output can be obtained as:

    z = f(Σ_k w_k^(z) y_k + b^(z)) (4.3)
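    Equations (4.1)–(4.3) can be made concrete with a minimal numpy sketch of the forward pass; the weights, biases, and input values below are arbitrary illustrative numbers, and ReLU is used as the activation f.

```python
import numpy as np

def relu(x):
    # one common choice for the activation f
    return np.maximum(0.0, x)

def neuron(x, w, b, f=relu):
    """Eq. (4.1): a single neuron, y = f(sum_j w_j x_j + b)."""
    return f(np.dot(w, x) + b)

def layer(x, W, b, f=relu):
    """Eq. (4.2): a layer of neurons, y_i = f(sum_j w_ij x_j + b_i)."""
    return f(W @ x + b)

x = np.array([1.0, -2.0, 0.5])        # input vector
W1 = np.array([[0.2, -0.1, 0.4],
               [0.5, 0.3, -0.2]])     # hidden-layer weights w_ij
b1 = np.array([0.1, -0.1])            # hidden-layer biases b_i
y = layer(x, W1, b1)                  # hidden outputs y_k

W2 = np.array([[1.0, -1.0]])          # output weights w_k^(z)
b2 = np.array([0.05])                 # output bias b^(z)
z = layer(y, W2, b2)                  # Eq. (4.3): final output
```

Stacking more `layer` calls corresponds to adding more hidden layers.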

    For image processing, we flatten the image and feed it through the neural network. A vital characteristic of images is that they are high-dimensional vectors, so fully connected networks require many parameters. To solve this issue, convolutional neural networks are used to reduce the number of parameters and adapt the network architecture specifically to vision tasks. CNNs are usually composed of a set of layers that can be grouped by their functionalities.

    In this study, we implement a CNN architecture with an encoder-decoder system. The encoder-decoder classifier works at the pixel level to extract features. Recent work by Jong et al. shows how encoder-decoder networks built from convolutional layers act as optimizers for complex image datasets [33]. The encoder and decoder are developed using convolution, activation, batch normalization, and pooling layers. These layers are described in detail in the following section and shown in Figure 4.

    Figure 4.  The proposed architecture of neural networks for Bangla grapheme classification.

    The convolution layer extracts feature maps from the input or previous layer. Each output node is connected to a subset of nodes from the previous layer [34,35], which is valuable in the classification process. The convolutional layer slides a window across the image and convolves it to create several sub-images, increasing the volume in terms of depth. As shown in Figure 4, convolutional layers appear in both the encoder and the decoder.
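    The sliding-window operation can be sketched directly (a single-channel, unoptimized "valid" convolution; real CNN layers apply many kernels across many channels, and deep-learning frameworks typically implement cross-correlation, as below, under the name "convolution"):

```python
import numpy as np

def conv2d(img, kernel):
    """Slide `kernel` over `img` and sum the elementwise products at each
    position ("valid" mode: no padding, so the output shrinks)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

Each kernel produces one feature map; stacking the maps from many kernels is what increases the volume's depth.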

    This section describes the activation functions used in the proposed network. We include the Rectified Linear Unit (ReLU) function to add non-linearity to the proposed network. The function is defined as

    f(x)=max(0,x) (4.4)

    where it returns 0 for negative input and x for positive input.

    There are other nonlinearity functions as well, such as sigmoid and tanh. The sigmoid non-linearity is defined as

    σ(x) = 1 / (1 + e^(−x)) (4.5)

    which maps a real number x to a value in (0, 1). Another nonlinear function, tanh, maps values into the interval [−1, 1].

    As the Bangla grapheme list contains more labels to detect than other languages' graphemes, we implement several non-linear layers as an encoder and a corresponding set of decoders to improve recognition performance. We use ReLU because of its simple piecewise-linear form, which improves classification performance compared to the sigmoid and tanh functions [36,37]. An essential feature of ReLU is that it is non-linear around 0 because its slope is not constant there. Therefore, when the CNN model includes bias terms, the nodes can change the slope at different input values using the ReLU function [37].

    In our model, we apply ReLU after each non-pooling layer in the encoder to map the value of each neuron to a real number. However, in some cases, ReLU units can die during training. To address this, a leaky ReLU is added, which applies a slight negative slope to negative values [38].
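    The activations discussed above can be written directly in numpy (tanh is provided by numpy itself); the leaky slope of 0.01 is a common default, not a value stated in the paper:

```python
import numpy as np

def relu(x):
    # Eq. (4.4): 0 for negative input, x for positive input
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # slight negative slope for x < 0, so the unit cannot fully "die"
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    # Eq. (4.5): maps any real x into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))
```

`np.tanh` covers the tanh case, mapping values into [−1, 1].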

    The batch normalization layer normalizes each input channel across a mini-batch [39,40]. Our study adjusts and scales the previously filtered sub-images, normalizing the output by subtracting the batch mean and dividing by the batch standard deviation. The result is then shifted by a learnable offset. Generally, using such a layer after a convolution layer and before the non-linear layers is useful for speeding up training and reducing sensitivity to network initialization [39].
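    A minimal numpy sketch of this computation, with `gamma` and `beta` standing in for the learnable scale and offset (fixed here for illustration; in training they are updated by backpropagation):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch per channel: subtract the batch mean, divide by
    the batch standard deviation, then apply a learnable scale and shift.
    `x` has shape (batch, channels); `eps` guards against division by zero."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

After this step each channel of the batch has (approximately) zero mean and unit variance before the scale and shift are applied.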

    We also implement a dropout layer during the final classification step. We use this layer to reduce the labels by zeroing those with low classification probability [41,42].

    The pooling layer is mainly used to reduce the resolution of the feature maps. To be specific, this layer downsamples the volume along the image dimensions to reduce the size of representation and number of parameters to prevent overfitting. Our model uses a max-pooling layer in each encoder block, which downsamples the volume by taking max from each block.
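    The downsampling step can be sketched as a non-overlapping max pool; the 2×2 pool size below is a common default and an assumption, since the paper does not state the exact size:

```python
import numpy as np

def max_pool(img, size=2):
    """Downsample by taking the max of each non-overlapping size×size block."""
    h, w = img.shape
    h2, w2 = h // size, w // size
    # crop to a multiple of the pool size, then group pixels into blocks
    blocks = img[:h2 * size, :w2 * size].reshape(h2, size, w2, size)
    return blocks.max(axis=(1, 3))
```

Halving both spatial dimensions this way quarters the representation size while keeping the strongest activation in each neighbourhood.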

    The proposed model is trained with a configured number of epochs, loss function, and batch size. The model contains both trainable and non-trainable parameters: the trainable parameter count is 136,048,154, and the non-trainable parameter count is 448. Thirteen convolution layers, three pooling layers, three normalization layers, and four dropout layers are used in our model. The experiment is implemented in the Python programming language with the Keras [43] and Theano [44] libraries.

    In terms of configuration, the proposed model uses 25 epochs with a training batch size of 128. For the loss function, we use categorical cross-entropy for root, vowel, and consonant classification. Mean Squared Error (MSE) is another possible loss metric, but categorical cross-entropy performs better in classification tasks [45].
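    The categorical cross-entropy computed for each of the three classification heads can be sketched as follows (a generic formulation of the loss, not the exact Keras call):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot targets and predicted probabilities.
    `eps` clipping avoids log(0) for confident wrong predictions."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```

The total training loss is then the sum of this quantity over the root, vowel, and consonant outputs.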

    After developing the model in Python, we run it on an Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz machine with 16 GB RAM. The same batch size and number of epochs were used for both training and validation. Performance is measured using accuracy, and we also track the loss function to evaluate how well the deep learning model fits the given data. Both metrics are popular in classification tasks using deep learning [46,47,48].

    In this section, we present the outcome of the model in terms of the evaluation metrics. The proposed CNN method is applied in four phases on four subsets of the grapheme dataset. In this way, we test how additional Bangla handwritten letter images help produce better deep neural network models. Conducting this research with more subsets of images would, however, raise computational and complexity challenges. For evaluation, the accuracy and loss at each epoch of training and validation are used.

    In the first phase, a subset of 50,210 images is used for training over 30 epochs. The results show that root accuracy is lower than vowel and consonant accuracy in both training and validation: training and validation root accuracy are 85% and 88%, respectively, while vowel accuracy is 92% and 95%, and consonant accuracy is 95% and 96%. This is because many more root characters must be identified than vowels and consonants. Figures 5 and 6 show the training and validation accuracy and cross-entropy loss over the epochs. The model shows well-converged behavior, implying it is well configured, with no sign of over- or underfitting.

    Figure 5.  Accuracy of proposed model (Phase 1).
    Figure 6.  Loss of proposed model (Phase 1).

    We also compare the CNN model with encoder-decoder to a traditional CNN that lacks the encoder-decoder concept. The traditional CNN model uses six convolution layers, three pooling layers, five normalization layers, and four dropout layers. Figures 7 and 8 visualize the accuracy and loss of this simple CNN model over 30 epochs. The figures show that the loss and accuracy fluctuate: the simple CNN model is sensitive to noise and produces erratic classification results, a problem known as overfitting. The results show better performance with the encoder-decoder concept than with the traditional one.

    Figure 7.  Accuracy of traditional CNN model.
    Figure 8.  Loss of traditional CNN model.

    After reaching a training accuracy of 85% in the first phase, we train the existing model on another subset of images. The hypothesis is that the more image variations are trained, the more the model learns, given the many root variations. In this phase, we take a different set of 50,210 images and train the existing model for 30 epochs. Figures 9 and 10 visualize the accuracy and loss of the model in phase 2, respectively.

    Figure 9.  Accuracy of proposed model (Phase 2).
    Figure 10.  Loss of proposed model (Phase 2).

    At the beginning of this training stage, we observe that training root accuracy drops from 85% to 80%. Not only the training root accuracy, but the accuracy of every other category also drops after adding the new subset of images. The opposite behavior is observed for the loss: at epoch 0, the loss of every category increases. However, phase 2 ends with better training root, vowel, and consonant accuracies of 92%, 95%, and 96%, respectively, and the loss decreases over the epochs in every case. These results imply that the model converges properly and trains well as more image subsets are used in the training process.

    As good results were maintained in the previous phases, we introduce another set of images with more variations for the model to learn. This time, however, we observe only small changes after training. Figures 11 and 12 show the accuracy and loss of the model in phase 3. After another 30 epochs, training root, vowel, and consonant accuracies are 94%, 96%, and 97%: root accuracy increases by 2%, while vowel and consonant accuracy each increase by 1%. The same behavior is found on the validation data, implying the model has converged well and can be finalized with one more round of training.

    Figure 11.  Accuracy of proposed model (Phase 3).
    Figure 12.  Loss of proposed model (Phase 3).

    This final phase verifies the convergence of the accuracy and loss with one more round of training on another set of 50,210 Bangla handwritten letter images. Figures 13 and 14 show the accuracy and loss of the model in the final phase. Root accuracy drops from 94% to 93% in training and from 98% to 97% in validation; in all other cases there is improvement or no change. The accuracy curves begin to show only small fluctuations, the loss functions converge, and all validation losses are 3% or below. These results imply that our final model has converged and is ready for reporting of the final accuracy and loss.

    Figure 13.  Accuracy of proposed model (Final Phase).
    Figure 14.  Loss of proposed model (Final Phase).

    Despite advances in grapheme image classification in computer vision, Bengali grapheme classification has remained largely unsolved due to confusable characters and many variations. Although Bangla is one of the most spoken languages in Asia, its grapheme classification has not yet been addressed by existing applications, even though many industries, such as banks, post offices, and book publishers, need Bangla handwritten letter recognition.

    In this paper, we implement a CNN architecture with an encoder-decoder, classifying 168 grapheme roots, 11 vowel diacritics (or accents), and 7 consonant diacritics from handwritten Bangla letters. One of the challenges is dealing with about 13,000 grapheme variations, far more than in English or Arabic. The performance results show that the proposed model achieves root accuracy of 93%, vowel accuracy of 96%, and consonant accuracy of 97%, significantly better than previous research in Bangla grapheme classification. Finally, we report the detailed loss and accuracy across the 4 phases of training and validation to show how the proposed model learns over time.

    To illustrate the model's performance, we compared it with a traditional CNN applied to the same dataset. The accuracy and loss of the traditional CNN fluctuate over time, indicating an over-fitted model. In comparison, the proposed CNN model with encoder-decoder performs much better in classifying Bangla handwritten grapheme images.



    Conflict of interests



    All authors declare no conflicts of interest in this paper.

    [27] Anhe GF, Page CP, et al. (2020) Sex differences in the influence of obesity on a murine model of allergic lung inflammation. Clin Exp Allergy 50: 256-266. doi: 10.1111/cea.13541
    [28] Johnston RA, Zhu M, Rivera-Sanchez YM, et al. (2007) Allergic airway responses in obese mice. Am J Respir Crit Care Med 176: 650-658. doi: 10.1164/rccm.200702-323OC
    [29] Arias K, Chu DK, Flader K, et al. (2011) Distinct immune effector pathways contribute to the full expression of peanut-induced anaphylactic reactions in mice. J Allergy Clin Immunol 127: 1552-1561. doi: 10.1016/j.jaci.2011.03.044
    [30] Krempski JW, Kobayashi T, Iijima K, et al. (2020) Group 2 innate lymphoid cells promote development of T follicular helper cells and initiate allergic sensitization to peanuts. J Immunol 204: 3086-3096. doi: 10.4049/jimmunol.2000029
    [31] Kuroda E, Ozasa K, Temizoz B, et al. (2016) Inhaled fine particles induce alveolar macrophage death and interleukin-1α release to promote inducible bronchus-associated lymphoid tissue formation. Immunity 45: 1299-1310. doi: 10.1016/j.immuni.2016.11.010
    [32] Tashiro H, Takahashi K, Sadamatsu H, et al. (2017) Saturated fatty acid increases lung macrophages and augments house dust mite-induced airway inflammation in mice fed with high-fat diet. Inflammation 40: 1072-1086. doi: 10.1007/s10753-017-0550-4
    [33] Dixon AE, Peters U (2018) The effect of obesity on lung function. Expert Rev Respir Med 12: 755-767. doi: 10.1080/17476348.2018.1506331
    [34] Periyalil HA, Wood LG, Wright TA, et al. (2018) Obese asthmatics are characterized by altered adipose tissue macrophage activation. Clin Exp Allergy 48: 641-649. doi: 10.1111/cea.13109
    [35] Liu J, Divoux A, Sun J, et al. (2009) Genetic deficiency and pharmacological stabilization of mast cells reduce diet-induced obesity and diabetes in mice. Nat Med 15: 940-945. doi: 10.1038/nm.1994
    [36] Sicherer SH, Burks AW, Sampson HA (1998) Clinical features of acute allergic reactions to peanut and tree nuts in children. Pediatrics 102: e6. doi: 10.1542/peds.102.1.e6
    [37] Du Toit G, Roberts G, Sayre PH, et al. (2015) Randomized trial of peanut consumption in infants at risk for peanut allergy. N Engl J Med 372: 803-813. doi: 10.1056/NEJMoa1414850
    [38] Trendelenburg V, Ahrens B, Wehrmann AK, et al. (2013) Peanut allergen in house dust of eating area and bed—a risk factor for peanut sensitization? Allergy 68: 1460-1462. doi: 10.1111/all.12226
    [39] Brough HA, Santos AF, Makinson K, et al. (2013) Peanut protein in household dust is related to household peanut consumption and is biologically active. J Allergy Clin Immunol 132: 630-638. doi: 10.1016/j.jaci.2013.02.034
    [40] Smeekens JM, Immormino RM, Balogh PA, et al. (2019) Indoor dust acts as an adjuvant to promote sensitization to peanut through the airway. Clin Exp Allergy 49: 1500-1511. doi: 10.1111/cea.13486
  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
