Research article

Soil organic carbon stock of different land uses of Mizoram, Northeast India

  • The study was conducted to assess soil organic carbon (SOC) concentration and stock under eight major land uses of Mizoram, Northeast India: shifting cultivation, wet rice cultivation, homegardens, natural forest, grassland, bamboo plantation, oil palm plantation and teak plantation. Soil samples at different depths (0–15, 15–30 and 30–45 cm) were collected from each land use to estimate SOC content in the laboratory. Forest recorded the highest mean SOC concentration (2.74%) at 0–45 cm depth and bamboo plantation the lowest (1.09%). Mean SOC stock for the 0–45 cm soil depth ranged from 27.68 Mg C ha−1 in grassland to 52.74 Mg C ha−1 in forest. Both SOC concentration and SOC stock decreased with increasing soil depth. Bulk density of fine soil (<2 mm) was significantly negatively correlated with SOC concentration and positively correlated with SOC stock. The SOC stock loss estimated following conversion from forest was greatest for shifting cultivation (−5.74 Mg C ha−1 yr−1), followed by oil palm plantation (−2.29 Mg C ha−1 yr−1) and bamboo plantation (−1.56 Mg C ha−1 yr−1), and least for homegardens (−0.14 Mg C ha−1 yr−1). The results indicate the importance of SOC stocks under different land uses and may help devise management practices that increase soil carbon sequestration potential and thereby mitigate climate change.

    Citation: Alice Kenye, Uttam Kumar Sahoo, Soibam Lanabir Singh, Anudip Gogoi. Soil organic carbon stock of different land uses of Mizoram, Northeast India[J]. AIMS Geosciences, 2019, 5(1): 25-40. doi: 10.3934/geosci.2019.1.25



    Recently, classification systems have been widely applied in many fields such as text classification, intrusion detection, bio-informatics, and image retrieval [1]. Unfortunately, the huge amount of data, which may include irrelevant or redundant features, is one of the main challenges of these systems. Such undesirable features reduce classification performance [2]. For this reason, reducing the number of features by finding an effective feature subset is an important task in classification systems [2]. Feature reduction comprises two techniques: feature selection (FS) and feature extraction [3]. Both reduce a high-dimensional dataset to a representative low-dimensional feature subset. Feature extraction is effective when the original features fail to discriminate the classes [4], but it requires extra computation and changes the true meaning of the original features. In contrast, feature selection preserves the true meaning of the selected features, which is important for some classification systems [5]. Furthermore, the result of FS is more understandable for domain experts [6].

    FS tries to find the best feature subset, one that represents the dataset well and improves the performance of classification systems [7]. FS methods can be classified into three approaches [6]: wrapper, embedded, and filter. According to the evaluation strategy, wrapper and embedded methods are called classifier-dependent approaches, while filter methods are classifier-independent [8]. In this paper, we use the filter approach because of its advantages over the wrapper and embedded approaches in terms of efficiency, simplicity, scalability, practicality, and classifier independence [6,9]. The filter approach is a pre-processing task which finds the highest-ranked features to be the input of classification systems [7,10]. There are two criteria to rank features: feature relevance and feature redundancy [11]. Feature relevance reflects how well features discriminate the different classes, while feature redundancy reflects how much information features share with each other [12]. To quantify these criteria, the filter approach uses weighting functions which rank features based on their significance [10], such as correlation [13] and mutual information (MI) [14]. MI overcomes the weakness of correlation: correlation is suitable only for linear relationships and numerical features [1], whereas MI captures any kind of relationship, linear or non-linear, and deals with both numerical and categorical features [1].
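    To make the filter idea concrete, the short sketch below ranks features by their estimated MI with the class and keeps the top-ranked subset. It uses scikit-learn's mutual_info_classif on a public dataset; the dataset, the subset size k, and the variable names are illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

mi_scores = mutual_info_classif(X, y, random_state=0)   # relevance of each feature to the class
ranking = np.argsort(mi_scores)[::-1]                    # highest MI first

k = 10                                                   # illustrative subset size
selected = ranking[:k]
print("Top-ranked feature indices:", selected)
```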

    MI has been widely used in many methods that search for the best feature subset, one that maximizes the relevance between each candidate feature and the class label while minimizing the redundancy between the candidate feature and the pre-selected features [15]. The main limitations of these methods are: (1) it is difficult to identify the best candidate feature when several candidates carry the same new classification information [16], (2) it is difficult to deal with continuous features without information loss [17], and (3) they consider inner-class information only [18]. In this paper, we integrate the fuzzy concept with mutual information to propose a new FS method called Fuzzy Joint Mutual Information Maximization (FJMIM). The fuzzy concept helps the proposed method exploit all possible information in the data: it can deal with any numerical data and extract both inner- and outer-class information. Moreover, the objective function of FJMIM can overcome the feature overestimation problem, which occurs when the candidate feature is completely correlated with some of the pre-selected features but does not depend on the majority of the subset [8].

    The rest of this paper is organized as follows: Section 2 presents the basic measures of fuzzy information theory. Section 3 presents the proposed method. Section 4 describes the experiment design, followed by the results and discussion in Section 5. Finally, Section 6 concludes the paper.

    To measure the significance of features, information theory introduces several information measures such as entropy and mutual information. To enhance these measures, the fuzzy concept is used to derive extensions based on fuzzy equivalence relations, such as fuzzy entropy and fuzzy mutual information [19,20]. Fuzzy entropy measures the average amount of uncertainty of a fuzzy relation in order to estimate its discriminative power, while fuzzy mutual information measures the amount of information shared between two fuzzy relations. In the following, we present the basic measures of fuzzy information theory:

    Given a dataset $D = F \cup C$, where $F$ is a set of $n$ features and $C$ is the class label. Let $\bar{F} = \{a_1, a_2, \ldots, a_m\}$ be a feature of $m$ samples, where $\bar{F} \in F$. Let $S$ be the feature subset with $d$ selected features, and let $\{F - S\}$ be the remaining set, where $\bar{F}_f \in F - S$ and $\bar{F}_s \in S$. Based on the fuzzy equivalence relation $R_{\bar{F}}$ on $\bar{F}$, the feature $\bar{F}$ can be represented by the relation matrix $M(R_{\bar{F}})$.

    $$M(R_{\bar{F}}) = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1m} \\ r_{21} & r_{22} & \cdots & r_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mm} \end{pmatrix} \qquad (2.1)$$

    where $r_{ij} = R_{\bar{F}}(a_i, a_j)$ is the fuzzy equivalence relation between the two samples $a_i$ and $a_j$.

    In this paper, the fuzzy equivalence relation used between two samples $a_i$ and $a_j$ is defined as [21]:

    $$R_{\bar{F}}(a_i, a_j) = \exp\left(-\left|a_i - a_j\right|\right) \qquad (2.2)$$
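    As a tiny illustration of Eqs (2.1) and (2.2), the sketch below builds the relation matrix $M(R_{\bar{F}})$ of a toy feature; the feature values and names are illustrative assumptions, not data from the paper.

```python
import numpy as np

def fuzzy_relation(feature):
    """Fuzzy equivalence relation matrix of one feature, Eq (2.2): r_ij = exp(-|a_i - a_j|)."""
    a = np.asarray(feature, dtype=float)
    return np.exp(-np.abs(a[:, None] - a[None, :]))

f = [0.10, 0.20, 0.90, 0.85, 0.50]     # toy feature with m = 5 samples
R = fuzzy_relation(f)                  # 5 x 5 relation matrix M(R_F) of Eq (2.1)
print(np.round(R, 3))
```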

    The fuzzy equivalence class of sample $a_i$ on $R_{\bar{F}}$ can be defined as:

    $$[a_i]_{R_{\bar{F}}} = \frac{r_{i1}}{a_1} + \frac{r_{i2}}{a_2} + \cdots + \frac{r_{im}}{a_m} \qquad (2.3)$$

    The fuzzy entropy of a feature $\bar{F}$ based on the fuzzy equivalence relation is defined as:

    $$H(\bar{F}) = \frac{1}{m}\sum_{i=1}^{m} \log \frac{m}{\left|[a_i]_{R_{\bar{F}}}\right|} \qquad (2.4)$$

    where $\left|[a_i]_{R_{\bar{F}}}\right| = \sum_{j=1}^{m} r_{ij}$.

    Let $\bar{F}_1$ and $\bar{F}_2$ be two features of $F$. The fuzzy joint entropy of $\bar{F}_1$ and $\bar{F}_2$ is defined as:

    $$H(\bar{F}_1, \bar{F}_2) = H(R_{\bar{F}_1}, R_{\bar{F}_2}) = \frac{1}{m}\sum_{i=1}^{m} \log \frac{m}{\left|[a_i]_{R_{\bar{F}_1}} \cap [a_i]_{R_{\bar{F}_2}}\right|} \qquad (2.5)$$

    The fuzzy conditional entropy of $\bar{F}_1$ given $\bar{F}_2$ is defined as:

    $$H(\bar{F}_1 \,|\, \bar{F}_2) = H(R_{\bar{F}_1} \,|\, R_{\bar{F}_2}) = \frac{1}{m}\sum_{i=1}^{m} \log \frac{\left|[a_i]_{R_{\bar{F}_2}}\right|}{\left|[a_i]_{R_{\bar{F}_1}} \cap [a_i]_{R_{\bar{F}_2}}\right|} \qquad (2.6)$$

    The fuzzy mutual information between two features $\bar{F}_1$ and $\bar{F}_2$ is defined as:

    $$I(\bar{F}_1; \bar{F}_2) = I(R_{\bar{F}_1}; R_{\bar{F}_2}) = \frac{1}{m}\sum_{i=1}^{m} \log \frac{m\left|[a_i]_{R_{\bar{F}_1}} \cap [a_i]_{R_{\bar{F}_2}}\right|}{\left|[a_i]_{R_{\bar{F}_1}}\right| \cdot \left|[a_i]_{R_{\bar{F}_2}}\right|} \qquad (2.7)$$

    The fuzzy conditional mutual information between features $\bar{F}_1$ and $\bar{F}_2$ given the class $C$ is defined as:

    $$I(\bar{F}_1; \bar{F}_2 \,|\, C) = H(\bar{F}_1 \,|\, C) + H(\bar{F}_2 \,|\, C) - H(\bar{F}_1, \bar{F}_2 \,|\, C) \qquad (2.8)$$

    The fuzzy joint mutual information between two features $\bar{F}_1$, $\bar{F}_2$ and the class $C$ is defined as:

    $$I(\bar{F}_1, \bar{F}_2; C) = I(\bar{F}_1; C) + I(\bar{F}_2; C \,|\, \bar{F}_1) \qquad (2.9)$$

    The fuzzy interaction information among $\bar{F}_1$, $\bar{F}_2$ and $C$ is defined as:

    $$I(\bar{F}_1; \bar{F}_2; C) = I(\bar{F}_1; C) + I(\bar{F}_2; C) - I(\bar{F}_1, \bar{F}_2; C) \qquad (2.10)$$
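    Putting Eqs (2.4)–(2.10) together, the sketch below computes the fuzzy entropy, joint, conditional and interaction quantities directly from relation matrices, using the element-wise minimum as the intersection of fuzzy equivalence classes. The helper names are illustrative and this is a sketch of the definitions above, not the authors' implementation.

```python
import numpy as np

def card(R):
    """Cardinalities |[a_i]_R| for every sample i (row sums of the relation matrix)."""
    return R.sum(axis=1)

def fuzzy_entropy(R):
    m = R.shape[0]
    return np.mean(np.log(m / card(R)))                         # Eq (2.4)

def fuzzy_joint_entropy(R1, R2):
    m = R1.shape[0]
    inter = card(np.minimum(R1, R2))                            # |[a_i]_R1 ∩ [a_i]_R2|
    return np.mean(np.log(m / inter))                           # Eq (2.5)

def fuzzy_cond_entropy(R1, R2):
    inter = card(np.minimum(R1, R2))
    return np.mean(np.log(card(R2) / inter))                    # Eq (2.6): H(F1 | F2)

def fuzzy_mi(R1, R2):
    m = R1.shape[0]
    inter = card(np.minimum(R1, R2))
    return np.mean(np.log(m * inter / (card(R1) * card(R2))))   # Eq (2.7)

def fuzzy_cond_mi(R1, R2, Rc):
    # Eq (2.8): I(F1; F2 | C) = H(F1|C) + H(F2|C) - H(F1, F2 | C),
    # with H(F1, F2 | C) = H(F1, F2, C) - H(C).
    h_12_given_c = fuzzy_joint_entropy(np.minimum(R1, R2), Rc) - fuzzy_entropy(Rc)
    return fuzzy_cond_entropy(R1, Rc) + fuzzy_cond_entropy(R2, Rc) - h_12_given_c

def fuzzy_joint_mi(R1, R2, Rc):
    # Eq (2.9): I(F1, F2; C) = I(F1; C) + I(F2; C | F1)
    return fuzzy_mi(R1, Rc) + fuzzy_cond_mi(R2, Rc, R1)

def fuzzy_interaction(R1, R2, Rc):
    # Eq (2.10): I(F1; F2; C) = I(F1; C) + I(F2; C) - I(F1, F2; C)
    return fuzzy_mi(R1, Rc) + fuzzy_mi(R2, Rc) - fuzzy_joint_mi(R1, R2, Rc)
```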

    In this section, we first present the general theoretical framework of feature selection methods based on mutual information. We then discuss the limitations of previous work and, finally, introduce the proposed method.

    Brown et al. [22] studied the existing feature selection methods based on MI and analyzed their different criteria to propose the following theoretical framework for these methods.

    $$J(\bar{F}_f) = I(\bar{F}_f; C) - \beta \sum_{\bar{F}_s \in S} I(\bar{F}_f; \bar{F}_s) + \gamma \sum_{\bar{F}_s \in S} I(\bar{F}_f; \bar{F}_s \,|\, C) \qquad (3.1)$$

    This framework is a linear combination of three terms: relevance, redundancy, and conditional redundancy, which measure the individual predictive power of the feature, the unconditional feature-to-feature relation, and the class-conditional relation, respectively. The criteria of the different MI-based feature selection methods depend on the values of $\beta$ and $\gamma$. MIM ($\beta = \gamma = 0$) [23] is the simplest FS method based on MI: it considers only the relevance term, and may therefore suffer from redundant features. MIFS ($\gamma = 0$) [24] introduced two criteria to estimate feature relevance and redundancy. An extension of MIFS, called MIFS-U [24], improves the redundancy term of MIFS by assuming a uniform distribution of the information. However, both MIFS and MIFS-U still require the input parameter $\beta$. To avoid this limitation, mRMR ($\beta = 1/|S|$, $\gamma = 0$) [25] uses the mean of the redundancy term, which sets the parameter $\beta$ automatically. JMI ($\beta = \gamma = 1/|S|$) [26] extends mRMR to exploit the benefit of the conditional term. In addition, Brown et al. [22] introduced a similar non-linear framework to represent methods such as CMIM [27]. According to [22], CMIM can be written as:

    $$J_{\mathrm{CMIM}} = \min_{\bar{F}_s \in S}\left[ I(\bar{F}_f; C \,|\, \bar{F}_s) \right] = I(\bar{F}_f; C) - \max_{\bar{F}_s \in S}\left[ I(\bar{F}_f; \bar{F}_s) - I(\bar{F}_f; \bar{F}_s \,|\, C) \right] \qquad (3.2)$$

    The non-linearity of CMIM comes from the use of the max operation. Similar to CMIM, JMIM [8] introduces a non-linear criterion as follows:

    $$J_{\mathrm{JMIM}} = \min_{\bar{F}_s \in S}\left[ I(\bar{F}_s; C) + I(\bar{F}_f; C \,|\, \bar{F}_s) \right] = I(\bar{F}_f; C) - \max_{\bar{F}_s \in S}\left[ I(\bar{F}_f; \bar{F}_s) - I(\bar{F}_f; \bar{F}_s \,|\, C) - I(\bar{F}_s; C) \right] \qquad (3.3)$$
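    To make the framework concrete, the sketch below scores a candidate feature under the linear criterion of Eq (3.1) and under the min-based criteria of Eqs (3.2) and (3.3). It assumes the pairwise quantities $I(F_i; C)$, $I(F_i; F_j)$ and $I(F_i; F_j \,|\, C)$ have already been estimated and stored in lookup tables; the function and variable names are illustrative, not from the paper.

```python
def linear_score(f, S, mi_fc, mi_ff, cmi_ff_c, beta, gamma):
    """Eq (3.1). Special cases: MIM (beta=gamma=0), MIFS (beta user-set, gamma=0),
    mRMR (beta=1/|S|, gamma=0), JMI (beta=gamma=1/|S|).

    mi_fc[i]       : I(F_i; C)
    mi_ff[i, j]    : I(F_i; F_j)
    cmi_ff_c[i, j] : I(F_i; F_j | C)
    """
    redundancy = sum(mi_ff[f, s] for s in S)
    conditional = sum(cmi_ff_c[f, s] for s in S)
    return mi_fc[f] - beta * redundancy + gamma * conditional

def cmim_score(f, S, mi_fc, mi_ff, cmi_ff_c):
    """Eq (3.2): J_CMIM = min_{s in S} I(F_f; C | F_s)."""
    if not S:
        return mi_fc[f]
    # I(F_f; C | F_s) = I(F_f; C) - I(F_f; F_s) + I(F_f; F_s | C)
    return min(mi_fc[f] - mi_ff[f, s] + cmi_ff_c[f, s] for s in S)

def jmim_score(f, S, mi_fc, mi_ff, cmi_ff_c):
    """Eq (3.3): J_JMIM = min_{s in S} [ I(F_s; C) + I(F_f; C | F_s) ]."""
    if not S:
        return mi_fc[f]
    return min(mi_fc[s] + mi_fc[f] - mi_ff[f, s] + cmi_ff_c[f, s] for s in S)
```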

    MI has been widely used in many feature selection methods such as MIFS [24], JMI [26], mRMR [25], DISR [28], IGFS [29], NMIFS [30] and MIFS-ND [31]. However, these methods suffer from overestimation of the feature significance [8]. For this reason, Bennasar et al. [8] proposed the JMIM method to address the feature significance overestimation problem. JMIM, however, may fail to select the best candidate feature when several candidates carry the same new classification information. To illustrate this problem, Figure 1 shows an FS scenario where $\bar{F}_1$ and $\bar{F}_2$ are two candidate features, $\bar{F}_s$ is the pre-selected feature subset, and $C$ is the class label. $\bar{F}_1$ is partially redundant with $\bar{F}_s$, while $\bar{F}_2$ is independent of $\bar{F}_s$. Suppose that $\bar{F}_1$ and $\bar{F}_2$ carry the same new classification information, $I(\bar{F}_1; C \,|\, \bar{F}_s)$ (area 3) and $I(\bar{F}_2; C \,|\, \bar{F}_s)$ (area 5), respectively. In this case, JMIM may fail to indicate the better feature because $I(\bar{F}_1, \bar{F}_s; C)$ and $I(\bar{F}_2, \bar{F}_s; C)$ are equal.

    Figure 1.  Venn diagram of the feature selection scenario in which the two candidate features $\bar{F}_1$ and $\bar{F}_2$ carry the same new classification information.

    Unfortunately, JMIM also shares some limitations with the previous methods. Firstly, it cannot directly estimate MI between continuous features [17]. Two methods have been introduced to address this limitation. One is to estimate MI using a Parzen window [18], but this is inefficient in high-dimensional feature spaces with sparse samples [17]; moreover, its performance depends on the chosen window function, which requires a window width parameter [32]. The other is to discretize continuous features before estimating MI [33], but this may cause information loss [34]. Secondly, JMIM depends only on inner-class information, without considering outer-class information [18].

    Motivated by these limitations of JMIM, we propose a new FS method called Fuzzy Joint Mutual Information Maximization (FJMIM). Both FJMIM and JMIM rely on the "maximum of the minimum" approach. The main difference is that JMIM maximizes the joint mutual information of the candidate feature and the pre-selected feature subset with the class, whereas FJMIM maximizes the same joint mutual information but without the class-relevant redundancy. To illustrate the difference in Figure 1, JMIM depends on the union of areas 2, 4 and 3, while FJMIM depends on the union of areas 2 and 3. The proposed method discards the class-relevant redundancy (area 4) because it can reduce the predictive ability of the feature subset when a candidate feature is selected [15]. On the other hand, integrating the fuzzy concept with MI has several benefits. Firstly, the fuzzy concept makes it possible to deal directly with continuous features. Furthermore, it enables MI to take advantage of both inner- and outer-class information [35]. Moreover, FS methods based on the fuzzy concept are more robust to changes in the data than methods based on the probability concept [36].

    According to FJMIM, the candidate feature $\bar{F}_f$ must satisfy the following condition:

    $$\bar{F}_f = \arg\max_{\bar{F}_f \in F - S}\left( \min_{\bar{F}_s \in S}\left( I(\bar{F}_f, \bar{F}_s; C) - I(\bar{F}_f; \bar{F}_s; C) \right) \right) \qquad (3.4)$$

    FJMIM can also be written according to the non-linear framework as follows:

    $$J_{\mathrm{FJMIM}} = 2\,I(\bar{F}_f; C) - \max_{\bar{F}_s \in S}\left[ I(\bar{F}_f; \bar{F}_s) - I(C; \bar{F}_s) - I(\bar{F}_f; \bar{F}_s \,|\, C) - I(\bar{F}_f; C \,|\, \bar{F}_s) \right] \qquad (3.5)$$
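    For comparison with the CMIM and JMIM sketches above, the FJMIM criterion of Eq (3.4) can be expressed in the same table-based style; the derivation in the comments only uses the standard identities for joint and interaction information. This is an illustrative sketch, not the authors' code.

```python
def fjmim_score(f, S, mi_fc, mi_ff, cmi_ff_c):
    """Eq (3.4): J_FJMIM = min_{s in S} [ I(F_f, F_s; C) - I(F_f; F_s; C) ].

    Using I(F_f, F_s; C) = I(F_s; C) + I(F_f; C | F_s)
                         = I(F_s; C) + I(F_f; C) - I(F_f; F_s) + I(F_f; F_s | C)
    and   I(F_f; F_s; C) = I(F_f; F_s) - I(F_f; F_s | C).
    """
    if not S:
        return mi_fc[f]
    return min(mi_fc[s] + mi_fc[f] - 2 * mi_ff[f, s] + 2 * cmi_ff_c[f, s]
               for s in S)
```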

    The proposed method for finding the best feature subset of size $d$ can be summarized in the following steps (a code sketch follows the step list):

    Input: $F$ is a set of $n$ features, $C$ is the class label, and $d$ is the number of selected features.

    Step 1: Initialize the empty selected feature subset $S$.

    Step 2: Update the selected feature set $S$ and the feature set $F$.

    Step 2.1: Compute $I(\bar{F}; C)$ for every $\bar{F}$ in the feature set $F$.

    Step 2.2: Add the feature $\bar{F}$ that maximizes $I(\bar{F}; C)$ to the selected feature set $S$.

    Step 2.3: Remove the feature $\bar{F}$ from the feature set $F$.

    Step 3: Repeat until $|S| = d$:

    Step 3.1: Add the feature $\bar{F}$ that satisfies

    $$\arg\max_{\bar{F}_f \in F - S}\left( \min_{\bar{F}_s \in S}\left( I(\bar{F}_f, \bar{F}_s; C) - I(\bar{F}_f; \bar{F}_s; C) \right) \right)$$

    to the selected feature set $S$.

    Step 3.2: Remove the feature $\bar{F}$ from the feature set $F$.

    Output: Return the selected feature set S.
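    The steps above can be sketched as a greedy loop. The code below reuses the helpers from the earlier sketches (fuzzy_relation, fuzzy_mi, fuzzy_joint_mi, fuzzy_interaction) and treats the class label as a numeric vector when building its relation matrix; both are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np

def fjmim_select(X, y, d):
    """Greedy FJMIM selection (Steps 1-3). X: (m, n) feature matrix, y: labels, d: subset size."""
    n = X.shape[1]
    relations = [fuzzy_relation(X[:, j]) for j in range(n)]      # R_{F_j} for every feature
    Rc = fuzzy_relation(np.asarray(y, dtype=float))              # relation of the class label
    remaining = set(range(n))

    # Steps 1-2: start with the single most relevant feature, argmax_j I(F_j; C).
    first = max(remaining, key=lambda j: fuzzy_mi(relations[j], Rc))
    S = [first]
    remaining.remove(first)

    # Step 3: repeatedly add the feature satisfying Eq (3.4).
    while len(S) < d and remaining:
        def score(f):
            return min(fuzzy_joint_mi(relations[f], relations[s], Rc)
                       - fuzzy_interaction(relations[f], relations[s], Rc)
                       for s in S)
        best = max(remaining, key=score)
        S.append(best)
        remaining.remove(best)
    return S
```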

    The success of FS methods depends on different criteria such as classification performance and stability [11]. Consequently, we designed the experiment around these criteria (Figure 2). To clarify our improvement, we compared the proposed method FJMIM with four conventional methods (CMIM [27], JMI [26], QPFS [37], Relief [38]) and five state-of-the-art methods (CMIM3 [39], JMI3 [39], JMIM [8], MIGM [40], and WRFS [41]). The compared methods can be divided into two groups: FS based on the fuzzy concept and FS based on the probability concept. For the methods that depend on the probability concept, data discretization is required as a pre-processing step prior to FS, so the continuous features were transformed into ten bins using EqualWidth discretization [42]. Then, for every method, we selected the feature subset using a threshold defined as the median position of the ranked features (or the nearest integer position when the number of ranked features is even).
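    A minimal sketch of this pre-processing and the subset-size threshold, using scikit-learn's KBinsDiscretizer with ten equal-width bins; the toy data and array names are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.RandomState(0)
X = rng.rand(100, 20)                          # toy continuous data: 100 samples, 20 features

# Equal-width discretization into ten bins, needed only for the probability-based methods.
disc = KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="uniform")
X_disc = disc.fit_transform(X)

# Subset-size threshold: the median position of the ranked features.
n_features = X.shape[1]
k = int(round(n_features / 2))                 # here 20 ranked features -> keep the top 10
```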

    Figure 2.  Experiment design for the proposed method Fuzzy Joint Mutual Information Maximization (FJMIM): first, discretization pre-processing is applied before the FS methods based on the probability concept; then, the FS methods are evaluated according to classification performance and feature stability.

    FS plays an important role in improving classification performance. There are many measures to evaluate classification performance, such as classification accuracy, precision, F-measure, area under the ROC curve (AUC), and area under the precision-recall curve (AUCPR) [43]. To clarify our improvement, popular classifiers were used in this study: Naive Bayes (NB), Support Vector Machine (SVM), and 3-Nearest Neighbors (KNN) [44]. The average classification performance measures were computed with the 10-fold cross-validation approach [45].
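    The evaluation step can be sketched with scikit-learn: 10-fold cross-validation of NB, SVM and 3-NN on a selected feature subset. The dataset and the stand-in subset below are illustrative assumptions, not those used in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_sel = X[:, :10]                                      # stand-in for a selected feature subset

classifiers = {
    "NB": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=3),        # 3-nearest neighbours
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_sel, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```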

    Another important evaluation criterion for FS is stability. FS stability measures the impact of any change in the input data on the FS result [46]. In this study, we measured the impact of noise on the selected feature subset. Firstly, we generated noise using the standard deviation and the normal distribution of each feature [47]. Then, we injected this noise into 10% of the data. This step was repeated ten times, each time producing a different sequence of selected features. Finally, we computed the stability of each method using the Kuncheva stability index [48].
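    A sketch of this protocol under the stated assumptions: Gaussian noise scaled by each feature's standard deviation is injected into a random 10% of the rows, feature selection is re-run on each noisy copy, and the agreement of the resulting subsets is summarized with the Kuncheva index. Function names are illustrative.

```python
import numpy as np
from itertools import combinations

def inject_noise(X, fraction=0.1, rng=None):
    """Add normally distributed noise (scaled by each feature's std) to a random fraction of rows."""
    rng = np.random.RandomState(0) if rng is None else rng
    X_noisy = X.copy()
    rows = rng.choice(len(X), size=int(fraction * len(X)), replace=False)
    X_noisy[rows] += rng.normal(scale=X.std(axis=0), size=(len(rows), X.shape[1]))
    return X_noisy

def kuncheva_index(subsets, n_features):
    """Average pairwise Kuncheva consistency of equally sized feature subsets."""
    k = len(subsets[0])
    expected = k * k / n_features                      # expected overlap of two random subsets
    vals = [(len(set(a) & set(b)) - expected) / (k - expected)
            for a, b in combinations(subsets, 2)]
    return float(np.mean(vals))

# Usage idea: run the FS method on ten noisy copies of X, collect the ten subsets,
# then call kuncheva_index(subsets, X.shape[1]) to obtain the stability score.
```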

    Our experiment was conducted using 13 datasets from the UCI machine learning repository [49]. Table 1 presents a brief description of these datasets.

    Table 1.  Datasets Description.
    No. Datasets Instances Features Classes
    1 Acute Inflammations 120 6 2
    2 Arrhythmia 452 279 13
    3 Blogger 100 6 2
    4 Diabetic Retinopathy Debrecen (DRD) 1151 20 2
    5 Hayes-Roth 160 5 3
    6 Indian Liver Patient Dataset (ILPD) 583 10 2
    7 Lenses 24 4 3
    8 Lymphography 148 18 4
    9 Congressional Voting Records (CVR) 435 16 2
    10 Sonar 208 60 2
    11 Thoracic Surgery 470 17 2
    12 Wilt 4889 6 2
    13 Zoo 101 17 7


    1) Accuracy: A paired two-tailed t-test was employed between FJMIM and each compared method. The notations [=], [+], and [-] indicate, at the 5% significance level, that the proposed method respectively ties with, wins against, or loses to the compared method. With the NB classifier, FJMIM achieved the maximum average accuracy of 78.02%, while Relief achieved the minimum average accuracy of 75.34% (Table 2); the proposed method outperformed the compared methods by 0.06 to 2.68%. With the SVM classifier, FJMIM outperformed the other methods with a score of 80.03%, while Relief achieved the minimum average accuracy of 77.33% (Table 3); the proposed method outperformed the compared methods by 0.69 to 2.7%. Similarly, with the KNN classifier, FJMIM kept the maximum average accuracy at 81.45%, while Relief achieved the minimum average accuracy of 77.98% (Table 4); the proposed method outperformed the compared methods by 0.57 to 3.47%. Across all datasets, Figure 3(a) shows the distribution of the average accuracy values for all used classifiers. In the box-plots, the box represents the upper and lower quartiles, while the black circle represents the median. The box-plot confirms that FJMIM is more consistent than, and outperforms, the other compared methods. Figure 3(b) shows the average accuracy over the three classifiers: FJMIM achieved the best accuracy, followed by QPFS, JMI3, both JMI and CMIM, both CMIM3 and JMIM, MIGM, WRFS, and Relief, respectively. The proposed method outperformed the compared methods by 0.6 to 2.9%.
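    The significance test behind the [=]/[+]/[-] notation can be reproduced with SciPy's paired two-tailed t-test on the fold-wise accuracies of two methods; the accuracy values below are toy numbers, not results from the paper.

```python
from scipy.stats import ttest_rel

# Fold-wise accuracies of FJMIM and one compared method on the same 10 CV folds (toy values).
acc_fjmim = [0.78, 0.80, 0.79, 0.81, 0.77, 0.80, 0.79, 0.78, 0.82, 0.80]
acc_other = [0.76, 0.78, 0.77, 0.80, 0.75, 0.79, 0.78, 0.76, 0.80, 0.79]

t_stat, p_value = ttest_rel(acc_fjmim, acc_other)      # paired, two-tailed by default
if p_value >= 0.05:
    verdict = "="                                      # no statistically significant difference
else:
    verdict = "+" if sum(acc_fjmim) > sum(acc_other) else "-"
print(t_stat, p_value, verdict)
```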

    Table 2.  Classification accuracy using NB Classifier, FJMIM achieved the highest average accuracy.
    Dataset CMIM CMIM3 JMI JMI3 JMIM MIGM QPFS Relief WRFS FJMIM
    Acute Inflammations 100±0[=] 100±0[=] 100±0[=] 100±0[=] 100±0[=] 99.75±2.5[=] 99.75±2.5[=] 100±0[=] 90.83±8.5[+] 100±0
    Arrhythmia 60.64±7.01[=] 59.22±6.17[=] 59.42±6.49[=] 59.58±6.58[=] 60.62±6.93[=] 58.98±6.39[=] 63.7±6.57[-] 64.94±6.33[-] 59.31±6.3[=] 59.74±6.43
    Blogger 61.9±10.7[=] 61.9±10.7[=] 61.9±10.7[=] 64.9±10.2[=] 61.9±10.7[=] 61.9±10.7[=] 61.9±10.7[=] 65.4±7.84[=] 66.1±9.09[=] 65.9±10.26
    DRD 57.59±4.95[=] 60.66±4.69[-] 60.3±5.04[-] 60.6±4.58[-] 57.59±4.95[=] 60.3±5.04[-] 61.19±4.49[-] 60.61±5.06[-] 57.47±4.98[=] 57.56±4.83
    Hayes-Roth 45.81±10.77[=] 45.81±10.77[=] 45.81±10.77[=] 45.81±10.77[=] 45.81±10.77[=] 45.81±10.77[=] 49.37±10.53[=] 46.31±11.48[=] 45.81±10.77[=] 49.37±10.53
    ILPD 68.99±5.68[=] 67.33±5.87[=] 68.99±5.68[=] 67.33±5.87[=] 68.99±5.68[=] 68.99±5.68[=] 68.73±5.64[=] 64.96±6.45[+] 70.11±6.74[=] 69.54±5.75
    Lenses 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 54.83±30.73[+] 86.83±21.88[=] 86.83±21.88
    Lymphography 79.02±10.57[=] 82.05±9.59[=] 79.02±10.57[=] 77.93±10.87[=] 78.27±10.98[=] 77.25±11.38[=] 79.02±10.57[=] 82.61±9.77[=] 76.91±11.64[+] 81.5±10.29
    CVR 94.3±3.15[=] 94.32±3.12[=] 93.75±3.57[=] 93.75±3.57[=] 93.75±3.57[=] 93.75±3.57[=] 94.32±3.26[=] 93.82±3.28[=] 93.52±3.18[=] 93.75±3.57
    Sonar 75.05±8.33[=] 75.42±9.08[=] 75.75±8.85[=] 76.23±9.99[=] 72.2±9.93[=] 72.14±9.54[+] 76.42±8.69[=] 74.25±8.33[=] 77.91±8.74[=] 77±8.98
    Thoracic Surgery 83.83±3.1[=] 83.38±3.09[=] 83.38±3.09[=] 83.85±2.24[=] 83.38±3.09[=] 84.64±1.83[=] 83.55±3.08[=] 83.06±2.77[=] 83.85±2.74[=] 83.38±3.09
    Wilt 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06[=] 94.61±0.06
    Zoo 95.15±5.74[=] 96.05±5.6[=] 96.03±5.66[=] 96.03±5.66[=] 96.03±5.66[=] 96.93±5.42[=] 94.08±6.6[=] 94.08±6.6[=] 92.96±7.05[=] 95.05±5.87
    Average 77.21 77.51 77.37 77.50 76.92 77.07 77.96 75.34 76.63 78.02

    Table 3.  Classification accuracy using SVM Classifier, FJMIM achieved the highest average accuracy.
    Dataset CMIM CMIM3 JMI JMI3 JMIM MIGM QPFS Relief WRFS FJMIM
    Acute Inflammations 98.33±3.35[=] 99.67±1.64[=] 98.33±3.35[=] 98.33±3.35[=] 98.33±3.35[=] 99.42±2.44[=] 99.42±2.44[=] 100±0[+] 89.92±9.5[+] 99.42±2.44
    Arrhythmia 66.09±6.81[=] 64.08±6.11[=] 65.01±5.96[=] 65.7±6.23[=] 66.14±6.61[=] 65.23±6.39[=] 66.82±6.2[=] 67.22±6.05[=] 65.23±6.15[=] 67.11±6.7
    Blogger 68±4.02[=] 67.7±4.89[=] 68±4.02[=] 67.4±5.62[=] 68±4.02[=] 68±4.02[=] 67.7±4.89[=] 67.7±4.89[=] 68±4.02[=] 67.7±4.89
    DRD 68.03±3.92[=] 67.55±4.07[=] 67.15±3.85[=] 67.03±3.95[=] 68.03±3.92[=] 67.19±3.8[=] 65.1±4.33[+] 61.91±4.51[+] 67.7±4[=] 67.96±4.23
    Hayes-Roth 49.94±12.83[=] 49.94±12.83[=] 49.94±12.83[=] 49.94±12.83[=] 49.94±12.83[=] 49.94±12.83[=] 52±11.61[=] 48.75±13.21[=] 49.94±12.83[=] 52±11.61
    ILPD 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73[=] 71.36±0.73
    Lenses 76.17±22.13[=] 76.17±22.13[=] 76.17±22.13[=] 76.17±22.13[=] 76.17±22.13[=] 76.17±22.13[=] 76.17±22.13[=] 54.5±30.42[+] 76.17±22.13[=] 76.17±22.13
    Lymphography 81.57±10.06[=] 80.56±9.09[=] 81.42±10.15[=] 84.55±8.84[=] 82.02±10.48[=] 79.06±10.44[+] 81.57±10.11[=] 80.3±9[=] 74.25±11.45[+] 85.5±8.3
    CVR 94.09±3.33[=] 94.46±3.37[=] 94.3±3.34[=] 94.32±3.32[=] 94.3±3.34[=] 94.3±3.34[=] 94±3.44[=] 94.43±3.41[=] 94.37±3.41[=] 94.3±3.34
    Sonar 79.34±8.37[=] 80.48±8.05[=] 80.18±8.33[=] 78.46±8.44[=] 80.54±8.43[=] 76.93±8.71[=] 79.91±7.73[=] 82.1±8.07[=] 77.41±8.41[=] 80.67±7.88
    Thoracic Surgery 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0[=] 85.11±0
    Wilt 97.62±0.57[+] 97.97±0.49[=] 97.97±0.49[=] 97.97±0.49[=] 97.62±0.57[+] 97.97±0.49[=] 94.61±0.06[+] 97.97±0.49[=] 97.62±0.57[+] 97.98±0.49
    Zoo 95.75±5.12[=] 93.88±6.72[=] 89.39±5.72[+] 89.49±5.64[+] 89.49±5.64[+] 94.06±6.9[=] 95.27±5.85[=] 93.89±6.96[=] 90.49±7.96[+] 95.06±6.2
    Average 79.34 79.15 78.79 78.91 79.00 78.83 79.16 77.33 77.51 80.03

    Table 4.  Classification accuracy using KNN Classifier, FJMIM achieved the highest average accuracy.
    Dataset CMIM CMIM3 JMI JMI3 JMIM MIGM QPFS Relief WRFS FJMIM
    Acute Inflammations 99.5±1.99[=] 100±0[=] 99.5±1.99[=] 99.5±1.99[=] 99.5±1.99[=] 100±0[=] 100±0[=] 100±0[=] 99.67±1.64[=] 100±0
    Arrhythmia 58.27±5.17[+] 57.63±5.17[+] 57.79±5.34[+] 58.32±5.08[+] 58.94±5.13[=] 57.67±5.21[+] 60.73±4.64[=] 65.13±4.43[-] 58.39±5.22[+] 61.11±5.23
    Blogger 72.1±11.31[=] 72.1±11.31[=] 72.1±11.31[=] 78.8±10.18[=] 72.1±11.31[=] 72.1±11.31[=] 72.1±11.31[=] 70.3±11.41[=] 76.1±12.05[=] 78.6±11.37
    DRD 64.3±4.19[=] 62.62±5.29[=] 63.69±4.58[=] 61.96±4.68[=] 64.3±4.19[=] 63.58±4.56[=] 60.47±4.64[+] 63.51±4.63[=] 64.06±4.4[=] 64.7±4.23
    Hayes-Roth 59.19±10.41[=] 59.19±10.41[=] 59.19±10.41[=] 59.19±10.41[=] 59.19±10.41[=] 59.19±10.41[=] 63.12±11.32[=] 57.56±10.82[=] 59.19±10.41[=] 63.12±11.32
    ILPD 67.16±5.41[=] 68.22±5.06[=] 67.54±5.44[=] 68.22±5.06[=] 67.16±5.41[=] 67.3±5.42[=] 68.74±5.01[=] 67.84±5.56[=] 66.66±5.25[=] 69.15±5.62
    Lenses 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 86.83±21.88[=] 54.17±30.46[+] 86.83±21.88[=] 86.83±21.88
    Lymphography 84.61±7.58[=] 72.61±9.14[+] 84.61±7.58[=] 84.16±9.46[=] 80.99±9.49[=] 76.62±8.87[=] 84.61±7.58[=] 81.44±9.42[=] 79.74±9.36[=] 78.87±8.35
    CVR 93.63±3.43[=] 94.09±3.33[=] 95.26±3.01[=] 95.26±3.01[=] 95.26±3.01[=] 95.26±3.01[=] 94.14±3.37[=] 93.67±3.5[=] 94.09±2.95[=] 95.26±3.01
    Sonar 85.09±7.43[=] 83.51±7.68[=] 87.74±7.51[=] 85.53±8.83[=] 82.7±9.06[=] 82.4±7.72[+] 86.15±7.24[=] 87.95±6.78[=] 84.17±7.73[=] 87.65±7.27
    Thoracic Surgery 80.96±3.45[=] 81.38±3.53[=] 81.38±3.53[=] 83.45±3.05[=] 81.38±3.53[=] 82.64±3.28[=] 83.89±2.95[=] 82.68±3.08[=] 84.85±0.97[-] 81.38±3.53
    Wilt 97.69±0.57[+] 98.12±0.55[=] 98.12±0.55[=] 98.12±0.55[=] 97.69±0.57[+] 98.12±0.55[=] 94.44±0.52[+] 98.12±0.55[=] 97.69±0.57[+] 98.12±0.55
    Zoo 91.9±7.21[=] 92.06±7.35[=] 92.05±7.24[=] 92.05±7.24[=] 92.05±7.24[=] 91.16±7.48[=] 91.42±6.94[=] 91.42±6.94[=] 90.29±7.45[+] 94.08±6.6
    Average 80.09 79.10 80.45 80.88 79.85 79.45 80.51 77.98 80.13 81.45

    Figure 3.  Accuracy result of all compared methods. FJMIM outperformed in all cases.

    2) Precision: Figure 4 shows the precision results of NB, SVM, KNN, and their average. FJMIM achieved the highest precision, while Relief achieved the lowest. The proposed method outperformed the other methods by 0.6 to 12.7% with NB, 0.3 to 3.3% with SVM, and 2 to 8.2% with KNN. According to the average over all classifiers, FJMIM achieved the highest precision at 71.2%, while Relief had the lowest at 63.1%. The second-best result was achieved by CMIM3, followed by JMI3, QPFS, JMI, JMIM, MIGM, CMIM and WRFS. The precision distributions are less consistent, as shown in Figure 5. Both FJMIM and QPFS achieved the best distributions: FJMIM achieved the highest upper quartile and median, while QPFS achieved the highest lower quartile and median. In addition, FJMIM shares the highest upper quartile with JMI, JMIM and MIGM.

    Figure 4.  Precision result of NB, SVM, KNN and their average. FJMIM achieved the best result in all cases.
    Figure 5.  Precision distribution across all datasets.

    3) F-measure: Figure 6 shows the F-measure results for the three classifiers and their average. FJMIM achieved the highest F-measure, with 79.8, 71.5 and 84% on NB, SVM and KNN, respectively. Relief achieved the lowest F-measure on NB and KNN, while WRFS achieved the lowest score on SVM. The proposed method outperformed the other methods by 0.3 to 16.6% on NB, 0.1 to 1.4% on SVM, and 0.6 to 15.2% on KNN. According to the average over all classifiers, FJMIM achieved the highest F-measure at 78.4% and outperformed the other methods by 2.5 to 10.8%. The second-best result was achieved by JMI3, followed by QPFS, CMIM3, JMI, JMIM, CMIM, MIGM, WRFS and Relief. Figure 7 shows the distribution of the F-measure across all datasets; the box-plot confirms the outperformance of FJMIM compared to the other methods.

    Figure 6.  F-measure result of NB, SVM, KNN and their average. FJMIM achieved the best result in all cases.
    Figure 7.  F-measure distribution across all datasets.

    4) AUC: Figure 8 shows the AUC results of the used classifiers and their average. FJMIM achieved the highest AUC on NB and KNN, with 83.9 and 85.2%, and the second-best score on SVM, with 74.8%. Relief achieved the lowest AUC on all classifiers. Although MIGM achieved the best AUC on SVM, FJMIM outperformed all methods on the average over all classifiers, by 0.4 to 4.2%. As shown in the box-plot (Figure 9), the proposed method achieved the highest lower quartile and median values; JMI also achieved the highest lower quartile, while CMIM achieved the highest upper quartile.

    Figure 8.  AUC result of NB, SVM, KNN and their average. FJMIM achieved the best result in all cases except SVM.
    Figure 9.  AUC distribution across all datasets.

    5) AUCPR: The highest AUCPR was achieved by both FJMIM and CMIM3 using NB, by MIGM using SVM, and by FJMIM using KNN (Figure 10). On the other hand, Relief achieved the lowest AUCPR with all classifiers. According to the average over all classifiers, FJMIM achieved the best AUCPR at 81.6%, while Relief had the lowest at 77.2%. The proposed method outperformed the other methods by 0.3 to 4.4%. The second-best AUCPR was achieved by CMIM3, followed by MIGM, both QPFS and JMIM, both CMIM and JMI, and both JMI3 and WRFS. Figure 11 shows the distribution of AUCPR across all datasets: CMIM achieved the highest lower quartile, upper quartile and median, while FJMIM also achieved the highest median and upper quartile.

    Figure 10.  AUCPR result of NB, SVM, KNN and their average. FJMIM achieved the best result in all cases except SVM.
    Figure 11.  AUCPR distribution across all datasets.

    Figure 12(a) shows the stability of the used FS methods on all datasets. It is obvious that FJMIM is more consistent and stable than all other methods. Figure 12(b) confirms the stability of the compared methods: FJMIM achieved the highest average stability at 87.8%, outperforming the other methods by 6.6 to 43%. JMI took the second-best position with 81.2%, while Relief had the lowest stability at 44.3%.

    Figure 12.  Stability result of all compared methods. FJMIM achieved the best result in all cases.

    More detailed results are presented in the appendix. According to the previous results, it is obvious that FJMIM achieves the best results on most measures. This is expected, because the proposed method addresses the feature overestimation problem and handles the candidate feature problem well. Moreover, it avoids the discretization step. Another reason is that FJMIM exploits both inner- and outer-class information, which helps the proposed method to be more robust toward noise. On the other hand, the compared methods are closer to FJMIM than Relief, which achieved the lowest results; this is because all compared methods except Relief, like the proposed method, depend on mutual information to estimate the significance of features.

    In this paper, we propose a new FS method called Fuzzy Joint Mutual Information Maximization (FJMIM). The proposed method integrates an improved JMIM objective function with the fuzzy concept. The benefits of the proposed method include: 1) the ability to deal directly with discrete and continuous features; 2) the suitability to handle any kind of relation between features, linear or non-linear; 3) the ability to take advantage of inner- and outer-class information; 4) robustness toward noise; and 5) the ability to select the most significant feature subset while avoiding undesirable features.

    To confirm the effectiveness of FJMIM, 13 benchmark datasets were used to evaluate the proposed method in terms of classification performance (accuracy, precision, F-measure, AUC, and AUCPR) and feature selection stability. Compared with nine conventional and state-of-the-art feature selection methods, the proposed method achieved promising improvements in both classification performance and stability.

    In future work, we plan to extend the proposed method to cover the multi-label classification problem. Moreover, we plan to study the effect of imbalanced data on the proposed method.

    This research has been supported by the National Natural Science Foundation (61572368).

    The authors declare no conflict of interest.

    Tables A1–A6 show the numerical results behind Figures 3(b), 4, 6, 8, 10 and 12(b), while Figures A1–A4 show the statistical results (mean ± standard deviation) on four datasets (DRD, Sonar, Wilt and Zoo) for precision, F-measure, AUC and AUCPR, respectively.

    Figure A1.  The precision results of four datasets (DRD, Sonar, Wilt and Zoo) on the used classifiers (NB, SVM and KNN).
    Figure A2.  The F-measure results of four datasets (DRD, Sonar, Wilt and Zoo) on the used classifiers (NB, SVM and KNN).
    Figure A3.  The AUC results of four datasets (DRD, Sonar, Wilt and Zoo) on the used classifiers (NB, SVM and KNN).
    Figure A4.  The AUCPR results of four datasets (DRD, Sonar, Wilt and Zoo) on the used classifiers (NB, SVM and KNN).
    Table A1.  Average accuracy of compared FS methods.
    FS method Average
    CMIM 78.9
    CMIM3 78.6
    JMI 78.9
    JMI3 79.1
    JMIM 78.6
    MIGM 78.4
    QPFS 79.2
    Relief 76.9
    WRFS 78.1
    FJMIM 79.8

    Table A2.  Precision results according to the used classifiers and their average.
    FS method NB SVM KNN Average
    CMIM 0.702 0.641 0.704 0.682
    CMIM3 0.742 0.641 0.705 0.696
    JMI 0.705 0.640 0.713 0.686
    JMI3 0.712 0.636 0.723 0.690
    JMIM 0.701 0.642 0.707 0.683
    MIGM 0.701 0.637 0.708 0.682
    QPFS 0.712 0.636 0.711 0.686
    Relief 0.621 0.612 0.661 0.631
    WRFS 0.698 0.632 0.708 0.679
    FJMIM 0.748 0.645 0.743 0.712

    Table A3.  F-measure results according to the used classifiers and their average.
    FS method NB SVM KNN Average
    CMIM 0.722 0.712 0.818 0.750
    CMIM3 0.795 0.712 0.752 0.753
    JMI 0.724 0.712 0.823 0.753
    JMI3 0.729 0.712 0.834 0.759
    JMIM 0.719 0.714 0.821 0.751
    MIGM 0.722 0.710 0.818 0.750
    QPFS 0.732 0.712 0.825 0.756
    Relief 0.632 0.708 0.688 0.676
    WRFS 0.716 0.701 0.826 0.748
    FJMIM 0.798 0.715 0.840 0.784

    Table A4.  AUC results according to the used classifiers and their average.
    FS method NB SVM KNN Average
    CMIM 0.835 0.733 0.812 0.793
    CMIM3 0.836 0.748 0.844 0.809
    JMI 0.837 0.734 0.815 0.795
    JMI3 0.831 0.729 0.805 0.788
    JMIM 0.836 0.742 0.812 0.796
    MIGM 0.832 0.754 0.807 0.798
    QPFS 0.838 0.710 0.807 0.785
    Relief 0.804 0.707 0.801 0.771
    WRFS 0.831 0.732 0.806 0.789
    FJMIM 0.839 0.748 0.852 0.813

    Table A5.  AUCPR results according to the used classifiers and their average.
    FS method NB SVM KNN Average
    CMIM 0.848 0.708 0.798 0.785
    CMIM3 0.862 0.715 0.862 0.813
    JMI 0.850 0.708 0.797 0.785
    JMI3 0.837 0.705 0.800 0.781
    JMIM 0.852 0.713 0.797 0.787
    MIGM 0.857 0.724 0.827 0.803
    QPFS 0.857 0.708 0.797 0.787
    Relief 0.842 0.685 0.791 0.772
    WRFS 0.850 0.697 0.795 0.781
    FJMIM 0.862 0.718 0.869 0.816

    Table A6.  Average stability of compared FS methods.
    FS method Stability
    CMIM 72.6
    CMIM3 67.3
    JMI 81.2
    JMI3 75.2
    JMIM 75.9
    MIGM 66.2
    QPFS 71.7
    Relief 44.3
    WRFS 66.6
    FJMIM 87.8



    [1] Prentice IC, Farquhar GD, Fasham MJR, et al. (2001) The carbon cycle and atmospheric carbon dioxide. In: Houghton JT, Ding Y, Griggs DJ, et al. (eds) Climate Change 2001: The scientific basis, Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), Cambridge University Press, 183–237.
    [2] Zomer R, Ustin S, Ives J (2003) Using satellite remote sensing for DEM extraction in complex mountainous terrain: Landscape analysis of the Makalu Barun National Park of eastern Nepal. Int J Remote Sens 23: 125–143.
    [3] Singh SK, Pandey CV, Sidhu GS (2011) Concentration and stock of carbon in the soils affected by land uses, soil types and climates in the western Himalaya, India. Catena 87: 78–89. doi: 10.1016/j.catena.2011.05.008
    [4] Lal R, Kimble JM, Follett RF, et al. (1998). The Potential of US Cropland to Sequester C and Mitigate the Greenhouse Effect, Ann Arbor Press, Chelsea, MI, 108.
    [5] Lal R, Follett RF, Kimble JM, et al. (1999) Management of U.S. cropland to sequester carbon in soil. J Soil Water Conserv 54: 374–381.
    [6] Bhattacharyya T, Pal DK, Chandran P, et al. (2008) Soil carbon storage capacity as a tool to prioritize area for carbon sequestration. Curr Sci 95: 482–484.
    [7] Shi Z, Li X, Zhang L, et al. (2015) Impacts of farmland conversion to apple (Malus domestica) orchard on soil organic carbon stocks and enzyme activities in a semiarid loess region. J Plant Nutr Soil Sci 178: 440–451. doi: 10.1002/jpln.201400211
    [8] Murty D, Miko UF, Kirshbaum F (2002) Does conversion of forest to agricultural land change soil carbon and nitrogen? A review of the literature. Glob Chang Biol 8:105–123. doi: 10.1046/j.1354-1013.2001.00459.x
    [9] Guo LB, Gifford RM (2002) Soil carbon stocks and land use change: A meta-analysis. Glob Chang Biol 8: 345–360. doi: 10.1046/j.1354-1013.2002.00486.x
    [10] Lemenih M, Karltun E, Olsson M (2005) Soil Organic Matter Dynamics after Deforestation along a Farm Field Chronosequence in Southern Highlands of Ethiopia. Agroecosyst Environ 109: 9–19.
    [11] Sheikh I, Tiwari SC (2013) Sequestration of Soil Organic Carbon Pool under Different Land uses in Bilaspur District of Achanakmar, Chhattisgarh. Int J Sci Res 4: 1920–1924.
    [12] Gupta MK, Sharma SD, Kumar M (2014) Status of sequestered organic carbon in the soils under different land uses in southern region of Haryana. Int J Sci Environ Technol 3: 811–826.
    [13] Six J, Elliott ET, Paustian K (1999) Aggregate and Soil Organic Matter Dynamics under Conventional and No- Tillage Systems. Soil Sci Soc Am J 63: 1350–1358. doi: 10.2136/sssaj1999.6351350x
    [14] Choudhury BU, Mohapatra KP, Das A, et al. (2013) Spatial variability in distribution of organic carbon stocks in the soils of North East India. Curr Sci 104: 604–614.
    [15] Iqbal MA, Hossen MS, Islam NM (2014) Soil organic carbon dynamics for different land uses and soil management practices in Mymensingh. Proceedings of 5th International Conference on Environmental Aspects of Bangladesh, 16–17.
    [16] Singh SL, Sahoo UK (2015) Soil carbon sequestration in homegardens of different age and size in Aizawl district of Mizoram, Northeast India. NeBIO 6: 12–17.
    [17] Yadav RL, Dwivedi BS, Prasad K, et al. (2000) Yield trends, and changes in soil organic-C and available NPK in a long-term rice-wheat system under integrated use of manures and fertilisers. Field Crops Res 68: 219–246. doi: 10.1016/S0378-4290(00)00126-X
    [18] Bolinder MA, Janzen HH, Gregorich EG, et al. (2007) An approach for estimating net primary productivity and annual carbon inputs to soil for common agricultural crops in Canada. Agric Ecosyst Environ 118: 29–42. doi: 10.1016/j.agee.2006.05.013
    [19] Leu A (2009) Applied organic systems, carbon farming and climate change. Asian J Food Agro-Ind, 307–317.
    [20] Sombroek WG, Nachtergaele FO, Hebel A (1993) Amount, dynamics and sequestering of carbon in tropical and subtropical soils. Ambio 22: 417–426.
    [21] Lal R (2002) Soil erosion and the global carbon budget. Environ Int 29: 437–450.
    [22] FSI (2017) India State of Forest Report. Forest Survey of India (Ministry of Environment and Forest and Climate Change), Dehradun, India, 248–253.
    [23] Singh SL, Sahoo UK, Gogoi A, et al. (2018) Effect of Land Use Changes on Carbon Stock Dynamics in Major Land Use Sectors of Mizoram, Northeast India. J Environ Prot 9: 1262–1285. doi: 10.4236/jep.2018.912079
    [24] Singh SL, Sahoo UK (2018) Assessment of Biomass, Carbon stock and Carbon Sequestration Potential of Two Major Land Uses of Mizoram, India. Inter J Ecol Environ Sci 44: 293–306.
    [25] Devi AS, Singh KS, Lalramnghinglova H (2018) Aboveground biomass production of Melocanna baccifera and Bambusa tulda in a sub-tropical bamboo forest in Lengpui, North-East India. Int Res J Environ Sci 7: 23–28.
    [26] Walkley A, Black IA (1934) An examination of the Degtjareff method for determining soil organic matter, and a proposed modification of the chromic acid titration method. Soil Sci 37: 29–38. doi: 10.1097/00010694-193401000-00003
    [27] IPCC (International Panel on Climate Change) (2003) LUCF sector good practice guidance. In: Penman J, Gytarsky M, Hiraishi T, et al., IPCC Good practice guidance for LULUCF. IPCC National Greenhouse Gas Inventories Programme, and Institute for Global Environmental Strategies (IGES), Hayama, Kanagawa, Japan, 3.1–3.312.
    [28] Deng L, Zhu GY, Tang ZS, et al. (2016) Global Patterns of the Effects of Land Use Changes on Soil Carbon Stocks. Global Ecol Conserv 5: 127–138. doi: 10.1016/j.gecco.2015.12.004
    [29] Bessah E, Bala A, Agodzo SK, et al. (2016) Dynamics of soil organic carbon stock in the Guinea savanna and transition agro-ecology under different land-use systems in Ghana. Cogent Geosci 4: 1–11.
    [30] Mulat Y, Kibret K, Bedadi B, et al. (2018) Soil organic carbon stock under different land use types in Kersa Sub Watershed, Eastern Ethiopia. Afr J Agric Res 13:1248–1256. doi: 10.5897/AJAR2018.13190
    [31] Jones CA (1983) Effect of soil texture on critical bulk densities for root growth. Soil Sci Soc Am J 47: 1028–1121.
    [32] Liu QH, Shi XZ, Weindorf DC (2006) Soil organic carbon storage of paddy soils in China using the 1:1,000,000 soil database and their implications for C sequestration. Global Biogeochem Cycles 20.
    [33] Stern J, Wang Y, Gu B (2007) Distribution and turnover of carbon in natural and constructed wetlands in Florida Everglades. Appl Geochem 22: 1936–1948. doi: 10.1016/j.apgeochem.2007.04.007
    [34] Sahrawat KL (2005) Fertility and organic matter in submerged rice soils. Curr Sci 88:735–739.
    [35] Brady NC, Weil RR (2008) The Nature and Properties of Soils, 14th Edition, Pearson Education, London.
    [36] Gupta MK, Sharma SD (2011) Sequestrated Carbon: Organic Carbon Pool in the Soils under Different Forest Covers and Land Uses in Garhwal Himalayan Region of India. Int J Agric For 1: 14–20.
    [37] Mathieu JA, Hatte C, Balesdent J, et al. (2015). Deep soil carbon dynamics are driven more by soil type than by climate: a worldwide meta-analysis of radiocarbon profile. Glob Chang Biol 21: 4278–4292. doi: 10.1111/gcb.13012
    [38] Rasse DP, Rumpel C, Dignac MF (2005). Is soil carbon mostly root carbon? Mechanisms for a specific stabilisation. Plant Soil 269: 341–356.
    [39] Bernhard-Reversat F (1982) Biogeochemical cycle of nitrogen in a semi-arid savannah. Oikos 38: 321–332. doi: 10.2307/3544672
    [40] Isichel AO, Muoghalu JI (1992) The effect of tree canopy covers on soil fertility in a Nigerian Savannah. J Trop Ecol 8: 329–338. doi: 10.1017/S0266467400006623
    [41] Osman KS, Jashimuddin M, SirajulHaque SM, et al. (2013) Effect of shifting cultivation on soil physical and chemical properties in Bandarban hill district, Bangladesh. J For Res 24: 791–795. doi: 10.1007/s11676-013-0368-3
    [42] Chou CH, Yang CM (1982) Allelopathic research of subtropical vegetation in Taiwan. II. Comparative exclusion of understory by Phyllostachys edulis and Cryptomeria japonica. J Chem Ecol 8:1489–1508.
    [43] Chan EH, Chiu CY (2015) Changes in soil microbial community structure and activity in a cedar plantation invaded by Moso bamboo. Appl Soil Ecol 91: 1–7. doi: 10.1016/j.apsoil.2015.02.001
    [44] Wagener SM, Schimel JP (1998) Stratification of ecological processes: a study of the birch forest Floor in the Alaskan taiga. Oikos 81: 63–74. doi: 10.2307/3546468
    [45] Das B, Bindi (2014) Physical and chemical analysis of soil collected from Jaismand. Univers J Environ Res Technol 4: 260–164.
    [46] Baishya J, Sharma S (2017) Analysis of Physico-Chemical Properties of Soil under Different Land Use System with Special Reference to Agro Ecosystem in Dimoria Development Block of Assam, India. Int J Sci Res Educ 5: 6526–6532.
    [47] Tate KR (1992) Assessment, based on a climosequence of soil in tussock grasslands, of soil carbon storage and release in response to global warming. J Soil Sci 43: 697–707. doi: 10.1111/j.1365-2389.1992.tb00169.x
    [48] Garten CT, Post WM, Hanson PJ, et al. (1999) Forest soil carbon inventories and dynamics along an elevation gradient in the southern Appalachian Mountains. Biogeochem 45: 115–145.
    [49] Quideau SA, Chadwick QA, Benesi A, et al. (2001) A direct link between forest vegetation type and soil organic matter composition. Geoderma 104: 41–60. doi: 10.1016/S0016-7061(01)00055-6
    [50] Trumbore SE, Vitousek PM, Amundson RR (1996) Rapid exchange between soil carbon and atmospheric carbon dioxide driven by temperature change. Science 272: 393–396. doi: 10.1126/science.272.5260.393
    [51] Kara O, Bolat L (2008) The effect of different land uses on soil microbial biomass carbon and nitrogen in Bartin province. Turk J Agric For 32: 281–288.
    [52] Janzen H, Campbell CA, Brandt SA, et al. (1992) Light-fraction organic matter in soils from long-term crop rotations. Soil Sci Soc Am J 56: 1799–1806. doi: 10.2136/sssaj1992.03615995005600060025x
    [53] Manns HR, Maxwellr CD, Emery JN (2007) The effect of ground cover or initial organic carbon on soil fungi, aggregation, moisture and organic carbon in one season with oat (Avena sativa) plots. Soil Tillage Res 96: 83–94. doi: 10.1016/j.still.2007.03.001
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)