Research article

A semi-supervised deep neuro-fuzzy iterative learning system for automatic segmentation of hippocampus brain MRI

  • The hippocampus is a small yet intricate, seahorse-shaped structure located deep within the brain's medial temporal lobe. It is a crucial component of the limbic system, which regulates emotions, memory, and spatial navigation. This research focuses on automatic hippocampus segmentation from magnetic resonance (MR) images of the human head with high accuracy and low false positive and false negative rates. The technique is significantly faster than the manual segmentation used in clinics. Unlike existing approaches such as UNet and convolutional neural networks (CNNs), the proposed algorithm generates an image close to the real image by learning the underlying distribution much more quickly through the semi-supervised iterative learning algorithm of the deep neuro-fuzzy (DNF) technique. To assess its effectiveness, the proposed segmentation technique was evaluated on a large dataset of 18,900 images from Kaggle, and the results were compared with those of existing methods. Based on the results reported in the experimental section, the proposed Semi-Supervised Deep Neuro-Fuzzy Iterative Learning System (SS-DNFIL) achieved a Dice coefficient of 0.97, a Jaccard coefficient of 0.93, a sensitivity (true positive rate) of 0.95, a specificity (true negative rate) of 0.97, a false positive value of 0.09, and a false negative value of 0.08, outperforming existing approaches. The proposed segmentation technique thus produces the desired result, enabling an accurate diagnosis at the earliest stage to help save lives and extend life spans.

    Citation: M Nisha, T Kannan, K Sivasankari. A semi-supervised deep neuro-fuzzy iterative learning system for automatic segmentation of hippocampus brain MRI[J]. Mathematical Biosciences and Engineering, 2024, 21(12): 7830-7853. doi: 10.3934/mbe.2024344




    For a continuous risk outcome 0 < y < 1, a model with a random effect potentially has wide applications in portfolio risk management, especially for stress testing [1,2,7,16,19], capital allocation, and conditional expected shortfall estimation [3,11,17].

    Given fixed effects $x=(x_1,x_2,\ldots,x_k)$, two widely used regression models for estimating the expected value $E(y|x)$ are the fraction response model [10] and the Beta regression model [4,6,8]. There are cases, however, where the tail behaviour or severity level of the risk outcome is relevant; in those cases, a regression model for the mean may no longer meet the requirements. In addition, a fraction response model of the form $E(y|x)=\Phi(a_0+a_1x_1+\cdots+a_kx_k)$ may not be adequate when the data exhibit significant heteroscedasticity, where $\Phi$ is a map from $R^1$ to the open interval (0, 1).

    In this paper, we assume that the risk outcome y is driven by a model:

    $y=\Phi(a_0+a_1x_1+\cdots+a_kx_k+bs),$ (1.1)

    where $s$ is a continuous random variable following a known distribution, independent of the fixed effects $(x_1,x_2,\ldots,x_k)$. Parameters $a_0,a_1,\ldots,a_k$ are constant, while parameter $b$ can be chosen to depend on $(x_1,x_2,\ldots,x_k)$ when required, for example, to address data heteroscedasticity.

    Given the random effect model (1.1), the expected value $E(y|x)$ can be deduced accordingly. It is given by the integral $\int_{\Omega}\Phi(a_0+a_1x_1+\cdots+a_kx_k+bs)f(s)\,ds$ over the domain $\Omega$ of $s$, where $f$ is the probability density of $s$. Given the routine QUAD implemented in SAS and Python, this integral can be evaluated as quickly as other function calls. The default relative error tolerance for QUAD is 1.49e-8 in Python and 1e-7 in SAS, and it can be rescaled to a desired level when necessary. This leads to an alternative regression tool to the fraction response model and the Beta regression model.
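
    To make this concrete, the following minimal Python sketch (ours, not the authors' code) evaluates $E(y|x)$ by numerical integration, assuming $s\sim N(0,1)$ and taking $\Phi$ to be the standard normal CDF; it uses scipy.integrate.quad, the routine referred to as QUAD above.

```python
# Minimal sketch: E(y|x) for model (1.1), assuming s ~ N(0,1) and Phi = standard
# normal CDF. Here v stands for a0 + a1*x1 + ... + ak*xk evaluated at a given x.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def expected_y(v, b):
    """E(y|x) = integral of Phi(v + b*s) * f(s) ds over the real line."""
    integrand = lambda s: norm.cdf(v + b * s) * norm.pdf(s)
    value, _ = integrate.quad(integrand, -np.inf, np.inf, epsrel=1.49e-8)
    return value

print(expected_y(v=-1.0, b=0.5))  # illustrative values of v and b
```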

    We introduce a family of interval distributions based on variable transformations. Probability densities for these distributions are provided (Proposition 2.1). Parameters of model (1.1) can then be estimated by maximum likelihood approaches assuming an interval distribution. In some cases, these parameters admit an analytical solution without the need for model fitting (Proposition 4.1). We call a model with a random effect, where parameters are estimated by maximum likelihood assuming an interval distribution, an interval distribution model.

    In its simplest form, the interval distribution model $y=\Phi(a+bs)$, where $a$ and $b$ are constant, can be used to model the loss rate as a random distribution for a homogeneous portfolio. Let $y_\alpha$ and $s_\alpha$ denote the $\alpha$-quantiles of $y$ and $s$ at level $\alpha$, $0<\alpha<1$. Then $y_\alpha=\Phi(a+bs_\alpha)$. The conditional expected shortfall of the loss rate $y$ at level $\alpha$ can then be estimated as the integral $\frac{1}{1-\alpha}\int_{[s_\alpha,+\infty)}\Phi(a+bs)f(s)\,ds$, where $f$ is the density of $s$. Meanwhile, a stress testing loss estimate, derived from a model for a specific scenario, can be compared in loss rate to the severity $y_\alpha\,(=\Phi(a+bs_\alpha))$ to position its level of severity. A loss estimate may not yet have reached the desired level, for example the 99% level, if it is far below $y_{0.99}$ and far below the maximum historical loss rate; in that case, further recalibration of the model may be required.
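
    The quantile and conditional expected shortfall described above can be computed as in the following Python sketch (an illustration under the assumptions $s\sim N(0,1)$ and $\Phi$ the standard normal CDF; the function names are ours).

```python
# Minimal sketch: y_alpha = Phi(a + b*s_alpha) and the conditional expected
# shortfall (1/(1-alpha)) * integral_{s_alpha}^{inf} Phi(a + b*s) f(s) ds,
# assuming s ~ N(0,1) and Phi = standard normal CDF.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def loss_quantile(a, b, alpha):
    s_alpha = norm.ppf(alpha)
    return norm.cdf(a + b * s_alpha)

def expected_shortfall(a, b, alpha):
    s_alpha = norm.ppf(alpha)
    integrand = lambda s: norm.cdf(a + b * s) * norm.pdf(s)
    tail, _ = integrate.quad(integrand, s_alpha, np.inf)
    return tail / (1.0 - alpha)

print(loss_quantile(a=-2.0, b=0.6, alpha=0.99))       # severity y_0.99
print(expected_shortfall(a=-2.0, b=0.6, alpha=0.99))  # ES at the 99% level
```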

    The paper is organized as follows: in section 2, we introduce a family of interval distributions and define a measure of tail fatness. In section 3, we present examples of interval distributions and investigate their tail behaviours. In section 4, we propose an algorithm for estimating the parameters of model (1.1).

    Interval distributions introduced in this section are defined for a risk outcome over a finite open interval $(c_0, c_1)$, where $c_0<c_1$ are finite numbers. These interval distributions can potentially be used for modeling a risk outcome over an arbitrary finite interval, including the interval (0, 1), by maximum likelihood approaches.

    Let $D=(d_0, d_1)$, $d_0<d_1$, be an open interval, where $d_0$ can be finite or $-\infty$ and $d_1$ can be finite or $+\infty$.

    Let

    $\Phi: D\to(c_0,c_1)$ (2.1)

    be a transformation with continuous and positive derivative $\Phi'(x)=\phi(x)$. A special example is $(c_0,c_1)=(0,1)$, with $\Phi: D\to(0,1)$ the cumulative distribution function (CDF) of a random variable with a continuous and positive density.

    Given a continuous random variable s, let f and F be respectively its density and CDF. For constants a and b>0, let

    y=Φ(a+bs), (2.2)

    where we assume that the range of variable (a+bs) is in the domain D of Φ. Let g(y,a,b) and G(y,a,b) denote respectively the density and CDF of y in (2.2).

    Proposition 2.1. Given $\Phi^{-1}(y)$, the functions $g(y,a,b)$ and $G(y,a,b)$ are given by:

    $g(y,a,b)=U_1/(bU_2)$ (2.3)
    $G(y,a,b)=F\left[\frac{\Phi^{-1}(y)-a}{b}\right].$ (2.4)

    where

    $U_1=f\{[\Phi^{-1}(y)-a]/b\},\qquad U_2=\phi[\Phi^{-1}(y)].$ (2.5)

    Proof. A proof for the case $(c_0,c_1)=(0,1)$ can be found in [18]. The proof here is similar. Since $G(y,a,b)$ is the CDF of $y$, it follows that:

    $G(y,a,b)=P[\Phi(a+bs)\le y]$
    $=P\{s\le[\Phi^{-1}(y)-a]/b\}$
    $=F\{[\Phi^{-1}(y)-a]/b\}.$

    By the chain rule and the relationship $\Phi[\Phi^{-1}(y)]=y$, the derivative of $\Phi^{-1}(y)$ with respect to $y$ is

    $\frac{\partial\Phi^{-1}(y)}{\partial y}=\frac{1}{\phi[\Phi^{-1}(y)]}.$ (2.6)

    Taking the derivative of G(y,a,b) with respect to y, we have

    $\frac{\partial G(y,a,b)}{\partial y}=\frac{f\{[\Phi^{-1}(y)-a]/b\}}{b\,\phi[\Phi^{-1}(y)]}=\frac{U_1}{bU_2}.$
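
    As an illustration of Proposition 2.1, the following Python sketch (ours) evaluates $g(y,a,b)$ and $G(y,a,b)$ for one particular choice of transformation and random effect: $\Phi$ the standard normal CDF and $s$ standard logistic. Other choices of $\Phi$ and $s$ can be substituted in the same way.

```python
# Minimal sketch of Proposition 2.1: g(y,a,b) = U1/(b*U2) and
# G(y,a,b) = F[(Phi^{-1}(y) - a)/b] for y = Phi(a + b*s).
# Illustrative choice: Phi = standard normal CDF, s ~ standard logistic.
from scipy.stats import norm, logistic

def interval_pdf(y, a, b, s_dist=logistic, Phi=norm):
    z = Phi.ppf(y)                  # z = Phi^{-1}(y)
    U1 = s_dist.pdf((z - a) / b)    # U1 = f{[Phi^{-1}(y) - a]/b}
    U2 = Phi.pdf(z)                 # U2 = phi[Phi^{-1}(y)]
    return U1 / (b * U2)

def interval_cdf(y, a, b, s_dist=logistic, Phi=norm):
    return s_dist.cdf((Phi.ppf(y) - a) / b)

print(interval_pdf(0.3, a=0.0, b=1.5), interval_cdf(0.3, a=0.0, b=1.5))
```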

    One can explore these interval distributions for their shapes, including skewness and modality. For stress testing purposes, we are more interested in the tail risk behaviours of these distributions.

    Recall that, for a variable $X$ over $(-\infty,+\infty)$, we say that the distribution of $X$ has a fat right tail if there is a positive exponent $\alpha>0$, called the tailed index, such that $P(X>x)\sim x^{-\alpha}$. The relation $\sim$ refers to the asymptotic equivalence of functions, meaning that their ratio tends to a positive constant. Note that, when the density is a continuous function, it tends to 0 as $x\to+\infty$. Hence, by L'Hospital's rule, the existence of a tailed index is equivalent to saying that the density decays like a power law, whenever the density is a continuous function.

    For a risk outcome over a finite interval $(c_0,c_1)$, $c_0<c_1$, however, its density can be $+\infty$ when approaching the boundaries $c_0$ and $c_1$. Let $y_0$ be the largest lower bound for all values of $y$ under (2.2), and $y_1$ the smallest upper bound. We assume $y_0=c_0$ and $y_1=c_1$.

    We say that an interval distribution has a fat right tail if $\lim_{y\to y_1^-}g(y,a,b)=+\infty$, and a fat left tail if $\lim_{y\to y_0^+}g(y,a,b)=+\infty$, where $y\to y_0^+$ and $y\to y_1^-$ denote respectively $y$ approaching $y_0$ from the right-hand side and $y_1$ from the left-hand side. For simplicity, we write $y\to y_0$ for $y\to y_0^+$, and $y\to y_1$ for $y\to y_1^-$.

    Given $\alpha>0$, we say that an interval distribution has a fat right tail with tailed index $\alpha$ if $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\beta}=+\infty$ whenever $0<\beta<\alpha$, and $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\beta}=0$ for $\beta>\alpha$. Similarly, an interval distribution has a fat left tail with tailed index $\alpha$ if $\lim_{y\to y_0}g(y,a,b)(y-y_0)^{\beta}=+\infty$ whenever $0<\beta<\alpha$, and $\lim_{y\to y_0}g(y,a,b)(y-y_0)^{\beta}=0$ for $\beta>\alpha$. The status at $\beta=\alpha$ is left open. There are examples (Remark 3.4) where an interval distribution has a fat right tail with tailed index $\alpha$, but the limit $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\alpha}$ can be either $+\infty$ or 0. Under this definition, the tailed index of an interval distribution with a continuous density, if it exists, is always larger than 0 and less than or equal to 1.

    Recall that, for a Beta distribution with parameters $\alpha>0$ and $\beta>0$, the density is given by $f(x)=\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$, where $B(\alpha,\beta)$ is the Beta function. Under the above definition, the Beta distribution has a fat right tail with tailed index $(1-\beta)$ when $0<\beta<1$, and a fat left tail with tailed index $(1-\alpha)$ when $0<\alpha<1$.

    Next, because the derivative of $\Phi$ is assumed to be continuous and positive, $\Phi$ is strictly monotonic. Hence $\Phi^{-1}(y)$ is well defined. Let

    $z=\Phi^{-1}(y).$ (2.7)

    Then $\lim_{y\to y_0}z$ exists (it can be $-\infty$), and the same holds for $\lim_{y\to y_1}z$ (it can be $+\infty$). Let $\lim_{y\to y_0}z=z_0$ and $\lim_{y\to y_1}z=z_1$. Rewrite $g(y,a,b)$ as $g(\Phi(z),a,b)$ by (2.7). Let $\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z$ denote the derivative of $[g(\Phi(z),a,b)]^{-1/\beta}$ with respect to $z$.

    Lemma 2.2. Given β>0, the following statements hold:

    (ⅰ) $\lim_{y\to y_0}g(y,a,b)(y-y_0)^{\beta}=\lim_{z\to z_0}g(\Phi(z),a,b)(\Phi(z)-y_0)^{\beta}$ and $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\beta}=\lim_{z\to z_1}g(\Phi(z),a,b)(y_1-\Phi(z))^{\beta}$.

    (ⅱ) If $\lim_{y\to y_0}g(y,a,b)=+\infty$ and $\lim_{z\to z_0}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$ is 0 (resp. $+\infty$), then $\lim_{y\to y_0}g(y,a,b)(y-y_0)^{\beta}=+\infty$ (resp. 0).

    (ⅲ) If $\lim_{y\to y_1}g(y,a,b)=+\infty$ and $\lim_{z\to z_1}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$ is 0 (resp. $+\infty$), then $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\beta}=+\infty$ (resp. 0).

    Proof. The first statement follows from the relationship y=Φ(z). For statements (ⅱ) and (ⅲ), we show only (ⅲ). The proof for (ⅱ) is similar. Notice that

    $[g(y,a,b)(y_1-y)^{\beta}]^{-1/\beta}=\frac{[g(y,a,b)]^{-1/\beta}}{y_1-y}=\frac{[g(\Phi(z),a,b)]^{-1/\beta}}{y_1-\Phi(z)}.$ (2.8)

    By L'Hospital's rule, taking the derivatives of the numerator and the denominator of (2.8) with respect to $z$, we have $\lim_{y\to y_1}[g(y,a,b)(y_1-y)^{\beta}]^{-1/\beta}=0$ (resp. $+\infty$) if $\lim_{z\to z_1}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$ is 0 (resp. $+\infty$). Hence $\lim_{y\to y_1}g(y,a,b)(y_1-y)^{\beta}=+\infty$ (resp. 0).

    For tail convexity, we say that the right tail of an interval distribution is convex if $g(y,a,b)$ is convex for $y_1-\epsilon<y<y_1$ for sufficiently small $\epsilon>0$. Similarly, the left tail is convex if $g(y,a,b)$ is convex for $y_0<y<y_0+\epsilon$ for sufficiently small $\epsilon>0$. One sufficient condition for convexity of the right (resp. left) tail is $g_{yy}(y,a,b)\ge0$ when $y$ is sufficiently close to $y_1$ (resp. $y_0$).

    Again, write g(y,a,b)=g(Φ(z),a,b). Let

    $h(z,a,b)=\log[g(\Phi(z),a,b)],$ (2.9)

    where log(x) denotes the natural logarithmic function. Then

    $g(y,a,b)=\exp[h(z,a,b)].$ (2.10)

    By (2.9), (2.10), using (2.6) and the relationship z=Φ1(y), we have

    $g_y=[h_z(z)/\phi(z)]\exp[h(\Phi^{-1}(y),a,b)],\qquad g_{yy}=\left[\frac{h_{zz}(z)}{\phi^2(z)}-\frac{h_z(z)\phi_z(z)}{\phi^3(z)}+\frac{h_z(z)h_z(z)}{\phi^2(z)}\right]\exp[h(\Phi^{-1}(y),a,b)].$ (2.11)

    The following lemma, which follows from (2.11), is useful for checking tail convexity.

    Lemma 2.3. Suppose $\phi(z)>0$ and that the derivatives $h_z(z)$, $h_{zz}(z)$, and $\phi_z(z)$ with respect to $z$ all exist. If $h_{zz}(z)\ge0$ and $h_z(z)\phi_z(z)\le0$, then $g_{yy}(y,a,b)\ge0$.

    In this section, we focus on the case where $(c_0,c_1)=(0,1)$ and $\Phi: D\to(0,1)$ in (2.2) is the CDF of a continuous distribution. This includes, for example, the CDFs of the standard normal and standard logistic distributions.

    One can explore a wide range of densities with different choices of $\Phi$ and $s$ under (2.2). We consider here only the following four interval distributions:

    A. $s\sim N(0,1)$ and $\Phi$ is the CDF of the standard normal distribution.

    B. s follows the standard logistic distribution and Φ is the CDF for the standard normal distribution.

    C. s follows the standard logistic distribution and Φ is its CDF.

    D. $s\sim N(0,1)$ and $\Phi$ is the CDF of the standard logistic distribution.

    Densities for cases A, B, C, and D are given respectively in (3.3) (section 3.1), (A.1), (A.3), and (A.5) (Appendix A). The tail behaviour study is summarized in Propositions 3.3 and 3.5 and in Remark 3.6. Sketches of density plots are provided in Appendix B for distributions A, B, and C.

    For case A, using the notations of section 2, we have $\phi=f$ and $\Phi=F$. We claim that $y=\Phi(a+bs)$ under (2.2) follows the Vasicek distribution [13,14].

    By (2.5), we have

    $\log\left(\frac{U_1}{U_2}\right)=\frac{-z^2+2az-a^2+b^2z^2}{2b^2}$ (3.1)
    $=\frac{-(1-b^2)\left(z-\frac{a}{1-b^2}\right)^2+\frac{b^2}{1-b^2}a^2}{2b^2}.$ (3.2)

    Therefore, we have

    $g(y,a,b)=\frac{1}{b}\exp\left\{\frac{-(1-b^2)\left(z-\frac{a}{1-b^2}\right)^2+\frac{b^2}{1-b^2}a^2}{2b^2}\right\}.$ (3.3)

    Again, using the notations of section 2, we have $y_0=0$ and $y_1=1$. With $z=\Phi^{-1}(y)$, we have $\lim_{y\to0}z=-\infty$ and $\lim_{y\to1}z=+\infty$. Recall that a variable $0<y<1$ follows a Vasicek distribution [13,14] if its density has the form:

    $g(y,p,\rho)=\sqrt{\frac{1-\rho}{\rho}}\exp\left\{-\frac{1}{2\rho}\left[\sqrt{1-\rho}\,\Phi^{-1}(y)-\Phi^{-1}(p)\right]^2+\frac{1}{2}[\Phi^{-1}(y)]^2\right\},$ (3.4)

    where $p$ is the mean of $y$, and $\rho$ is a parameter called the asset correlation.

    Proposition 3.1. Density (3.3) is equivalent to (3.4) under the relationships:

    $a=\frac{\Phi^{-1}(p)}{\sqrt{1-\rho}}\quad\text{and}\quad b=\sqrt{\frac{\rho}{1-\rho}}.$ (3.5)

    Proof. A similar proof can be found in [19]. By (3.4), we have

    $g(y,p,\rho)=\sqrt{\frac{1-\rho}{\rho}}\exp\left\{-\frac{1-\rho}{2\rho}\left[\Phi^{-1}(y)-\Phi^{-1}(p)/\sqrt{1-\rho}\right]^2+\frac{1}{2}[\Phi^{-1}(y)]^2\right\}$
    $=\frac{1}{b}\exp\left\{-\frac{1}{2}\left[\frac{\Phi^{-1}(y)-a}{b}\right]^2\right\}\exp\left\{\frac{1}{2}[\Phi^{-1}(y)]^2\right\}$
    $=U_1/(bU_2)=g(y,a,b).$

    The following relationships are implied by (3.5):

    $\rho=\frac{b^2}{1+b^2},$ (3.6)
    $a=\Phi^{-1}(p)\sqrt{1+b^2}.$ (3.7)
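
    The following Python snippet (a numerical illustration, not from the paper) checks Proposition 3.1 by evaluating density (3.3) and the Vasicek density (3.4) on a grid, with $p$ and $\rho$ obtained from $a$ and $b$ via (3.6) and (3.7).

```python
# Numerical check that g(y,a,b) of (3.3) coincides with the Vasicek density (3.4)
# under relationships (3.5)-(3.7); a and b below are arbitrary illustrative values.
import numpy as np
from scipy.stats import norm

def g_ab(y, a, b):
    z = norm.ppf(y)
    return norm.pdf((z - a) / b) / (b * norm.pdf(z))   # U1 / (b * U2)

def vasicek_pdf(y, p, rho):
    z = norm.ppf(y)
    return np.sqrt((1 - rho) / rho) * np.exp(
        -((np.sqrt(1 - rho) * z - norm.ppf(p)) ** 2) / (2 * rho) + z ** 2 / 2)

a, b = -1.2, 0.8
rho = b ** 2 / (1 + b ** 2)               # (3.6)
p = norm.cdf(a / np.sqrt(1 + b ** 2))     # inverts (3.7)
y = np.linspace(0.01, 0.99, 9)
print(np.allclose(g_ab(y, a, b), vasicek_pdf(y, p, rho)))  # True
```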

    Remark 3.2. The mode of $g(y,p,\rho)$ in (3.4) is given in [14] as $\Phi\left(\frac{\sqrt{1-\rho}}{1-2\rho}\Phi^{-1}(p)\right)$. We claim this is the same as $\Phi\left(\frac{a}{1-b^2}\right)$. By (3.6), $1-2\rho=\frac{1-b^2}{1+b^2}$ and $1-\rho=\frac{1}{1+b^2}$. Therefore, we have

    $\frac{\sqrt{1-\rho}}{1-2\rho}\Phi^{-1}(p)=\frac{\sqrt{1+b^2}}{1-b^2}\Phi^{-1}(p)=\frac{a}{1-b^2}.$

    This means $\Phi\left(\frac{\sqrt{1-\rho}}{1-2\rho}\Phi^{-1}(p)\right)=\Phi\left(\frac{a}{1-b^2}\right)$.

    Proposition 3.3. The following statements hold for g(y,a,b) given in (3.3):

    (ⅰ) $g(y,a,b)$ is unimodal if $0<b<1$, with mode given by $\Phi\left(\frac{a}{1-b^2}\right)$, and is U-shaped if $b>1$.

    (ⅱ) If $b>1$, then $g(y,a,b)$ has a fat left tail and a fat right tail with tailed index $(1-1/b^2)$.

    (ⅲ) If $b>1$, both tails of $g(y,a,b)$ are convex, and $g$ is globally convex if, in addition, $a=0$.

    Proof. For statement (ⅰ), we have $-(1-b^2)<0$ when $0<b<1$. Therefore, by (3.2), the function $\log(U_1/U_2)$ reaches its unique maximum at $z=\frac{a}{1-b^2}$, resulting in the mode $\Phi\left(\frac{a}{1-b^2}\right)$. If $b>1$, then $-(1-b^2)>0$; thus, by (3.2), $g(y,a,b)$ first decreases and then increases as $y$ varies from 0 to 1. This means $g(y,a,b)$ is U-shaped.

    Consider statement (ⅱ). First, by (3.3), if $b>1$, then $\lim_{y\to1}g(y,a,b)=+\infty$ and $\lim_{y\to0}g(y,a,b)=+\infty$. Thus $g(y,a,b)$ has a fat right tail and a fat left tail. Next, for the tailed index, we use Lemma 2.2 (ⅱ) and (ⅲ). By (3.1),

    $[g(\Phi(z),a,b)]^{-1/\beta}=b^{1/\beta}\exp\left(-\frac{(b^2-1)z^2+2az-a^2}{2\beta b^2}\right).$ (3.8)

    By taking the derivative of (3.8) with respect to $z$ and noting that $\phi(z)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)$, we have

    $\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)=\sqrt{2\pi}\,b^{1/\beta}\,\frac{(b^2-1)z+a}{\beta b^2}\exp\left(-\frac{(b^2-1)z^2+2az-a^2}{2\beta b^2}+\frac{z^2}{2}\right).$ (3.9)

    Thus $\lim_{z\to+\infty}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$ is 0 if $\frac{b^2-1}{\beta b^2}>1$, and is $+\infty$ if $\frac{b^2-1}{\beta b^2}<1$. Hence, by Lemma 2.2 (ⅲ), $g(y,a,b)$ has a fat right tail with tailed index $(1-1/b^2)$. Similarly, for the left tail, we have

    $\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)=\sqrt{2\pi}\,b^{1/\beta}\,\frac{(b^2-1)z+a}{\beta b^2}\exp\left(-\frac{(b^2-1)z^2+2az-a^2}{2\beta b^2}+\frac{z^2}{2}\right).$ (3.10)

    Thus $\lim_{z\to-\infty}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$ is 0 if $\frac{b^2-1}{\beta b^2}>1$, and is $+\infty$ if $\frac{b^2-1}{\beta b^2}<1$. Hence $g(y,a,b)$ has a fat left tail with tailed index $(1-1/b^2)$ by Lemma 2.2 (ⅱ).

    For statement (ⅲ), we use Lemma 2.3. By (2.9) and using (3.2), we have

    $h(z,a,b)=\log\left(\frac{U_1}{bU_2}\right)=\frac{-(1-b^2)\left(z-\frac{a}{1-b^2}\right)^2+\frac{b^2}{1-b^2}a^2}{2b^2}-\log(b).$

    When $b>1$, it is not difficult to check that $h_{zz}(z)\ge0$ and $h_z(z)\phi_z(z)\le0$ when $z\to\pm\infty$ or when $a=0$.

    Remark 3.4. Assume $\beta=(1-1/b^2)$ and $b>1$. By (3.9), we see that

    $\lim_{z\to+\infty}\{\partial[g(\Phi(z),a,b)]^{-1/\beta}/\partial z\}/\phi(z)$

    is $+\infty$ for $a=0$, and is 0 for $a>0$. Hence, for this $\beta$, the limit $\lim_{y\to1}g(y,a,b)(1-y)^{\beta}$ can be either 0 or $+\infty$, depending on the value of $a$.

    For these distributions, we again focus on their tail behaviours. A proof for the next proposition can be found in Appendix A.

    Proposition 3.5. The following statements hold:

    (a) The density $g(y,a,b)$ has a fat left tail and a fat right tail for case B for all $b>0$, and for case C if $b>1$. For case D, it has neither a fat right tail nor a fat left tail for any $b>0$.

    (b) The tailed index of $g(y,a,b)$, for both the right and left tails, is 1 for case B for all $b>0$, and is $(1-1/b)$ for case C for $b>1$.

    Remark 3.6. Among distributions A, B, and C and the Beta distribution, distribution B has the highest tailed index, 1, independently of the choice of $b>0$.

    In this section, we assume that $\Phi$ in (2.2) is a function from $R^1$ to (0, 1) with a positive continuous derivative. We focus on parameter estimation algorithms for model (1.1).

    First, we consider a simple case, where risk outcome y is driven by a model:

    y=Φ(v+bs), (4.1)

    where $b>0$ is a constant, $v=a_0+a_1x_1+\cdots+a_kx_k$, and $s\sim N(0,1)$ is independent of the fixed effects $x=(x_1,x_2,\ldots,x_k)$. The function $\Phi$ does not have to be the standard normal CDF, but when $\Phi$ is the standard normal CDF, the expected value $E(y|x)$ can be evaluated by the formula $E_S[\Phi(a+bs)]=\Phi\left(\frac{a}{\sqrt{1+b^2}}\right)$ [12].
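
    As a quick numerical check (ours, not from the paper), the identity above can be verified against direct integration:

```python
# Verify E_s[Phi(a + b*s)] = Phi(a / sqrt(1 + b^2)) for s ~ N(0,1) and Phi the
# standard normal CDF (a, b below are arbitrary illustrative values).
import numpy as np
from scipy import integrate
from scipy.stats import norm

a, b = -1.0, 0.7
lhs, _ = integrate.quad(lambda s: norm.cdf(a + b * s) * norm.pdf(s), -np.inf, np.inf)
rhs = norm.cdf(a / np.sqrt(1.0 + b ** 2))
print(lhs, rhs)  # the two values agree to integration tolerance
```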

    Given a sample $\{(x_{1i},x_{2i},\ldots,x_{ki},y_i)\}_{i=1}^n$, where $(x_{1i},x_{2i},\ldots,x_{ki},y_i)$ denotes the $i$th data point of the sample, let $z_i=\Phi^{-1}(y_i)$ and $v_i=a_0+a_1x_{1i}+\cdots+a_kx_{ki}$. By (2.3), the log-likelihood function for model (4.1) is:

    $LL=\sum_{i=1}^{n}\left\{\log f\left(\frac{z_i-v_i}{b}\right)-\log\phi(z_i)-\log b\right\},$ (4.2)

    where $f$ is the density of $s$. The term $\sum_{i=1}^{n}\log\phi(z_i)$ is constant and can be dropped from the maximization.

    Recall that the least squares estimate of $(a_0,a_1,\ldots,a_k)$, as a row vector, that minimizes the sum of squares

    $SS=\sum_{i=1}^{n}(z_i-v_i)^2$ (4.3)

    has a closed-form solution given by the transpose of $(X^{T}X)^{-1}X^{T}Z$ [5,9] whenever the design matrix $X$ has full column rank, where

    $X=\begin{pmatrix}1&x_{11}&\cdots&x_{k1}\\1&x_{12}&\cdots&x_{k2}\\\vdots&\vdots&&\vdots\\1&x_{1n}&\cdots&x_{kn}\end{pmatrix},\qquad Z=\begin{pmatrix}z_1\\z_2\\\vdots\\z_n\end{pmatrix}.$

    The next proposition shows there exists an analytical solution for the parameters of model (4.1).

    Proposition 4.1. Given a sample $\{(x_{1i},x_{2i},\ldots,x_{ki},y_i)\}_{i=1}^n$, assume that the design matrix $X$ has full column rank. If $s\sim N(0,1)$, then the maximum likelihood estimates of the parameters $(a_0,a_1,\ldots,a_k)$, as a row vector, and of the parameter $b$ are given respectively by the transpose of $(X^{T}X)^{-1}X^{T}Z$ and $b^2=\frac{1}{n}\sum_{i=1}^{n}(z_i-v_i)^2$. In the absence of fixed effects $\{x_1,x_2,\ldots,x_k\}$, the parameters $a_0$ and $b^2$ degenerate respectively to the sample mean and variance of $z_1,z_2,\ldots,z_n$.

    Proof. Dropping the constant term from (4.2) and noting that $f(z)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{z^2}{2}\right)$, we have

    $LL=-\frac{1}{2b^2}\sum_{i=1}^{n}(z_i-v_i)^2-n\log b.$ (4.4)

    Hence the maximum likelihood estimates of $(a_0,a_1,\ldots,a_k)$ are the same as the least squares estimates for (4.3), which are given by the transpose of $(X^{T}X)^{-1}X^{T}Z$. By taking the derivative of (4.4) with respect to $b$ and setting it to zero, we obtain $b^2=\frac{1}{n}\sum_{i=1}^{n}(z_i-v_i)^2$.
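
    A minimal Python sketch of Proposition 4.1 (under the assumptions $s\sim N(0,1)$ and $\Phi$ the standard normal CDF; the helper name fit_simple_model and the simulated data are ours) is given below.

```python
# Closed-form MLE for model (4.1): regress z_i = Phi^{-1}(y_i) on the fixed
# effects by OLS; b^2 is the mean squared residual (Proposition 4.1).
import numpy as np
from scipy.stats import norm

def fit_simple_model(X_fixed, y):
    """X_fixed: (n, k) array of fixed effects; y: (n,) outcomes in (0, 1)."""
    n = len(y)
    z = norm.ppf(y)                                # z_i = Phi^{-1}(y_i)
    X = np.column_stack([np.ones(n), X_fixed])     # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)   # transpose of (X^T X)^{-1} X^T Z
    b2 = np.mean((z - X @ coef) ** 2)              # b^2 = (1/n) sum (z_i - v_i)^2
    return coef, np.sqrt(b2)

# Simulated usage under the assumed model y = Phi(v + b*s)
rng = np.random.default_rng(0)
Xf = rng.normal(size=(500, 2))
true_a, true_b = np.array([-1.0, 0.5, -0.3]), 0.4
y = norm.cdf(np.column_stack([np.ones(500), Xf]) @ true_a + true_b * rng.normal(size=500))
print(fit_simple_model(Xf, y))   # estimates should be close to true_a and true_b
```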

    Next, we consider the general case of model (1.1), where the risk outcome y is driven by a model:

    y=Φ[v+ws], (4.5)

    where the parameter $w$ is formulated as $w=\exp(u)$, with $u=b_0+b_1x_1+\cdots+b_kx_k$. We focus on the following two cases:

    (a) $s\sim N(0,1)$;

    (b) s is standard logistic.

    Given a sample $\{(x_{1i},x_{2i},\ldots,x_{ki},y_i)\}_{i=1}^n$, let $u_i=b_0+b_1x_{1i}+\cdots+b_kx_{ki}$ and $w_i=\exp(u_i)$. The log-likelihood functions for model (4.5), dropping the constant part $\log(U_2)$, are given for cases (a) and (b) respectively by (4.6) and (4.7):

    $LL=\sum_{i=1}^{n}\left\{-\frac{1}{2}(z_i-v_i)^2/w_i^2-u_i\right\},$ (4.6)
    $LL=\sum_{i=1}^{n}\left\{-(z_i-v_i)/w_i-2\log\left[1+\exp\left(-(z_i-v_i)/w_i\right)\right]-u_i\right\}.$ (4.7)

    Recall that a function is log-concave if its logarithm is concave. If a function is concave, a local maximum is a global maximum, and the function is unimodal. This property is useful when searching for maximum likelihood estimates.

    Proposition 4.2. The functions (4.6) and (4.7) are concave as functions of $(a_0,a_1,\ldots,a_k)$. As a function of $(b_0,b_1,\ldots,b_k)$, (4.6) is concave.

    Proof. It is well known that if $f(x)$ is log-concave, then so is $f(Az+b)$, where $Az+b: R^m\to R^1$ is any affine transformation from the $m$-dimensional Euclidean space to the 1-dimensional Euclidean space. For (4.6), the function $-(z-v)^2\exp(-2u)$ is concave as a function of $v$; thus (4.6) is concave as a function of $(a_0,a_1,\ldots,a_k)$. Similarly, this function is concave as a function of $u$, so (4.6) is concave as a function of $(b_0,b_1,\ldots,b_k)$.

    For (4.7), the part $-(z_i-v_i)\exp(-u_i)$, which is linear as a function of $(a_0,a_1,\ldots,a_k)$, can be ignored. For the second part in (4.7), we know that $-\log\{1+\exp[-(z-v)/\exp(u)]\}$, as a function of $v$, is the logarithm of the CDF of a logistic distribution. It is well known that the CDF of a logistic distribution is log-concave. Thus (4.7) is concave with respect to $(a_0,a_1,\ldots,a_k)$.

    In general, the parameters $(a_0,a_1,\ldots,a_k)$ and $(b_0,b_1,\ldots,b_k)$ in model (4.5) can be estimated by the algorithm below.

    Algorithm 4.3. Follow the steps below to estimate parameters of model (4.5):

    (a) Given $(b_0,b_1,\ldots,b_k)$, estimate $(a_0,a_1,\ldots,a_k)$ by maximizing the log-likelihood function;

    (b) Given $(a_0,a_1,\ldots,a_k)$, estimate $(b_0,b_1,\ldots,b_k)$ by maximizing the log-likelihood function;

    (c) Iterate (a) and (b) until convergence is reached.
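
    A minimal Python sketch of Algorithm 4.3 for case (a), maximizing the log-likelihood (4.6), is given below; the function names, the optimizer choice (scipy.optimize.minimize), and the convergence rule are ours and are not prescribed by the paper.

```python
# Alternating maximization for model (4.5) with s ~ N(0,1), Phi = standard normal
# CDF, and w = exp(u): step (a) updates (a0,...,ak), step (b) updates (b0,...,bk).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(a_coef, b_coef, X, z):
    v, u = X @ a_coef, X @ b_coef
    w = np.exp(u)
    return np.sum(0.5 * ((z - v) / w) ** 2 + u)   # negative of (4.6), constants dropped

def fit_alternating(X_fixed, y, n_iter=50, tol=1e-8):
    n = len(y)
    X = np.column_stack([np.ones(n), X_fixed])
    z = norm.ppf(y)
    a_coef = np.zeros(X.shape[1])
    b_coef = np.zeros(X.shape[1])
    prev = np.inf
    for _ in range(n_iter):
        # (a) update (a0,...,ak) given (b0,...,bk)
        a_coef = minimize(lambda a: neg_loglik(a, b_coef, X, z), a_coef).x
        # (b) update (b0,...,bk) given (a0,...,ak)
        b_coef = minimize(lambda b: neg_loglik(a_coef, b, X, z), b_coef).x
        # (c) stop once the objective no longer improves
        cur = neg_loglik(a_coef, b_coef, X, z)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return a_coef, b_coef
```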

    With the interval distributions introduced in this paper, models with a random effect can be fitted for a continuous risk outcome by maximum likelihood approaches assuming an interval distribution. These models provide an alternative regression tool to the Beta regression model and fraction response model, and a tool for tail risk assessment as well.

    The authors are very grateful to the third reviewer for many constructive comments. The first author is grateful to Biao Wu for many valuable conversations. Thanks also go to Clovis Sukam for his critical reading of the manuscript.


    The views expressed in this article are not necessarily those of Royal Bank of Canada and Scotiabank or any of their affiliates. Please direct any comments to Bill Huajian Yang at h_y02@yahoo.ca.



    [1] A. Obenaus, C. J. Yong-Hing, K. A. Tong, G. E. Sarty, A reliable method for measurement and normalization of pediatric hippocampal volumes, J. Pediatr. Res., 50 (2001), 124–132. https://doi.org/10.1203/00006450-200107000-00022 doi: 10.1203/00006450-200107000-00022
    [2] D. Shen, S. Moffat, S. M. Resnick, C. Davatzikos, Measuring size and shape of the hippocampus in MR images using a deformable shape model, Neuroimage, 15 (2002), 422–434. https://doi.org/10.1006/nimg.2001.0987 doi: 10.1006/nimg.2001.0987
    [3] S. Li, F. Shi, F. Pu, X. Li, T. Jiang, S. Xie, Hippocampal shape analysis of Alzheimer disease based on machine learning methods, J. Neuroradiol., 28 (2007), 1339–1345. https://doi.org/10.3174/ajnr.A0620 doi: 10.3174/ajnr.A0620
    [4] J. H. Morra, Z. Tu, L. G. Apostolova, A. E. Green, A. W. Toga, P. M. Thompson, Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation, IEEE Trans. Med. Imaging, 29 (2010), 30–43. https://doi.org/10.1109/TMI.2009.2021941 doi: 10.1109/TMI.2009.2021941
    [5] H. Wang, J. W. Suh, S. R. Das, J. B. Pluta, C. Craige, P. A. Yushkevich, Multi-atlas segmentation with joint label fusion, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2012), 611–623. https://doi.org/10.1109/TPAMI.2012.143 doi: 10.1109/TPAMI.2012.143
    [6] M. Kim, G. Wu, D. Shen, Unsupervised deep learning for hippocampus segmentation in 7.0 Tesla MR images, Int. Workshop Mach. Learn. Med. Imaging, (2013), 1–8. https://doi.org/10.1007/978-3-319-02267-3_1 doi: 10.1007/978-3-319-02267-3_1
    [7] J. Kim, M. C. Valdes-Hernandez, N. A. Royle, J. Park, Hippocampal shape modeling based on a progressive template surface deformation and its verification, IEEE Trans. Med. Imaging, 34 (2015), 1242–1261. https://doi.org/10.1109/TMI.2014.2382581 doi: 10.1109/TMI.2014.2382581
    [8] D. Zarpalas, P. Gkontra, P. Daras, N. Maglaveras, Accurate and fully automatic hippocampus segmentation using subject-specific 3D optimal local maps into a hybrid active contour model, IEEE J. Transl. Eng. Health Med., 2 (2016), 1–16. https://doi.org/10.1109/JTEHM.2014.2297953 doi: 10.1109/JTEHM.2014.2297953
    [9] S. Sri Devi, A. Mano, R. Asha, MRI brain tumor segmentation and feature extraction using GLCM, Int. J. Res. Appl. Sci. Eng. Technol., 6 (2018), 1911–1916. https://doi.org/10.22214/ijraset.2018.1297 doi: 10.22214/ijraset.2018.1297
    [10] V. Dill, P. C. Klein, A. R. Franco, M. S. Pinho, Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters, J. Comput. Biol. Med., 95 (2018), 90–98. https://doi.org/10.1016/j.compbiomed.2018.02.005 doi: 10.1016/j.compbiomed.2018.02.005
    [11] N. Varuna Shree, T. N. R. Kumar, Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network, Brain Inform., 5 (2018), 23–30. https://doi.org/10.1007/s40708-017-0075-5 doi: 10.1007/s40708-017-0075-5
    [12] E. Gibson, W. Li, C. Sudre, L. Fidon, D. I. Shakir, G. Wang, et al., NiftyNet: a deep-learning platform for medical imaging, Comput. Methods Programs Biomed., 158 (2018), 113–122. https://doi.org/10.1016/j.cmpb.2018.01.025 doi: 10.1016/j.cmpb.2018.01.025
    [13] Y. Shao, J. Kim, Y. Gao, Q. Wang, W. Lin, D. Shen, Hippocampal segmentation from longitudinal infant brain MR images via classification-guided boundary regression, IEEE Access, 7 (2019), 33728–33740. https://doi.org/10.1109/ACCESS.2019.2904143 doi: 10.1109/ACCESS.2019.2904143
    [14] A. Basher, K. Y. Choi, J. J. Lee, B. Lee, B. C. Kim, K. H. Lee, et al., Hippocampus localization using a two-stage ensemble Hough convolutional neural network, IEEE Access, 7 (2019), 73436–73447. https://doi.org/10.1109/ACCESS.2019.2920005 doi: 10.1109/ACCESS.2019.2920005
    [15] S. Liu, Y. Wang, X. Yang, B. Lei, L. Liu, S. X. Li, Deep learning in medical ultrasound analysis: a review, Engineering, 5 (2019), 261–275. https://doi.org/10.1016/j.eng.2018.11.020 doi: 10.1016/j.eng.2018.11.020
    [16] A. Gumaei, M. M. Hassan, M. R. Hassan, A. Alelaiwi, G. Fortino, A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification, IEEE Access, 7 (2019), 36266–36273. https://doi.org/10.1109/ACCESS.2019.2904145 doi: 10.1109/ACCESS.2019.2904145
    [17] Y. Shi, K. Cheng, Z. Liu, Hippocampal subfields segmentation in brain MR images using generative adversarial networks, Biomed. Eng. Online, 18 (2019), 1–12. https://doi.org/10.1186/s12938-019-0623-8 doi: 10.1186/s12938-019-0623-8
    [18] A. S. Lundervold, A. Lundervold, An overview of deep learning in medical imaging focusing on MRI, J. Med. Phys., 29 (2019), 102–127. https://doi.org/10.1016/j.zemedi.2018.11.002 doi: 10.1016/j.zemedi.2018.11.002
    [19] S. M. Nisha, A novel computer-aided diagnosis scheme for breast tumor classification, Int. Res. J. Eng. Technol., 7 (2020), 718–724.
    [20] N. Safavian, S. A. H. Batouli, M. A. Oghabian, An automatic level set method for hippocampus segmentation in MR images, Comput. Methods Biomech. Biomed. Eng. Imaging Vis., 8 (2020), 400–410. https://doi.org/10.1080/21681163.2019.1706054 doi: 10.1080/21681163.2019.1706054
    [21] M. Liu, F. Li, H. Yan, K. Wang, Y. Ma, L. Shen, et al., A multi-model deep convolutional neural network for automatic hippocampus segmentation and classification in Alzheimer's disease, Neuroimage, 208 (2020), 116459. https://doi.org/10.1016/j.neuroimage.2019.116459 doi: 10.1016/j.neuroimage.2019.116459
    [22] M. K. Singh, K. K. Singh, A review of publicly available automatic brain segmentation methodologies, machine learning models, recent advancements, and their comparison, Ann. Neurosci., 28 (2021), 82–93. https://doi.org/10.1177/0972753121990 doi: 10.1177/0972753121990
    [23] L. Liu, L. Kuang, Y. Ji, Multimodal MRI brain tumor image segmentation using sparse subspace clustering algorithm, Comput. Math. Methods Med., (2020), 8620403. https://doi.org/10.1155/2020/8620403 doi: 10.1155/2020/8620403
    [24] D. Carmo, B. Silva, C. Yasuda, L. Rittner, R. Lotufo, Hippocampus segmentation on epilepsy and Alzheimer's disease studies with multiple convolutional neural networks, Heliyon, 7 (2021), e06226. https://doi.org/10.1016/j.heliyon.2021.e06226 doi: 10.1016/j.heliyon.2021.e06226
    [25] R. De Feo, E. Hämäläinen, E. Manninen, R. Immonen, J. M. Valverde, X. E. Ndode-Ekane, et al., Convolutional neural networks enable robust automatic segmentation of the rat hippocampus in mri after traumatic brain injury, Front. Neurol., 13 (2022), 820267. https://doi.org/10.3389/fneur.2022.820267 doi: 10.3389/fneur.2022.820267
    [26] M. Nisha, T. Kannan, K. Sivasankari, M. Sabrigiriraj, Automatic hippocampus segmentation model for MRI of human head through semi-supervised generative adversarial networks, Neuroquantology, 20 (2022), 5222–5232. https://doi.org/10.14704/nq.2022.20.6.NQ22528 doi: 10.14704/nq.2022.20.6.NQ22528
    [27] K. S. Chuang, H. L. Tzeng, S. Chen, J. Wu, T. J. Chen, Fuzzy c-means clustering with spatial information for image segmentation, Comput. Med. Imaging Graph., 30 (2006), 9–15. https://doi.org/10.1016/j.compmedimag.2005.10.001 doi: 10.1016/j.compmedimag.2005.10.001
    [28] B. N. Li, C. K. Chui, S. Chang, S. H. Ong, Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation, Comput. Biol. Med., 41 (2011), 1–10. https://doi.org/10.1016/j.compbiomed.2010.10.007 doi: 10.1016/j.compbiomed.2010.10.007
    [29] C. Militello, L. Rundo, M. Dimarco, A. Orlando, V. Conti, R. Woitek, et al., Semi-automated and interactive segmentation of contrast-enhancing masses on breast DCE-MRI using spatial fuzzy clustering, Biomed. Signal Process. Control, 71 (2022), 103113. https://doi.org/10.1016/j.bspc.2021.103113 doi: 10.1016/j.bspc.2021.103113
    [30] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, J. Liang, U-Net++: A nested U-Net architecture for medical image segmentation, in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, (2018), 3–11. https://doi.org/10.1007/978-3-030-00889-5_1
    [31] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., (2016), 770–778. https://doi.org/10.1109/CVPR.2016.90
    [32] G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., (2017), 4700–4708. https://doi.org/10.1109/CVPR.2017.243
    [33] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, et al., Swin transformer: Hierarchical vision transformer using shifted windows, in Proc. IEEE/CVF Int. Conf. Comput. Vis., (2021), 10012–10022. https://doi.org/10.1109/ICCV48922.2021.00986
    [34] J. Hu, L. Shen, G. Sun, Squeeze-and-Excitation networks, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., (2018), 7132–7141. https://doi.org/10.48550/arXiv.1709.01507
    [35] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, et al., Deep high-resolution representation learning for visual recognition, IEEE Trans. Pattern Anal. Mach. Intell., 43 (2020), 3349–3364. https://doi.org/10.1109/TPAMI.2020.2983686 doi: 10.1109/TPAMI.2020.2983686
    [36] Z. Szentimrey, A. Al‐Hayali, S. de Ribaupierre, A. Fenster, E. Ukwatta, Semi‐supervised learning framework with shape encoding for neonatal ventricular segmentation from 3D ultrasound, Med. Phys., 2024. https://doi.org/10.1002/mp.17242 doi: 10.1002/mp.17242
    [37] Z. Wang, C. Ma, Dual-contrastive dual-consistency dual-transformer: A semi-supervised approach to medical image segmentation, in Proc. 2023 IEEE/CVF Int. Conf. Comput. Vis. Workshops, (2023), 870–879. https://doi.org/10.1109/ICCVW60793.2023.00094
    [38] L. Huang, S. Ruan, T. Denœux, Semi-supervised multiple evidence fusion for brain tumor segmentation, Neurocomputing, 535 (2023), 40–52. https://doi.org/10.1016/j.neucom.2023.02.047 doi: 10.1016/j.neucom.2023.02.047
    [39] Z. Wang, I. Voiculescu, Exigent examiner and mean teacher: An advanced 3d cnn-based semi-supervised brain tumor segmentation framework, in Med. Image Learn. Limited Noisy Data: 2nd Int. Workshop MILLanD 2023, (2023), 181–190. https://doi.org/10.1007/978-3-031-44917-8_17
    [40] G. Qu, B. Lu, J. Shi, Z. Wang, Y. Yuan, Y. Xia, et al., Motion-artifact-augmented pseudo-label network for semi-supervised brain tumor segmentation, Phys. Med. Biol., 69 (2024), 5. https://doi.org/10.1088/1361-6560/ad2634 doi: 10.1088/1361-6560/ad2634
    [41] R. A. Hazarika, A. K. Maji, R. Syiem, S. N. Sur, D. Kandar, Hippocampus segmentation using U-net convolutional network from brain magnetic resonance imaging (MRI), J. Digit. Imaging, 35 (2022), 893–909. https://doi.org/10.1007/s10278-022-00613-y doi: 10.1007/s10278-022-00613-y
    [42] D. Ataloglou, A. Dimou, D. Zarpalas, P. Daras, Fast and precise hippocampus segmentation through deep convolutional neural network ensembles and transfer learning, Neuroinformatics, 17 (2019), 563–582. https://doi.org/10.1007/s12021-019-09417-y doi: 10.1007/s12021-019-09417-y
    [43] M. Nisha, T. Kannan, K. Sivasankari, Deep integration model: A robust autonomous segmentation technique for hippocampus in MRI images of human head, Int. J. Health Sci., 6 (2022), 13745–13758. https://doi.org/10.53730/ijhs.v6nS2.8756 doi: 10.53730/ijhs.v6nS2.8756
    [44] N. Allinson, H. Yin, L. Allinson, J. Slack, Advances in Self-Organising Maps, Springer, 2001. https://doi.org/10.1007/978-1-4471-0715-6.
    [45] S. N. Sivanandam, S. Sumathi, S. N. Deepa, Applications of Fuzzy Logic: Introduction to Fuzzy Logic Using MATLAB, Springer, 2007. https://doi.org/10.1007/978-3-540-35781-0_8
    [46] V. Conti, C. Militello, L. Rundo, S. Vitabile, A novel bio-inspired approach for high-performance management in service-oriented networks, IEEE Trans. Emerg. Top. Comput., 9 (2021), 1709–1722. https://doi.org/10.1109/TETC.2020.3018312 doi: 10.1109/TETC.2020.3018312
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)