Research article

Dynamics of a coupled epileptic network with time delay

  • Epilepsy is considered a brain network disease. Epileptic computational models have been developed to simulate the electrophysiological process of seizures. Some studies have shown that epileptic networks based on those models can be used to predict the surgical outcome of patients with drug-resistant epilepsy. Most studies have focused on the causal relationship between electrophysiological signals of different brain regions and its impact on seizure onset, and little is known about how the time delay of electrophysiological signals transmitted between those regions relates to seizure onset. In this study, we proposed an epileptic model with time delay between network nodes and analyzed whether the time delay between nodes of an epileptic network can cause seizure-like events. Our results showed that the time delay between nodes may drive the network from a normal state to a seizure-like event through a Hopf bifurcation. The time delay between nodes of an epileptic computational network alone may induce seizure-like events. Our analysis suggests that the time delay of electrophysiological signals transmitted between different regions may be an important factor in seizure generation, which provides a deeper understanding of epilepsy and a potential new path for epilepsy treatment.

    Citation: Yulin Guan, Xue Zhang. Dynamics of a coupled epileptic network with time delay[J]. Mathematical Modelling and Control, 2022, 2(1): 13-23. doi: 10.3934/mmc.2022003




    Group testing, or pooled testing, was first introduced by Dorfman [1] to identify syphilis infections among U.S. Army personnel during World War II. This approach involves combining specimens (e.g., blood, plasma, urine, swabs) from multiple individuals and conducting a single test to check for infection. According to Dorfman's procedure, if the combined sample tests negative, all individuals in this sample can be confirmed disease-free. Conversely, a positive result necessitates further testing to identify the affected individuals. This strategy gained prominence during the COVID-19 pandemic [2,3,4] and has been applied to detect various infectious diseases, including HIV [5,6], chlamydia and gonorrhea [7], influenza [8], and the Zika virus [9]. The primary motivation for pooled testing lies in its economic efficiency; for instance, the State Hygienic Laboratory at the University of Iowa saved approximately $3.1 million over five years by employing a modified Dorfman protocol for testing chlamydia and gonorrhea among residents of Iowa [10,11].

    Despite its cost-effectiveness, group testing poses significant challenges for statistical analysis due to the absence of individual response data [12]. However, advancements in digital technology have provided access to rich covariate information, including demographic data, electronic health records, genomic data, lifestyle data, physiological monitoring data, imaging data, and environmental variables [13]. Integrating these covariates into various statistical models for group testing has been shown to enhance accuracy and robustness, as evidenced by studies from Mokalled et al. [14], Huang and Warasi [15], Haber et al. [16]. This integration leads to improved estimations of individual risk probabilities, thereby reducing the number of tests required and overall costs.

    In managing covariates, single-index models offer advantages, such as less restrictive assumptions, good interpretability, and adaptability to high-dimensional data [17]. For high-dimensional single-index models, Radchenko [18] proposed a novel estimation method based on L1 regularization, extending it to generalized linear models. Elmezouar et al. [19] developed a functional single index expectile model with a nonparametric estimator to address spatial dependency in financial data, showing strong consistency and practical applicability. Chen and Samworth [20] explored generalized additive models, deriving non-parametric estimators for each additive component by maximizing the likelihood function, and adapted this approach to generalized additive index models. Kereta et al. [21] employed a k-nearest neighbor estimator, enhanced by geodesic metrics, to extend local linear regression for single-index models. However, research on generalized semi-parametric single-index models in high-dimensional contexts remains limited, particularly in group testing applications, which are still underexplored.

    Most current integrations of covariate information with group testing are developed based on parametric regression models. For example, Wang et al. [22] introduced a comprehensive binary regression framework, while McMahan et al. [11] developed a Bayesian regression framework. Gregory et al. [23] adopted an adaptive elastic net method, which remains effective as data dimensionality increases. Ko et al. [24] compared commonly used group testing procedures with group lasso regarding true positive selection in high-dimensional genomic data analysis. Furthermore, nonparametric regression methods have gained traction for applying covariates in group testing. Delaigle and Hall [25] proposed a nonparametric method for estimating conditional probabilities and testing specificity and sensitivity, addressing the unique dilution effects and complex data structures inherent in group testing. Self et al. [26] introduced a Bayesian generalized additive regression method to tackle dilution effects further, while Yuan et al. [12] developed a semiparametric monotone regression model using the expectation-maximization (EM) algorithm to navigate the complexities of group testing data. Zuo et al. [27] proposed a more flexible generalized nonparametric additive model, utilizing B-splines and group lasso methods for model estimation in high-dimensional data.

    This article proposes a generalized single-index group testing model aimed at enhancing flexibility in addressing various nonlinear models and facilitating the selection of important variables. Given the absence of individual disease testing results in group testing data, the EM algorithm is employed to perform the necessary calculations for the model. B-spline functions are utilized to approximate the nonlinear unknown smooth functions, with model parameters estimated by maximizing the likelihood function. In modern group testing, a substantial amount of individual covariate information is typically collected during sample testing. Consequently, a penalty term is incorporated into the likelihood function, promoting the construction of a sparse model and enabling effective variable selection. We apply the method to four group testing strategies: master pool, Dorfman, halving, and array. The method is evaluated using both simulated and real data.

    The remaining sections are organized as follows. Section 2 introduces our model with B-spline approximation, detailing the corresponding algorithm employing the EM algorithm. Section 3 elaborates on the E-step in the EM algorithm, facilitating the acceleration of our algorithm's convergence. Sections 4 and 5 present comprehensive simulations and real data application, demonstrating the method's robust performance. Finally, we conclude our findings and provide some discussion in Section 6.

    Consider a dataset comprising n individuals. For each $i\in\{1,2,\ldots,n\}$, let the true disease status of the $i$-th individual be denoted by $\tilde{Y}_i\in\{0,1\}$, where $\tilde{Y}_i=1$ indicates disease presence, and $\tilde{Y}_i=0$ indicates absence. Additionally, the dataset includes covariate information for each individual, represented as $X_i=(X_{i1},\ldots,X_{iq_n})^{\mathrm{T}}\in\mathbb{R}^{q_n}$, where $\mathbb{R}^{q_n}$ denotes the $q_n$-dimensional real vector space. We assume the number of covariates $q_n$ is high-dimensional.

    Let the risk probability for the $i$-th individual be defined as $p_i=\Pr(\tilde{Y}_i=1\mid X_i)$, where $i\in\{1,2,\ldots,n\}$. In many cases, the influence of covariates may be nonlinear; imposing linearity can result in inaccurate estimations. This study explores nonlinear scenarios, assuming $p_i$ follows a flexible logistic single-index model, expressed as

    $$\Pr(\tilde{Y}_i=1\mid X_i)=\frac{\exp[g(X_i^{\mathrm{T}}\beta)]}{1+\exp[g(X_i^{\mathrm{T}}\beta)]}, \tag{2.1}$$

    where $\beta=(\beta_1,\beta_2,\ldots,\beta_{q_n})^{\mathrm{T}}\in\mathbb{R}^{q_n}$ represents the unknown parameters, and $g(\cdot)$ is an unknown smooth function capturing the relationship between covariates and risk probabilities.
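    For concreteness, the following minimal Python sketch evaluates the risk probability in (2.1) for a given index function; the particular $g$ and coefficient vector used here are illustrative stand-ins (their form mirrors Example 4.1 below), not quantities estimated by the method.

```python
import numpy as np

def risk_probability(X, beta, g):
    """Pr(Y_i = 1 | X_i) under the logistic single-index model (2.1).

    X    : (n, q_n) covariate matrix
    beta : (q_n,) index coefficients (unit L2 norm for identifiability)
    g    : callable smooth link applied to the single index
    """
    eta = g(X @ beta)                      # g(X_i^T beta)
    return np.exp(eta) / (1.0 + np.exp(eta))

# Illustrative use with a g of the same form as Example 4.1 below
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 3))
beta = np.array([3.0, 2.5, 0.0]) / np.sqrt(15.25)   # unit-norm, sparse
print(risk_probability(X, beta, lambda u: np.exp(u) - 7.0))
```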

    In semiparametric single-index models, the true parameters are generally considered non-identifiable without imposed constraints. To ensure the identifiability of $\beta$, we impose a classical constraint: $\beta_1=\sqrt{1-\|\beta_{-1}\|_2^2}$, where $\beta_{-1}=(\beta_2,\beta_3,\ldots,\beta_{q_n})^{\mathrm{T}}\in\mathbb{R}^{q_n-1}$, and $\|\cdot\|_2$ denotes the $L_2$-norm. Note that both the function $g(\cdot)$ and the coefficient $\beta$ in the single-index model are unknown. The $L_2$-norm constraint $\|\beta\|_2=1$ is crucial for the identifiability of $\beta$, as shown by Carroll et al. [28], Zhu et al. [29], Lin et al. [30], Cui et al. [31], and Guo et al. [32]. We assume that the true parameter $\beta$ is sparse, defining the true model as $\mathcal{M}=\{j\in\{1,2,\ldots,q_n\}:\beta_j\neq 0\}$.

    For $i\in\{1,2,\ldots,n\}$, $\tilde{Y}_i$ follows a Binomial distribution with parameter $p_i$, denoted as $\tilde{Y}_i\sim\mathrm{Binom}(1,p_i)$. In traditional single-index model studies, the true statuses $\tilde{Y}=\{\tilde{Y}_i, i=1,2,\ldots,n\}$ are directly observable. However, in group testing, $\tilde{Y}$ is unobservable [33]. This paper investigates parameter estimation and statistical inference for single-index models based on group testing data. Moreover, if a group test result is positive, further testing is required to identify infected individuals. These results may depend on shared characteristics, leading to correlations within group test outcomes, complicating the modeling.

    In group testing, we partition the $n$ individuals into $J$ groups, denoted as $P_{1,1},P_{2,1},\ldots,P_{J,1}$. Here, $P_{j,1}$ represents the initial index set of individuals for the $j$-th group, ensuring $\bigcup_{j=1}^{J}P_{j,1}=\{1,2,\ldots,n\}$. For $j\in\{1,2,\ldots,J\}$, if any testing result for $P_{j,1}$ is positive, further testing may be warranted. Define $Z_j=\{Z_{j,l}, l=1,2,\ldots,L_j\}$ as the set of testing outcomes for the $j$-th group, where $L_j$ denotes the total number of tests conducted within the $j$-th group. Each $Z_{j,l}\in\{0,1\}$, where $Z_{j,l}=0$ indicates a negative result and $Z_{j,l}=1$ indicates a positive result. If $Z_{j,1}=0$, then $L_j=1$; otherwise, $L_j\geq 1$. Let $P_j=\{P_{j,l}, l=1,2,\ldots,L_j\}$, where $P_{j,l}$ corresponds to the individuals associated with $Z_{j,l}$. Define $\tilde{Z}_j=\{\tilde{Z}_{j,l}, l=1,2,\ldots,L_j\}$ as the true statuses corresponding to $Z_j$. The true statuses of individuals determine the group's true status, defined as $\tilde{Z}_{j,l}=I\bigl(\sum_{i\in P_{j,l}}\tilde{Y}_i>0\bigr)$, where $I(\cdot)$ denotes the indicator function.

    In practical applications, measurement error of the test kits exists. We define $S_e=\Pr(Z_{j,l}=1\mid\tilde{Z}_{j,l}=1)$ as sensitivity, representing the probability of correctly identifying positive samples, and $S_p=\Pr(Z_{j,l}=0\mid\tilde{Z}_{j,l}=0)$ as specificity, denoting the probability of correctly identifying negative samples, where $l\in\{1,2,\ldots,L_j\}$ and $j\in\{1,2,\ldots,J\}$. According to the definitions of $S_e$ and $S_p$, given the true status $\tilde{Z}_{j,l}$, the group's testing result satisfies $Z_{j,l}\mid\tilde{Z}_{j,l}\sim\mathrm{Binom}\bigl(1,\,S_e^{\tilde{Z}_{j,l}}(1-S_p)^{1-\tilde{Z}_{j,l}}\bigr)$.

    Our approach is based on two widely accepted fundamental assumptions in group testing. The first assumption is that $S_e$ and $S_p$ are independent of group size, supported by various studies [34,35,36,37]. The second assumption posits that, given the true statuses of individuals in the $j$-th group $\{\tilde{Y}_i, i\in P_{j,1}\}$, the group's testing outcomes are mutually independent, as supported by previous research [23,34,35].

    We apply our method to four group testing methods: master pool testing, Dorfman testing, halving testing, and array testing. Figure 1 illustrates the four procedures:

    (a) Master pool testing: a group of individuals (e.g., $P_{j,1}$ consisting of individuals 1, 2, 3, and 4) is tested as a whole to obtain the group testing result $Z_{j,1}$.

    (b) Dorfman testing: the same initial group test as in master pool testing is conducted; if the result is positive ($Z_{j,1}=1$), each individual in the group is then tested separately to obtain the individual testing results $Z_{j,2}$, $Z_{j,3}$, $Z_{j,4}$, and $Z_{j,5}$.

    (c) Halving testing: the entire group (e.g., $P_{j,1}$) is tested as a whole; if the result is positive ($Z_{j,1}=1$), the group is divided into two subgroups (e.g., $P_{j,2}$ and $P_{j,3}$) for subgroup testing, and if a subgroup result is positive (e.g., $Z_{j,2}=1$), the individuals in that subgroup are then tested individually.

    (d) Array testing: multiple individuals (e.g., 16 individuals) are arranged in an array for group testing, yielding multiple group testing results such as $Z_{j,1}$; if a specific group testing result is positive (e.g., $Z_{j,1}=1$), further subgroup testing is performed (e.g., obtaining results $Z_{j,2}$, $Z_{j,3}$), and if the group testing results for both the row and the column where an individual is located are positive (e.g., $Z_{j,3}=Z_{j,4}=Z_{j,7}=1$), the corresponding individuals (e.g., the 6-th and 10-th individuals) are then tested individually.

    Figure 1.  A flowchart of the four group testing procedures.
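    To make the data-generating mechanism concrete, the sketch below simulates one group under Dorfman testing with misclassification; it relies on the constant sensitivity/specificity and conditional-independence assumptions stated above, and the function name and interface are ours rather than part of the paper.

```python
import numpy as np

def simulate_dorfman_group(y_true, se=0.98, sp=0.98, rng=None):
    """Simulate Dorfman testing for one group given true statuses y_true (0/1).

    Returns the pooled result Z_{j,1} and, if the pool tests positive,
    the individual retest results (None otherwise).
    """
    rng = np.random.default_rng() if rng is None else rng
    z_tilde = int(np.max(y_true))                    # true pool status
    z_pool = rng.binomial(1, se if z_tilde else 1.0 - sp)
    if z_pool == 0:
        return z_pool, None
    # each individual is retested with the same sensitivity/specificity
    p_positive = np.where(np.asarray(y_true) == 1, se, 1.0 - sp)
    return z_pool, rng.binomial(1, p_positive)

print(simulate_dorfman_group([0, 0, 1, 0]))
```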

    Due to the nature of group testing, the true statuses of individuals, denoted as $\tilde{Y}$, remain unknown. Our objective is to estimate $\mathcal{M}$, $\beta$, and $g(\cdot)$ based on the observed data $Z=\{Z_j, j=1,2,\ldots,J\}$ and the covariate information $X=(X_1,X_2,\ldots,X_n)^{\mathrm{T}}\in\mathbb{R}^{n\times q_n}$ to ascertain individual risk probabilities. The likelihood function based on the observed data $Z$ is defined as

    $$P(Z\mid X)=\sum_{\tilde{Y}\in\{0,1\}^n}P(Z\mid\tilde{Y})\,P(\tilde{Y}\mid X), \tag{2.2}$$

    where

    $$P(Z\mid\tilde{Y})=\prod_{j=1}^{J}\prod_{l=1}^{L_j}P(Z_{j,l}\mid\tilde{Y}_{P_{j,l}}),$$

    and $\tilde{Y}_{P_{j,l}}=\{\tilde{Y}_i, i\in P_{j,l}\}$ represents the set of true statuses for the individuals in $P_{j,l}$. Furthermore, the conditional probability $P(Z_{j,l}\mid\tilde{Y}_{P_{j,l}})$ is expressed as

    $$P(Z_{j,l}\mid\tilde{Y}_{P_{j,l}})=\left\{S_e^{\tilde{Z}_{j,l}}(1-S_p)^{1-\tilde{Z}_{j,l}}\right\}^{Z_{j,l}}\left\{(1-S_e)^{\tilde{Z}_{j,l}}S_p^{1-\tilde{Z}_{j,l}}\right\}^{1-Z_{j,l}}.$$

    The likelihood function for the true disease statuses $\tilde{Y}$ can be written as

    $$P(\tilde{Y}\mid X)=\prod_{i=1}^{n}p_i^{\tilde{Y}_i}(1-p_i)^{1-\tilde{Y}_i}.$$

    Combining this with the logistic single-index model defined in (2.1), we obtain the log-likelihood function for $\tilde{Y}$:

    $$\ln P(\tilde{Y}\mid X)=\sum_{i=1}^{n}\left\{\tilde{Y}_i\,g(X_i^{\mathrm{T}}\beta)-\ln\left(1+\exp[g(X_i^{\mathrm{T}}\beta)]\right)\right\}. \tag{2.3}$$

    Since the smooth function $g(\cdot)$ is unknown, we approximate it using B-spline functions. Let the support interval of $g(\cdot)$ be $[a,b]$. We partition $[a,b]$ at points $a=d_0<d_1<\cdots<d_N<d_{N+1}=b$ into several segments; the $d_k$ are referred to as knots or internal nodes. This division generates the subintervals $I_k=[d_k,d_{k+1})$ for $0\le k\le N-1$ and $I_N=[d_N,d_{N+1}]$, ensuring that

    $$\frac{\max_{0\le k\le N}|d_k-d_{k+1}|}{\min_{0\le k\le N}|d_k-d_{k+1}|}\le M,$$

    where $M\in(0,\infty)$. The B-spline basis functions of order $q$ are denoted as $\Phi(\cdot)=(\phi_1(\cdot),\phi_2(\cdot),\ldots,\phi_S(\cdot))^{\mathrm{T}}\in\mathbb{R}^S$, with $S=N+q$. Thus, $g(\cdot)$ can be approximated as

    $$g(\cdot)\approx\sum_{s=1}^{S}\phi_s(\cdot)\gamma_s,$$

    where the $\gamma_s$ are the spline coefficients to be estimated [38]. Denote $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_S)^{\mathrm{T}}\in\mathbb{R}^S$. We approximate $g(X_i^{\mathrm{T}}\beta)$ as

    $$g(X_i^{\mathrm{T}}\beta)\approx\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma,$$

    where $\Phi(X_i^{\mathrm{T}}\beta)=(\phi_1(X_i^{\mathrm{T}}\beta),\phi_2(X_i^{\mathrm{T}}\beta),\ldots,\phi_S(X_i^{\mathrm{T}}\beta))^{\mathrm{T}}$. Therefore, we approximate $p_i$ by a spline function, and denote the spline approximation of $p_i$ as $p_{iB}$, which is defined as follows:

    $$p_{iB}=\frac{\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma]}{1+\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma]}. \tag{2.4}$$

    In the following, we use the spline approximation $p_{iB}$ of $p_i$ to construct the log-likelihood function and the objective function in the subsequent EM algorithm. Thus, the log-likelihood function (2.3) for $\tilde{Y}$ can be reformulated as

    $$\ln P_B(\tilde{Y}\mid X)=\sum_{i=1}^{n}\left\{\tilde{Y}_i\,\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma-\ln\left(1+\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma]\right)\right\}.$$

    Furthermore, the target likelihood function (2.2) can be represented as

    $$P_B(Z\mid X)=\sum_{\tilde{Y}\in\{0,1\}^n}P(Z\mid\tilde{Y})\,P_B(\tilde{Y}\mid X).$$

    By employing spline approximation, we transform the problem of estimating $\beta_{-1}$ and $g(\cdot)$ into that of estimating $\beta_{-1}$ and $\gamma$.
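    As an illustration of the spline approximation in (2.4), the sketch below builds a B-spline basis with SciPy and evaluates $p_{iB}$; the knot placement, spline degree, and support interval are illustrative assumptions, and the helper names are ours.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(u, inner_knots, degree=3, a=-3.0, b=3.0):
    """Evaluate a B-spline basis Phi(u), one row per point in u.

    Boundary knots are repeated degree + 1 times, so the number of basis
    functions is S = len(inner_knots) + degree + 1 (i.e., S = N + q).
    """
    t = np.concatenate(([a] * (degree + 1), inner_knots, [b] * (degree + 1)))
    S = len(t) - degree - 1
    basis = np.empty((len(u), S))
    for s in range(S):
        coef = np.zeros(S)
        coef[s] = 1.0
        basis[:, s] = BSpline(t, coef, degree, extrapolate=False)(u)
    return np.nan_to_num(basis)        # indices outside [a, b] contribute zero

def p_iB(X, beta, gamma, inner_knots):
    """Spline-based risk probabilities p_iB from (2.4)."""
    Phi = bspline_basis(X @ beta, inner_knots)    # n x S spline design matrix
    return 1.0 / (1.0 + np.exp(-(Phi @ gamma)))   # logistic of Phi^T gamma
```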

    For high-dimensional group testing data, we aim to estimate $\beta_{-1}$ using a penalized approach within the single-index model framework. The penalized log-likelihood function is defined as follows:

    $$\ln P_B(Z\mid X)-\sum_{j=2}^{q_n}P_\lambda(\beta_j), \tag{2.5}$$

    where $P_\lambda(\cdot)$ is the penalty function and $\lambda$ is a tuning parameter. We consider three common penalty functions: LASSO [39], SCAD [40], and MCP [41]. Specifically, for LASSO, $P_\lambda(x)=\lambda|x|$. For SCAD, it is defined as

    $$P_\lambda(x)=\begin{cases}\lambda|x|, & \text{if } |x|\le\lambda,\\[4pt] \dfrac{-x^2+2\delta\lambda|x|-\lambda^2}{2(\delta-1)}, & \text{if } \lambda<|x|\le\delta\lambda,\\[4pt] \dfrac{(\delta+1)\lambda^2}{2}, & \text{if } |x|>\delta\lambda,\end{cases}$$

    where $\delta>2$. For MCP, the penalty function is given by

    $$P_\lambda(x)=\begin{cases}\lambda|x|-\dfrac{x^2}{2\delta}, & \text{if } |x|\le\delta\lambda,\\[4pt] \dfrac{1}{2}\delta\lambda^2, & \text{if } |x|>\delta\lambda,\end{cases}$$

    with $\delta>1$. The following section will detail the parameter estimation process.
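    For reference, the three penalty functions can be coded directly from the definitions above; the following vectorized sketch uses our own function names and the default $\delta$ values adopted later in the simulations.

```python
import numpy as np

def lasso_penalty(x, lam):
    """LASSO: P_lambda(x) = lambda * |x|."""
    return lam * np.abs(x)

def scad_penalty(x, lam, delta=3.7):
    """SCAD penalty with delta > 2 (3.7 as in Fan and Li)."""
    ax = np.abs(x)
    middle = (-ax**2 + 2 * delta * lam * ax - lam**2) / (2 * (delta - 1))
    return np.where(ax <= lam, lam * ax,
                    np.where(ax <= delta * lam, middle, (delta + 1) * lam**2 / 2))

def mcp_penalty(x, lam, delta=2.0):
    """MCP penalty with delta > 1 (2 as used later in the simulations)."""
    ax = np.abs(x)
    return np.where(ax <= delta * lam, lam * ax - ax**2 / (2 * delta),
                    0.5 * delta * lam**2)
```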

    The penalized log-likelihood function (2.5) does not involve the individual latent statuses $\tilde{Y}$. The complete-data penalized log-likelihood function can be expressed as

    $$\ln P_B(Z,\tilde{Y}\mid X)-\sum_{j=2}^{q_n}P_\lambda(\beta_j)=\ln P(Z\mid\tilde{Y})+\ln P_B(\tilde{Y}\mid X)-\sum_{j=2}^{q_n}P_\lambda(\beta_j). \tag{2.6}$$

    Notably, $\ln P(Z\mid\tilde{Y})$ depends solely on the known parameters $S_e$ and $S_p$, allowing us to disregard it in the computations. The presence of the latent variable $\tilde{Y}$ complicates direct maximization of the complete-data penalized log-likelihood function (2.6). Therefore, we employ the EM algorithm, comprising two steps: the expectation (E) step and the maximization (M) step.

    In the E-step, given the observed data $Z$ and the parameters from the $t$-th iteration $(\beta_{-1}^{(t)},\gamma^{(t)})$, we calculate the following function:

    $$\begin{aligned}S^{(t)}(\beta_{-1},\gamma)&=E\left\{\sum_{i=1}^{n}\left\{\tilde{Y}_i\,\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma-\ln\left(1+\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma]\right)\right\}\,\Big|\,Z,\beta_{-1}^{(t)},\gamma^{(t)}\right\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j)\\&=\sum_{i=1}^{n}\left\{w_i^{(t)}\,\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma-\ln\left(1+\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma]\right)\right\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j),\end{aligned} \tag{2.7}$$

    where $w_i^{(t)}=E[\tilde{Y}_i\mid Z,\gamma^{(t)},\beta_{-1}^{(t)}]$, $i=1,2,\ldots,n$. The calculation of the $w_i^{(t)}$ varies among the four group testing methods, which will be discussed in Section 3.

    In the M-step, we update $\beta_{-1}^{(t+1)}$ and $\gamma^{(t+1)}$, respectively. Initially, we update $\gamma^{(t+1)}$ by maximizing

    $$S^{(t)}(\beta_{-1}^{(t)},\gamma)=\sum_{i=1}^{n}\left\{w_i^{(t)}\,\Phi(X_i^{\mathrm{T}}\beta^{(t)})^{\mathrm{T}}\gamma-\ln\left(1+\exp[\Phi(X_i^{\mathrm{T}}\beta^{(t)})^{\mathrm{T}}\gamma]\right)\right\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j^{(t)}). \tag{2.8}$$

    Subsequently, we maximize $S^{(t)}(\beta_{-1},\gamma^{(t+1)})$ to update the parameters $\beta_{-1}^{(t+1)}$:

    $$S^{(t)}(\beta_{-1},\gamma^{(t+1)})=\sum_{i=1}^{n}\left\{w_i^{(t)}\,\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma^{(t+1)}-\ln\left(1+\exp[\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma^{(t+1)}]\right)\right\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j). \tag{2.9}$$

    Given that $\beta_{-1}$ appears inside each B-spline basis function $\phi_s(X_i^{\mathrm{T}}\beta)$, direct iteration presents challenges. Let $\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta)=\Phi(X_i^{\mathrm{T}}\beta)^{\mathrm{T}}\gamma^{(t+1)}$. We apply the approach of Guo et al. [42], approximating $\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta)$ via a first-order Taylor expansion

    $$\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta)\approx\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta^{(t)})+\tilde{g}^{(t)\prime}(X_i^{\mathrm{T}}\beta^{(t)})\,X_i^{\mathrm{T}}J(\beta^{(t)})(\beta_{-1}-\beta_{-1}^{(t)}),$$

    where $J(\beta)=\partial\beta/\partial\beta_{-1}=\bigl(-\beta_{-1}/\sqrt{1-\|\beta_{-1}\|_2^2},\ I_{q_n-1}\bigr)^{\mathrm{T}}$ represents the Jacobian matrix of size $q_n\times(q_n-1)$ and $I_{q_n-1}$ denotes the $(q_n-1)$-dimensional identity matrix. This approximation is incorporated into $S^{(t)}(\beta_{-1},\gamma^{(t+1)})$ to maximize the expression and update $\beta_{-1}^{(t+1)}$. Therefore, we approximate $S^{(t)}(\beta_{-1},\gamma^{(t+1)})$ by $\tilde{S}^{(t)}(\beta_{-1},\gamma^{(t+1)})$ as follows:

    $$\begin{aligned}\tilde{S}^{(t)}(\beta_{-1},\gamma^{(t+1)})&=\sum_{i=1}^{n}\left\{w_i^{(t)}\,\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta)-\ln\left(1+\exp[\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta)]\right)\right\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j)\\ &=\sum_{i=1}^{n}\Bigl\{w_i^{(t)}\bigl[\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta^{(t)})+\tilde{g}^{(t)\prime}(X_i^{\mathrm{T}}\beta^{(t)})X_i^{\mathrm{T}}J(\beta^{(t)})(\beta_{-1}-\beta_{-1}^{(t)})\bigr]\\ &\qquad -\ln\Bigl(1+\exp\bigl[\tilde{g}^{(t)}(X_i^{\mathrm{T}}\beta^{(t)})+\tilde{g}^{(t)\prime}(X_i^{\mathrm{T}}\beta^{(t)})X_i^{\mathrm{T}}J(\beta^{(t)})(\beta_{-1}-\beta_{-1}^{(t)})\bigr]\Bigr)\Bigr\}-\sum_{j=2}^{q_n}P_\lambda(\beta_j). \end{aligned}\tag{2.10}$$

    We employ stochastic gradient descent [43] and coordinate descent [44] to update $\gamma$ and $\beta$, respectively. Let $\hat{\gamma}$ and $\hat{\beta}_{-1}$ denote the estimated parameters, and let $\hat{\mathcal{M}}=\{j\in\{1,2,\ldots,q_n\}:\hat{\beta}_j\neq 0\}$ represent the estimated model. Furthermore, $\hat{\gamma}$ and $\hat{\beta}_{-1}$ can be used to calculate individual risk probabilities and guide subsequent testing strategies. In summary, the EM algorithm offers a structured approach to handle the latent variable $\tilde{Y}$ and estimate the model parameters. The detailed steps of this method are summarized in Algorithm 1.

    Algorithm 1: Regularized single-index model for group testing.
    Input: $Z$, $X$, $t_{\max}$, and initialization $(\beta_{-1}^{(0)},\gamma^{(0)})$.
    For: $t=0,1,2,\ldots,t_{\max}$

        ● Step 1 (E-step): Given the parameters $(\beta_{-1}^{(t)},\gamma^{(t)})$ and $Z$, calculate the conditional expectation $S^{(t)}(\beta_{-1},\gamma)$ in (2.7).
        ● Step 2 (M-step): Update the iterative parameters $\beta_{-1}^{(t+1)}$ and $\gamma^{(t+1)}$ in two substeps:
            1. Update $\gamma^{(t+1)}$ by maximizing $S^{(t)}(\beta_{-1}^{(t)},\gamma)$ in (2.8).
            2. Update $\beta_{-1}^{(t+1)}$ by maximizing $\tilde{S}^{(t)}(\beta_{-1},\gamma^{(t+1)})$ in (2.10).

    End for: Repeat Steps 1 and 2 until the parameters converge or the maximum number of iterations $t_{\max}$ is reached.
    Output: The estimates $\hat{\beta}_{-1}$ and $\hat{\gamma}$.
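    A high-level sketch of Algorithm 1 is given below. The callables `e_step_weights`, `update_gamma`, and `update_beta` stand in for the conditional-expectation formulas of Section 3 and the penalized maximizations in (2.8) and (2.10); they, the convergence rule, and the normalization step are our assumptions rather than the paper's exact implementation.

```python
import numpy as np

def em_single_index_gt(Z, X, e_step_weights, update_gamma, update_beta,
                       beta_init, gamma_init, t_max=100, tol=1e-6):
    """Skeleton of Algorithm 1 (penalized EM for the single-index model).

    e_step_weights(Z, X, beta, gamma) -> w_i = E[Y_i | Z] at current parameters
    update_gamma(w, X, beta, gamma)   -> gamma maximizing (2.8)
    update_beta(w, X, beta, gamma)    -> beta  maximizing the linearized (2.10)
    """
    beta = np.asarray(beta_init, dtype=float)
    gamma = np.asarray(gamma_init, dtype=float)
    for t in range(t_max):
        w = e_step_weights(Z, X, beta, gamma)            # E-step
        gamma_new = update_gamma(w, X, beta, gamma)      # M-step, substep 1
        beta_new = update_beta(w, X, beta, gamma_new)    # M-step, substep 2
        beta_new = beta_new / np.linalg.norm(beta_new)   # unit-norm identifiability
        converged = np.linalg.norm(beta_new - beta) < tol
        beta, gamma = beta_new, gamma_new
        if converged:
            break
    return beta, gamma
```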

    Implementing Algorithm 1 requires deriving formulas to calculate the conditional expectations of the individuals' true statuses. These expressions are essential for the effective application of the EM algorithm in various testing scenarios. Common group testing methods include master pool testing, Dorfman testing, halving testing, and array testing. We derive the conditional expectation formulas for these methods under our methodological framework, which facilitates the subsequent calculations.

    For master pool testing, samples are divided into $J$ distinct groups, with each sample assigned to only one group, and each group undergoes a single test without subsequent testing. When the $i$-th individual is assigned to the $j$-th group, consider two cases for $w_i^{(t)}$:

    When $Z_j=0$,

    $$w_i^{(t)}=\frac{P(\tilde{Y}_i=1,Z_j=0)}{P(Z_j=0)}=\frac{P(Z_j=0\mid\tilde{Y}_i=1)\,P(\tilde{Y}_i=1)}{P(Z_j=0)}.$$

    Due to

    $$\begin{aligned}P(Z_j=1)&=P(Z_j=1\mid\tilde{Z}_j=1)P(\tilde{Z}_j=1)+P(Z_j=1\mid\tilde{Z}_j=0)P(\tilde{Z}_j=0)\\&=S_e\Bigl[1-\prod_{i\in P_j}(1-p_{iB}^{(t)})\Bigr]+(1-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})\\&=S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)}),\end{aligned}$$

    let $\Delta_j=S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})$, where $p_{iB}$ is the spline approximation of $p_i$ in (2.4). Therefore,

    $$P(Z_j=0)=1-\Bigl[S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})\Bigr]=1-\Delta_j.$$

    Then,

    $$w_i^{(t)}=\frac{(1-S_e)\,p_{iB}^{(t)}}{1-\bigl[S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})\bigr]}=\frac{(1-S_e)\,p_{iB}^{(t)}}{1-\Delta_j}.$$

    When $Z_j=1$,

    $$w_i^{(t)}=P(\tilde{Y}_i=1\mid Z_j=1)=\frac{P(Z_j=1\mid\tilde{Y}_i=1)\,P(\tilde{Y}_i=1)}{P(Z_j=1)}=\frac{S_e\,p_{iB}^{(t)}}{S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})}=\frac{S_e\,p_{iB}^{(t)}}{\Delta_j}.$$

    In conclusion,

    $$w_i^{(t)}=\begin{cases}P(\tilde{Y}_i=1\mid Z_j=0)=(1-S_e)\,p_{iB}^{(t)}/(1-\Delta_j), & \text{if } Z_j=0,\\[4pt] P(\tilde{Y}_i=1\mid Z_j=1)=S_e\,p_{iB}^{(t)}/\Delta_j, & \text{if } Z_j=1.\end{cases}$$
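    The master pool expressions above translate directly into code; a minimal sketch under the paper's notation (function and variable names are ours) is:

```python
import numpy as np

def master_pool_weights(z_j, p_group, se=0.98, sp=0.98):
    """Conditional expectations w_i = E[Y_i | Z_j] for one master pool.

    z_j     : observed pooled result (0 or 1) for group j
    p_group : current spline-based probabilities p_iB for individuals in P_j
    """
    p_group = np.asarray(p_group, dtype=float)
    prod_neg = np.prod(1.0 - p_group)             # prod_{i in P_j} (1 - p_iB)
    delta_j = se + (1.0 - se - sp) * prod_neg     # Delta_j = P(Z_j = 1)
    if z_j == 1:
        return se * p_group / delta_j
    return (1.0 - se) * p_group / (1.0 - delta_j)
```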

    We apply our method to four group testing algorithms: master pool testing, Dorfman testing, halving testing, and array testing. For the other three algorithms, detailed expressions can be found in Appendix C. Using these expressions, we apply the EM algorithm to estimate the model parameters.

    In this section, we assess the performance of the proposed method using simulated datasets. The generation of covariates follows the approach described by Guo et al. [42]. Specifically, the covariates $X\in\mathbb{R}^{n\times q_n}$ are drawn from a truncated multivariate normal distribution. We first generate covariates from $N(0,\Sigma)$, where $\Sigma\in\mathbb{R}^{q_n\times q_n}$ and $\Sigma_{ij}=0.5^{|i-j|}$ for $1\le i,j\le q_n$. These covariates are then truncated to the range $(-2,2)$ to obtain $X$. We consider logistic single-index models to describe $p_i=\Pr(\tilde{Y}_i=1\mid X_i)$, with the function $g(X_i^{\mathrm{T}}\beta)$ in model (2.1) defined as follows.

    Example 4.1. We set $n=500$ and $\beta=(3/\sqrt{15.25},\,2.5/\sqrt{15.25},\,0,\ldots,0)^{\mathrm{T}}$. We consider two scenarios: $q_n=50$ and $q_n=100$. The model is described as follows:

    $$g(X_i^{\mathrm{T}}\beta)=\exp(X_i^{\mathrm{T}}\beta)-7.$$

    Under this setting, the disease prevalence is approximately 8.93%.

    Example 4.2. We set $n=1000$ and $\beta=(1/\sqrt{3},\,1/\sqrt{3},\,1/\sqrt{3},\,0,\ldots,0)^{\mathrm{T}}$. We consider two scenarios: $q_n=100$ and $q_n=500$. The model is described as follows:

    $$g(X_i^{\mathrm{T}}\beta)=X_i^{\mathrm{T}}\beta\,(1-X_i^{\mathrm{T}}\beta)+\exp(X_i^{\mathrm{T}}\beta)-6.$$

    In this example, the disease prevalence is approximately 11.41%.

    Example 4.3. We set $q_n=50$ and $\beta=(9/\sqrt{181},\,8/\sqrt{181},\,6/\sqrt{181},\,0,\ldots,0)^{\mathrm{T}}$. We consider two scenarios: $n=500$ and $n=1000$. The model is described as follows:

    $$g(X_i^{\mathrm{T}}\beta)=X_i^{\mathrm{T}}\beta\,(1-X_i^{\mathrm{T}}\beta)+0.5\sin\!\left(\frac{\pi X_i^{\mathrm{T}}\beta}{2}\right)-6.$$

    In this example, the disease prevalence is approximately 9.42%.

    Example 4.4. We set $q_n=100$ and $\beta=(0.5,\,0.5,\,0.5,\,0.5,\,0,\ldots,0)^{\mathrm{T}}$. Two scenarios are considered: $n=750$ and $n=1000$. The model is described as follows:

    $$g(X_i^{\mathrm{T}}\beta)=X_i^{\mathrm{T}}\beta\,(1-X_i^{\mathrm{T}}\beta)+\exp(X_i^{\mathrm{T}}\beta)+0.1\sin\!\left(\frac{\pi X_i^{\mathrm{T}}\beta}{2}\right)-6.$$

    In this scenario, the disease prevalence is approximately 10.32%.

    In our simulation study, we employed four group testing algorithms: master pool testing (MPT), Dorfman testing (DT), halving testing (HT), and array testing (AT) to evaluate the model. For MPT, DT, and HT, the group size was set to 4, while in AT, individuals were arranged in a $4\times 4$ array. Both sensitivity and specificity were fixed at $S_e=S_p=0.98$. Based on the methodologies of Fan and Li [40] and Zhang [41], we set $\delta$ values of 3.7 and 2 for SCAD and MCP, respectively. Each scenario was simulated $B=100$ times, where $\hat{\beta}^{[b]}$ denotes the estimated $\beta$ in the $b$-th simulation, with $b\in\{1,2,\ldots,B\}$.
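    The covariate-generation scheme described above (draws from $N(0,\Sigma)$ with $\Sigma_{ij}=0.5^{|i-j|}$, truncated to $(-2,2)$) can be sketched as follows; redrawing out-of-range entries from the standard normal marginal is our reading of "truncated", not a detail given in the paper.

```python
import numpy as np

def generate_covariates(n, q_n, rng=None):
    """Draw an n x q_n covariate matrix: N(0, Sigma) with Sigma_ij = 0.5^|i-j|,
    then redraw any entry outside (-2, 2) from N(0, 1) until all are in range
    (our componentwise reading of the truncation)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.arange(q_n)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-type covariance
    L = np.linalg.cholesky(Sigma)
    X = rng.standard_normal((n, q_n)) @ L.T
    out = np.abs(X) >= 2
    while out.any():
        X[out] = rng.standard_normal(out.sum())
        out = np.abs(X) >= 2
    return X
```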

    Following the approach of Guan et al. [45], we measured the estimation accuracy of $\hat{\beta}_j$ $(j=1,2,3,4)$ using the mean squared error (MSE), defined as

    $$\mathrm{MSE}=\frac{1}{B}\sum_{b=1}^{B}\bigl(\beta_j-\hat{\beta}_j^{[b]}\bigr)^2,\qquad j=1,2,3,4.$$

    We utilized the average mean squared error (AMSE) to assess the accuracy of $\hat{\beta}$, consistent with methods employed by Wang and Yang [46]:

    $$\mathrm{AMSE}=\frac{1}{Bq_n}\sum_{b=1}^{B}\bigl\|\beta-\hat{\beta}^{[b]}\bigr\|_2^2.$$

    The average mean absolute error (AMAE) was used to evaluate the estimation performance of $g(\cdot)$ and the individual risk probabilities $p_i$ [42]. The AMAE for $g(\cdot)$ is defined as

    $$\mathrm{AMAE}_g=\frac{1}{Bn}\sum_{b=1}^{B}\sum_{i=1}^{n}\bigl|g(X_i^{\mathrm{T}}\beta)-\hat{g}^{[b]}(X_i^{\mathrm{T}}\hat{\beta}^{[b]})\bigr|,$$

    while the AMAE for $\hat{p}_i^{[b]}=\exp[\hat{g}^{[b]}(X_i^{\mathrm{T}}\hat{\beta}^{[b]})]\big/\bigl(1+\exp[\hat{g}^{[b]}(X_i^{\mathrm{T}}\hat{\beta}^{[b]})]\bigr)$ is defined as

    $$\mathrm{AMAE}_p=\frac{1}{Bn}\sum_{b=1}^{B}\sum_{i=1}^{n}\bigl|p_i-\hat{p}_i^{[b]}\bigr|,$$

    where $p_i=\exp[g(X_i^{\mathrm{T}}\beta)]\big/\bigl(1+\exp[g(X_i^{\mathrm{T}}\beta)]\bigr)$.
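    Once the $B$ replicate estimates are collected, the accuracy measures above are straightforward to compute; a short sketch (function names are ours):

```python
import numpy as np

def amse(beta_true, beta_hats):
    """AMSE = (1 / (B q_n)) * sum_b || beta - beta_hat^[b] ||_2^2."""
    beta_hats = np.asarray(beta_hats)              # shape (B, q_n)
    B, q_n = beta_hats.shape
    return np.sum((beta_hats - beta_true) ** 2) / (B * q_n)

def mse_j(beta_true, beta_hats, j):
    """MSE of a single coefficient beta_j across the B replications."""
    beta_hats = np.asarray(beta_hats)
    return np.mean((beta_hats[:, j] - beta_true[j]) ** 2)

def amae(true_values, fitted_values):
    """AMAE for g(.) or for the risk probabilities p_i, averaged over the
    B x n array of fitted values."""
    return np.mean(np.abs(np.asarray(fitted_values) - np.asarray(true_values)))
```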

    To evaluate variable selection performance, we employed the true positive rate (TPR) and the false positive rate (FPR). The TPR is the proportion of truly relevant predictors that are correctly selected, while the FPR is the proportion of truly irrelevant predictors that are incorrectly selected. Table 1 summarizes the four outcomes used in these definitions. TPR and FPR are defined as follows:

    $$\mathrm{TPR}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}},\qquad \mathrm{FPR}=\frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}.$$
    Table 1.  Four outcomes of variable selection.
    Metric Implication
    True positive (TP) Actual positive and predicted positive
    False positive (FP) Actual negative and predicted positive
    False negative (FN) Actual positive and predicted negative
    True negative (TN) Actual negative and predicted negative


    The simulation results are summarized in Tables 2 to 5. As shown in the tables, the TPR was approximately 97%, with a very low FPR. This shows that the probability that $\mathcal{M}$ is contained in $\hat{\mathcal{M}}$ is very close to 1, demonstrating the notable performance of our model in variable selection. The AMAE for $g(\cdot)$ and $p_i$ was approximately 0.5 and 0.01, respectively. This shows that we have accurately captured the form of the unknown smooth function $g(\cdot)$ and are able to precisely predict the individual risk probabilities. The AMSE for the model parameters $\beta$ was around $10^{-4}$, while the MSE for the significant variables $\beta_j$ was approximately $10^{-3}$. This demonstrates the accuracy of our model in parameter estimation.

    Table 2.  Simulation results for Example 4.1.
    AMAE AMSE MSE
    Model Setting Test Penalty TPR FPR g() Prob β β1 β2
    Example 4.1 (n=500) qn=50 MPT MCP 0.980 0.061 0.325 0.011 0.0003 0.0022 0.0015
    HT 0.985 0.003 0.413 0.007 0.0002 0.0068 0.0025
    DT 0.968 0.062 0.295 0.011 0.0003 0.0001 0.0004
    AT 0.987 0.035 0.388 0.009 0.0004 0.0019 0.0009
    MPT SCAD 0.967 0.060 0.508 0.014 0.0003 0.0012 0.0021
    HT 0.988 0.001 0.479 0.008 0.0001 0.0021 0.0023
    DT 0.980 0.051 0.511 0.012 0.0003 0.0005 0.0004
    AT 0.974 0.063 0.432 0.009 0.0003 0.0035 0.0024
    MPT LASSO 0.964 0.060 0.337 0.011 0.0003 0.0038 0.0051
    HT 0.986 0.003 0.436 0.006 0.0001 0.0003 0.0002
    DT 1.000 0.029 0.522 0.013 0.0001 0.0004 0.0003
    AT 0.981 0.034 0.320 0.007 0.0001 0.0006 0.0008
    qn=100 MPT MCP 0.985 0.010 0.511 0.009 0.0001 0.0004 0.0004
    HT 0.973 0.038 0.374 0.009 0.0002 0.0022 0.0033
    DT 0.986 0.023 0.338 0.010 0.0001 0.0004 0.0001
    AT 0.982 0.023 0.470 0.005 0.0001 0.0004 0.0005
    MPT SCAD 0.987 0.031 0.265 0.013 0.0002 0.0002 0.0003
    HT 0.988 0.038 0.458 0.015 0.0005 0.0017 0.0001
    DT 0.978 0.051 0.451 0.011 0.0001 0.0008 0.0004
    AT 0.985 0.010 0.422 0.009 0.0001 0.0058 0.0047
    MPT LASSO 0.987 0.026 0.478 0.010 0.0001 0.0008 0.0012
    HT 0.966 0.044 0.364 0.011 0.0003 0.0029 0.0052
    DT 0.984 0.031 0.503 0.012 0.0001 0.0001 0.0003
    AT 0.987 0.031 0.401 0.008 0.0001 0.0016 0.0014

    Table 3.  Simulation results for Example 4.2.
    AMAE AMSE MSE
    Model Setting Test Penalty TPR FPR g() Prob β β1 β2 β3
    Example 4.2 (n=1000) qn=100 MPT MCP 0.980 0.001 0.569 0.011 0.0001 0.0006 0.0025 0.0073
    HT 0.974 0.001 0.626 0.012 0.0003 0.0059 0.0167 0.0054
    DT 0.971 0.027 0.601 0.012 0.0002 0.0035 0.0022 0.0019
    AT 0.986 0.010 0.582 0.011 0.0001 0.0059 0.0011 0.0032
    MPT SCAD 0.970 0.019 0.551 0.010 0.0014 0.0006 0.0011
    HT 0.964 0.029 0.588 0.011 0.0001 0.0021 0.0041 0.0001
    DT 0.972 0.021 0.572 0.011 0.0001 0.0037 0.0002 0.0034
    AT 0.971 0.021 0.575 0.011 0.0001 0.0057 0.0005 0.0042
    MPT LASSO 0.974 0.048 0.553 0.010 0.0001 0.0000 0.0002 0.0003
    HT 0.972 0.056 0.601 0.010 0.0001 0.0003 0.0001 0.0006
    DT 0.982 0.021 0.574 0.010 0.0001 0.0035 0.0001 0.0042
    AT 0.986 0.010 0.584 0.011 0.0001 0.0041 0.0002 0.0056
    qn=500 MPT MCP 0.964 0.011 0.562 0.013 0.0001 0.0005 0.0015 0.0042
    HT 0.972 0.010 0.670 0.018 0.0001 0.0056 0.0001 0.0115
    DT 0.987 0.011 0.567 0.012 0.0044 0.0003 0.0058
    AT 0.986 0.020 0.669 0.015 0.0001 0.0022 0.0108 0.0012
    MPT SCAD 0.965 0.014 0.515 0.010 0.0001 0.0003 0.0055 0.0045
    HT 0.968 0.018 0.547 0.015 0.0001 0.0023 0.0112 0.0069
    DT 0.989 0.007 0.534 0.011 0.0001 0.0048 0.0001 0.0047
    AT 0.985 0.005 0.608 0.010 0.0042 0.0021 0.0007
    MPT LASSO 0.978 0.006 0.536 0.012 0.0001 0.0013 0.0132 0.0104
    HT 0.970 0.002 0.644 0.015 0.0001 0.0000 0.0092 0.0126
    DT 0.987 0.005 0.545 0.012 0.0015 0.0007 0.0019
    AT 0.981 0.002 0.526 0.012 0.0011 0.0093 0.0045
    Omitted entries indicate values smaller than 0.0001.

    Table 4.  Simulation results for Example 4.3.
    AMAE AMSE MSE
    Model Setting Test Penalty TPR FPR g() Prob β β1 β2 β3
    Example 4.3 (qn=50) n=500 MPT MCP 0.951 0.103 0.466 0.019 0.0003 0.0003 0.0009 0.0011
    HT 0.966 0.091 0.571 0.021 0.0005 0.0007 0.0036 0.0045
    DT 0.982 0.043 0.360 0.006 0.0001 0.0002 0.0001 0.0001
    AT 0.981 0.021 0.464 0.012 0.0001 0.0005 0.0009 0.0006
    MPT SCAD 0.957 0.139 0.527 0.023 0.0005 0.0001 0.0031 0.0098
    HT 0.968 0.082 0.433 0.020 0.0004 0.0006 0.0001 0.0003
    DT 0.954 0.140 0.411 0.013 0.0002 0.0011 0.0018 0.0012
    AT 0.972 0.064 0.793 0.018 0.0002 0.0038 0.0021 0.0004
    MPT LASSO 0.981 0.024 0.604 0.021 0.0003 0.0042 0.0014 0.0019
    HT 0.983 0.021 0.432 0.026 0.0001 0.0017 0.0005 0.0016
    DT 0.971 0.094 0.470 0.013 0.0002 0.0004 0.0014 0.0023
    AT 0.980 0.061 0.447 0.013 0.0002 0.0002 0.0004 0.0015
    n=1000 MPT MCP 0.988 0.040 0.358 0.015 0.0002 0.0011 0.0024 0.0042
    HT 0.984 0.021 0.399 0.017 0.0006 0.0008 0.0009 0.0013
    DT 0.989 0.000 0.583 0.014 0.0001 0.0001 0.0019 0.0024
    AT 0.985 0.009 0.405 0.013 0.0001 0.0017 0.0041 0.0012
    MPT SCAD 0.989 0.043 0.537 0.016 0.0002 0.0025 0.0004 0.0038
    HT 0.987 0.003 0.512 0.012 0.0001 0.0012 0.0032 0.0031
    DT 0.986 0.003 0.515 0.012 0.0001 0.0001 0.0002 0.0004
    AT 1.000 0.000 0.410 0.013 0.0001 0.0013 0.0022 0.0013
    MPT LASSO 0.988 0.004 0.441 0.011 0.0002 0.0029 0.0012 0.0021
    HT 0.982 0.007 0.326 0.007 0.0001 0.0002 0.0004 0.0002
    DT 0.987 0.008 0.489 0.013 0.0001 0.0008 0.0001 0.0032
    AT 0.977 0.043 0.283 0.007 0.0001 0.0012 0.0024 0.0034

    Table 5.  Simulation results for Example 4.4.
    AMAE AMSE MSE
    Model Setting Test Penalty TPR FPR g() Prob β β1 β2 β3 β4
    Example 4.4 (qn=100) n=750 MPT MCP 0.979 0.053 0.744 0.019 0.0004 0.0028 0.0015 0.0036 0.0076
    HT 0.959 0.100 0.970 0.027 0.0011 0.0024 0.0005 0.0022 0.0018
    DT 0.986 0.035 0.611 0.011 0.0001 0.0001 0.0025 0.0034 0.0016
    AT 0.984 0.043 0.789 0.013 0.0002 0.0012 0.0051 0.0026 0.0012
    MPT SCAD 0.966 0.059 0.723 0.014 0.0003 0.0030 0.0078 0.0081 0.0001
    HT 0.978 0.069 0.576 0.014 0.0002 0.0004 0.0083 0.0052 0.0002
    DT 0.989 0.063 0.698 0.013 0.0003 0.0034 0.0155 0.0011 0.0051
    AT 0.981 0.052 0.671 0.022 0.0005 0.0078 0.0023 0.0085 0.0073
    MPT LASSO 0.977 0.072 0.620 0.014 0.0003 0.0047 0.0041 0.0141 0.0002
    HT 0.964 0.069 0.680 0.015 0.0003 0.0018 0.0071 0.0073 0.0007
    DT 0.986 0.041 0.581 0.016 0.0003 0.0034 0.0090 0.0014 0.0005
    AT 0.984 0.065 0.679 0.016 0.0003 0.0001 0.0095 0.0065 0.0003
    n=1000 MPT MCP 0.967 0.029 0.706 0.015 0.0002 0.0068 0.0097 0.0022 0.0015
    HT 0.986 0.001 0.818 0.012 0.0001 0.0035 0.0061 0.0007 0.0001
    DT 0.987 0.032 0.872 0.012 0.0002 0.0007 0.0074 0.0017 0.0016
    AT 0.988 0.037 0.800 0.027 0.0002 0.0013 0.0061 0.0002 0.0025
    MPT SCAD 0.961 0.059 0.724 0.015 0.0002 0.0081 0.0087 0.0030 0.0006
    HT 0.974 0.010 0.779 0.013 0.0001 0.0036 0.0066 0.0012 0.0001
    DT 0.983 0.071 0.405 0.010 0.0001 0.0013 0.0059 0.0008 0.0001
    AT 0.981 0.041 0.422 0.010 0.0001 0.0003 0.0009 0.0020 0.0011
    MPT LASSO 0.977 0.029 0.819 0.017 0.0004 0.0057 0.0012 0.0083 0.0079
    HT 0.951 0.004 0.545 0.043 0.0001 0.0093 0.0004 0.0004 0.0025
    DT 0.985 0.021 0.408 0.009 0.0001 0.0002 0.0011 0.0026 0.0007
    AT 0.989 0.008 0.581 0.010 0.0001 0.0042 0.0003 0.0004 0.0008


    We set up two different sample sizes ($n$) or numbers of covariates ($q_n$) for each example. The results of Examples 4.1 and 4.2 suggest that our method maintains robust estimation performance as the dimensionality increases in small-sample scenarios. Furthermore, the results of Examples 4.3 and 4.4 demonstrate that estimation accuracy improves with increased sample size. Figure 2 illustrates the estimation performance of $g(\cdot)$ and the individual risk probabilities $p_i$, confirming our method's efficacy in estimating unknown functions and risk probabilities.

    Figure 2.  Estimation of unknown function (a) and risk probability (b) in Example 4.2, with n=1000 and qn=500, using MPT and the SCAD penalty function.

    Moreover, we aim to evaluate our method's performance under different group sizes. Using Example 4.4, we investigated group sizes of 2, 4, 6, and 8 with the Dorfman algorithm and the LASSO penalty function. Results are presented in Table 6, reporting the means of $\hat{\beta}_j$ for $j=1,2,3,4$. The simulation results indicate that our method consistently delivers strong estimation performance across various group sizes. At the same time, we set up comparative experiments with different $S_e$ and $S_p$; the simulation results are shown in Tables 8 to 11 in Appendix A. As shown in these tables, our model maintains a certain level of stability, ensuring that $\mathcal{M}$ is still contained within $\hat{\mathcal{M}}$.

    Table 6.  Simulation results for different group size.
    AMAE MEAN
    Model Setting Group Size TPR FPR g() Prob β1 β2 β3 β4
    Example 4.4 (qn=100) n=750 2 0.970 0.015 0.611 0.011 0.452 0.465 0.478 0.460
    4 0.965 0.020 0.581 0.016 0.445 0.405 0.464 0.477
    6 0.986 0.041 0.627 0.009 0.519 0.497 0.487 0.495
    8 0.973 0.020 0.594 0.012 0.471 0.467 0.484 0.477
    n=1000 2 0.974 0.014 0.447 0.009 0.468 0.484 0.473 0.485
    4 0.964 0.018 0.408 0.009 0.489 0.468 0.450 0.474
    6 0.985 0.021 0.440 0.011 0.486 0.478 0.443 0.471
    8 0.974 0.010 0.466 0.009 0.494 0.494 0.447 0.437


    In this section, we validate the effectiveness of our method using the diabetes dataset from the National Health and Nutrition Examination Survey (NHANES) conducted between 1999 and 2004. NHANES is a probability-based cross-sectional survey representing the U.S. population, collecting demographic, health history, and behavioral information through household interviews. Participants were also invited to attend mobile examination centers for detailed physical, psychological, and laboratory assessments. The dataset is accessible at https://wwwn.cdc.gov/Nchs/Nhanes/.

    The dataset comprises $n=5515$ records and 17 variables, categorizing individuals as diabetic or non-diabetic. Covariates include age ($X_1$), waist circumference ($X_2$), BMI ($X_3$), height ($X_4$), weight ($X_5$), smoking age ($X_6$), alcohol use ($X_7$), leg length ($X_8$), total cholesterol ($X_9$), hypertension ($X_{10}$), education level ($X_{11}$), household income ($X_{12}$), family history ($X_{13}$), physical activity ($X_{14}$), gender ($X_{15}$), and race ($X_{16}$). Notably, the nominal variables $X_{10}$ to $X_{16}$ are transformed using one-hot encoding, resulting in $q_n=47$ covariates per individual. The first nine variables are continuous, while the remainder are binary. A detailed explanation of the variables as well as the content of the questionnaire can be found at https://wwwn.cdc.gov/Nchs/Nhanes/search/default.aspx. For convenience, the nominal variables are explained in Table 12 in Appendix B.

    For $i\in\{1,2,\ldots,n\}$, we define $\tilde{Y}_i=1$ for diabetes and $\tilde{Y}_i=0$ for non-diabetes. Individual covariate information is represented as $X_i=(X_{i1},X_{i2},\ldots,X_{iq_n})^{\mathrm{T}}$. We construct the following single-index model for the probability of diabetes risk for the $i$-th individual:

    $$\Pr(\tilde{Y}_i=1\mid X_i)=\frac{\exp[g(X_i^{\mathrm{T}}\beta)]}{1+\exp[g(X_i^{\mathrm{T}}\beta)]},$$

    where the smooth function $g(\cdot)$ is unknown, and our objective is to estimate the coefficients $\beta$.

    To verify the accuracy of our method, we compare the results with those obtained from two other methods. The first method is penalized logistic regression (PLR), which uses the true individual statuses $\tilde{Y}_i$. This method is implemented using the R package "glmnet". The second method is the adaptive elastic net for group testing (aenetgt) data, as introduced by Gregory et al. [23]. This approach utilizes group testing data and employs a penalized expectation-maximization (EM) algorithm to fit an adaptive elastic net logistic regression model. The R package "aenetgt" is used for implementation. We generate Dorfman group testing data with a group size of 6, setting both sensitivity and specificity at $S_e=S_p=0.98$.

    To ensure comparability, we adhere to the standardization techniques referenced in Cui et al. [31]. First, we center the covariates to facilitate the comparison of relative effects across different explanatory variables. Second, we normalize the PLR and aenetgt coefficients by dividing them by their $L_2$-norm, as follows:

    $$\hat{\beta}_{\mathrm{PLR}}^{\mathrm{norm}}=\frac{\hat{\beta}_{\mathrm{PLR}}}{\|\hat{\beta}_{\mathrm{PLR}}\|_2},\qquad \hat{\beta}_{\mathrm{aenet}}^{\mathrm{norm}}=\frac{\hat{\beta}_{\mathrm{aenet}}}{\|\hat{\beta}_{\mathrm{aenet}}\|_2},$$

    thereby obtaining coefficients with unit norm. This enables a comparison of regression coefficients from PLR, aenetgt, and the single-index group testing model.
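    For completeness, the unit-norm rescaling used for the PLR and aenetgt estimates is a one-line operation; a sketch:

```python
import numpy as np

def normalize_coefficients(beta_hat):
    """Rescale an estimated coefficient vector to unit L2 norm so that PLR and
    aenetgt estimates are comparable with the single-index fit."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    return beta_hat / np.linalg.norm(beta_hat)
```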

    The estimated coefficients from the three models are summarized in Table 7, and the parameter estimates of our method are denoted as $\hat{\beta}_{\mathrm{our}}$. In this study, the estimated coefficients for age, $\hat{\beta}_{\mathrm{PLR}}^{\mathrm{norm}}$ and $\hat{\beta}_{\mathrm{our}}$, are 0.280 and 0.307, respectively, indicating that the risk of diabetes increases with age, consistent with the findings of Turi et al. [47]. However, the coefficient $\hat{\beta}_{\mathrm{aenet}}^{\mathrm{norm}}$ is close to zero. For waist circumference, the coefficients $\hat{\beta}_{\mathrm{PLR}}^{\mathrm{norm}}$, $\hat{\beta}_{\mathrm{our}}$, and $\hat{\beta}_{\mathrm{aenet}}^{\mathrm{norm}}$ are 0.178, 0.194, and 0.271, respectively, suggesting a positive association between waist circumference and diabetes risk, which is supported by Bai et al. [48] and Snijder et al. [49]. In addition, all three methods also identified leg length [50], hypertension [51], race [52], family history [53], and sex [54] as variables associated with diabetes. These covariates are widely recognized as being related to diabetes in the biomedical field [55].

    Table 7.  Estimated coefficients for the real data model.
    Variable ˆβnormPLR ˆβour ˆβnormaenet Variable ˆβnormPLR ˆβour ˆβnormaenet Variable ˆβnormPLR ˆβour ˆβnormaenet
    age 0.280 0.307 -0.085 Family history Household income
    waist circumference 0.178 0.194 0.271 family history1 0.000 0.000 0.000 household income1 0.000 0.000 0.000
    BMI 0.000 0.000 0.000 family history2 -0.492 -0.567 -0.466 household income2 0.024 0.000 0.000
    height 0.000 0.000 0.000 family history9 0.000 0.000 0.000 household income3 0.000 0.000 0.000
    weight 0.000 0.000 0.000 Physical activity household income4 0.000 -0.069 0.000
    smoking age 0.000 0.007 0.000 physical activity1 0.000 0.056 0.000 household income5 0.000 0.000 0.000
    alcohol use 0.009 0.013 0.000 physical activity2 -0.086 -0.018 0.000 household income6 0.000 0.000 0.000
    leg length -0.048 -0.100 -0.043 physical activity3 -0.134 -0.039 0.000 household income7 0.000 0.000 0.000
    total cholesterol 0.000 0.000 0.000 physical activity4 -0.088 0.000 0.000 household income8 0.001 0.065 0.000
    Hypertension physical activity9 0.000 0.000 0.000 household income9 0.000 0.000 0.000
    hypertension1 0.000 0.000 0.000 Sex household income10 0.000 0.000 0.000
    hypertension2 -0.350 -0.372 -0.641 sex1 -0.010 0.000 0.000 household income11 0.000 0.000 0.000
    Education sex2 -0.237 -0.225 -0.424 household income12 0.000 0.000 0.000
    education1 0.000 0.000 0.000 race household income13 0.000 0.000 0.000
    education2 0.000 0.000 0.000 race1 0.000 0.000 0.000 household income77 0.000 0.231 0.000
    education3 0.000 0.000 0.000 race2 -0.019 -0.073 0.000 household income99 0.000 0.000 0.000
    education4 0.000 0.000 0.000 race3 -0.399 -0.380 -0.330
    education5 -0.014 -0.052 0.000 race4 0.000 0.000 0.000
    education7 -0.523 -0.335 0.000 race5 0.000 0.124 0.000


    We found that the covariate physical activity is associated with diabetes, but the aenetgt method failed to identify this association. The results of a study by Yu et al. [55], which used the same dataset as ours, are consistent with this finding. In addition, we found that education level is also a covariate associated with diabetes ($\hat{\beta}_{\mathrm{PLR}}^{\mathrm{norm}}$ and $\hat{\beta}_{\mathrm{our}}$ are -0.523 and -0.335 for the category of refusing to report education level). Evidence for this association can also be found in the study by Aldossari et al. [56], and in this dataset none of the participants in that category had diabetes. We also identified that household income is associated with diabetes, which is consistent with the study by Yen et al. [57]; in this dataset, 60% of the participants who refused to answer about their household income had diabetes. Furthermore, our model yields results similar to those obtained by the PLR method, which uses the individual observations $\tilde{Y}$, suggesting that our method is able to extract information from group observations.

    This study presents a group testing framework based on a logistic regression single-index model for disease screening in low-prevalence environments. By employing B-splines to estimate unknown functions and incorporating penalty functions, our approach achieves high flexibility in capturing the relationships between covariates and individual risk probabilities while accurately identifying important variables. To address potential computational challenges in individual disease status estimation, we implemented an iterative EM algorithm for model estimation. Our simulation experiments demonstrate the proposed method's performance in high-dimensional covariate contexts with limited sample sizes, while application to real data confirms its efficacy. Our framework offers a unified approach for various group testing methods, showcasing its practical application value.

    Despite these promising outcomes, our study acknowledges several limitations. First, our model assumes that sensitivity and specificity of testing are independent of group size, which may not always hold in practical applications. Second, data quality and variations in the testing population can impact the model's applicability. Therefore, exploring how to integrate prior information to enhance model accuracy and practical value remains a critical research direction. Furthermore, the potential high dimensionality of individual covariates poses significant challenges, necessitating the development of models capable of handling ultra-high-dimensional data.

    Future research could explore the following directions. Firstly, examining model performance under varying group testing configurations, such as changes in testing errors and group sizes, could yield valuable insights. Secondly, investigating methods to incorporate additional prior knowledge to improve estimation accuracy is a worthwhile endeavor. Additionally, considering computational efficiency, developing faster algorithms for processing large-scale datasets will be a key focus for future work.

    Changfu Yang: Methodology, formal analysis, writing-original draft; Wenxin Zhou: Methodology, formal analysis; Wenjun Xiong: Conceptualization, methodology, writing-original draft, funding acquisition; Junjian Zhang: Conceptualization, methodology, writing-review and editing, funding acquisition; Juan Ding: Conceptualization, formal analysis, writing-review and editing, funding acquisition. All authors have read and approved the final version of the manuscript for publication.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research was supported by the National Natural Science Foundation of China (Grant Nos. 12361055, 11801102), Guangxi Natural Science Foundation (2021GXNSFAA220054), and the Fundamental Research Funds for the Central Universities (B240201095).

    The authors declare that there are no conflicts of interest regarding the publication of this paper.

    In this part, we test the performance of the four examples under different sensitivity and specificity settings, using the Dorfman algorithm and the LASSO penalty function. The simulation results are shown in Tables 8 to 11.

    Table 8.  Example 4.1: Simulation results with different sensitivity and specificity settings.
    AMAE AMSE MSE
    Model Setting (Se,Sp) TPR FPR g() Prob β β1 β2
    Example 4.1 (qn=50) n=500 (0.98, 0.98) 1.000 0.029 0.522 0.013 0.0001 0.0004 0.0003
    (0.95, 0.95) 0.987 0.020 0.474 0.011 0.0001 0.0003 0.0003
    (0.90, 0.90) 0.982 0.036 0.532 0.011 0.0001 0.0006 0.0007
    (0.85, 0.85) 0.984 0.040 0.578 0.016 0.0003 0.0001 0.0002

    Table 9.  Example 4.2: Simulation results with different sensitivity and specificity settings.
    AMAE AMSE MSE
    Model Setting (Se,Sp) TPR FPR g() Prob β β1 β2 β3
    Example 4.2 (qn=100) n=1000 (0.98, 0.98) 0.982 0.021 0.574 0.010 0.0001 0.0035 0.0001 0.0042
    (0.95, 0.95) 0.975 0.030 0.612 0.011 0.0001 0.0047 0.0001 0.0069
    (0.90, 0.90) 0.978 0.020 0.556 0.012 0.0001 0.0023 0.0002 0.0049
    (0.85, 0.85) 0.965 0.020 0.717 0.016 0.0004 0.0158 0.0002 0.0212

    Table 10.  Example 4.3: Simulation results with different sensitivity and specificity settings.
    AMAE AMSE MSE
    Model Setting (Se,Sp) TPR FPR g() Prob β β1 β2 β3
    Example 4.3 (qn=50) n=1000 (0.98, 0.98) 0.987 0.008 0.489 0.013 0.0001 0.0008 0.0001 0.0032
    (0.95, 0.95) 0.971 0.064 0.404 0.011 0.0003 0.0005 0.0033 0.0085
    (0.90, 0.90) 0.963 0.048 0.465 0.011 0.0001 0.0002 0.0012 0.0055
    (0.85, 0.85) 0.966 0.018 0.377 0.015 0.0004 0.0016 0.0023 0.0007

    Table 11.  Example 4.4: Simulation results with different sensitivity and specificity settings.
    AMAE AMSE MSE
    Model Setting (Se,Sp) TPR FPR g() Prob β β1 β2 β3 β4
    Example 4.4 (qn=100) n=750 (0.98, 0.98) 0.986 0.041 0.581 0.016 0.0003 0.0034 0.0090 0.0014 0.0005
    (0.95, 0.95) 0.981 0.026 0.534 0.018 0.0001 0.0016 0.0045 0.0018 0.0005
    (0.90, 0.90) 0.974 0.018 0.546 0.016 0.0002 0.0004 0.0024 0.0014 0.0028
    (0.85, 0.85) 0.976 0.024 0.539 0.011 0.0002 0.0047 0.0085 0.0004 0.0039

    Table 12.  Meaning of the nominal variable.
    Variable Implication Variable Implication
    Hypertension circumstance Family history of diabetes
    hypertension1 Have a history of hypertension family history1 Blood relatives with diabetes
    hypertension2 No history of hypertension family history2 Blood relatives do not have diabetes
    Education level family history9 Not known if any blood relatives have diabetes
    education1 Less Than 9th Grade Physical activity
    education2 9 - 11th Grade (Includes 12th grade with no diploma) physical activity1 Sit during the day and do not walk about very much
    education3 High School Grad/GED or Equivalent physical activity2 Stand or walk about a lot during the day, but do not have to carry or lift things very often
    education4 Some College or AA degree physical activity3 Lift light load or has to climb stairs or hills often
    education5 College Graduate or above physical activity4 Do heavy work or carry heavy loads
    education7 Refuse to answer about the level of education physical activity9 Don't know physical activity level
    Household income Sex
    household income1 $0 to $4,999 sex1 Male
    household income2 $5,000 to $9,999 sex2 Female
    household income3 $10,000 to $14,999 Race/Ethnicity
    household income4 $15,000 to $19,999 race1 Mexican American
    household income5 $20,000 to $24,999 race2 Other Hispanic
    household income6 $25,000 to $34,999 race3 Non-Hispanic White
    household income7 $35,000 to $44,999 race4 Non-Hispanic Black
    household income8 $45,000 to $54,999 race5 Other Race - Including Multi-Racial
    household income9 $55,000 to $64,999
    household income10 $65,000 to $74,999
    household income11 $75,000 and Over
    household income12 Over $20,000
    household income13 Under $20,000
    household income77 Refusal to answer about household income
    household income99 Don't know household income


    In this part, we derive the conditional expectation formulas for Dorfman testing, halving testing, and array testing within the framework of our method. Before proceeding, it is necessary to clarify some notation. Let $P_j\setminus\{i\}$ represent the set of individuals in $P_j$ excluding the $i$-th individual, and let $|P_j|$ denote the number of individuals in $P_j$. Let $Y_i$ represent the test result of the $i$-th individual and $Y_{P_{j,l}}=\{Y_i, i\in P_{j,l}\}$ represent the set of testing results of the individuals in $P_{j,l}$.

    If the initial group testing result is negative, no re-testing is performed. However, if $Z_{j,1}=1$, each individual in the group undergoes a separate re-test.

    1) When $Z_{j,1}=0$, the result is the same as in master pool testing:

    $$w_{i,0}^{(t)}=\frac{P(\tilde{Y}_i=1,Z_{j,1}=0)}{P(Z_{j,1}=0)}=\frac{(1-S_e)\,p_{iB}^{(t)}}{1-\bigl[S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})\bigr]}.$$

    2) When $Z_{j,1}=1$, each individual in the group must undergo a separate re-test. In total, the group undergoes $|P_j|+1$ tests.

    $$w_{i,1}^{(t)}=\frac{P(\tilde{Y}_i=1,Z_{j,1},Y_{P_j})}{P(Z_{j,1},Y_{P_j})}=\frac{P(\tilde{Y}_i=1)\,P(Z_{j,1},Y_{P_j}\mid\tilde{Y}_i=1)}{P(Z_{j,1},Y_{P_j})}.$$

    The denominator is

    $$\begin{aligned}P(Z_{j,1},Y_{P_j})&=\sum_{\tilde{Y}_{P_j}}P(Z_{j,1},Y_{P_j}\mid\tilde{Y}_{P_j})P(\tilde{Y}_{P_j})=\sum_{\tilde{Y}_{P_j}}P(Z_{j,1}\mid\tilde{Z}_{j,1})\prod_{i\in P_j}P(Y_i\mid\tilde{Y}_i)P(\tilde{Y}_i)\\&=\sum_{\tilde{Y}_{P_j}}\bigl[S_e^{\tilde{Z}_{j,1}}(1-S_p)^{1-\tilde{Z}_{j,1}}\bigr]\prod_{i\in P_j}\bigl[S_e^{Y_i}(1-S_e)^{1-Y_i}\bigr]^{\tilde{Y}_i}\bigl[(1-S_p)^{Y_i}S_p^{1-Y_i}\bigr]^{1-\tilde{Y}_i}\bigl[p_{iB}^{(t)}\bigr]^{\tilde{Y}_i}\bigl[1-p_{iB}^{(t)}\bigr]^{1-\tilde{Y}_i}\\&=\sum_{\tilde{Y}_{P_j}}\bigl[S_e^{\tilde{Z}_{j,1}}(1-S_p)^{1-\tilde{Z}_{j,1}}\bigr]\prod_{i\in P_j}\bigl[S_e^{Y_i}(1-S_e)^{1-Y_i}p_{iB}^{(t)}\bigr]^{\tilde{Y}_i}\bigl[(1-S_p)^{Y_i}S_p^{1-Y_i}(1-p_{iB}^{(t)})\bigr]^{1-\tilde{Y}_i}.\end{aligned}$$

    Thus, the numerator is

    $$\begin{aligned}P(\tilde{Y}_i=1,Z_{j,1},Y_{P_j})&=P(Z_{j,1},Y_{P_j}\mid\tilde{Y}_i=1)P(\tilde{Y}_i=1)\\&=\sum_{\tilde{Y}_{P_j\setminus\{i\}}}P(Z_{j,1},Y_{P_j}\mid\tilde{Y}_i=1,\tilde{Y}_{P_j\setminus\{i\}})P(\tilde{Y}_{P_j\setminus\{i\}})P(\tilde{Y}_i=1)\\&=\sum_{\tilde{Y}_{P_j\setminus\{i\}}}P(Z_{j,1}\mid\tilde{Z}_{j,1}=1)P(Y_i\mid\tilde{Y}_i=1)\prod_{i'\in P_j\setminus\{i\}}P(Y_{i'}\mid\tilde{Y}_{i'})P(\tilde{Y}_{i'})\,P(\tilde{Y}_i=1)\\&=\sum_{\tilde{Y}_{P_j\setminus\{i\}}}S_e\,S_e^{Y_i}(1-S_e)^{1-Y_i}\prod_{i'\in P_j\setminus\{i\}}\bigl[S_e^{Y_{i'}}(1-S_e)^{1-Y_{i'}}\bigr]^{\tilde{Y}_{i'}}\bigl[(1-S_p)^{Y_{i'}}S_p^{1-Y_{i'}}\bigr]^{1-\tilde{Y}_{i'}}\bigl[p_{i'B}^{(t)}\bigr]^{\tilde{Y}_{i'}}\bigl[1-p_{i'B}^{(t)}\bigr]^{1-\tilde{Y}_{i'}}\,p_{iB}^{(t)}\\&=\sum_{\tilde{Y}_{P_j\setminus\{i\}}}S_e^{1+Y_i}(1-S_e)^{1-Y_i}p_{iB}^{(t)}\prod_{i'\in P_j\setminus\{i\}}\bigl[S_e^{Y_{i'}}(1-S_e)^{1-Y_{i'}}p_{i'B}^{(t)}\bigr]^{\tilde{Y}_{i'}}\bigl[(1-S_p)^{Y_{i'}}S_p^{1-Y_{i'}}(1-p_{i'B}^{(t)})\bigr]^{1-\tilde{Y}_{i'}}.\end{aligned}$$

    Therefore, the final expression is

    $$w_i^{(t)}=Z_{j,1}\,w_{i,1}^{(t)}+(1-Z_{j,1})\,w_{i,0}^{(t)}.$$

    Assume that the maximum number of partitions required during testing is two. Let the result of the first test be $Z_{j,1}$. At this stage, the set of all unpartitioned individuals is $P_{j,1}$, which contains $|P_j|$ individuals. The first partition divides the group into two equal parts, with the two subsets of individuals being $P_{j,2}$ and $P_{j,3}$, respectively. The responses of the second round of testing are $Z_{j,2}$ and $Z_{j,3}$. There are five types of testing results in halving testing.

    1) When $Z_{j,1}=0$:

    Only one test is performed, and the process is the same as master pool testing. Since the result of this test is negative, no further partitioning and testing are performed. At this time,

    $$w_i^{(t)}=P(\tilde{Y}_i=1\mid Z_{j,1}=0)=\frac{P(\tilde{Y}_i=1,Z_{j,1}=0)}{P(Z_{j,1}=0)}=\frac{P(\tilde{Y}_i=1)\,P(Z_{j,1}=0\mid\tilde{Y}_i=1)}{P(Z_{j,1}=0)}=\frac{p_{iB}^{(t)}(1-S_e)}{1-\bigl[S_e+(1-S_e-S_p)\prod_{i\in P_j}(1-p_{iB}^{(t)})\bigr]}.$$

    2) When $Z_{j,1}=1$, $Z_{j,2}=0$, $Z_{j,3}=0$:

    That is, the result of the first test is $Z_{j,1}=1$. Subsequently, the first partition is performed, dividing the group into two equal parts $P_{j,2}$ and $P_{j,3}$. Tests are then performed on the two sets respectively, with results $Z_{j,2}=Z_{j,3}=0$. At this time,

    $$w_i^{(t)}=P(\tilde{Y}_i=1\mid Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=0)=\frac{P(Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=0\mid\tilde{Y}_i=1)\,P(\tilde{Y}_i=1)}{P(Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=0)}.$$

    The denominator is

    P(Zj,1=1,Zj,2=0,Zj,3=0)=˜YPj,1P(Zj,1=1,Zj,2=0,Zj,3=0|˜YPj,1)P(˜YPj,2)P(˜YPj,3)=˜YPj,1P(Zj,1=1|˜YPj,1)P(Zj,2=0|˜YPj,2)P(Zj,3=0|˜YPj,3)P(˜YPj,2)P(˜YPj,3)=˜YPj,1[S˜Zj,1e(1Sp)1˜Zj,1][(1Se)˜Zj,2S1˜Zj,2p]iPj,2[p(t)iB]˜Yi[1p(t)iB]1˜Yi×[(1Se)˜Zj,3S1˜Zj,3p]iPj,3[p(t)iB]˜Yi[1p(t)iB]1˜Yi=˜YPj,1[S˜Zj,1e(1Sp)1˜Zj,1]3u=2(1Se)˜Zj,uS1˜Zj,upiPj[p(t)iB]˜Yi[1p(t)iB]1˜Yi.

    Since the placement of the i-th individual in the sets Pj,2 and Pj,3 is symmetric, assume that i-th individual is placed in the set Pj,2. Then, the numerator is

    P(Zj,1=1,Zj,2=0,Zj,3=0,˜Yi=1)=P(Zj,1=1,Zj,2=0,Zj,3=0|˜Yi=1)P(˜Yi=1)=˜YPjiP(Zj,1=1,Zj,2=0,Zj,3=0|˜Yi=1,˜YPj,2)×P(˜YPj,2i)P(˜YPj,3)P(˜Yi=1)=˜YPjiP(Zj,1=1|˜Zj,1=1)P(Zj,2=0|˜Zj,2=1)P(Zj,3=0|˜Zj,3)×P(˜YPj,2i)P(˜YPj,3)P(˜Yi=1)=˜YPjiSe(1Se)iPj,2{i}[p(t)iB]˜Yi[1p(t)iB]1˜Yi(1Se)˜Zj,3S1˜Zj,3piPj,3[p(t)iB]˜Yi[1p(t)iB]1˜Yip(t)iB=˜YPjiSe(1Se)1+˜Zj,3S1˜Zj,3pp(t)iBiPj{i}[p(t)iB]˜Yi[1p(t)iB]1˜Yi.

    3) When $Z_{j,1}=1$, $Z_{j,2}=0$, $Z_{j,3}=1$:

    At this time, a second partition is performed. The first partition divides all individuals into two sets, $P_{j,2}$ and $P_{j,3}$, with testing results $Z_{j,2}=0$ and $Z_{j,3}=1$, respectively. Individual tests are then performed separately on the individuals in $P_{j,3}$, and the set of testing results is $Y_{P_{j,3}}$. At this time,

    $$w_i^{(t)}=P(\tilde{Y}_i=1\mid Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=1,Y_{P_{j,3}})=\frac{P(Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=1,Y_{P_{j,3}}\mid\tilde{Y}_i=1)\,P(\tilde{Y}_i=1)}{P(Z_{j,1}=1,Z_{j,2}=0,Z_{j,3}=1,Y_{P_{j,3}})}.$$

    The denominator is

    P(Zj,1=1,Zj,2=0,Zj,3=1,YPj,3)=˜YPjP(Zj,1=1,Zj,2=0,Zj,3=1,YPj,3|˜YPj,2,˜YPj,3)P(˜YPj,2)P(˜YPj,3)=˜YPjP(Zj,1=1|˜YPj)P(Zj,2=0|˜YPj,2)P(Zj,3=1|˜YPj,3)×P(˜YPj,3)P(˜YPj,2)P(YPj,3|˜YPj,3)=˜YPjP(Zj,1=1|˜Zj,1)P(Zj,2=0|˜Zj,2)P(Zj,3=1|˜Zj,3)×iPj,3P(Yi|˜Yi)P(˜YPj,2)P(˜YPj,3)=˜YPj[S˜Zj,1e(1Sp)1˜Zj,1][(1Se)˜Zj,2S1˜Zj,2p][S˜Zj,3e(1Sp)1˜Zj,3]×iPj,2[p(t)iB]˜Yi[1p(t)iB]1˜YiiPj,3[p(t)iB]˜Yi[1p(t)iB]1˜Yi×iPj,3[SYie(1Se)1Yi]˜Yi[(1Sp)YiS1Yip]1˜Yi=˜YPj[S˜Zj,1+˜Zj,3e(1Sp)2˜Zj,1˜Zj,3][(1Se)˜Zj,2S1˜Zj,2p]×iPj[p(t)iB]˜Yi[1p(t)iB]1˜YiiPj,3[SYie(1Se)1Yi]˜Yi[(1Sp)YiS1Yip]1˜Yi.

    Since an i-th individual may belong to either set Pj,2 or Pj,3, the numerator is discussed accordingly.

    (a) Assume that the i-th individual belongs to P_{j,2}. Then, the numerator is

    \begin{align*} P(\tilde{Y}_i = 1, Z_{j,1} = 1, Z_{j,2} = 0, Z_{j,3} = 1, \mathcal{Y}_{P_{j,3}}) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1, Z_{j,2} = 0, Z_{j,3} = 1, \mathcal{Y}_{P_{j,3}} \mid \tilde{Y}_i = 1, \tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) \times P(\tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1} = 1) P(Z_{j,2} = 0 \mid \tilde{Z}_{j,2} = 1) P(Z_{j,3} = 1 \mid \tilde{Z}_{j,3}) \times P(\mathcal{Y}_{P_{j,3}} \mid \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{\mathcal{Y}}_{P_{j,2} \setminus i}) P(\tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} S_e (1 - S_e) S_e^{\tilde{Z}_{j,3}}(1 - S_p)^{1 - \tilde{Z}_{j,3}} \prod\limits_{i' \in P_{j,2} \setminus \{i\}} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}}\\
    & \times \prod\limits_{i' \in P_{j,3}} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}} \ p_{iB}^{(t)}\\
    & \times \prod\limits_{i' \in P_{j,3}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}}\Big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    (b) Assume that the i-th individual belongs to P_{j,3}. Then, the numerator is

    \begin{align*} P(\tilde{Y}_i = 1, Z_{j,1} = 1, Z_{j,2} = 0, Z_{j,3} = 1, \mathcal{Y}_{P_{j,3}}) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1, Z_{j,2} = 0, Z_{j,3} = 1, \mathcal{Y}_{P_{j,3}} \mid \tilde{Y}_i = 1, \tilde{\mathcal{Y}}_{P_{j,2}}, \tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) \times P(\tilde{\mathcal{Y}}_{P_{j,2}}, \tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1} = 1) P(Z_{j,2} = 0 \mid \tilde{Z}_{j,2}) P(Z_{j,3} = 1 \mid \tilde{Z}_{j,3} = 1)\\
    & \times P(\mathcal{Y}_{P_{j,3} \setminus i} \mid \tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(Y_i \mid \tilde{Y}_i = 1) P(\tilde{\mathcal{Y}}_{P_{j,2}}) P(\tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} S_e^2 (1 - S_e)^{\tilde{Z}_{j,2}} S_p^{1 - \tilde{Z}_{j,2}} \ p_{iB}^{(t)} S_e^{Y_i}(1 - S_e)^{1 - Y_i}\\
    & \times \prod\limits_{i' \in P_{j,3} \setminus \{i\}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}}\Big]^{1 - \tilde{Y}_{i'}}\\
    & \times \prod\limits_{i' \in P_j \setminus \{i\}} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    4) When Z_{j,1} = 1, Z_{j,2} = 1, and Z_{j,3} = 0, the process is the same as when Z_{j,1} = 1, Z_{j,2} = 0, and Z_{j,3} = 1, and the numerator again needs to be discussed case by case. At this time,

    \begin{equation*} w_i^{(t)} = P(\tilde{Y}_i = 1 \mid Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}}) = \frac{P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}} \mid \tilde{Y}_i = 1) P(\tilde{Y}_i = 1)}{P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}})}. \end{equation*}

    First, the denominator is

    \begin{align*} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}}) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}} \mid \tilde{\mathcal{Y}}_{P_{j,2}}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{\mathcal{Y}}_{P_{j,2}}) P(\tilde{\mathcal{Y}}_{P_{j,3}})\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1}) P(Z_{j,2} = 1 \mid \tilde{Z}_{j,2}) P(Z_{j,3} = 0 \mid \tilde{Z}_{j,3}) \times \prod\limits_{i' \in P_{j,2}} P(Y_{i'} \mid \tilde{Y}_{i'}) P(\tilde{\mathcal{Y}}_{P_{j,2}}) P(\tilde{\mathcal{Y}}_{P_{j,3}})\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} S_e^{\tilde{Z}_{j,1} + \tilde{Z}_{j,2}}(1 - S_p)^{2 - \tilde{Z}_{j,1} - \tilde{Z}_{j,2}} (1 - S_e)^{\tilde{Z}_{j,3}} S_p^{1 - \tilde{Z}_{j,3}}\\
    & \times \prod\limits_{i' \in P_j} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}} \prod\limits_{i' \in P_{j,2}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}}\Big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    Next, the numerator is discussed.

    (a) Assume that the i-th individual belongs to P_{j,2}. Then, the numerator is

    \begin{align*} P(\tilde{Y}_i = 1, Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}}) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}} \mid \tilde{Y}_i = 1, \tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1} = 1) P(Z_{j,2} = 1 \mid \tilde{Z}_{j,2} = 1) P(Z_{j,3} = 0 \mid \tilde{Z}_{j,3})\\
    & \times P(\mathcal{Y}_{P_{j,2} \setminus i} \mid \tilde{\mathcal{Y}}_{P_{j,2} \setminus i}) P(Y_i \mid \tilde{Y}_i = 1) P(\tilde{\mathcal{Y}}_{P_{j,2} \setminus i}) P(\tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} S_e^2 (1 - S_e)^{\tilde{Z}_{j,3}} S_p^{1 - \tilde{Z}_{j,3}} S_e^{Y_i}(1 - S_e)^{1 - Y_i} \ p_{iB}^{(t)}\\
    & \times \prod\limits_{i' \in P_{j,2} \setminus \{i\}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}}\Big]^{1 - \tilde{Y}_{i'}}\\
    & \times \prod\limits_{i' \in P_j \setminus \{i\}} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    (b) Assume that the i-th individual belongs to P_{j,3}. Then, the numerator is

    \begin{align*} P(\tilde{Y}_i = 1, Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}}) = & \ P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}} \mid \tilde{Y}_i = 1) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 0, \mathcal{Y}_{P_{j,2}} \mid \tilde{Y}_i = 1, \tilde{\mathcal{Y}}_{P_{j,2}}, \tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(\tilde{\mathcal{Y}}_{P_{j,2}}, \tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1} = 1) P(Z_{j,2} = 1 \mid \tilde{Z}_{j,2}) P(Z_{j,3} = 0 \mid \tilde{Z}_{j,3} = 1)\\
    & \times P(\mathcal{Y}_{P_{j,2}} \mid \tilde{\mathcal{Y}}_{P_{j,2}}) P(\tilde{\mathcal{Y}}_{P_{j,2}}) P(\tilde{\mathcal{Y}}_{P_{j,3} \setminus i}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} S_e (1 - S_e) S_e^{\tilde{Z}_{j,2}}(1 - S_p)^{1 - \tilde{Z}_{j,2}} \ p_{iB}^{(t)}\\
    & \times \prod\limits_{i' \in P_{j,2}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}}\Big]^{1 - \tilde{Y}_{i'}} \prod\limits_{i' \in P_j \setminus \{i\}} \big[p_{i'B}^{(t)}\big]^{\tilde{Y}_{i'}} \big[1 - p_{i'B}^{(t)}\big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    5) When Z_{j,1} = 1, Z_{j,2} = 1, and Z_{j,3} = 1, two partitions are performed as above, and individual retests are conducted for all individuals in P_j. At this time, \mathcal{Y}_{P_j} = \mathcal{Y}_{P_{j,2}} \cup \mathcal{Y}_{P_{j,3}}, and we have

    \begin{equation*} w_i^{(t)} = P(\tilde{Y}_i = 1 \mid Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_{j,2}}, \mathcal{Y}_{P_{j,3}}) = \frac{P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j} \mid \tilde{Y}_i = 1) P(\tilde{Y}_i = 1)}{P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j})}. \end{equation*}

    The denominator is

    \begin{align*} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j}) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j} \mid \tilde{\mathcal{Y}}_{P_j}) P(\tilde{\mathcal{Y}}_{P_j})\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1}) P(Z_{j,2} = 1 \mid \tilde{Z}_{j,2}) P(Z_{j,3} = 1 \mid \tilde{Z}_{j,3}) \prod\limits_{i' \in P_j} P(Y_{i'} \mid \tilde{Y}_{i'}) P(\tilde{Y}_{i'})\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j}} S_e^{\tilde{Z}_{j,1}}(1 - S_p)^{1 - \tilde{Z}_{j,1}} \prod\limits_{u = 2}^{3} S_e^{\tilde{Z}_{j,u}}(1 - S_p)^{1 - \tilde{Z}_{j,u}}\\
    & \times \prod\limits_{i' \in P_j} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}} p_{i'B}^{(t)}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}} \big(1 - p_{i'B}^{(t)}\big)\Big]^{1 - \tilde{Y}_{i'}}. \end{align*}

    The results for the i-th individual belonging to P_{j,2} or to P_{j,3} are symmetric, so assume that the i-th individual belongs to P_{j,2}. Then, the numerator is

    \begin{align*} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j}, \tilde{Y}_i = 1) = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1, Z_{j,2} = 1, Z_{j,3} = 1, \mathcal{Y}_{P_j} \mid \tilde{Y}_i = 1, \tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{\mathcal{Y}}_{P_{j,2} \setminus i}, \tilde{\mathcal{Y}}_{P_{j,3}}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} P(Z_{j,1} = 1 \mid \tilde{Z}_{j,1} = 1) P(Z_{j,2} = 1 \mid \tilde{Z}_{j,2} = 1) P(Z_{j,3} = 1 \mid \tilde{Z}_{j,3})\\
    & \times P(Y_i \mid \tilde{Y}_i = 1) P(\mathcal{Y}_{P_j \setminus i} \mid \tilde{\mathcal{Y}}_{P_j \setminus i}) P(\tilde{\mathcal{Y}}_{P_j \setminus i}) P(\tilde{Y}_i = 1)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_{P_j \setminus i}} S_e^2 S_e^{\tilde{Z}_{j,3}}(1 - S_p)^{1 - \tilde{Z}_{j,3}} \ p_{iB}^{(t)} S_e^{Y_i}(1 - S_e)^{1 - Y_i}\\
    & \times \prod\limits_{i' \in P_j \setminus \{i\}} \Big[S_e^{Y_{i'}}(1 - S_e)^{1 - Y_{i'}} p_{i'B}^{(t)}\Big]^{\tilde{Y}_{i'}} \Big[(1 - S_p)^{Y_{i'}} S_p^{1 - Y_{i'}} \big(1 - p_{i'B}^{(t)}\big)\Big]^{1 - \tilde{Y}_{i'}}. \end{align*}
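
    All five halving outcomes share the same structure: a prior term for each latent status vector, one factor per pool or half-pool test, and one factor per individual retest. The following brute-force sketch evaluates the halving posterior in this way; it is our own illustration, and it assumes that the first half of the probability vector corresponds to P_{j,2} and that retests are supplied only for individuals in halves that tested positive.

        from itertools import product

        def halving_posterior(p, Se, Sp, Z1, Z2=None, Z3=None, Y=None):
            """Posterior P(true status = 1 | halving-test data) for each member of pool P_j,
            by enumerating latent status vectors; covers the five outcome patterns above.
            p : probabilities p_iB^(t) (first half = P_{j,2}); Z1, Z2, Z3 : pool and half-pool results;
            Y : dict {index: retest result} for individuals retested in positive halves."""
            n = len(p)
            half = n // 2                       # equal split, as assumed in the text
            Y = Y or {}

            def test_prob(obs, true):           # P(observed | true) for any single test
                return (Se if obs else 1 - Se) if true else ((1 - Sp) if obs else Sp)

            num = [0.0] * n
            den = 0.0
            for y_true in product([0, 1], repeat=n):
                joint = 1.0
                for pi, yt in zip(p, y_true):
                    joint *= pi if yt else (1.0 - pi)
                joint *= test_prob(Z1, max(y_true))                 # first (master) test
                if Z1 == 1:
                    joint *= test_prob(Z2, max(y_true[:half]))      # half P_{j,2}
                    joint *= test_prob(Z3, max(y_true[half:]))      # half P_{j,3}
                    for i, yi in Y.items():                         # retests in positive halves
                        joint *= test_prob(yi, y_true[i])
                den += joint
                for i in range(n):
                    if y_true[i]:
                        num[i] += joint
            return [ni / den for ni in num]

    For example, halving_posterior([0.05]*4, 0.95, 0.98, Z1=1, Z2=1, Z3=0, Y={0: 1, 1: 0}) corresponds to the situation of case 4 with a pool of four individuals.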

    For convenience, assume that the set of all individuals is G and that they can be arranged into an R \times C array, that is, G = \{(r, c), r \in R, c \in C\}. Define \mathcal{R} = (R_1, R_2, \cdots, R_R) and \mathcal{C} = (C_1, C_2, \cdots, C_C) as the collections of row and column testing results, respectively, and let R = \max_r R_r and C = \max_c C_c. Furthermore, define \tilde{R}_r = \max_c \tilde{Y}_{rc} and \tilde{C}_c = \max_r \tilde{Y}_{rc} as the true statuses of the rows and columns, respectively. Let Y_{rc} denote the testing result of the individual in the r-th row and c-th column of the array, and let \tilde{Y}_{rc} denote the true disease status of that individual. Let

    \begin{equation*} Q = \big\{(s, t) \mid R_s = 1, C_t = 1, 1 \leq s \leq R, 1 \leq t \leq C; \ \text{or } R_s = 1, C_1 = \cdots = C_C = 0, 1 \leq s \leq R; \ \text{or } R_1 = \cdots = R_R = 0, C_t = 1, 1 \leq t \leq C \big\}. \end{equation*}

    \mathcal{Y}_Q represents the collection of retest responses from all potentially positive individuals, and \tilde{\mathcal{Y}}_Q denotes their true disease statuses. Let \mathcal{Z}_G = (R, C) denote the group testing responses. For (r, c) \in G, define

    \begin{equation*} \tilde{\mathcal{Y}}_{G \setminus (r, c)} = \big\{\tilde{Y}_{r'c'}, \ r' \in R \setminus \{r\}, \ c' \in C \setminus \{c\}\big\}. \end{equation*}

    Then,

    \begin{equation*} w_{rc}^{(t)} = P(\tilde{Y}_{rc} = 1 \mid \mathcal{Z}_G, \mathcal{Y}_Q) = \frac{P(\tilde{Y}_{rc} = 1, \mathcal{Z}_G, \mathcal{Y}_Q)}{P(\mathcal{Z}_G, \mathcal{Y}_Q)}. \end{equation*}
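
    Before working through the individual cases, note that the retest set Q defined above can be built directly from the observed row and column results. The small sketch below is ours, written for illustration; it takes the indicators R_s and C_t as 0/1 lists and uses 0-based indices, whereas the definition above is stated with 1-based indices.

        def retest_set(R, C):
            """Build the set Q of array cells flagged for individual retesting
            from the row results R = (R_1, ..., R_R) and column results C = (C_1, ..., C_C)."""
            pos_rows = [s for s, r in enumerate(R) if r == 1]
            pos_cols = [t for t, c in enumerate(C) if c == 1]
            if pos_rows and pos_cols:             # intersections of positive rows and columns
                return {(s, t) for s in pos_rows for t in pos_cols}
            if pos_rows:                          # positive rows but no positive column
                return {(s, t) for s in pos_rows for t in range(len(C))}
            if pos_cols:                          # positive columns but no positive row
                return {(s, t) for s in range(len(R)) for t in pos_cols}
            return set()                          # Z_G = (0, 0): nothing to retest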

    1) When \mathcal{Z}_G = (0, 0), there is no need to retest individuals within the group. At this time,

    \begin{equation*} w_{rc}^{(t)} = P\big(\tilde{Y}_{rc} = 1 \mid \mathcal{Z}_G = (0, 0)\big) = \frac{P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (0, 0)\big)}{P\big(\mathcal{Z}_G = (0, 0)\big)}. \end{equation*}

    The denominator is

    \begin{align*} P\big(\mathcal{Z}_G = (0, 0)\big) = & \sum\limits_{\tilde{\mathcal{Y}}_G} P\big(\mathcal{Z}_G = (0, 0) \mid \tilde{\mathcal{Y}}_G\big) P(\tilde{\mathcal{Y}}_G) = \sum\limits_{\tilde{\mathcal{Y}}_G} P(\mathcal{R} = 0 \mid \tilde{\mathcal{Y}}_G) P(\mathcal{C} = 0 \mid \tilde{\mathcal{Y}}_G) P(\tilde{\mathcal{Y}}_G)\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_G} \bigg[\prod\limits_{r' = 1}^{R} P(R_{r'} = 0 \mid \tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C})\bigg] \bigg[\prod\limits_{c' = 1}^{C} P(C_{c'} = 0 \mid \tilde{Y}_{1c'}, \tilde{Y}_{2c'}, \cdots, \tilde{Y}_{Rc'})\bigg]\\
    & \times \prod\limits_{r' \in R} \prod\limits_{c' \in C} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \big(1 - p_{r'c'B}^{(t)}\big)^{1 - \tilde{Y}_{r'c'}}\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_G} \prod\limits_{r' = 1}^{R} \Big[(1 - S_e)^{\tilde{R}_{r'}} S_p^{1 - \tilde{R}_{r'}}\Big] \prod\limits_{c' = 1}^{C} \Big[(1 - S_e)^{\tilde{C}_{c'}} S_p^{1 - \tilde{C}_{c'}}\Big] \prod\limits_{(r', c') \in G} \Big\{{p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \big(1 - p_{r'c'B}^{(t)}\big)^{1 - \tilde{Y}_{r'c'}}\Big\}\\
    = & \sum\limits_{\tilde{\mathcal{Y}}_G} \prod\limits_{r' = 1}^{R} \prod\limits_{c' = 1}^{C} \Big[(1 - S_e)^{\tilde{R}_{r'} + \tilde{C}_{c'}} S_p^{2 - \tilde{R}_{r'} - \tilde{C}_{c'}}\Big] \prod\limits_{(r', c') \in G} \Big\{{p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \big(1 - p_{r'c'B}^{(t)}\big)^{1 - \tilde{Y}_{r'c'}}\Big\}. \end{align*}

    The numerator is

    \begin{align*} P\big(\mathcal{Z}_G = (0, 0), \tilde{Y}_{rc} = 1\big) & = P\big(\mathcal{Z}_G = (0, 0) | \tilde{Y}_{rc} = 1\big) P(\tilde{Y}_{rc} = 1)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}} P(\mathcal{R} = 0, \mathcal{C} = 0 | \tilde{\mathcal{Y}}_{G \setminus (r, c)}, \tilde{Y}_{rc} = 1) P(\tilde{\mathcal{Y}}_{G \setminus (r, c)})\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}} P(R_r = 0 | \tilde{R}_r = 1) \bigg [\prod\limits_{r'\in R\setminus \{r\}} P(R_{r'} = 0 | \tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C}) \bigg ]\\ & \times P(C_c = 0 | \tilde{C}_c = 1) \bigg [\prod\limits_{c'\in C\setminus \{c\}} P(C_{c'} = 0 | \tilde{Y}_{1c'}, \cdots, \tilde{Y}_{Rc'}) \bigg ]\\ & \times \prod\limits_{r' \in R\setminus \{r\}} \prod\limits_{c' \in C\setminus \{c\}} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} p_{rcB}^{(t)}\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}} (1-S_e)^2 \prod\limits_{r'\in R \setminus \{r\}} (1-S_e)^{\tilde{R}_{r'}} S_p^{1-\tilde{R}_{r'}} \prod\limits_{c'\in C \setminus \{c\}} (1-S_e)^{\tilde{C}_{c'}} S_p^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(r', c') \in G} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} p_{rcB}^{(t)}\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}} \prod\limits_{r' \in R\setminus \{r\}} \prod\limits_{c' \in C\setminus \{c\}} (1-S_e)^{2+\tilde{R}_i+\tilde{C}_c} S_p^{2-\tilde{R}_i-\tilde{C}_c}\\ & \times \prod\limits_{(r', c') \in G\setminus(r, c)} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} p_{rcB}^{(t)}. \end{align*}

    2) When \mathcal{Z}_G \neq (0, 0) , the pair \mathcal{Z}_G = (R, C) can take three values: (R, C) = (1, 0) , (R, C) = (0, 1) , and (R, C) = (1, 1) . These cases are discussed in turn:

    (a) When (R, C) = (1, 0) ,

    \begin{equation*} {w}^{(t)}_{rc} = P\big(\tilde{Y}_{rc} = 1 \mid \mathcal{Z}_G = (1, 0)\big) = \frac{P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (1, 0)\big)}{P\big(\mathcal{Z}_G = (1, 0)\big)}. \end{equation*}

    The denominator is

    \begin{align*} P\big(\mathcal{Z}_G = (1, 0)\big) = & \sum\limits_{\tilde{\mathcal{Y}}_G} P(\mathcal{R} \neq 0, \mathcal{C} = 0, \mathcal{Y}_Q | \tilde{\mathcal{Y}}_G) P(\tilde{\mathcal{Y}}_G) \\ = & \sum\limits_{\tilde{\mathcal{Y}}_G} \bigg[ \prod\limits_{r' = 1}^{R} P\Big(R_{r'} | \tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \dots, \tilde{Y}_{r'C}\Big) \bigg] \bigg[ \prod\limits_{c' = 1}^{C} P\Big(C_{c'} = 0 | \tilde{Y}_{1c'}, \tilde{Y}_{2c'}, \dots, \tilde{Y}_{Rc'}\Big) \bigg] \\ &\times \bigg[ \prod\limits_{(s, t) \in \mathcal{Q}} P\Big(Y_{st} | \tilde{Y}_{st}\Big) \bigg] \bigg[ \prod\limits_{r' \in R} \prod\limits_{c' \in C} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1 - {p_{r'c'B}^{(t)}}\Big)^{1 - \tilde{Y}_{r'c'}} \bigg] \\ = & \sum\limits_{\tilde{\mathcal{Y}}_G} \prod\limits_{r' = 1}^{R} \bigg[ S_e^{R_{r'}} (1 - S_e)^{1 - R_{r'}} \bigg]^{\tilde{R}_{r'c}} \bigg[ (1 - S_p)^{R_{r'}} S_p^{1 - R_{r'}} \bigg]^{1 - \tilde{R}_{r'}} \\ & \times \prod\limits_{c' = 1}^{C} \bigg[ (1 - S_e)^{1 - C_{c'}} \bigg]^{\tilde{C}_{c'}} \bigg[ S_p^{1 - C_{c'}} \bigg]^{1 - \tilde{C}_{c'}} \times \prod\limits_{(s, t) \in Q} \bigg[ S_e^{Y_{st}} (1 - S_e)^{1 - Y_{st}} \bigg]^{\tilde{Y}_{st}} \bigg[ (1 - S_p)^{Y_{st}} S_p^{1 - Y_{st}} \bigg]^{1 - \tilde{Y}_{st}} \\ &\times \prod\limits_{(r', c') \in G} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1 - {p_{r'c'B}^{(t)}}\Big)^{1 - \tilde{Y}_{r'c'}}. \end{align*}

    At this point, the numerator requires further discussion:

    (i) If (r, c) \in Q and R_r = 1 and C_c = 0 , then

    \begin{align*} &P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (1, 0)\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\big(\mathcal{Z}_G = \big(1, 0\big), \mathcal{Y}_Q\big|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big)P\big(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big) \\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\big(\mathcal{R}\neq0\big|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big)P\big(\mathcal{C} = 0\big|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big) \\ & \times P\big(\mathcal{Y}_Q\big|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{Q \setminus (r, c)}\big)P\big(\tilde{Y}_{rc} = 1\big)P\big(\tilde{\mathcal{Y}}_{G \setminus (r, c)}\big) \\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\Big(R_r = 1|\tilde{R}_r = 1\Big) \bigg[\prod\limits_{r'\in R\setminus \{r\}} P\Big(R_{r'}|\tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C}\Big)\bigg] \\ & \times P\Big(C_c = 0|\tilde{C}_c = 1\Big)\bigg[\prod\limits_{c'\in C\setminus \{c\}}P\Big(C_{c'} = 0|\tilde{Y}_{1c'}, \cdots, \tilde{Y}_{Rc'}\Big)\bigg]P\Big(Y_{rc}|\tilde{Y}_{rc} = 1\Big) \\ & \times \bigg[\prod\limits_{(s, t)\in Q\setminus {(r, c)}}P\Big(Y_{st}|\tilde{Y}_{st}\Big)\bigg] p_{rcB}^{(t)} \times \bigg[\prod\limits_{r'\in R\setminus \{r\}}\prod\limits_{c'\in C\setminus \{c\}} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1-{p_{r'c'B}^{(t)}}\Big)^{1-{\tilde{Y}_{r'c'}}}\bigg] \\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}S_e^{1+Y_{rc}}\Big(1-S_e\Big)^{2-Y_{rc}}p_{rcB}^{(t)}\\ & \times \prod\limits_{r' \in R\setminus \{r\}}\bigg[S_e^{R_{r'}}\Big(1-S_e\Big)^{1-{R_{r'}}}\bigg]^{\tilde{R}_{r'c}} \Big[(1-S_p)^{R_{r'}}S_p^{1-R_{r'}}\Big]^{1-\tilde{R}_{r'}} \\ & \times \prod\limits_{c' \in C\setminus \{c\}}\bigg[(1-S_e)^{1-{C_{c'}}}\bigg]^{\tilde{C}_{c'}} \Big[S_p^{1-C_{c'}}\Big]^{1-\tilde{C}_{c'}} \\ & \times \prod\limits_{(s, t)\in Q\setminus\{(r, c)\}} \bigg[S_e^{Y_{st}}\Big(1-S_e\Big)^{1-Y_{st}}\bigg]^{\tilde{Y}_{st}} \bigg[(1-S_p)^{Y_{st}}S_p^{1-Y_{st}}\bigg]^{1-\tilde{Y}_{st}} \\ & \times \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1-{p_{r'c'B}^{(t)}}\Big)^{1-{\tilde{Y}_{r'c'}}} . \end{align*}

    (ii) If (r, c) \notin Q , then \mathcal{R} \neq 0 , \mathcal{C} = 0 , but R_r = 0 and C_c = 0 :

    \begin{align*} &P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (1, 0)\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\bigg(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\bigg)P\big(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\bigg(\mathcal{R}|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\bigg) P\bigg(\mathcal{C}|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\bigg) P\big(\mathcal{Y}_Q\big|\tilde{\mathcal{Y}}_Q\big)P\big(\tilde{Y}_{rc} = 1\big)P\big(\tilde{\mathcal{Y}}_{G \setminus (r, c)}\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P\Big(R_r = 0|\tilde{R}_r = 1\Big) \bigg[\prod\limits_{r'\in R\setminus \{r\}} P\Big(R_{r'}|\tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C}\Big)\bigg]\\ & \times P\Big(C_c = 0|\tilde{C}_c = 1\Big) \bigg[\prod\limits_{c'\in C\setminus \{c\}}P\Big(C_{c'} = 0|\tilde{Y}_{1c'}, \cdots, \tilde{Y}_{Rc'}\Big)\bigg]\\ & \times \bigg[\prod\limits_{(s, t)\in Q}P\Big(Y_{st}|\tilde{Y}_{st}\Big)\bigg] p_{rcB}^{(t)} \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1-{p_{r'c'B}^{(t)}}\Big)^{1-{\tilde{Y}_{r'c'}}}\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}S_e^{Y_{rc}}\Big(1-S_e\Big)^{3-Y_{rc}}p_{rcB}^{(t)}\\ & \times \prod\limits_{r' \in R\setminus \{r\}} \bigg[S_e^{R_{r'}}\big(1-S_e\big)^{1-{R_{r'}}}\bigg]^{\tilde{R}_{r'c}} \Big[(1-S_p)^{R_{r'}}S_p^{1-R_{r'}}\Big]^{1-\tilde{R}_{r'c}}\\ & \times \prod\limits_{c' \in C\setminus \{c\}} \bigg[(1-S_e)^{1-{C_{c'}}}\bigg]^{\tilde{C}_{c'}} \Big[S_p^{1-C_{c'}}\Big]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q} \bigg[S_e^{Y_{st}}\big(1-S_e\big)^{1-Y_{st}}\bigg]^{\tilde{Y}_{st}} \Big[(1-S_p)^{Y_{st}}S_p^{1-Y_{st}}\Big]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} \bigg[{p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \Big(1-{p_{r'c'B}^{(t)}}\Big)^{1-{\tilde{Y}_{r'c'}}}\bigg]. \end{align*}

    (b) When (R, C) = (0, 1) , the denominator is

    \begin{align*} P\big(\mathcal{Z}_G = (0, 1)\big) = &\sum\limits_{\tilde{\mathcal{Y}}_G}P(\mathcal{R} = 0, \mathcal{C}\neq0, \mathcal{Y}_Q|\tilde{\mathcal{Y}}_G)P\big(\tilde{\mathcal{Y}}_G\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_G} \bigg[ \prod\limits_{r' = 1}^{R} P(R_r' = 0|\tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C}) \bigg] \\ & \times \bigg[ \prod\limits_{c' = 1}^{C} P(C_{c'}|\tilde{Y}_{1c'}, \tilde{Y}_{2c'}, \cdots, \tilde{Y}_{Rc'}) \bigg] \\ & \times \prod\limits_{(s, t)\in \mathcal{Q}} P\big(Y_{st}\big|\tilde{Y}_{st}\big) \prod\limits_{r' \in R}\prod\limits_{c' \in C} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} \\ = &\sum\limits_{\tilde{\mathcal{Y}}_G} \prod\limits_{r' = 1}^{R} \bigg[ \big(1-S_e\big)^{1-R_{r'}} \bigg]^{\tilde{R}_{r'}} \Big[ S_p^{1-R_{r'}} \Big]^{1-\tilde{R}_{r'}} \\ & \times \prod\limits_{c' = 1}^{C} \bigg[ S_e^{C_{c'}}\big(1-S_e\big)^{1-C_{c'}} \bigg]^{\tilde{C}_{c'}} \Big[ (1-S_p)^{C_{c'}}S_p^{1-C_{c'}} \Big]^{1-\tilde{C}_{c'}} \\ & \times \prod\limits_{(s, t)\in Q} \bigg[ S_e^{Y_{st}}\big(1-S_e\big)^{1-Y_{st}} \bigg]^{\tilde{Y}_{st}} \Big[ (1-S_p)^{Y_{st}}S_p^{1-Y_{st}} \Big]^{1-\tilde{Y}_{st}} \\ & \times \prod\limits_{(r', c') \in G} \bigg[ {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} \bigg]. \end{align*}

    The numerator requires further discussion:

    (ⅰ) If (r, c) \in Q and R_r = 0 and C_c = 1 , then

    \begin{align*} &P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (0, 1)\big) = P(\tilde{Y}_{rc} = 1, \mathcal{R}, \mathcal{C}, \tilde{\mathcal{Y}}_Q)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})P\big(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)}\big)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}S_e^{Y_{rc}}(3-S_e)^{1-Y_{rc}}p_{rcB}^{(t)} \prod\limits_{r' \in R\setminus \{r\}} \bigg[\big(1-S_e\big)^{1-{R_{r'}}}\bigg]^{\tilde{R}_{r'c}} \Big[S_p^{1-R_{r'}}\Big]^{1-\tilde{R}_{r'c}}\\ & \times \prod\limits_{c' \in C\setminus \{c\}} \bigg[S_e^{C_{c'}}\big(1-S_e\big)^{1-{C_{c'}}}\bigg]^{\tilde{C}_{c'}} \Big[(1-S_p)^{C_{c'}}S_p^{1-C_{c'}}\Big]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q} \bigg[S_e^{Y_{st}}\big(1-S_e\big)^{1-Y_{st}}\bigg]^{\tilde{Y}_{st}} \Big[(1-S_p)^{Y_{st}}S_p^{1-Y_{st}}\Big]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} \bigg[{p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}}\bigg]. \end{align*}

    (ⅱ) If (r, c) \notin Q , then \mathcal{R} = 0 , \mathcal{C} \neq 0 , but R_r = 0 and C_c = 0 :

    \begin{align*} P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (0, 1)\big) = &P(\tilde{Y}_{rc} = 1, \mathcal{R}, \mathcal{C}, \mathcal{Y}_Q)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})P(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}S_e^{Y_{rc}}(3-S_e)^{1-Y_{rc}}p_{rcB}^{(t)} \prod\limits_{r' \in R\setminus \big\{r\big\}} \bigg[ \big(1-S_e\big)^{1-{R_{r'}}} \bigg]^{\tilde{R}_{r'c}} \bigg[ S_p^{1-R_{r'}} \bigg]^{1-\tilde{R}_{r'c}}\\ & \times \prod\limits_{c' \in C\setminus \big\{c\big\}} \bigg[ S_e^{C_{c'}}\big(1-S_e\big)^{1-{C_{c'}}} \bigg]^{\tilde{C}_{c'}} \bigg[ \big(1-S_p\big)^{C_{c'}}S_p^{1-C_{c'}} \bigg]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q} \bigg[ S_e^{Y_{st}}\big(1-S_e\big)^{1-Y_{st}} \bigg]^{\tilde{Y}_{st}} \bigg[ \big(1-S_p\big)^{Y_{st}}S_p^{1-Y_{st}} \bigg]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c')\in G\setminus \big\{(r, c)\big\}} ( {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} \big(1-{p_{r'c'B}^{(t)}}\big)^{1-{\tilde{Y}_{r'c'}}} ). \end{align*}

    (c) When (R, C) = (1, 1) , the denominator is

    \begin{align*} P\big(\mathcal{Z}_G = (1, 1)\big) = &\sum\limits_{\tilde{\mathcal{Y}}}P(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q | \tilde{\mathcal{Y}})P(\tilde{\mathcal{Y}})\\ = &\sum\limits_{\tilde{\mathcal{Y}}} \bigg[\prod\limits_{r' = 1}^{R} P(R_r' | \tilde{Y}_{r'1}, \tilde{Y}_{r'2}, \cdots, \tilde{Y}_{r'C} ) \bigg] \bigg[\prod\limits_{c' = 1}^{C} P(C_{c'} | \tilde{Y}_{1c'}, \tilde{Y}_{2c'}, \cdots, \tilde{Y}_{Rc'} ) \bigg] \\ & \times \prod\limits_{(s, t)\in \mathcal{Q}} P(Y_{st} | \tilde{Y}_{st} ) \prod\limits_{r' \in R} \prod\limits_{c' \in C} {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}}\\ = &\sum\limits_{\tilde{\mathcal{Y}}} \prod\limits_{r' = 1}^{R} \bigg[ S_e^{R_{r'}}(1-S_e)^{1-{R_{r'}}} \bigg]^{\tilde{R}_{r'c}} \bigg[ (1-S_p)^{R_{r'}}S_p^{1-R_{r'}} \bigg]^{1-\tilde{R}_{r'}}\\ & \times \prod\limits_{c' = 1}^{C} \bigg[ S_e^{C_{c'}}(1-S_e)^{1-C_{c'}} \bigg]^{\tilde{C}_{c'}} \bigg[ (1-S_p)^{C_{c'}}S_p^{1-C_{c'}} \bigg]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q} \bigg[ S_e^{Y_{st}}(1-S_e)^{1-Y_{st}} \bigg]^{\tilde{Y}_{st}} \bigg[ (1-S_p)^{Y_{st}}S_p^{1-Y_{st}} \bigg]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c') \in G} \bigg\{ {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} \bigg\}. \end{align*}

    For the numerator, we provide the following derivations:

    (ⅰ) If (r, c) \in Q and R_r = 1 and C_c = 1 , then

    \begin{align*} &P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (1, 1)\big) = P(\tilde{Y}_{rc} = 1, \mathcal{R}, \mathcal{C}, \mathcal{Y}_Q)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q | \tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})P(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}S_e^{2+Y_{rc}}(1-S_e)^{1-Y_{rc}} \cdot p_{rcB}^{(t)}\\ & \times \prod\limits_{r' \in R\setminus \{r\}}\bigg[ S_e^{R_{r'}}(1-S_e)^{1-{R_{r'}}} \bigg]^{\tilde{R}_{r'c}} \bigg[ (1-S_p)^{R_{r'}}S_p^{1-R_{r'}} \bigg]^{1-\tilde{R}_{r'c}}\\ & \times \prod\limits_{c' \in C\setminus \{c\}}\bigg[ S_e^{C_{c'}}(1-S_e)^{1-{C_{c'}}} \bigg]^{\tilde{C}_{c'}} \bigg[ (1-S_p)^{C_{c'}}S_p^{1-C_{c'}} \bigg]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q\setminus \{(r, c)\}} \bigg[ S_e^{Y_{st}}(1-S_e)^{1-Y_{st}} \bigg]^{\tilde{Y}_{st}} \bigg[ (1-S_p)^{Y_{st}}S_p^{1-Y_{st}} \bigg]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} \bigg\{ {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} ( 1-{p_{r'c'B}^{(t)}} )^{1-{\tilde{Y}_{r'c'}}} \bigg\}. \end{align*}

    (ⅱ) If (r, c) \notin Q , then \mathcal{R} \neq 0 , \mathcal{C} \neq 0 , but R_r = 0 and C_c = 0 :

    \begin{align*} &P\big(\tilde{Y}_{rc} = 1, \mathcal{Z}_G = (1, 1)\big) = P(\tilde{Y}_{rc} = 1, \mathcal{R}, \mathcal{C}, \mathcal{Y}_Q)\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}P(\mathcal{R}, \mathcal{C}, \mathcal{Y}_Q|\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})P(\tilde{Y}_{rc} = 1, \tilde{\mathcal{Y}}_{G \setminus (r, c)})\\ = &\sum\limits_{\tilde{\mathcal{Y}}_{G \setminus (r, c)}}(1-S_e)^2S_e^{Y_{rc}}(1-S_e)^{1-Y_{rc}}p_{rcB}^{(t)}\\ & \times \prod\limits_{r' \in R\setminus \{r\}}\bigg [S_e^{R_{r'}}(1-S_e)^{1-{R_{r'}}}\bigg ]^{\tilde{R}_{r'c}} \bigg [(1-S_p)^{R_{r'}}S_p^{1-R_{r'}}\bigg ]^{1-\tilde{R}_{r'c}}\\ & \times \prod\limits_{c' \in C\setminus \{c\}}\bigg [S_e^{C_{c'}}(1-S_e)^{1-{C_{c'}}}\bigg ]^{\tilde{C}_{c'}} \bigg [(1-S_p)^{C_{c'}}S_p^{1-C_{c'}}\bigg ]^{1-\tilde{C}_{c'}}\\ & \times \prod\limits_{(s, t)\in Q} \bigg [S_e^{Y_{st}}(1-S_e)^{1-Y_{st}}\bigg ]^{\tilde{Y}_{st}}\bigg [(1-S_p)^{Y_{st}}S_p^{1-Y_{st}}\bigg ]^{1-\tilde{Y}_{st}}\\ & \times \prod\limits_{(r', c')\in G\setminus \{(r, c)\}} \bigg\{ {p_{r'c'B}^{(t)}}^{\tilde{Y}_{r'c'}} (1-{p_{r'c'B}^{(t)}})^{1-{\tilde{Y}_{r'c'}}} \bigg\}. \end{align*}
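
    As with halving testing, each of the array-testing expressions above is a sum, over latent status configurations, of a product of prior terms, row- and column-test terms, and retest terms, so all of the cases can be checked numerically by direct enumeration on a small array. The sketch below is our own illustration (NumPy assumed); p holds the current p_{rcB}^{(t)} values, R_obs and C_obs the observed row and column results, and Y_Q the retests of the cells in Q.

        from itertools import product
        import numpy as np

        def array_posterior(p, Se, Sp, R_obs, C_obs, Y_Q):
            """Posterior P(true status = 1 | row tests, column tests, retests of Q) for every
            cell of an R x C array, by enumerating latent status matrices (small arrays only).
            p : R x C array of probabilities p_rcB^(t); Y_Q : dict {(r, c): retest result}."""
            p = np.asarray(p, dtype=float)
            R, C = p.shape

            def test_prob(obs, true):           # P(observed | true) for any single test
                return (Se if obs else 1 - Se) if true else ((1 - Sp) if obs else Sp)

            num = np.zeros((R, C))
            den = 0.0
            for flat in product([0, 1], repeat=R * C):
                y = np.array(flat).reshape(R, C)
                joint = float(np.prod(np.where(y == 1, p, 1.0 - p)))   # prior of this configuration
                for r in range(R):                                     # row pool tests
                    joint *= test_prob(R_obs[r], y[r, :].max())
                for c in range(C):                                     # column pool tests
                    joint *= test_prob(C_obs[c], y[:, c].max())
                for (r, c), obs in Y_Q.items():                        # individual retests of Q
                    joint *= test_prob(obs, y[r, c])
                den += joint
                num += joint * y
            return num / den

    Because the enumeration grows as 2^{RC}, this is only practical for very small arrays, but it provides a direct numerical check of the closed-form expressions derived in this section.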

