Research article

A Vision sensing-based automatic evaluation method for teaching effect based on deep residual network


  • Received: 29 November 2022 Revised: 03 January 2023 Accepted: 09 January 2023 Published: 01 February 2023
  • The automatic evaluation of the teaching effect has been a technical problem for many years, because only video frames are available for it and information extraction from such dynamic scenes remains challenging. In recent years, progress in deep learning has boosted the application of computer vision in many areas, which can provide much insight into this issue. Consequently, this paper proposes a vision sensing-based automatic evaluation method for the teaching effect based on a deep residual network (DRN). The DRN is utilized to construct a backbone network for sensing visual features such as attending status, taking notes, playing with phones, looking outside, etc. The extracted visual features are then used as the basis for the evaluation of the teaching effect. We have also collected realistic course images to establish a real-world dataset for the performance assessment of the proposal. The proposed method is implemented on the collected dataset via computer programming-based simulation experiments to obtain accuracy results as the assessment measurement. The obtained results show that the proposal can well perceive typical visual features from video frames of courses and realize automatic evaluation of the teaching effect.

    Citation: Meijuan Sun. A Vision sensing-based automatic evaluation method for teaching effect based on deep residual network[J]. Mathematical Biosciences and Engineering, 2023, 20(4): 6358-6373. doi: 10.3934/mbe.2023275




    The Poisson distribution is one of the most important discrete distributions. It gives the probability of a prescribed number of events occurring in a fixed interval of time or space, when these events happen at a known constant mean rate and independently of the time since the last event. The Poisson distribution is also applicable to counts of events in other kinds of intervals, such as distance, area or volume. The probability mass function (PMF) of the Poisson distribution is given by

    $P(T=t)=\dfrac{\mu^{t}e^{-\mu}}{t!},\qquad t=0,1,2,\ldots,\ \mu>0.$ (1.1)
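    As a small worked illustration of (1.1), with an arbitrarily chosen rate (not a value used elsewhere in the paper), take $\mu=2$; then

    $P(T=3)=\dfrac{2^{3}e^{-2}}{3!}=\dfrac{8\times 0.1353}{6}\approx 0.18,$

    so about 18% of unit intervals would contain exactly three events.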

    Sadooghi-Alvandi [1] considered estimation of the parameter of a Poisson distribution under a LINEX loss function. Zhang et al. [2] computed empirical Bayes estimators of the parameter of the Poisson distribution under Stein's loss function. Li and Hao [3] computed E-Bayesian and hierarchical Bayesian estimators of the Poisson parameter under the entropy loss function.

    E-Bayesian estimation was introduced by Han [4] as a simpler method of estimation. Jaheen and Okasha [5] studied E-Bayesian estimation for the Burr Type XII model based on Type-2 censored data. Karimnezhad et al. [6] considered Bayes, E-Bayes and robust Bayes prediction of a future observation under precautionary prediction loss functions, with applications. Gonzalez-Lopez et al. [7] gave E-Bayesian estimates of system performance measures based on the exponential distribution. Yousefzadeh [8] proposed E-Bayesian and hierarchical Bayesian estimation of the system reliability parameter based on an asymmetric loss function. E-Bayesian estimators based on a structural approach to statistical information were obtained by Okasha and Wang [9]. Han [10,12] computed E-Bayesian estimates for different distributions. Kiapour [13] considered Bayes, E-Bayes and robust Bayes premium estimation and prediction under the squared log error loss function. Using the E-Bayesian method, Okasha [14,15] computed estimates of the parameters of different distributions for different types of data. Okasha et al. [16] computed E-Bayesian estimators of the Burr Type XII distribution based on adaptive Type-II progressive hybrid censored data. Basheer et al. [17] considered E-Bayesian and hierarchical Bayesian estimation of the parameter and reliability of the inverse Weibull distribution. E-Bayesian estimates for a simple step-stress model under the exponential distribution were obtained by Nassar et al. [18]. Athirakrishnan and Abdul-Sathar [19] computed E-Bayesian and hierarchical Bayesian estimates of the scale parameter and reversed hazard rate of the inverse Rayleigh distribution. Okasha and Mustafa [20] studied E-Bayesian estimation for the Weibull distribution in the case of adaptive Type-I progressive hybrid censored competing risks data.

    The empirical Bayesian analysis considers the case in which the parameters of the prior distribution are unknown; that is, the sampling distribution is specified but the prior distribution is not. The marginal distribution is then used to recover the prior distribution from a representative sample. In this case, the hyperparameters are estimated from the random sample and are then used to compute the Bayesian estimators. The empirical Bayes method was introduced by Robbins [21,22,23]. Zhang et al. [24] computed the empirical Bayes estimators of the parameters of the normal distribution with a conjugate normal-inverse-gamma prior. Mikulich-Gilbertson et al. [25] used empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes. Martin et al. [26] proposed empirical Bayes posterior concentration in sparse high-dimensional linear models. Empirical Bayes estimates in an exponential reliability model were computed by Sarhan [27]. Chang and Li [28] proposed an empirical Bayes decision rule for classifying defective items under the Weibull distribution. The role of empirical Bayes methodology in medical statistics is discussed by van Houwelingen [29]. Jaheen [30] derived empirical Bayes estimators for the parameter of the generalized exponential distribution based on record values. In this paper, we extend the idea of empirical estimation to E-Bayesian estimation. We consider the case in which the hyperparameters of the prior are unknown and use the method of moments to estimate them from the random sample. In classical E-Bayesian estimation the hyperparameters are treated as known, but in practice they are unknown, and it is therefore more realistic to use empirical E-Bayesian estimation.

    Based on a complete sample $T_{1},T_{2},\ldots,T_{n}$ of size n from the Poisson distribution, the likelihood function can be written as

    $L(\underline{t};\mu)=\dfrac{\mu^{\sum_{i=1}^{n}t_{i}}\,e^{-n\mu}}{\prod_{i=1}^{n}t_{i}!},\qquad t_{i}=0,1,2,\ldots,\ \mu>0.$ (2.1)

    The conjugate gamma prior distribution for μ is given by

    $\pi(\mu)=\dfrac{\lambda^{\theta}}{\Gamma(\theta)}\,\mu^{\theta-1}e^{-\lambda\mu};\qquad \mu>0,\ \theta>0,\ \lambda>0.$ (2.2)

    From (2.1) and (2.2), the posterior distribution can be obtained as follows:

    $\pi(\mu\mid\underline{t})=\dfrac{(\lambda^{*})^{\theta^{*}}}{\Gamma(\theta^{*})}\,\mu^{\theta^{*}-1}e^{-\lambda^{*}\mu};\qquad \mu>0,\ \theta>0,\ \lambda>0,$ (2.3)

    where

    $\lambda^{*}=n+\lambda,\qquad \theta^{*}=\theta+\sum_{i=1}^{n}t_{i}.$ (2.4)

    Under the squared error loss function, the Bayes estimator of the parameter μ is

    $\hat{\mu}_{B}=\dfrac{\theta^{*}}{\lambda^{*}}.$ (2.5)
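    For completeness, a brief sketch of how (2.3)–(2.5) follow from (2.1) and (2.2); this is the standard conjugate gamma-Poisson calculation:

    $\pi(\mu\mid\underline{t})\propto L(\underline{t};\mu)\,\pi(\mu)\propto \mu^{\sum_{i=1}^{n}t_{i}}e^{-n\mu}\cdot\mu^{\theta-1}e^{-\lambda\mu}=\mu^{\theta+\sum_{i=1}^{n}t_{i}-1}e^{-(n+\lambda)\mu},$

    which is the kernel of a gamma density with shape $\theta^{*}=\theta+\sum_{i=1}^{n}t_{i}$ and rate $\lambda^{*}=n+\lambda$; under squared error loss the Bayes estimator is the posterior mean of this gamma distribution, namely $\theta^{*}/\lambda^{*}$.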

    According to the E-Bayesian method, the prior parameters θ and λ ought to be chosen so that π(μ) is a decreasing function of μ. The derivative of π(μ) is given by

    $\dfrac{d\pi(\mu)}{d\mu}=\dfrac{\lambda^{\theta}}{\Gamma(\theta)}\,\mu^{\theta-2}e^{-\lambda\mu}\left[(\theta-1)-\lambda\mu\right].$ (3.1)

    Hence the prior distribution π(μ) is a decreasing function of μ for $0<\theta<1$ and $\lambda>0$; for more details see Han [4].

    Assume that θ and λ are independent, with joint density function

    $g(\theta,\lambda)=g_{1}(\theta)\,g_{2}(\lambda).$ (3.2)

    Therefore, the E-Bayesian estimator of the parameter μ is given by

    $\hat{\mu}_{EB}=E(\hat{\mu}_{B})=\iint_{D}\hat{\mu}_{B}\,g(\theta,\lambda)\,d\theta\,d\lambda.$ (3.3)

    In this section, the E-Bayesian estimation of the parameter μ is based on three different distributions of the hyperparameters θ and λ. These distributions are used to explore the effect of the different priors on the E-Bayesian estimates of μ. The following distributions of θ and λ are considered:

    $\left.\begin{aligned} g_{1}(\theta,\lambda)&=\dfrac{1}{c\,B(a,b)}\,\theta^{a-1}(1-\theta)^{b-1}, && 0<\theta<1,\ 0<\lambda<c,\\ g_{2}(\theta,\lambda)&=\dfrac{2(c-\lambda)}{c^{2}B(a,b)}\,\theta^{a-1}(1-\theta)^{b-1}, && 0<\theta<1,\ 0<\lambda<c,\\ g_{3}(\theta,\lambda)&=\dfrac{2\lambda}{c^{2}B(a,b)}\,\theta^{a-1}(1-\theta)^{b-1}, && 0<\theta<1,\ 0<\lambda<c.\end{aligned}\right\}$ (3.4)

    For $g_{1}(\theta,\lambda)$, $g_{2}(\theta,\lambda)$ and $g_{3}(\theta,\lambda)$, the E-Bayesian estimators of the parameter μ are obtained from (2.5), (3.3) and (3.4) as, respectively,

    $\hat{\mu}_{EB1}=\dfrac{1}{c}\ln\left(\dfrac{n+c}{n}\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right),$ (3.5)
    $\hat{\mu}_{EB2}=\dfrac{2}{c}\left(\dfrac{n+c}{c}\ln\left(\dfrac{n+c}{n}\right)-1\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right),$ (3.6)

    and

    $\hat{\mu}_{EB3}=\dfrac{2}{c}\left(1-\dfrac{n}{c}\ln\left(\dfrac{n+c}{n}\right)\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right).$ (3.7)
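    As a sketch of how (3.5) is obtained (the derivations of (3.6) and (3.7) are analogous), the integral in (3.3) factorizes into a beta expectation over $\theta$ and a uniform integral over $\lambda$:

    $\hat{\mu}_{EB1}=\int_{0}^{1}\!\!\int_{0}^{c}\dfrac{\theta+\sum_{i=1}^{n}t_{i}}{n+\lambda}\cdot\dfrac{\theta^{a-1}(1-\theta)^{b-1}}{c\,B(a,b)}\,d\lambda\,d\theta=\left(\dfrac{a}{a+b}+\sum_{i=1}^{n}t_{i}\right)\dfrac{1}{c}\int_{0}^{c}\dfrac{d\lambda}{n+\lambda}=\dfrac{1}{c}\ln\left(\dfrac{n+c}{n}\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right).$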

    The E-posterior risk of the E-Bayesian estimation was first introduced by Han [31] as follows:

    $ER(\hat{\mu}_{EB})=\iint_{D}R(\hat{\mu}_{B})\,g(\theta,\lambda)\,d\theta\,d\lambda=E\left[R(\hat{\mu}_{B})\right],$ (3.8)

    where the posterior risk $R(\hat{\mu}_{B})$ of the Bayes estimator is given by

    $R(\hat{\mu}_{B})=\mathrm{Var}(\mu\mid\underline{t})=E(\mu^{2}\mid\underline{t})-\left[E(\mu\mid\underline{t})\right]^{2}.$ (3.9)

    From (2.3) and (3.9) we have

    $R(\hat{\mu}_{B})=\dfrac{\theta^{*}}{(\lambda^{*})^{2}},$ (3.10)

    where $\theta^{*}$ and $\lambda^{*}$ are as in (2.4).

    The E-posterior risks of $\hat{\mu}_{EBi}$ $(i=1,2,3)$, obtained from (3.4), (3.8) and (3.10), are given respectively by

    $\left.\begin{aligned} ER(\hat{\mu}_{EB1})&=\dfrac{1}{n(n+c)}\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right),\\ ER(\hat{\mu}_{EB2})&=\dfrac{2}{c^{2}}\left(\dfrac{c}{n}-\ln\left(\dfrac{n+c}{n}\right)\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right),\\ ER(\hat{\mu}_{EB3})&=\dfrac{2}{c^{2}}\left(\ln\left(\dfrac{n+c}{n}\right)-\dfrac{c}{n+c}\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right).\end{aligned}\right\}$ (3.11)
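    The estimators (3.5)–(3.7) and risks (3.11) are simple closed-form expressions; the following is a minimal computational sketch (Python, written for this presentation; the function and variable names are ours, not the author's):

```python
import math

def e_bayes_estimates(t, a, b, c):
    """E-Bayesian estimates (3.5)-(3.7) and E-posterior risks (3.11)
    for a complete Poisson sample t under squared error loss."""
    n = len(t)
    s = sum(t) + a / (a + b)              # common factor: sum(t_i) + a/(a+b)
    log_term = math.log((n + c) / n)      # ln((n+c)/n)

    mu_eb1 = (1 / c) * log_term * s                               # (3.5)
    mu_eb2 = (2 / c) * ((n + c) / c * log_term - 1) * s           # (3.6)
    mu_eb3 = (2 / c) * (1 - (n / c) * log_term) * s               # (3.7)

    er1 = s / (n * (n + c))                                       # (3.11), first line
    er2 = (2 / c**2) * (c / n - log_term) * s                     # (3.11), second line
    er3 = (2 / c**2) * (log_term - c / (n + c)) * s               # (3.11), third line
    return (mu_eb1, mu_eb2, mu_eb3), (er1, er2, er3)

# Example with an arbitrary sample; the orderings established in the lemmas below
# (mu_eb3 < mu_eb1 < mu_eb2 and er3 < er1 < er2) can be checked numerically.
estimates, risks = e_bayes_estimates([0, 1, 2, 1, 0, 3, 1, 0, 2, 1], a=2, b=1, c=3)
print(estimates, risks)
```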

    This section explores the relations among $\hat{\mu}_{EBi}$ and $ER(\hat{\mu}_{EBi})$, $i=1,2,3$.

    (A) Relations between ˆμEBi, i=1,2,3

    Lemma 1. It follows from (3.5)–(3.7) that

    (i) $\hat{\mu}_{EB3}<\hat{\mu}_{EB1}<\hat{\mu}_{EB2}$.

    (ii) $\lim_{n\to\infty}\hat{\mu}_{EB1}=\lim_{n\to\infty}\hat{\mu}_{EB2}=\lim_{n\to\infty}\hat{\mu}_{EB3}$.

    Proof. (ⅰ) Eqs (3.5)–(3.7) provide

    $\hat{\mu}_{EB1}-\hat{\mu}_{EB3}=\hat{\mu}_{EB2}-\hat{\mu}_{EB1}=\dfrac{1}{c}\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right)\left[\dfrac{c+2n}{c}\ln\left(\dfrac{n+c}{n}\right)-2\right].$ (4.1)

    For $-1<y<1$, we have $\ln(1+y)=y-\dfrac{y^{2}}{2}+\dfrac{y^{3}}{3}-\dfrac{y^{4}}{4}+\cdots=\sum_{k=1}^{\infty}\dfrac{(-1)^{k-1}y^{k}}{k}$. Let $y=\dfrac{c}{n}$; when $0<c<n$, $0<\dfrac{c}{n}<1$, we have

    $\begin{aligned}\dfrac{c+2n}{c}\ln\left(\dfrac{n+c}{n}\right)-2&=\dfrac{c+2n}{c}\left[\dfrac{c}{n}-\dfrac{1}{2}\left(\dfrac{c}{n}\right)^{2}+\dfrac{1}{3}\left(\dfrac{c}{n}\right)^{3}-\dfrac{1}{4}\left(\dfrac{c}{n}\right)^{4}+\dfrac{1}{5}\left(\dfrac{c}{n}\right)^{5}-\cdots\right]-2\\&=\left[\dfrac{c}{n}-\dfrac{1}{2}\left(\dfrac{c}{n}\right)^{2}+\dfrac{1}{3}\left(\dfrac{c}{n}\right)^{3}-\dfrac{1}{4}\left(\dfrac{c}{n}\right)^{4}+\dfrac{1}{5}\left(\dfrac{c}{n}\right)^{5}-\cdots\right]+\left[2-\dfrac{c}{n}+\dfrac{2}{3}\left(\dfrac{c}{n}\right)^{2}-\dfrac{2}{4}\left(\dfrac{c}{n}\right)^{3}+\dfrac{2}{5}\left(\dfrac{c}{n}\right)^{4}-\cdots\right]-2\\&=\left(\dfrac{1}{6}\left(\dfrac{c}{n}\right)^{2}-\dfrac{1}{6}\left(\dfrac{c}{n}\right)^{3}\right)+\left(\dfrac{3}{20}\left(\dfrac{c}{n}\right)^{4}-\dfrac{2}{15}\left(\dfrac{c}{n}\right)^{5}\right)+\cdots\\&=\dfrac{1}{6}\left(\dfrac{c}{n}\right)^{2}\left(1-\dfrac{c}{n}\right)+\dfrac{1}{60}\left(\dfrac{c}{n}\right)^{4}\left(9-\dfrac{8c}{n}\right)+\cdots>0.\end{aligned}$ (4.2)

    From (4.1) and (4.2), we have

    $\hat{\mu}_{EB1}-\hat{\mu}_{EB3}=\hat{\mu}_{EB2}-\hat{\mu}_{EB1}>0,$

    that is

    $\hat{\mu}_{EB3}<\hat{\mu}_{EB1}<\hat{\mu}_{EB2}.$

    (ⅱ) From (4.1) and (4.2), we have

    $\lim_{n\to\infty}\left(\hat{\mu}_{EB1}-\hat{\mu}_{EB3}\right)=\lim_{n\to\infty}\left(\hat{\mu}_{EB2}-\hat{\mu}_{EB1}\right)=\dfrac{1}{c}\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right)\lim_{n\to\infty}\left\{\dfrac{1}{6}\left(\dfrac{c}{n}\right)^{2}\left(1-\dfrac{c}{n}\right)+\dfrac{1}{60}\left(\dfrac{c}{n}\right)^{4}\left(9-\dfrac{8c}{n}\right)+\cdots\right\}=0.$

    That is, $\lim_{n\to\infty}\hat{\mu}_{EB1}=\lim_{n\to\infty}\hat{\mu}_{EB2}=\lim_{n\to\infty}\hat{\mu}_{EB3}$.
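    As a purely illustrative numerical check of (4.2), with values chosen here rather than taken from the paper, let $n=10$ and $c=3$, so that $c/n=0.3$ and $\ln(13/10)\approx 0.26236$. Then

    $\dfrac{c+2n}{c}\ln\left(\dfrac{n+c}{n}\right)-2=\dfrac{23}{3}(0.26236)-2\approx 0.0115,\qquad \dfrac{1}{6}(0.3)^{2}(0.7)+\dfrac{1}{60}(0.3)^{4}(9-2.4)\approx 0.0105+0.0009=0.0114,$

    so the bracketed factor in (4.1) is positive and the first two grouped terms of (4.2) already account for nearly all of its value; this is consistent with the ordering $\hat{\mu}_{EB3}<\hat{\mu}_{EB1}<\hat{\mu}_{EB2}$ visible in Table 1 for $n=10$, $c=3$.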

    (B) Connections between ER(ˆμEBi), i=1,2,3

    Lemma 2. It follows from (3.11) that

    (i) $ER(\hat{\mu}_{EB3})<ER(\hat{\mu}_{EB1})<ER(\hat{\mu}_{EB2})$.

    (ii) $\lim_{n\to\infty}ER(\hat{\mu}_{EB1})=\lim_{n\to\infty}ER(\hat{\mu}_{EB2})=\lim_{n\to\infty}ER(\hat{\mu}_{EB3})$.

    Proof. (ⅰ) From (3.11), we obtain

    $ER(\hat{\mu}_{EB1})-ER(\hat{\mu}_{EB3})=ER(\hat{\mu}_{EB2})-ER(\hat{\mu}_{EB1})=\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right)\left[\dfrac{2n+c}{nc(n+c)}-\dfrac{2}{c^{2}}\ln\left(\dfrac{n+c}{n}\right)\right].$ (4.3)

    For $-1<y<1$, we have $\ln(1+y)=y-\dfrac{y^{2}}{2}+\dfrac{y^{3}}{3}-\dfrac{y^{4}}{4}+\cdots=\sum_{k=1}^{\infty}\dfrac{(-1)^{k-1}y^{k}}{k}$. Let $y=\dfrac{c}{n}$; when $0<c<n$, $0<\dfrac{c}{n}<1$, we have

    $\begin{aligned}\dfrac{2n+c}{nc(n+c)}-\dfrac{2}{c^{2}}\ln\left(\dfrac{n+c}{n}\right)&=\dfrac{2n+c}{nc(n+c)}-\dfrac{2}{c^{2}}\left[\dfrac{c}{n}-\dfrac{1}{2}\left(\dfrac{c}{n}\right)^{2}+\dfrac{1}{3}\left(\dfrac{c}{n}\right)^{3}-\dfrac{1}{4}\left(\dfrac{c}{n}\right)^{4}+\dfrac{1}{5}\left(\dfrac{c}{n}\right)^{5}-\cdots\right]\\&=\dfrac{c}{3n^{3}}-\dfrac{c^{2}}{2n^{4}}+\dfrac{3c^{3}}{5n^{5}}-\dfrac{2c^{4}}{3n^{6}}+\cdots>0.\end{aligned}$ (4.4)

    Eqs (4.3) and (4.4) imply that

    $ER(\hat{\mu}_{EB3})<ER(\hat{\mu}_{EB1})<ER(\hat{\mu}_{EB2}).$

    (ⅱ) From (4.3) and (4.4), we have

    $\lim_{n\to\infty}\left[ER(\hat{\mu}_{EB1})-ER(\hat{\mu}_{EB3})\right]=\lim_{n\to\infty}\left[ER(\hat{\mu}_{EB2})-ER(\hat{\mu}_{EB1})\right]=\left(\sum_{i=1}^{n}t_{i}+\dfrac{a}{a+b}\right)\lim_{n\to\infty}\left[\dfrac{c}{3n^{3}}-\dfrac{c^{2}}{2n^{4}}+\dfrac{3c^{3}}{5n^{5}}-\cdots\right]=0.$

    That is, $\lim_{n\to\infty}ER(\hat{\mu}_{EB1})=\lim_{n\to\infty}ER(\hat{\mu}_{EB2})=\lim_{n\to\infty}ER(\hat{\mu}_{EB3})$. Thus, the proof is complete.

    In this section, we introduce the empirical E-Bayesian (EE-Bayesian) method. Here we consider the case in which the parameters a, b and c in (3.4), which are used in E-Bayesian estimation, are unknown, and we use the empirical Bayes approach to obtain their estimates. The marginal PMF of the random variable T is obtained from (1.1) and (2.2) as

    $f(t)=\int_{0}^{\infty}f(t\mid\mu)\,\pi(\mu)\,d\mu=\dfrac{\Gamma(t+\theta)}{\Gamma(t+1)\,\Gamma(\theta)}\cdot\dfrac{\lambda^{\theta}}{(\lambda+1)^{t+\theta}};\qquad t=0,1,2,\ldots.$ (5.1)

    In particular, when θ is a positive integer, the marginal distribution of the random variable T is a negative binomial distribution, NB(r,p), with parameters

    $r=\theta,\qquad p=\dfrac{\lambda}{1+\lambda}.$ (5.2)
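    A short sketch of the gamma-Poisson mixture calculation behind (5.1), using only (1.1) and (2.2):

    $f(t)=\int_{0}^{\infty}\dfrac{\mu^{t}e^{-\mu}}{t!}\cdot\dfrac{\lambda^{\theta}}{\Gamma(\theta)}\,\mu^{\theta-1}e^{-\lambda\mu}\,d\mu=\dfrac{\lambda^{\theta}}{t!\,\Gamma(\theta)}\int_{0}^{\infty}\mu^{t+\theta-1}e^{-(\lambda+1)\mu}\,d\mu=\dfrac{\Gamma(t+\theta)}{\Gamma(t+1)\Gamma(\theta)}\cdot\dfrac{\lambda^{\theta}}{(\lambda+1)^{t+\theta}},$

    which, for integer $\theta$, is the PMF of $NB\left(r=\theta,\ p=\dfrac{\lambda}{1+\lambda}\right)$, i.e., the distribution of the number of failures before the $r$-th success.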

    The estimators of the parameters θ and λ are obtained by the method of moments. To obtain the moment estimators of θ and λ, we need the first two moments of T, $E(T)$ and $E(T^{2})$. It is easy to show that

    $E(T)=\dfrac{\theta}{\lambda},\qquad E(T^{2})=\dfrac{\theta(\theta+\lambda+1)}{\lambda^{2}}.$ (5.3)

    Furthermore, the moment estimators can be obtained by equating the population moments with the sample moments,

    $E(T^{r})=\dfrac{1}{n}\sum_{i=1}^{n}T_{i}^{r}.$ (5.4)

    From (5.3) and (5.4), we obtain the moment estimators of θ and λ, respectively, by

    $\hat{\theta}_{m}=\dfrac{S_{1}^{2}}{S_{2}-S_{1}-S_{1}^{2}},$ (5.5)

    and

    $\hat{\lambda}_{m}=\dfrac{S_{1}}{S_{2}-S_{1}-S_{1}^{2}},$ (5.6)

    where

    $S_{l}=\dfrac{1}{n}\sum_{i=1}^{n}T_{i}^{l},\qquad l=1,2.$ (5.7)

    Also, to obtain the moment estimators of the parameters a and b in (3.4), we need $E(\theta)$ and $E(\theta^{2})$. It is easy to show that

    $E(\theta)=\dfrac{a}{a+b},\qquad E(\theta^{2})=\dfrac{a(a+1)}{(a+b)(a+b+1)}.$ (5.8)

    Then, from (5.8) and applying the method of moments, we obtain the moment estimators of the parameters a and b, respectively, as

    $\hat{a}_{m}=\dfrac{A_{1}^{2}-A_{1}A_{2}}{A_{2}-A_{1}^{2}},$ (5.9)

    and

    $\hat{b}_{m}=\dfrac{(1-A_{1})(A_{1}-A_{2})}{A_{2}-A_{1}^{2}},$ (5.10)

    where

    $A_{h}=\dfrac{1}{k}\sum_{i=1}^{k}\theta_{i}^{h}\approx\dfrac{1}{k}\sum_{i=1}^{k}\left(\hat{\theta}_{mi}\right)^{h},\qquad h=1,2,$ (5.11)

    where ˆθm is given by (5.5).

    The moment estimators of c for $g_{1}(\theta,\lambda)$, $g_{2}(\theta,\lambda)$ and $g_{3}(\theta,\lambda)$ are given, respectively, by

    $\hat{c}_{m1}=\dfrac{2}{k}\sum_{i=1}^{k}\hat{\lambda}_{mi},$ (5.12)
    $\hat{c}_{m2}=\dfrac{3}{k}\sum_{i=1}^{k}\hat{\lambda}_{mi},$ (5.13)

    and

    $\hat{c}_{m3}=\dfrac{3}{2k}\sum_{i=1}^{k}\hat{\lambda}_{mi},$ (5.14)

    where $\hat{\lambda}_{m}$ is given by (5.6).
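    The plug-in steps (5.5)–(5.14) can be summarized in the following sketch (Python; the helper names are ours, and k independent samples are assumed to supply the $(\hat{\theta}_{mi},\hat{\lambda}_{mi})$ pairs used in (5.11)–(5.14)):

```python
def moment_theta_lambda(sample):
    """Moment estimators (5.5)-(5.6) of theta and lambda from one sample."""
    n = len(sample)
    s1 = sum(sample) / n                       # S_1
    s2 = sum(x * x for x in sample) / n        # S_2
    denom = s2 - s1 - s1 ** 2                  # can be non-positive for some samples;
    return s1 ** 2 / denom, s1 / denom         # such cases need special handling in practice

def moment_hyperparameters(samples):
    """Estimators (5.9), (5.10) and (5.12)-(5.14) of a, b and c from k >= 2
    independent samples, one (theta_hat, lambda_hat) pair per sample."""
    k = len(samples)
    thetas, lambdas = zip(*(moment_theta_lambda(s) for s in samples))
    a1 = sum(thetas) / k                                 # A_1
    a2 = sum(th * th for th in thetas) / k               # A_2
    a_hat = (a1 ** 2 - a1 * a2) / (a2 - a1 ** 2)         # (5.9)
    b_hat = (1 - a1) * (a1 - a2) / (a2 - a1 ** 2)        # (5.10)
    lam_bar = sum(lambdas) / k
    c_hats = (2 * lam_bar, 3 * lam_bar, 1.5 * lam_bar)   # (5.12)-(5.14)
    return a_hat, b_hat, c_hats
```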

    Then, the EE-Bayesian estimates of the parameter μ for $g_{1}(\theta,\lambda)$, $g_{2}(\theta,\lambda)$ and $g_{3}(\theta,\lambda)$ are given, respectively, by

    $\hat{\mu}_{EEB1}=\dfrac{1}{\hat{c}_{m1}}\ln\left(\dfrac{n+\hat{c}_{m1}}{n}\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right),$ (5.15)
    $\hat{\mu}_{EEB2}=\dfrac{2}{\hat{c}_{m2}}\left(\dfrac{n+\hat{c}_{m2}}{\hat{c}_{m2}}\ln\left(\dfrac{n+\hat{c}_{m2}}{n}\right)-1\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right),$ (5.16)

    and

    $\hat{\mu}_{EEB3}=\dfrac{2}{\hat{c}_{m3}}\left(1-\dfrac{n}{\hat{c}_{m3}}\ln\left(\dfrac{n+\hat{c}_{m3}}{n}\right)\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right).$ (5.17)

    The empirical E-posterior risks associated with the EE-Bayesian estimates $\hat{\mu}_{EEBi}$ $(i=1,2,3)$ are obtained from (3.11) by replacing the parameters a, b and c with their corresponding moment estimators:

    $\left.\begin{aligned} EER(\hat{\mu}_{EEB1})&=\dfrac{1}{n(n+\hat{c}_{m1})}\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right),\\ EER(\hat{\mu}_{EEB2})&=\dfrac{2}{\hat{c}_{m2}^{2}}\left(\dfrac{\hat{c}_{m2}}{n}-\ln\left(\dfrac{n+\hat{c}_{m2}}{n}\right)\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right),\\ EER(\hat{\mu}_{EEB3})&=\dfrac{2}{\hat{c}_{m3}^{2}}\left(\ln\left(\dfrac{n+\hat{c}_{m3}}{n}\right)-\dfrac{\hat{c}_{m3}}{n+\hat{c}_{m3}}\right)\left(\sum_{i=1}^{n}t_{i}+\dfrac{\hat{a}_{m}}{\hat{a}_{m}+\hat{b}_{m}}\right).\end{aligned}\right\}$ (5.18)

    In this section, a Monte Carlo simulation study is carried out to compare the E-Bayesian and EE-Bayesian estimates with the corresponding maximum likelihood estimates.

    First, a Monte Carlo simulation is performed to compare the maximum likelihood and E-Bayesian estimates. It involves the following steps:

    (ⅰ) Generate θ and λ for specified values of the hyperparameters (a,b) and (0,c) from the beta and uniform priors in (3.4), respectively.

    (ⅱ) Generate μ from the gamma density (2.2) for the generated values of θ and λ.

    (ⅲ) For the value of μ generated in step (ⅱ), generate a complete sample from the Poisson distribution with PMF (1.1).

    (ⅳ) Compute the maximum likelihood estimate $\hat{\mu}_{ML}$ and the E-Bayesian estimates $\hat{\mu}_{EBi}$, $i=1,2,3$, under the squared error loss function.

    (ⅴ) Compute $ER(\hat{\mu}_{EBi})$, $i=1,2,3$, from (3.11).

    (ⅵ) The results are summarized in three ways, for $i=1,2,3$: the averages of the estimates $\hat{\mu}_{ML}$ and $\hat{\mu}_{EBi}$, the estimated risks (ERs) of $\hat{\mu}_{ML}$ and $\hat{\mu}_{EBi}$, and the E-posterior risks $ER(\hat{\mu}_{EBi})$. The above steps are repeated 10000 times, and the simulation results are reported in Tables 1–3; a minimal computational sketch of one possible implementation is given after this list.
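    One possible reading of steps (ⅰ)–(ⅵ) is sketched below (Python). The maximum likelihood estimate of μ is the sample mean, e_bayes_estimates refers to the sketch given after (3.11), the uniform prior corresponds to the $g_{1}$ case of (3.4), and the estimated risk is read here as the mean squared error against the generated μ; the paper's exact implementation details are not specified, so this is only illustrative:

```python
import numpy as np

def simulate_e_bayes(n, a, b, c, reps=10_000, seed=0):
    """Monte Carlo sketch of steps (i)-(vi) for the E-Bayesian comparison."""
    rng = np.random.default_rng(seed)
    est = np.zeros((reps, 4))   # columns: ML, EB1, EB2, EB3
    sqe = np.zeros((reps, 4))   # squared errors against the generated mu
    for r in range(reps):
        theta = rng.beta(a, b)                      # (i) theta ~ Beta(a, b)
        lam = rng.uniform(0, c)                     # (i) lambda ~ Uniform(0, c)
        mu = rng.gamma(shape=theta, scale=1 / lam)  # (ii) mu ~ Gamma(theta, rate lambda)
        t = rng.poisson(mu, size=n)                 # (iii) complete Poisson sample
        mu_ml = t.mean()                            # (iv) maximum likelihood estimate
        (eb1, eb2, eb3), _ = e_bayes_estimates(t.tolist(), a, b, c)  # (iv)-(v)
        est[r] = [mu_ml, eb1, eb2, eb3]
        sqe[r] = (est[r] - mu) ** 2
    # (vi) averages of the estimates and their estimated risks (mean squared errors)
    return est.mean(axis=0), sqe.mean(axis=0)
```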

    Table 1.  The estimates of ˆμML and ˆμEBi, (i=1,2,3).
    n c (a,b) ˆμML ˆμEB1 ˆμEB2 ˆμEB3
    10 3 (2, 1) 0.6708 0.6449 0.6731 0.6168
    (3, 4) 0.5366 0.5067 0.5289 0.4846
    4 (2, 1) 0.4999 0.4766 0.5033 0.4499
    (3, 4) 0.4032 0.3752 0.3962 0.3542
    30 3 (2, 1) 0.6673 0.6572 0.6676 0.6467
    (3, 4) 0.5368 0.5252 0.5336 0.5169
    4 (2, 1) 0.5002 0.4904 0.5006 0.4802
    (3, 4) 0.4018 0.3906 0.3987 0.3824
    50 3 (2, 1) 0.6689 0.6626 0.6690 0.6562
    (3, 4) 0.5374 0.5302 0.5353 0.5250
    4 (2, 1) 0.5012 0.4950 0.5013 0.4886
    (3, 4) 0.4027 0.3957 0.4007 0.3906
    100 3 (2, 1) 0.6694 0.6661 0.6694 0.6629
    (3, 4) 0.5380 0.5343 0.5370 0.5317
    4 (2, 1) 0.5017 0.4985 0.5017 0.4952
    (3, 4) 0.40330 0.39965 0.40226 0.39704

    Table 2.  Estimated risks (ERs) of ˆμML and ˆμEBi, (i=1,2,3).
    n c (a,b) ˆμML ˆμEB1 ˆμEB2 ˆμEB3
    10 3 (2, 1) 0.0666 0.0515 0.0555 0.0493
    (3, 4) 0.0529 0.0414 0.0441 0.0398
    4 (2, 1) 0.0503 0.0362 0.0397 0.0343
    (3, 4) 0.0400 0.0291 0.0316 0.0276
    30 3 (2, 1) 0.0214 0.0196 0.0201 0.0193
    (3, 4) 0.0178 0.0164 0.0167 0.0161
    4 (2, 1) 0.0167 0.0148 0.0153 0.0145
    (3, 4) 0.0132 0.0118 0.0122 0.0116
    50 3 (2, 1) 0.0134 0.0127 0.0129 0.0125
    (3, 4) 0.0109 0.0103 0.0105 0.0102
    4 (2, 1) 0.0101 0.0094 0.0096 0.0092
    (3, 4) 0.0080 0.0075 0.0076 0.0074
    100 3 (2, 1) 0.0066 0.00639 0.00645 0.00635
    (3, 4) 0.00521 0.00507 0.00511 0.00504
    4 (2, 1) 0.00492 0.00474 0.00479 0.00471
    (3, 4) 0.00399 0.00385 0.00389 0.00382

    Table 3.  The values of ER(ˆμEBi), (i=1,2,3).
    n c (a,b) ER(ˆμEB1) ER(ˆμEB2) ER(ˆμEB3)
    10 3 (2, 1) 0.0567 0.0617 0.0518
    (3, 4) 0.0446 0.0485 0.0407
    4 (2, 1) 0.0405 0.0450 0.0359
    (3, 4) 0.0319 0.0354 0.0283
    30 3 (2, 1) 0.0209 0.0216 0.0202
    (3, 4) 0.0167 0.0172 0.0162
    4 (2, 1) 0.0154 0.0160 0.0147
    (3, 4) 0.0122 0.0127 0.0117
    50 3 (2, 1) 0.0129 0.0131 0.0126
    (3, 4) 0.0103 0.0105 0.0101
    4 (2, 1) 0.0095 0.0098 0.0093
    (3, 4) 0.0076 0.0078 0.0074
    100 3 (2, 1) 0.00656 0.00663 0.00649
    (3, 4) 0.00527 0.00532 0.00521
    4 (2, 1) 0.00489 0.00495 0.00482
    (3, 4) 0.00392 0.00397 0.00387


    In order to compare the maximum likelihood and EE-Bayesian estimates, a Monte Carlo simulation is also carried out. The following steps are performed:

    (ⅰ) Generate θ and λ for specified values of the hyperparameters (a,b) and (0,c) from the beta and uniform priors in (3.4), respectively.

    (ⅱ) Generate μ from the gamma density (2.2) for the generated values of θ and λ.

    (ⅲ) For the value of μ generated in step (ⅱ), generate a complete sample from the Poisson distribution with PMF (1.1).

    (ⅳ) Compute the estimates $\hat{\mu}_{ML}$ and $\hat{\mu}_{EEBi}$, $i=1,2,3$, under the squared error loss function.

    (ⅴ) Compute $EER(\hat{\mu}_{EEBi})$, $i=1,2,3$, from (5.18).

    (ⅵ) The results are summarized in three ways: the averages of the estimates $\hat{\mu}_{ML}$ and $\hat{\mu}_{EEBi}$, the estimated risks (ERs) of $\hat{\mu}_{ML}$ and $\hat{\mu}_{EEBi}$, and the empirical E-posterior risks $EER(\hat{\mu}_{EEBi})$, for $i=1,2,3$. The above steps are repeated 10000 times, and the simulation results are reported in Tables 4–6; a sketch of one possible implementation is given after this list.
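    For the EE-Bayesian comparison, the same replication loop can be reused; the only change is that a, b and c are replaced by their moment estimates before the estimates (5.15)–(5.17) are formed. The fragment below (to be placed inside the loop of the simulate_e_bayes sketch) uses the moment_hyperparameters helper given after (5.14); generating k auxiliary samples from the gamma-Poisson marginal is one plausible reading of the design, not a detail stated in the text:

```python
# k auxiliary gamma-Poisson samples supply the (theta_hat, lambda_hat) pairs of (5.11);
# samples with a non-positive moment denominator would have to be discarded in practice.
aux = [rng.poisson(rng.gamma(theta, 1 / lam, size=n)).tolist() for _ in range(k)]
a_hat, b_hat, (c1, c2, c3) = moment_hyperparameters(aux)

# EE-Bayesian estimates (5.15)-(5.17): formulas (3.5)-(3.7) with the plug-in values.
eeb1 = e_bayes_estimates(t.tolist(), a_hat, b_hat, c1)[0][0]   # (5.15)
eeb2 = e_bayes_estimates(t.tolist(), a_hat, b_hat, c2)[0][1]   # (5.16)
eeb3 = e_bayes_estimates(t.tolist(), a_hat, b_hat, c3)[0][2]   # (5.17)
```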

    Table 4.  The estimates of ˆμML and ˆμEEBi, (i=1,2,3).
    n c (a,b) ˆμML ˆμEEB1 ˆμEEB2 ˆμEEB3
    10 3 (3, 2) 0.6281 0.6285 0.6301 0.6262
    (4, 3) 0.6078 0.6094 0.6109 0.6072
    4 (3, 2) 0.4693 0.4755 0.4766 0.4741
    (4, 3) 0.4547 0.4608 0.4617 0.4594
    30 3 (3, 2) 0.62631 0.62563 0.62587 0.62532
    (4, 3) 0.60590 0.60573 0.605947 0.60544
    4 (3, 2) 0.46938 0.47108 0.47123 0.47089
    (4, 3) 0.45552 0.45716 0.45730 0.45698
    50 3 (3, 2) 0.62671 0.62616 0.62625 0.62604
    (4, 3) 0.60739 0.60714 0.60722 0.60703
    4 (3, 2) 0.46936 0.47032 0.47038 0.47025
    (4, 3) 0.45586 0.45678 0.45684 0.45672
    100 3 (3, 2) 0.62715 0.62682 0.62684 0.62679
    (4, 3) 0.60866 0.60847 0.60849 0.60844
    4 (3, 2) 0.47019 0.47064 0.47065 0.47062
    (4, 3) 0.45601 0.45645 0.45646 0.45643

    Table 5.  Estimated risks (ERs) of ˆμML and ˆμEEBi, (i=1,2,3).
    n c (a,b) ˆμML ˆμEEB1 ˆμEEB2 ˆμEEB3
    10 3 (3, 2) 0.0616 0.0470 0.0472 0.0466
    (4, 3) 0.0596 0.0457 0.0459 0.0454
    4 (3, 2) 0.0463 0.0362 0.0363 0.0359
    (4, 3) 0.0451 0.0353 0.0355 0.0351
    30 3 (3, 2) 0.02063 0.018716 0.018729 0.018700
    (4, 3) 0.01987 0.01807 0.01808 0.01805
    4 (3, 2) 0.01569 0.01438 0.01438 0.01437
    (4, 3) 0.01507 0.01381 0.01382 0.01380
    50 3 (3, 2) 0.01241 0.01170 0.01171 0.01169
    (4, 3) 0.01211 0.01142 0.01143 0.01142
    4 (3, 2) 0.00934 0.008854 0.008855 0.008852
    (4, 3) 0.00911 0.008635 0.008637 0.008633
    100 3 (3, 2) 0.00611 0.005933 0.005934 0.005933
    (4, 3) 0.00597 0.005793 0.005793 0.005792
    4 (3, 2) 0.00457 0.0044478 0.00444779 0.0044480
    (4, 3) 0.00455 0.0044253 0.0044255 0.0044249

    Table 6.  Values of EER(ˆμEEBi), (i=1,2,3).
    n c (a,b) EER(ˆμEEB1) EER(ˆμEEB2) EER(ˆμEEB3)
    10 3 (3, 2) 0.0552 0.0556 0.0546
    (4, 3) 0.0536 0.0540 0.0531
    4 (3, 2) 0.0422 0.0425 0.0418
    (4, 3) 0.0409 0.0412 0.0406
    30 3 (3, 2) 0.019875 0.019897 0.019846
    (4, 3) 0.019260 0.019280 0.019233
    4 (3, 2) 0.015032 0.015045 0.015014
    (4, 3) 0.0145952 0.0146081 0.0145780
    50 3 (3, 2) 0.012159 0.012164 0.012152
    (4, 3) 0.0117963 0.011801 0.011790
    4 (3, 2) 0.009158 0.009161 0.009154
    (4, 3) 0.008898 0.008901 0.008894
    100 3 (3, 2) 0.0061747 0.0061754 0.0061738
    (4, 3) 0.0059957 0.0059963 0.0059949
    4 (3, 2) 0.0046429 0.0046433 0.0046423
    (4, 3) 0.0045037 0.0045040 0.0045031


    Data on the number of deaths caused by horse kicks, based on observation of 10 Prussian cavalry corps over 20 years (equivalently, 200 corps-years), are given in Figure 1. Prussian officials collected these data in the late nineteenth century to study the hazards that horses posed to soldiers (see Bortkiewicz [32]). Padilla [33] discussed the validity of the model for this real data set and showed that the Poisson distribution fits it quite well.

    Figure 1.  Bortkiewicz data on deaths in the Prussian cavalry.

    In this setting, the probability of death due to a horse kick is small, while the number of soldiers exposed to this risk is very large. Hence, a Poisson distribution can be expected to fit the data well. The mean number of deaths per corps-year can then be estimated as

    $\hat{\mu}=\dfrac{0\times 109+1\times 65+2\times 22+3\times 3+4\times 1}{200}=0.61.$ (7.1)
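    The sample mean in (7.1) can be reproduced directly from the frequency table underlying Figure 1; the short Python sketch below also shows how E-Bayesian estimates would be computed for this sample once hyperparameter values are fixed (the values a=2, b=1, c=3 used here are arbitrary placeholders, not the choices behind Table 7):

```python
# Bortkiewicz horse-kick data: 109 corps-years with 0 deaths, 65 with 1, 22 with 2, 3 with 3, 1 with 4.
counts = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}
data = [deaths for deaths, freq in counts.items() for _ in range(freq)]

mu_ml = sum(data) / len(data)        # (7.1): 122 / 200 = 0.61
print(mu_ml)

# E-Bayesian estimates for this sample under illustrative (not the paper's) hyperparameters,
# using the e_bayes_estimates sketch given after (3.11).
estimates, risks = e_bayes_estimates(data, a=2, b=1, c=3)
print(estimates, risks)
```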

    In this section, we consider E-Bayesian and EE-Bayesian estimation of the parameter μ.

    For the real data that is considered in Figure 1, we obtain E-Bayesian estimates and E-posterior risk for μ. The computational results are recorded in Table 7.

    Table 7.  Estimates ˆμEBi and corresponding values ER(ˆμEBi), (i=1,2,3).
    ˆμEB1 ˆμEB2 ˆμEB3 ER(ˆμEB1) ER(ˆμEB2) ER(ˆμEB3)
    0.6110 0.6115 0.6105 0.003047 0.003052 0.003042


    For the real data that is considered in Figure 1, we obtain EE-Bayesian estimates and EE-posterior risk for the parameter μ. The numerical results are listed in Table 8.

    Table 8.  The estimates ˆμEEBi and corresponding values EER(ˆμEEBi), (i=1,2,3).
    ˆμEEB1 ˆμEEB2 ˆμEEB3 EER(ˆμEEB1) EER(ˆμEEB2) EER(ˆμEEB3)
    0.60026 0.60031 0.60021 0.0029407 0.002941 0.002900


    This paper has described the E-Bayesian and EE-Bayesian methodologies for estimating the unknown parameter of the Poisson distribution based on a complete sample. The E-Bayesian and EE-Bayesian estimators were derived under the squared error loss function and three distributions of the hyperparameters, and the corresponding E-posterior and EE-posterior risks were computed. Some properties of the E-Bayesian estimates and of the E-posterior risks under the squared error loss function were established. A simulation study was carried out to compare the different estimators; the results indicate that the E-Bayesian and EE-Bayesian methods perform very well, producing the smallest estimated risks. Finally, one real data set was analyzed to illustrate the E-Bayes and EE-Bayes estimators. The numerical results suggest that the E-Bayesian and EE-Bayesian estimators perform better than the classical estimators.

    This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program. The author expresses sincere gratitude to Professor Zeinhum F. Jaheen for his support and encouragement during this research.

    The author declares there are no conflicts of interest in this paper.



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
