Research article

Application of Fisher information to CMOS noise estimation

  • Received: 19 January 2023 Revised: 26 March 2023 Accepted: 11 April 2023 Published: 19 April 2023
  • MSC : 62F12, 65Y04, 68U10, 94A08

  • Analysis of the accuracy of estimated parameters is an important research direction. In this article, maximum likelihood estimation is used to estimate CMOS image noise parameters, and Fisher information is used to analyse their accuracy. The accuracies of the two parameters differ in different situations, and two applications of this analysis are proposed. The first is a guide to image representation: the standard pixel image gives higher accuracy for signal-dependent noise and higher error for additive noise, in contrast to the normalised pixel image. Therefore, the image representation used to estimate the noise parameters is chosen according to the dominant noise. The second application is a guide to algorithm design: for standard pixel images, the error of additive noise estimation will largely affect the final denoising result if the two kinds of noise are removed simultaneously. Therefore, a divide-and-conquer hybrid total least squares algorithm is proposed for CMOS image restoration. After estimating the parameters, the total least squares algorithm is first used to remove the signal-dependent noise of the image. Then, the additive noise parameters of the processed image are updated using the principal component analysis algorithm, and the additive noise in the image is removed by BM3D. Experiments show that this hybrid method can effectively avoid the problems caused by the inconsistent precision of the two kinds of noise parameters. Compared with state-of-the-art methods, the new method shows certain advantages in subjective visual quality and objective image restoration indicators.

    Citation: Mingying Pan, Xiangchu Feng. Application of Fisher information to CMOS noise estimation[J]. AIMS Mathematics, 2023, 8(6): 14522-14540. doi: 10.3934/math.2023742




    Some topics in geometric function theory are based on q-calculus operators and differential subordination. Ismail et al. defined the class of q-starlike functions in 1990 [1], presenting the first uses of q-calculus in geometric function theory. Several authors have focused on the q-analogue of the Ruscheweyh differential operator established in [2] and the q-analogue of the Sălăgean differential operator defined in [3]; examples include the investigation of differential subordinations using a certain q-Ruscheweyh type derivative operator in [4].

    In what follows, we recall the main concepts used in this research.

    We denote by $\mathcal{H}$ the class of analytic functions in the open unit disc $U:=\{\xi\in\mathbb{C}:|\xi|<1\}$. Also, $\mathcal{H}[a,n]$ denotes the subclass of $\mathcal{H}$ containing the functions $f\in\mathcal{H}$ given by

    \[ f(\xi)=a+a_n\xi^n+a_{n+1}\xi^{n+1}+\cdots,\qquad \xi\in U. \]

    Another well-known subclass of $\mathcal{H}$ is the class $\mathcal{A}(n)$, which consists of functions $f\in\mathcal{H}$ given by

    \[ f(\xi)=\xi+\sum_{\kappa=n+1}^{\infty}a_\kappa\xi^\kappa,\qquad \xi\in U, \tag{1.1} \]

    with $n\in\mathbb{N}=\{1,2,\ldots\}$, and $\mathcal{A}=\mathcal{A}(1)$.

    The subclass $K$, defined by

    \[ K=\left\{f\in\mathcal{A}:\operatorname{Re}\left(\frac{\xi f''(\xi)}{f'(\xi)}+1\right)>0,\ f(0)=0,\ f'(0)=1,\ \xi\in U\right\}, \]

    denotes the class of convex functions in the unit disc $U$.

    For two functions $f,L\in\mathcal{A}(n)$, with $f$ given by (1.1) and $L$ given by

    \[ L(\xi)=\xi+\sum_{\kappa=n+1}^{\infty}b_\kappa\xi^\kappa,\qquad \xi\in U, \]

    the well-known convolution product $*:\mathcal{A}\times\mathcal{A}\to\mathcal{A}$ is defined by

    \[ (f*L)(\xi):=\xi+\sum_{\kappa=n+1}^{\infty}a_\kappa b_\kappa\xi^\kappa,\qquad \xi\in U. \]
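The convolution product above acts coefficientwise on the power-series coefficients. As a minimal illustrative sketch (not part of the paper), representing a normalized function $f(\xi)=\xi+\sum_{\kappa\geq2}a_\kappa\xi^\kappa$ by its coefficient list `[a_2, a_3, ...]`:

```python
# Hadamard (convolution) product of two normalized power series:
# the coefficient of xi^k in f*L is a_k * b_k.

def convolve(a_coeffs, b_coeffs):
    """Coefficientwise product of the coefficient lists [a_2, a_3, ...]."""
    return [a * b for a, b in zip(a_coeffs, b_coeffs)]

# f(xi) = xi + 2 xi^2 + 3 xi^3 and L(xi) = xi + 5 xi^2 + 7 xi^3
f_coeffs = [2, 3]
L_coeffs = [5, 7]
print(convolve(f_coeffs, L_coeffs))  # [10, 21], i.e. (f*L)(xi) = xi + 10 xi^2 + 21 xi^3
```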

    In particular [5,6], Jackson's q-difference operator $d_q:\mathcal{A}\to\mathcal{A}$, which has several applications, is defined by

    \[ d_qf(\xi):=\begin{cases}\dfrac{f(\xi)-f(q\xi)}{(1-q)\xi}, & \xi\neq0,\ 0<q<1,\\[1ex] f'(0), & \xi=0.\end{cases} \tag{1.2} \]

    For $\kappa\in\mathbb{N}=\{1,2,3,\ldots\}$, it follows from (1.2) that

    \[ d_q\left\{\sum_{\kappa=1}^{\infty}a_\kappa\xi^\kappa\right\}=\sum_{\kappa=1}^{\infty}[\kappa]_q\,a_\kappa\xi^{\kappa-1}, \tag{1.3} \]

    where

    \[ [\kappa]_q=\frac{1-q^\kappa}{1-q}=1+\sum_{n=1}^{\kappa-1}q^n,\qquad \lim_{q\to1^-}[\kappa]_q=\kappa,\qquad [\kappa]_q!=\begin{cases}\displaystyle\prod_{n=1}^{\kappa}[n]_q, & \kappa\in\mathbb{N},\\ 1, & \kappa=0.\end{cases} \tag{1.4} \]
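The definitions in (1.2) and (1.4) can be checked numerically. A minimal sketch (illustrative, not part of the paper) verifying that the Jackson q-derivative of $\xi^\kappa$ is $[\kappa]_q\xi^{\kappa-1}$ and that $[\kappa]_q\to\kappa$ as $q\to1^-$:

```python
# q-bracket [k]_q = (1 - q**k)/(1 - q) and the Jackson q-difference operator.

def q_bracket(k, q):
    return (1 - q**k) / (1 - q)

def jackson_dq(f, xi, q):
    """Jackson q-difference operator d_q f at xi != 0."""
    return (f(xi) - f(q * xi)) / ((1 - q) * xi)

q, k, xi = 0.5, 3, 2.0
lhs = jackson_dq(lambda z: z**k, xi, q)   # d_q xi^3 at xi = 2
rhs = q_bracket(k, q) * xi**(k - 1)       # [3]_q * xi^2 = 1.75 * 4
print(lhs, rhs)                           # 7.0 7.0
print(q_bracket(k, 0.999999))             # close to 3, the q -> 1- limit
```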

    In [7], Aouf and Madian investigated the q-analogue Cătas operator $I_q^s(\lambda,\ell):\mathcal{A}\to\mathcal{A}$ ($s\in\mathbb{N}_0$, $\ell,\lambda\geq0$, $0<q<1$) as follows:

    \[ I_q^s(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_q+\lambda\bigl([\kappa+\ell]_q-[1+\ell]_q\bigr)}{[1+\ell]_q}\right)^{s}a_\kappa\xi^\kappa,\qquad (s\in\mathbb{N}_0,\ \ell,\lambda\geq0,\ 0<q<1). \]

    Also, the q-Ruscheweyh operator $R_q^{\mu}f(\xi)$ was investigated in 2014 by Aldweby and Darus [8]:

    \[ R_q^{\mu}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^\kappa,\qquad (\mu\geq0,\ 0<q<1), \]

    where $[a]_q$ and $[a]_q!$ are defined in (1.4).

    Let

    \[ f_{q,\lambda,\ell}^{s}(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_q+\lambda\bigl([\kappa+\ell]_q-[1+\ell]_q\bigr)}{[1+\ell]_q}\right)^{s}\xi^\kappa. \]

    Now we define a new function $f_{q,\lambda,\ell}^{s,\mu}(\xi)$ in terms of the Hadamard product (or convolution) such that

    \[ f_{q,\lambda,\ell}^{s}(\xi)*f_{q,\lambda,\ell}^{s,\mu}(\xi)=\xi+\sum_{\kappa=2}^{\infty}\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}\xi^\kappa. \]

    Next, motivated by the q-Ruscheweyh operator and the q-Cătas operator, we introduce the operator $I_{q,\mu}^{s}(\lambda,\ell):\mathcal{A}\to\mathcal{A}$ defined by

    \[ I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=f_{q,\lambda,\ell}^{s,\mu}(\xi)*f(\xi), \tag{1.5} \]

    where $s\in\mathbb{N}_0$, $\ell,\lambda,\mu\geq0$, $0<q<1$. For $f\in\mathcal{A}$, it is obvious from (1.5) that

    \[ I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^\kappa, \tag{1.6} \]

    where

    \[ \psi_q^{s}(\kappa,\lambda,\ell)=\left(\frac{[1+\ell]_q}{[1+\ell]_q+\lambda\bigl([\kappa+\ell]_q-[1+\ell]_q\bigr)}\right)^{s}. \]
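As an illustrative numerical sketch (not part of the paper, and assuming an integer $\mu$), the q-factorial ratio in (1.6) is a q-binomial coefficient, which for $s=0$ reduces to the classical Ruscheweyh binomial coefficient $\binom{\kappa+\mu-1}{\mu}$ as $q\to1^-$:

```python
# The coefficient [k+mu-1]_q! / ([mu]_q! [k-1]_q!) tends to C(k+mu-1, mu)
# as q -> 1-, recovering the classical Ruscheweyh operator when s = 0.
from math import comb

def q_factorial(k, q):
    out = 1.0
    for n in range(1, k + 1):
        out *= (1 - q**n) / (1 - q)   # multiply by [n]_q
    return out

def coeff(k, mu, q):
    return q_factorial(k + mu - 1, q) / (q_factorial(mu, q) * q_factorial(k - 1, q))

k, mu = 4, 2
print(coeff(k, mu, 0.999999))   # close to C(5, 2) = 10
print(comb(k + mu - 1, mu))     # 10
```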

    We observe that:

    (i) If $s=0$ and $q\to1^-$, we get $R^{\mu}f(\xi)$, the Ruscheweyh differential operator [9] investigated by numerous authors [10,11,12].

    (ii) If we set $q\to1^-$, we obtain $I_{\lambda,\ell,\mu}^{m}f(\xi)$, which was presented by Aouf and El-Ashwah [13].

    (iii) If we set $\mu=0$ and $q\to1^-$, we obtain $J_p^{m}(\lambda,\ell)f(\xi)$, presented by El-Ashwah and Aouf (with $p=1$) [14].

    (iv) If $\mu=0$, $\ell=\lambda=1$, and $q\to1^-$, we get $I^{\alpha}f(\xi)$, investigated by Jung et al. [15].

    (v) If $\mu=0$, $\lambda=1$, $\ell=0$, and $q\to1^-$, we obtain $I^{s}f(\xi)$, presented by Sălăgean [16].

    (vi) If we set $\mu=0$ and $\lambda=1$, we obtain $I_{q,\ell}^{s}f(\xi)$, presented by Shah and Noor [17].

    (vii) If we set $\mu=0$, $\lambda=1$, and $q\to1^-$, we obtain $J^{s}$, the Srivastava–Attiya operator; see [18,19].

    (viii) $I_{q,0}^{1}(1,0)f(\xi)=\displaystyle\int_0^{\xi}\frac{f(t)}{t}\,d_qt$ (the q-Alexander operator [17]).

    (ix) $I_{q,0}^{1}(1,\rho)f(\xi)=\dfrac{[1+\rho]_q}{\xi^{\rho}}\displaystyle\int_0^{\xi}t^{\rho-1}f(t)\,d_qt$ (the q-Bernardi operator [20]).

    (x) $I_{q,0}^{1}(1,1)f(\xi)=\dfrac{[2]_q}{\xi}\displaystyle\int_0^{\xi}f(t)\,d_qt$ (the q-Libera operator [20]).

    Moreover, we have:

    (i) $I_{q,\mu}^{s}(1,0)f(\xi)=I_{q,\mu}^{s}f(\xi)$, where for $f\in\mathcal{A}$:

    \[ I_{q,\mu}^{s}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{1}{[\kappa]_q}\right)^{s}\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^\kappa,\qquad (s\in\mathbb{N}_0,\ \mu\geq0,\ 0<q<1,\ \xi\in U). \]

    (ii) $I_{q,\mu}^{s}(1,\ell)f(\xi)=I_{q,\mu}^{s,\ell}f(\xi)$, where for $f\in\mathcal{A}$:

    \[ I_{q,\mu}^{s,\ell}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{[1+\ell]_q}{[\kappa+\ell]_q}\right)^{s}\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^\kappa,\qquad (s\in\mathbb{N}_0,\ \ell>0,\ \mu\geq0,\ 0<q<1,\ \xi\in U). \]

    (iii) $I_{q,\mu}^{s}(\lambda,0)f(\xi)=I_{q,\mu}^{s,\lambda}f(\xi)$, where for $f\in\mathcal{A}$:

    \[ I_{q,\mu}^{s,\lambda}f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\left(\frac{1}{1+\lambda([\kappa]_q-1)}\right)^{s}\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^\kappa,\qquad (s\in\mathbb{N}_0,\ \lambda>0,\ \mu\geq0,\ 0<q<1,\ \xi\in U). \]

    Since the investigation of q-difference equations using tools of function theory reveals various properties, this direction has been pursued in many works. Several authors have recently used q-calculus based linear extended operators to investigate the theories of differential subordination and superordination (see [21,22,23,24,25,26,27,28,29,30,31,32]). Applied problems involving q-difference equations and q-analogues of problems of mathematical physics have also been studied extensively, including dynamical systems, the q-oscillator, q-classical and quantum models, q-analogues of the heat and wave equations, and the sampling theory of signal analysis [33,34].

    We denote by $\Phi$ the class of analytic univalent functions $\varphi(\xi)$ which are convex in $U$, with $\varphi(0)=1$ and $\operatorname{Re}\varphi(\xi)>0$ in $U$.

    The differential subordination theory, studied by Miller and Mocanu [35], is based on the following definitions:

    $f$ is subordinate to $L$ in $U$, written $f\prec L$, if there exists an analytic function $\varpi$ with $\varpi(0)=0$ and $|\varpi(\xi)|<1$ for all $\xi\in U$, such that $f(\xi)=L(\varpi(\xi))$. Moreover, if $L$ is univalent in $U$, we have:

    \[ f(\xi)\prec L(\xi)\iff f(0)=L(0)\ \text{and}\ f(U)\subset L(U). \]

    Let $\Phi(r,s,t;\xi):\mathbb{C}^3\times U\to\mathbb{C}$ and let $h$ be univalent in $U$. An analytic function $\lambda$ in $U$ is a solution of the differential subordination if it satisfies

    \[ \Phi\bigl(\lambda(\xi),\xi\lambda'(\xi),\xi^2\lambda''(\xi);\xi\bigr)\prec h(\xi). \tag{1.7} \]

    We call $V$ a dominant of the solutions of the differential subordination (1.7) if $\lambda(\xi)\prec V(\xi)$ for all $\lambda$ satisfying (1.7). A dominant $\tilde V$ is called the best dominant of (1.7) if $\tilde V(\xi)\prec V(\xi)$ for all dominants $V$.

    The theory of differential superordination, introduced by Miller and Mocanu in 2003 [36], is characterized by the following definitions:

    $f$ is superordinate to $L$, written $L\prec f$, if there exists an analytic function $\varpi$ with $\varpi(0)=0$ and $|\varpi(\xi)|<1$ for all $\xi\in U$, such that $L(\xi)=f(\varpi(\xi))$. For a univalent function $f$, we have

    \[ L(\xi)\prec f(\xi)\iff f(0)=L(0)\ \text{and}\ L(U)\subset f(U). \]

    Let $\Phi(r,s;\xi):\mathbb{C}^2\times U\to\mathbb{C}$ and let $h$ be analytic in $U$. If $\lambda$ and $\Phi\bigl(\lambda(\xi),\xi\lambda'(\xi);\xi\bigr)$ are univalent in $U$ and satisfy the differential superordination

    \[ h(\xi)\prec\Phi\bigl(\lambda(\xi),\xi\lambda'(\xi);\xi\bigr), \tag{1.8} \]

    then $\lambda$ is called a solution of the differential superordination (1.8). We call the function $V$ a subordinant of the solutions of (1.8) if $V(\xi)\prec\lambda(\xi)$ for all $\lambda$ satisfying (1.8). A subordinant $\tilde V$ is called the best subordinant of (1.8) if $V(\xi)\prec\tilde V(\xi)$ for all subordinants $V$.

    Let $Q$ denote the collection of functions $\chi$ that are analytic and injective on $\overline U\setminus E(\chi)$, with $\chi'(\varsigma)\neq0$ for $\varsigma\in\partial U\setminus E(\chi)$, where

    \[ E(\chi)=\Bigl\{\varsigma\in\partial U:\lim_{\xi\to\varsigma}\chi(\xi)=\infty\Bigr\}. \]

    Also, $Q(a)$ is the subclass of $Q$ with $\chi(0)=a$.

    The following lemmas will be used in the proofs of our main results in the upcoming sections:

    Lemma 1.1. (Miller and Mocanu [35]) Suppose $g$ is convex in $U$, and

    \[ h(\xi)=g(\xi)+n\gamma\,\xi g'(\xi), \]

    with $\xi\in U$, $n$ a positive integer and $\gamma>0$. If

    \[ p(\xi)=g(0)+p_n\xi^n+p_{n+1}\xi^{n+1}+\cdots,\qquad \xi\in U, \]

    is analytic in $U$ and

    \[ p(\xi)+\gamma\,\xi p'(\xi)\prec h(\xi),\qquad \xi\in U, \]

    holds, then

    \[ p(\xi)\prec g(\xi) \]

    holds as well.

    Lemma 1.2. (Hallenbeck and Ruscheweyh [37]; see also Miller and Mocanu [38], Theorem 3.1.b, p. 71) Let $h$ be convex with $h(0)=a$, and let $\gamma\in\mathbb{C}\setminus\{0\}$ with $\operatorname{Re}\gamma\geq0$. If $p\in\mathcal{H}[a,n]$ and

    \[ p(\xi)+\frac{\xi p'(\xi)}{\gamma}\prec h(\xi),\qquad \xi\in U, \]

    holds, then

    \[ p(\xi)\prec g(\xi)\prec h(\xi),\qquad \xi\in U, \]

    holds for

    \[ g(\xi)=\frac{\gamma}{n\,\xi^{\gamma/n}}\int_0^{\xi}h(t)\,t^{(\gamma/n)-1}\,dt,\qquad \xi\in U. \]

    Lemma 1.3. (Miller and Mocanu [35]) Let $h$ be convex with $h(0)=a$, and let $\gamma\in\mathbb{C}$ with $\operatorname{Re}\gamma\geq0$. If $p\in Q\cap\mathcal{H}[a,n]$, $p(\xi)+\dfrac{\xi p'(\xi)}{\gamma}$ is univalent in $U$ and

    \[ h(\xi)\prec p(\xi)+\frac{\xi p'(\xi)}{\gamma},\qquad \xi\in U, \]

    holds, then

    \[ g(\xi)\prec p(\xi),\qquad \xi\in U, \]

    holds as well, where $g(\xi)=\dfrac{\gamma}{n\,\xi^{\gamma/n}}\displaystyle\int_0^{\xi}h(t)\,t^{(\gamma/n)-1}\,dt$, $\xi\in U$, is the best subordinant.

    Lemma 1.4. (Miller and Mocanu [35]) Let $g$ be convex in $U$, and

    \[ h(\xi)=g(\xi)+\frac{\xi g'(\xi)}{\gamma},\qquad \xi\in U, \]

    with $\gamma\in\mathbb{C}$, $\operatorname{Re}\gamma\geq0$. If $p\in Q\cap\mathcal{H}[a,n]$, $p(\xi)+\dfrac{\xi p'(\xi)}{\gamma}$ is univalent in $U$ and

    \[ g(\xi)+\frac{\xi g'(\xi)}{\gamma}\prec p(\xi)+\frac{\xi p'(\xi)}{\gamma},\qquad \xi\in U, \]

    holds, then

    \[ g(\xi)\prec p(\xi),\qquad \xi\in U, \]

    holds as well, and $g$ is the best subordinant.

    For complex parameters $\acute a,\varrho,\acute c$ with $\acute c\notin\mathbb{Z}_0^-=\{0,-1,-2,\ldots\}$, consider the Gaussian hypergeometric function

    \[ {}_2F_1(\acute a,\varrho;\acute c;\xi)=1+\frac{\acute a\,\varrho}{\acute c}\cdot\frac{\xi}{1!}+\frac{\acute a(\acute a+1)\,\varrho(\varrho+1)}{\acute c(\acute c+1)}\cdot\frac{\xi^2}{2!}+\cdots. \]

    For $\xi\in U$, the above series converges absolutely to an analytic function in $U$ (see, for details, [39, Chapter 14]).

    Lemma 1.5. [39] For complex parameters $\acute a,\varrho,\acute c$ ($\acute c\notin\mathbb{Z}_0^-$):

    \[ \int_0^1 t^{\varrho-1}(1-t)^{\acute c-\varrho-1}(1-\xi t)^{-\acute a}\,dt=\frac{\Gamma(\varrho)\,\Gamma(\acute c-\varrho)}{\Gamma(\acute c)}\,{}_2F_1(\acute a,\varrho;\acute c;\xi)\qquad(\operatorname{Re}\acute c>\operatorname{Re}\varrho>0); \]
    \[ {}_2F_1(\acute a,\varrho;\acute c;\xi)={}_2F_1(\varrho,\acute a;\acute c;\xi); \]
    \[ {}_2F_1(\acute a,\varrho;\acute c;\xi)=(1-\xi)^{-\acute a}\,{}_2F_1\!\left(\acute a,\acute c-\varrho;\acute c;\frac{\xi}{\xi-1}\right); \]
    \[ {}_2F_1\!\left(1,1;2;\frac{\acute a\xi}{\acute a\xi+1}\right)=\frac{(1+\acute a\xi)\ln(1+\acute a\xi)}{\acute a\xi}; \]
    \[ {}_2F_1\!\left(1,1;3;\frac{\acute a\xi}{\acute a\xi+1}\right)=\frac{2(1+\acute a\xi)}{\acute a\xi}\left(1-\frac{\ln(1+\acute a\xi)}{\acute a\xi}\right). \]
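The last two closed forms in Lemma 1.5 can be sanity-checked numerically from the defining Gauss series of ${}_2F_1$ (a minimal illustrative sketch, not part of the paper; `x` below plays the role of the real product $\acute a\xi$, chosen in $(0,1)$):

```python
# Numerical check of 2F1(1,1;2;z) and 2F1(1,1;3;z) at z = x/(x+1)
# against the logarithmic closed forms in Lemma 1.5.
from math import log

def hyp2f1(a, b, c, z, terms=200):
    """Partial sum of the Gauss hypergeometric series (convergent for |z| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

x = 0.4
z = x / (x + 1)
lhs4 = hyp2f1(1, 1, 2, z)
rhs4 = (1 + x) * log(1 + x) / x
lhs5 = hyp2f1(1, 1, 3, z)
rhs5 = 2 * (1 + x) / x * (1 - log(1 + x) / x)
print(abs(lhs4 - rhs4) < 1e-10, abs(lhs5 - rhs5) < 1e-10)  # True True
```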

    A q-multiplier-Ruscheweyh operator is used in the study reported in this paper to create a novel convex subclass of normalized analytic functions in the open unit disc $U$. Employing the techniques of differential subordination and superordination theory, this subclass is then examined in more detail.

    The q-multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)f(\xi)$ given in (1.6) is applied to define the new class of normalized analytic functions in the open unit disc $U$.

    Definition 2.1. Let $\alpha\in[0,1)$. The class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ consists of the functions $f\in\mathcal{A}$ with

    \[ \operatorname{Re}\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'>\alpha,\qquad \xi\in U. \tag{2.1} \]

    We use the following notations:

    (i) $S_{q,\mu}^{s}(\lambda,\ell;0)=S_{q,\mu}^{s}(\lambda,\ell)$.

    (ii) $S_{q,0}^{0}(\lambda,\ell;\alpha)=S(\alpha)$ ($\operatorname{Re}f'(\xi)>\alpha$), see Ding et al. [40].

    (iii) $S_{q,0}^{0}(\lambda,\ell;0)=S$ ($\operatorname{Re}f'(\xi)>0$), see MacGregor [41].

    The first result concerning the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ establishes its convexity.

    Theorem 2.1. The class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ is closed under convex combinations.

    Proof. Consider

    \[ f_j(\xi)=\xi+\sum_{\kappa=2}^{\infty}a_{j\kappa}\xi^\kappa,\qquad \xi\in U,\ j=1,2, \]

    in the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$. It suffices to demonstrate that

    \[ f(\xi)=\eta f_1(\xi)+(1-\eta)f_2(\xi),\qquad \eta\in[0,1], \]

    belongs to the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$.

    The function $f$ is given by

    \[ f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\bigl(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\bigr)\xi^\kappa,\qquad \xi\in U, \]

    and

    \[ I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}\bigl(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\bigr)\xi^\kappa. \tag{2.2} \]

    Differentiating (2.2), we have

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'=1+\sum_{\kappa=2}^{\infty}\kappa\,\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}\bigl(\eta a_{1\kappa}+(1-\eta)a_{2\kappa}\bigr)\xi^{\kappa-1}. \]

    Hence

    \[ \operatorname{Re}\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'=1+\operatorname{Re}\Bigl(\eta\sum_{\kappa=2}^{\infty}\kappa\,\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_{1\kappa}\xi^{\kappa-1}\Bigr)+\operatorname{Re}\Bigl((1-\eta)\sum_{\kappa=2}^{\infty}\kappa\,\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_{2\kappa}\xi^{\kappa-1}\Bigr). \tag{2.3} \]

    Taking into account that $f_1,f_2\in S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, we can write

    \[ \operatorname{Re}\Bigl(\eta\sum_{\kappa=2}^{\infty}\kappa\,\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_{j\kappa}\xi^{\kappa-1}\Bigr)>\eta(\alpha-1),\qquad j=1,2, \tag{2.4} \]

    and similarly with the factor $1-\eta$ in place of $\eta$. Using relation (2.4), we get from relation (2.3):

    \[ \operatorname{Re}\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'>1+\eta(\alpha-1)+(1-\eta)(\alpha-1)=\alpha. \]

    This demonstrates that the set $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ is convex.

    Next, we study differential subordinations for the class $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$ and the q-multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$ involving convex functions.

    Theorem 2.2. Let $g$ be a convex function and define

    \[ h(\xi)=g(\xi)+\frac{\xi g'(\xi)}{a+2},\qquad a>0,\ \xi\in U. \tag{2.5} \]

    For $f\in S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, consider

    \[ F(\xi)=\frac{a+2}{\xi^{a+1}}\int_0^{\xi}t^a f(t)\,dt,\qquad \xi\in U. \tag{2.6} \]

    Then the differential subordination

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'\prec h(\xi) \tag{2.7} \]

    implies the differential subordination

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\prec g(\xi), \]

    and $g$ is the best dominant.

    Proof. We can write (2.6) as

    \[ \xi^{a+1}F(\xi)=(a+2)\int_0^{\xi}t^a f(t)\,dt,\qquad \xi\in U, \]

    and differentiating it, we get

    \[ \xi F'(\xi)+(a+1)F(\xi)=(a+2)f(\xi) \]

    and

    \[ \xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'+(a+1)\,I_{q,\mu}^{s}(\lambda,\ell)F(\xi)=(a+2)\,I_{q,\mu}^{s}(\lambda,\ell)f(\xi),\qquad \xi\in U. \]

    Differentiating the last relation, we obtain

    \[ \frac{\xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)''}{a+2}+\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'=\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)',\qquad \xi\in U, \]

    and (2.7) can be written as

    \[ \frac{\xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)''}{a+2}+\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\prec\frac{\xi g'(\xi)}{a+2}+g(\xi). \tag{2.8} \]

    Denoting

    \[ p(\xi)=\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\in\mathcal{H}[1,1], \tag{2.9} \]

    the differential subordination (2.8) takes the form

    \[ \frac{\xi p'(\xi)}{a+2}+p(\xi)\prec\frac{\xi g'(\xi)}{a+2}+g(\xi). \]

    Through Lemma 1.1, we find

    \[ p(\xi)\prec g(\xi), \]

    that is,

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\prec g(\xi), \]

    where $g$ is the best dominant.

    Theorem 2.3. Denoting

    \[ I_a(f)(\xi)=\frac{a+2}{\xi^{a+1}}\int_0^{\xi}t^a f(t)\,dt,\qquad a>0, \tag{2.10} \]

    then

    \[ I_a\bigl[S_{q,\mu}^{s}(\lambda,\ell;\alpha)\bigr]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^*), \tag{2.11} \]

    where

    \[ \alpha^*=(2\alpha-1)-(\alpha-1)\,{}_2F_1\!\left(1,1;a+3;\tfrac12\right). \tag{2.12} \]

    Proof. Using Theorem 2.2 for $h(\xi)=\dfrac{1-(2\alpha-1)\xi}{1-\xi}$ and following the same steps as in its proof, we obtain

    \[ \frac{\xi p'(\xi)}{a+2}+p(\xi)\prec h(\xi), \]

    with $p$ defined by (2.9). Through Lemma 1.2, we find

    \[ p(\xi)\prec g(\xi)\prec h(\xi), \]

    that is,

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\prec g(\xi)\prec h(\xi), \]

    where

    \[ g(\xi)=\frac{a+2}{\xi^{a+2}}\int_0^{\xi}t^{a+1}\,\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)-\frac{2(a+2)(\alpha-1)}{\xi^{a+2}}\int_0^{\xi}\frac{t^{a+1}}{1-t}\,dt. \]

    By using Lemma 1.5, we get

    \[ g(\xi)=(2\alpha-1)-\frac{2(\alpha-1)}{1-\xi}\,{}_2F_1\!\left(1,1;a+3;\frac{\xi}{\xi-1}\right). \]

    Since $g$ is a convex function and $g(U)$ is symmetric with respect to the real axis, we have

    \[ \operatorname{Re}\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\geq\min_{|\xi|=1}\operatorname{Re}g(\xi)=\operatorname{Re}g(-1)=\alpha^*=(2\alpha-1)-(\alpha-1)\,{}_2F_1\!\left(1,1;a+3;\tfrac12\right). \]

    If we put $\alpha=0$ in Theorem 2.3, we obtain:

    Corollary 2.1. Let

    \[ I_a(f)(\xi)=\frac{a+2}{\xi^{a+1}}\int_0^{\xi}t^a f(t)\,dt,\qquad a>0. \]

    Then

    \[ I_a\bigl[S_{q,\mu}^{s}(\lambda,\ell)\bigr]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^*), \]

    where

    \[ \alpha^*=-1+{}_2F_1\!\left(1,1;a+3;\tfrac12\right). \]

    Example 2.1. If $a=0$ in Corollary 2.1, we get:

    \[ I_0(f)(\xi)=\frac{2}{\xi}\int_0^{\xi}f(t)\,dt. \]

    Then

    \[ I_0\bigl[S_{q,\mu}^{s}(\lambda,\ell)\bigr]\subset S_{q,\mu}^{s}(\lambda,\ell;\alpha^*), \]

    where

    \[ \alpha^*=-1+{}_2F_1\!\left(1,1;3;\tfrac12\right)=3-4\ln2. \]
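The closed-form value $\alpha^*=-1+{}_2F_1(1,1;3;\tfrac12)=3-4\ln2\approx0.2274$ in Example 2.1 can be verified numerically from the defining series of ${}_2F_1$ (an illustrative sketch, not part of the paper):

```python
# Check that -1 + 2F1(1,1;3;1/2) equals the closed form 3 - 4 ln 2.
from math import log

def hyp2f1(a, b, c, z, terms=200):
    """Partial sum of the Gauss hypergeometric series (convergent for |z| < 1)."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

alpha_star = -1 + hyp2f1(1, 1, 3, 0.5)
print(alpha_star)       # approximately 0.2274
print(3 - 4 * log(2))   # the closed form, same value
```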

    Theorem 2.4. Let $g$ be convex with $g(0)=1$, and define

    \[ h(\xi)=g(\xi)+\xi g'(\xi),\qquad \xi\in U. \]

    If $f\in\mathcal{A}$ verifies

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'\prec h(\xi),\qquad \xi\in U, \tag{2.13} \]

    then the sharp differential subordination

    \[ \frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi),\qquad \xi\in U, \tag{2.14} \]

    holds.

    Proof. Considering

    \[ p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_q!}{[\mu]_q![\kappa-1]_q!}a_\kappa\xi^\kappa}{\xi}=1+p_1\xi+p_2\xi^2+\cdots,\qquad \xi\in U, \]

    clearly $p\in\mathcal{H}[1,1]$, thus we can write

    \[ \xi p(\xi)=I_{q,\mu}^{s}(\lambda,\ell)f(\xi), \]

    and differentiating it, we obtain

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'=\xi p'(\xi)+p(\xi). \]

    Subordination (2.13) takes the form

    \[ \xi p'(\xi)+p(\xi)\prec h(\xi)=\xi g'(\xi)+g(\xi). \tag{2.15} \]

    Lemma 1.1 allows us to conclude $p(\xi)\prec g(\xi)$, so (2.14) holds.

    Theorem 2.5. Let $h(\xi)=\dfrac{1-(2\alpha-1)\xi}{1-\xi}$ be convex in $U$, with $h(0)=1$ and $\alpha\in[0,1)$. If $f\in\mathcal{A}$ verifies

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'\prec h(\xi),\qquad \xi\in U, \tag{2.16} \]

    then we obtain the subordination

    \[ \frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi),\qquad \xi\in U, \]

    for the convex function $g(\xi)=(2\alpha-1)+\dfrac{2(\alpha-1)}{\xi}\ln(1-\xi)$, which is the best dominant.

    Proof. Let

    \[ p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=1+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\,\frac{[\kappa+\mu-1]_q!}{[\mu]_q!\,[\kappa-1]_q!}a_\kappa\xi^{\kappa-1}\in\mathcal{H}[1,1],\qquad \xi\in U. \]

    By differentiating it, we get

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'=\xi p'(\xi)+p(\xi), \]

    and differential subordination (2.16) becomes

    \[ \xi p'(\xi)+p(\xi)\prec h(\xi). \]

    Lemma 1.2 allows us to obtain

    \[ p(\xi)\prec g(\xi)=\frac1\xi\int_0^{\xi}h(t)\,dt, \]

    so

    \[ \frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi), \]

    and $g$ is the best dominant.

    If we put $\alpha=0$ in Theorem 2.5, we have:

    Corollary 2.2. Consider the convex function $h(\xi)=\dfrac{1+\xi}{1-\xi}$ with $h(0)=1$. If $f\in\mathcal{A}$ verifies

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'\prec h(\xi),\qquad \xi\in U, \]

    then we obtain the subordination

    \[ \frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\prec g(\xi)=-1-\frac{2}{\xi}\ln(1-\xi),\qquad \xi\in U, \]

    for the convex function $g(\xi)$, which is the best dominant.

    Example 2.2. From Corollary 2.2, if

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'\prec h(\xi),\qquad \xi\in U, \]

    we obtain

    \[ \operatorname{Re}\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\geq\min_{|\xi|=1}\operatorname{Re}g(\xi)=\operatorname{Re}g(-1)=-1+2\ln2. \]
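Reading the best dominant of Corollary 2.2 as $g(\xi)=-1-\tfrac{2}{\xi}\ln(1-\xi)$, the boundary value $-1+2\ln2\approx0.3863$ in Example 2.2 is a quick arithmetic check (illustrative sketch, not part of the paper):

```python
# Evaluate g at xi = -1, where the minimum of Re g on |xi| = 1 is attained.
from math import log

def g(xi):
    return -1 - (2 / xi) * log(1 - xi)

print(g(-1.0))           # approximately 0.3863
print(-1 + 2 * log(2))   # the closed form -1 + 2 ln 2, same value
```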

    Theorem 2.6. Let $g$ be a convex function with $g(0)=1$. We define $h(\xi)=\xi g'(\xi)+g(\xi)$, $\xi\in U$. If $f\in\mathcal{A}$ verifies

    \[ \left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'\prec h(\xi),\qquad \xi\in U, \tag{2.17} \]

    then

    \[ \frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\prec g(\xi),\qquad \xi\in U, \tag{2.18} \]

    holds.

    Proof. For

    \[ p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s+1}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_q!}{[\mu]_q![\kappa-1]_q!}a_\kappa\xi^\kappa}{\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_q!}{[\mu]_q![\kappa-1]_q!}a_\kappa\xi^\kappa}, \]

    by differentiating it, we get

    \[ p'(\xi)=\frac{\bigl(I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)\bigr)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}-p(\xi)\,\frac{\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}, \]

    hence

    \[ \xi p'(\xi)+p(\xi)=\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'. \]

    With this, differential subordination (2.17) takes the form (2.15), and Lemma 1.1 allows us to conclude $p(\xi)\prec g(\xi)$, so (2.18) holds.

    This section examines differential superordinations involving the first-order derivative of the q-multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$. For every differential superordination under investigation, we provide the best subordinant.

    Theorem 3.1. Consider $f\in\mathcal{A}$, a convex function $h$ in $U$ with $h(0)=1$, and $F(\xi)$ defined in (2.6). We assume that $\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'$ is univalent in $U$ and $\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)',\qquad \xi\in U, \tag{3.1} \]

    holds, then

    \[ g(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)',\qquad \xi\in U, \]

    with $g(\xi)=\dfrac{a+2}{\xi^{a+2}}\displaystyle\int_0^{\xi}t^{a+1}h(t)\,dt$ the best subordinant.

    Proof. Differentiating (2.6), we obtain $\xi F'(\xi)+(a+1)F(\xi)=(a+2)f(\xi)$, which can be expressed as

    \[ \xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'+(a+1)\,I_{q,\mu}^{s}(\lambda,\ell)F(\xi)=(a+2)\,I_{q,\mu}^{s}(\lambda,\ell)f(\xi), \]

    which, after differentiating again, has the form

    \[ \frac{\xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)''}{a+2}+\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'=\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'. \]

    Using the last relation, (3.1) can be expressed as

    \[ h(\xi)\prec\frac{\xi\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)''}{a+2}+\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'. \tag{3.2} \]

    Define

    \[ p(\xi)=\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)',\qquad \xi\in U, \tag{3.3} \]

    and putting (3.3) into (3.2), we obtain $h(\xi)\prec\dfrac{\xi p'(\xi)}{a+2}+p(\xi)$, $\xi\in U$. Using Lemma 1.3 with $n=1$ and $\gamma=a+2$, it results in $g(\xi)\prec p(\xi)$, that is, $g(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'$, with the best subordinant $g(\xi)=\dfrac{a+2}{\xi^{a+2}}\displaystyle\int_0^{\xi}t^{a+1}h(t)\,dt$ a convex function.

    Theorem 3.2. Let $f\in\mathcal{A}$, $F(\xi)=\dfrac{a+2}{\xi^{a+1}}\displaystyle\int_0^{\xi}t^a f(t)\,dt$, and $h(\xi)=\dfrac{1-(2\alpha-1)\xi}{1-\xi}$, where $\operatorname{Re}a>-2$ and $\alpha\in[0,1)$. Suppose that $\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'$ is univalent in $U$ and $\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)',\qquad \xi\in U, \tag{3.4} \]

    then

    \[ g(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)',\qquad \xi\in U, \]

    is satisfied for the convex function $g(\xi)=(2\alpha-1)-\dfrac{2(\alpha-1)}{1-\xi}\,{}_2F_1\!\left(1,1;a+3;\dfrac{\xi}{\xi-1}\right)$ as the best subordinant.

    Proof. Let $p(\xi)=\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)'$. As in the proof of Theorem 3.1, (3.4) can be expressed as

    \[ h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\frac{\xi p'(\xi)}{a+2}+p(\xi). \]

    By using Lemma 1.4, we obtain $g(\xi)\prec p(\xi)$, with

    \[ g(\xi)=\frac{a+2}{\xi^{a+2}}\int_0^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,t^{a+1}\,dt=(2\alpha-1)-\frac{2(\alpha-1)}{1-\xi}\,{}_2F_1\!\left(1,1;a+3;\frac{\xi}{\xi-1}\right)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)F(\xi)\bigr)', \]

    where $g$ is convex and the best subordinant.

    Theorem 3.3. Let $f\in\mathcal{A}$ and let $h$ be a convex function with $h(0)=1$. Assume that $\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'$ is univalent in $U$ and $\dfrac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)',\qquad \xi\in U, \tag{3.5} \]

    holds, then

    \[ g(\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi},\qquad \xi\in U, \]

    is satisfied for the convex function $g(\xi)=\dfrac1\xi\displaystyle\int_0^{\xi}h(t)\,dt$, the best subordinant.

    Proof. Denoting

    \[ p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}=\frac{\xi+\sum_{\kappa=2}^{\infty}\psi_q^{s}(\kappa,\lambda,\ell)\frac{[\kappa+\mu-1]_q!}{[\mu]_q![\kappa-1]_q!}a_\kappa\xi^\kappa}{\xi}\in\mathcal{H}[1,1], \]

    we can write $I_{q,\mu}^{s}(\lambda,\ell)f(\xi)=\xi p(\xi)$, and differentiating it, we have

    \[ \bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'=\xi p'(\xi)+p(\xi). \]

    With this notation, differential superordination (3.5) becomes

    \[ h(\xi)\prec\xi p'(\xi)+p(\xi). \]

    Using Lemma 1.3, we obtain

    \[ g(\xi)\prec p(\xi)=\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\quad\text{for}\quad g(\xi)=\frac1\xi\int_0^{\xi}h(t)\,dt, \]

    which is convex and the best subordinant.

    Theorem 3.4. Suppose that $h(\xi)=\dfrac{1-(2\alpha-1)\xi}{1-\xi}$ with $\alpha\in[0,1)$. For $f\in\mathcal{A}$, assume that $\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'$ is univalent in $U$ and $\dfrac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)',\qquad \xi\in U, \tag{3.6} \]

    holds, then

    \[ g(\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi},\qquad \xi\in U, \]

    where

    \[ g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi). \]

    Proof. Following the proof of Theorem 3.3 with $p(\xi)=\dfrac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}$, superordination (3.6) takes the form

    \[ h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\xi p'(\xi)+p(\xi). \]

    By using Lemma 1.3, we obtain $g(\xi)\prec p(\xi)$, with

    \[ g(\xi)=\frac1\xi\int_0^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi)\prec\frac{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}{\xi}, \]

    where $g$ is convex and the best subordinant.

    Theorem 3.5. Let $h$ be a convex function with $h(0)=1$. For $f\in\mathcal{A}$, assume that $\left(\dfrac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'$ is univalent in $U$ and $\dfrac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)',\qquad \xi\in U, \tag{3.7} \]

    holds, then

    \[ g(\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},\qquad \xi\in U, \]

    where the convex function $g(\xi)=\dfrac1\xi\displaystyle\int_0^{\xi}h(t)\,dt$ is the best subordinant.

    Proof. Let

    \[ p(\xi)=\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}. \]

    Differentiating it, we can write

    \[ p'(\xi)=\frac{\bigl(I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)\bigr)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}-p(\xi)\,\frac{\bigl(I_{q,\mu}^{s}(\lambda,\ell)f(\xi)\bigr)'}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}, \]

    in the form

    \[ \xi p'(\xi)+p(\xi)=\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'. \]

    Differential superordination (3.7) becomes $h(\xi)\prec\xi p'(\xi)+p(\xi)$. Applying Lemma 1.3, we obtain $g(\xi)\prec p(\xi)=\dfrac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}$, with the convex function $g(\xi)=\dfrac1\xi\displaystyle\int_0^{\xi}h(t)\,dt$ the best subordinant.

    Theorem 3.6. Assume that $h(\xi)=\dfrac{1-(2\alpha-1)\xi}{1-\xi}$ with $\alpha\in[0,1)$. For $f\in\mathcal{A}$, suppose that $\left(\dfrac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)'$ is univalent in $U$ and $\dfrac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\in Q\cap\mathcal{H}[1,1]$. If

    \[ h(\xi)\prec\left(\frac{\xi\,I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}\right)',\qquad \xi\in U, \tag{3.8} \]

    holds, then

    \[ g(\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)},\qquad \xi\in U, \]

    where

    \[ g(\xi)=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi). \]

    Proof. With $p(\xi)=\dfrac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}$, differential superordination (3.8) takes the form

    \[ h(\xi)=\frac{1-(2\alpha-1)\xi}{1-\xi}\prec\xi p'(\xi)+p(\xi). \]

    By using Lemma 1.3, we get $g(\xi)\prec p(\xi)$, with

    \[ g(\xi)=\frac1\xi\int_0^{\xi}\frac{1-(2\alpha-1)t}{1-t}\,dt=(2\alpha-1)+\frac{2(\alpha-1)}{\xi}\ln(1-\xi)\prec\frac{I_{q,\mu}^{s+1}(\lambda,\ell)f(\xi)}{I_{q,\mu}^{s}(\lambda,\ell)f(\xi)}, \]

    where $g$ is convex and the best subordinant.

    The novel findings proven in this study concern a new class of normalized analytic functions $S_{q,\mu}^{s}(\lambda,\ell;\alpha)$, given in Definition 2.1. To introduce some subclasses of univalent functions, we developed the q-analogue multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$ using the notion of a q-difference operator; the q-Ruscheweyh operator and the q-Cătas operator are also used to introduce and study distinct subclasses. In Section 2, these subclasses are examined in more detail utilizing methods of differential subordination theory. In Section 3, we derive differential superordinations involving the q-analogue multiplier-Ruscheweyh operator $I_{q,\mu}^{s}(\lambda,\ell)$ and its derivatives of first and second order. For every differential superordination under investigation, the best subordinant is provided.

    The authors contributed equally to the writing of this paper. All authors have read and agreed to the published version of the manuscript.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    This research is supported by "Decembrie 1918" University of Alba Iulia, through the scientific research funds.

    The authors declare that they have no conflicts of interest.



    [1] C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, Sixth international conference on computer vision (IEEE Cat. No. 98CH36271), (1998), 839–846. http://dx.doi.org/10.1109/ICCV.1998.710815 doi: 10.1109/ICCV.1998.710815
    [2] S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Sim., 4 (2005), 460–489. http://dx.doi.org/10.1137/040605412 doi: 10.1137/040605412
    [3] Y. Zhang, S. Li, B. Wu, S. Du, Image multiplicative denoising using adaptive Euler's elastica as the regularization, J. Sci. Comput., 90 (2022), 69. http://dx.doi.org/10.1007/s10915-021-01721-7 doi: 10.1007/s10915-021-01721-7
    [4] L. Rudin, P. L. Lions, S. Osher, Geometric level set methods in imaging, vision, and graphics, New York, USA: Springer, 2003. http://dx.doi.org/10.1007/0-387-21810-6_6
    [5] J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model, SIAM J. Imaging Sci., 1 (2008), 294–321. http://dx.doi.org/10.1137/070689954 doi: 10.1137/070689954
    [6] K. Bredies, K. Kunisch, T. Pock, Total generalized variation, SIAM J. Imaging Sci., 3 (2010), 492–526. http://dx.doi.org/10.1137/090769521 doi: 10.1137/090769521
    [7] Y. Lv, Total generalized variation denoising of speckled images using a primal-dual algorithm, J. Appl. Math. Comput., 62 (2020), 489–509. http://dx.doi.org/10.1007/s12190-019-01293-8 doi: 10.1007/s12190-019-01293-8
    [8] A. Ben-Loghfyry, A. Hakim, A. Laghrib, A denoising model based on the fractional Beltrami regularization and its numerical solution, J. Appl. Math. Comput., 69 (2023), 1431–1463. http://dx.doi.org/10.1007/s12190-022-01798-9 doi: 10.1007/s12190-022-01798-9
    [9] T. H. Ma, T. Z. Huang, X. L. Zhao, Spatially dependent regularization parameter selection for total generalized variation-based image denoising, Comput. Appl. Math., 37 (2018), 277–296. http://dx.doi.org/10.1007/s40314-016-0342-8 doi: 10.1007/s40314-016-0342-8
    [10] H. Houichet, A. Theljani, M. Moakher, A nonlinear fourth-order PDE for image denoising in Sobolev spaces with variable exponents and its numerical algorithm, Comput. Appl. Math., 40 (2021), 1–29. http://dx.doi.org/10.1007/s40314-021-01462-1 doi: 10.1007/s40314-021-01462-1
    [11] A. Hakim, A. Ben-Loghfyry, A total variable-order variation model for image denoising, AIMS Math., 4 (2019), 1320–1335. http://dx.doi.org/10.3934/math.2019.5.1320 doi: 10.3934/math.2019.5.1320
    [12] J. L. Starck, E. J. Candès, D. L. Donoho, The curvelet transform for image denoising, IEEE T. Image Process., 11 (2002), 670–684. http://dx.doi.org/10.1109/TIP.2002.1014998 doi: 10.1109/TIP.2002.1014998
    [13] J. Yang, Y. Wang, W. Xu, Q. Dai, Image and video denoising using adaptive dual-tree discrete wavelet packets, IEEE T. Circ. Syst. Vid., 19 (2009), 642–655. https://doi.org/10.1109/TCSVT.2009.2017402
    [14] L. Fan, X. Li, H. Fan, Y. Feng, C. Zhang, Adaptive texture-preserving denoising method using gradient histogram and nonlocal self-similarity priors, IEEE T. Circ. Syst. Vid., 29 (2019), 3222–3235. https://doi.org/10.1109/TCSVT.2018.2878794
    [15] Z. Long, N. H. Younan, Denoising of images with multiplicative noise corruption, 2005 13th European Signal Processing Conference, (2005), 1–4.
    [16] W. Dong, L. Zhang, G. Shi, X. Li, Nonlocally centralized sparse representation for image restoration, IEEE T. Image Process., 22 (2013), 1620–1630. https://doi.org/10.1109/TIP.2012.2235847
    [17] A. Buades, B. Coll, J. M. Morel, A non-local algorithm for image denoising, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), (2005), 60–65. https://doi.org/10.1109/CVPR.2005.38
    [18] K. Dabov, A. Foi, V. Katkovnik, K. Egiazarian, Image denoising by sparse 3-D transform-domain collaborative filtering, IEEE T. Image Process., 16 (2007), 2080–2095. https://doi.org/10.1109/TIP.2007.901238
    [19] S. Gu, L. Zhang, W. Zuo, X. Feng, Weighted nuclear norm minimization with application to image denoising, 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), 2862–2869. https://doi.org/10.1109/CVPR.2014.366
    [20] J. Mairal, F. Bach, J. Ponce, G. Sapiro, A. Zisserman, Non-local sparse models for image restoration, 2009 IEEE 12th International Conference on Computer Vision, (2009), 2272–2279. https://doi.org/10.1109/ICCV.2009.5459452
    [21] W. Dong, L. Zhang, G. Shi, X. Li, Nonlocally centralized sparse representation for image restoration, IEEE T. Image Process., 22 (2013), 1620–1630. https://doi.org/10.1109/TIP.2012.2235847
    [22] Q. Guo, C. Zhang, Y. Zhang, H. Liu, An efficient SVD-based method for image denoising, IEEE T. Circ. Syst. Vid., 26 (2016), 868–880. https://doi.org/10.1109/TCSVT.2015.2416631
    [23] M. Yahia, T. Ali, M. M. Mortula, R. Abdelfattah, S. E. Mahdy, N. S. Arampola, Enhancement of SAR speckle denoising using the improved iterative filter, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 13 (2020), 859–871. https://doi.org/10.1109/JSTARS.2020.2973920
    [24] K. Zhang, W. Zuo, Y. Chen, D. Meng, L. Zhang, Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising, IEEE T. Image Process., 26 (2017), 3142–3155. https://doi.org/10.1109/TIP.2017.2662206
    [25] Y. Meng, J. Zhang, A novel gray image denoising method using convolutional neural network, IEEE Access, 10 (2022), 49657–49676. https://doi.org/10.1109/ACCESS.2022.3169131
    [26] G. Wang, Z. Pan, Z. Zhang, Deep CNN denoiser prior for multiplicative noise removal, Multimed. Tools Appl., 78 (2019), 29007–29019. https://doi.org/10.1007/s11042-018-6294-9
    [27] H. Tian, B. Fowler, A. E. Gamal, Analysis of temporal noise in CMOS photodiode active pixel sensor, IEEE J. Solid-St. Circ., 36 (2001), 92–101. https://doi.org/10.1109/4.896233
    [28] J. Zhang, K. Hirakawa, Improved denoising via Poisson mixture modeling of image sensor noise, IEEE T. Image Process., 26 (2017), 1565–1578. https://doi.org/10.1109/TIP.2017.2651365
    [29] D. Chen, X. Teng, Novel variational approach for generalized signal dependent noise removal, 2018 11th International Symposium on Computational Intelligence and Design (ISCID), 2 (2018), 380–384. https://doi.org/10.1109/ISCID.2018.10187
    [30] J. Zhang, Y. Duan, Y. Lu, M. K. Ng, H. Chang, Bilinear constraint based ADMM for mixed Poisson-Gaussian noise removal, arXiv preprint, arXiv:1910.08206, 2019.
    [31] M. Ghulyani, M. Arigovindan, Fast roughness minimizing image restoration under mixed Poisson-Gaussian noise, IEEE T. Image Process., 30 (2021), 134–149. https://doi.org/10.1109/TIP.2020.3032036
    [32] S. Huang, T. Lu, Z. Lu, J. Rong, X. Zhao, J. Li, CMOS image sensor fixed pattern noise calibration scheme based on digital filtering method, Microelectron. J., 124 (2022), 105431. https://doi.org/10.1016/j.mejo.2022.105431
    [33] S. Lee, M. G. Kang, Poisson-Gaussian noise reduction for X-ray images based on local linear minimum mean square error shrinkage in nonsubsampled contourlet transform domain, IEEE Access, 9 (2021), 100637–100651. https://doi.org/10.1109/ACCESS.2021.3097078
    [34] J. Zhang, K. Hirakawa, Improved denoising via Poisson mixture modeling of image sensor noise, IEEE T. Image Process., 26 (2017), 1565–1578. https://doi.org/10.1109/TIP.2017.2651365
    [35] A. Repetti, E. Chouzenoux, J. Pesquet, A penalized weighted least squares approach for restoring data corrupted with signal-dependent noise, 2012 Proceedings of the 20th European Signal Processing Conference (EUSIPCO), (2012), 1553–1557.
    [36] K. Hirakawa, T. W. Parks, Image denoising using total least squares, IEEE T. Image Process., 15 (2006), 2730–2742. https://doi.org/10.1109/TIP.2006.877352
    [37] Y. Qiu, Z. Gan, Y. Fan, X. Zhu, An adaptive image denoising method for mixture Gaussian noise, 2011 International Conference on Wireless Communications and Signal Processing (WCSP), (2011), 1–5. https://doi.org/10.1109/WCSP.2011.6096774
    [38] J. Byun, S. Cha, T. Moon, FBI-Denoiser: fast blind image denoiser for Poisson-Gaussian noise, 2021 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2021), 5768–5777. https://doi.org/10.1109/CVPR46437.2021.00571
    [39] D. L. Donoho, De-noising by soft-thresholding, IEEE T. Inform. Theory, 41 (1995), 613–627. https://doi.org/10.1109/18.382009
    [40] J. Immerkær, Fast noise variance estimation, Comput. Vis. Image Und., 64 (1996), 300–302. https://doi.org/10.1006/cviu.1996.0060
    [41] D. Zoran, Y. Weiss, Scale invariance and noise in natural images, 2009 IEEE 12th International Conference on Computer Vision, (2009), 2209–2216. https://doi.org/10.1109/ICCV.2009.5459476
    [42] S. Zhu, Z. Yu, Self-guided filter for image denoising, IET Image Process., 14 (2020), 2561–2566. https://doi.org/10.1049/iet-ipr.2019.1471
    [43] L. Lu, W. Jin, X. Wang, Non-local means image denoising with a soft threshold, IEEE Signal Proc. Lett., 22 (2015), 833–837. https://doi.org/10.1109/LSP.2014.2371332
    [44] D.-G. Kim, Y. Ali, M. A. Farooq, A. Mushtaq, M. A. A. Rehman, Z. H. Shamsi, Hybrid deep learning framework for reduction of mixed noise via low rank noise estimation, IEEE Access, 10 (2022), 46738–46752. https://doi.org/10.1109/ACCESS.2022.3170490
    [45] D. H. Shin, R. H. Park, S. Yang, J. H. Jung, Block-based noise estimation using adaptive Gaussian filtering, IEEE T. Consum. Electr., 51 (2005), 218–226. https://doi.org/10.1109/TCE.2005.1405723
    [46] A. Danielyan, A. Foi, Noise variance estimation in nonlocal transform domain, 2009 International Workshop on Local and Non-Local Approximation in Image Processing, (2009), 41–45. https://doi.org/10.1109/LNLA.2009.5278404
    [47] X. Liu, M. Tanaka, M. Okutomi, Noise level estimation using weak textured patches of a single noisy image, 2012 19th IEEE International Conference on Image Processing, (2012), 665–668. https://doi.org/10.1109/ICIP.2012.6466947
    [48] X. Liu, M. Tanaka, M. Okutomi, Estimation of signal dependent noise parameters from a single image, 2013 IEEE International Conference on Image Processing, (2013), 79–82. https://doi.org/10.1109/ICIP.2013.6738017
    [49] C. Sutour, C.-A. Deledalle, J.-F. Aujol, Estimation of the noise level function based on a non-parametric detection of homogeneous image regions, SIAM J. Imaging Sci., 8 (2015), 1–31. https://doi.org/10.1137/15M1012682
    [50] Z. Wang, Z. Huang, Y. Xu, Y. Zhang, X. Li, X. Li, et al., Image noise level estimation by employing chi-square distribution, 2021 IEEE 21st International Conference on Communication Technology (ICCT), (2021), 1158–1161. https://doi.org/10.1109/ICCT52962.2021.9657946
    [51] V. A. Pimpalkhute, R. Page, A. Kothari, K. M. Bhurchandi, V. M. Kamble, Digital image noise estimation using DWT coefficients, IEEE T. Image Process., 30 (2021), 1962–1972. https://doi.org/10.1109/TIP.2021.3049961
    [52] J. Sijbers, A. den Dekker, Maximum likelihood estimation of signal amplitude and noise variance from MR data, Magn. Reson. Med., 51 (2004), 586–594. https://doi.org/10.1002/mrm.10728
    [53] M. W. Wu, Y. Jin, Y. Li, T. Song, P. Y. Kam, Maximum-likelihood, magnitude-based, amplitude and noise variance estimation, IEEE Signal Proc. Lett., 28 (2021), 414–418. https://doi.org/10.1109/LSP.2021.3055464
    [54] A. G. Vostretsov, S. G. Filatova, The estimation of parameters of pulse signals having an unknown form that are observed against the background of the additive mixture of the white Gaussian noise and a linear component with unknown parameters, J. Commun. Technol. Electron., 66 (2021), 938–947. https://doi.org/10.1134/S106422692108009X
    [55] R. A. Fisher, On the mathematical foundations of theoretical statistics, Philos. T. Roy. Soc. A, 222 (1922), 309–368. https://doi.org/10.1098/rsta.1922.0009
    [56] Y. Ouyang, S. Wang, L. Zhang, Quantum optical interferometry via the photon-added two-mode squeezed vacuum states, J. Opt. Soc. Am. B, 33 (2016), 1373–1381. https://doi.org/10.1364/JOSAB.33.001373
    [57] G. C. Knee, W. J. Munro, Fisher information versus signal-to-noise ratio for a split detector, Phys. Rev. A, 92 (2015), 012130. https://doi.org/10.1103/PhysRevA.92.012130
    [58] J. Chao, E. S. Ward, R. J. Ober, Fisher information theory for parameter estimation in single molecule microscopy: tutorial, J. Opt. Soc. Am. A, 33 (2016), B36–B57. https://doi.org/10.1364/JOSAA.33.000B36
    [59] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, et al., Overcoming catastrophic forgetting in neural networks, P. Natl. Acad. Sci. USA, 114 (2017), 3521–3526. https://doi.org/10.1073/pnas.1611835114
    [60] J. Martens, New insights and perspectives on the natural gradient method, J. Mach. Learn. Res., 21 (2020), 5776–5851.
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)