Feature representations with rich topic information can greatly improve the performance of story segmentation tasks. VAEGAN offers distinct advantages in feature learning by combining the variational autoencoder (VAE) and the generative adversarial network (GAN): it not only captures intricate data representations through the VAE's probabilistic encoding and decoding mechanism but also enhances feature diversity and quality via the GAN's adversarial training. To better learn topical-domain representations, we used a topical classifier to supervise the training process of VAEGAN. Based on the learned features, a segmentor splits the document into shorter ones with different topics. The hidden Markov model (HMM) is a popular approach for story segmentation, in which stories are viewed as instances of topics (hidden states). The number of states has to be set manually, but it is often unknown in real scenarios. To solve this problem, we proposed an infinite HMM (IHMM) approach that places a hierarchical Dirichlet process (HDP) prior on transition matrices over countably infinite state spaces to automatically infer the number of states from the data. Given a running text, a blocked Gibbs sampler labeled the states with topic classes. The position where the topic changes was a story boundary. Experimental results on the TDT2 corpus demonstrated that the proposed topical VAEGAN-IHMM approach was significantly better than the traditional HMM method in story segmentation tasks and achieved state-of-the-art performance.
Citation: Jia Yu, Huiling Peng, Guoqiang Wang, Nianfeng Shi. A topical VAEGAN-IHMM approach for automatic story segmentation[J]. Mathematical Biosciences and Engineering, 2024, 21(7): 6608-6630. doi: 10.3934/mbe.2024289
This research focuses on the comprehensive connection between analytic functions and their inverses, which provides new ideas for investigating coefficient estimates and inequalities. The outcome of the present study is particularly relevant in the framework of geometric function theory (GFT), where particular geometric properties are established for analytic functions employing methods specific to this domain of research, but it could also offer applications in other related fields such as partial differential equation theory, engineering, fluid dynamics, and electronics. A tremendous impact on the development of GFT was made by Bieberbach's conjecture, an essential problem related to coefficient estimates for functions that lie within the family S of univalent functions. This conjecture states that for f∈S, expressed through the Taylor–Maclaurin series expansion:
$$f(υ)=υ+\sum_{k=2}^{∞}d_kυ^k,\qquad υ∈D, \tag{1.1}$$
where D:={υ∈C:|υ|<1}, the coefficient inequality |d_k|≤k holds for all k≥2. The family of analytic functions with the series representation given in (1.1) is denoted by A. It is worth mentioning that Koebe first introduced S as a subclass of A in 1907. Bieberbach [1] originally proposed this conjecture in 1916, initially verifying it for the case k=2. Subsequent advancements by researchers including Löwner [2], Garabedian and Schiffer [3], Pederson and Schiffer [4], and Pederson [5] offered partial proofs for cases up to k=6. However, the complete proof remained open until 1985, when de Branges [6] utilized hypergeometric functions to establish it for all k≥2.
In 1960, Lawrence Zalcman postulated that |d_k^2−d_{2k−1}|≤(k−1)^2 for k≥2 and f∈S, a functional inequality that implies the Bieberbach conjecture. This has led to the publication of several papers [7,8,9] on the Zalcman hypothesis and its generalized form |λd_k^2−d_{2k−1}|≤λk^2−2k+1 with λ≥0 for various subfamilies of the family S. This hypothesis remained unproven for a long time until Krushkal's breakthrough in 1999, when he proved it in [10] for k≤6, and he later treated the case k≥2 by utilizing the holomorphic homotopy of univalent functions in an unpublished manuscript [11]. It was also demonstrated that |d_k^l−d_2^{l(k−1)}|≤2^{l(k−1)}−k^l for k,l≥2 and f∈S. The Bieberbach conjecture landscape is further enhanced by other conjectures, such as the one presented by Ma [12] in 1999, which is
$$|d_jd_k−d_{j+k−1}|≤(j−1)(k−1),\qquad j,k≥2.$$
He restricted his proof to a subclass of S; the problem for the full class S remains open.
Now, let us recall the concept of subordination, which essentially describes a relationship between analytic functions. An analytic function g1 is subordinate to g2, written g1≺g2, if there exists a Schwarz function ω (that is, an analytic function with ω(0)=0 and |ω(υ)|<1 in D) such that g1(υ)=g2(ω(υ)). If g2 is univalent in D, then
g1(υ)≺g2(υ),(υ∈D), |
if and only if
g1(0)=g2(0) and g1(D)⊂g2(D).
In essence, this relationship helps us understand how one function is "contained" within another, providing insights into their behavior within the complex plane. The family of univalent functions comprises three classic subclasses C, S∗, and K, each distinguished by its unique properties. These subclasses are commonly known as convex functions, starlike functions, and close-to-convex functions, respectively. Let us define each class:
$$C:=\left\{f∈S:\frac{(υf′(υ))′}{f′(υ)}≺\frac{1+υ}{1−υ},\ υ∈D\right\},$$
$$S∗:=\left\{f∈S:\frac{υf′(υ)}{f(υ)}≺\frac{1+υ}{1−υ},\ υ∈D\right\}$$
and
$$K:=\left\{f∈S:\frac{υf′(υ)}{h(υ)}≺\frac{1+υ}{1−υ},\ υ∈D\right\},$$
for some h∈S∗. The above family K may be reduced to the family of bounded turning functions BT by choosing h(υ)=υ. Moreover, a number of intriguing subfamilies of class S were examined by replacing \frac{1+υ}{1−υ} by other special functions. For the reader's benefit, a few of them are included below:
(i) S∗_e≡S∗(e^z) and C_e≡C(e^z) [13]; S∗_{SG}≡S∗\left(\frac{2}{1+e^{−z}}\right) and C_{SG}≡C\left(\frac{2}{1+e^{−z}}\right) [14];
(ii) S∗_{cr}≡S∗\left(z+\sqrt{1+z^2}\right) and C_{cr}≡C\left(z+\sqrt{1+z^2}\right) [15]; S∗_{Ne}≡S∗\left(1+z−\frac{1}{3}z^3\right) [16];
(iii) S∗_{(n−1)L}≡S∗(Ψ_{n−1}(z)) [17] with Ψ_{n−1}(z)=1+\frac{n}{n+1}z+\frac{1}{n+1}z^n for n≥2;
(iv) S∗_{\sinh}≡S∗(1+\sinh(λz)) with 0≤λ≤\ln(1+\sqrt{2}) [18].
The study of the inverse functions for functions in various subclasses of S is a significant area of mathematics. By the well-known Koebe one-quarter theorem, every univalent function f defined in D has an inverse f−1 defined at least on a disk of radius 1/4, with the Taylor series expansion
$$f^{−1}(ω):=ω+\sum_{n=2}^{∞}B_nω^n,\qquad |ω|<1/4. \tag{1.2}$$
Employing the formula f(f−1(ω))=ω, we acquire
$$B_2=−d_2, \tag{1.3}$$
$$B_3=2d_2^2−d_3, \tag{1.4}$$
$$B_4=5d_2d_3−5d_2^3−d_4, \tag{1.5}$$
$$B_5=14d_2^4+3d_3^2−21d_2^2d_3+6d_2d_4−d_5. \tag{1.6}$$
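These reversion formulas can be cross-checked mechanically. The following is a minimal verification sketch, assuming sympy is available; the symbol names are illustrative only and not part of the paper.

```python
# Sketch: re-derive (1.3)-(1.6) by series reversion of f(υ) = υ + d2 υ^2 + ... + d5 υ^5.
import sympy as sp

w, v = sp.symbols('w v')
d2, d3, d4, d5 = sp.symbols('d2 d3 d4 d5')
B2, B3, B4, B5 = sp.symbols('B2 B3 B4 B5')

f = v + d2*v**2 + d3*v**3 + d4*v**4 + d5*v**5        # truncation of f in S
g = w + B2*w**2 + B3*w**3 + B4*w**4 + B5*w**5        # candidate inverse series

# f(g(w)) = w, so the coefficients of w^2, ..., w^5 in f(g(w)) must vanish
comp = sp.expand(f.subs(v, g))
sol = sp.solve([comp.coeff(w, k) for k in range(2, 6)], [B2, B3, B4, B5], dict=True)[0]

print(sp.expand(sol[B2]))   # -d2
print(sp.expand(sol[B3]))   # 2*d2**2 - d3
print(sp.expand(sol[B4]))   # -5*d2**3 + 5*d2*d3 - d4
print(sp.expand(sol[B5]))   # 14*d2**4 - 21*d2**2*d3 + 6*d2*d4 + 3*d3**2 - d5
```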
We consider the Hankel determinant of f−1 given by
$$\hat{H}_{λ,n}(f^{−1})=\begin{vmatrix}B_n&B_{n+1}&\cdots&B_{n+λ−1}\\ B_{n+1}&B_{n+2}&\cdots&B_{n+λ}\\ \vdots&\vdots&\ddots&\vdots\\ B_{n+λ−1}&B_{n+λ}&\cdots&B_{n+2λ−2}\end{vmatrix}.$$
Specifically, the second- and third-order Hankel determinants of f−1 are defined as the following determinants, respectively:
$$\hat{H}_{2,2}(f^{−1})=\begin{vmatrix}B_2&B_3\\ B_3&B_4\end{vmatrix}=B_2B_4−B_3^2,$$
$$\hat{H}_{3,1}(f^{−1})=\begin{vmatrix}1&B_2&B_3\\ B_2&B_3&B_4\\ B_3&B_4&B_5\end{vmatrix}=B_3(B_2B_4−B_3^2)−B_4(B_4−B_2B_3)+B_5(B_3−B_2^2).$$
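For reference, here is a small check, assuming sympy is available, that these cofactor expansions agree with the underlying 2×2 and 3×3 determinants:

```python
# Sketch: confirm the quoted expansions of H_{2,2}(f^{-1}) and H_{3,1}(f^{-1}).
import sympy as sp

B2, B3, B4, B5 = sp.symbols('B2 B3 B4 B5')
H22 = sp.Matrix([[B2, B3], [B3, B4]]).det()
H31 = sp.Matrix([[1, B2, B3], [B2, B3, B4], [B3, B4, B5]]).det()   # first entry B1 = 1

assert sp.expand(H22 - (B2*B4 - B3**2)) == 0
assert sp.expand(H31 - (B3*(B2*B4 - B3**2) - B4*(B4 - B2*B3) + B5*(B3 - B2**2))) == 0
print("cofactor expansions confirmed")
```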
As can be seen, f−1 need not itself belong to the class S, so this concept is a natural generalization of the Hankel determinant with the coefficients of f∈S as entries. There are very few publications in the literature that address coefficient-related problems for the inverse function, particularly the determinants stated above. This gap has motivated researchers and has led to the publication of several articles [19,20,21,22,23] on the above-stated Hankel determinants.
The key mathematical concept in this study is the Hankel determinant ˆHλ,n(f), where n,λ∈{1,2,…}. This concept was introduced by Pommerenke [24,25]. It is composed of the coefficients of the function f∈S and is expressed as
$$\hat{H}_{λ,n}(f):=\begin{vmatrix}d_n&d_{n+1}&\cdots&d_{n+λ−1}\\ d_{n+1}&d_{n+2}&\cdots&d_{n+λ}\\ \vdots&\vdots&\ddots&\vdots\\ d_{n+λ−1}&d_{n+λ}&\cdots&d_{n+2λ−2}\end{vmatrix}.$$
This determinant is utilized in both pure mathematics and applied sciences, including non-stationary signal theory, the Hamburger moment problem, Markov process theory, and a variety of other fields. There are relatively few publications on estimates of the Hankel determinant for functions in the general class S. Hayman established the best estimate for f∈S in [26], asserting that |\hat{H}_{2,n}(f)|≤η\sqrt{n}, where η is an absolute constant. Moreover, it was demonstrated in [27] that |\hat{H}_{2,2}(f)|≤η with η≤11/3 for f∈S. The two determinants \hat{H}_{2,1}(f) and \hat{H}_{2,2}(f) have been thoroughly examined in the literature for different subfamilies of univalent functions. Notable work was done by Janteng et al. [28], Lee et al. [29], Ebadian et al. [30], and Cho et al. [31], who determined sharp estimates of the second-order Hankel determinant for certain subclasses of S.
The sharp estimate of the third-order Hankel determinant ˆH3,1(f) for some analytic univalent functions is mathematically more difficult to find than the second-order Hankel determinant. Numerous articles on the third-order Hankel determinant have been published in the literature in which nonsharp limits of this determinant for the fundamental subclasses of analytic functions are determined. Following these arduous investigations, a few scholars were eventually able to obtain sharp bounds of this determinant for the classes C, BT, and S∗, as reported in the recently published works [32,33,34], respectively. These estimates are given by
$$|\hat{H}_{3,1}(f)|≤\begin{cases}\dfrac{4}{135}&\text{for }f∈C,\\[3pt]\dfrac{1}{4}&\text{for }f∈BT,\\[3pt]\dfrac{4}{9}&\text{for }f∈S∗.\end{cases}$$
Later on, Lecko et al. [35] established the sharp estimate for |ˆH3,1(f)| by utilizing similar approaches, specifically for functions that belong to the S∗(1/2) class. Also, the articles [36,37,38] provide more investigations on the exact bounds of this third-order Hankel determinant.
Now, let us consider the three function classes defined respectively by
$$S∗_{Sg}:=\left\{f∈S:\frac{2υf′(υ)}{f(υ)−f(−υ)}≺\frac{2}{1+e^{−υ}},\ υ∈D\right\},$$
$$S∗_{3l,s}:=\left\{f∈S:\frac{2υf′(υ)}{f(υ)−f(−υ)}≺1+\frac{4}{5}υ+\frac{1}{5}υ^4,\ υ∈D\right\}$$
and
$$SK_{\exp}:=\left\{f∈S:\frac{2(υf′(υ))′}{(f(υ)−f(−υ))′}≺e^{υ},\ υ∈D\right\}.$$
These classes have been studied by Faisal et al. [39], Tang et al. [40], and Mendiratta et al. [13], respectively. In this paper, we improve the bound on the third-order Hankel determinant |\hat{H}_{3,1}(f^{−1})| that was determined by Hu and Deng [41] and published recently in AIMS Mathematics. Furthermore, we obtain bounds for the first three inverse coefficients, sharp bounds for the Krushkal, Zalcman, and Fekete–Szegö functionals, and upper bounds for the Hankel determinants |\hat{H}_{2,2}(f^{−1})| and |\hat{H}_{3,1}(f^{−1})|.
Let B0 be the class of Schwarz functions. It is noted that ω∈B0 can be written as
$$ω(υ)=\sum_{n=1}^{∞}σ_nυ^n,\qquad υ∈D. \tag{2.1}$$
We require the following lemmas to prove our main results.
Lemma 2.1. [42] Let ω(υ) be a Schwarz function. Then, for any real numbers ϱ and ς such that
$$(ϱ,ς)∈\left\{|ϱ|≤\frac{1}{2},\ −1≤ς≤1\right\}∪\left\{\frac{1}{2}≤|ϱ|≤2,\ \frac{4}{27}(|ϱ|+1)^3−(|ϱ|+1)≤ς≤1\right\},$$
$$(ϱ,ς)∈\left\{2≤|ϱ|≤4,\ \frac{2|ϱ|(|ϱ|+1)}{|ϱ|^2+2|ϱ|+4}≤ς≤\frac{1}{12}(ϱ^2+8)\right\},$$
$$(ϱ,ς)∈\left\{\frac{1}{2}≤|ϱ|≤2,\ −\frac{2}{3}(1+|ϱ|)≤ς≤\frac{4}{27}(1+|ϱ|)^3−(1+|ϱ|)\right\},$$
the following sharp estimate holds:
$$|σ_3+ϱσ_1σ_2+ςσ_1^3|≤1.$$
Lemma 2.2. [43] If ω(υ) is a Schwarz function, then
$$|σ_n|≤1,\qquad n≥1. \tag{2.2}$$
Moreover, for τ∈C, the following inequality holds:
$$|σ_2+τσ_1^2|≤\max\{1,|τ|\}. \tag{2.3}$$
Lemma 2.3. [44] Let ω(υ) be a Schwarz function. Then
$$|σ_2|≤1−|σ_1|^2, \tag{2.4}$$
$$|σ_3|≤1−|σ_1|^2−\frac{|σ_2|^2}{1+|σ_1|}, \tag{2.5}$$
$$|σ_4|≤1−|σ_1|^2−|σ_2|^2. \tag{2.6}$$
Lemma 2.4. [45] Let ω(υ) be a Schwarz function. Then
$$|σ_1σ_3−σ_2^2|≤1−|σ_1|^2$$
and
$$|σ_4+(1+Λ)σ_1σ_3+σ_2^2+(1+2Λ)σ_1^2σ_2+Λσ_1^4|≤\max\{1,|Λ|\},\qquad Λ∈C. \tag{2.7}$$
In this section, we will improve the bound of the third-order Hankel determinant |ˆH3,1(f−1)| with inverse coefficient entries for functions belonging to the class S∗Sg.
Theorem 3.1. Let f−1 be the inverse of the function f∈S∗Sg and have the form (1.2). Then
|ˆH3,1(f−1)|<0.0317. |
Proof. Let f∈S∗Sg. Then, by subordination relationship, it implies
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=\frac{2}{1+e^{−ω(υ)}},\qquad υ∈D, \tag{3.1}$$
where it is also assumed that
$$ω(υ)=σ_1υ+σ_2υ^2+σ_3υ^3+σ_4υ^4+\cdots. \tag{3.2}$$
Using (1.1), we obtain
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=1+2d_2υ+2d_3υ^2+(−2d_2d_3+4d_4)υ^3+(−2d_3^2+4d_5)υ^4+\cdots. \tag{3.3}$$
By some easy calculation and utilizing the series expansion of (3.2), we achieve
$$\frac{2}{1+e^{−ω(υ)}}=1+\frac{1}{2}σ_1υ+\frac{1}{2}σ_2υ^2+\left(−\frac{1}{24}σ_1^3+\frac{1}{2}σ_3\right)υ^3+\left(−\frac{1}{8}σ_1^2σ_2+\frac{1}{2}σ_4\right)υ^4+\cdots. \tag{3.4}$$
Now by comparing (3.3) and (3.4), we obtain
$$d_2=\frac{1}{4}σ_1, \tag{3.5}$$
$$d_3=\frac{1}{4}σ_2, \tag{3.6}$$
$$d_4=−\frac{1}{96}σ_1^3+\frac{1}{8}σ_3+\frac{1}{32}σ_1σ_2, \tag{3.7}$$
$$d_5=\frac{1}{8}σ_4+\frac{1}{32}σ_2^2−\frac{1}{32}σ_1^2σ_2. \tag{3.8}$$
Putting (3.5)–(3.8) in (1.3)–(1.6), we obtain
$$B_2=−\frac{1}{4}σ_1, \tag{3.9}$$
$$B_3=−\frac{1}{4}σ_2+\frac{1}{8}σ_1^2, \tag{3.10}$$
$$B_4=−\frac{13}{192}σ_1^3−\frac{1}{8}σ_3+\frac{9}{32}σ_1σ_2, \tag{3.11}$$
$$B_5=\frac{5}{32}σ_2^2−\frac{1}{4}σ_1^2σ_2+\frac{5}{128}σ_1^4−\frac{1}{8}σ_4+\frac{3}{16}σ_1σ_3. \tag{3.12}$$
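A symbolic cross-check of (3.5)–(3.12) is straightforward. The sketch below (sympy assumed; not part of the original proof) expands the right-hand side of (3.1), matches it against (3.3), and then applies (1.3)–(1.6):

```python
# Sketch: verify the coefficient identities for the class S*_Sg.
import sympy as sp

v, s1, s2, s3, s4 = sp.symbols('v s1 s2 s3 s4')    # s_i stands for σ_i
om = s1*v + s2*v**2 + s3*v**3 + s4*v**4
rhs = sp.series(2/(1 + sp.exp(-om)), v, 0, 5).removeO()

# match with (3.3): coefficients 2d2, 2d3, 4d4 - 2d2d3, 4d5 - 2d3^2
d2 = rhs.coeff(v, 1)/2
d3 = rhs.coeff(v, 2)/2
d4 = sp.expand((rhs.coeff(v, 3) + 2*d2*d3)/4)
d5 = sp.expand((rhs.coeff(v, 4) + 2*d3**2)/4)
print(d2, d3, d4, d5, sep='\n')                    # reproduces (3.5)-(3.8)

B4 = sp.expand(5*d2*d3 - 5*d2**3 - d4)             # via (1.5)
B5 = sp.expand(14*d2**4 + 3*d3**2 - 21*d2**2*d3 + 6*d2*d4 - d5)   # via (1.6)
print(B4)   # -13*s1**3/192 + 9*s1*s2/32 - s3/8, i.e., (3.11)
print(B5)   # 5*s1**4/128 - s1**2*s2/4 + 3*s1*s3/16 + 5*s2**2/32 - s4/8, i.e., (3.12)
```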
The determinant |ˆH3,1(f−1)| can be reconfigured as follows:
$$|\hat{H}_{3,1}(f^{−1})|=|2B_2B_3B_4−B_4^2−B_2^2B_5−B_3^3+B_3B_5|.$$
From (3.9)–(3.12), we easily write
$$|\hat{H}_{3,1}(f^{−1})|=\frac{1}{64}\left|−σ_3^2+\left(\frac{1}{2}σ_2+\frac{1}{6}σ_1^2\right)σ_1σ_3+\frac{5}{576}σ_1^6−\frac{3}{2}σ_2^3+2σ_2σ_4−\frac{5}{48}σ_1^4σ_2+\frac{5}{16}σ_1^2σ_2^2−\frac{1}{2}σ_1^2σ_4\right|.$$
Now we begin by utilizing Lemma 2.1 with ϱ=−\frac{1}{2} and ς=−\frac{1}{6}, which gives
$$\left|σ_3\left[σ_3+\left(−\frac{1}{2}\right)σ_1σ_2+\left(−\frac{1}{6}\right)σ_1^3\right]\right|≤|σ_3|,$$
and also by using Lemma 2.3, we have
$$|σ_3|≤1−\frac{|σ_2|^2}{1+|σ_1|}−|σ_1|^2≤1−\frac{|σ_2|^2}{2}−|σ_1|^2.$$
Applying it and also using |σ_4|≤1−|σ_2|^2−|σ_1|^2, we achieve
$$|\hat{H}_{3,1}(f^{−1})|≤\frac{1}{64}E(|σ_1|,|σ_2|),$$
where
$$E(σ,t)=1−σ^2−\frac{1}{2}t^2+\frac{5}{576}σ^6+\frac{3}{2}t^3+2t(1−σ^2−t^2)+\frac{5}{48}σ^4t+\frac{5}{16}σ^2t^2+\frac{1}{2}σ^2(1−σ^2−t^2),\qquad σ=|σ_1|,\ t=|σ_2|.$$
But E is a decreasing function of the variable σ; consequently,
$$E(σ,t)≤E(0,t)=1−\frac{1}{2}t^2−\frac{1}{2}t^3+2t.$$
The function E(0,t) attains its maximum on [0,1] at t=−\frac{1}{3}+\frac{1}{3}\sqrt{13}, so E(0,t)≤2.0323, which completes the proof.
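The final one-variable maximization can be checked numerically; a quick sketch, assuming numpy is available (grid resolution is arbitrary):

```python
# Sketch: maximize E(0,t) = 1 - t^2/2 - t^3/2 + 2t on [0,1].
import numpy as np

t = np.linspace(0.0, 1.0, 200001)
E0 = 1 - 0.5*t**2 - 0.5*t**3 + 2*t
i = int(np.argmax(E0))
print(t[i], E0[i])                  # ≈ 0.86852, ≈ 2.03230
print((-1 + np.sqrt(13))/3)         # stationary point -1/3 + sqrt(13)/3 ≈ 0.86852
```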
Conjecture 3.2. If the inverse of f∈S∗Sg is of the form (1.2), then
$$|\hat{H}_{3,1}(f^{−1})|≤\frac{1}{64}.$$
Equality will be obtained by using (1.3)–(1.6) together with
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=1+\frac{1}{2}υ^3−\frac{1}{24}υ^9+\cdots.$$
We begin this section by computing the estimates of the first three initial inverse coefficients for functions in the family S∗3l,s.
Theorem 4.1. Let the inverse function of f∈S∗3l,s have the series form (1.2). Then
$$|B_2|≤\frac{2}{5},\qquad |B_3|≤\frac{2}{5},\qquad |B_4|≤\frac{1}{5}.$$
Equality can easily be obtained by utilizing (1.3) up to (1.5) together with
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=1+\frac{4}{5}υ^m+\frac{1}{5}υ^{4m},\qquad\text{for } m=1,2,3. \tag{4.1}$$
Proof. Let f∈S∗3l,s. Then we easily write
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=1+\frac{4}{5}ω(υ)+\frac{1}{5}(ω(υ))^4,\qquad υ∈D,$$
and here ω represents the Schwarz function. Also, let us assume that
$$ω(υ)=σ_1υ+σ_2υ^2+σ_3υ^3+σ_4υ^4+\cdots. \tag{4.2}$$
Using (1.1), we obtain
$$\frac{2υf′(υ)}{f(υ)−f(−υ)}=1+2d_2υ+2d_3υ^2+(4d_4−2d_2d_3)υ^3+(4d_5−2d_3^2)υ^4+\cdots. \tag{4.3}$$
By some easy calculation and utilizing the series expansion of (4.2), we have
$$1+\frac{4}{5}ω(υ)+\frac{1}{5}(ω(υ))^4=1+\frac{4}{5}σ_1υ+\frac{4}{5}σ_2υ^2+\frac{4}{5}σ_3υ^3+\left(\frac{4}{5}σ_4+\frac{1}{5}σ_1^4\right)υ^4+\cdots. \tag{4.4}$$
Now, by comparing (4.3) and (4.4), we obtain
$$d_2=\frac{2}{5}σ_1, \tag{4.5}$$
$$d_3=\frac{2}{5}σ_2, \tag{4.6}$$
$$d_4=\frac{1}{5}σ_3+\frac{2}{25}σ_1σ_2, \tag{4.7}$$
$$d_5=\frac{1}{5}σ_4+\frac{1}{20}σ_1^4+\frac{2}{25}σ_2^2. \tag{4.8}$$
Substituting (4.5)–(4.8) in (1.3)–(1.6), we obtain
$$B_2=−\frac{2}{5}σ_1, \tag{4.9}$$
$$B_3=\frac{8}{25}σ_1^2−\frac{2}{5}σ_2, \tag{4.10}$$
$$B_4=−\frac{8}{25}σ_1^3−\frac{1}{5}σ_3+\frac{18}{25}σ_1σ_2, \tag{4.11}$$
$$B_5=\frac{771}{2500}σ_1^4−\frac{144}{125}σ_1^2σ_2+\frac{12}{25}σ_1σ_3+\frac{2}{5}σ_2^2−\frac{1}{5}σ_4. \tag{4.12}$$
Using (2.2) in (4.9), we achieve
$$|B_2|≤\frac{2}{5}.$$
To prove the second inequality, we can write (4.10) as
$$|B_3|=\frac{2}{5}\left|σ_2+\left(−\frac{4}{5}\right)σ_1^2\right|.$$
Applying (2.3) in the above equation, we achieve
$$|B_3|≤\frac{2}{5}.$$
From (4.11), we deduce that
$$|B_4|=\frac{1}{5}\left|σ_3+\left(−\frac{18}{5}\right)σ_1σ_2+\frac{8}{5}σ_1^3\right|.$$
Comparing it with Lemma 2.1, we note that
$$ϱ=−\frac{18}{5}\quad\text{and}\quad ς=\frac{8}{5}.$$
It is clear that 2≤|ϱ|≤4 with
$$\frac{2|ϱ|(|ϱ|+1)}{|ϱ|^2+2|ϱ|+4}=\frac{45}{151}≤ς\quad\text{and}\quad ς≤\frac{1}{12}(ϱ^2+8)=\frac{131}{75}.$$
All the conditions of Lemma 2.1 are satisfied. Therefore
$$|B_4|≤\frac{1}{5}.$$
The required proof is thus completed.
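The attainment of each bound can be checked directly. Below is a small plain-Python sketch, under the assumption that the extremal function in (4.1) corresponds to the choice ω(υ)=υ^m, so that only σ_m=1 is nonzero:

```python
# Sketch: push σ_m = 1 through (4.5)-(4.7) and (1.3)-(1.5) and read off |B2|, |B3|, |B4|.
from fractions import Fraction as F

def inverse_coeffs(s1, s2, s3):
    d2 = F(2, 5)*s1
    d3 = F(2, 5)*s2
    d4 = F(1, 5)*s3 + F(2, 25)*s1*s2
    B2 = -d2
    B3 = 2*d2**2 - d3
    B4 = 5*d2*d3 - 5*d2**3 - d4
    return B2, B3, B4

print(inverse_coeffs(1, 0, 0))   # m=1: B2 = -2/5
print(inverse_coeffs(0, 1, 0))   # m=2: B3 = -2/5
print(inverse_coeffs(0, 0, 1))   # m=3: B4 = -1/5
```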
Now, we compute the Fekete–Szegö functional bound for the inverse function of f∈S∗3l,s.
Theorem 4.2. If f−1 is the inverse of the function f∈S∗3l,s with series expansion (1.2), then
$$|B_3−τB_2^2|≤\max\left\{\frac{2}{5},\left|\frac{4τ−8}{25}\right|\right\},\qquad τ∈C.$$
This functional bound is sharp.
Proof. Combining (4.9) and (4.10), we obtain
$$|B_3−τB_2^2|=\left|−\frac{2}{5}σ_2−\frac{4τ}{25}σ_1^2+\frac{8}{25}σ_1^2\right|=\frac{2}{5}\left|σ_2+\left(\frac{2τ−4}{5}\right)σ_1^2\right|.$$
Application of (2.3) leads us to
$$|B_3−τB_2^2|≤\max\left\{\frac{2}{5},\left|\frac{4τ−8}{25}\right|\right\}.$$
The bound of the above functional is best possible, and it can easily be checked by (1.3), (1.4), and (4.1) with m=2.
By replacing τ=1 in Theorem 4.2, we arrive at the below result.
Corollary 4.3. If the inverse of the function f∈S∗3l,s is f−1 with series expansion (1.2), then
$$|B_3−B_2^2|≤\frac{2}{5}.$$
This estimate is sharp, and equality will be obtained by using (1.3), (1.4), and (4.1) with m=2.
Next, we investigate the Zalcman functional upper bound for f−1∈S∗3l,s.
Theorem 4.4. If f∈S∗3l,s and its inverse function f−1 have the form (1.2), then
$$|B_2B_3−B_4|≤\frac{1}{5}.$$
The above estimate is sharp.
Proof. Making use of (4.9)–(4.11), we achieve
$$|B_2B_3−B_4|=\frac{1}{5}\left|σ_3+\left(−\frac{14}{5}\right)σ_1σ_2+\frac{24}{25}σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=−\frac{14}{5}\quad\text{and}\quad ς=\frac{24}{25}.$$
It is clear that 2≤|ϱ|≤4 with
$$\frac{2|ϱ|(|ϱ|+1)}{|ϱ|^2+2|ϱ|+4}=\frac{35}{109}≤ς\quad\text{and}\quad ς≤\frac{1}{12}(ϱ^2+8)=\frac{33}{25}.$$
Thus, all the conditions of Lemma 2.1 are satisfied. Hence
$$|B_2B_3−B_4|≤\frac{1}{5}.$$
The required estimate is best possible and will easily be obtained by using (1.3)–(1.5), and (4.1) with m=3.
Further, we intend to compute the Krushkal functional bound for the family S∗3l,s.
Theorem 4.5. If f∈S∗3l,s and its inverse function f−1 have the form (1.2), then
$$|B_4−B_2^3|≤\frac{1}{5}.$$
This estimate is sharp.
Proof. Combining (4.9) and (4.11), we obtain
$$|B_4−B_2^3|=\frac{1}{5}\left|σ_3+\left(−\frac{18}{5}\right)σ_1σ_2+\left(\frac{32}{25}\right)σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=−\frac{18}{5}\quad\text{and}\quad ς=\frac{32}{25}.$$
It is clear that 2≤|ϱ|≤4 with
$$\frac{2|ϱ|(|ϱ|+1)}{|ϱ|^2+2|ϱ|+4}=\frac{45}{151}≤ς\quad\text{and}\quad ς≤\frac{1}{12}(ϱ^2+8)=\frac{131}{75}.$$
Thus, all the conditions of Lemma 2.1 are satisfied. Hence
$$|B_4−B_2^3|≤\frac{1}{5}.$$
This estimate is best possible and will be confirmed by using (1.3), (1.5), and (4.1) with m=3.
In the upcoming result, we will investigate the estimate of ˆH2,2(f−1) for the family S∗3l,s.
Theorem 4.6. Let the inverse function of f∈S∗3l,s have the series expansion (1.2). Then
$$|\hat{H}_{2,2}(f^{−1})|≤\frac{4}{25}.$$
This inequality is sharp, and equality will easily be achieved by using (1.3)–(1.5) and (4.1) with m=2.
Proof. The determinant ˆH2,2(f−1) can be reconfigured as follows:
$$\hat{H}_{2,2}(f^{−1})=B_2B_4−B_3^2=−d_3^2+d_2d_4−d_2^2d_3+d_2^4.$$
Substituting (4.9)–(4.11), we achieve
$$\begin{aligned}|\hat{H}_{2,2}(f^{−1})|&=\frac{4}{25}\left|−\frac{4}{25}σ_1^4+\frac{1}{5}σ_1^2σ_2−\frac{1}{2}σ_1σ_3+σ_2^2\right|\\&=\frac{4}{25}\left|\frac{1}{2}(σ_2^2−σ_1σ_3)+\frac{1}{2}\left(−\frac{8}{25}σ_1^4+\frac{2}{5}σ_1^2σ_2+σ_2^2\right)\right|\\&≤\frac{4}{50}|σ_2^2−σ_1σ_3|+\frac{4}{50}\left|−\frac{8}{25}σ_1^4+\frac{2}{5}σ_1^2σ_2+σ_2^2\right|=\frac{4}{50}Y_1+\frac{4}{50}Y_2,\end{aligned}$$
where
$$Y_1=|σ_2^2−σ_1σ_3|$$
and
$$Y_2=\left|−\frac{8}{25}σ_1^4+\frac{2}{5}σ_1^2σ_2+σ_2^2\right|.$$
Utilizing Lemma 2.4, we acquire Y_1≤1. Applying (2.4) along with the triangle inequality for Y_2, we have
$$|Y_2|≤(1−|σ_1|^2)^2+\frac{8}{25}|σ_1|^4+\frac{2}{5}(1−|σ_1|^2)|σ_1|^2.$$
By setting |σ_1|=ϰ with ϰ∈[0,1], we obtain
$$|Y_2|≤\frac{23}{25}ϰ^4−\frac{8}{5}ϰ^2+1=N(ϰ).$$
Since \frac{23}{25}ϰ^4≤\frac{8}{5}ϰ^2 on [0,1], the function N(ϰ) satisfies N(ϰ)≤1, with its maximum attained at ϰ=0; that is,
$$|Y_2|≤1.$$
Therefore
$$|\hat{H}_{2,2}(f^{−1})|≤\frac{4}{50}Y_1+\frac{4}{50}Y_2≤\frac{4}{25}$$
and so the required proof is accomplished.
Theorem 4.7. Let f−1 be the inverse of f∈S∗3l,s with series expansion (1.2). Then
|ˆH3,1(f−1)|<0.11600. |
Proof. The determinant |ˆH3,1(f−1)| is described as follows:
$$|\hat{H}_{3,1}(f^{−1})|=|2B_2B_3B_4−B_4^2−B_2^2B_5−B_3^3+B_3B_5|.$$
From (4.9)–(4.12), we easily write
$$|\hat{H}_{3,1}(f^{−1})|=\frac{1}{25}\left|−σ_3^2+\frac{4}{5}σ_2σ_1σ_3−\frac{61}{625}σ_1^6−\frac{12}{5}σ_2^3−\frac{67}{250}σ_1^4σ_2+\frac{52}{25}σ_1^2σ_2^2−\frac{4}{5}σ_1^2σ_4+2σ_2σ_4\right|.$$
The below inequality follows easily by using Lemma 2.1 with ϱ=−\frac{4}{5} and ς=0:
$$\left|σ_3\left[σ_3+\left(−\frac{4}{5}\right)σ_1σ_2+(0)σ_1^3\right]\right|≤|σ_3|,$$
and also by virtue of Lemma 2.3, we have
$$|σ_3|≤1−\frac{|σ_2|^2}{1+|σ_1|}−|σ_1|^2≤1−\frac{|σ_2|^2}{2}−|σ_1|^2.$$
Applying it and also using |σ_4|≤1−|σ_2|^2−|σ_1|^2, we achieve
$$|\hat{H}_{3,1}(f^{−1})|≤\frac{1}{25}E(|σ_1|,|σ_2|),$$
where
$$E(σ,t)=1−σ^2−\frac{1}{2}t^2+\frac{61}{625}σ^6+\frac{12}{5}t^3+\frac{67}{250}σ^4t+\frac{52}{25}σ^2t^2+\frac{4}{5}σ^2(1−σ^2−t^2)+2t(1−σ^2−t^2),\qquad σ=|σ_1|,\ t=|σ_2|.$$
But E is a decreasing function of the variable σ; consequently,
$$E(σ,t)≤E(0,t)=1−\frac{1}{2}t^2+\frac{2}{5}t^3+2t.$$
The function E(0,t) is increasing on [0,1], so it attains its maximum at t=1 and E(0,t)≤\frac{29}{10}, which completes the proof.
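A quick numeric confirmation of this last step (numpy assumed): E(0,t) is increasing on [0,1], so its maximum is E(0,1)=29/10.

```python
# Sketch: check monotonicity and the endpoint value of E(0,t) = 1 - t^2/2 + (2/5)t^3 + 2t.
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
E0 = 1 - 0.5*t**2 + 0.4*t**3 + 2*t
print(bool(np.all(np.diff(E0) > 0)), E0[-1])    # True, 2.9 (up to floating point)
```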
Conjecture 4.8. If the inverse of f∈S∗3l,s is of the form (1.2), then
|ˆH3,1(f−1)|≤125. |
This result is best possible.
Next, we begin this section by determining the estimates of the first four initial coefficients for functions in the family f∈SKexp.
Theorem 5.1. If the function f∈SKexp has the series form (1.1), then
$$|d_2|≤\frac{1}{4},\qquad |d_3|≤\frac{1}{6},\qquad |d_4|≤\frac{1}{16},\qquad |d_5|≤\frac{1}{20}.$$
The equality is attained by the following extremal functions:
$$\frac{2(υf′(υ))′}{(f(υ)−f(−υ))′}=1+υ^m+\frac{1}{2}υ^{2m}+\cdots,\qquad\text{for } m=1,2,3,4. \tag{5.1}$$
Proof. Let f∈SKexp. Then there exists a Schwarz function w such that
$$\frac{2(υf′(υ))′}{(f(υ)−f(−υ))′}=e^{ω(υ)},\qquad υ∈D. \tag{5.2}$$
Also, assuming that
$$ω(υ)=σ_1υ+σ_2υ^2+σ_3υ^3+σ_4υ^4+\cdots. \tag{5.3}$$
Using (1.1), we obtain
$$\frac{2(υf′(υ))′}{(f(υ)−f(−υ))′}=1+4d_2υ+6d_3υ^2+(−12d_2d_3+16d_4)υ^3+(−18d_3^2+20d_5)υ^4+\cdots. \tag{5.4}$$
By some easy calculation and utilizing the series expansion of (5.3), we achieve
$$e^{ω(υ)}=1+σ_1υ+\left(σ_2+\frac{1}{2}σ_1^2\right)υ^2+\left(σ_3+σ_1σ_2+\frac{1}{6}σ_1^3\right)υ^3+\left(σ_4+σ_1σ_3+\frac{1}{2}σ_2^2+\frac{1}{24}σ_1^4+\frac{1}{2}σ_1^2σ_2\right)υ^4+\cdots. \tag{5.5}$$
Now by comparing (5.4) and (5.5), we have
$$d_2=\frac{1}{4}σ_1, \tag{5.6}$$
$$d_3=\frac{1}{6}σ_2+\frac{1}{12}σ_1^2, \tag{5.7}$$
$$d_4=\frac{1}{16}σ_3+\frac{5}{192}σ_1^3+\frac{3}{32}σ_1σ_2, \tag{5.8}$$
$$d_5=\frac{1}{20}σ_2^2+\frac{1}{20}σ_4+\frac{1}{20}σ_1^2σ_2+\frac{1}{20}σ_1σ_3+\frac{1}{120}σ_1^4. \tag{5.9}$$
Using (2.2) in (5.6), we obtain
$$|d_2|≤\frac{1}{4}.$$
Rearranging (5.7), we obtain
$$|d_3|=\frac{1}{6}\left|σ_2+\frac{1}{2}σ_1^2\right|.$$
Applying (2.3) in the above equation, we achieve
$$|d_3|≤\frac{1}{6}.$$
For d4, we can write (5.8), as
$$|d_4|=\frac{1}{16}\left|σ_3+\frac{3}{2}σ_1σ_2+\frac{5}{12}σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=\frac{3}{2}\quad\text{and}\quad ς=\frac{5}{12}.$$
It is clear that \frac{1}{2}≤|ϱ|≤2 with
$$\frac{4}{27}(1+|ϱ|)^3−(1+|ϱ|)=−\frac{5}{27}≤ς\quad\text{and}\quad ς≤1.$$
Hence the conditions of Lemma 2.1 are satisfied. Therefore
$$|d_4|≤\frac{1}{16}.$$
From (5.9), we deduce that
$$|d_5|=\frac{1}{20}\left|\frac{1}{2}\left(2σ_1σ_3+σ_4+σ_2^2+σ_1^4+3σ_1^2σ_2\right)+\frac{1}{2}\left(σ_4+σ_2^2−\frac{2}{3}σ_1^4−σ_1^2σ_2\right)\right|. \tag{5.10}$$
The first part is estimated by \frac{1}{2} by utilizing (2.7) with Λ=1. Lemma 2.3 is used to estimate the second part as follows:
$$\frac{1}{2}\left|σ_4+σ_2^2−\frac{2}{3}σ_1^4−σ_1^2σ_2\right|≤\frac{1}{2}\left[1−|σ_1|^2−|σ_2|^2+|σ_2|^2+\frac{2}{3}|σ_1|^4+|σ_1|^2(1−|σ_1|^2)\right]=\frac{1}{2}−\frac{|σ_1|^4}{6}≤\frac{1}{2}.$$
By adding the bounds of the segments of (5.10), we achieve
$$|d_5|≤\frac{1}{20}.$$
Thus, the proof is completed.
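The identities (5.6)–(5.9) and the sharpness claim of Theorem 5.1 can be cross-checked symbolically; a sketch assuming sympy is available:

```python
# Sketch: verify (5.6)-(5.9) by matching (5.4) with (5.5), then check that the
# extremal choice ω(υ) = υ^m attains each bound of Theorem 5.1.
import sympy as sp

v, s1, s2, s3, s4 = sp.symbols('v s1 s2 s3 s4')
om = s1*v + s2*v**2 + s3*v**3 + s4*v**4
rhs = sp.series(sp.exp(om), v, 0, 5).removeO()

# left side (5.4): 1 + 4d2 υ + 6d3 υ^2 + (16d4 - 12d2d3) υ^3 + (20d5 - 18d3^2) υ^4
d2 = rhs.coeff(v, 1)/4
d3 = rhs.coeff(v, 2)/6
d4 = sp.expand((rhs.coeff(v, 3) + 12*d2*d3)/16)
d5 = sp.expand((rhs.coeff(v, 4) + 18*d3**2)/20)
print(d2, d3, d4, d5, sep='\n')          # reproduces (5.6)-(5.9)

bounds = [sp.Rational(1, 4), sp.Rational(1, 6), sp.Rational(1, 16), sp.Rational(1, 20)]
for m, (dk, bound) in enumerate(zip([d2, d3, d4, d5], bounds), start=1):
    subs = {s1: 0, s2: 0, s3: 0, s4: 0}
    subs[[s1, s2, s3, s4][m - 1]] = 1    # ω(υ) = υ^m
    print(m, sp.Abs(dk.subs(subs)) == bound)   # True for m = 1, 2, 3, 4
```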
Lastly, we will investigate the estimates of first three initial inverse coefficients for functions in the family SKexp.
Theorem 6.1. If the inverse function of f∈SKexp is of the form (1.2), then
$$|B_2|≤\frac{1}{4},\qquad |B_3|≤\frac{1}{6},\qquad |B_4|≤\frac{1}{16}.$$
Equalities hold in these bounds and will be confirmed by using (1.3)–(1.5) and (5.1) with m=1,2,3.
Proof. Applying (5.6)–(5.9) in (1.3)–(1.6), we achieve
$$B_2=−\frac{1}{4}σ_1, \tag{6.1}$$
$$B_3=\frac{1}{24}σ_1^2−\frac{1}{6}σ_2, \tag{6.2}$$
$$B_4=\frac{11}{96}σ_1σ_2−\frac{1}{16}σ_3, \tag{6.3}$$
$$B_5=−\frac{1}{320}σ_1^4−\frac{43}{960}σ_1^2σ_2+\frac{7}{160}σ_1σ_3+\frac{1}{30}σ_2^2−\frac{1}{20}σ_4. \tag{6.4}$$
Using (2.2) in (6.1), we obtain
$$|B_2|≤\frac{1}{4}.$$
For B_3, we can write (6.2) as
$$|B_3|=\frac{1}{6}\left|σ_2+\left(−\frac{1}{4}\right)σ_1^2\right|.$$
Applying (2.3) in the above equation, we achieve
$$|B_3|≤\frac{1}{6}.$$
For B_4, we consider
$$|B_4|=\frac{1}{16}\left|σ_3+\left(−\frac{11}{6}\right)σ_1σ_2+(0)σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=−\frac{11}{6}\quad\text{and}\quad ς=0.$$
It is clear that \frac{1}{2}≤|ϱ|≤2 with
$$−\frac{2}{3}(|ϱ|+1)=−\frac{17}{9}≤ς\quad\text{and}\quad ς≤\frac{4}{27}(1+|ϱ|)^3−(1+|ϱ|)=\frac{391}{729}.$$
This shows that all conditions of Lemma 2.1 are satisfied. Thus
$$|B_4|≤\frac{1}{16}.$$
Thus, the required proof is completed.
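Since the remaining results of this section all rest on (6.1)–(6.4), it is worth noting that these follow mechanically from (5.6)–(5.9) and (1.3)–(1.6); a verification sketch, assuming sympy is available:

```python
# Sketch: substitute (5.6)-(5.9) into (1.3)-(1.6) and compare with (6.1)-(6.4).
import sympy as sp

s1, s2, s3, s4 = sp.symbols('s1 s2 s3 s4')
d2 = s1/4
d3 = s2/6 + s1**2/12
d4 = s3/16 + 5*s1**3/192 + 3*s1*s2/32
d5 = s2**2/20 + s4/20 + s1**2*s2/20 + s1*s3/20 + s1**4/120

B3 = sp.expand(2*d2**2 - d3)
B4 = sp.expand(5*d2*d3 - 5*d2**3 - d4)
B5 = sp.expand(14*d2**4 + 3*d3**2 - 21*d2**2*d3 + 6*d2*d4 - d5)

assert sp.simplify(B3 - (s1**2/24 - s2/6)) == 0
assert sp.simplify(B4 - (11*s1*s2/96 - s3/16)) == 0
assert sp.simplify(B5 - (-s1**4/320 - 43*s1**2*s2/960 + 7*s1*s3/160 + s2**2/30 - s4/20)) == 0
print("(6.2)-(6.4) confirmed")
```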
Theorem 6.2. If f∈SKexp has inverse function f−1 with a series form (1.2), then
$$|B_3−τB_2^2|≤\frac{1}{6}\max\left\{1,\left|\frac{3τ−2}{8}\right|\right\},\qquad τ∈C.$$
This inequality is sharp.
Proof. Employing (6.1) and (6.2), we have
$$|B_3−τB_2^2|=\frac{1}{6}\left|σ_2−\frac{1}{4}σ_1^2+\frac{3τ}{8}σ_1^2\right|=\frac{1}{6}\left|σ_2+\left(\frac{3τ−2}{8}\right)σ_1^2\right|.$$
Implementation of Lemma 2.2 along with the triangle inequality leads us to
$$|B_3−τB_2^2|≤\frac{1}{6}\max\left\{1,\left|\frac{3τ−2}{8}\right|\right\}.$$
The functional bound is sharp and will be obtained from (1.3), (1.4), and (5.1) with m=2.
By replacing τ=1 in Theorem 6.2, we arrive at the below result.
Corollary 6.3. If f∈SKexp has the inverse function with a series form (1.2), then
$$|B_3−B_2^2|≤\frac{1}{6}.$$
The functional bound is sharp. Equality will be achieved by utilizing (1.3), (1.4), and (5.1) with m=2.
Theorem 6.4. If the inverse of the function f∈SKexp is expressed in (1.2), then
$$|B_4−B_2B_3|≤\frac{1}{16}.$$
This outcome is sharp, and it will be confirmed easily by using (1.3)–(1.5) and (5.1) with m=3.
Proof. From (6.1)–(6.3), we have
$$|B_4−B_2B_3|=\frac{1}{16}\left|σ_3−\frac{7}{6}σ_1σ_2−\frac{1}{6}σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=−\frac{7}{6}\quad\text{and}\quad ς=−\frac{1}{6}.$$
It is clear that \frac{1}{2}≤|ϱ|≤2 with
$$\frac{4}{27}(1+|ϱ|)^3−(1+|ϱ|)=−\frac{481}{729}≤ς\quad\text{and}\quad ς≤1.$$
Thus, all the conditions of Lemma 2.1 are satisfied. Hence
$$|B_4−B_2B_3|≤\frac{1}{16}$$
and hence the proof is completed.
Theorem 6.5. If the inverse function of f∈SKexp is provided in (1.2), then
$$|B_4−B_2^3|≤\frac{1}{16}.$$
Equality will be held by using (1.3), (1.5), and (5.1) with m=3.
Proof. Combining (6.1) and (6.3), we have
$$|B_4−B_2^3|=\frac{1}{16}\left|σ_3+\left(−\frac{11}{6}\right)σ_1σ_2+\left(−\frac{1}{4}\right)σ_1^3\right|.$$
From Lemma 2.1, let
$$ϱ=−\frac{11}{6}\quad\text{and}\quad ς=−\frac{1}{4}.$$
It is clear that \frac{1}{2}≤|ϱ|≤2 with
$$−\frac{2}{3}(|ϱ|+1)=−\frac{17}{9}≤ς\quad\text{and}\quad ς≤\frac{4}{27}(1+|ϱ|)^3−(1+|ϱ|)=\frac{391}{729}.$$
Thus, all the conditions of Lemma 2.1 are satisfied. Hence
$$|B_4−B_2^3|≤\frac{1}{16}.$$
Theorem 6.6. Let f−1 be the inverse of f∈SKexp as defined in (1.2). Then
$$|\hat{H}_{2,2}(f^{−1})|=|B_2B_4−B_3^2|≤\frac{1}{36}.$$
Equality will be achieved by using (1.3)–(1.5) and (5.1) with m=2.
Proof. From (6.1)–(6.3), we have
$$\begin{aligned}|\hat{H}_{2,2}(f^{−1})|&=\frac{1}{36}\left|\frac{1}{16}σ_1^4+\frac{17}{32}σ_1^2σ_2−\frac{9}{16}σ_1σ_3+σ_2^2\right|\\&=\frac{1}{36}\left|\frac{1}{2}(σ_2^2−σ_1σ_3)+\frac{1}{2}\left(\frac{1}{8}σ_1^4−\frac{1}{8}σ_1σ_3+\frac{17}{16}σ_1^2σ_2+σ_2^2\right)\right|\\&≤\frac{1}{72}|σ_2^2−σ_1σ_3|+\frac{1}{72}\left|\frac{1}{8}σ_1^4−\frac{1}{8}σ_1σ_3+\frac{17}{16}σ_1^2σ_2+σ_2^2\right|=\frac{1}{72}R_1+\frac{1}{72}R_2,\end{aligned}$$
where
$$R_1=|σ_2^2−σ_1σ_3|$$
and
$$R_2=\left|\frac{1}{8}σ_1^4−\frac{1}{8}σ_1σ_3+\frac{17}{16}σ_1^2σ_2+σ_2^2\right|.$$
Utilizing Lemma 2.4, we obtain R_1≤1. Also, by virtue of Lemma 2.3 for R_2, we achieve
$$|R_2|≤\frac{17}{16}|σ_1|^2|σ_2|+\frac{|σ_1|^4}{8}+|σ_2|^2+\frac{|σ_1|}{8}\left(1−|σ_1|^2−\frac{|σ_2|^2}{|σ_1|+1}\right)≤\frac{|σ_1|^4}{8}+\frac{17|σ_2||σ_1|^2}{16}+\left(1−\frac{|σ_1|}{8(|σ_1|+1)}\right)|σ_2|^2−\frac{|σ_1|^3}{8}+\frac{|σ_1|}{8}. \tag{6.5}$$
Since 1−\frac{|σ_1|}{8(|σ_1|+1)}>0, we can substitute (2.4) in (6.5) and easily obtain
$$|R_2|≤−\frac{|σ_1|^3}{8}+\frac{17}{16}|σ_1|^2(1−|σ_1|^2)+\left(1−\frac{|σ_1|}{8(|σ_1|+1)}\right)(1−|σ_1|^2)^2+\frac{|σ_1|}{8}+\frac{|σ_1|^4}{8}.$$
The basic computation of maximum and minimum leads us to
|R2|≤1. |
Hence
$$|\hat{H}_{2,2}(f^{−1})|≤\frac{1}{72}R_1+\frac{1}{72}R_2≤\frac{1}{36}.$$
The proof is thus accomplished.
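A sharpness spot-check for Theorem 6.6, under the assumption that the extremal case (5.1) with m=2 corresponds to σ_2=1 and σ_1=σ_3=0 in (6.1)–(6.3):

```python
# Sketch: with σ2 = 1 and σ1 = σ3 = 0, |B2*B4 - B3^2| equals 1/36.
from fractions import Fraction as F

s1, s2, s3 = F(0), F(1), F(0)
B2 = -s1/4
B3 = s1**2/24 - s2/6
B4 = 11*s1*s2/96 - s3/16
print(abs(B2*B4 - B3**2))    # 1/36
```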
Theorem 6.7. Let f−1 be the inverse function of f∈SKexp and is expressed in (1.2). Then
|ˆH3,1(f−1)|<0.006671. |
Proof. The determinant |ˆH3,1(f−1)| can be expressed as follows:
$$|\hat{H}_{3,1}(f^{−1})|=|2B_2B_3B_4−B_4^2−B_2^2B_5−B_3^3+B_3B_5|.$$
From (6.1)–(6.4), we easily write
$$|\hat{H}_{3,1}(f^{−1})|=\frac{1}{256}\left|−σ_3^2+\left(\frac{7}{15}σ_2+\frac{1}{10}σ_1^2\right)σ_1σ_3−\frac{1}{540}σ_1^6−\frac{1}{60}σ_1^4σ_2−\frac{13}{180}σ_1^2σ_2^2+\frac{4}{15}σ_1^2σ_4−\frac{32}{135}σ_2^3+\frac{32}{15}σ_2σ_4\right|.$$
At the beginning, it should be noted that
$$\left|σ_3\left[σ_3+\left(−\frac{7}{15}\right)σ_1σ_2+\left(−\frac{1}{10}\right)σ_1^3\right]\right|≤|σ_3|,$$
where we have used Lemma 2.1 with ϱ=−\frac{7}{15} and ς=−\frac{1}{10}. Also, by using Lemma 2.3, we have
$$|σ_3|≤1−\frac{|σ_2|^2}{1+|σ_1|}−|σ_1|^2≤1−\frac{|σ_2|^2}{2}−|σ_1|^2.$$
Applying it and also using |σ_4|≤1−|σ_2|^2−|σ_1|^2, we achieve
$$|\hat{H}_{3,1}(f^{−1})|≤\frac{1}{256}E(|σ_1|,|σ_2|),$$
where
$$E(σ,t)=1−σ^2−\frac{1}{2}t^2+\frac{1}{540}σ^6+\frac{1}{60}σ^4t+\frac{13}{180}σ^2t^2+\frac{4}{15}σ^2(1−σ^2−t^2)+\frac{32}{135}t^3+\frac{32}{15}t(1−σ^2−t^2),\qquad σ=|σ_1|,\ t=|σ_2|.$$
But E is a decreasing function of the variable σ; consequently,
$$E(σ,t)≤E(0,t)=1−\frac{1}{2}t^2−\frac{256}{135}t^3+\frac{32}{15}t.$$
The function E(0,t) attains its maximum on [0,1] at t=−\frac{45}{512}+\frac{1}{512}\sqrt{100329}, so E(0,t)≤1.7079, which completes the proof.
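As before, the final maximization can be checked numerically; a brief sketch, assuming numpy:

```python
# Sketch: maximize E(0,t) = 1 - t^2/2 - (256/135)t^3 + (32/15)t on [0,1].
import numpy as np

t = np.linspace(0.0, 1.0, 200001)
E0 = 1 - 0.5*t**2 - (256/135)*t**3 + (32/15)*t
i = int(np.argmax(E0))
print(t[i], E0[i])                            # ≈ 0.53076, ≈ 1.70790
print(-45/512 + np.sqrt(100329)/512)          # stationary point ≈ 0.53076
```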
Conjecture 6.8. If the inverse function of f∈SKexp is of the form (1.2), then the sharp bound is
$$|\hat{H}_{3,1}(f^{−1})|≤\frac{1}{256}.$$
Equality will be achieved by using (1.3)–(1.5) and (5.1) with m=3.
The study of Hankel determinant bounds is of great importance in the research community due to its vast applications in mathematical science. In the current article, we have considered Hankel determinants whose entries are the coefficients of inverse functions for various subclasses of analytic functions; this generalizes the classical definition of the Hankel determinant and provides further insight into inverse functions. The main focus of this article is coefficient-related problems, together with Hankel determinants, for the inverses of functions belonging to families of symmetric starlike and symmetric convex functions associated with three different image domains. In particular, using the concept of a Schwarz function, we have obtained sharp estimates of some initial inverse coefficients and of the Zalcman, Fekete–Szegö, and Krushkal functionals, along with estimates of the second- and third-order Hankel determinants containing inverse coefficients for functions in the mentioned families. We have also stated conjectures that are strongly supported by the obtained results. Our research introduces a new framework for analyzing the Hankel determinant, emphasizing the importance of inverse coefficients of analytic functions, and may draw further attention to coefficient-related problems. This study may be extended to meromorphic analytic functions, and the same methodology can be used to examine higher-order Hankel determinants, as studied in [46,47,48].
Huo Tang: Funding acquisition, Methodology, Project administration; Muhammad Abbas: Investigation, Writing-original draft; Reem K. Alhefthi: Formal analysis, Supervision, Writing-review and editing; Muhammad Arif: Formal analysis, Supervision, Writing-review and editing. All authors read and approved the final manuscript.
The first author (Huo Tang) was partly supported by the Natural Science Foundation of China under Grant 11561001, the Program for Young Talents of Science and Technology in Universities of Inner Mongolia Autonomous Region under Grant NJYT18-A14, the Natural Science Foundation of Inner Mongolia of China under Grants 2022MS01004 and 2020MS01011, and the Higher School Foundation of Inner Mongolia of China under Grant NJZY20200, the Program for Key Laboratory Construction of Chifeng University (No. CFXYZD202004), the Research and Innovation Team of Complex Analysis and Nonlinear Dynamic Systems of Chifeng University (No. cfxykycxtd202005) and the Youth Science Foundation of Chifeng University (No. cfxyqn202133).
The third author (Reem K. Alhefthi) would like to extend their sincere appreciation to the Researchers Supporting Project (number RSPD2024R802), King Saud University, Riyadh, Saudi Arabia.
The authors declare that they have no conflicts of interest.