
Video summarization, the task of condensing an original video into a concise summary, captures the most salient video information. Since the 1990s, video summarization technology has gained considerable attention both domestically and internationally. It makes a significant contribution to quickly understanding video content and is an efficient tool for fast browsing and retrieval of videos [1].
Key-frame extraction is an indispensable component of static video summarization: it characterizes the principal video contents by extracting representative frames, providing a convenient way to grasp video information quickly and comprehensively. The prevailing goal is to extract the video summary accurately and automatically. Key frame extraction methods can be divided into three categories: shot based, content based and cluster based [2]. Currently, several shot-based techniques have been developed in the areas of computer vision and image processing [3]. Huang C. extracts representative frames from each shot by computing inter-frame image differences in saliency and edge-map features [4]. Mehmood I. analyzes the differences between frames in a shot by modeling auditory and perceptual attention features [5]. Song G. H. computes the color difference within one shot by employing the average-histogram method [6]. Shot-based methods commonly begin by segmenting the original video into several shots; however, the shot-segmentation process is computationally expensive. Content-based methods avoid this problem. Rachida H. proposes MSKVS, a content-based method that measures inter-frame distance by temporal and visual features; MSKVS delivers superior performance over other content-based methods [7]. Gianluigi C. conducts experiments on six news and sports competition videos with his content-based method, and the results demonstrate that it can effectively extract key frames [8]. Generally, these content-based methods analyze video content by extracting color, texture or motion features. A limiting factor of content-based methods is the computational cost incurred in extracting frame-image features [9]. This limitation encourages the development of cluster-based methods. Cluster-based techniques work by grouping similar frames together and extracting one representative frame from each class. They avoid the shot-segmentation errors of shot-based methods, decrease the inter-frame difference-analysis frequency of content-based methods, and are well suited to key frame extraction in related fields [10,11,12].
These limitations have encouraged the development of cluster-based key-frame extraction algorithms [13]. The prevailing steps are to prepare the cluster data, aggregate video frames into multiple clusters, and extract representative frames to compose a video summarization [14]. Cluster-based key-frame extraction not only avoids shot-segmentation errors and complexity, but also decreases the inter-frame difference-analysis frequency. However, cluster-based methods still face several problems: a single image feature cannot fully represent a frame image; manually determining the number of clusters and the initial cluster centers leads to a low degree of automation; and the extracted key frames may fail to represent the original video. Therefore, this paper aims to improve the accuracy and automation of key-frame extraction. We propose a novel cluster-based key frame extraction approach whose benefits are as follows:
1) A video content analysis method is proposed to improve the representativeness of the video feature data by using three visual features: color, texture and information complexity.
2) We develop a threshold optimization method that avoids the manual selection of the clustering threshold and thereby improves the automation of cluster-based key frame extraction.
3) We fuse the frame density, inter-cluster distance and intra-cluster distance to filter key frame candidates, and employ the maximum weight factor to further refine them. This improves frame representativeness and overall fidelity.
The rest of this paper is organized as follows. Section 2 describes the details of cluster-based methods. Section 3 presents the implementation of the feature extraction method. Section 4 provides a detailed description of the proposed cluster-based key frame extraction method. Section 5 explores the performance of the proposed approach. Finally, the major work is discussed and wrapped up in Section 6.
Cluster-based methods commonly transform frame images into data points in a feature space and cluster these points to extract key frames. Like general clustering algorithms, they gather similar elements together and take the cluster centers as the representatives of the clusters. Recently, several cluster-based key frame extraction methods have been proposed in the literature [15,16].
The basic cluster-based methods can be categorized into two groups: automatic and semi-automatic [17]. In general, a semi-automatic cluster-based method requires manually determining the initial cluster centers and the number of clusters. Setting the number of clusters in advance may affect the key frame extraction results [14]; it is more reasonable to determine the number of clusters from the video content during clustering. Therefore, automatic key frame extraction technology is more practical. Among the automatic clustering schemes, Kuanar extracts color and texture features and proposes an automated method of video key frame extraction using dynamic Delaunay graph clustering [10]. In [11], a fused key frame extraction framework is proposed. This method generates video summaries by combining sparse selection with an agglomerative hierarchical clustering method based on mutual information. It extracts candidate key frames with an improved MIAHC algorithm, so that both fidelity and operational efficiency are improved. In [12], the authors propose a coarse-to-fine clustering key frame extraction. Traditional spectral clustering with simple histogram features first removes most of the redundant frames; then an image classification method based on SIFT-feature sparse coding performs fine clustering within each time period. In [19], Liu and his colleagues propose a key frame extraction method combining the k-means algorithm and hierarchical clustering. They obtain initial clusters with an improved hierarchical clustering algorithm and then use k-means to optimize them into the final clusters. In [20], the authors propose a novel cluster-based algorithm inspired by the high-density peak search clustering algorithm; it gathers similar frames into classes by integrating the important attributes of the video. In [21], the proposed cluster-based method combines image information entropy with a density clustering algorithm to extract key frames from gesture videos. In cluster-based key frame extraction, the way the clustering threshold is calculated has a great impact on the fidelity and compression rate of the key frames; the core idea behind such automatic cluster-based methods is to set a favorable threshold. In the literature, researchers usually obtain the threshold by defining a formula or by fixing a value. Kuanar S. K. computes the cluster threshold with the formula 2(1 - ε) [10]. Jeong D. J. selects 0.0001 as the cluster threshold [12]. Other researchers compute the cluster threshold with self-defined formulas [11,19]. For representative-frame extraction, some cluster-based techniques take the cluster centers or centroids as the representative frames of each class. In [10], the author selects the frames closest to the cluster centroids as the representative frames. In [19,20], the authors directly extract the cluster centers as the representative frames.
In general, these cluster-based methods may retain redundant frames because the clustering-threshold setting influences optimal key frame extraction. Moreover, they evaluate the representativeness of the frames in one cluster by a single image feature, which cannot fully characterize the frame content and complexity. More reasonably, an optimal threshold and feature fusion should be employed in cluster-based key frame extraction.
In this section, we provide a detailed description of the proposed OTMW method, which includes feature extraction and key frame extraction. First, the color, texture and information complexity features are computed to express the video content. Then, an optimization function is developed to compute the optimal clustering threshold. Next, the frame density, inter-cluster distance and intra-cluster distance are computed and fused as the clustering weight factor. Finally, a max-weight method is proposed to extract the cluster representative frames. The proposed approach is summarized in Figure 1.
In this section, we describe the proposed video content analysis method, which distinguishes frames by computing feature data [22]. We extract the color, texture and information complexity features to discriminate between different frame images. The video frame feature data are constructed as
$$\mathrm{FFV} = [C\ \ T\ \ E] \tag{1}$$
where C, T and E represent the color, texture and information complexity, respectively.
We take the color feature as one feature to characterize the differences among frame images. We compute the first, second and third color moments in the H, S and V channels to construct the color feature vectors of the frame images. The first color moment reflects the brightness difference and is calculated by
$$C_m = \frac{1}{w\times h}\sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q) \tag{2}$$
where $w$ and $h$ are the pixel width and height, $f_i(x_p,y_q)$ is the pixel value at position $(x_p,y_q)$, and $1\leq p\leq w$, $1\leq q\leq h$. The second color moment reflects the color distribution range and is computed by
$$C_v = \left(\frac{1}{w\times h}\sum_{p=1}^{w}\sum_{q=1}^{h}\left(f_i(x_p,y_q)-C_m\right)^2\right)^{\frac{1}{2}} \tag{3}$$
The third color moment represents the color distribution symmetry, which is computed by
$$C_s = \left(\frac{1}{w\times h}\sum_{p=1}^{w}\sum_{q=1}^{h}\left(f_i(x_p,y_q)-C_m\right)^3\right)^{\frac{1}{3}} \tag{4}$$
where $C_m$, $C_v$ and $C_s$ are the first-moment mean, the second-moment standard deviation and the third-moment skewness, respectively.
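To make the construction concrete, here is a minimal NumPy sketch of the three color moments, assuming the frame has already been converted to HSV (e.g., with OpenCV) and is supplied as an H×W×3 array; the function name and layout are ours, not the paper's.

```python
import numpy as np

def color_moments(hsv):
    """First three color moments per H, S, V channel (cf. Eqs. (2)-(4))."""
    feats = []
    for c in range(3):
        ch = hsv[:, :, c].astype(np.float64)
        cm = ch.mean()                          # first moment, Eq. (2)
        cv = np.sqrt(((ch - cm) ** 2).mean())   # second moment, Eq. (3)
        cs = np.cbrt(((ch - cm) ** 3).mean())   # third moment, Eq. (4)
        feats.extend([cm, cv, cs])
    return np.array(feats)                      # 9-dimensional color feature C
```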
We take the texture feature as another feature to characterize the differences among frame images in terms of the structural organization of the image surface [23]. We compute the means of the angular second moment, contrast, correlation and homogeneity texture features in the 0, 45, 90 and 135 degree directions to construct the video frame texture feature vectors. The angular second moment characterizes the coarseness and gray-distribution uniformity of images, and can be calculated by
$$T_A = \frac{1}{w\times h}\sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q)^2 \tag{5}$$
The contrast characterizes the groove depth and clarity of images; it can be calculated by
$$T_{Con} = \frac{1}{w\times h}\sum_{p=1}^{w}\sum_{q=1}^{h}(p-q)^2 f_i(x_p,y_q) \tag{6}$$
The correlation characterizes the local gray-level similarity in the row or column direction; it can be calculated by
$$T_{Cor} = \frac{\displaystyle\sum_{p=1}^{w}\sum_{q=1}^{h}\left(x_p-\mu_x\right)\left(y_q-\mu_y\right) f_i(x_p,y_q)^2}{\displaystyle\sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q)\left(x_p-\mu_x\right)^2},\quad \mu_x = \sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q)\times p,\ \ \mu_y = \sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q)\times q \tag{7}$$
The homogeneity characterizes the local gray-level uniformity of images; it can be calculated by
$$T_H = \sum_{p=1}^{w}\sum_{q=1}^{h} f_i(x_p,y_q)\times\frac{1}{1+(p-q)^2} \tag{8}$$
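Eqs. (5)-(8) are written directly over pixel values; a common reading of these Haralick-style features is over a gray-level co-occurrence matrix. The sketch below follows that reading with scikit-image (version 0.19 or later), averaging the four directions as the text prescribes — treat the GLCM mapping as our assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray):
    """Mean ASM, contrast, correlation and homogeneity over the
    0, 45, 90 and 135 degree directions (cf. Eqs. (5)-(8)).
    gray: uint8 grayscale frame."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ['ASM', 'contrast', 'correlation', 'homogeneity']
    return np.array([graycoprops(glcm, p).mean() for p in props])  # 4-dim T
```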
We take the information complexity as the last feature to characterize the differences among frame images in terms of aggregated and spatial properties. Information entropy, proposed by Shannon, measures information complexity from a holistic perspective [24] and can characterize the aggregated and spatial features of an image. A larger image information entropy and a greater degree of internal non-uniformity commonly occur together at higher diversity levels. The information entropy $E$ of the $i$-th frame can be calculated by
$$E = -\frac{1}{w\times h}\sum_{f} C_f \times \log_2 C_f \tag{9}$$
where $C_f$ is the occurrence probability of gray level $f$ in the $i$-th frame image.
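A minimal sketch of the entropy feature, assuming an 8-bit grayscale frame; whether the paper's $\frac{1}{w\times h}$ factor multiplies the whole sum is ambiguous in the source, so the plain Shannon form is used here.

```python
import numpy as np

def information_entropy(gray):
    """Gray-level information entropy of one frame (cf. Eq. (9))."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()            # occurrence probability of each gray level
    p = p[p > 0]                     # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))   # scalar feature E
```

The color (9), texture (4) and entropy (1) components can then be concatenated into the 14-dimensional FFV of Eq. (1), e.g., `np.concatenate([C, T, [E]])`.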
In this section, we describe the proposed cluster based key frame extraction method, which develops a new optimization function to compute the optimal threshold.
We narrow the search interval of the optimal threshold by computing the function values at trial points. In cluster-based methods, the fidelity [8] and the ratio [7] are negatively and positively correlated with the threshold, respectively. Therefore, we infer that the quality of the key frames is optimal when fidelity and ratio are as close as possible. We introduce a new parameter FR to characterize this relationship and to obtain the optimal key frames. The distance between frame $x^{(i)}$ and frame $x^{(j)}$ is computed by
$$d_{ij} = \left\|x^{(i)}-x^{(j)}\right\|_2 = \sqrt{\sum_{u=1}^{n}\left|x^{(i)}_u-x^{(j)}_u\right|^2} \tag{10}$$
where $i, j \in \{1, 2, \ldots, m\}$.
The average distance is computed by
$$d_c = \frac{2}{m(m-2)}\sum_{i=1}^{m}\sum_{j=1}^{m} d_{ij} \tag{11}$$
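A small sketch of these distance statistics, assuming FFV is an m×14 array; note that SciPy's mean over unordered pairs corresponds to a $\frac{2}{m(m-1)}$ normalization rather than the paper's $\frac{2}{m(m-2)}$ factor, a negligible difference for large $m$. The function name is ours.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_stats(ffv):
    """Pairwise Euclidean distances d_ij (Eq. (10)), their average as a
    stand-in for d_c (Eq. (11)) and the standard deviation std_ij."""
    d = pdist(np.asarray(ffv), metric='euclidean')  # condensed d_ij, i < j
    return squareform(d), d.mean(), d.std()
```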
The threshold is defined as
$$t = d_c \pm \varepsilon \times std_{ij} \tag{12}$$
where $\varepsilon$ is a variable factor and $std_{ij}$ is the standard deviation of $d_{ij}$. We define the new parameter FR as
$$FR(t) = \frac{fidelity(t)}{ratio(t)} \tag{13}$$
The threshold optimization function is defined as
$$f(t) = \left|FR(t)-1\right| \tag{14}$$
Given $a$ and $b$ with $a < b$, where $a = d_c - 3\times std_{ij}$ and $b = d_c + 3\times std_{ij}$, we compute the trial point $c$ and evaluate $f(a)$, $f(b)$ and $f(c)$. We then update $c$ by comparing $f(a)$, $f(b)$ and $f(c)$. When $FR(c) < 1$, fidelity is less than ratio, so we move $c$ to increase fidelity and decrease ratio; when $FR(c) \geq 1$, fidelity is at least ratio, so we move $c$ to decrease fidelity and increase ratio. If $f(c) = f(a) = f(b)$ holds for three consecutive iterations and $(b-a) \leq 0.001$, we compute the optimal cluster threshold as $\rho = \frac{a+b}{2}$. The calculation process is given in Algorithm 1.
Algorithm 1 Compute the optimal cluster threshold |
Input: parameters a and b
Output: optimal threshold ρ
compute c = a + 0.618*(b − a);
compute f(a), f(b), f(c);
if not f(a) = f(b) = f(c) and (b − a) > 0.001 then
    while (b − a) > 0.001 do
        compute c = a + 0.382*(b − a);
        compute f(a), f(b), f(c);
        while not f(a) = f(b) = f(c) do
            if FR(c) < 1 then
                change c to increase fidelity and decrease ratio, b = c;
            else
                change c to decrease fidelity and increase ratio, a = c;
            end if
        end while
    end while
    ρ = (a + b)/2;
else
    ρ = (a + b)/2;
end if
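The 0.618/0.382 trial points mark Algorithm 1 as a golden-section search over $[d_c - 3\,std_{ij},\ d_c + 3\,std_{ij}]$. The sketch below is a standard golden-section minimization of $f(t) = |FR(t)-1|$; the paper's plateau test $f(a) = f(b) = f(c)$ is folded into the interval-width stop here, and the function $f$ must be supplied by the caller.

```python
def optimal_threshold(f, d_c, std_ij, tol=1e-3):
    """Golden-section sketch of Algorithm 1: minimize f(t) = |FR(t) - 1|."""
    a, b = d_c - 3.0 * std_ij, d_c + 3.0 * std_ij
    while b - a > tol:
        c1 = a + 0.382 * (b - a)          # left trial point
        c2 = a + 0.618 * (b - a)          # right trial point
        if f(c1) < f(c2):
            b = c2                        # minimum lies in [a, c2]
        else:
            a = c1                        # minimum lies in [c1, b]
    return (a + b) / 2.0                  # optimal threshold rho
```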
In this section, we compute the initial cluster centers and the number of clusters by clustering the FFV data. The frame feature vector data set $FFV = \{x^{(1)}, x^{(2)}, \cdots, x^{(m)}\}$ consists of 14-dimensional video frame feature vectors, each comprising the color, texture and information entropy features.
The density of the sample frame $i$ in the video stream is defined as:
$$\rho_i = \sum_{j} e^{-\left(\frac{d_{ij}}{d_c}\right)^2} \tag{15}$$
The distance of the j-th frame image from the i-th frame image in the same cluster is defined as:
$$\eta_i = dist\left(x^{(i)}, x^{(j)}\right),\quad i\in D,\ j\neq i \tag{16}$$
where i and j are in the same cluster.
The distance between the i-th frame element and the element j of another cluster is defined as:
$$\varphi_i = dist\left(x^{(i)}, x^{(j)}\right),\quad i\in D,\ j\in\mu^{(k)} \tag{17}$$
where $i$ is a frame in one cluster and $j$ is an element of a cluster whose center has already been determined.
According to Eq. (17), $\varphi_i$ may contain multiple elements. The product of the $\rho_i$, $\eta_i^{-1}$ and $\varphi_i$ weight factors is defined as the weighted product:
$$\omega(x) = \prod\left(\rho_i\times\eta_i^{-1}\times\varphi_i\right) \tag{18}$$
The initial cluster centers and the cluster number $k$ are directly related to $\omega(x)$; the solution is shown in Algorithm 2. First, the density of each sample is calculated using Eq. (15), and the maximum-density frame is selected as the first initial cluster center $\mu^{(1)}$. The distances between this center and the remaining frames are then computed. Frames at a distance of less than $t$ are assigned to the first cluster and removed from $D$. Next, $\omega_i$ is calculated for the samples in $D$ using Eq. (18), and the second initial cluster center $\mu^{(2)}$ is obtained as the maximizer of $\omega(x)$. Similarly, the samples satisfying the same condition are assigned to the second, third, ..., $k$-th clusters and removed from $D$. Finally, all samples are assigned to clusters, and the initial cluster centers $\mu^{(1)}, \mu^{(2)}, \cdots, \mu^{(k)}$ and the number $k$ are obtained. The clustering process is shown in Figure 3.
Algorithm 2 Compute the initial cluster centers and the number of clusters |
Input: FFV = {x(1), x(2), ..., x(m)}
Output: the initial cluster centers {μ(1), μ(2), ..., μ(k)} and the number of clusters k
for each sample i ∈ D do
    compute the density ρi;
end for
set the frame with maximum ρi as the first initial cluster center μ(1);
while D ≠ ∅ do
    for each sample i ∈ D do
        if d(μ(j), i) ⩽ t then
            assign i to cluster Cj, remove i from D;
        end if
    end for
    compute ηi, φi, ωi for each sample i ∈ D;
    set the next cluster center μ(j+1) = argmax_i ωi;
end while
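A sketch of Algorithm 2 follows. Since cluster membership is not fixed while the centers are being chosen, $\eta$ and $\varphi$ are replaced here by simple proxies (mean distance to the remaining frames and minimum distance to the already-chosen centers); the weight fusion itself follows Eq. (18), and the function name is ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def initial_centers(ffv, t, d_c):
    """Sketch of Algorithm 2: the densest frame seeds the first center;
    frames within distance t join that cluster; each next center is the
    remaining frame with the largest fused weight (cf. Eq. (18))."""
    ffv = np.asarray(ffv, dtype=float)
    dist = cdist(ffv, ffv)
    density = np.exp(-(dist / d_c) ** 2).sum(axis=1)   # Eq. (15)
    centers = [int(np.argmax(density))]                # first center mu(1)
    remaining = {i for i in range(len(ffv)) if dist[i, centers[0]] > t}
    while remaining:
        def weight(i):                                 # proxy for omega_i
            others = [j for j in remaining if j != i]
            eta = dist[i, others].mean() if others else 1.0  # intra proxy
            phi = dist[i, centers].min()                     # inter-distance
            return density[i] * phi / max(eta, 1e-12)
        c = max(remaining, key=weight)
        centers.append(c)
        remaining = {i for i in remaining if dist[i, c] > t}
    return centers
```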
In this section, we classify the frame images into $k$ clusters $C = \{C_1, C_2, ..., C_k\}$ and extract representative frames from these clusters. The error sum of squares is used as the criterion function. The frame images are classified into different clusters by employing Algorithm 3. Given the clusters $C = \{C_1, C_2, \cdots, C_k\}$, the specific steps are as follows:
Algorithm 3 Cluster the frame feature value |
Input: frame feature values FFV = {x(1), x(2), ..., x(m)}, the initial cluster centers {μ(1), μ(2), ..., μ(k)} and the number of clusters k
Output: k clusters C = {C1, C2, ..., Ck}
let Ci = ∅ (1 ⩽ i ⩽ k);
repeat
    for j = 1, 2, ..., m do
        compute the distance between the sample x(j) and each cluster center μ(i) (1 ⩽ i ⩽ k);
        compute λj = argmin_i dji;
        assign the sample x(j) to the nearest cluster: Cλj = Cλj ∪ {x(j)};
    end for
    for i = 1, 2, ..., k do
        calculate the new cluster center (μ(i))′ = (1/|Ci|) Σ_{x∈Ci} x;
        if (μ(i))′ ≠ μ(i) then
            update the current cluster center μ(i) to (μ(i))′;
        else
            keep the current mean vector unchanged;
        end if
    end for
until the current cluster center vectors are no longer updated.
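Algorithm 3 is standard k-means seeded with the centers from Algorithm 2; a minimal sketch, assuming no cluster empties out during refinement:

```python
import numpy as np
from scipy.spatial.distance import cdist

def refine_clusters(ffv, centers, max_iter=100):
    """Sketch of Algorithm 3: k-means refinement seeded with the
    initial centers produced by Algorithm 2."""
    ffv = np.asarray(ffv, dtype=float)
    mu = ffv[centers].copy()                      # k initial centers
    for _ in range(max_iter):
        labels = cdist(ffv, mu).argmin(axis=1)    # nearest-center assignment
        new_mu = np.vstack([ffv[labels == i].mean(axis=0)
                            for i in range(len(mu))])
        if np.allclose(new_mu, mu):               # centers stopped moving
            break
        mu = new_mu
    return labels, mu
```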
Step 1: Calculate $\rho_i$, $\eta_i$ and $\varphi_i$ for each frame in cluster $C_1$ by Eqs. (15), (16) and (17).
Step 2: Compute the maximum weight factor by Eq. (18) and select key frame $f_1$ from cluster $C_1$.
Step 3: Similarly, key frame $f_k$ can be obtained by computing the maximum weight factor via Eq. (18).
Step 4: Repeat Step 3 until the representative frames of all clusters are selected.
Step 5: The key frames $f = \{f_1, f_2, \cdots, f_k\}$ are extracted and the video summarization is generated.
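A sketch of this max-weight selection, computing $\rho$, $\eta$ and $\varphi$ per frame within each refined cluster and keeping the weight maximizer; it assumes $k \geq 2$ so that an inter-cluster distance exists, and the function name is ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def extract_key_frames(ffv, labels, mu, d_c):
    """Sketch of Steps 1-5: in every cluster, keep the frame with the
    largest fused weight of density, inverse intra-distance and
    inter-distance (Eqs. (15)-(18))."""
    ffv = np.asarray(ffv, dtype=float)
    dist = cdist(ffv, ffv)
    density = np.exp(-(dist / d_c) ** 2).sum(axis=1)        # Eq. (15)
    key_frames = []
    for k in range(len(mu)):
        idx = np.flatnonzero(labels == k)
        eta = dist[np.ix_(idx, idx)].mean(axis=1)           # intra, Eq. (16)
        phi = cdist(ffv[idx], np.delete(mu, k, axis=0)).min(axis=1)  # Eq. (17)
        w = density[idx] * phi / np.maximum(eta, 1e-12)     # Eq. (18)
        key_frames.append(int(idx[w.argmax()]))             # max-weight frame
    return sorted(key_frames)
```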
In this section, we turn to an empirical study of the proposed OTMW method on key frame extraction tasks and compare it with state-of-the-art cluster-based key frame extraction methods, namely HVM [25], ACSC [26], RGPH [27] and FCME [28]. We report improved performance across the Open Video dataset. We conduct a set of experiments using four different video types: surveillance, documentary, TV lecture and phone recording. These videos are publicly shared at https://open-video.org/. In this section, we take the Hcil2000_01 video, a randomly chosen video from the Open Video dataset, as an example to report the performance of the OTMW method.
The parameters of the Hcil2000_01 video during threshold optimization are shown in Table 1. For this video, the average distance $d_c$ and the standard deviation $std_{ij}$ of $d_{ij}$ are 2.3107 and 0.2259, respectively. Therefore, the parameters are $a = 1.6442$ and $b = 2.9993$, and the interval for the trial point $c$ is [1.6442, 2.9993]. The trial point is computed as $c = a + 0.618\times(b-a) = 2.481652$. At the sixth iteration, $f(a) = f(b) = f(c) = 0.0063$. The calculation of $c$ is changed to $c = a + 0.382\times(b-a)$ in the subsequent iterations. As shown in Table 1, the values $f(a) = f(b) = f(c) = 0.0063$ do not change over the next two iterations; however, $b - a = 0.122156 > 0.001$. Finally, $b - a = 0.000509 < 0.001$ at the 13th iteration, so the optimal threshold is $\rho = \frac{a+b}{2} = 1.644709$.
iterations | a | b | c | f'(a) | f'(b) | f'(c) |
1 | 1.6442 | 2.9993 | 2.4817 | -0.63 | -7.6 | -6.95 |
2 | 1.6442 | 2.4817 | 2.1617 | -0.63 | -6.95 | -3.33 |
3 | 1.6442 | 2.1617 | 1.9640 | -0.63 | -3.33 | -2.11 |
4 | 1.6442 | 1.9640 | 1.8419 | -0.63 | -2.11 | -2.11 |
5 | 1.6442 | 1.8419 | 1.7664 | -0.63 | -2.11 | -0.63 |
6 | 1.6442 | 1.7663 | 1.7197 | -0.63 | -0.63 | -0.63 |
7 | 1.6442 | 1.7197 | 1.6720 | -0.63 | -0.63 | -0.63 |
8 | 1.6442 | 1.6720 | 1.6544 | -0.63 | -0.63 | -0.63 |
9 | 1.6442 | 1.6544 | 1.6479 | -0.63 | -0.63 | -0.63 |
10 | 1.6442 | 1.6479 | 1.6455 | -0.63 | -0.63 | -0.63 |
11 | 1.6442 | 1.6455 | 1.6447 | -0.63 | -0.63 | -0.63 |
12 | 1.6442 | 1.6447 | 1.6443 | -0.63 | -0.63 | -0.63 |
13 | 1.6442 | 1.6447 | 1.6445 | -0.63 | -0.63 | -0.63 |
Note: f′(a)=100∗f(a), f′(b)=100∗f(b) and f′(c)=100∗f(c) |
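As a quick arithmetic check of the first row of Table 1:

```python
a, b = 1.6442, 2.9993
c = a + 0.618 * (b - a)   # golden-section trial point
print(round(c, 6))        # 2.481652, matching Table 1, iteration 1
```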
Key frames are extracted with the OTMW method across the Open Video dataset. The key frames of the Hcil2000_01 video are shown in Figure 3, and the fidelity and ratio results are shown in Table 2, where Nf denotes the number of frames and Nrf the number of key frames. The fidelity measures of the different videos range from 93 to 98, with an average of 96.12; the ratio measures range from 95 to 98, with an average of 97.13. The extracted key frames are consistent with human judgment.
video name | Nf | Nrf | fidelity | ratio |
Traffic monitoring | 120 | 5 | 95.94 | 95.83 |
Office video_01 | 161 | 5 | 96.84 | 96.89 |
Entertainment_01 | 151 | 5 | 97.14 | 96.69 |
Entertainment_02 | 191 | 7 | 98.01 | 96.34 |
Entertainment_03 | 101 | 4 | 97.79 | 96.04 |
Office video_02 | 77 | 3 | 96.34 | 96.10 |
Indi012 | 201 | 6 | 94.53 | 97.00 |
UGS01_008 | 301 | 5 | 93.58 | 98.34 |
UGS07_005 | 400 | 8 | 95.07 | 98.00 |
UGS01_006 | 360 | 8 | 95.81 | 97.78 |
UGS01_001 | 331 | 7 | 94.26 | 97.89 |
Hcil2000_01 | 210 | 7 | 95.58 | 96.67 |
Marchionini | 100 | 4 | 96.96 | 96.00 |
RayDiessenIBM | 230 | 6 | 96.73 | 97.39 |
Entertainment_04 | 201 | 3 | 97.63 | 98.51 |
Daily life | 283 | 5 | 97.45 | 98.23 |
We compare OTMW with state-of-the-art cluster-based algorithms in terms of the fidelity and ratio measures. In the experiments, the number of clusters for the semi-automatic cluster-based methods is set to the same value as for OTMW. The fidelity and ratio results are shown in Figure 4 and Table 3. The HVM method achieves an average fidelity of 83.75 and an average ratio of 95.59; ACSC achieves an average fidelity of 85.75 and an average ratio of 94.79; and the average fidelities of RGPH and FCME are 85.61 and 85.38, respectively. The proposed OTMW method achieves an average fidelity of 96.24 and an average ratio of 97.15, a fidelity improvement of 10.63-12.49 over the other cluster-based methods. The fluctuations of the ratio measure across videos are shown in Figure 5. The OTMW method has a ratio variance of 0.73, whereas the ratio variances of the HVM and ACSC cluster-based methods are 22.11 and 11.12, about 30 and 15 times larger than that of OTMW, respectively. The OTMW method achieves a 1.56-2.24 ratio improvement over the other cluster-based methods with only small fluctuations.
video name | HVM | ACSC | OTMW |
Traffic monitoring | 97.5 | 95.0 | 95.83 |
Officevideo_01 | 98.76 | 88.20 | 96.8 |
Entertainment_01 | 81.46 | 92.05 | 96.67 |
Entertainment_02 | 98.43 | 93.72 | 96.34 |
Entertainment_03 | 98.14 | 94.06 | 96.04 |
Officevideo_02 | 98.70 | 89.61 | 96.10 |
Indi012 | 99.50 | 94.50 | 97.02 |
UGS01_008 | 95.26 | 98.50 | 98.34 |
UGS07_005 | 93.75 | 98.50 | 98.00 |
UGS01_006 | 96.11 | 97.78 | 97.78 |
UGS01_001 | 89.43 | 98.19 | 97.89 |
Hcil2000_01 | 90.95 | 97.14 | 96.67 |
Marchionini | 99.00 | 96.00 | 96.96 |
RayDiessenIBM | 94.35 | 99.13 | 97.39 |
Entertainment_04 | 99.50 | 89.55 | 98.51 |
Daily life | 98.58 | 94.70 | 98.23 |
To further assess the performance of the OTMW method, we consider key frame extraction tasks on the surveillance, documentary, TV lecture and phone recording datasets. The HVM method achieves average fidelities of 82.42, 86.90, 84.36 and 81.22 and average ratios of 95.76, 92.73, 94.77 and 99.05, respectively. The ACSC method achieves average fidelities of 88.40, 84.07, 81.02 and 89.57 and average ratios of 92.19, 97.37, 97.42 and 92.13. The average fidelities of the FCME method are 86.39, 83.77, 85.51 and 87.62, and the average ratios of the RGPH method are 86.39, 83.18, 85.94 and 86.45. The OTMW method achieves average fidelities of 97.07, 94.40, 95.87 and 97.54 and average ratios of 96.43, 97.83, 97.01 and 98.37. The fidelity measures on the various datasets are shown in Figure 6. The OTMW method achieves a 9.91-11.66 fidelity improvement and a 0.91-2.77 ratio improvement over HVM, ACSC, FCME and RGPH.
In this paper, an innovative cluster-based key frame extraction method is presented for multiple types of videos. The method analyzes video content by extracting color, texture and information-complexity features. The threshold optimization function is constrained by the fidelity and ratio measures; by computing the optimal threshold, it avoids the dependence on a fixed threshold found in traditional cluster-based methods. The density ρi, intra-distance ηi, inter-distance φi and weight factor ωi are used to compute the initial cluster centers and the number of clusters, and to extract representative frames from the k clusters. The method shows promising results on different video datasets. Moreover, OTMW achieves competitive and even better fidelity and ratio performance compared with several state-of-the-art cluster-based methods. Overall, we find that OTMW is well suited to the key frame extraction problem in static video summarization. However, whether the proposed method also applies in real-life production and living environments remains to be verified. In the future, we will explore applying the proposed method to real-time video surveillance, and we will investigate how to integrate it into camera clients for daily production and life.
This work is sponsored by the National Natural Science Foundation of P. R. China (No. 61872196, No. 61872194, No. 61902196, No. 62102194 and No. 62102196), the Scientific and Technological Support Project of Jiangsu Province (No. BE2019740, No. BK20200753 and No. 20KJB520001), the Major Natural Science Research Projects in Colleges and Universities of Jiangsu Province (No. 18KJA520008), the Major Natural Science Research Projects in Colleges and Universities of Anhui Province (No. KJ2019ZD20), the Six Talent Peaks Project of Jiangsu Province (No. RJFW-111), and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (No. KYCX19_0909, No. KYCX19_0911, No. KYCX20_0759, No. KYCX21_0787, No. KYCX21_0788 and No. KYCX21_0799).
The authors declare there is no conflict of interest.