Research article

Simulation of Spike Wave Propagation and Two-to-one Communication with Dynamic Time Warping

  • Received: 21 September 2016 Accepted: 30 November 2016 Published: 07 December 2016
  • Although intercommunication among the different areas of the brain is well known, the rules of communication in the brain are not clear. Many previous studies have examined the firing patterns of neural networks in general, whereas we have examined the involvement of the firing patterns of neural networks in communication. In order to understand information processing in the brain, we simulated the interactions of the firing activities of a large number of neural networks in a 25 × 25 two-dimensional array to analyze spike behavior. We stimulated the transmitting neurons at 0.1 msec and then observed the generated spike propagation for 120 msec. In addition, the positions of the firing neurons were determined from the spike waves for different variances in the temporal fluctuations of the neuronal characteristics. These results suggested that the changes (diversity) in the propagation routes of neuronal transmission resulted from variance in synaptic propagation delays and refractory periods. The simulation was used to examine differences in the percentages of neurons with significantly larger test statistics and the variances in the synaptic delay and refractory period. These results suggested that multiplex communication was more stable if the synaptic delay and refractory period varied.

    Citation: Shun Sakuma, Yuko Mizuno-Matsumoto, Yoshi Nishitani, Shinichi Tamura. Simulation of Spike Wave Propagation and Two-to-one Communication with Dynamic Time Warping[J]. AIMS Neuroscience, 2016, 3(4): 474-486. doi: 10.3934/Neuroscience.2016.4.474



    A singular point (SP) is an important global feature of fingerprints. By Henry's definition [1], an SP is classified as either a Core, the topmost point of the innermost curving ridge, or a Delta, the center of a triangular region where three different ridges meet.

    In modern research on automatic fingerprint identification systems (AFIS), SPs have two main applications. First, SPs are used to index fingerprint types [2,3], narrowing down the search space when fingerprint samples must be matched against a large-scale database; in this case only the SP type matters. Second, SPs are used to align a registered fingerprint sample with the input one to reduce computational cost [4,5]; here the SP locations are far more important than in the first application and can even decide the matching performance. Whether an SP location is accurate at pixel-level directly determines whether the fingerprint alignment is accurate, and hence whether the negative effect of image rotation or translation on fingerprint recognition can be resolved. Because Deltas are often located on the lower parts of fingerprints, they are not guaranteed to appear in the captured fingerprint image even if they do exist in the fingerprint. Therefore, accurate determination of core locations at pixel-level is especially important to fingerprint alignment and a major challenge for SP detection.

    Among existing methods of SP detection, the Poincaré Index (PI), introduced by Kawagoe and Tojo [6], is a classical one that has been widely used because of its simple design, its robustness against image rotation, and its ability to distinguish SP types. For instance, Fan et al. [7] and Jin and Kim [8] each proposed improved methods built upon PI. However, PI-based methods are sensitive to noise: a decrease in fingerprint image quality degrades the detection rate significantly. To deal with this problem, some researchers combined PI with other methods [9]. Meanwhile, non-PI-based methods were proposed, such as orientation-curvature-based approaches [7,10].

    Although SP detection has been extensively researched, problems and challenges remain. For example, there are still large discrepancies between automatically detected SPs and those manually identified by Henry's definition. Most existing methods of SP detection are based on a block-level orientation field [4,5,7,9,10], so the center of a block is usually regarded as an SP's location. Bazen and Gerez [11] combined Principal Component Analysis (PCA) and PI to compute a high-resolution Orientation Field (OF) and detect SPs at pixel-level, but their results often contain more false-positive SPs because of the linear filters used to compute the high-resolution OF. Jin and Kim [8] improved the method and decreased the number of false-positive SPs by using multi-scale Gaussian filters and a Nested-PI to estimate the OF and detect SPs, respectively. However, even for high-quality fingerprint images, the detected SP locations are inconsistent with Henry's definition because the high-curvature area of the fingerprint, where SPs are always located, is degraded by the Gaussian filters.

    In fact, there are also methods that use singularity or pseudo-singularity points for classification/authentication. C. Militello et al. [12] proposed a fingerprint recognition approach based on core and delta SP detection, and V. Conti et al. [13] proposed one based on SP detection and singularity-region analysis. Both systems use the core and delta positions, their relative distance, and their orientation to perform both classification and matching tasks. The approach of V. Conti et al. [13] further enhances the performance of SP-based methods by introducing pseudo-SPs when the standard SPs cannot be extracted.

    In addition, M. Sabir et al. [14] proposed an approach that aligns fingerprints and then matches them in fewer computations. Ridges are excluded from the matching process to avoid redundant computations, which is where their approach differs from traditional cross-correlation-based matching algorithms. T. M. Khan et al. [15] proposed a real-time image filtering technique that allows faster fingerprint image enhancement, circumventing the hurdle of expensive hardware implementations of fingerprint filtering techniques.

    To improve SP detection and provide more accurate reference points for fingerprint alignment, this paper proposes a novel pixel-level core localization method based only on spatial-domain fingerprint features. The proposed method considers neither the OF nor the frequency domain (including filters) of the fingerprint. First, the specific ridge/valley distribution in a core point area, called the Furcation and Confluence characteristics, is used to extract the innermost curve of the ridges; the summit of this curve is regarded as the location of a core. Furthermore, a Furcation-and-Confluence-correlation-based scheme for removing false Furcations and Confluences is designed to enhance the method's robustness against noise. Experimental results on the FVC2000-DB2 database indicate that, compared with the method of Jin and Kim [8], the proposed method achieved better core localization accuracy for 40.8% of the samples, similar accuracy for 55% of the samples, and lower accuracy for 4.2% of the samples.

    The remainder of this paper is organized as follows. Section 2 describes how to extract Furcation and Confluence from fingerprint images. Section 3 introduces the anti-noise measures that remove false Furcations and Confluences from a candidate set based on the correlation of Furcation and Confluence. Section 4 describes the final step, extracting the innermost curve of the ridges and accurately locating a core point. Section 5 presents experimental results and analyses. Finally, Section 6 draws conclusions.

    Both Furcation and Confluence (FC) are spatial-domain features of fingerprint ridges/valleys. They are a direct, natural presentation of the ridges/valleys of fingerprint images, in contrast to representations of extracted features that have been transformed by filters or operators. This is why FC is used here.

    The extraction of Furcation or Confluence is based on the distribution of ridges and valleys. In other words, the pixel values will not be directly taken into account. Therefore, binary images are used for FC extraction.

    Figure 1 shows the binary image of an input fingerprint with a resolution of 500 dpi, where the binary image is obtained by the short-time Fourier transform (STFT) [16]. Let Cn be the nth core detected by a specific algorithm [8] (as shown in Figure 1) and θn be the opening direction of Cn calculated from the local OF around it [17] (shown by the purple line in Figure 1, where the arrow indicates the opening direction). Then, according to Cn and θn, a single-pixel-wide line l with a length of 60 pixels can be drawn (the blue line in Figure 1). Treating each pixel of l as a midpoint, 60 line segments perpendicular to l, each 41 pixels long, can be constructed. These segments, shown as the red lines in Figure 1, are called Judge Lines (JLs). Note that the two numeric parameters (60, the total number of judge lines, and 41, the length of each judge line) are determined by the resolution of the input fingerprint image; basically, the higher the resolution, the larger the parameters. Specifically, taking Figure 1 as an example, the width of a ridge or valley is roughly 5 pixels at 500 dpi, so four ridges or valleys span approximately 20 pixels; each JL therefore extends 20 pixels on either side of l, giving a length of 41 pixels. Similarly, since ridges and valleys alternate, the length of l is set to 60 pixels so that l can pass through up to 6 pairs of ridges and valleys.
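    As a concrete illustration, the judge-line geometry described above can be sketched in a few lines of Python. This is a minimal sketch only: the function name judge_lines and the coordinate convention are our own, and a real implementation would also clip the lines to the image and sample the binary image along them.

```python
import numpy as np

def judge_lines(core_xy, theta, n_lines=60, jl_len=41):
    """Construct judge lines for a detected core (hypothetical helper).

    core_xy : (x, y) pixel coordinates of the core Cn
    theta   : opening direction of Cn, in radians
    Returns an (n_lines, jl_len, 2) integer array: one row of (x, y)
    samples per judge line, perpendicular to the line l drawn from Cn.
    """
    cx, cy = core_xy
    d = np.array([np.cos(theta), np.sin(theta)])    # direction of line l
    p = np.array([-np.sin(theta), np.cos(theta)])   # perpendicular to l
    half = jl_len // 2                              # 20 pixels on each side
    jls = np.empty((n_lines, jl_len, 2), dtype=int)
    for i in range(n_lines):
        mid = np.array([cx, cy], dtype=float) + (i + 1) * d  # i-th pixel of l
        for j, t in enumerate(range(-half, half + 1)):
            jls[i, j] = np.rint(mid + t * p).astype(int)
    return jls
```

    With the default parameters, this yields 60 judge lines of 41 samples each, matching the 500 dpi setting discussed above.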

    Figure 1.  The formation of judge lines.

    Figure 2 shows the local region of a binary fingerprint image, in which the red lines are JLs. Consider the number of fingerprint lines, including both ridges and valleys, intersected by these judge lines. If the lower JL intersects more fingerprint lines than the adjacent upper one does, the corresponding position is called a Furcation. According to the exact difference in intersection counts between these two JLs, a Furcation is further divided into four categories: two-Furcation (as shown in Figure 2a), four-Furcation, six-Furcation, and eight-Furcation. Using the same rule in the opposite direction, a Confluence can be defined and divided into four classes: two-Confluence (as shown in Figure 2b), four-Confluence, six-Confluence, and eight-Confluence. In particular, six-Furcations, eight-Furcations, six-Confluences, and eight-Confluences are caused by noise.
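    The classification rule above amounts to comparing run counts along two adjacent JLs. The following is a minimal Python sketch; the helper names count_lines and classify_fc are our own, and the JL samples are assumed to be 0/1 binary values (ridge/valley).

```python
def count_lines(jl_pixels):
    """Number of fingerprint lines (ridge and valley runs) crossed by one JL.
    jl_pixels: non-empty sequence of 0/1 values sampled along the judge line."""
    runs = 1
    for a, b in zip(jl_pixels, jl_pixels[1:]):
        if a != b:
            runs += 1
    return runs

def classify_fc(upper_jl, lower_jl):
    """Classify the position between two adjacent JLs as e.g. 'two-Furcation',
    'two-Confluence', or None when the intersection counts match."""
    diff = count_lines(lower_jl) - count_lines(upper_jl)
    names = {2: "two", 4: "four", 6: "six", 8: "eight"}
    if diff > 0 and diff in names:
        return f"{names[diff]}-Furcation"    # lower JL crosses more lines
    if diff < 0 and -diff in names:
        return f"{names[-diff]}-Confluence"  # upper JL crosses more lines
    return None
```
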

    Figure 2.  The definition of Furcation and Confluence: (a) Furcation; (b) Confluence.

    Curves (their definition and extraction will be discussed in Section 4) inside the local region of Cn can be extracted based on Furcation and Confluence. In this paper, the summit of the innermost curve is regarded as the core point. Hence, FC extraction is the first step of the proposed method.

    Through experiments and observation, the following statements can be made for FC: if two adjacent JLs of a fingerprint start at the same ridge/valley and end at another same ridge/valley, then

    Statement 1: Their intersection numbers will be the same if there is no Furcation or Confluence on the two adjacent JLs, and

    Statement 2: The widths of corresponding intersections on the two adjacent JLs will be nearly the same if there is no Furcation or Confluence on the two JLs.

    However, as shown in Figure 3, when both a Furcation and a Confluence appear on the same JL, the numbers of intersections of two adjacent JLs can be equal, making Statement 1 inapplicable for FC extraction. Consequently, only Statement 2 is adopted. Two kinds of FC extractors, the Furcation extractor cz0 and the Confluence extractor cz1, are introduced. According to the definition of FC in Section 2.1, the number and width of the fingerprint lines intersecting adjacent JLs can be used to determine a Furcation or Confluence, and the extractors cz0 and cz1 (both two-dimensional matrices) are based on this principle. Specifically, let JLi and JLi+1 be adjacent JLs. The value in cz0 is defined as the total width of the first k+1 intersections of JLi+1 (from left to right, including ridges and valleys) minus the total width of the first k intersections of JLi; a value less than 0 corresponds to a Furcation. Similarly, the value in cz1 is defined as the total width of the first k intersections of JLi+1 minus the total width of the first k+1 intersections of JLi; a value greater than 0 corresponds to a Confluence. The mathematical expressions are given in Eq (7) and Eq (9), respectively. Taking Figure 4 as an example, the total width of the first 6 intersections of JL2 is less than that of the first 5 intersections of JL1, so the value of cz0 is less than 0, indicating the extraction of a Furcation.

    Figure 3.  Furcation and Confluence located in the same judge line.
    Figure 4.  FC extractor formation of an ideal situation.

    To better explain the formation of cz0 and cz1, some matrices need to be constructed: Ei, YSi, and Zi.

    1) Ei, the ith row of E, records the widths of the fingerprint lines intersected by the ith JL: the width of every intersection is counted and recorded from left to right, padding empty positions with 0, which yields Ei.

    2) YSi, the ith row of YS, records the type of every intersection: for every nonzero element of Ei, let 1 represent an intersection located in a valley and −1 represent an intersection located in a ridge, padding with 0 where Ei(j)=0.

    3) Zi, the ith row of Z, records the number of intersections of the ith JL: for each row of E, the number of positive integers is counted, which yields Zi.
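    Under these conventions, the three matrices can be built by run-length encoding each JL. Below is a Python sketch (the function name encode_jls is ours); binary rows are assumed with 1 = ridge pixel and 0 = valley pixel, and rows are zero-padded to a fixed width.

```python
import numpy as np
from itertools import groupby

def encode_jls(jl_rows, max_runs=12):
    """Build the matrices E (run widths), YS (run types: 1 = valley,
    -1 = ridge) and Z (run counts) from binary judge-line samples."""
    n = len(jl_rows)
    E = np.zeros((n, max_runs), dtype=int)
    YS = np.zeros((n, max_runs), dtype=int)
    Z = np.zeros(n, dtype=int)
    for i, row in enumerate(jl_rows):
        for j, (val, grp) in enumerate(groupby(row)):
            if j >= max_runs:
                break
            E[i, j] = len(list(grp))             # width of the j-th intersection
            YS[i, j] = -1 if val == 1 else 1     # ridge -> -1, valley -> 1
        Z[i] = np.count_nonzero(E[i])            # number of intersections
    return E, YS, Z
```
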

    As discussed above, Statement 2 works under the premise that the two adjacent JLs start at the same fingerprint line and end at another same fingerprint line; this is called an ideal situation. We now explain how Statement 2 is used for FC extractor formation in an ideal situation.

    As shown in Figure 4, let JL1 be the upper JL, JL2 be the adjacent one below JL1, and JLi(j) be the width of jth intersection on the ith JL. Then we have

    $JL_1(j) \approx JL_2(j),\quad j \in [1,4]$ (1)

    Equation (1) is the rule described in Statement 2. But the rule is broken when j=5: as a Furcation appears in Figure 4, we have

    $JL_1(5) > JL_2(5)$

    and even

    $JL_1(5) > JL_2(5) + JL_2(6)$ (2)

    Equation (2) is satisfied only if a Furcation or Confluence appears. The general format of Eq (2) can be expressed as

    $JL_1(k) > JL_2(k) + JL_2(k+1)$ (3)

    where k denotes the intersection at which the Furcation appears.

    If we use JLi and JLi+1 to represent two adjacent JLs, then Eqs (1) and (3) can be written in E respectively as

    $\sum_{j=1}^{k-1} E(i,j) \approx \sum_{j=1}^{k-1} E(i+1,j)$ (4)

    and

    $E(i,k) > E(i+1,k) + E(i+1,k+1)$ (5)

    Adding Eq (4) to Eq (5), we obtain Eq (6), which holds only when k is an intersection at which a Furcation appears.

    $\sum_{j=1}^{k} E(i,j) > \sum_{j=1}^{k+1} E(i+1,j)$ (6)

    According to the definition of cz0 above, then we have

    $cz0(i,k) = \sum_{j=1}^{k+1} E(i+1,j) - \sum_{j=1}^{k} E(i,j)$ (7)

    where cz0(i,k) denotes the Furcation extractor in the ideal situation; cz0(i,k) < 0 denotes the appearance of a Furcation while traversing E using Eq (7).

    Similarly, when a Confluence appears, the relationship between the intersections of two adjacent JLs can be expressed as

    $E(i+1,k) > E(i,k) + E(i,k+1)$ (8)

    Then the Confluence extractor in ideal situation can be written as

    $cz1(i,k) = \sum_{j=1}^{k} E(i+1,j) - \sum_{j=1}^{k+1} E(i,j)$ (9)

    where cz1(i,k)>0 denotes the appearance of a Confluence.
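    In the ideal situation, Eqs (7) and (9) can be applied by traversing E directly. Below is a Python sketch with 0-based indices, so "the first k intersections" becomes a prefix slice; the function name fc_extract_ideal is our own.

```python
import numpy as np

def fc_extract_ideal(E):
    """Traverse E with the ideal-situation extractors of Eqs (7) and (9).
    Returns lists of (row i, intersection k) where cz0 < 0 (Furcation)
    or cz1 > 0 (Confluence). E rows are zero-padded run widths."""
    furcations, confluences = [], []
    n_rows, n_cols = E.shape
    for i in range(n_rows - 1):
        for k in range(n_cols - 1):
            if E[i, k] == 0 or E[i + 1, k] == 0:
                break  # ran past the recorded intersections of either JL
            # Eq (7): cz0(i,k) = sum_{j<=k+1} E(i+1,j) - sum_{j<=k} E(i,j)
            cz0 = E[i + 1, :k + 2].sum() - E[i, :k + 1].sum()
            # Eq (9): cz1(i,k) = sum_{j<=k} E(i+1,j) - sum_{j<=k+1} E(i,j)
            cz1 = E[i + 1, :k + 1].sum() - E[i, :k + 2].sum()
            if cz0 < 0:
                furcations.append((i, k))
            if cz1 > 0:
                confluences.append((i, k))
    return furcations, confluences
```

    For the Figure 4 example, with run widths of 5 pixels and a wide intersection of 12 pixels on the upper JL, a Furcation is flagged exactly at the wide intersection.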

    However, the ideal situation is not always satisfied. Two adjacent JLs may start or end at different fingerprint lines, a situation called "Edge Dislocation" (ED), as shown in Figure 5. Besides, the emergence of more than two FCs on the same JL will generate mistakes if Eqs (7) and (9) are used directly. These two problems are solved by introducing the "Edge Compensating Factor" (ECF) and the "Jumper Factor" (JF), respectively.

    Figure 5.  The case of a non-ideal situation: (a) a binary image with two adjacent JLs; (b) the width of corresponding intersections in Ei and Ei+1; (c) the types of corresponding intersections in YSi and YSi+1.

    Let JLi be the upper JL in Figure 5(a), and JLi+1 be the lower one. As Edge Dislocation occurs between these two JLs, the extra intersection, which is Ei(1) in this example, needs to be cut off prior to subsequent processing.

    Through observation, there are two properties that can be used to identify the ED:

    Property 1 The first intersections of two adjacent JLs are of different types when an ED appears;

    Property 2 The difference of the widths between the first intersections of two adjacent JLs is greater than a threshold when an ED appears.

    Equations (10) and (11) below can be obtained by using these properties.

    $cys(1) = YS_i(1) \times YS_{i+1}(1)$ (10)
    $d(1) = E_{i+1}(1) - E_i(1)$ (11)

    where cys(1) = −1 indicates the satisfaction of Property 1, and |d(1)| ≥ dT indicates the satisfaction of Property 2, where dT denotes the average ridge/valley width in a binary fingerprint image. An ED emerges when both Properties are satisfied.

    Then, the values of ECFs need to be assigned. Let cw0 and cw1, with initial values of 0, be the ECF of JLi and JLi+1 respectively. The assignment rules are as follows:

    $\{cw_0 \leftarrow 1,\ cw_1 \leftarrow 0\}$, if $d(1) > 0$

    $\{cw_0 \leftarrow 0,\ cw_1 \leftarrow 1\}$, if $d(1) \le 0$
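    The ED test of Eqs (10) and (11) together with the ECF assignment can be sketched as follows. This is a hedged sketch: the helper name is ours, YS entries are assumed to be 1/−1 as defined in Section 2.2.1, and we assume d(1) > 0 means the extra first intersection lies on the upper JL (so cw0 = 1 skips it).

```python
def edge_compensating_factors(E_i, E_i1, YS_i, YS_i1, d_T):
    """Detect Edge Dislocation between two adjacent JLs and assign the
    Edge Compensating Factors cw0 (upper JL_i) and cw1 (lower JL_{i+1}).

    E_i/E_i1   : run widths of the two JLs (rows of E)
    YS_i/YS_i1 : run types (1 = valley, -1 = ridge)
    d_T        : average ridge/valley width of the binary image."""
    cw0 = cw1 = 0
    cys = YS_i[0] * YS_i1[0]         # Eq (10): type product of first runs
    d = E_i1[0] - E_i[0]             # Eq (11): width difference of first runs
    if cys == -1 and abs(d) >= d_T:  # both ED properties satisfied
        if d > 0:
            cw0, cw1 = 1, 0          # cut off the extra first run of JL_i
        else:
            cw0, cw1 = 0, 1          # cut off the extra first run of JL_{i+1}
    return cw0, cw1
```
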

    Incorporating these ECFs into Eqs (7) and (9), we obtain Eqs (12) and (13), which are capable of cutting off the extra intersections.

    $zc0_1(j,w) = \sum_{k=1+cw_1}^{w+1+cw_1} E_i(j+1,k) - \sum_{l=1+cw_0}^{w+cw_0} E_i(j,l)$ (12)
    $zc1_1(j,w) = \sum_{l=1+cw_0}^{w+1+cw_0} E_i(j,l) - \sum_{k=1+cw_1}^{w+cw_1} E_i(j+1,k)$ (13)

    To clarify the necessity of introducing JF, we redraw Figure 5(b) by adding some marks, as shown in Figure 6, and then deal with the JLs in Figure 5(a) by using Eqs (12) and (13).

    Figure 6.  Add marks to Figure 5(b).

    When w=1, we have

    $zc1_1(j,1) = -3 < 0$

    which indicates the appearance of a Confluence. As this is a two-Confluence (the way of discriminating the FC type will be discussed later), the three upper intersections Ei(j,2), Ei(j,3), and Ei(j,4) correspond to Ei(j+1,1), as marked by the red rectangles in Figure 6. If we then continue to traverse Ei by increasing w by 1, we get

    $zc1_1(j,2) = -5 < 0$

    which indicates a mistakenly detected Confluence. The JF needs to be introduced to eliminate this kind of error by adjusting w in Eqs (12) and (13). In this example, JF equals 2 and only w in Ei(j,l) is adjusted. As shown in Figure 6, when the two-Confluence is detected, w of Ei(j,l) jumps two more steps from the dotted blue circle to the solid blue one, while w in Ei(j+1,k) stays at the solid blue circle in the second row. After this process, the next Furcation or Confluence can be extracted accurately.

    The value of JF is determined according to the type of FC, so we will discuss the assignment method of JF along with the total procedure of FC extraction in Section 2.2.5.

    Through observation, we found that the FC type can be determined by calculating the widths of the intersections on the JLs. Take Figure 7 as an example of how FC types can be determined. The red lines in Figure 7(a), (b) are two adjacent JLs; the blue pixels in Figure 7(b) represent the intersections of the JL with the curved ridge. Figure 7(c) shows the pixels on the JLs in the Furcation area, where the white squares indicate pixels on the valleys and the dark-gray ones pixels on the ridges. It can easily be seen from Figure 7(c) that the sum of the five lower intersections nearly equals the width of the upper intersection. This quantitative relation can also be found in the other types of Furcation and Confluence and is therefore useful for determining FC types.

    Figure 7.  The determination of Furcation type: (a) the first JL; (b) the second JL; (c) pixels on JLs of the Furcation area in (a) & (b).

    Taking Furcation as an example, the above quantitative relation can be expressed as

    $E_i(j,x) \approx \sum_{k=x}^{x+N} E_i(j+1,k)$ (14)

    where x denotes the column coordinate of a Furcation and N denotes the type of the Furcation (N=2 for a two-Furcation, N=4 for a four-Furcation, and so on). The problem of determining the Furcation type is thus converted to finding the optimal N satisfying Eq (14). The determination of the Confluence type is almost the same.
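    Finding the optimal N in Eq (14) is a small search over N ∈ {2, 4, 6, 8} that can stop as soon as the mismatch starts growing again. Below is a Python sketch with 0-based indices (the function name is ours; the early stopping mirrors the iterative idea used later in Algorithms 1 and 2).

```python
def furcation_type(E_row_upper, E_row_lower, x):
    """Find the Furcation type N of Eq (14): the N in {2, 4, 6, 8} for which
    the summed widths E_row_lower[x : x+N+1] best match the upper width
    E_row_upper[x]. x is the 0-based index of the Furcation intersection."""
    target = E_row_upper[x]
    best_N, best_err = None, float("inf")
    for N in (2, 4, 6, 8):
        if x + N >= len(E_row_lower):
            break                      # not enough lower intersections left
        err = abs(sum(E_row_lower[x:x + N + 1]) - target)
        if err >= best_err:
            break                      # mismatch grows once N overshoots
        best_N, best_err = N, err
    return best_N
```
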

    Taking ECF and JF into consideration, the FC extractors in practical application are as follows

    $cz0(j,w) = \sum_{k_0=1}^{w_1+1} E_i(j+1,k_0) - \sum_{l_0=1}^{w_0} E_i(j,l_0)$ (15)
    $cz1(j,w) = \sum_{l_1=1}^{w_0+1} E_i(j,l_1) - \sum_{k_1=1}^{w_1} E_i(j+1,k_1)$ (16)

    where

    $w_0 = w + cw_0 + a_0$
    $w_1 = w + cw_1 + a_1$

    where a0 and a1 are cumulative values of JF (their assignment will be discussed later).

    The procedure of FC extraction is as follows:

    Step 1: Traverse Ei using Eqs (15) and (16). (Note that Eqs (10) and (11) are used to calculate the corresponding cw0 and cw1, and that both a0 and a1 must be re-initialized to 0 whenever j is increased by 1.) cz0(j,w) < 0 indicates the appearance of a Furcation, and cz1(j,w) < 0 indicates the appearance of a Confluence. Then, execute Step 2 to determine the FC type.

    Step 2: The numerical implementation of FC type determination will be discussed in this step. Since the determination of the Furcation type and the Confluence type are theoretically the same, we take Furcation as an example to show the numerical implementation.

    As shown in Figure 8, let a Furcation appear on the intersection marked with w0; four intervals then correspond to the four different values of N in Eq (14), ranging from N=2 to N=8. Determining the optimal N satisfying Eq (14) is equivalent to finding the interval in Figure 8 whose value of $\left|\sum_{k} E_i(j+1,k) - E_i(j,w_0)\right|$, the mismatch in Eq (14), is minimal. This can be achieved iteratively, which means that the values for all four intervals do not need to be calculated, reducing the computational cost.

    Figure 8.  The determination of Furcation type.

    Equations (17) and (18) are used to determine the Furcation type.

    $g0_1 = \left| c_0 + \sum_{k_{01}=w_1+1}^{w_1+2+2bc_0} E_i(j+1,k_{01}) \right|$ (17)
    $g0_2 = \left| c_0 + \sum_{k_{02}=w_1+1}^{w_1+4+2bc_0} E_i(j+1,k_{02}) \right|$ (18)

    where $c_0 = cz0(j,w) - E_i(j+1,w_1+1)$ and the initial value of $bc_0$ is 0.

    We use pseudocode formatting to depict the process of determining the Furcation types, as shown in Algorithm 1.

    Algorithm 1 The process of determining the Furcation type
    1: $bc_0 \leftarrow 0$
    2: while true do
    3:   $g0_1 \leftarrow \left| c_0 + \sum_{k_{01}=w_1+1}^{w_1+2+2bc_0} E_i(j+1,k_{01}) \right|$
    4:   $g0_2 \leftarrow \left| c_0 + \sum_{k_{02}=w_1+1}^{w_1+4+2bc_0} E_i(j+1,k_{02}) \right|$
    5:   $b_0 \leftarrow w_1+4+2bc_0$
    6:   if $b_0 > Z_i(j+1,1)$ or $g0_1 < g0_2$ then
    7:     $N \leftarrow 2bc_0+2$ /* N indicates the Furcation type */
    8:     break
    9:   else
    10:     $bc_0 \leftarrow bc_0+1$
    11:   end if
    12: end while


    Similarly, the equations for Confluence type determination are as follows

    $g1_1 = \left| c_1 + \sum_{k_{11}=w_0+1}^{w_0+2+2bc_1} E_i(j,k_{11}) \right|$ (19)
    $g1_2 = \left| c_1 + \sum_{k_{12}=w_0+1}^{w_0+4+2bc_1} E_i(j,k_{12}) \right|$ (20)

    where $c_1 = cz1(j,w) - E_i(j,w_0+1)$, and the initial value of $bc_1$ is 0.

    The process of Confluence type determination is also described using a pseudocode format, as shown in Algorithm 2.

    Algorithm 2 The process of determining the Confluence type
    1: $bc_1 \leftarrow 0$
    2: while true do
    3:   $g1_1 \leftarrow \left| c_1 + \sum_{k_{11}=w_0+1}^{w_0+2+2bc_1} E_i(j,k_{11}) \right|$
    4:   $g1_2 \leftarrow \left| c_1 + \sum_{k_{12}=w_0+1}^{w_0+4+2bc_1} E_i(j,k_{12}) \right|$
    5:   $b_1 \leftarrow w_0+4+2bc_1$
    6:   if $b_1 > Z_i(j,1)$ or $g1_1 < g1_2$ then
    7:     $N \leftarrow 2bc_1+2$ /* N indicates the Confluence type */
    8:     break
    9:   else
    10:     $bc_1 \leftarrow bc_1+1$
    11:   end if
    12: end while


    FC types need to be stored once they are determined. A zero matrix FJi with the same dimensions as Ei is defined and updated as

    $FJ_i(j+1, w_1+bc_0+1) \leftarrow 2bc_0+2$ (21)
    $FJ_i(j+1, w_1) \leftarrow -(2bc_1+2)$ (22)

    Each positive value in FJi corresponds to a Furcation type, and a negative value corresponds to a Confluence type.

    Note that another trait of Furcation should be recorded. Figure 9 shows two different two-Furcations. In Figure 9a, the intersection of the Furcation on the upper JL lies in a valley; on the contrary, in Figure 9b it lies in a ridge. This trait is recorded by a matrix LXi, which has the same dimensions as Ei with every element initialized to zero, and is updated in the following way

    $LX_i(j+1, w_1+bc_0+1) \leftarrow YS_i(j,w_0)$ (23)
    Figure 9.  Two kinds of two-Furcation.

    Equation (23) is used once a Furcation is detected.

    Now Step 2 is finished; go to Step 3 to assign the value of JF.

    Step 3: The value of JF is bound up with the FC type. Let jf0 be the JF of a Furcation, jf1 the JF of a Confluence, and a0 and a1 the cumulative values of JF. The assignments of these factors are as follows:

    $jf_0 \leftarrow 2bc_0+2$ (24)
    $jf_1 \leftarrow 2bc_1+2$ (25)
    $a_0 \leftarrow a_0 + jf_1$ (26)
    $a_1 \leftarrow a_1 + jf_0$ (27)

    where a0 and a1 are cumulative values of JF, both initialized to 0. Once a Furcation is detected and its type determined, jf0 is calculated and the corresponding a1 is updated; Confluences are processed in a similar way.

    Steps 1-3 are repeated over the entire Ei. The final result FJi contains all Furcations and Confluences that have been extracted.
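    The Step 1-3 loop can be illustrated for a single pair of adjacent JLs. The sketch below is deliberately simplified (ideal situation, only two-Furcations/two-Confluences, no ECF handling, 0-based indices, function name ours), but it shows how the cumulative Jumper Factors keep the two traversal positions aligned after an FC is found.

```python
def traverse_pair(upper, lower):
    """Simplified Step 1-3 traversal for one pair of adjacent JLs.
    upper/lower: zero-padded lists of run widths (rows of E).
    Returns a list of ('Furcation'|'Confluence', intersection index)."""
    fcs = []
    a0 = a1 = 0                        # cumulative Jumper Factors
    w = 0
    while True:
        w0, w1 = w + a0, w + a1        # per-row positions with cw0 = cw1 = 0
        if w0 + 1 >= len(upper) or w1 + 1 >= len(lower):
            break
        if 0 in (upper[w0], upper[w0 + 1], lower[w1], lower[w1 + 1]):
            break                      # ran past the recorded intersections
        cz0 = sum(lower[:w1 + 2]) - sum(upper[:w0 + 1])   # cf. Eq (15)
        cz1 = sum(upper[:w0 + 2]) - sum(lower[:w1 + 1])   # cf. Eq (16)
        if cz0 < 0:                    # Furcation at upper intersection w0
            fcs.append(("Furcation", w0))
            a1 += 2                    # jf0 = 2 for a two-Furcation
        elif cz1 < 0:                  # Confluence at lower intersection w1
            fcs.append(("Confluence", w1))
            a0 += 2                    # jf1 = 2 for a two-Confluence
        w += 1
    return fcs
```

    Without the Jumper Factor updates (a0/a1), the positions on the two rows drift apart after the first detection and spurious FCs are reported, which is exactly the error Figure 6 illustrates.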

    After FC extraction, the matrix FJi contains all candidate FCs of the local region of core Cn. False FCs exist in FJi due to two kinds of noise (introduced in Section 3.1). We analyzed the effect of noise on FCs and propose an algorithm to remove false FCs.

    As shown in Figure 10, islands, lakes, independent ridges, and the imperfections of the Binarization algorithm are regarded as the first kind of noise, which results in noisy binarized images (as shown in Figure 10e). False FCs are detected in these noisy regions (as shown in Figure 10f). This kind of noise could be handled by developing a special Binarization algorithm that yields an ideal binary image, as shown in Figure 11. However, this idea has two main drawbacks: such a special Binarization algorithm is difficult to design, and the second kind of noise cannot be removed by any Binarization algorithm.

    Figure 10.  The first kind of noise and the corresponding effects on FCs: (a) island; (b) lake; (c) independent ridge; (d) the insufficiency of the Binarization algorithm; (e) the effects of (a)-(d) on a binary image; (f) the occurrence of false FCs.
    Figure 11.  An ideal binary image.

    Figure 12 shows the second kind of noise, which is caused by the discrete nature of the digital image. Figure 12a shows an ideal binary image with three adjacent JLs. By magnifying the blue rectangular area, the spatial relationship between each JL and the ridge can be illustrated clearly, as shown in Figure 12b-d respectively. Note that in Figure 12b-d, each square represents a pixel: the red ones are pixels on the JLs, the light gray ones are pixels on the ridges, and the dark gray ones are pixels where the JLs intersect the ridges. From Figures 12b,c a four-Furcation is detected, while from Figures 12c,d a two-Confluence is detected. However, by definition, only a two-Furcation should be detected from these three JLs.

    Figure 12.  The second kind of noise and the corresponding effect on FC.

    This kind of misdetection can be eliminated by padding pixels at specific positions in the binary image. For the case of Figure 12, the corresponding added pixel is shown in Figure 13; only a two-Furcation is then detected from Figure 13b,c. Theoretically, this method can achieve our goal, but technically it is difficult to implement: we have not found a unified rule to deal with this specific situation.

    Figure 13.  The method of adding pixels to remove the second kind of noise.

    Either developing a special Binarization algorithm or designing a pixel padding method is tremendously difficult, so we experimented with alternative ways of validating FCs. Through experiments, we found correlations between false FCs that can be used to eliminate them.

    The first correlation is that a Furcation and a Confluence are always detected together in a given noisy region. Take Figure 14, which contains a lake-like noisy region, as an example: a false two-Furcation and a false two-Confluence appear together. The same phenomenon can be seen in Figure 12, where a false four-Furcation is followed by a false two-Confluence.

    Figure 14.  False FCs in a lake-like noisy region.

    Therefore, the first correlation of false FCs can be stated as:

    Correlation 1. False FCs always appear in pairs.

    The second correlation concerns the coordinates of paired false FCs in FJi. Let fc_t be a false Furcation with coordinate (f_t, c_t) in FJi, and let jh_t(j_t, h_t) be the paired false Confluence. Two quantitative relations between them can be expressed as:

    1) |f_t − j_t| < T (T = 12 for an image with a resolution of 500 dpi; the value of T should vary with the image resolution);

    2) $$\begin{cases} lb(jh_t) \ge lb(fc_t) - \dfrac{FJ_i(f_t, c_t)}{2} \\[4pt] lb(jh_t) \le lb(fc_t) + \dfrac{FJ_i(f_t, c_t)}{2} - 2 \end{cases} \tag{28}$$

    where lb(z) denotes the normalized column coordinate of z. The second correlation can therefore be stated as:

    Correlation 2. Paired false FCs satisfy the above two quantitative relations.

    If we let

    $$\begin{cases} a_t = lb(fc_t) - \dfrac{FJ_i(f_t, c_t)}{2} \\[4pt] b_t = lb(fc_t) + \dfrac{FJ_i(f_t, c_t)}{2} - 2 \end{cases} \tag{29}$$

    then Eq (28) can be written as

    $$lb(jh_t) \in [a_t, b_t] \tag{30}$$

    The validity of Eq (30) is illustrated by the following examples.

    Let's reconsider Figure 14 as the first example: let the top red line be the j_1-th JL, the bottom red line the (j_1+m_1)-th JL, the false Furcation fc_t1, and the false Confluence jh_t1. The coordinate of the Furcation in FJi can then be written as fc_t1(j_1, 3), and that of the Confluence as jh_t1(j_1+m_1, 2). The type of fc_t1 is FJ_i(j_1, 3) = 2. Note that the normalized column coordinate of each of these two FCs is equal to its column coordinate:

    $$\begin{cases} lb(fc_{t1}) = 3 \\ lb(jh_{t1}) = 2 \end{cases}$$

    It then follows directly from Eq (29) that

    $$\begin{cases} a_{t1} = 2 \\ b_{t1} = 2 \end{cases}$$

    which means that Eq (30) is satisfied:

    $$lb(jh_{t1}) \in [a_{t1}, b_{t1}]$$

    Take Figure 12 as the second example, and let fc_t2 be the false Furcation and jh_t2 the false Confluence. As the normalized column coordinate is again equal to the column coordinate, a similar calculation gives

    $$lb(jh_{t2}) = 2, \qquad [a_{t2}, b_{t2}] = [1, 3]$$

    which means that Eq (30) is satisfied.

    Equation (30) also holds for special cases such as Figures 15 and 16. Let the false Furcation in Figure 15 be fc_t3 and the false Confluence jh_t3; then lb(jh_t3) = 1 and [a_t3, b_t3] = [1, 3], which satisfy Eq (30). For Figure 16, let the false Furcation and false Confluence be fc_t4 and jh_t4, respectively; then lb(jh_t4) = 3 and [a_t4, b_t4] = [1, 3], which also satisfy Eq (30).

    Figure 15.  A special case of false Furcation and false Confluence.
    Figure 16.  Another special case of false Furcation and false Confluence.
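    The check behind Correlation 2 can be condensed into a small predicate. The following Python sketch is illustrative only: the function name and argument layout are ours, and the column coordinates are assumed to be already normalized as discussed below.

    ```python
    def satisfies_correlation_2(f_row, f_col, f_type, c_row, c_col, T=12):
        """Check whether a Furcation/Confluence pair satisfies the two
        quantitative relations of Correlation 2 (Eqs 28-30).

        f_row, f_col : row index and normalized column lb(fc_t) of the Furcation
        f_type       : its value FJi(f_t, c_t)
        c_row, c_col : row index and normalized column lb(jh_t) of the Confluence
        T            : row-distance threshold (12 at 500 dpi)
        """
        if abs(f_row - c_row) >= T:            # relation 1: |f_t - j_t| < T
            return False
        a_t = f_col - f_type / 2               # Eq (29)
        b_t = f_col + f_type / 2 - 2
        return a_t <= c_col <= b_t             # Eq (30): lb(jh_t) in [a_t, b_t]

    # First example (Figure 14): lb(fc)=3, type 2, lb(jh)=2 -> [a, b] = [2, 2]
    print(satisfies_correlation_2(10, 3, 2, 12, 2))   # True
    # Second example (Figure 12): lb(fc)=3, type 4, lb(jh)=2 -> [a, b] = [1, 3]
    print(satisfies_correlation_2(10, 3, 4, 11, 2))   # True
    ```

    The two printed cases reproduce the worked examples above; a pair whose rows are farther apart than T is rejected without checking Eq (30).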

    FC validation is based on the two correlations discussed above. It relies on the concept of the normalized column coordinate, which is explained in the following section.

    Normalization of columns is usually necessary because the ridges/valleys around FCs affect the calculation of Eq (30). As shown in Figure 17, the coordinates of the paired FCs A and B should satisfy Eq (30), but because of the influence of ED and the left ridge they do not. Without correction, the correlation-based FC validation discussed above would therefore fail.

    Figure 17.  The effects of surrounding fingerprint lines on the coordinates of FCs.

    The effect of ridges/valleys can be offset by a method called column coordinate normalization. One can see from Figure 17 that these effects can be divided into two categories:

    1) The effect of ED.

    2) The effect of ridges on the left side.

    Column coordinate normalization aims at offsetting these two kinds of effects.

    The effect of ED on adjacent JLs was discussed in the previous section. Here, we need to offset the ED between any two JLs. The Edge Compensating Vector (ECV) records the number of EDs between any JL and the first JL, with the first JL serving as the benchmark. The ECV can be calculated recursively from the ECF as

    $$B_v(1, j+1) = cw_1 - cw_0 + B_v(1, j) \tag{31}$$

    where B_v is the ECV, with length 60 and initial value zero, and cw_1 and cw_0 are the ECFs of the adjacent JLs numbered j+1 and j, respectively. After traversing the 60 JLs with Eq (31), the vector B_v is obtained.
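    Since the recursion of Eq (31) telescopes, B_v(1, j) reduces to the cumulative ECF difference against the first JL. A minimal Python sketch (the list layout and function name are our assumptions, not the article's):

    ```python
    def edge_compensating_vector(ecf):
        """Accumulate the Edge Compensating Vector Bv from per-JL ECF values
        via the recursion of Eq (31): Bv[j+1] = (cw1 - cw0) + Bv[j]."""
        bv = [0] * len(ecf)                  # first JL is the benchmark, Bv[0] = 0
        for j in range(len(ecf) - 1):
            bv[j + 1] = ecf[j + 1] - ecf[j] + bv[j]
        return bv

    print(edge_compensating_vector([5, 7, 6]))   # [0, 2, 1]
    ```

    The telescoping means each entry is simply ecf[j] − ecf[0], but the recursive form mirrors Eq (31) directly.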

    We will use Figure 17 as an example to explain the idea of eliminating the effect of the left ridge, and then discuss the numerical implementation of column coordinate normalization.

    For the false FCs in Figure 17, it is easy to see that if the left-side ridge marked by the blue rectangle were "erased", its effect on the FCs' column coordinates would disappear. We do not actually erase these ridges; instead, we adjust the column coordinates to achieve the effect of erasure. The basic idea is: for Figure 17, traverse the JLs one by one from A to B, and decrease or increase the column coordinate of A whenever ridges appear or disappear on its left side. The effect of the left-side ridges on A and B is thereby eliminated.

    Let fc_t5(f_t5, c_t5) be a Furcation and jh_t5(j_t5, h_t5) a Confluence below it. The process of column coordinate normalization is given as pseudocode in Algorithm 3.

    Algorithm 3 The process of column coordinate normalization
    1: Calculate the absolute columns of the FCs as
       $$WF(fc_{t5}) = c_{t5} + B_v(1, f_{t5}) - \frac{FJ_i(f_{t5}, c_{t5})}{2}, \qquad WJ(jh_{t5}) = h_{t5} + B_v(1, j_{t5}) \tag{32}$$
       /* where WF(fc_t5) is the absolute column of the Furcation and WJ(jh_t5) is the absolute column of the Confluence */
    2: Assign WF(fc_t5) to wf, and traverse the elements of FJi between (f_t5+1, 1) and (j_t5, h_t5−1). For each non-zero element fj, whose coordinate is (x, y), use Eq (32) to calculate the absolute column of fj
    3:   if WF(fj) < wf or WJ(fj) < wf then
    4:     wf ← wf + FJ_i(x, y)   (33)
    5:   end if /* wf and WJ(jh_t5) are the normalized columns of the Furcation and the Confluence, respectively */
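    Algorithm 3 can be sketched in Python as follows. This is a loose interpretation: the FJi layout (a per-JL list of FC values), 0-based indexing, and the convention that Confluence values are negative are our assumptions, not the article's exact data structures.

    ```python
    def normalize_furcation_column(fji, bv, f_row, f_col, c_row):
        """Sketch of Algorithm 3: normalized column of a Furcation at
        (f_row, f_col), relative to a Confluence at row c_row.

        fji : list of rows; fji[x][y] is the FC value at (x, y), 0 if none
        bv  : Edge Compensating Vector, one entry per JL (Eq 31)
        """
        # Eq (32): absolute column of the Furcation
        wf = f_col + bv[f_row] - fji[f_row][f_col] / 2
        # Walk the JLs strictly between the pair; each FC whose absolute
        # column lies to the left of wf shifts wf by its value (Eq 33)
        for x in range(f_row + 1, c_row):
            for y, v in enumerate(fji[x]):
                if v != 0:
                    w = y + bv[x] - (v / 2 if v > 0 else 0)   # Eq (32)
                    if w < wf:
                        wf += v
        return wf
    ```

    With no intermediate FCs the result is just the absolute column of Eq (32); intervening FCs to the left shift it, mimicking the "erasure" of left-side ridges described above.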


    The two correlations discussed above can be used to detect paired false FCs and to correct or eliminate them.

    Let's take Figures 12 and 14 as examples to illustrate this rule. In Figure 14, the value of the false Furcation is FJ_i(j_1, 3) = 2 and the value of the false Confluence is FJ_i(j_1+m, 2) = −2, so their sum is 0. Assigning this sum to FJ_i(j_1, 3) and to FJ_i(j_1+m, 2) corrects the paired false FCs. For Figure 12, the value of the false Furcation is FJ_i(j_2, 3) = 4 and the value of the false Confluence is FJ_i(j_2+1, 2) = −2, so their sum is 2. Assigning this sum to FJ_i(j_2, 3) and 0 to FJ_i(j_2+1, 2) corrects the false FCs.

    Every pair of false FCs can be corrected or eliminated by the above method. For a pair of false FCs, let FJ_i(x, y) be the false Furcation and FJ_i(x′, y′) the false Confluence. The method can then be written as

    $$\begin{cases} a \leftarrow FJ_i(x, y) + FJ_i(x', y') \\ b \leftarrow 0 \end{cases} \tag{34}$$

    where

    $$\begin{cases} a = FJ_i(x, y),\; b = FJ_i(x', y'), & |FJ_i(x, y)| \ge |FJ_i(x', y')| \\ a = FJ_i(x', y'),\; b = FJ_i(x, y), & |FJ_i(x, y)| < |FJ_i(x', y')| \end{cases}$$
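    The correction rule of Eq (34) reduces to: replace the larger-magnitude member of the pair with the sum of the two values and zero the other. A minimal sketch (the function name is ours, and Confluence values are assumed negative, following the sign convention implied by the worked sums):

    ```python
    def correct_paired_fcs(furc, conf):
        """Correct a paired false Furcation/Confluence per Eq (34): the sum of
        the two values replaces the larger-magnitude one; the other becomes 0.
        Returns (new_furcation_value, new_confluence_value)."""
        s = furc + conf
        if abs(furc) >= abs(conf):
            return s, 0          # a <- furc + conf, b <- 0
        return 0, s

    print(correct_paired_fcs(2, -2))   # (0, 0) — Figure 14 case: both eliminated
    print(correct_paired_fcs(4, -2))   # (2, 0) — Figure 12 case: Furcation corrected
    ```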

    Taking all the aspects discussed in Section 3.2 into account, the validation process is given as pseudocode in Algorithm 4.

    Algorithm 4 FC validation process
    /* Assign initial values to some variables. */
    1: A ← [−8  −6  −4  −2], j ← 1
    2: pb ← A(1, j), ¯FJi ← FJi
    3: while there is any element in A that has not completed traversal of ¯FJi do
    4:   while ¯FJi(x1, y1) = pb does not exist do
    5:     /* where ¯FJi(x1, y1) represents any element of ¯FJi */
    6:     j ← j + 1
    7:     pb ← A(1, j)
    8:     traverse ¯FJi
    9:   end while
    10:  find (x2, y2) satisfying ¯FJi(x2, y2) > 0, where x2 ∈ [max(x1 − 12, 1), x1 − 1]
    11:  if found, and (x1, y1), (x2, y2) satisfy Eq (30) then
    12:    use Eq (34) to update ¯FJi(x1, y1) and ¯FJi(x2, y2)
    13:    j ← j + 1
    14:    pb ← A(1, j)
    15:  else
    16:    j ← j + 1
    17:    pb ← A(1, j)
    18:  end if
    19: end while /* Exit the loop when all the elements in A have finished traversal of ¯FJi */
    20: /* ¯FJi is the result of FC validation. */
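    The scan of Algorithm 4 can be approximated in Python as below. This is a sketch under several assumptions: the matrix layout and 0-based indexing are ours, the negative Confluence types in A follow the sign convention assumed above, the 12-row search window assumes 500 dpi input, and the `is_paired`/`correct` callbacks stand in for Eq (30) and Eq (34).

    ```python
    def validate_fcs(fji, is_paired, correct):
        """Sketch of Algorithm 4: for each Confluence type in A, scan the FC
        matrix, pair each matching Confluence with a Furcation in the
        preceding rows, and correct the pair.

        is_paired((x2, y2, v_furc), (x1, y1, v_conf)) -> bool   (Eq 30 check)
        correct(v_furc, v_conf) -> (new_furc, new_conf)         (Eq 34 rule)
        """
        out = [row[:] for row in fji]                 # working copy, i.e. FJi-bar
        for target in (-8, -6, -4, -2):               # the vector A
            for x1, row in enumerate(out):
                for y1, v in enumerate(row):
                    if v != target:
                        continue
                    # search Furcations within the 12 rows above (500 dpi)
                    for x2 in range(max(x1 - 12, 0), x1):
                        for y2, w in enumerate(out[x2]):
                            if w > 0 and is_paired((x2, y2, w), (x1, y1, v)):
                                out[x2][y2], out[x1][y1] = correct(w, v)
                                break
        return out
    ```

    For example, with `is_paired` always true and the Eq (34) rule as `correct`, a matrix holding one 2/−2 pair is zeroed out entirely.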


    A fraction of a crooked ridge is called a Curve, as shown by the red part in Figure 18. The core point is the summit of the innermost Curve. In Figure 19, the red ridge is the innermost Curve, and the blue star is the core.

    Figure 18.  Curve.
    Figure 19.  The summit of an innermost Curve and corresponding core point.

    As shown in Figure 20, red pixels denote Furcations or Confluences. Two Furcations are located on either side of a Curve, and they can be used as marks of the Curve. For each Curve, let fc_h1 be the upper Furcation and fc_h2 the other one. Three properties can be used to identify them:

    Figure 20.  Curves and the corresponding Furcations.

    1) The absolute difference between their row coordinates is less than th (th is determined by the resolution of the input images; for example, th = 12 for input images of 500 dpi);

    2) After column coordinate normalization, the absolute difference between their columns is 1;

    3) In LXi, the value of fch1 is 1, and the value of fch2 is 1.
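    The three properties above can be folded into a single predicate. In this Python sketch, the `lx` lookup into LXi is a hypothetical accessor, the FC positions are (row, column) tuples, and the columns are assumed to be already normalized:

    ```python
    def marks_a_curve(fc1, fc2, lx, th=12):
        """Check the three properties identifying a pair of Furcations that
        mark a Curve. fc1/fc2 are (row, normalized_col) tuples; lx maps an
        FC position to its value in LXi (hypothetical accessor)."""
        rows_close = abs(fc1[0] - fc2[0]) < th        # property 1
        cols_adjacent = abs(fc1[1] - fc2[1]) == 1     # property 2
        on_ridge = lx(fc1) == 1 and lx(fc2) == 1      # property 3
        return rows_close and cols_adjacent and on_ridge
    ```

    Algorithm 5 below effectively searches ¯FJi bottom-up for the first pair satisfying this predicate.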

    All Curves can be identified using the three properties above; in fact, only the innermost Curve is of interest. The extraction process of the innermost Curve is given as pseudocode in Algorithm 5.

    Algorithm 5 Innermost Curve extraction process
    1: while true do
    2:   while true do
    3:     Traverse ¯FJi and LXi from the bottom up to find an element that satisfies
            $$\begin{cases} \overline{FJ_i}(x_{h2}, y_{h2}) > 0 \\ LX_i(x_{h2}, y_{h2}) = 1 \end{cases} \tag{37}$$
    4:     /* An element that satisfies Eq (37) represents a Furcation lying in a ridge. Use fc_h2 to denote this Furcation. */
    5:     Traverse ¯FJi and LXi whose row coordinates lie in the range [x_h2 − th, x_h2] to find an element that satisfies
            $$\begin{cases} \overline{FJ_i}(x_{h1}, y_{h1}) > 0 \\ LX_i(x_{h1}, y_{h1}) = 1 \end{cases} \tag{38}$$
    6:     if found then
    7:       use fc_h1 to denote this Furcation
    8:       break
    9:     end if
    10:  end while
    11:  Use Eq (32) and Eq (33) to calculate the normalized column coordinates of fc_h1 and fc_h2
    12:  if the absolute difference between their normalized columns is 1 then
    13:    fc_h1 and fc_h2 are the two Furcations that mark the innermost Curve, recorded by
            $$\begin{cases} H_u(1, 1) \leftarrow (x_{h1}, y_{h1}) \\ H_u(2, 1) \leftarrow (x_{h2}, y_{h2}) \end{cases} \tag{39}$$
    14:    break
    15:  end if
    16: end while


    The summit of the innermost Curve is obtained by averaging fc_h1(x_h1, y_h1) and fc_h2(x_h2, y_h2). Since fc_h1 and fc_h2 are stored in ¯FJi, a mapping is needed to transfer coordinates of ¯FJi to coordinates of the input fingerprint image. Let (x, y) be the coordinate of an FC in ¯FJi; ȳ can be obtained by

    $$\bar{y} = \sum_{j=1}^{y-1} E_i(x, j) + \frac{E_i(x, y)}{2} \tag{35}$$

    where (x, ȳ) denotes the ȳ-th pixel of the x-th judge line. Finally, the coordinate of the core is obtained by

    $$x_c = \frac{x_{h1} + x_{h2}}{2}, \qquad y_c = \frac{\bar{y}_{h1} + \bar{y}_{h2}}{2} \tag{36}$$

    where (xc,yc) denotes the coordinate of the core.
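    Eqs (35) and (36) can be sketched as follows, assuming `ei[x]` holds the run lengths E_i(x, j) along the x-th judge line and that coordinates are 1-based as in the article; the function names are ours:

    ```python
    def core_from_furcations(ei, fc1, fc2):
        """Map the two Furcations marking the innermost Curve from FJi
        coordinates to image coordinates (Eq 35) and average them (Eq 36).

        ei  : ei[x-1][j-1] is the run length E_i(x, j) on the x-th judge line
        fc1, fc2 : 1-based (x, y) coordinates of fc_h1 and fc_h2 in FJi-bar
        """
        def to_pixel(x, y):
            # Eq (35): y_bar = sum_{j=1}^{y-1} E_i(x, j) + E_i(x, y) / 2
            return sum(ei[x - 1][:y - 1]) + ei[x - 1][y - 1] / 2

        (x1, y1), (x2, y2) = fc1, fc2
        xc = (x1 + x2) / 2                      # Eq (36)
        yc = (to_pixel(x1, y1) + to_pixel(x2, y2)) / 2
        return xc, yc
    ```

    For instance, with uniform run lengths of 4 pixels, Furcations at (1, 2) and (2, 3) map to pixel columns 6 and 10, giving a core at (1.5, 8.0).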

    For a direct comparison with the approach in [8], we use the same dataset (FVC2000-DB2) in our experiments. As manually defined cores always lie below the initial cores detected by the algorithm of [8], we only consider the area below the initial core. If no Curve is extracted, the initial core is taken as the final result; otherwise, the summit of the innermost Curve is taken as the final result.

    FVC2000-DB2 contains 880 fingerprint images from 110 individuals, captured by a commercial capacitive sensor with a resolution of 500 dpi. Excluding the initial cores that either could not be detected or were located near the boundary of the fingerprint images, the remaining 960 cores were suitable for our experiments.

    Figure 21 shows some examples of the experimental results. The pictures in the left column show the local regions of core points, in which black points (s_black) represent the initial cores detected by the method in [8], and red points (s_red) represent the final cores located by the method proposed in this paper. The pictures in the right column are binary images whose innermost Curves are marked in red. The results can be divided into three categories: Figure 21(a)–(c) belong to the first category, Figure 21(d)–(f) to the second, and Figure 21(g), (h) to the third.

    Figure 21.  The initial core and the located core.

    In the first category, s_red is closer to the manually defined core than s_black, meaning that core location accuracy increases. This category can be further divided into two kinds: the first, such as Figure 21(a), (b), has a relatively ideal binary image, while the second, such as Figure 21(c), does not. Some mistakes in the binary image do not affect the location results, which demonstrates the robustness of the proposed method.

    In the second category, core localization accuracy does not increase; s_red and s_black lie at nearly the same position. This category can also be divided into two kinds. In the first, as in Figure 21(d), s_black is already so close to the manually defined core that there is no room for improvement. In the second, as in Figure 21(e), (f), no Curves can be extracted owing to the less-than-ideal binarization results, and consequently s_black is taken as the core location result.

    In the third category, localization accuracy declines because of the noisy binary image: false innermost Curves are extracted, placing s_red further from the manually defined core than s_black.

    All experimental results fall into the above three categories. Table 1 shows the statistical results compared with the method of Jin and Kim [8] on FVC2000-DB2: the localization accuracy of 40.8% of cores increased, while that of 4.2% declined. The results in Figures 22 and 23 make clear that the proposed method yields a significant increase in core localization accuracy.

    Table 1.  The number of cores in each location-accuracy category.
    Location accuracy: Promoted | Stayed the same | Declined
    Number of cores: 391 | 529 | 40
    Percentage: 40.8% | 55.0% | 4.2%

    Figure 22.  The located cores from the extracted Furcations and Confluences: (a) the summits of Curves extracted from Furcations and Confluences; (b) the initial core from Jin and Kim [8] and the core located by the proposed method.
    Figure 23.  Overall performance comparison between the proposed method (blue curve) and Jin and Kim [8] (red curve) on FVC2000-DB2. The horizontal axis indexes individual fingerprints; the vertical axis gives, for each individual, the number of cores located by the corresponding method that coincide with the cores manually defined under Henry's definition.

    To reach pixel-level accuracy in singular point detection and to provide more accurate reference points for fingerprint alignment, this paper proposed a novel pixel-level core point localization method based only on spatial-domain fingerprint features. The method defined new fingerprint characteristics, called Furcation and Confluence, to represent the ridge/valley distribution in a core point area, and used them to extract the innermost Curve of the ridges. Furthermore, the correlation between Furcation and Confluence was utilized to remove false Furcations and Confluences, enhancing robustness against noise. Finally, the summit of this Curve is taken as the core point location. Experimental comparison with the method of Jin and Kim [8] on the FVC2000-DB2 database showed that the proposed method achieved better core localization accuracy for 40.8% of the samples, similar accuracy for 55%, and lower accuracy for 4.2%.

    This work was supported by the National Key Natural Science Foundation of China (No. U19B2016) and the National Natural Science Foundation of China (No. 60802047) through the Pattern Recognition and Information Security Lab (PRIS) at Hangzhou Dianzi University.

    All authors declare no conflict of interest in this paper.
