Research article

Modified pixel level snake using bottom hat transformation for evolution of retinal vasculature map

  • Received: 28 March 2021 Accepted: 15 June 2021 Published: 25 June 2021
  • Small changes in retinal blood vessels may indicate different pathological disorders that can ultimately cause blindness. Therefore, accurate extraction of the vasculature map of a retinal fundus image has become a challenging task for the analysis of different pathologies. The present study offers an unsupervised method for extraction of the vasculature map from retinal fundus images. This paper presents a methodology for the evolution of vessels using a Modified Pixel Level Snake (MPLS) algorithm based on the Black Top-Hat (BTH) transformation. In the proposed method, bimodal masking is first used to extract the mask of the retinal fundus image. Adaptive segmentation and global thresholding are then applied to the masked image to find the initial contour image. Finally, MPLS is used for evolution of the contour in all four cardinal directions using external, internal and balloon potentials. The proposed work is implemented in MATLAB. The DRIVE and STARE databases are used to evaluate the performance of the system, and performance metrics such as sensitivity, specificity and accuracy are reported. An average sensitivity of 76.96%, average specificity of 98.34% and average accuracy of 96.30% are achieved for the DRIVE database. The technique can also segment vessels of pathological images accurately, reaching an average sensitivity of 70.80%, average specificity of 96.40% and average accuracy of 94.41%. The present study provides a simple and accurate method for the detection of the vasculature map for normal as well as pathological fundus images. It can be helpful for the assessment of various retinal vascular attributes such as length, diameter, width, tortuosity and branching angle.

    Citation: Meenu Garg, Sheifali Gupta, Soumya Ranjan Nayak, Janmenjoy Nayak, Danilo Pelusi. Modified pixel level snake using bottom hat transformation for evolution of retinal vasculature map[J]. Mathematical Biosciences and Engineering, 2021, 18(5): 5737-5757. doi: 10.3934/mbe.2021290




    The retinal vascular network is a tree-like structure that includes arteries, arterioles, capillaries, veins and venules. It is useful for locating other anatomical features of the retina, such as the macula, fovea and optic disc, and for the automatic identification of pathological elements like hemorrhages, microaneurysms, exudates or lesions [1]. Because vascular diseases present a challenging health problem for society, an efficient vessel segmentation algorithm is needed for a better understanding and analysis of these diseases. Manual and semi-automatic segmentation of blood vessels is a tedious and time-consuming task, since both approaches require high skill and training; moreover, they are susceptible to errors. Fully automatic segmentation techniques overcome these problems and support the development of computer-aided diagnostic systems for the identification of various ophthalmic disorders. Accurate vessel segmentation is difficult because of the low contrast between the vasculature and the surrounding tissue, the presence of noise in the retinal image, variations in vessel width, shape, branching angle and image brightness, and the presence of lesions, exudates, hemorrhages and other pathologies.

    Although various segmentation techniques have been used [2,3,4] for different diseases and anatomical structures of the body, the main goal of this paper is to present a methodology for the extraction of vessels from fundus images. A modified Pixel Level Snake (PLS) technique is used for vessel extraction; PLS is an iterative technique in which internal and external forces drive the evolution of the contour pixels. Various segmentation approaches can be used for the extraction of blood vessels, including unsupervised approaches [5,6,7,8], supervised approaches [9,10,11,12,13,14,15,16], tracking approaches [17,18,19], deformable-model approaches [20,21,22,23,24,25,26], filtering approaches [27] and morphological approaches [28,31].

    Staal et al. [29] proposed a method for extracting image ridges from color retinal fundus images. Soares et al. [30] presented a technique in which the retinal fundus image is enhanced using a Gabor filter and classification is performed with a Bayesian classifier. Martinez-Perez et al. [32] proposed a multiscale feature extraction approach for segmenting the vasculature map from red-free and fluorescein retinal fundus images. You et al. [33] presented a scheme based on radial projection and a semi-supervised method for the extraction of the retinal vasculature map. Alonso-Montes et al. [40] proposed a PLS-based method, implemented and tested on a single instruction multiple data (SIMD) parallel processor array, for segmentation of the retinal vasculature map; execution time and accuracy were also analyzed. Perfetti et al. [41] proposed a cellular neural network (CNN) technique for vessel extraction. Normally, in PLS, the external potential is computed using edge-based techniques. In our proposed methodology, the BTH transformation is used for the computation of the external potential, which results in improved accuracy of the extracted vasculature map.

    The contributions of the proposed approach are as follows:

    ● Bimodal masking is applied for the extraction of the mask of the fundus image.

    ● Global thresholding is used for segmentation of vasculature map of fundus image.

    ● MPLS based on BTH transformation has been proposed for evolution of map in four cardinal directions.

    The remainder of the paper is structured as follows. Section 2 describes the materials used for the proposed work. Section 3 describes the methodology used for extraction of the vasculature map of the fundus image. Section 4 presents the results and discussion, and Section 5 concludes the work.

    For analysis, the color retinal fundus images were taken from the DRIVE (Digital Retinal Images for Vessel Extraction) database [42]. This database contains 40 images: 20 test images and 20 training images. Of the 40 images, 7 are pathological and 33 are normal. The images were acquired at a 45° field of view (FOV) using a Canon CR5 non-mydriatic 3CCD camera, and each image has a size of 565 × 584 pixels.

    After implementing the algorithm on the DRIVE dataset, simulations were also performed on the 20 images (700 × 605 pixels each) of the STARE (Structured Analysis of the Retina) database [43].

    Pixel-based classification is used for extraction of the vasculature map from the fundus image. Each pixel is classified according to whether it belongs to a vessel or to the surrounding tissue, so four different events are possible: two correct classifications and two misclassifications. True positive (TP) and true negative (TN) are the two correct classifications, and false positive (FP) and false negative (FN) are the two misclassifications used for evaluating the performance metrics. An event is a TP if a vessel pixel is correctly identified as a vessel, and a TN if a non-vessel pixel (a pixel in the surrounding tissue) is correctly identified as non-vessel. An event is an FN if the predicted pixel is labeled non-vessel but is actually a vessel pixel, and an FP if the predicted pixel is labeled vessel but is actually a non-vessel pixel. The important performance metrics derived from these events are sensitivity (SN), specificity (SP) and accuracy (Acc).

    The SN metric represents the ability of a segmentation method to detect vessel pixels. SN is defined as the ratio of TP to the sum of TP and FN and ranges between 0 and 1; a higher SN means the algorithm identifies vessel pixels more reliably. The SN measure is expressed by Eq (1).

    SN = TP / (TP + FN)    (1)

    The SP metric represents the ability of a segmentation algorithm to detect background (non-vessel) pixels. SP is defined as the ratio of TN to the sum of TN and FP and also ranges between 0 and 1; a higher SP means the algorithm identifies non-vessel pixels more reliably. The SP measure is expressed by Eq (2).

    SP = TN / (TN + FP)    (2)

    Acc is evaluated as the ratio of the total number of correctly classified pixels, i.e., the sum of TP and TN, to the total population, i.e., the total number of pixels in the image. The formula for accuracy is expressed by Eq (3).

    Acc = (TP + TN) / (TP + TN + FP + FN)    (3)
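
    The three metrics follow directly from the four pixel counts. A minimal MATLAB sketch is given below, assuming two binary images of equal size, pred (the extracted map) and gt (the manually segmented ground truth); the variable names are illustrative.

        % Pixel-wise confusion counts between the extracted map and the ground truth
        TP = nnz( pred &  gt);    % vessel pixels correctly detected
        TN = nnz(~pred & ~gt);    % background pixels correctly rejected
        FP = nnz( pred & ~gt);    % background pixels wrongly marked as vessel
        FN = nnz(~pred &  gt);    % vessel pixels that were missed
        SN  = TP / (TP + FN);                     % Eq (1)
        SP  = TN / (TN + FP);                     % Eq (2)
        Acc = (TP + TN) / (TP + TN + FP + FN);    % Eq (3)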

    The proposed methodology for automated extraction of the vasculature map of the fundus image is presented in Figure 1. The main components of the pre-processing stage are RGB-to-gray conversion, mask generation using bimodal masking, and contrast enhancement using CLAHE (Contrast Limited Adaptive Histogram Equalization).

    Figure 1.  Proposed algorithm.

    Initially, the RGB fundus image (1_test of the DRIVE database) is read and converted to a gray-scale image for vessel segmentation. Converting RGB to gray also reduces the time required for processing the image. Different weights for the R, G and B components are used for the conversion, which is performed using the formula in Eq (4) [44].

    G = 0.2989r + 0.5870g + 0.1140b    (4)

    where r, g and b denote the red, green and blue channels of the fundus image, respectively, and G is the gray image produced after conversion. The green channel carries the most information in a fundus image, so its weight is chosen larger than those of the other channels. The RGB image and its corresponding gray-scale image are shown in Figure 2(a),(b), respectively.
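
    As a brief illustration, Eq (4) can be evaluated directly on a double-precision RGB image; the file name below is only an assumed example from the DRIVE test set, and MATLAB's rgb2gray uses the same channel weights.

        I = im2double(imread('01_test.tif'));    % assumed DRIVE test image name
        G = 0.2989*I(:,:,1) + 0.5870*I(:,:,2) + 0.1140*I(:,:,3);   % Eq (4)
        % equivalently: G = rgb2gray(I);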

    Figure 2.  (a) Original image (b) Grayscale image (c) Flowchart for generation of mask (d) Histogram of image (e) Mask generated using bimodal masking (f) Mask generated using thresholding.

    The vasculature map of a retinal fundus image is used for the detection and monitoring of diseases, and applying suitable image processing techniques to fundus images can speed up their analysis. Since the analysis requires considerable time and computational effort, operations should be focused only on the object pixels. To obtain the object pixels, a binary mask is first generated and then multiplied with the original image to obtain the image required for segmentation. The flow chart for mask generation using bimodal masking is shown in Figure 2(c). The mask is generated from the gray image obtained after RGB-to-gray conversion. First, the histogram of the image is generated, as shown in Figure 2(d), and its dominant peaks and valleys are identified [45]. The second valley is then selected as the threshold level for converting the gray image into a binary image. The binary image produced after thresholding is termed the final mask, shown in Figure 2(e). Figure 2(f) shows the mask produced by simple thresholding; it can be seen that the mask produced by bimodal masking is more accurate than the one produced by simple thresholding.
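
    A minimal sketch of the bimodal masking step is shown below; the histogram smoothing and the valley search are assumptions made for illustration, not the authors' exact implementation.

        counts  = imhist(G);                  % 256-bin histogram of the gray image (Figure 2(d))
        sm      = movmean(counts, 9);         % smooth the histogram to suppress spurious valleys
        d       = diff(sm);
        valleys = find(d(1:end-1) < 0 & d(2:end) >= 0) + 1;   % local minima (valleys) of the histogram
        T       = (valleys(2) - 1) / 255;     % second valley taken as the threshold level
        mask    = imbinarize(G, T);           % binary FOV mask (Figure 2(e))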

    The CLAHE operation is performed to enhance the contrast of the image. CLAHE works on small regions of the image, called 'tiles', rather than on the entire image. Contrast enhancement is performed tile-wise, after which neighboring tiles are combined using bilinear interpolation. In this paper, a two-level enhancement is performed using CLAHE, i.e., CLAHE is applied twice to obtain a properly enhanced image. The tile size used for CLAHE is [8 8] and the number of bins is 128. The enhanced image produced after applying CLAHE is shown in Figure 3(a).
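
    The two-level enhancement can be reproduced with two successive calls to adapthisteq using the stated tile size and bin count; the sketch below assumes a masked gray image obtained by multiplying the gray image with the bimodal mask.

        grayMasked = G .* double(mask);                                     % restrict processing to the FOV
        enh = adapthisteq(grayMasked, 'NumTiles', [8 8], 'NBins', 128);     % first CLAHE pass
        enh = adapthisteq(enh,        'NumTiles', [8 8], 'NBins', 128);     % second CLAHE pass (Figure 3(a))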

    Figure 3.  (a) Enhanced image (b) Block diagram of adaptive segmentation (c) Average image (d) Segmented image (e) Initial contour image (f) Contour image without border.

    The main component of this stage is global thresholding, which is used for extraction of the initial contour.

    The enhanced image produced by CLAHE is used to generate the segmented image. Adaptive segmentation is required first because the gray values along the vessels in the retinal vasculature are non-uniform. The input to adaptive segmentation is the preprocessed (enhanced) image and the output is the segmented image; Figure 3(b) shows the corresponding block diagram. An average filter of size 9 is applied to the preprocessed image to produce the averaged image shown in Figure 3(c). A subtracted image is then produced by taking the difference between the averaged image and the preprocessed image, and a global thresholding technique is applied to the subtracted image to compute the threshold level. The global thresholding technique (Algorithm 1) is stated as follows (a MATLAB sketch of this procedure is given after Algorithm 1):

    Algorithm 1: Global thresholding technique.
    1. Choose an initial random threshold T for segmentation. This threshold is called the global threshold.
    2. Using threshold T, segment the fundus image. Two groups of pixels are produced:
    (i) All pixels having value more than T, belong to group G1.
    (ii) All pixels having value less than or equal to T, belong to group G2.
    3. Evaluate the average intensities m1 and m2 of both the groups G1 and G2 respectively.
    4. Compute a new threshold T = (1/2)(m1 + m2).
    5. Repeat steps 2-4 until the difference between successive thresholds is smaller than a predefined value.
    6. Segment the image using T as the threshold value.
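
    A minimal MATLAB sketch of the adaptive segmentation step followed by Algorithm 1, applied to the enhanced image enh from the previous sketch; the initial threshold (the image mean) and the stopping tolerance are illustrative assumptions.

        avg = imfilter(enh, fspecial('average', 9), 'replicate');   % averaged image (Figure 3(c))
        sub = avg - enh;                                            % subtracted image
        T   = mean(sub(:));                   % step 1: initial threshold (any reasonable start works)
        dT  = Inf;
        while dT > 0.5/255                    % step 5: stop when T changes by less than a small tolerance (assumed)
            m1 = mean(sub(sub >  T));         % steps 2-3: mean of group G1 (pixels above T)
            m2 = mean(sub(sub <= T));         % steps 2-3: mean of group G2 (pixels at or below T)
            Tnew = 0.5 * (m1 + m2);           % step 4: updated threshold
            dT   = abs(Tnew - T);
            T    = Tnew;
        end
        segmented = sub > T;                  % step 6: segmented image (Figure 3(d))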


    Using this threshold, the subtracted image is converted to a binary image, called the segmented image, shown in Figure 3(d). This segmented image is used to find the initial contour, which is defined implicitly as the region boundary. A morphological closing is then applied to the segmented image using a disk-shaped structuring element (SE) of size 1, and small regions with fewer than 35 pixels are removed from the closed image. The resulting image, shown in Figure 3(e), is the initial contour of the image.

    The next task is to remove the border from the contour image, because the retinal vasculature map does not include the outer border visible in the initial contour image of Figure 3(e). The border is removed from the initial contour image using the mask produced by bimodal masking: the initial contour image is subtracted from the complement of the mask, pixels with values greater than 0 are assigned the value 1, and pixels with values less than 0 are assigned the value 0, yielding a contour image without a border. A morphological dilation with a disk-shaped SE of size 1 is then applied. The resulting image, shown in Figure 3(f), is used for evolution with MPLS.
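
    A sketch of the steps from the segmented image to the working contour of Figure 3(f); the subtraction against the mask complement is interpreted here as discarding contour pixels that fall outside the FOV, which matches the stated outcome.

        ic = imclose(segmented, strel('disk', 1));       % morphological closing, disk SE of size 1
        ic = bwareaopen(ic, 35);                         % remove regions smaller than 35 pixels (Figure 3(e))
        d  = double(ic) - double(imcomplement(mask));    % compare the contour with the region outside the FOV
        IC = imdilate(d > 0, strel('disk', 1));          % keep positive differences, then dilate (Figure 3(f))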

    The main components of the post-processing stage are contour evolution using the modified PLS and noise removal.

    In the PLS algorithm, the contour evolves by shifting its pixels, pixel by pixel, towards positions where the potential is minimum. Figure 4 shows the flowchart of the PLS algorithm, in which contour pixels evolve iteratively according to a potential field. This potential field comprises three potentials, named the internal, external and balloon potentials, whose weights are adjusted according to the application. The main component of the PLS algorithm is contour evolution; a topological transformation module is also used to handle merging and splitting of contours and, in particular, to avoid collisions between them.

    Figure 4.  Flow chart of PLS evolution.

    (1) External potential computation

    In existing methods, the external potential is calculated using edge-based techniques such as Sobel and Canny edge detection. In this paper, an MPLS is proposed in which the external potential of the image is calculated using the BTH transformation, because the external potential produced by the BTH method contains more information about the vasculature map, resulting in higher accuracy. Using this external potential, the total potential of the image is computed, which evolves the contour more efficiently than the previous methods. The flow chart for computing the external potential for the modified PLS is shown in Figure 5(a). The BTH transformation is applied to the green channel of the retinal fundus image (shown in Figure 5(b)) using three different structuring elements.

    Figure 5.  (a) Flow Chart for computation of external potential for MPLS (b) Green channel of masked image (c) External potential image (d) Complemented weighted external potential image after 1st Iteration.

    The BTH transform is evaluated by subtracting the input image from its morphological closing. The BTH transform of an image I is given by Eq (5).

    Tb(I) = (I • b) − I    (5)

    Here, I is the input image, b represents the SE and • denotes the closing operation. The output Tb(I) is the BTH-transformed image.

    Three disk-shaped structuring elements of sizes 2, 7 and 11 are used for the closing operations in the computation of the external potential. The sum of the three BTH-transformed images is then taken to produce the external potential image shown in Figure 5(c). For evolution of the contour towards minimum potential, the complement of the external potential is taken; the complemented external potential image (Pe) is shown in Figure 5(d).
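
    A sketch of the external potential computation with MATLAB's bottom-hat operator (imbothat implements Eq (5) directly), reusing I and mask from the earlier sketches; the normalization before the complement is an assumption.

        g   = im2double(I(:,:,2)) .* double(mask);       % green channel of the masked image (Figure 5(b))
        bth = imbothat(g, strel('disk', 2)) + ...
              imbothat(g, strel('disk', 7)) + ...
              imbothat(g, strel('disk', 11));            % sum of the three BTH responses (Figure 5(c))
        Pe  = imcomplement(mat2gray(bth));               % complemented external potential (Figure 5(d))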

    This is the potential that guides the contour of the image towards the edges of the vasculature map. If the image is static, the external potential is computed once; in applications such as real-time computer vision, where images are moving, the external potential is computed for each frame. PLS evolution is driven by the external potential, which is stronger in areas close to the edges.

    (2) Internal potential computation

    This potential helps maintain the smooth shape of the contour. During PLS evolution, vessel discontinuities are avoided using the internal potential, which is computed from the initial contour. The flow chart for computing the internal potential is shown in Figure 6(a). First, a binary contour-edge image is produced from the initial contour image (Figure 3(e)) using the expression C = IC and not (ICN and ICS and ICW and ICE). Here IC represents the initial contour of the image and ICN represents IC(x, y - 1), i.e., the active-region pixel in the NORTH direction from the current pixel IC(x, y). Similarly, ICE, ICW and ICS represent IC(x, y + 1), IC(x - 1, y) and IC(x + 1, y), i.e., the active-region pixels in the EAST, WEST and SOUTH directions from the current pixel IC(x, y), respectively. Figure 6(b) shows the edge image produced from the initial contour image. The edge image is then diffused using the anisotropic diffusion method with lambda = 0.25 and 20 iterations to obtain the internal potential field. Anisotropic diffusion, also called Perona-Malik diffusion, is a technique used to reduce noise in an image without removing important parts of the image content.

    Figure 6.  (a) Flow chart for computation of Internal Potential (b) Edge of contour image (c) Diffused image (d) Weighted internal potential image (e) complement of weighted image.

    Anisotropic diffusion is defined by Eq (6).

    ∂I/∂t = div(c(x, y, t) ∇I) = ∇c · ∇I + c(x, y, t) ΔI    (6)

    where Δ denotes the Laplacian, ∇ denotes the gradient, div is the divergence operator and c(x, y, t) is the diffusion coefficient, which depends on the local gradient magnitude ‖∇I‖. Perona and Malik proposed two functions for the diffusion coefficient, represented by Eqs (7a) and (7b).

    c(‖∇I‖) = exp(−(‖∇I‖ / K)²)    (7a)

    And

    c(‖∇I‖) = 1 / (1 + (‖∇I‖ / K)²)    (7b)

    The constant K controls the sensitivity to edges; here K is chosen as 40. The diffused image shown in Figure 6(c) is multiplied by a weight of 0.1 to obtain the internal potential image, and the complement of the internal potential image (Pi) is then taken so that the contour evolves in the proper direction. Figure 6(d),(e) show the weighted internal potential image and its complement, respectively.
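
    The internal potential can be reproduced with an explicit Perona-Malik loop. The sketch below uses the exponential conductivity of Eq (7a) with lambda = 0.25, K = 40 and 20 iterations; IC is the binary initial contour from the earlier sketch, and scaling the edge image to [0, 255] so that K = 40 is meaningful is an assumption.

        ICN = circshift(IC, [ 1 0]);  ICS = circshift(IC, [-1 0]);   % north / south neighbours
        ICW = circshift(IC, [ 0 1]);  ICE = circshift(IC, [ 0 -1]);  % west / east neighbours
        C   = IC & ~(ICN & ICS & ICW & ICE);       % binary contour-edge image (Figure 6(b))
        D   = 255 * double(C);                     % scaled so that K = 40 acts on a 0-255 range
        lambda = 0.25;  K = 40;
        for it = 1:20
            dN = circshift(D, [ 1 0]) - D;   dS = circshift(D, [-1 0]) - D;
            dE = circshift(D, [ 0 -1]) - D;  dW = circshift(D, [ 0 1]) - D;
            D  = D + lambda * (exp(-(dN/K).^2).*dN + exp(-(dS/K).^2).*dS + ...
                               exp(-(dE/K).^2).*dE + exp(-(dW/K).^2).*dW);   % Eq (6) with c from Eq (7a)
        end
        Pi = imcomplement(0.1 * mat2gray(D));      % weighted (0.1) and complemented internal potential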

    (3) Balloon potential computation

    When the external potential is too weak, it cannot guide the contour in all directions. In that case, the balloon potential produces forces that push the contour towards the object pixels. Initially, the PLS is controlled by the balloon potential because the contour is still far from the vessel edges. To obtain the balloon potential image, the initial contour is multiplied by a weight of 0.1. The flowchart for computing the balloon potential is shown in Figure 7(a), and Figure 7(b),(c) show the weighted balloon potential and the complement of the balloon potential image (Pb), respectively.

    Figure 7.  (a) Flow chart of balloon potential (b) weighted balloon potential image (c) complemented image (d) Potential image.

    (4) Guiding force extraction module

    The potential field is computed as the weighted sum of the external, internal and balloon potentials, as represented by Eq (8). All internal and external forces are produced through this potential field, which guides the evolution of the contour towards the minimum energy level.

    PT = Pe + Pi + Pb    (8)

    Here PT represents the total potential field of the image, Pe the external potential, Pi the internal potential and Pb the balloon potential. Figure 7(d) shows the potential field image produced after the addition of all potential images.
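
    The balloon and total potentials then follow in two lines (a sketch reusing the variable names of the earlier sketches).

        Pb = imcomplement(0.1 * double(IC));   % weighted initial contour, complemented (Figure 7(c))
        PT = Pe + Pi + Pb;                     % Eq (8): total guiding potential (Figure 7(d))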

    (5) Directional contour evolution with topological transformation and collision detection module

    Directional contour evolution is the most important part of the PLS technique. It is performed in the four cardinal directions: NORTH, EAST, WEST and SOUTH (NEWS). In each iteration, every contour pixel is moved towards the position where it acquires minimum potential.

    Expansion of the contour may result in merging and splitting of contours, but in retinal vasculature segmentation collisions between contours must be prevented. A collision detection module is therefore used: when the contour expands in each direction, two vessels may merge with each other, so the expansion is performed in such a way that there is no danger of collision. Figure 8(a) shows the different cases of collision danger. Keeping the danger of collision in mind, the expressions for expansion in the NORTH, SOUTH, EAST and WEST directions are derived below.

    Figure 8.  (a) Different cases for dangers of collisions (b) Initial contour (c) contour expanded in north direction (d) Initial contour matrix (e) Potential matrix (f) Expansion matrix in north direction.

    Expansion in north direction:

    Pixel-wise expansion of the initial contour is performed in the north direction based on the potential of the image. The expression for expansion in the NORTH direction is given by Condition (1).

    If (not D) and ICS and (PT < PTS), then IC becomes equal to 1.      Cond.(1)

    Here D = R or RE or RW and R = (not IC) and ICN.

    Here, PT represents the total potential field, PTS the potential field of the south neighbor, R the active/background pixel pairs in the vertical direction, and D the danger of collision, which is determined by taking the logical OR of the current pixel of R with its east (RE) and west (RW) neighbors.

    Expansion of the initial contour in the north direction is illustrated using a 10 × 10 matrix. Figure 8(b),(c) show the initial contour of 100 pixels and its version expanded in the north direction, respectively. Observe the pixel values and potentials of the initial contour highlighted with a red box in Figure 8(d),(e), respectively. If a pixel has value 0 and its potential is less than the potential of its south neighbor, then the value 1 is assigned to that pixel, as shown in Figure 8(f).
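
    One possible MATLAB realization of Condition (1) is sketched below: the circshift calls provide the NEWS neighbours (the circular wrap-around at the image border is ignored for brevity), and the logical update grows the active region by one pixel to the north wherever there is no danger of collision.

        ICN = circshift(IC, [ 1 0]);   ICS = circshift(IC, [-1 0]);  % active pixels to the north / south
        PTS = circshift(PT, [-1 0]);                                 % potential of the south neighbour
        R   = ~IC & ICN;                            % background pixel with an active pixel to its north
        D   = R | circshift(R, [0 -1]) | circshift(R, [0 1]);        % danger of collision: R or its E/W neighbours
        IC  = IC | (~D & ICS & (PT < PTS));         % Cond. (1): expand the contour northwards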

    Similarly, expressions in other directions can be computed by considering the danger of collision.

    Expression in south direction is given by Cond.(2)

    If (not D) and ICN and (PT < PTN) then IC becomes equal to 1.      Cond.(2)

    Here D = R or RE or RW and R = (not IC) and ICS.

    Expression in east direction is given by Cond.(3)

    If (not D) and ICW and (PT < PTW) then IC becomes equal to 1.      Cond.(3)

    Here D = R or RN or RS and R = (not IC) and ICE.

    Expression in west direction is given by Cond.(4)

    If (not D) and ICE and (PT < PTE) then IC becomes equal to 1.      Cond.(4)

    Here D = R or RN or RS and R = (not IC) and ICW.

    Images produced after expansion in N, S, E, W directions are represented by Figure 9(a)-(d).

    Figure 9.  Expanded image in (a) North (b) South (c) East (d) West direction.

    (6) Inversion

    The image produced after expansion in all directions is inverted to ensure that the contour evolves towards minimum potential. Inverting the active region produces a new contour shifted by one pixel, so the inverted image is not simply (not IC) but is computed using the following expression:

    Inv = (not IC) or c, where c = IC and not (ICN and ICS and ICW and ICE). The inverted image is shown in Figure 10(a). After inversion, evolution of the contour image is performed again in all directions, so expansion and contraction of the active and background regions take place in each iteration. A steady direction of the inflating and deflating forces is maintained by inverting the balloon potential after each iteration. Figure 10(b)-(e) show the contour expansion in all directions in the second iteration.
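
    A sketch of the inversion step, again reusing the neighbour notation above; the balloon potential is flipped at the end of each iteration as described.

        ICN = circshift(IC, [ 1 0]);  ICS = circshift(IC, [-1 0]);
        ICW = circshift(IC, [ 0 1]);  ICE = circshift(IC, [ 0 -1]);
        c   = IC & ~(ICN & ICS & ICW & ICE);   % one-pixel-wide contour of the active region
        IC  = ~IC | c;                         % Inv = (not IC) or c, shifted by one pixel (Figure 10(a))
        Pb  = imcomplement(Pb);                % invert the balloon potential for the next iteration
        PT  = Pe + Pi + Pb;                    % recompute the total potential from Eq (8)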

    Figure 10.  (a) Inverted image (b) Expanded image in north direction (c) Expanded image in south direction (d) Expanded image in east direction (e) Expanded image in west direction.

    In the last stage, small objects with fewer than 30 pixels are removed from the retinal vasculature obtained after MPLS evolution. Noise present outside the border is removed by multiplying the final vasculature map with the complement of the mask. A morphological closing with a disk-shaped SE of size 1 is then applied to the image. The result is the final vasculature map shown in Figure 11(a), which can be used further for the identification of various diseases.
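
    A sketch of the final clean-up; the text phrases the border suppression via the mask complement, which is implemented here as simply zeroing everything outside the FOV mask (an interpretation with the same effect).

        vessels = bwareaopen(IC, 30);                   % drop objects smaller than 30 pixels
        vessels = vessels & mask;                       % suppress noise outside the FOV border
        vessels = imclose(vessels, strel('disk', 1));   % final closing, disk SE of size 1 (Figure 11(a))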

    Figure 11.  (a) Extracted map (b) Ground truth.

    Various performance metrics (SN, SP and Acc) have been computed using the extracted vasculature map and the ground-truth map. The better Acc of the proposed algorithm is due to the extraction of an accurate binary mask of the fundus image, better image enhancement, and evolution of the vasculature map using the MPLS technique in all four cardinal directions. Figure 11(a),(b) show the extracted map and the ground-truth image of the original retinal fundus image.

    The algorithm was applied to all images of the DRIVE database. Figure 12 shows three normal images from the DRIVE database and their corresponding extracted vasculature maps.

    Figure 12.  (a)-(c) Original images, (d)-(f) Corresponding extracted vasculature map.

    Table 1 presents a comparative analysis of the SN, SP and Acc metrics for the DRIVE database. For comparison, the results of the proposed method are set against those obtained by Staal et al. [29], Soares et al. [30], Mendonça et al. [31], Martinez-Perez et al. [32], You et al. [33], Fraz et al. [34], Ravichandran et al. [35], Zhao et al. [36], Yin et al. [37], Frucci et al. [38], Zhang et al. [39], Adapa et al. [46] and Ma et al. [47]. For MPLS, the average SN is better than that of the existing methodologies except [47], and the SP is better than that of the existing methodologies except [38], because there is a trade-off between the SN and SP of an image. The Acc of the proposed methodology is better than that of all existing methodologies.

    Table 1.  Comparative analysis of SN, SP & Acc for DRIVE database.
    Method Year SN SP Acc
    Staal [29] 2004 0.7194 0.9773 0.9442
    Soares [30] 2006 0.7230 0.9762 0.9446
    Mendonça [31] 2006 0.7344 0.9764 0.9452
    Martinez-Perez [32] 2007 0.7246 0.9655 0.9344
    You [33] 2011 0.7410 0.9751 0.9434
    Fraz [34] 2012 0.7406 0.9807 0.9480
    Ravichandran [35] 2014 0.7259 0.9799 0.9574
    Zhao [36] 2014 0.7354 0.9789 0.9477
    Yin [37] 2015 0.7246 0.9790 0.9403
    Frucci [38] 2016 0.670 0.986 0.959
    Zhang [39] 2017 0.7861 0.9712 0.9466
    Adapa [46] 2019 0.6994 0.9811 0.945
    Ma [47] 2020 0.7875 0.9813 0.9566
    Proposed Method 2021 0.7696 0.9834 0.9630


    The algorithm has been tested on all test images and the pathological images of the DRIVE database. Figure 13 shows two pathological images of the DRIVE database and their extracted vasculature maps. Table 2 lists the SN, SP and Acc results for 5 pathological images of the DRIVE database; the average SN, SP and Acc for pathological images are 70.80%, 96.40% and 94.41%, respectively. Simulations have also been performed on 20 images of the STARE dataset. Table 3 shows a comparative analysis of the average SN, SP and Acc values for the STARE dataset; the proposed technique achieves high Acc on the STARE dataset as well, which demonstrates the robustness of the proposed algorithm.

    Figure 13.  (a)-(b) Pathological images of DRIVE database, (c)-(d) Corresponding extracted vasculature map.
    Table 2.  SN, SP & Acc values of pathological images of DRIVE database.
    Image SN SP Acc
    1 0.6769 0.9941 0.9664
    2 0.7617 0.9646 0.9519
    3 0.7483 0.9220 0.9083
    4 0.7012 0.9868 0.9702
    5 0.6522 0.9523 0.9239
    Average 0.7080 0.9640 0.9441

    Table 3.  Comparative analysis of SN, SP & Acc for STARE database.
    Method Year SN SP Acc
    Soares et al. [30] 2006 0.7181 0.9765 0.9500
    Fraz et al. [34] 2012 0.7262 0.9764 0.9511
    Azzopardi et al. [7] 2015 0.7716 0.9701 0.9497
    Li et al. [16] 2015 0.7726 0.9844 0.9628
    Roychowdhury et al. [8] 2016 0.7720 0.9730 0.9510
    Li et al. [48] 2017 0.7843 0.9837 0.9690
    WA-Net [47] 2020 0.7740 0.9871 0.9645
    Proposed Work 2021 0.7930 0.9895 0.9745


    Accurate segmentation of the vasculature map of the fundus image plays a crucial role in the diagnosis of various retinal disorders. In the proposed work, the binary mask of the fundus image is first generated using a bimodal masking technique and the vasculature map is extracted using a global thresholding technique. The MPLS technique is then used to evolve the contour in all directions and extract the vasculature map accurately. The simulation results demonstrate that the proposed technique can accurately extract vasculature maps of normal as well as pathological images. Since the methodology used for vessel extraction is unsupervised, no training is required, and vessel connectivity is maintained without any danger of collision. The extracted vasculature maps can further be used to find retinal features such as the macula, fovea and optic disc, or for the accurate automatic identification of pathological elements like hemorrhages, microaneurysms, exudates and lesions. Quantitative and objective assessment of arteriovenous nicking can also be performed in the future.

    We would like to show our gratitude to Dr. Mausumi Acharyya, AdvenioTechnoSys, India, for sharing her pearls of wisdom with us during this research. The authors would also like to thank the people who provided the public databases used in this work.

    The authors declare no conflict of interest in this paper.



    [1] Z. Yavuz, C. Köse, Blood vessel extraction in color retinal fundus images with enhancement filtering and unsupervised classification, J. Healthcare Eng., 2017 (2017).
    [2] J. Lei, X. You, M. Abdel-Mottaleb, Automatic ear landmark localization, segmentation, and pose classification in range images, IEEE Trans. Syst. ManCybern. Syst., 46 (2016), 165-176.
    [3] I. Oksuz, J. R. Clough, B. Ruijsink, E. P. Anton, A. Bustin, G. Cruz, et al., Deep learning-based detection and correction of cardiac MR motion artefacts during reconstruction for high-quality segmentation, IEEE Trans. Med. Imaging, 39 (2020), 4001-4010.
    [4] M. A. Khan, M. A. Khan, F. Ahmed, M. Mittal, L. M. Goyal, D. J. Hemanth, et al., Gastrointestinal diseases segmentation and classification based on duo-deep architectures, Pattern Recognit. Lett., 131 (2020), 193-204.
    [5] U. T. V. Nguyen, A. Bhuiyan, L. A. F. Park, K. Ramamohanarao, An effective retinal blood vessel segmentation method using multi-scale line detection, Pattern Recognit., 46 (2013), 703-715. doi: 10.1016/j.patcog.2012.08.009
    [6] M. Krause, R. M. Alles, B. Burgeth, J. Weickert, Fast retinal vessel analysis, J. Real Time Image Proc., 11 (2016), 413-422.
    [7] G. Azzopardi, N. Strisciuglio, M. Vento, N. Petkov, Trainable COSFIRE filters for vessel delineation with application to retinal images, Med. Image Anal., 19 (2015), 46-57. doi: 10.1016/j.media.2014.08.002
    [8] S. Roychowdhury, D. D. Koozekanani, K. K. Parhi, Blood vessel segmentation of fundus images by major vessel extraction and subimage classification, IEEE J. Biomed. Heal. Inf., 19 (2015), 1118-1128.
    [9] D. Marín, A. Aquino, M. E. Gegúndez-Arias, J. M. Bravo, A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features, IEEE Trans. Med. Imaging, 30 (2011), 146-158. doi: 10.1109/TMI.2010.2064333
    [10] P. Dai, H. Luo, H. Sheng, Y. Zhao, L. Li, J. Wu, et al., A new approach to segment both main and peripheral retinal vessels based on gray-voting and Gaussian mixture model, PLoS One, 10 (2015), e0127748.
    [11] S. Abbasi-Sureshjani, I. Smit-Ockeloen, J. Zhang, B. T. H. Romeny, Biologically-inspired supervised vasculature segmentation in SLO retinal fundus images, Int. Conf. Image Anal. Recognit., 2015 (2015), 325-334.
    [12] J. I. Orlando, E. Prokofyeva, M. B. Blaschko, A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images, IEEE Trans. Biomed. Eng., 64 (2017), 16-27. doi: 10.1109/TBME.2016.2535311
    [13] N. Strisciuglio, G. Azzopardi, M. Vento, N. Petkov, Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters, Mach. Vis. Appl., 27 (2016), 1137-1149. doi: 10.1007/s00138-016-0781-7
    [14] E. Ricci, R. Perfetti, Retinal blood vessel segmentation using line operators and support vector classification, IEEE Trans. Med. Imaging, 26 (2007), 1357-1365. doi: 10.1109/TMI.2007.898551
    [15] M. M. Fraz, A. R. Rudnicka, C. G. Owen, S. A. Barman, Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification, Int. J. Comput. Assist. Radiol. Surg., 9 (2014), 795-811. doi: 10.1007/s11548-013-0965-9
    [16] Q. Li, B. Feng, L. Xie, P. Liang, H. Zhang, T. Wang, A cross-modality learning approach for vessel segmentation in retinal images, IEEE Trans. Med. Imaging, 35 (2016), 109-118.
    [17] J. S. Kumar, K. Chitra, Segmentation of blood vessels using improved line detection and entropy based thresholding, J. Theor. Appl. Inf. Technol., 63 (2014), 233-239.
    [18] E. Bekkers, R. Duits, T. Berendschot, B. TerHaarRomeny, A multi-orientation analysis approach to retinal vessel tracking, J. Math. Imaging Vis., 49 (2014), 583-610. doi: 10.1007/s10851-013-0488-6
    [19] Y. Chen, Y. Zhang, J. Yang, Q. Cao, G. Yang, J. Chen, et al., Curve-like structure extraction using minimal path propagation with backtracking, IEEE Trans. Image Process., 25 (2016), 988-1003.
    [20] B. Al-Diri, A. Hunter, D. Steel, An active contour model for segmenting and measuring retinal vessels, IEEE Trans.Med. Imaging., 28 (2009), 1488-1497. doi: 10.1109/TMI.2009.2017941
    [21] Y. Zhao, L. Rada, K. Chen, S. P. Harding, Y. Zheng, Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images, IEEE Trans. Med. Imaging., 34 (2015), 1797-1807. doi: 10.1109/TMI.2015.2409024
    [22] K. W. Sum, P. Y. S. Cheung, Vessel extraction under non-uniform illumination: A level set approach, IEEE Trans. Biomed. Eng., 55 (2008), 358-360. doi: 10.1109/TBME.2007.896587
    [23] L. Espona, M. J. Carreira, M. G. Penedo, M. Ortega, Retinal vessel tree segmentation using a deformable contour model, Proc. Int. Conf. Pattern Recognit., 2008 (2008), 1-4.
    [24] A. Nieto, V. Brea, D. L. Vilariño, R. R. Osorio, Performance analysis of massively parallel embedded hardware architectures for retinal image processing, EURASIP J. Image Video Proc., 2011 (2011), 1-17.
    [25] B. Dizdaro, E. Ataer-Cansizoglu, J. Kalpathy-Cramer, K. Keck, M. F. Chiang, D. Erdogmus, Level sets for retinal vasculature segmentation using seeds from ridges and edges from phase maps, IEEE Int. Work. Mach. Learn. Signal Proc., 2012 (2012), 1-6.
    [26] Y. Tian, Q. Chen, W. Wang, Y. Peng, Q. Wang, F. Duan, et al., A vessel active contour model for vascular segmentation, Biomed Res. Int., 2014 (2014).
    [27] J. Zheng, P. R. Lu, D. Xiang, Y. K. Dai, Z. B. Liu, D. J. Kuai, et al., Retinal image graph-cut segmentation algorithm using multiscale Hessian-enhancement-based nonlocal mean filter, Comput. Math. Methods Med., 2013 (2013).
    [28] G. Hassan, N. El-Bendary, A. E. Hassanien, A. Fahmy, S. Abullahm, V. Snasel, Retinal Blood Vessel Segmentation Approach Based on Mathematical Morphology, Procedia Comput. Sci., 2015 (2015), 612-622.
    [29] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, B. Van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imaging., 23 (2004), 501-509. doi: 10.1109/TMI.2004.825627
    [30] J. V. B. Soares, J. J. G. Leandro, R. M. Cesar, H. F. Jelinek, M. J. Cree, Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification, IEEE Trans. Med. Imaging., 25 (2006), 1214-1222. doi: 10.1109/TMI.2006.879967
    [31] A. M. Mendonça, A. Campilho, Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction, IEEE Trans. Med. Imaging., 25 (2006), 1200-1213. doi: 10.1109/TMI.2006.879955
    [32] M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, K. H. Parker, Segmentation of blood vessels from red-free and fluorescein retinal images, Med. Image Anal., 11 (2007), 47-61. doi: 10.1016/j.media.2006.11.004
    [33] X. You, Q. Peng, Y. Yuan, Y. M. Cheung, J. Lei, Segmentation of retinal blood vessels using the radial projection and semi-supervised approach, Pattern Recognit., 44 (2011), 2314-2324. doi: 10.1016/j.patcog.2011.01.007
    [34] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, et al., An ensemble classification-based approach applied to retinal blood vessel segmentation, IEEE Trans. Biomed. Eng., 59 (2012), 2538-2548.
    [35] C. G. Ravichandran, J. B. Raja, A fast enhancement/thresholding based blood vessel segmentation for retinal image using contrast limited adaptive histogram equalization, J. Med. Imaging Heal. Inf., 4 (2014), 567-575. doi: 10.1166/jmihi.2014.1289
    [36] Y. Q. Zhao, X. H. Wang, X. F. Wang, F. Y. Shih, Retinal vessels segmentation based on level set and region growing, Pattern Recognit., 2014 (2014), 2437-2446.
    [37] B. Yin, H. Li, B. Sheng, X. Hou, Y. Chen, W. Wu, et al., Vessel extraction from non-fluorescein fundus images using orientation-aware detector, Med. Image Anal., 26 (2015), 232-242.
    [38] M. Frucci, D. Riccio, G. S. di Baja, L. Serino, Severe: Segmenting vessels in retina images, Pattern Recognit. Lett., 82 (2016), 162-169. doi: 10.1016/j.patrec.2015.07.002
    [39] J. Zhang, Y. Chen, E. Bekkers, M. Wang, B. Dashtbozorg, B. M. ter H. Romeny, Retinal vessel delineation using a brain-inspired wavelet transform and random forest, Pattern Recognit., 69 (2017), 107-123. doi: 10.1016/j.patcog.2017.04.008
    [40] C. Alonso-Montes, D. L. Vilariño, P. Dudek, M. G. Penedo, Fast retinal vessel tree extraction: A pixel parallel approach, Int. J. Circuit Theory Appl., 36 (2008), 641-651. doi: 10.1002/cta.512
    [41] R. Perfetti, E. Ricci, D. Casali, G. Costantini, Cellular neural networks with virtual template expansion for retinal vessel segmentation, IEEE Trans. Circuits Syst. Ⅱ Express Briefs., 54 (2007), 141-145.
    [42] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imaging., 23 (2004), 501-509.
    [43] A. Hoover, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imaging., 19 (2000), 203-210. doi: 10.1109/42.845178
    [44] M. D. Saleh, C. Eswaran, A. Mueen, An automated blood vessel segmentation algorithm using histogram equalization and automatic threshold selection, J. Digit. Imaging., 24 (2011), 564-572. doi: 10.1007/s10278-010-9302-9
    [45] D. V. S. X. de Silva, W. A. C. Fernando, H. Kodikaraarachchi, S. T. Worrall, A. M. Kondoz, Adaptive sharpening of depth maps for 3D-TV, Electron. Lett., 46 (2010), 1546-1548. doi: 10.1049/el.2010.2320
    [46] K. Ganesan, G. Naik, D. Adapa, A. N. J. Raj, S. N. Alisetti, Z. Zhuang, A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features, PLoS One, 15 (2020), e0229831.
    [47] Y. Ma, X. Li, X. Duan, Y. Peng, Y. Zhang, Retinal vessel segmentation by deep residual learning with wide activation, Comput. Intell. Neurosci., 2020 (2020).
    [48] M. Li, Z. Ma, C. Liu, G. Zhang, Z. Han, Robust retinal blood vessel segmentation based on reinforcement local descriptions, Biomed Res. Int., 2017 (2017).
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)