
Improper circulation of blood inside the retinal vessels is a primary source of many optical disorders, including partial vision loss and blindness. Accurate blood vessel segmentation of the retinal image is utilized for biometric identification, computer-assisted laser surgical procedures, automatic screening, and the diagnosis of ophthalmologic diseases such as diabetic retinopathy, age-related macular degeneration, and hypertensive retinopathy. Proper identification of retinal blood vessels at an early stage assists medical experts in taking expedient treatment steps that can mitigate potential vision loss. This paper presents an efficient retinal blood vessel segmentation approach in which a 4-D feature vector is constructed from the output of the Bendlet transform, which captures directional information much more efficiently than traditional wavelets. Afterward, a set of ensemble classifiers is applied to decide whether each pixel falls inside a vessel or non-vessel segment. Detailed and comprehensive experiments on two benchmark, publicly available retinal color image databases (DRIVE and STARE) prove the effectiveness of the proposed approach, with an average segmentation accuracy of approximately 95%. Furthermore, a comparison with other promising works on these databases demonstrates the enhanced performance and robustness of the proposed method.
Citation: Rafsanjany Kushol, Md. Hasanul Kabir, M. Abdullah-Al-Wadud, Md Saiful Islam. Retinal blood vessel segmentation from fundus image using an efficient multiscale directional representation technique Bendlets[J]. Mathematical Biosciences and Engineering, 2020, 17(6): 7751-7771. doi: 10.3934/mbe.2020394
Among all the organs in the human body, the eyes offer the most sensational experience, giving us the scope to appreciate beauty and to communicate with others through visual expression. Clear vision plays a vital role in our lives. However, various eye diseases can make it difficult to maintain healthy vision, which is why it is important to examine the different components of the eye regularly. Among these components, the retina is the most important, as it can exhibit the symptoms of most optical disorders. Different morphological properties of the retinal blood vessels, including branching pattern, length, tortuosity, angular features, and diameter, are the primary observation areas for detecting eye disease. A study [1] shows that hypertension can be identified by narrow arterioles with bright reflexes, while cardiovascular disease and diabetic retinopathy can be diagnosed from bifurcation angles and tortuosity information.
Accurate blood vessel segmentation is an important yet challenging task for determining early vision problems. Varied branching patterns, poor contrast, and low image resolution make the blood vessel segmentation problem cumbersome. Besides, an ill-defined optic disk and a random number of hemorrhages, cotton wool spots, and microaneurysms introduce noise that leads to false-positive detections. Moreover, it is especially troublesome to distinguish thin vessels from the background. Bifurcations, the central light reflex, arbitrary branch crossings, and contrast variation in the vessel map are the usual causes of failure to completely isolate the retinal blood vessels from a color fundus image.
The matched filtering methodology [2], based on the characteristics of the blood vessel, was the first significant achievement in the field of retinal blood vessel segmentation. Next, the multi-scale approach [3] was developed using a scale-space analysis of the maximum principal curvature. Later on, a morphological-processing-based solution was proposed [4], utilizing morphological reconstruction and vessel centerline detection. Machine learning-based techniques were also applied to isolate the vessel tree, where a handcrafted feature vector is formed before being passed to a suitable classifier. Nowadays, deep learning-based techniques [5] dominate this vessel segmentation application, as they do most segmentation and classification tasks in computer vision.
Wavelet-based methods have already proved their effectiveness in many applications, including image denoising, compression, texture analysis, and edge detection. The Gabor wavelet, curvelets [6], contourlets [7], and shearlets [8] are commonly employed to detect curve-like structures and have been applied by many researchers to retinal vessel segmentation. Bendlets [9] are the most recent addition to this family of wavelet-based multiscale directional transforms and can capture curvature with fewer coefficients than the others.
This paper presents an efficient and robust retinal blood vessel segmentation technique utilizing the new multiscale directional transform named Bendlets. Firstly, several preprocessing steps are followed to reduce false outcomes and prepare the image for the feature extraction stage. Secondly, to enhance the contrast of the input image, morphological top-hat and bottom-hat transforms are employed, which helps to detect thin vessels more effectively. Thirdly, the Bendlet transform is applied at different scales to construct a robust feature vector. Fourthly, a small group of ensemble classifiers is analyzed to find the most satisfactory classifier. Finally, to further improve accuracy, a post-processing step fills probable gaps inside vessels and removes isolated false candidates.
The intensity profile along the cross-section of a retinal blood vessel approximates a Gaussian-shaped curve. Chaudhuri et al. [2] applied a 2-D Gaussian kernel to separate blood vessels from the background. Based on this concept of matched filtering, Hoover et al. [10] designed a piecewise threshold probing algorithm considering some region-based properties. Zhang et al. [11] proposed a combination of a zero-mean Gaussian and the first-order derivative of the Gaussian filter, which resolves some of the limitations of the previous methods. Another variation was proposed by Odstrcilik et al. [12], where illumination correction and contrast equalization are performed before matched filtering. Chakraborti et al. [13] improved the concept of matched filtering by incorporating an orientation histogram and a vesselness filter.
Martinez-Perez et al. [3] introduced the idea of applying a multi-scale approach to detect vessels against the background image, using the gradient magnitude and the maximum principal curvature. Vlachos and Dermatas [14] developed a multi-scale line tracking algorithm based on different vessel properties. Nguyen et al. [15] proposed a multi-scale line detection method where the responses from eight different scale values are linearly combined to obtain the final vessel map. In another work, Hou [16] presented a vessel tree extraction process utilizing a multi-scale top-hat transform and a multi-scale line detection algorithm. Yue et al. [17] slightly optimized the work of [15] so that the improved multi-scale line detector chooses the maximum line response from the available multi-scale windows.
Mendonca and Campilho [4] established the concept of segmenting blood vessels by means of several morphological filters, such as the multi-scale top-hat transform, a region growing process, and multi-scale morphological vessel reconstruction. Miri and Mahloojifar [18] applied connected component analysis and multi-structure-element morphology by reconstruction after using the curvelet transform for retinal vessel enhancement. Fraz et al. [19] developed another unique blood vessel segmentation technique where the multidirectional morphological top-hat transform is utilized to enhance the image and morphological bit plane slicing is employed to segment the vessel map. Using morphological component analysis, Imani et al. [20] designed another method that combines the Morlet wavelet transform and adaptive thresholding. Gao et al. [21] presented an efficient procedure for vessel segmentation that includes automatic random walks based on centerline extraction.
Pattern classification-based methods are more popular than the previous approaches. Ricci and Perfetti [22] built a framework using a support vector machine (SVM) where the features are generated from line detectors based on the average grey-level value of the retinal image. Marin et al. [23] assembled a 7-D feature vector for each pixel, comprising moment invariants and grey-level features, trained with a neural network classifier. On the other hand, Fan et al. [24] used a random forest classifier where the feature vector of each candidate pixel is formed from integral channel features.
Although local image feature-based machine learning approaches are satisfactory, feature vectors composed of wavelet-based responses are more robust in terms of accuracy. Soares et al. [25] derived the feature vector from a 2-D Morlet wavelet transform and trained it with a Gaussian mixture model classifier. Xu et al. [26] combined the wavelet and curvelet transforms to obtain a 12-D feature vector trained by an SVM. In another work, Fraz et al. [27] built a 9-D feature vector where four features are taken from the Gabor filter response at four different scales.
Ghadiri et al. [28] utilized the Non-subsampled Contourlet Transform (NSCT) to extract the vessel tree from the retinal fundus image. Bankhead et al. [29] proposed a wavelet transform to detect the vessel map, with centerline refinement using spline fitting. Azzopardi et al. [30] developed a novel algorithm for vessel segmentation using COSFIRE (Combination of Shifted Filter Responses) and DoG (Difference-of-Gaussians) filters. Moreover, Levet et al. [31] proposed applying the shearlet transform with a hysteresis threshold strategy to segment the blood vessels from the input image. Based on the elite-guided multi-objective artificial bee colony (EMOABC) technique, Khomri et al. [32] designed another unsupervised method for retinal vessel segmentation of fundus images.
After the wave of deep learning, most applications related to computer vision and NLP are governed by deep learning methods. R2U-Net [33], using U-Net and a Recurrent Residual Convolutional Neural Network (RRCNN), achieves around 97% accuracy on the STARE dataset. A Graph Neural Network (GNN) [34] has been introduced to extract the vessel structure utilizing the local appearance and global structure of vessels. However, its performance is not superior, as the average accuracy is 92.71% and 93.78% on the DRIVE and STARE databases respectively. IterNet [36] is another deep learning model in which multiple iterations of U-Net are performed, and its accuracy is slightly better than R2U-Net [33]. However, deep learning generally uses millions of parameters learned from large amounts of training data and requires a heavy computational load, whereas our proposed methodology uses only a limited number of filters and can run on any lightweight system. Moreover, if the test samples come from a different domain, a trained deep learning model usually performs poorly.
The complete workflow of the proposed retinal blood vessel segmentation is illustrated in Figure 1, where the proposed method follows five basic steps: (1) Pre-processing, (2) Retinal Vessel Enhancement, (3) Feature Extraction using Bendlets, (4) Classification, and (5) Post-processing.
Retinal fundus images are generally prone to noise, lighting variations, and poor contrast [23]. To obtain a better classification result, the following actions are executed: (1) Green channel extraction, (2) Central light reflex diminution, and (3) Extension of the border.
(1) Green Channel Extraction: The green channel provides the best view of the blood vessels compared to the other two channels [35]. For this reason, the RGB color image is converted to a gray-scale image by considering the green plane only, as shown in Figure 2(a).
(2) Central Light Reflex Diminution: Due to lighting variation, some blood vessels may contain a light streak through their central area, known as the central light reflex. As a result, the middle portion of the vessel becomes similar to the background, creating false results. To eliminate the central light reflex, morphological opening is employed with a disk-shaped structuring element of two-pixel radius. The disk size is kept as low as possible; otherwise it could merge neighbouring vessels. An example is shown in Figure 2(b), (c), illustrating the central light reflex and its elimination by the opening operation respectively.
(3) Extension of Border: The circular border of the inverted green channel image is expanded to set aside undesired border effects, using the algorithm of [25]. Otherwise, the output image produces some false-positive results along the border area. An iterative strategy is followed in which the mean value of a pixel's neighbors inside the aperture replaces the pixels outside the aperture. As a result, an artificially grown area appears along the border, as shown in Figure 2(d). A code sketch of these preprocessing steps is given below.
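As a minimal sketch of the first two preprocessing steps (the iterative border extension of [25] is omitted for brevity), assuming scikit-image is available:

```python
import numpy as np
from skimage import morphology

def preprocess(rgb_image):
    """Green-channel extraction and central light reflex removal."""
    # (1) The green channel shows the best vessel/background contrast [35].
    green = rgb_image[:, :, 1].astype(np.uint8)
    # Vessels are darker than the background, so work on the inverted image.
    inverted = 255 - green
    # (2) Opening with a two-pixel-radius disk suppresses the central light
    #     reflex; a larger disk could merge neighbouring vessels.
    return morphology.opening(inverted, morphology.disk(2))
```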
The method designed by Kushol et al. [37,38], based on morphological operators, is employed to enhance the contrast of the retinal image. First, the result of the top-hat transform is added to the original input image. Simultaneously, the result of the morphological bottom-hat transform is subtracted from it. Equation (3.1) expresses the morphological operations performed to enhance the contrast of the image.
$A_{enhance} = A + A_{top\text{-}hat} - A_{bottom\text{-}hat}$ | (3.1)
The major contribution of this work is the automatic selection of the structuring element (SE) by means of an edge content (EC) based contrast metric, which is computed from the gradient magnitude. The behavior of EC with respect to the SE is shown in Figure 3: increasing the SE size initially increases the EC, but after a certain number of iterations the EC stops increasing, and the SE at that point yields the best contrast-enhanced image.
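A sketch of this EC-driven selection is given below, assuming scikit-image and treating the mean Sobel gradient magnitude as the EC measure (an assumption; any gradient-based contrast metric could be substituted):

```python
import numpy as np
from skimage import filters, morphology

def edge_content(img):
    # EC proxy: mean gradient magnitude of the image.
    return filters.sobel(img.astype(float)).mean()

def enhance_contrast(img, max_radius=15):
    """Top-hat/bottom-hat enhancement (Eq 3.1) with EC-based SE selection."""
    best, best_ec = img, edge_content(img)
    for radius in range(1, max_radius + 1):
        se = morphology.disk(radius)
        top = morphology.white_tophat(img, se).astype(int)
        bottom = morphology.black_tophat(img, se).astype(int)
        candidate = np.clip(img.astype(int) + top - bottom, 0, 255).astype(np.uint8)
        ec = edge_content(candidate)
        if ec <= best_ec:          # EC stopped increasing: keep the previous SE
            break
        best, best_ec = candidate, ec
    return best
```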
Traditional wavelet-based methods can detect point singularities and offer multiresolution analysis with good localization. However, they are weak at optimal edge representation: since their elements are not highly anisotropic, they cannot optimally represent geometric structures or curve singularities. To capture curve singularities, and hyperplane singularities in higher dimensions, Candès and Donoho first designed the curvelet transform [6]. Curvelet elements are constructed by applying a parabolic dilation or scale parameter $a$ $(0 < a < 1)$, a location or translation parameter $t$, and an orientation $\theta$ to a shaping function $\psi$, as shown in Eqs (3.2) and (3.3):
$\psi_{a,\theta,t}(x) = a^{-3/4}\,\psi\big(D_a R_\theta (x - t)\big)$ | (3.2)

$D_a = \begin{pmatrix} 1/a & 0 \\ 0 & 1/\sqrt{a} \end{pmatrix}$ | (3.3)

Here, $D_a$ is a parabolic scaling matrix and $R_\theta$ is a rotation by $\theta$ radians.
Shearlets, another multiscale directional transform, also follow the basic principles of wavelets, except that the isotropic dilation is replaced by anisotropic dilation and shearing. A unique characteristic of shearlets is the use of shearing to control directionality, in contrast to the rotation used by curvelets. Shearlet elements can be represented by $\psi_{a,s,t}$, where $a$, $s$, $t$ are the dilation, shearing, and translation variables respectively, and can be expressed by Eqs (3.4)–(3.6):
$\psi_{a,s,t}(x) = a^{-3/4}\,\psi\big(A_a^{-1} S_s^{-1} (x - t)\big)$ | (3.4)

$A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \quad a > 0$ | (3.5)

$S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}, \quad s \in \mathbb{R}$ | (3.6)

Here, $A_a$ is a scaling (or dilation) matrix and $S_s$ represents the shear matrix.
Bendlets can be regarded as an extended, second-order shearlet system with bending as an extra parameter. The basic difference of bendlets compared to classical shearlets appears in the $\alpha$-scaling. For $a > 0$ and $0 \le \alpha < \infty$, the $\alpha$-scaling operation is denoted by Eq (3.7):

$A_{a,\alpha} := \begin{pmatrix} a & 0 \\ 0 & a^{\alpha} \end{pmatrix}$ | (3.7)

Here, the variable $\alpha$ expresses the scaling anisotropy. For instance, $\alpha = 1$ indicates isotropic scaling, $\alpha = 1/2$ means parabolic scaling, and $\alpha = 0$ leads to pure directional scaling. Bendlets can accurately determine the direction, location, and curvature of a discontinuity curve with an anisotropic scaling of $1/3 < \alpha < 1/2$.
For $l \in \mathbb{N}$ and $r = (r_1, \ldots, r_l)^T \in \mathbb{R}^l$, the $l$-th order shearing operator $S^{(l)}_r : \mathbb{R}^2 \to \mathbb{R}^2$ is represented by Eq (3.8):

$S^{(l)}_r := \begin{pmatrix} 1 & \sum_{m=1}^{l} r_m x_2^{m-1} \\ 0 & 1 \end{pmatrix} (x_1, x_2)^T$ | (3.8)

Here, $l = 1$ yields the typical shearing matrix, while for $l = 2$ the operator carries the characteristics of both shearing and bending.
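The following toy sketch (an illustration only, using NumPy) evaluates the $\alpha$-scaling matrix of Eq (3.7) and the point-wise action of the shearing operator of Eq (3.8); for $l = 2$ the horizontal coordinate picks up a quadratic term in $x_2$, which is the bending:

```python
import numpy as np

def alpha_scaling(a, alpha):
    # A_{a,alpha} of Eq (3.7): anisotropic scaling matrix.
    return np.array([[a, 0.0], [0.0, a ** alpha]])

def higher_order_shear(x, r):
    # S^{(l)}_r of Eq (3.8) applied to x = (x1, x2); r = (r1, ..., rl).
    # For l = 2: x1 -> x1 + r1*x2 + r2*x2**2 (shear plus bend).
    x1, x2 = x
    powers = np.array([x2 ** m for m in range(len(r))])  # x2^0, x2^1, ...
    return np.array([x1 + np.dot(r, powers) * x2, x2])

print(alpha_scaling(4.0, 0.5))                                # parabolic scaling
print(higher_order_shear(np.array([1.0, 2.0]), [0.5]))        # l = 1: shear only
print(higher_order_shear(np.array([1.0, 2.0]), [0.5, 0.1]))   # l = 2: shear + bend
```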
If $\psi \in L^2(\mathbb{R}^2)$, $l \in \mathbb{N}$, and $\alpha > 0$, then the higher-order shearlet group $\mathbb{S}^{(l,\alpha)}$ has the natural unitary representation shown in Eq (3.9):

$\pi^{(l,\alpha)}(a,s,t)\,\psi = a^{-(1+\alpha)/2}\,\psi\big(A_{a,\alpha}^{-1} S^{(l)}_{-s}(\cdot - t)\big)$ | (3.9)

The parameters $a$, $s$, and $t$ indicate the scale, shear (orientation), and location variables respectively. The $l$-th order $\alpha$-shearlet system is then denoted as Eq (3.10):

$SH^{(l,\alpha)}_{\psi} = \big\{ \pi^{(l,\alpha)}(a,s,t)\,\psi \;\big|\; (a,s,t) \in \mathbb{S}^{(l,\alpha)} \big\}$ | (3.10)
When $l = 2$, the above equation is considered the second-order shearlet transform, or bendlet transform. The bendlet system can thus be expressed as Eq (3.11), where $b$ takes the role of bending as an extra parameter:

$BS^{(\alpha)}_{\psi} = SH^{(2,\alpha)}_{\psi} = \big\{ \psi_{a,s,b,t} \;\big|\; (a,s,b,t) \in \mathbb{S}^{(2,\alpha)} \big\}$ | (3.11)
Bendlets differ from other directional multiscale transforms because of this extra bending parameter, which helps to detect curve singularities more efficiently. The shape of the bendlet atoms or elements varies with the parameter values, as depicted in Figures 4 to 7. In Figure 4(a), Figure 6(b), (c), and Figure 7(a), the coefficient values are close to zero because only a small amount of edge energy falls inside the bendlet area. In Figure 4(c), Figure 5(c), and Figure 7(b), the bendlet coefficients are moderately high because a large area of edge energy is enclosed in the bendlet. The coefficient response in Figure 6(a) is very high because the entire bendlet frame is enclosed by the edge energy.
For blood vessel segmentation from the retinal fundus image, the aim is to extract vessel-like structures. As seen in a retinal image, vessels occupy a large portion of the image and usually do not follow any specific direction: they change direction abruptly and form curve-like edges throughout the whole image. Traditional directional wavelets can detect these types of curvature through their elements' coefficient responses, but as those elements have a square-like shape, a huge number of coefficients is required to fully capture all the vessels, which also increases the noise in the image. On the other hand, the curvelet and shearlet transforms create parabolic rectangular-shaped elements according to the parabolic scaling principle $width \sim length^2$. As a result, fewer coefficients are required to fully detect the vessels of an image. A further reduction in the number of coefficients is possible with bendlets, where the shearlet elements are bent in different directions by the extra bending parameter $b$. A comparison among traditional wavelets, curvelets, and bendlets in the number of elements required to capture a small vessel-like region is shown in Figure 8, where the wavelet needs around 15 coefficients, the curvelet 5, and the bendlet only 3 to fully cover the curved region.
After applying the bendlet transform, soft thresholding is performed on the coefficient values to retain the large responses of the image. Because of the curve-like structure of the bendlet elements, a high response is produced whenever the process finds any vessel-related shape. Different reconstructed output images can be obtained for different scale values of the bendlet transform; scale values of 3, 4, and 5 generate significantly accurate output while reducing unnecessary noise from the background. Finally, a feature vector of size four is constructed from the three aforementioned bendlet scales, with the fourth feature obtained by subtracting the background, estimated with an averaging filter of size 10×10, from the enhanced image. These four features are expressed in Eqs (3.12)–(3.15):
$f_1(x,y) = I_{sub}(x,y)$ | (3.12)

$f_2(x,y) = I^{scale=3}_{bendlet}(x,y)$ | (3.13)

$f_3(x,y) = I^{scale=4}_{bendlet}(x,y)$ | (3.14)

$f_4(x,y) = I^{scale=5}_{bendlet}(x,y)$ | (3.15)
Here, $I_{sub}(x,y)$ is the pixel value obtained by subtracting the average-filtered image from the enhanced image, and $I_{bendlet}(x,y)$ is the image reconstructed from the modified bendlet coefficients at the given scale. The individual output for each of these features is shown in Figure 9.
In the feature development stage, each pixel of an input image is characterized by a vector in a 4-D feature space as in Eq (3.16):
$F(x,y) = [f_1(x,y), f_2(x,y), f_3(x,y), f_4(x,y)]$ | (3.16)
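A sketch of this feature construction follows. Since no standard Python bendlet implementation is assumed here, `bendlet_reconstruct` is a hypothetical helper standing in for the forward bendlet transform, soft thresholding of the coefficients, and inverse transform at a given scale:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soft_threshold(c, t):
    # Shrinkage rule applied to bendlet coefficients before reconstruction.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def build_features(enhanced, bendlet_reconstruct):
    """Per-pixel 4-D feature vectors of Eqs (3.12)-(3.16)."""
    img = enhanced.astype(float)
    f1 = img - uniform_filter(img, size=10)   # Eq (3.12): background subtraction
    f2 = bendlet_reconstruct(img, scale=3)    # Eq (3.13)
    f3 = bendlet_reconstruct(img, scale=4)    # Eq (3.14)
    f4 = bendlet_reconstruct(img, scale=5)    # Eq (3.15)
    # Eq (3.16): one 4-D row vector per pixel.
    return np.stack([f1, f2, f3, f4], axis=-1).reshape(-1, 4)
```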
The purpose of the classification stage is to assign each candidate pixel to either the $C_v$ (vessel) or $C_{nv}$ (non-vessel) class. An efficient and frequently used approach in machine learning is ensemble classification, which combines multiple individual learning models to yield an aggregate model. The major advantage of an ensemble classifier is its ability to avoid the mistakes of a single classification model: on a given data set, one learning model may overfit a specific portion of the data, whereas another model can overcome this limitation, thus increasing the prediction performance. There are mainly two types of ensemble models: Bagging (subsampling the training data) and Boosting (re-learning with different weights on misclassified instances).
● Bagging (Bootstrap Aggregation):
Bagging, or bootstrap aggregation, of sample data is used primarily to reduce overfitting as well as variance. The Bagging concept is mostly applied in decision tree methods. Each weak learner in bootstrap aggregation carries equal weight and is trained on a randomly selected subset of the training set. For instance, to achieve high classification accuracy, the random forest method combines random decision trees with bagging.
● Boosting:
Boosting is a weighted-average approach that can be utilized to mitigate bias as well as variance. Boosting algorithms combine multiple weak learners into a final powerful learner. The weights of the training examples are recalculated at each round so that misclassified examples receive increased weight. Using the knowledge of previously mislabeled examples, Boosting algorithms assemble the learners sequentially; the overall classification performance is enhanced by weighing earlier mislabeled examples more heavily.
Among established ensemble-aggregation methods, we have experimented with five prominent classifiers: AdaBoostM1, LogitBoost, GentleBoost, RUSBoost, and Bag. A decision tree is used as the weak learner in the ensemble process, with 200 ensemble learning cycles.
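The experiments in this paper were run in MATLAB; as an analogous sketch, two of the five ensembles (AdaBoost and Bagging) can be configured in scikit-learn with decision-tree weak learners and 200 learning cycles (the `estimator` keyword assumes scikit-learn ≥ 1.2; LogitBoost, GentleBoost, and RUSBoost have no built-in scikit-learn equivalents):

```python
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Decision-tree weak learner (a stump here; the depth is a free choice)
# and 200 ensemble learning cycles.
weak_learner = DecisionTreeClassifier(max_depth=1)
adaboost = AdaBoostClassifier(estimator=weak_learner, n_estimators=200)
bagging = BaggingClassifier(estimator=weak_learner, n_estimators=200)

# X: (n_pixels, 4) feature matrix of Eq (3.16); y: 1 = vessel, 0 = non-vessel.
# adaboost.fit(X_train, y_train)
# y_pred = adaboost.predict(X_test)
```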
The resultant image acquired from the classification stage exhibits two common pitfalls. Firstly, the vessel map may contain a few discontinuous line segments due to obstacles. Secondly, some isolated points may arise from pathological objects. The following two steps resolve these predicaments as much as possible and further enhance the accuracy.
After the classification process, some broken vessel segments can be observed, which reduces the accuracy of the outcome. To link the broken segments along the vessel pixels, a multiscale line detection algorithm is applied. A window of size 15×15 is considered at each pixel position, where lines of length L are placed at 12 distinct orientations (at intervals of 15°). Three different values, 7, 11, and 15, are taken for L and linearly combined, as proposed by Nguyen et al. [15], to yield the connected vessel tree. Figure 10 depicts the result before and after employing the line detection scheme.
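A simplified sketch of this multiscale line detector is shown below (assuming SciPy; the exact combination rule of [15] also incorporates the raw image intensity, omitted here):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def line_kernel(length, angle_deg, window=15):
    # Averaging kernel along a line of `length` pixels through the window center.
    k = np.zeros((window, window))
    c, theta = window // 2, np.deg2rad(angle_deg)
    for t in range(-(length // 2), length // 2 + 1):
        k[int(round(c - t * np.sin(theta))), int(round(c + t * np.cos(theta)))] = 1.0
    return k / k.sum()

def multiscale_line_response(img, lengths=(7, 11, 15), window=15):
    """Linearly combined line responses at lengths 7, 11, 15 (after [15])."""
    img = img.astype(float)
    window_avg = uniform_filter(img, size=window)
    response = np.zeros_like(img)
    for L in lengths:
        # Maximum response over 12 orientations spaced 15 degrees apart.
        best = np.max([convolve(img, line_kernel(L, a, window))
                       for a in range(0, 180, 15)], axis=0)
        response += best - window_avg
    return response / len(lengths)
```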
Here, the area of each connected region is measured with a morphological area-open operation: any connected group of pixels whose area is below 40 pixels is removed from the final output. A sample output after executing the area-open operation is given in Figure 11.
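With scikit-image this amounts to one call, sketched here on a synthetic mask:

```python
import numpy as np
from skimage import morphology

# vessel_mask: boolean vessel map from the classification stage (synthetic here).
vessel_mask = np.zeros((64, 64), dtype=bool)
vessel_mask[10:50, 30] = True   # a 40-pixel vessel-like segment (kept)
vessel_mask[5, 5] = True        # an isolated false candidate (removed)

# Area open: drop connected regions smaller than 40 pixels.
cleaned = morphology.remove_small_objects(vessel_mask, min_size=40)
```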
The proposed method does not require heavy computational power. The proposed idea was implemented in MATLAB R2015a on a 2.20 GHz processor with 8.00 GB of RAM and no GPU involvement. Two openly accessible benchmark databases, DRIVE [39] and STARE [10], are utilized to assess the algorithmic performance of the proposed approach. Moreover, to demonstrate the robustness of our approach, a comparative analysis is incorporated in the latter part of this section.
The DRIVE (Digital Retinal Images for Vessel Extraction) dataset was compiled from a diabetic retinopathy screening program of 400 subjects. With the help of medical experts, 40 ground-truth images were created, of which the first half are selected as test subjects and the remaining half for training purposes. The STARE (STructured Analysis of the Retina) database was developed at the University of California. It consists of 20 retinal color photographs with two sets of ground truth designed by two different pathology experts. Here, the first 50% of the images are used as the training set and the remaining 50% as test data in our experiment. The overall summary of the two datasets is presented in Table 1.
Image Information | DRIVE | STARE
Number of images with ground truth | 40 | 20
Camera used | Canon CR5 non-mydriatic 3CCD | TopCon TRV-50
Field of View (FOV) | 45 degrees | 35 degrees
Image size | 565 × 584 pixels | 605 × 700 pixels
Image format | TIF | PPM
Bits per color channel | 8 | 8
Number of pathology images | 7 | 10
Four types of outcome can be observed in a two-class categorization problem. They are:
(1) True Positive (TP): correct identification
(2) True Negative (TN): correct rejection
(3) False Positive (FP): incorrect identification
(4) False Negative (FN): incorrect rejection
Based on these outcomes, the fundamental statistical measures Sensitivity (recall), Specificity (true negative rate), Precision (positive predictive value), and Accuracy are evaluated to assess the binary classification performance. In healthcare diagnosis, sensitivity (true positive rate, or recall) is the correct identification of samples with the disease, whereas specificity (true negative rate) is the correct identification of samples without the disease. Precision (positive predictive value) is the fraction of identified items that are relevant; high precision means that an algorithm returns substantially more relevant results than irrelevant ones. Accuracy is the fraction of correctly identified pixels among all pixels inside the FOV. Sensitivity (Sn), Specificity (Sp), Precision (Pr), and Accuracy (Acc) are measured by Eqs (4.1)–(4.4) respectively.
Sensitivity, $Sn = \dfrac{TP}{TP + FN}$ | (4.1)

Specificity, $Sp = \dfrac{TN}{TN + FP}$ | (4.2)

Precision, $Pr = \dfrac{TP}{TP + FP}$ | (4.3)

Accuracy, $Acc = \dfrac{TP + TN}{TP + TN + FP + FN}$ | (4.4)
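As a small sketch, these four measures can be computed directly from a predicted vessel map, a ground-truth map, and the FOV mask:

```python
import numpy as np

def segmentation_metrics(pred, truth, fov):
    """Sn, Sp, Pr, and Acc of Eqs (4.1)-(4.4), restricted to the FOV."""
    p, t = pred[fov].astype(bool), truth[fov].astype(bool)
    tp = np.sum(p & t)      # correct identification
    tn = np.sum(~p & ~t)    # correct rejection
    fp = np.sum(p & ~t)     # incorrect identification
    fn = np.sum(~p & t)     # incorrect rejection
    return {"Sn": tp / (tp + fn),
            "Sp": tn / (tn + fp),
            "Pr": tp / (tp + fp),
            "Acc": (tp + tn) / (tp + tn + fp + fn)}
```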
Table 2 shows the relationship of vessel classification with the above-mentioned four events.
 | Vessel pixels in ground-truth | Background pixels in ground-truth
Vessel classified | True Positive (TP) | False Positive (FP)
Vessel not classified | False Negative (FN) | True Negative (TN)
Firstly, the proposed method is assessed on the DRIVE database. In the first part of the evaluation, the algorithm is trained on the 20 training-set images with the five aforementioned ensemble classifiers. Next, the same feature vector is designed for the test-set images and classified against the trained models. All ensemble classifiers produce almost similar results in terms of sensitivity, specificity, precision, and accuracy. However, AdaBoost slightly outperforms the other classifiers, so the per-image performance measures for the DRIVE dataset presented in Figure 12 are based on the outcome of the AdaBoost classifier. The average sensitivity, specificity, precision, and accuracy achieved on this database are 0.7588, 0.9748, 0.8226, and 0.9456 respectively.
Secondly, the same approach is applied to the STARE dataset, where the average sensitivity, specificity, precision, and accuracy achieved are 0.7798, 0.9746, 0.7956, and 0.9528 respectively. The detailed results for the individual images are presented in Figure 13.
Five ensemble classifiers are examined to find the most accurate blood vessel segmentation result: AdaBoost, LogitBoost, GentleBoost, RUSBoost, and Bag. Table 3 reports the complete results of these classifiers in terms of average Sn, Sp, Pr, and Acc on the DRIVE and STARE databases. RUSBoost achieves the highest sensitivity on both datasets, whereas AdaBoost and LogitBoost obtain the best specificity and precision on STARE and DRIVE respectively. Overall, considering accuracy, AdaBoost clearly outperforms the other classifiers in our application and is therefore employed in the final evaluation.
Ensemble Classifier | Dataset | Avg. Sn | Avg. Sp | Avg. Pr | Avg. Acc
AdaBoostM1 | DRIVE | 0.7588 | 0.9748 | 0.8226 | 0.9456
AdaBoostM1 | STARE | 0.7798 | 0.9746 | 0.7956 | 0.9528
LogitBoost | DRIVE | 0.7374 | 0.9777 | 0.8361 | 0.9453
LogitBoost | STARE | 0.7949 | 0.9712 | 0.7748 | 0.9508
GentleBoost | DRIVE | 0.7605 | 0.9728 | 0.8122 | 0.9442
GentleBoost | STARE | 0.7954 | 0.9706 | 0.7709 | 0.9502
RUSBoost | DRIVE | 0.7941 | 0.9660 | 0.7844 | 0.9428
RUSBoost | STARE | 0.8397 | 0.9569 | 0.7105 | 0.9429
Bag | DRIVE | 0.7595 | 0.9722 | 0.8085 | 0.9435
Bag | STARE | 0.8021 | 0.9685 | 0.7650 | 0.9488
Our proposed model is also compared with some promising existing works. The comparison of our methodology with other recent methods is shown in Tables 4 and 5 for the DRIVE and STARE databases respectively. For a better overview, the list is grouped according to the categorization of the literature review section, in ascending order of publication year. Average Sensitivity (Sn), Specificity (Sp), and Accuracy (Acc) are the performance metrics examined for the comparative analysis.
No | Category | Techniques | Avg. Sn | Avg. Sp | Avg. Acc
1 | Matched Filter | Chaudhuri [2] | 0.2663 | 0.9901 | 0.8773
2 | Matched Filter | Zhang [11] | 0.7120 | 0.9724 | 0.9382
3 | Matched Filter | Odstrcilik [12] | 0.7120 | 0.9724 | 0.9382
4 | Matched Filter | Chakraborti [13] | 0.7205 | 0.9579 | 0.9370
5 | Multi-scale approach | Vlachos [14] | 0.747 | 0.955 | 0.929
6 | Multi-scale approach | Nguyen [15] | 0.7322 | 0.9659 | 0.9407
7 | Multi-scale approach | Hou [16] | 0.7354 | 0.9691 | 0.9415
8 | Multi-scale approach | Yue [17] | 0.7528 | 0.9731 | 0.9447
9 | Morphological processing | Mendonca [4] | 0.7344 | 0.9764 | 0.9452
10 | Morphological processing | Miri [18] | 0.7352 | 0.9795 | 0.9458
11 | Morphological processing | Fraz [19] | 0.7152 | 0.9759 | 0.9430
12 | Pattern classification (Wavelet-based) | Soares [25] | 0.7332 | 0.9782 | 0.9461
13 | Pattern classification (Wavelet-based) | Xu [26] | 0.7760 | − | 0.9328
14 | Pattern classification (Wavelet-based) | Fraz [27] | 0.7406 | 0.9807 | 0.9480
15 | Pattern classification (Image Feature-based) | Marin [23] | 0.7067 | 0.9801 | 0.9452
16 | Pattern classification (Image Feature-based) | Fan [24] | 0.7179 | 0.9749 | 0.9414
17 | Other Wavelet/Filter-based approach | Ghadiri [28] | 0.2663 | 0.9901 | 0.8773
18 | Other Wavelet/Filter-based approach | Bankhead [29] | 0.7027 | − | 0.9371
19 | Other Wavelet/Filter-based approach | Azzopardi [30] | 0.7655 | 0.9704 | 0.9442
20 | Other Wavelet/Filter-based approach | Levet [31] | 0.728 | 0.971 | 0.940
21 | | Proposed Method | 0.7588 | 0.9748 | 0.9456
No | Category | Techniques | Avg. Sn | Avg. Sp | Avg. Acc
1 | Matched Filter | Chaudhuri [2] | 0.2846 | 0.9873 | 0.9142
2 | Matched Filter | Zhang [11] | 0.7177 | 0.9753 | 0.9484
3 | Matched Filter | Odstrcilik [12] | 0.7847 | 0.9512 | 0.9341
4 | Matched Filter | Chakraborti [13] | 0.6786 | 0.9586 | 0.9379
5 | Multi-scale approach | Vlachos [14] | 0.747 | 0.955 | 0.929
6 | Multi-scale approach | Nguyen [15] | 0.7317 | 0.9613 | 0.9324
7 | Multi-scale approach | Hou [16] | 0.7348 | 0.9652 | 0.9336
8 | Morphological processing | Mendonca [4] | 0.6996 | 0.9730 | 0.9440
9 | Morphological processing | Fraz [19] | 0.7311 | 0.9680 | 0.9442
10 | Morphological processing | Gao [21] | 0.7581 | 0.9550 | 0.9401
11 | Pattern classification (Wavelet-based) | Soares [25] | 0.7207 | 0.9747 | 0.9479
12 | Pattern classification (Wavelet-based) | Fraz [27] | 0.7548 | 0.9763 | 0.9534
13 | Pattern classification (Image Feature-based) | Marin [23] | 0.6944 | 0.9819 | 0.9526
14 | Pattern classification (Image Feature-based) | Fan [24] | 0.6996 | 0.9787 | 0.9488
15 | Other Wavelet/Filter-based approach | Azzopardi [30] | 0.7716 | 0.9701 | 0.9497
16 | Other Wavelet/Filter-based approach | Levet [31] | 0.7321 | 0.9634 | 0.9412
17 | | Proposed Method | 0.7798 | 0.9746 | 0.9528
Our proposed method is further compared with some promising research works in terms of cross-training evaluation, which also indicates the robustness of our algorithm. In a cross-training experiment, the test data are taken entirely from a different dataset or domain. Table 6 presents the comparative average accuracy, where the DRIVE test-set images are evaluated after training on the STARE dataset images and vice versa. Furthermore, the ten abnormal images of the STARE database are evaluated separately, achieving acceptable performance scores. A relative comparison with some recent works is given in Table 7, and the output for one pathology image, produced in MATLAB, is depicted in Figure 14. Figure 14(c), (d) are the outputs of the shearlet and bendlet transforms respectively, where the latter clearly detects more thin vessels against the background.
Method | DRIVE (Training on STARE) | STARE (Training on DRIVE) |
Soares [25] | 0.9397 | 0.9327 |
Fraz [27] | 0.9456 | 0.9493 |
Ricci [22] | 0.9266 | 0.9464 |
Marin [23] | 0.9448 | 0.9528 |
Proposed Method | 0.9450 | 0.9487 |
Method | Avg. Sn | Avg. Sp | Avg. Acc |
Zhang [11] | 0.7166 | 0.9673 | 0.9439 |
Saffarzadeh [40] | 0.7166 | 0.9672 | 0.9438 |
Mendonca [4] | 0.6801 | 0.9694 | 0.9426 |
Soares [25] | 0.7181 | 0.9765 | 0.9500 |
Fraz [27] | 0.7262 | 0.9764 | 0.9511 |
Proposed Method | 0.7406 | 0.9728 | 0.9452 |
Automatic and proper retinal blood vessel segmentation contributes to the solution of various optic diseases. As the number of patients and the need for vessel segmentation increase day by day, an automated system can be an alternative to the manual one, although the varying sizes and shapes of vessels make such a system very challenging to develop. Successfully introducing Bendlets for the first time in the domain of medical imaging opens the potential to use a comparable strategy in increasingly complex clinical applications. Moreover, analyzing and comparing the group of ensemble classifiers helps to decide the best option for training and testing new dataset images. In the future, we want to extend our research to develop automated segmentation systems for other problems such as 3D MR brain images. Detection of other objects in optical images, such as hemorrhages, exudates, the optic disc, and cotton wool spots, as well as artery-venous classification and the measurement of tortuosity, vessel width, and branching angle, are important topics of interest that could ameliorate the autonomous diagnosis of retinal images. Later on, 3D shape analysis of individual objects and prediction of disease growth rates could be advanced foci of future study. The approach can also be explored for diseases such as Alzheimer's and amyotrophic lateral sclerosis (ALS), although researchers must first collect robust clinical datasets to investigate this.
The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no (RG-1441-394).
All authors declare no conflicts of interest in this paper.
[1] | A. M. Joussen, T. W. Gardner, B. Kirchhof, S. J. Ryan, Retinal vascular disease, Springer, 2007. |
[2] | S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, M. Goldbaum, Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imaging, 8 (1989), 263-269. doi: 10.1109/42.34715 |
[3] | M. E. Martínez-Pérez, A. D. Hughes, A. V. Stanton, S. A. Thom, A. A. Bharath, K. H. Parker, Retinal blood vessel segmentation by means of scale-space analysis and region growing, in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, (1999), 90-97. |
[4] | A. M. Mendonca, A. Campilho, Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction, IEEE Trans. Med. Imaging, 25 (2006), 1200-1213. doi: 10.1109/TMI.2006.879955 |
[5] | N. Brancati, M. Frucci, D. Gragnaniello, D. Riccio, Retinal vessels segmentation based on a convolutional neural network, in Iberoamerican Congress on Pattern Recognition, Springer, (2017), 119-126. |
[6] | E. J. Candes, D. L. Donoho, Curvelets: A surprisingly effective nonadaptive representation for objects with edges, Technical report, Stanford University Department of Statistics, 2000. |
[7] | M. N. Do, M. Vetterli, The contourlet transform: an efficient directional multiresolution image representation, IEEE Trans. Image Proc., 14 (2005), 2091-2106. doi: 10.1109/TIP.2005.859376 |
[8] | W.-Q. Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames, IEEE Trans. Image Proc., 19 (2010), 1166-1180. doi: 10.1109/TIP.2010.2041410 |
[9] | C. Lessig, P. Petersen, M. Schäfer, Bendlets: A second-order shearlet transform with bent elements, Appl. Comput. Harmonic Anal., 46 (2019), 384-399. doi: 10.1016/j.acha.2017.06.002 |
[10] | A. Hoover, V. Kouznetsova, M. Goldbaum, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imaging, 19 (2000), 203-210. doi: 10.1109/42.845178 |
[11] | B. Zhang, L. Zhang, L. Zhang, F. Karray, Retinal vessel extraction by matched filter with first-order derivative of Gaussian, Comput. Biol. Med., 40 (2010), 438-445. doi: 10.1016/j.compbiomed.2010.02.008 |
[12] | J. Odstrcilik, R. Kolar, A. Budai, J. Hornegger, J. Jan, J. Gazarek, et al., Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database, IET Image Proc., 7 (2013), 373-383. |
[13] | T. Chakraborti, D. K. Jha, A. S. Chowdhury, X. Jiang, A self-adaptive matched filter for retinal blood vessel detection, Mach. Vision Appl., 26 (2015), 55-68. doi: 10.1007/s00138-014-0636-z |
[14] | M. Vlachos, E. Dermatas, Multi-scale retinal vessel segmentation using line tracking, Comput. Med. Imaging Graphics, 34 (2010), 213-227. doi: 10.1016/j.compmedimag.2009.09.006 |
[15] | U. T. Nguyen, A. Bhuiyan, L. A. Park, K. Ramamohanarao, An effective retinal blood vessel segmentation method using multi-scale line detection, Pattern Recognit., 46 (2013), 703-715. doi: 10.1016/j.patcog.2012.08.009 |
[16] | Y. Hou, Automatic segmentation of retinal blood vessels based on improved multiscale line detection, J. Comput. Sci. Eng., 8 (2014), 119-128. doi: 10.5626/JCSE.2014.8.2.119 |
[17] | K. Yue, B. Zou, Z. Chen, Q. Liu, Improved multi-scale line detection method for retinal blood vessel segmentation, IET Image Proc., 12 (2018), 1450-1457. doi: 10.1049/iet-ipr.2017.1071 |
[18] | M. S. Miri, A. Mahloojifar, Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction, IEEE Trans. Biomed. Eng., 58 (2011), 1183-1192. doi: 10.1109/TBME.2010.2097599 |
[19] | M. M. Fraz, S. A. Barman, P. Remagnino, A. Hoppe, A. Basit, B. Uyyanonvara, et al., An approach to localize the retinal blood vessels using bit planes and centerline detection, Comput. Methods Programs Biomed., 108 (2012), 600-616. |
[20] | E. Imani, M. Javidi, H. R. Pourreza, Improvement of retinal blood vessel detection using morphological component analysis, Comput. Methods Programs Biomed., 118 (2015), 263-279. doi: 10.1016/j.cmpb.2015.01.004 |
[21] | J. Gao, G. Chen, W. Lin, An effective retinal blood vessel segmentation by using automatic random walks based on centerline extraction, BioMed Res. Int., 2020 (2020). |
[22] | E. Ricci, R. Perfetti, Retinal blood vessel segmentation using line operators and support vector classification, IEEE Trans. Med. Imaging, 26 (2007), 1357-1365. doi: 10.1109/TMI.2007.898551 |
[23] | D. Marín, A. Aquino, M. E. Gegúndez-Arias, J. M. Bravo, A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features, IEEE Trans. Med. Imaging, 30 (2011), 146-158. doi: 10.1109/TMI.2010.2064333 |
[24] | Z. Fan, Y. Rong, J. Lu, J. Mo, F. Li, X. Cai, et al., Automated blood vessel segmentation in fundus image based on integral channel features and random forests, in 2016 12th World Congress on Intelligent Control and Automation (WCICA), IEEE, 2016. |
[25] | J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, M. J. Cree, Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification, IEEE Trans. Med. Imaging, 25 (2006), 1214-1222. doi: 10.1109/TMI.2006.879967 |
[26] | L. Xu, S. Luo, A novel method for blood vessel detection from retinal images, Biomed. Eng. Online, 9 (2010), 1. |
[27] | M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, et al., An ensemble classification-based approach applied to retinal blood vessel segmentation, IEEE Trans. Biomed. Eng., 59 (2012), 2538-2548. |
[28] | F. Ghadiri, S. M. Zabihi, H. R. Pourreza, T. Banaee, A novel method for vessel detection using contourlet transform, in 2012 National Conference on Communications (NCC), IEEE, 2012. |
[29] | P. Bankhead, C. N. Scholfield, J. G. McGeown, T. M. Curtis, Fast retinal vessel detection and measurement using wavelets and edge location refinement, PLoS ONE, 7 (2012), e32435. |
[30] | G. Azzopardi, N. Strisciuglio, M. Vento, N. Petkov, Trainable COSFIRE filters for vessel delineation with application to retinal images, Med. Image Anal., 19 (2015), 46-57. doi: 10.1016/j.media.2014.08.002 |
[31] | F. Levet, M. A. Duval-Poo, E. De Vito, F. Odone, Retinal image analysis with shearlets, in STAG, (2016), 151-156. |
[32] | B. Khomri, A. Christodoulidis, L. Djerou, M. C. Babahenini, F. Cheriet, Retinal blood vessel segmentation using the elite-guided multi-objective artificial bee colony algorithm, IET Image Proc., 12 (2018), 2163-2171. doi: 10.1049/iet-ipr.2018.5425 |
[33] | M. Z. Alom, M. Hasan, C. Yakopcic, T. M. Taha, V. K. Asari, Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation, Forthcoming, 2018. |
[34] | S. Y. Shin, S. Lee, I. D. Yun, K. M. Lee, Deep vessel segmentation by learning graphical connectivity, Med. Image Anal., 58 (2019), 101556. |
[35] | M. Al-Rawi, M. Qutaishat, M. Arrar, An improved matched filter for blood vessel detection of digital retinal images, Comput. Biol. Med., 37 (2007), 262-267. doi: 10.1016/j.compbiomed.2006.03.003 |
[36] | L. Li, M. Verma, Y. Nakashima, H. Nagahara, R. Kawasaki, IterNet: Retinal image segmentation utilizing structural redundancy in vessel networks, in The IEEE Winter Conference on Applications of Computer Vision, (2020), 3656-3665. |
[37] | R. Kushol, M. H. Kabir, M. S. Salekin, A. A. Rahman, Contrast enhancement by top-hat and bottom-hat transform with optimal structuring element: Application to retinal vessel segmentation, in International Conference Image Analysis and Recognition, Springer, (2017), 533-540. |
[38] | R. Kushol, M. N. Raihan, M. S. Salekin, A. A. Rahman, Contrast enhancement of medical x-ray image using morphological operators with optimal structuring element, Forthcoming, 2019. |
[39] | M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, M. D. Abramoff, Comparative study of retinal vessel segmentation methods on a new publicly available database, in Proceedings of SPIE, Bellingham: Society of Photo-Optical Instrumentation Engineers Press, 2004. |
[40] | V. M. Saffarzadeh, A. Osareh, B. Shadgar, Vessel segmentation in retinal images using multi-scale line operator and k-means clustering, J. Med. Signals Sens., 4 (2014), 122. |