Research article

Stabilized bi-grid projection methods in finite elements for the 2D incompressible Navier-Stokes equations

  • We introduce a family of bi-grid schemes in finite elements for solving the 2D incompressible Navier-Stokes equations in velocity and pressure (u, p). The new schemes are based on projection methods and use two pairs of FEM spaces, a coarse one and a fine one. The main computational effort is done on the coarse velocity space with an implicit and unconditionally stable time scheme, while its correction on the finer velocity space is realized with a simple stabilized semi-implicit scheme whose lack of stability is compensated by a high-mode stabilization procedure; the pressure is updated using the divergence-free property. The new schemes are tested on the lid-driven cavity up to Re = 7500. Enhanced stability is observed with respect to classical semi-implicit methods, and an important gain in CPU time is obtained compared to implicit projection schemes.

    Citation: Hyam Abboud, Clara Al Kosseifi, Jean-Paul Chehab. Stabilized bi-grid projection methods in finite elements for the 2D incompressible Navier-Stokes equations[J]. AIMS Mathematics, 2018, 3(4): 485-513. doi: 10.3934/Math.2018.4.485



    Among all the organs in the human body, the eyes offer the most sensational experience, allowing us to appreciate beauty and to communicate with others through visual expression. Clear vision plays a vital role in our lives; however, various eye diseases can make it difficult to maintain healthy vision, which is why the different components of the eye should be examined regularly. Among these components, the retina is the most important, as it can exhibit the symptoms of most optical disorders. Morphological properties of the retinal blood vessels, including branching pattern, length, tortuosity, angular features, and diameter, are the basic observations used to detect eye disease. Study [1] shows that hypertension can be identified by narrowed arterioles with bright reflexes, while cardiovascular disease and diabetic retinopathy can be diagnosed from bifurcation angles and tortuosity information.

    Accurate blood vessel segmentation is an important but challenging task for detecting early vision problems. Varying branching patterns, poor contrast, and low image resolution make the blood vessel segmentation problem cumbersome. In addition, an ill-defined optic disc, together with a variable number of hemorrhages, cotton wool spots, and microaneurysms, creates noise that leads to false-positive detections. Moreover, it is particularly troublesome to distinguish thin vessels from the background. Bifurcations, the central light reflex, arbitrary branch crossings, and contrast variation in the vessel map are usually the general causes of failure to completely isolate the retinal blood vessels from a color fundus image.

    The matched filtering methodology [2], based on the characteristics of the blood vessel, was the first significant achievement in the field of retinal blood vessel segmentation. Next, the idea of a multi-scale approach [3] was developed using a scale-space analysis of the maximum principal curvature. Later on, utilizing morphological reconstruction and vessel centerline detection, a morphological processing based solution was proposed [4]. Machine learning based techniques were also applied to isolate the vessel tree, where a handcrafted feature vector is formed before being passed to a suitable classifier. Nowadays, deep learning based techniques [5] dominate this vessel segmentation application, as they do most segmentation and classification tasks in computer vision.

    Wavelet-based methods have already proved their effectiveness in many applications, including image denoising, compression, texture analysis, and edge detection. The Gabor wavelet, curvelets [6], contourlets [7], and shearlets [8] are commonly employed to detect curve-like structures and have been applied by many researchers to retinal vessel segmentation. Bendlets [9] are the most recent addition to the family of wavelet-based multiscale directional transforms and can capture curvature with fewer coefficients than the others.

    This paper presents an efficient and robust retinal blood vessel segmentation technique utilizing the new multiscale directional transform named bendlets. Firstly, some pre-processing steps are followed to reduce false outcomes and to make the image suitable for the feature extraction stage. Secondly, to enhance the contrast of the input image, morphological top-hat and bottom-hat transforms are employed, which helps detect the thin vessels more effectively. Thirdly, the bendlet transform is applied at different scales to construct a robust feature vector. Fourthly, a small group of ensemble classifiers is analyzed to find the most satisfactory classifier. Finally, to improve the accuracy a little further, a post-processing step is carried out to fill probable gaps inside the vessels and to remove isolated false candidates.

    The intensity profile along the cross-section of a retinal blood vessel has a Gaussian-shaped curve. Chaudhuri et al. [2] applied a 2-D Gaussian kernel to separate blood vessels from the background. Based on this concept of matched filtering, Hoover et al. [10] designed a piecewise threshold probing algorithm considering some region-based properties. Zhang et al. [11] proposed a combination of the zero-mean Gaussian and the first-order derivative of the Gaussian filter, which resolves some limitations of previous methods. Another variation was proposed by Odstrcilik et al. [12], where illumination correction and contrast equalization are performed before matched filtering. Chakraborti et al. [13] improved the concept of matched filtering by incorporating an orientation histogram and a vesselness filter.

    Martinez-Perez et al. [3] introduced the idea of applying a multi-scale approach to detect vessels from the background image by using the gradient magnitude and the maximum principal curvature. Vlachos and Dermatas [14] developed a multi-scale line tracking procedure based on different vessel properties. Nguyen et al. [15] proposed a multi-scale line detection method where the responses from eight different scale values are linearly combined to obtain the final vessel map. In another work, Yanli Hou [16] presented a vessel tree extraction process utilizing a multi-scale top-hat transform and a multi-scale line detection algorithm. Yue et al. [17] slightly optimized the work of [15] so that the improved multi-scale line detector chooses the maximum line response from the available multi-scale windows.

    Mendonca and Campilho [4] established the concept of segmenting blood vessels by means of several morphological filters, such as the multi-scale top-hat transform, a region growing process, and multi-scale morphological vessel reconstruction. Miri and Mahloojifar [18] applied connected component analysis and multi-structure-element morphology by reconstruction after using the curvelet transform for retinal vessel enhancement. Fraz et al. [19] developed another blood vessel segmentation technique in which the multidirectional morphological top-hat transform is utilized to enhance the image and morphological bit plane slicing is employed to segment the vessel map. Using morphological component analysis, Imani et al. [20] designed another method combining the Morlet wavelet transform and adaptive thresholding. Gao et al. [21] presented an efficient procedure for vessel segmentation that includes automatic random walks based on centerline extraction.

    Pattern classification based methods are more popular than the previous approaches. Ricci and Perfetti [22] built a framework using a support vector machine (SVM) where the features are generated from line detectors based on the average grey-level value of the retinal image. Marin et al. [23] assembled a 7-D feature vector for each pixel, comprising moment invariants and grey-level features, which is trained with a neural network classifier. On the other hand, Fan et al. [24] used a random forest classifier where the feature vector of each candidate pixel is formed from integral channel features.

    Although local image feature-based machine learning approaches perform satisfactorily, feature vectors built from wavelet-based methods are more robust in terms of accuracy. Soares et al. [25] developed a feature vector from the 2-D Morlet wavelet transform and trained it with a Gaussian mixture model classifier. Xu et al. [26] combined wavelet and curvelet transform techniques to obtain a 12-D feature vector that is trained with an SVM. In another work, Fraz et al. [27] built a 9-D feature vector in which four features are taken from the Gabor filter response at four different scales.

    Ghadiri et al. [28] utilized the Non-Subsampled Contourlet Transform (NSCT) to extract the vessel tree from the retinal fundus image. Bankhead et al. [29] proposed a wavelet transform to detect the vessel map along with centerline refinement using spline fitting. Azzopardi et al. [30] developed a novel algorithm for vessel segmentation using COSFIRE (Combination of Shifted Filter Responses) and DoG (Difference-of-Gaussians) filters. Moreover, Levet et al. [31] proposed applying the shearlet transform with a hysteresis thresholding strategy to segment the blood vessels from the input image. Based on the elite-guided multi-objective artificial bee colony (EMOABC) technique, Khomri et al. [32] designed another unsupervised method for retinal vessel segmentation of fundus images.

    After the wave of deep learning, most applications related to computer vision and NLP are governed by various deep learning methods. R2U-Net [33], which combines U-Net with a Recurrent Residual Convolutional Neural Network (RRCNN), is able to achieve around 97% accuracy on the STARE dataset. A Graph Neural Network (GNN) [34] has been introduced to extract the vessel structure utilizing the local appearance and global structure of vessels; however, its performance is not superior, with average accuracies of 92.71% and 93.78% on the DRIVE and STARE databases respectively. IterNet [36] is another deep learning model in which multiple iterations of U-Net are performed, and its accuracy is slightly better than that of R2U-Net [33]. However, deep learning models generally require millions of parameters, large amounts of training data, and a heavy computational load, whereas our proposed methodology uses only a limited number of filters and could run on any lightweight system. Moreover, a trained deep learning model usually performs poorly when the test samples come from a different domain.

    The complete workflow of the proposed retinal blood vessel segmentation method is illustrated in Figure 1; it consists of the following basic steps: (1) Pre-processing, (2) Retinal vessel enhancement, (3) Feature extraction using bendlets, (4) Classification, and (5) Post-processing.

    Figure 1.  Steps of the proposed system.

    Retinal fundus images are generally prone to noise, lighting variations, and poor contrast [23]. In order to obtain a better classification result, the following actions are executed: (1) Green channel extraction, (2) Central light reflex diminution, and (3) Extension of the border.

    (1) Green Channel Extraction: The green channel provides the clearest view of the blood vessels compared to the other two channels [35]. For this reason, the RGB color image is converted to a gray-scale image by considering the green plane only, as shown in Figure 2(a).

    Figure 2.  Output images from pre-processing step, (a) gray-scale image conversion from green channel extraction, (b) Central light reflex inside vessel, (c) reduction of central light reflex by morphological opening, and (d) Extension of the border.

    (2) Central Light Reflex Diminution: Due to lighting variation, some blood vessels may contain a light streak through their central area, known as the central light reflex. As a result, the middle portion of the vessel and the background become similar, which creates false results. To eliminate this central light reflex, morphological opening is employed with a disk-shaped structuring element of two-pixel radius. The disk size is kept as small as possible; otherwise, it could merge nearby vessels. An example is shown in Figure 2(b), (c), where the effect of the central light reflex and its elimination by the opening operation are illustrated, respectively.

    (3) Extension of Border: The circular-shaped border of the inverted green channel image is expanded to avoid undesired border effects, following the algorithm used in [25]; otherwise, the output image produces some false-positive results along the border area. Here, an iterative strategy is followed in which pixels outside the aperture are replaced with the mean value of their neighbors inside the aperture. As a result, an artificially extended area can be seen along the border, as shown in Figure 2(d). A sketch of these pre-processing steps is given below.
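    The pre-processing stage described above can be sketched as follows, assuming the scikit-image and NumPy libraries; the helper name `preprocess` and the number of border-extension iterations are illustrative assumptions rather than the authors' exact implementation.

```python
# Minimal sketch of the pre-processing stage: green channel extraction,
# central light reflex removal, and iterative border extension.
import numpy as np
from skimage.morphology import opening, binary_dilation, disk

def preprocess(rgb, fov_mask, border_iters=10):
    green = rgb[..., 1].astype(float)           # (1) keep only the green channel
    green = opening(green, disk(2))             # (2) suppress the central light reflex
    # (3) iteratively extend the image beyond the circular FOV border:
    # pixels just outside the current mask get the mean of their in-mask neighbours.
    img, mask = green.copy(), fov_mask.astype(bool)
    for _ in range(border_iters):
        ring = binary_dilation(mask, disk(1)) & ~mask
        for y, x in zip(*np.nonzero(ring)):
            nb = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            nb_mask = mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if nb_mask.any():
                img[y, x] = nb[nb_mask].mean()
        mask |= ring
    return img, mask
```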

    The method designed by Kushol et al. [37,38] using morphological operators is employed to enhance the contrast of the retinal image. First, the result of the top-hat transform is added to the original input image; simultaneously, the result of the morphological bottom-hat transform is subtracted from it. Equation (3.1) expresses the morphological operations performed to enhance the contrast of the image.

    $$A_{enhance} = A + A_{tophat} - A_{bottomhat} \qquad (3.1)$$

    The major contribution of this work is the automatic selection of the structuring element (SE) by means of an edge content (EC) based contrast measure, which is computed from the gradient magnitude. The behavior of EC with respect to SE size is shown in Figure 3, where the graph indicates that increasing the SE size initially also increases the EC value; however, after a certain number of iterations this increase stops, and the corresponding SE provides the best contrast-enhanced image. A sketch of this procedure is given below.
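    A minimal sketch of the enhancement of Eq (3.1) with EC-based selection of the SE size, in the spirit of [37,38], is given below. The search range and the exact EC definition (here the mean Sobel gradient magnitude) are illustrative assumptions.

```python
# Contrast enhancement A + A_tophat - A_bottomhat (Eq 3.1) with automatic
# structuring-element selection driven by the edge content (EC) measure.
import numpy as np
from skimage.filters import sobel
from skimage.morphology import white_tophat, black_tophat, disk

def edge_content(img):
    return sobel(img).mean()                    # EC from the gradient magnitude

def enhance_contrast(img, max_radius=15):
    best, best_ec = img, -np.inf
    for r in range(1, max_radius + 1):          # grow the SE until EC stops improving
        se = disk(r)
        cand = img + white_tophat(img, se) - black_tophat(img, se)   # Eq (3.1)
        ec = edge_content(cand)
        if ec <= best_ec:
            break
        best, best_ec = cand, ec
    return best
```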

    Figure 3.  Statistics of different SE size and corresponding EC value for two images of STARE and DRIVE dataset.

    Traditional wavelet-based methods can detect point singularities and offer useful multiresolution and localization properties. However, they are weak at optimal edge representation: since their elements are not highly anisotropic, they cannot optimally capture geometric structure or curved singularities. To capture curve singularities, and hyperplane singularities in higher dimensions, Candès and Donoho first designed the curvelet transform [6]. Curvelet elements are constructed by applying a parabolic dilation (scale) parameter $a$ ($0<a<1$), a location (translation) parameter $t$, and an orientation $\theta$ to a mother function $\psi$, as shown in Eqs (3.2) and (3.3):

    $$\psi_{a,\theta,t}(x) = a^{-3/4}\,\psi\!\left(D_a R_\theta (x - t)\right) \qquad (3.2)$$
    $$D_a = \begin{pmatrix} 1/a & 0 \\ 0 & 1/\sqrt{a} \end{pmatrix} \qquad (3.3)$$

    Here, $D_a$ is the parabolic scaling matrix and $R_\theta$ is a rotation by $\theta$ radians.

    Shearlets, another multiscale directional transform, also follow the basic principles of wavelets, except that the isotropic dilation is replaced by anisotropic dilation and shearing. One of the unique characteristics of shearlets is the use of shearing to control direction, in contrast to the rotation used by curvelets. Shearlet elements can be represented by $\psi_{a,s,t}$, where $a$, $s$, and $t$ are the dilation, shearing, and translation variables respectively, and can be expressed by Eqs (3.4)–(3.6):

    $$\psi_{a,s,t}(x) = a^{-3/4}\,\psi\!\left(A_a^{-1} S_s^{-1}(x - t)\right) \qquad (3.4)$$
    $$A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \quad a > 0 \qquad (3.5)$$
    $$S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}, \quad s \in \mathbb{R} \qquad (3.6)$$

    Here, $A_a$ is the scaling (dilation) matrix and $S_s$ is the shear matrix.

    Bendlets can be regarded as an extended, second-order shearlet system in which bending is an extra parameter. The basic difference between bendlets and classical shearlets appears in the α-scaling. If $a>0$ and $0 \le \alpha \le 1$, the α-scaling operation is denoted by Eq (3.7):

    $$A_{a,\alpha} := \begin{pmatrix} a & 0 \\ 0 & a^{\alpha} \end{pmatrix} \qquad (3.7)$$

    Here, the variable α expresses the scaling anisotropy: $\alpha=1$ indicates isotropic scaling, $\alpha=1/2$ means parabolic scaling, and $\alpha=0$ leads to pure directional scaling. Bendlets can accurately determine the direction, location, and curvature of a discontinuity curve with an anisotropic scaling of $1/3<\alpha<1/2$.

    For $l \in \mathbb{N}$ and $r = (r_1, \dots, r_l)^T \in \mathbb{R}^l$, the $l$th-order shearing operator $S_r^{(l)}: \mathbb{R}^2 \to \mathbb{R}^2$ is represented by Eq (3.8):

    $$S_r^{(l)}(x) := \begin{pmatrix} 1 & \sum_{m=1}^{l} r_m x_2^{\,m-1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \qquad (3.8)$$

    Here, $l=1$ yields the standard shearing matrix, while for $l=2$ the operator combines the characteristics of both shearing and bending (a numerical illustration is sketched below).
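    The following sketch illustrates numerically how Eqs (3.7) and (3.8) act: a bendlet-like atom is obtained by sampling a mother function on coordinates transformed by $A_{a,\alpha}^{-1}$ and $S_r^{(2)}$. The Gaussian-modulated mother function and the parameter values are illustrative assumptions, not the generator used in the paper.

```python
# alpha-scaling (Eq 3.7) and second-order shearing (Eq 3.8, l = 2) applied to a grid.
import numpy as np

def alpha_scaling(a, alpha):
    return np.diag([a, a ** alpha])                          # Eq (3.7)

def shear_2nd_order(x1, x2, s, b):
    # Eq (3.8) with l = 2 and r = (s, b): s controls shearing, b controls bending.
    return x1 + (s + b * x2) * x2, x2

def bendlet_atom(size=64, a=0.25, alpha=0.4, s=0.3, b=2.0, t=(0.0, 0.0)):
    xs = np.linspace(-1, 1, size)
    X1, X2 = np.meshgrid(xs, xs, indexing="ij")
    u1, u2 = shear_2nd_order(X1 - t[0], X2 - t[1], s, b)      # bend + shear first
    Ainv = np.linalg.inv(alpha_scaling(a, alpha))             # then anisotropic rescaling
    v1, v2 = Ainv[0, 0] * u1, Ainv[1, 1] * u2
    # mother function (assumption): oscillatory along x1, smooth decay overall
    return v1 * np.exp(-(v1 ** 2 + v2 ** 2))

atom = bendlet_atom()    # a 64x64 bendlet-like element, bent by the parameter b
```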

    If $\psi \in L^2(\mathbb{R}^2)$, $l \in \mathbb{N}$, and $\alpha > 0$, then the unitary representation of the higher-order shearlet group $\mathbb{S}^{(l,\alpha)}$ takes the natural form shown in Eq (3.9):

    $$\pi^{(l,\alpha)}(a,s,t)\,\psi = a^{-(1+\alpha)/2}\,\psi\!\left(A_{a,\alpha}^{-1}\, S_s^{(l)}(\cdot - t)\right) \qquad (3.9)$$

    The parameters $a$, $s$, and $t$ indicate the scale, shear (orientation), and location variables respectively. The $l$th-order α-shearlet system is then denoted as Eq (3.10):

    $$SH^{(l,\alpha)}\psi = \left\{ \pi^{(l,\alpha)}(a,s,t)\,\psi \;\middle|\; (a,s,t) \in \mathbb{S}^{(l,\alpha)} \right\} \qquad (3.10)$$

    When $l=2$, the above system is the second-order shearlet transform, or bendlet transform. The bendlet system can therefore be expressed as Eq (3.11), where $b$ takes the role of bending as an extra parameter.

    $$BS^{(\alpha)}\psi = SH^{(2,\alpha)}\psi = \left\{ \psi_{a,s,b,t} \;\middle|\; (a,s,b,t) \in \mathbb{S}^{(2,\alpha)} \right\} \qquad (3.11)$$

    Bendlets differ from other directional multiscale transforms because of this extra bending parameter, which helps to detect curve singularities more efficiently. The shape of the bendlet atoms (elements) varies with the parameter values, as depicted in Figures 4 to 7. In Figure 4(a), Figure 6(b), (c), and Figure 7(a), the coefficient values are close to zero because only a small amount of edge energy is included in the bendlet area. In Figure 4(c), Figure 5(c), and Figure 7(b), the bendlet coefficients are moderately high because a large area of edge energy is enclosed by the bendlet. The coefficient response in Figure 6(a) is very high because the entire bendlet frame is enclosed by the edge energy.

    Figure 4.  Bendlets with fixed scale, shear, and location but varying bending.
    Figure 5.  Bendlets with fixed bending, shear, and location but varying scale.
    Figure 6.  Bendlets with fixed scale, shear, and bending but varying location.
    Figure 7.  Bendlets with fixed scale, location, and bending but varying shear.

    In the case of blood vessel segmentation from a retinal fundus image, the aim is to extract vessel-like structures. As can be seen from a retinal image, a large portion of the image is occupied by vessels, and they usually do not follow any specific direction: the vessels change direction abruptly and form curve-like edges throughout the whole image. Traditional directional wavelets can detect this type of curvature through the coefficient responses of their elements, but since the shape of these elements is square-like, a huge number of coefficients is required to fully capture all the vessels, which also increases the noise inside the image. On the other hand, the curvelet and shearlet transforms create parabolic, rectangular-shaped elements according to the parabolic scaling principle (width ≈ length²). As a result, fewer coefficients are required to fully detect the vessels of an image. A further reduction in the number of coefficients is possible with bendlets, where the shearlet elements are bent in different directions by adding one extra bending parameter b. A comparison among traditional wavelets, curvelets, and bendlets in terms of the number of elements required to capture a small vessel-like region is shown in Figure 8, where the wavelet needs around 15 coefficients, the curvelet needs 5, and the bendlet requires only 3 to fully detect the curved region.

    Figure 8.  Capturing vessel-like curve with wavelets, curvelets, and bendlets. (a) Huge amount of wavelet coefficients are required to detect the vessel whereas with (b) few curvelet/shearlet coefficients are needed. Further reduction is possible in the number of coefficients with (c) bendlets.

    After applying the bendlet transform, soft thresholding is performed on the coefficient values to retain only the large responses of the image. Because of the curve-like structure of the bendlet elements, a high response is produced wherever the process finds a vessel-related shape. Different reconstructed output images can be obtained for different scale values of the bendlet transform; however, scale values of 3, 4, and 5 generate significantly accurate output while reducing unnecessary noise from the background. Finally, a feature vector of size four is constructed: three features come from the three aforementioned bendlet scales, and the fourth is obtained by subtracting the background, estimated with an averaging filter of size 10×10, from the enhanced image. These four features are expressed in Eqs (3.12)–(3.15):

    $$f_1(x,y) = I_{sub}(x,y) \qquad (3.12)$$
    $$f_2(x,y) = I^{bendlet}_{scale=3}(x,y) \qquad (3.13)$$
    $$f_3(x,y) = I^{bendlet}_{scale=4}(x,y) \qquad (3.14)$$
    $$f_4(x,y) = I^{bendlet}_{scale=5}(x,y) \qquad (3.15)$$

    Here, $I_{sub}(x,y)$ is the pixel value obtained by subtracting the average-filtered image from the enhanced image, and $I^{bendlet}(x,y)$ is the image reconstructed from the modified bendlet coefficient values for the given scale. The individual output for each of these features is shown in Figure 9.

    Figure 9.  Individual output of each of the features. (a) Output after performing average filter operation, (b) Reconstructed image after applying Bendlet transform with scale = 3, (c) Bendlet transform with scale = 4, and (d) Bendlet transform with scale = 5.

    In the feature development stage, each pixel from an input image is characterized by a vector in a 4-D feature space as Eq (3.16):

    F(x,y)=[f1(x,y),f2(x,y),f3(x,y),f4(x,y)] (3.16)
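    A minimal sketch of the construction of the four per-pixel features of Eqs (3.12)–(3.16) is given below. The bendlet reconstruction at each scale is represented by a stand-in difference-of-Gaussians response so the sketch runs; a real implementation would use a bendlet/second-order shearlet toolbox. The soft-threshold value is also an illustrative assumption.

```python
# Per-pixel 4-D feature vector: background subtraction + three scale responses.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def soft_threshold(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)        # keep only large responses

def bendlet_response(img, scale, thr=0.05):
    # Placeholder for the bendlet reconstruction at the given scale: decompose,
    # soft-threshold the coefficients, reconstruct. A difference-of-Gaussians
    # response stands in here purely for illustration.
    resp = gaussian_filter(img, sigma=scale) - gaussian_filter(img, sigma=2 * scale)
    return soft_threshold(resp, thr)

def pixel_features(enhanced):
    f1 = enhanced - uniform_filter(enhanced, size=10)          # Eq (3.12): background subtraction
    f2 = bendlet_response(enhanced, scale=3)                   # Eq (3.13)
    f3 = bendlet_response(enhanced, scale=4)                   # Eq (3.14)
    f4 = bendlet_response(enhanced, scale=5)                   # Eq (3.15)
    return np.stack([f1, f2, f3, f4], axis=-1)                 # Eq (3.16): one 4-D vector per pixel
```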

    The purpose of the classification stage is to assign each candidate pixel to either the Cv (vessel) or Cnv (non-vessel) class. An efficient and frequently used approach in machine learning is ensemble classification, which combines multiple individual learning models to yield an aggregate model. The major advantage of an ensemble classifier is its ability to avoid the mistakes of a single classification model: on a given data set, one individual learning model may over-fit a specific portion of the data, whereas another model can overcome this limitation, thus increasing the prediction performance. There are mainly two types of ensemble models: Bagging (subsampling the training data) and Boosting (re-learning with different weights on misclassified instances).

    Bagging (Bootstrap Aggregation):

    Bagging, or bootstrap aggregation of the sample data, is used primarily to reduce overfitting and variance. The bagging concept is mostly applied with decision tree methods. Each weak learner in bootstrap aggregation carries equal weight and is trained on a randomly selected subset of the training set. For instance, the random forest method combines random decision trees with bagging to achieve high classification accuracy.

    Boosting:

    Boosting is a weighted-average approach that can be utilized to mitigate bias as well as variance. Boosting algorithms combine multiple weak learners into a final powerful learner. The weights of the training examples are recalculated after each round so that misclassified examples receive a higher weight. Using the knowledge of previously mislabeled examples, boosting algorithms assemble the learners sequentially; the overall classification performance is enhanced by weighting earlier mislabeled examples more heavily.

    Among established ensemble aggregation methods, we have experimented with five prominent classifiers: AdaBoostM1, LogitBoost, GentleBoost, RUSBoost, and Bag. A decision tree weak learner is used in the ensemble process, and the number of ensemble learning cycles is 200.
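    The ensemble training step can be sketched as follows. The paper uses MATLAB ensemble classifiers; the scikit-learn AdaBoost setup below (with a decision-tree weak learner and 200 boosting cycles) is an analogous assumption, not the authors' exact configuration.

```python
# Train an AdaBoost ensemble on the per-pixel feature vectors and classify an image.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_vessel_classifier(features, labels, fov_mask):
    # features: H x W x 4 array from pixel_features(); labels: H x W binary ground truth
    X = features[fov_mask]                      # use only pixels inside the field of view
    y = labels[fov_mask].astype(int)
    clf = AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=1),    # decision-tree weak learner
        n_estimators=200)                       # 200 ensemble learning cycles
    return clf.fit(X, y)

def classify_image(clf, features, fov_mask):
    pred = np.zeros(fov_mask.shape, dtype=bool)
    pred[fov_mask] = clf.predict(features[fov_mask]).astype(bool)
    return pred                                 # binary vessel map before post-processing
```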

    The resultant image acquired from the classification stage exhibits two common pitfalls. Firstly, the vessel map may contain a few discontinuous line segments due to obstacles. Secondly, some isolated points may arise because of pathological objects. The following two steps resolve these problems as far as possible and improve the accuracy slightly.

    After the classification process, some broken vessel segments can be observed, which reduces the accuracy of the outcome. To link the broken segments along the vessel pixels, a multiscale line detection algorithm is applied. A window of size 15×15 is considered at each pixel position, in which lines of length L are oriented at 12 distinct orientations (with an interval of 15°). Three different values, 7, 11, and 15, are taken for L and linearly combined, as proposed by Nguyen et al. [15], to yield the connected vessel tree (a sketch is given below). Figure 10 depicts the output before and after employing the line detection scheme.
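    A minimal sketch of the multiscale line detector, in the spirit of Nguyen et al. [15], is shown below: line responses of lengths 7, 11, and 15 at 12 orientations inside a 15×15 window are linearly combined with the input. The kernel construction and final normalization are illustrative assumptions.

```python
# Multiscale line detection: strongest oriented line average minus the window mean,
# accumulated over line lengths 7, 11 and 15.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def line_kernel(length, angle_deg, window=15):
    k = np.zeros((window, window))
    c, t = window // 2, np.deg2rad(angle_deg)
    for r in np.linspace(-(length - 1) / 2, (length - 1) / 2, length):
        i, j = int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))
        k[i, j] = 1.0
    return k / k.sum()                                   # mean intensity along the line

def multiscale_line_response(img, lengths=(7, 11, 15), window=15, n_angles=12):
    img = img.astype(float)
    win_mean = uniform_filter(img, size=window)           # local window average
    total = img.copy()
    for L in lengths:
        responses = [convolve(img, line_kernel(L, a))      # average along each oriented line
                     for a in np.arange(0, 180, 180 / n_angles)]  # 12 orientations, 15° apart
        total += np.max(responses, axis=0) - win_mean       # strongest line minus window mean
    return total / (len(lengths) + 1)                       # linear combination across scales
```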

    Figure 10.  A part of output image (a) after classification process and (b) after performing multiscale line detection process.

    Here, the area of each connected region is measured with the help of the morphological area open operation. A group of connected pixels is removed from the final output if the area of that region is below 40 pixels. A sample output after executing the area open operation is given in Figure 11.
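    The removal of isolated objects amounts to a morphological area opening; a one-call sketch using scikit-image is shown below, with the 40-pixel threshold taken from the text.

```python
# Discard connected regions smaller than 40 pixels from the binary vessel map.
from skimage.morphology import remove_small_objects

def remove_isolated_objects(vessel_map, min_area=40):
    return remove_small_objects(vessel_map.astype(bool), min_size=min_area)
```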

    Figure 11.  Output of isolated irrelevant object removal stage (a) Before morphological area open operation and (b) after operating morphological area open function.

    The proposed method does not require heavy computational power. The proposed idea was implemented in MATLAB R2015a on a 2.20 GHz processor with 8.00 GB of RAM and no GPU involvement. Two openly accessible benchmark databases, DRIVE [39] and STARE [10], are used to assess the algorithmic performance of the proposed approach. Moreover, to demonstrate the robustness of our approach, a comparative analysis is provided in the latter part of this section.

    The DRIVE (Digital Retinal Images for Vessel Extraction) dataset was compiled from a diabetic retinopathy screening program of 400 subjects. With the help of medical experts, 40 ground truth images were created, of which the first half are selected as test images and the remaining half for training. The STARE (STructured Analysis of the Retina) database was developed at the University of California. It consists of 20 retinal color photographs with two sets of ground truth produced by two different pathological experts. Here, the first 50% of the images are used as the training set and the remaining 50% as test data in our experiment. An overall summary of the two datasets is given in Table 1.

    Table 1.  Summary of the DRIVE and STARE datasets.
    Image information                     DRIVE                           STARE
    Number of images with ground truth   40                              20
    Camera used                           Canon CR5 non-mydriatic 3CCD    TopCon TRV-50
    Field of view (FOV)                   45 degrees                      35 degrees
    Image size                            565 x 584 pixels                605 x 700 pixels
    Image format                          TIF                             PPM
    Bits per color channel                8                               8
    Number of pathology images            7                               10


    Four types of outcomes can be observed in a two-class categorization problem. They are:

    (1) True Positive (TP): correct identification

    (2) True Negative (TN): correct rejection

    (3) False Positive (FP): incorrect identification

    (4) False Negative (FN): incorrect rejection

    The fundamental statistical measures Sensitivity (recall), Specificity (true negative rate), Precision (positive predictive value), and Accuracy are evaluated to assess the binary classification performance based on these outcomes. In healthcare diagnosis, sensitivity (the true positive rate, or recall) is the correct identification of samples with the disease, whereas specificity (the true negative rate) is the correct identification of samples without the disease. Precision (the positive predictive value) is the fraction of identified items that are relevant; high precision means that an algorithm returns substantially more relevant results than irrelevant ones. Accuracy is the ratio of the number of correctly identified pixels to the number of pixels inside the FOV. Sensitivity (Sn), Specificity (Sp), Precision (Pr), and Accuracy (Acc) are measured by Eqs (4.1)–(4.4) respectively.

    $$\mathrm{Sensitivity},\ Sn = \frac{TP}{TP + FN} \qquad (4.1)$$
    $$\mathrm{Specificity},\ Sp = \frac{TN}{TN + FP} \qquad (4.2)$$
    $$\mathrm{Precision},\ Pr = \frac{TP}{TP + FP} \qquad (4.3)$$
    $$\mathrm{Accuracy},\ Acc = \frac{TP + TN}{TP + TN + FP + FN} \qquad (4.4)$$
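    These four measures can be computed directly from the predicted binary vessel map and its ground truth inside the FOV, as in the short sketch below.

```python
# Sensitivity, specificity, precision and accuracy of Eqs (4.1)-(4.4).
import numpy as np

def segmentation_metrics(pred, truth, fov_mask):
    p, t = pred[fov_mask].astype(bool), truth[fov_mask].astype(bool)
    tp, tn = np.sum(p & t), np.sum(~p & ~t)
    fp, fn = np.sum(p & ~t), np.sum(~p & t)
    return {
        "Sn": tp / (tp + fn),                        # Eq (4.1) sensitivity / recall
        "Sp": tn / (tn + fp),                        # Eq (4.2) specificity
        "Pr": tp / (tp + fp),                        # Eq (4.3) precision
        "Acc": (tp + tn) / (tp + tn + fp + fn),      # Eq (4.4) accuracy
    }
```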

    Table 2 shows the relationship of vessel classification with the above mentioned four events.

    Table 2.  Vessel classification outcomes.
                               Vessel pixels in ground truth    Background pixels in ground truth
    Vessel classified          True Positive (TP)               False Positive (FP)
    Vessel not classified      False Negative (FN)              True Negative (TN)


    Firstly, the proposed method is assessed on the images of the DRIVE database. In the first part of the evaluation, the algorithm is trained on the 20 training-set images with the five aforementioned ensemble classifiers. Next, similar feature vectors are designed for the test-set images and classified using the trained models. All ensemble classifiers produce almost similar results in terms of sensitivity, specificity, precision, and accuracy. However, AdaBoost slightly outperforms the other classifiers, and for that reason the individual performance measures for each image of the DRIVE dataset shown in Figure 12 are based on the outcome of the AdaBoost classifier. The average sensitivity, specificity, precision, and accuracy achieved on this database are 0.7588, 0.9748, 0.8226, and 0.9456 respectively.

    Figure 12.  Graphical performance analysis on DRIVE Database test set images.

    Secondly, the same approach is performed on the STARE dataset, where the average sensitivity, specificity, precision, and accuracy achieved are 0.7798, 0.9746, 0.7956, and 0.9528 respectively. The detailed results for the individual images are presented in Figure 13.

    Figure 13.  Graphical performance analysis on STARE Database test set images.

    A group of ensemble classifiers is examined to find the most accurate result for blood vessel segmentation: AdaBoost, LogitBoost, GentleBoost, RUSBoost, and Bag. Table 3 reports the complete results of these classifiers in terms of average Sn, Sp, Pr, and Acc on the DRIVE and STARE databases. RUSBoost achieves the highest sensitivity on both datasets, whereas AdaBoost and LogitBoost obtain the best specificity and precision on STARE and DRIVE respectively. Overall, considering accuracy, AdaBoost clearly outperforms the other classifiers in our application and is therefore employed in the final evaluation.

    Table 3.  Comparison of the proposed method with different ensemble classifiers.
    Ensemble classifier   Dataset   Avg. Sn   Avg. Sp   Avg. Pr   Avg. Acc
    AdaBoostM1            DRIVE     0.7588    0.9748    0.8226    0.9456
    AdaBoostM1            STARE     0.7798    0.9746    0.7956    0.9528
    LogitBoost            DRIVE     0.7374    0.9777    0.8361    0.9453
    LogitBoost            STARE     0.7949    0.9712    0.7748    0.9508
    GentleBoost           DRIVE     0.7605    0.9728    0.8122    0.9442
    GentleBoost           STARE     0.7954    0.9706    0.7709    0.9502
    RUSBoost              DRIVE     0.7941    0.9660    0.7844    0.9428
    RUSBoost              STARE     0.8397    0.9569    0.7105    0.9429
    Bag                   DRIVE     0.7595    0.9722    0.8085    0.9435
    Bag                   STARE     0.8021    0.9685    0.7650    0.9488


    Our proposed model is also compared with some promising existing works. The comparison of our methodology with other recent methods is shown in Tables 4 and 5 for the DRIVE and STARE databases respectively. For easier comparison, the list is grouped according to the categories of the literature review section, in ascending order of publication year. Average sensitivity (Sn), specificity (Sp), and accuracy (Acc) are the performance metrics used for the comparative analysis.

    Table 4.  Performance comparison of retinal blood vessel segmentation methods on the DRIVE database.
    No   Category                                        Technique          Avg. Sn   Avg. Sp   Avg. Acc
    1    Matched filter                                  Chaudhuri [2]      0.2663    0.9901    0.8773
    2    Matched filter                                  Zhang [11]         0.7120    0.9724    0.9382
    3    Matched filter                                  Odstrcilik [12]    0.7120    0.9724    0.9382
    4    Matched filter                                  Chakraborti [13]   0.7205    0.9579    0.9370
    5    Multi-scale approach                            Vlachos [14]       0.747     0.955     0.929
    6    Multi-scale approach                            Nguyen [15]        0.7322    0.9659    0.9407
    7    Multi-scale approach                            Hou [16]           0.7354    0.9691    0.9415
    8    Multi-scale approach                            Yue [17]           0.7528    0.9731    0.9447
    9    Morphological processing                        Mendonca [4]       0.7344    0.9764    0.9452
    10   Morphological processing                        Miri [18]          0.7352    0.9795    0.9458
    11   Morphological processing                        Fraz [19]          0.7152    0.9759    0.9430
    12   Pattern classification (wavelet-based)          Soares [25]        0.7332    0.9782    0.9461
    13   Pattern classification (wavelet-based)          Xu [26]            0.7760    -         0.9328
    14   Pattern classification (wavelet-based)          Fraz [27]          0.7406    0.9807    0.9480
    15   Pattern classification (image feature-based)    Marin [23]         0.7067    0.9801    0.9452
    16   Pattern classification (image feature-based)    Fan [24]           0.7179    0.9749    0.9414
    17   Other wavelet/filter-based approach             Ghadiri [28]       0.2663    0.9901    0.8773
    18   Other wavelet/filter-based approach             Bankhead [29]      0.7027    -         0.9371
    19   Other wavelet/filter-based approach             Azzopardi [30]     0.7655    0.9704    0.9442
    20   Other wavelet/filter-based approach             Levet [31]         0.728     0.971     0.940
    21                                                   Proposed method    0.7588    0.9748    0.9456

    Table 5.  Performance comparison of retinal blood vessel segmentation methods on the STARE database.
    No   Category                                        Technique          Avg. Sn   Avg. Sp   Avg. Acc
    1    Matched filter                                  Chaudhuri [2]      0.2846    0.9873    0.9142
    2    Matched filter                                  Zhang [11]         0.7177    0.9753    0.9484
    3    Matched filter                                  Odstrcilik [12]    0.7847    0.9512    0.9341
    4    Matched filter                                  Chakraborti [13]   0.6786    0.9586    0.9379
    5    Multi-scale approach                            Vlachos [14]       0.747     0.955     0.929
    6    Multi-scale approach                            Nguyen [15]        0.7317    0.9613    0.9324
    7    Multi-scale approach                            Hou [16]           0.7348    0.9652    0.9336
    8    Morphological processing                        Mendonca [4]       0.6996    0.9730    0.9440
    9    Morphological processing                        Fraz [19]          0.7311    0.9680    0.9442
    10   Morphological processing                        Gao [21]           0.7581    0.9550    0.9401
    11   Pattern classification (wavelet-based)          Soares [25]        0.7207    0.9747    0.9479
    12   Pattern classification (wavelet-based)          Fraz [27]          0.7548    0.9763    0.9534
    13   Pattern classification (image feature-based)    Marin [23]         0.6944    0.9819    0.9526
    14   Pattern classification (image feature-based)    Fan [24]           0.6996    0.9787    0.9488
    15   Other wavelet/filter-based approach             Azzopardi [30]     0.7716    0.9701    0.9497
    16   Other wavelet/filter-based approach             Levet [31]         0.7321    0.9634    0.9412
    17                                                   Proposed method    0.7798    0.9746    0.9528


    Our proposed method is also compared with some promising research works in a cross-training evaluation, which further demonstrates the robustness of our algorithm. In the cross-training experiment, the test data come entirely from a different dataset (domain). Table 6 reports the comparative average accuracy when the DRIVE test-set images are evaluated after training on the STARE images, and vice versa. Furthermore, the ten pathological images of the STARE database are evaluated separately and achieve an acceptable score on the performance measures. A comparison with some recent works is given in Table 7, and the output for one pathological image is depicted in Figure 14. Figure 14(c), (d) show the output of the shearlet and bendlet transforms respectively, where it can be seen that the latter detects more thin vessels from the background image.

    Table 6.  Performance comparison of results with cross training in terms of Avg. Accuracy.
    Method DRIVE (Training on STARE) STARE (Training on DRIVE)
    Soares [25] 0.9397 0.9327
    Fraz [27] 0.9456 0.9493
    Ricci [22] 0.9266 0.9464
    Marin [23] 0.9448 0.9528
    Proposed Method 0.9450 0.9487

    Table 7.  Performance comparison of results on abnormal retinas (STARE database).
    Method Avg. Sn Avg. Sp Avg. Acc
    Zhang [11] 0.7166 0.9673 0.9439
    Saffarzadeh [40] 0.7166 0.9672 0.9438
    Mendonca [4] 0.6801 0.9694 0.9426
    Soares [25] 0.7181 0.9765 0.9500
    Fraz [27] 0.7262 0.9764 0.9511
    Proposed Method 0.7406 0.9728 0.9452

    Figure 14.  Original image from STARE (a), ground truth (b), output of Shearlet approach (c) and output of proposed method (d).

    Automatic and accurate retinal blood vessel segmentation contributes to the diagnosis of various optic diseases. As the number of patients and the need for vessel segmentation grow day by day, an automated system can be an alternative to manual analysis, although the varying sizes and shapes of vessels make such a system very challenging to develop. Successfully introducing bendlets for the first time in the domain of medical imaging opens the possibility of using a comparable strategy in increasingly complex clinical applications. Moreover, analyzing and comparing the group of ensemble classifiers helps to decide the best option for training and testing new dataset images. In the future, we want to extend our research to develop automated segmentation systems for other problems, such as 3D MR brain images. Detection of other objects in retinal images, such as hemorrhages, exudates, the optic disc, and cotton wool spots, as well as artery-vein classification and the measurement of tortuosity, vessel width, and branching angle, are important topics of interest that could improve the autonomous diagnosis of retinal images. Later on, 3D shape analysis of individual objects and prediction of disease progression could be advanced foci of future study. The approach could also be explored for diseases such as Alzheimer's and amyotrophic lateral sclerosis (ALS), but researchers first need to collect robust clinical datasets.

    The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no (RG-1441-394).

    All authors declare no conflicts of interest in this paper.

    [1] H. Abboud, C. Alkosseifi, J-P. Chehab, A stabilized bi-grid method for Allen-Cahn equation in Finite Elements, accepted to be published in Computational and Applied Mathematics, 2018.
    [2] H. Abboud, V. Girault, T. Sayah, A second order accuracy for a full discretized time-dependent Navier-Stokes equations by a two-grid scheme, Numer. Math., 114 (2009), 189–231.
    [3] H. Abboud and T. Sayah, A full discretization of a time-dependent two dimensional Navier-Stokes equations by a two-grid scheme, M2AN Math. Model. Numer. Anal., 42 (2008), 141–174.
    [4] I. Babuška, The finite element method with Lagrangian multipliers, Numer. Math., 20 (1973), 179–192.
    [5] M. Bercovier and M. Engelman, A finite element for the numerical solution of viscous incompressible flows, J. Comput. Phys., 30 (1979), 181–201.
    [6] A. Bousquet, M. Marion, M. Petcu, et al. Multilevel finite volume methods and boundary conditions for geophysical flows, Comput. Fluids, 74 (2013), 66–90.
    [7] F. Brezzi, On the existence, uniqueness and approximation of saddle-point problem arising from Lagrangian multipliers, RAIRO Anal. Numér., 8 (1974), 129–151.
    [8] F. Brezzi, J. Pitkaranta, On the Stabilization Finite Elements Approximation of the Stokes Equation, In: Effcient Solution of Elliptic Problems, Proceedings of a GAMM-Seminar, Kiel, (1984), 11–19.
    [9] C. H. Bruneau and C. Jouron, An effcient scheme for solving steady incompressible Navier-Stokes equations, J. Comput. Phys, 89 (1990), 389–413.
    [10] C. Calgaro, J. Laminie, R. Temam, Dynamical multilevel schemes for the solution of evolution equations by hierarchical finite element discretization, Appl. Numer. Math., 23 (1997), 403–442.
    [11] C. Calgaro, A. Debussche, J. Laminie, On a multilevel approach for the two-dimensional Navier- Stokes equations with finite elements, Int. J. Numer. Meth. Fl., 27 (1998), 241–258.
    [12] C. Calgaro, J.-P. Chehab, J. Laminie, et al. Schémas multiniveaux pour les équations d'ondes, (French) [Multilevel schemes for waves equations], ESAIM Proc., 27 (2009), 180–208.
    [13] J.-P. Chehab, B. Costa, Time explicit schemes and spatial finite differences splittings, J. Sci. Comput., 20 (2004), 159–189.
    [14] B. Costa. L. Dettori, D. Gottlieb, et al. Time Marching Multilevel Techniques for Evolutionary Dissipative Problems, SIAM J. Sci. comput., 23 (2001), 46–65.
    [15] A. J. Chorin, Numerical solution of the Navier-Stokes equations. Math. Comput., 22 (1968), 745–762.
    [16] T. Dubois, F. Jauberteau, R. Temam, Dynamic multilevel methods and the numerical simulation of homogeneous and non homogeneous turbulence, Cambridge Academic Press.
    [17] T. Dubois, F. Jauberteau, R. Temam, et al. Multilevel schemes for the shallow water equations J. Comput. Phys., 207 (2005), 660–694.
    [18] S. Faure, J. Laminie, R. Temam, Finite Volume Discretization and Multilevel Methods in Flow Problems, J. Sci. Comput., 25 (2005), 231–261.
    [19] FreeFem++. Available from: http://www.freefem.org.
    [20] U. Ghia, K. N. Ghia and C. T. Shin, High-Re solutions for incompressible flow using the Navier- Stokes equations and a multigrid method. J. Comput. Phys., 48 (1982), 387–411.
    [21] V. Girault and J.-L. Lions, Two-grid finite-element schemes for the transient Navier-Stokes equations, ESAIM: Mathematical Modelling and Numerical Analysis, 35 (2001), 945–980.
    [22] K. Goda, A multistep technique with implicit difference schemes for calculating two- or threedimensional cavity flows, J. Comput. Phys., 30 (1979), 76–95.
    [23] O. Goyon, High-Reynolds number solutions of Navier-Stokes equations using incremental unknowns. Comput. Method. Appl. M., 130 (1996), 319–335.
    [24] J. L. Guermond, P. Minev and J. Shen, An overview of projection methods for incompressible flows, Comput. Method. Appl. M., 195 (2006), 6011–6045.
    [25] J. L. Guermond and J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (2003), 112–134.
    [26] J. L. Guermond and J. Shen, Quelques résultats nouveaux sur les méthodes de projection. Comptes Rendus de l'Académie des Sciences-Series Ⅰ-Mathematics, 333 (2001), 1111–1116.
    [27] W. Hackbusch, Multi-grid methods and applications, Berlin, Springer, 1985.
    [28] Y. He and K. M. Liu, Multi-level spectral Galerkin method for the NavierStokes equations, Ⅱ: time discretization, Adv. Comput. Math., 25 (2006), 403–433.
    [29] F. Jauberteau, R. Temam and J. Tribbia, Multiscale/fractional step schemes for the numerical simulation of the rotating shallow water flows with complex periodic topography, J. Comput. Phys., 270 (2014), 506–531.
    [30] W. Layton, A two-level discretization method for the Navier-Stokes equations. Comput. Math. Appl., 26 (1993), 33–38.
    [31] W. Layton, Energy Dissipation in the Smagorinsky Model of Turbulence, Appl. Math. Lett., 59 (2016), 56–59.
    [32] W. Layton, R. Lewandowski, Analysis of an Eddy Viscosity Model for Large Eddy Simulation of Turbulent Flows, J. math. Fluid Mech., 4 (2002), 374–399.
    [33] M. Marion and R. Temam, Nonlinear Galerkin Methods, SIAM J. Numer. Anal., 26 (1989), 1139–1157.
    [34] M. Marion and R. Temam, Nonlinear Galerkin Methods: The Finite elements case, Numer. Math., 57 (1990), 205–226.
    [35] M. Marion and J. Xu, Error estimates on a new nonlinear Galerkin method based on two-grid finite elements, SIAM J. Numer. Anal., 32 (1995), 1170–1184.
    [36] M. Olshanskii, G. Lube, T. Heister, et al. Grad-div stabilization and subgrid pressure models for the incompressible Navier-Stokes equations, Comput. Method. Appl. M., 198 (2009), 3975–3988.
    [37] F. Pascal, Méthodes de Galerkin non linéaires en discrétisation par éléments finis et pseudospectrale. Application la mécanique des fluides, Université de Paris-Sud Orsay, 1992.
    [38] F. Pouit, Etude de schémas numériques multiniveaux utilisant les inconnues incrémentales dans le cas des différences finies: application à la mécanique des fluides, in french. Thèse, Université Paris 11, 1998.
    [39] J. Shen, X. Yang, Numerical Approximations of Allen-Cahn and Cahn-Hilliard Equations. Discrete Cont. Dyn-A, 28 (2010), 1669–1691.
    [40] R. Temam, Navier-Stokes equations, Revised version, North-Holland, Amsterdam, 1984.
    [41] R. Temam, Navier-Stokes equations, Theory and numerical analysis, North-Holland, Amsterdam, 1977.
    [42] R. Temam, Approximation d'équations aux dérivées partielles par des méthodes de décomposition, Séminaire Bourbaki 381, 1969/1970.
    [43] R. Temam, Une méthode d'approximation de la solution des équations de Navier-Stokes, B. Soc. Math. Fr., 98 (1968), 115–152.
    [44] L. J. P. Timmermans, P. D. Minev, F. N. Van De Vosse, An approximate projection scheme for incompressible flow using spectral elements, Int. J. Numer. Meth. Fl., 22 (1996), 673–688.
    [45] S. P. Vanka, Block-implicit multigrid solution of Navier-Stokes equations in primitive variables, J. Comput. Phys., 65 (1986), 138–158.
    [46] J. Xu, Some Two-Grid Finite Element Methods, Tech. Report, P.S.U, 1992.
    [47] J. Xu, A novel two-grid method of semilinear elliptic equations, SIAM J. Sci. Comput., 15 (1994), 231–237.
    [48] J. Xu, Two-grid discretization techniques for linear and nonlinear PDE, SIAM J. Numer. Anal., 33 (1996), 1759–1777.
    [49] H. Yserentant, On the multi-level splitting of finite element spaces, Numer. Math, 49 (1986), 379–412.
  • © 2018 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)