Research article

Edge detection of remote sensing image based on Grünwald-Letnikov fractional difference and Otsu threshold


  • Received: 17 October 2022; Revised: 12 December 2022; Accepted: 19 December 2022; Published: 09 January 2023
  • With the development of remote sensing technology, the resolution of remote sensing images is improving, the presentation of geomorphic information is becoming more and more abundant, and the difficulty of identifying and extracting edge information is also increasing. This paper demonstrates an algorithm to detect the edges of remote sensing images based on the Grünwald–Letnikov fractional difference and the Otsu threshold. First, a convolution difference mask with two parameters in four directions is constructed by using the definition of the Grünwald–Letnikov fractional derivative. Then, the mask is convolved with the gray image of the remote sensing image, and the edge detection image is obtained by binarization with the Otsu threshold. Finally, the influence of the two parameters and of the threshold value on the detection results is discussed. Compared with the results of other detectors on the NWPU VHR-10 dataset, the algorithm not only has a good visual effect but also performs well on quantitative evaluation indicators (binary graph similarity and edge pixel ratio).

    Citation: Chao Chen, Hua Kong, Bin Wu. Edge detection of remote sensing image based on Grünwald-Letnikov fractional difference and Otsu threshold[J]. Electronic Research Archive, 2023, 31(3): 1287-1302. doi: 10.3934/era.2023066




    Remote sensing images include all kinds of geomorphic information, such as roads, farmland and buildings. The edge detection of these different geomorphic features is of great significance for topographic survey, urban planning, precision agriculture and military analysis. However, with the development of remote sensing technology, the resolution of remote sensing images is improving, the presentation of geomorphic information is becoming more and more abundant, and the difficulty of identifying and extracting edge information is also increasing.

    In image processing, edges are the collections of pixels between adjacent areas of an image. They reflect the discontinuity of image features and contain a lot of structural information. Edge detection is an important part of image processing as well as a basic means of image segmentation [1], object detection [2] and information extraction [3]. Currently, the methods used for remote sensing image edge detection can be roughly divided into three categories. First, there is edge detection based on gradients: C. Xu et al. proposed multispectral image edge detection via the Clifford gradient [4], S. Amstutz et al. proposed edge detection using topological gradients [5], J. T. Tang et al. proposed edge detection based on the curvature of the gravity gradient tensor [6], and V. B. S. Prasath et al. proposed multiscale gradient maps augmented with Fisher information [7]. Second, there is edge detection based on transformations: H. H. Zhao et al. proposed optimal Gabor filter-based edge detection of high spatial resolution remotely sensed images [8], W. C. Lin et al. proposed edge detection in medical images with a quasi high-pass filter [9], G. B. Chen et al. proposed a road identification algorithm for remote sensing images based on the wavelet transform and a recursive operator [10], and A. Isar et al. proposed hyperanalytic wavelet-based robust edge detection [11]. Third, there is edge detection based on machine learning: M. Han et al. proposed an extreme learning machine based on cellular automata for edge detection in remote sensing images [12], L. Huang et al. proposed edge detection in UAV remote sensing images using a method integrating Zernike moments with clustering algorithms [13], Z. Qu et al. proposed image fusion using deep neural networks for image edge detection [14], and X. G. Zheng proposed a robot image edge detection method based on a Gaussian positive-negative radial basis neural network [15].
Edge detectors in the first category mostly use differentiation to detect discontinuities in gray value. Commonly used first-order differential detectors mainly include the Sobel detector [16] and the Canny detector [17], and second-order differential detectors include the Laplacian detector [18] and the Laplacian of Gaussian (LoG) detector [19]. These algorithms are easy to compute, but they have some disadvantages, such as discontinuous edge extraction and poor anti-noise performance. In view of the problems of these detectors, many researchers have made corresponding improvements. [20] introduced a quantum-improved Sobel edge detection algorithm with non-maximum suppression and double-threshold techniques for the novel enhanced quantum representation method. Based on the Canny and LoG detectors, [21] developed a new efficient edge detector (CLoG) for the detection of edge areas in digital images. The detection effect of these improved algorithms is indeed better, but they tend to mistakenly detect fake edges in the presence of excessive noise or artifacts [22].

    In recent years, fractional differentials have been applied in image enhancement [23], image data enhancement [24], image denoising [25], image recovery [26], image encryption [27], data hiding [28] and the extraction of image edge features [29]. As this range of applications has widened, the use of fractional differentials in image edge detection has gradually become an important research topic. Because a fractional difference can maintain the low-frequency contour features of the smooth region of an image and enhance the high-frequency edge and texture details of regions with large or insignificant grayscale changes [25], it has achieved good results in the edge detection of medical images [30], grayscale images [31], indexed images [32], true color images [33] and pseudo-color images [34]. However, its application in the edge detection of remote sensing images is almost absent and needs further research. In order to extract rich detail features, we try to apply the fractional differential method to the edge detection of remote sensing images.

    Table 1 shows a comparison of some fractional difference mask operators from the last three years. There are three classical definitions of fractional derivatives: Riemann–Liouville (R–L) [35], Caputo–Fabrizio (C–F) [36,37] and Grünwald–Letnikov (G–L) [38], which are the theoretical basis for the construction of masks. Considering that existing fractional difference mask operators basically only pay attention to changes in two directions (the X-direction and the Y-direction), we try to construct mask operators in four directions for the edge detection of remote sensing images. In addition, to better describe the variation, we also add a regulating parameter to the mask operator. Therefore, the main contributions of this paper are as follows:

    Table 1.  Comparison of fractional difference mask operators.
    Algorithm | Year | Theoretical Basis | Size of Mask | Number of Directions | Number of Parameters | Handling Object
    [30] | 2020 | C-F | 5×5 | 5 | 1 | Medical images
    [32] | 2020 | R-L | 3×3 | 2 | 1 | Grayscale images
    [33] | 2021 | G-L | 5×5 | 2 | 1 | Color images
    [34] | 2022 | G-L | N×N | 2 | 1 | Color images
    [39] | 2022 | G-L | 3×3 | 2 | 1 | Noise images


    1) A fractional difference mask operator with two parameters is constructed in four directions (X-direction, Y-direction, diagonal direction and inverse diagonal direction) based on the G-L fractional derivative.

    2) Based on fractional difference mask and Otsu threshold, edge detection of remote sensing images from the NWPU VHR-10 dataset [40] is carried out.

    3) The performance of the proposed method is compared with other detectors from the quantitative aspects of binary graph similarity (BGS) and edge pixel ratio (EPR).

    4) The influence of two parameters and threshold on edge detection results is discussed.

    The rest of the paper is organized as follows. Section 2 describes the related mathematical theory in detail (the Grünwald–Letnikov fractional difference, the fractional convolution mask and the Otsu threshold). Section 3 introduces the operation procedure of the edge detection algorithm, the sources of the remote sensing images and the performance measures for evaluating edge detection. The simulation results and discussion are illustrated in Section 4. Finally, the conclusion and future scope are summarized in Section 5.

    The G–L fractional derivative is defined as

    $$D^{\gamma}f(x)=\lim_{h\to 0}h^{-\gamma}\sum_{k=0}^{n}(-1)^{k}\frac{\Gamma(\gamma+1)}{\Gamma(k+1)\Gamma(\gamma-k+1)}f(x-kh), \qquad (2.1)$$

    where $\gamma$ is the order of fractional differentiation, $\gamma\in\mathbb{R}$ (and may be a fraction), $h$ is the step size, and $\Gamma(\cdot)$ is the gamma function, with $\Gamma(k)=(k-1)!$ for positive integers $k$. So, the fractional partial derivatives of $f(x,y)$ are defined as

    $$\frac{\partial^{\gamma}f(x,y)}{\partial x^{\gamma}}=\lim_{h\to 0}h^{-\gamma}\sum_{k=0}^{n}(-1)^{k}\frac{\Gamma(\gamma+1)}{\Gamma(k+1)\Gamma(\gamma-k+1)}f(x-kh,y), \qquad (2.2)$$
    $$\frac{\partial^{\gamma}f(x,y)}{\partial y^{\gamma}}=\lim_{h\to 0}h^{-\gamma}\sum_{k=0}^{n}(-1)^{k}\frac{\Gamma(\gamma+1)}{\Gamma(k+1)\Gamma(\gamma-k+1)}f(x,y-kh). \qquad (2.3)$$

    When h=1, the G–L fractional difference can be expressed as [41]

    $$D^{\gamma}f(x)\approx f(x)-\gamma f(x-1)+\frac{\gamma(\gamma-1)}{2}f(x-2)+\cdots+(-1)^{n}\frac{\Gamma(\gamma+1)}{n!\,\Gamma(\gamma-n+1)}f(x-n). \qquad (2.4)$$

    Further, the difference of fractional partial derivative can be written as

    $$\frac{\partial^{\gamma}f(x,y)}{\partial x^{\gamma}}\approx f(x,y)+(-\gamma)f(x-1,y)+\frac{\gamma(\gamma-1)}{2}f(x-2,y)+\cdots+(-1)^{n}\frac{\Gamma(\gamma+1)}{n!\,\Gamma(\gamma-n+1)}f(x-n,y), \qquad (2.5)$$
    $$\frac{\partial^{\gamma}f(x,y)}{\partial y^{\gamma}}\approx f(x,y)+(-\gamma)f(x,y-1)+\frac{\gamma(\gamma-1)}{2}f(x,y-2)+\cdots+(-1)^{n}\frac{\Gamma(\gamma+1)}{n!\,\Gamma(\gamma-n+1)}f(x,y-n). \qquad (2.6)$$

    When $\gamma=1$, the differences of the first partial derivatives are

    $$\frac{\partial f(x,y)}{\partial x}\approx f(x,y)-f(x-1,y),\qquad \frac{\partial f(x,y)}{\partial y}\approx f(x,y)-f(x,y-1).$$

    When $\gamma=2$, the differences of the second partial derivatives are

    $$\frac{\partial^{2}f(x,y)}{\partial x^{2}}\approx f(x,y)-2f(x-1,y)+f(x-2,y),\qquad \frac{\partial^{2}f(x,y)}{\partial y^{2}}\approx f(x,y)-2f(x,y-1)+f(x,y-2).$$
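The truncated expansions above can be checked numerically. A minimal sketch (the function names are ours): it generates the G-L coefficients through the recurrence $c_{k}=c_{k-1}\,(k-1-\gamma)/k$, which equals the closed-form ratio of gamma functions while avoiding its poles, and reduces to the classical first- and second-difference weights at $\gamma=1$ and $\gamma=2$:

```python
from math import gamma as Gamma

def gl_coeffs(gam, n):
    """First n+1 Grunwald-Letnikov coefficients (-1)^k Gamma(gam+1) /
    (Gamma(k+1) Gamma(gam-k+1)), built by the stable recurrence
    c_k = c_{k-1} * (k - 1 - gam) / k (c_0 = 1)."""
    coeffs = [1.0]
    for k in range(1, n + 1):
        coeffs.append(coeffs[-1] * (k - 1 - gam) / k)
    return coeffs

def gl_coeff_direct(gam, k):
    """Same coefficient straight from the gamma-function formula
    (valid when gam - k + 1 is not a non-positive integer)."""
    return (-1) ** k * Gamma(gam + 1) / (Gamma(k + 1) * Gamma(gam - k + 1))
```

For $\gamma=1$ the coefficients are $1,-1,0,\dots$ and for $\gamma=2$ they are $1,-2,1,0,\dots$, matching the integer-order differences above; for a fractional order such as $\gamma=0.7$ the recurrence agrees with the direct gamma-function evaluation.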

    The integer-order mask operators in the horizontal direction and the inverse diagonal direction are

    $$IOM_{H}=\begin{pmatrix}0&0&0\\-\rho&0&\rho\\0&0&0\end{pmatrix}$$

    and

    $$IOM_{I}=\begin{pmatrix}0&0&\rho\\0&0&0\\-\rho&0&0\end{pmatrix},$$

    where $\rho$ is called the regulating parameter and is a positive real number.

    The differential operator $\chi=\rho\,(f(x+1,y)-f(x-1,y))$ is constructed with $IOM_{H}$. We consider using the two smallest difference values to approximate the gradient value and construct the fractional differential mask operators. Because $f(x+1,y)-f(x-1,y)=2\frac{\partial f(x+1,y)}{\partial x}$,

    $$\chi_{\gamma}=2\rho\frac{\partial^{\gamma}f(x+1,y)}{\partial x^{\gamma}}=2\rho\left(\frac{\gamma(\gamma-1)}{2}f(x-1,y)+(-\gamma)f(x,y)+f(x+1,y)\right). \qquad (2.7)$$

    Then, the fractional convolution mask in the X–direction is

    $$FCM_{X}=\begin{pmatrix}0&0&0\\\rho\gamma(\gamma-1)&-2\rho\gamma&2\rho\\0&0&0\end{pmatrix}.$$

    Rotate the FCMX symmetrically to obtain the fractional convolution mask in the Y–direction, that is,

    $$FCM_{Y}=\begin{pmatrix}0&\rho\gamma(\gamma-1)&0\\0&-2\rho\gamma&0\\0&2\rho&0\end{pmatrix}.$$

    The differential operator $\chi=\rho\,(f(x+1,y-1)-f(x-1,y+1))$ is constructed with $IOM_{I}$. For symmetry, transform $\chi$ to

    $$\chi=\frac{\rho}{2}\big(f(x+1,y-1)-f(x+1,y+1)+f(x+1,y+1)-f(x-1,y+1)\big)+\frac{\rho}{2}\big(f(x+1,y-1)-f(x-1,y-1)+f(x-1,y-1)-f(x-1,y+1)\big). \qquad (2.8)$$

    Because $f(x+1,y+1)-f(x+1,y-1)=2\frac{\partial f(x+1,y+1)}{\partial y}$, $f(x+1,y+1)-f(x-1,y+1)=2\frac{\partial f(x+1,y+1)}{\partial x}$, $f(x+1,y-1)-f(x-1,y-1)=2\frac{\partial f(x+1,y-1)}{\partial x}$ and $f(x-1,y+1)-f(x-1,y-1)=2\frac{\partial f(x-1,y+1)}{\partial y}$,

    $$\chi=\rho\left(\frac{\partial f(x+1,y+1)}{\partial x}-\frac{\partial f(x+1,y+1)}{\partial y}+\frac{\partial f(x+1,y-1)}{\partial x}-\frac{\partial f(x-1,y+1)}{\partial y}\right). \qquad (2.9)$$

    Then,

    $$\begin{aligned}\chi_{\gamma}=\rho\Big(&f(x+1,y+1)+(-\gamma)f(x,y+1)+\frac{\gamma(\gamma-1)}{2}f(x-1,y+1)\\&-f(x+1,y+1)-(-\gamma)f(x+1,y)-\frac{\gamma(\gamma-1)}{2}f(x+1,y-1)\\&+f(x+1,y-1)+(-\gamma)f(x,y-1)+\frac{\gamma(\gamma-1)}{2}f(x-1,y-1)\\&-f(x-1,y+1)-(-\gamma)f(x-1,y)-\frac{\gamma(\gamma-1)}{2}f(x-1,y-1)\Big),\end{aligned} \qquad (2.10)$$

    and

    $$\chi_{\gamma}=\rho\Big((-\gamma)f(x,y-1)-\frac{\gamma(\gamma-1)-2}{2}f(x+1,y-1)+\gamma f(x-1,y)+\gamma f(x+1,y)+\frac{\gamma(\gamma-1)-2}{2}f(x-1,y+1)+(-\gamma)f(x,y+1)\Big). \qquad (2.11)$$

    The fractional convolution mask in the inverse diagonal direction is

    $$FCM_{I}=\begin{pmatrix}0&-\rho\gamma&-\frac{\rho(\gamma^{2}-\gamma-2)}{2}\\\rho\gamma&0&\rho\gamma\\\frac{\rho(\gamma^{2}-\gamma-2)}{2}&-\rho\gamma&0\end{pmatrix}.$$

    Rotate the FCMI symmetrically to obtain the fractional convolution mask in the diagonal direction, that is,

    $$FCM_{D}=\begin{pmatrix}\frac{\rho(\gamma^{2}-\gamma-2)}{2}&\rho\gamma&0\\-\rho\gamma&0&-\rho\gamma\\0&\rho\gamma&-\frac{\rho(\gamma^{2}-\gamma-2)}{2}\end{pmatrix}.$$

    Add FCMX, FCMY, FCMI and FCMD to obtain the final fractional convolution mask:

    $$FCM=\begin{pmatrix}\frac{\rho(\gamma^{2}-\gamma-2)}{2}&\rho\gamma(\gamma-1)&-\frac{\rho(\gamma^{2}-\gamma-2)}{2}\\\rho\gamma(\gamma-1)&0&2\rho\\\frac{\rho(\gamma^{2}-\gamma-2)}{2}&2\rho&-\frac{\rho(\gamma^{2}-\gamma-2)}{2}\end{pmatrix}.$$
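For concreteness, the final mask can be generated programmatically. A minimal sketch (NumPy; the function name fcm_mask is ours, and the corner signs follow our reconstruction of the printed matrix):

```python
import numpy as np

def fcm_mask(rho, gam):
    """3x3 fractional convolution mask FCM with regulating parameter rho
    and fractional order gam; corner magnitude A = rho*(gam^2-gam-2)/2
    and edge entries rho*gam*(gam-1) and 2*rho, with a zero center."""
    A = rho * (gam**2 - gam - 2) / 2.0
    b = rho * gam * (gam - 1)
    return np.array([[A, b, -A],
                     [b, 0.0, 2 * rho],
                     [A, 2 * rho, -A]])
```

Note that every entry depends on $\gamma$ only through $\gamma(\gamma-1)$, which is consistent with the symmetry of the detection results about $\gamma=0.5$ reported in Section 4: for instance, fcm_mask(3, -0.5) and fcm_mask(3, 1.5) are identical.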

    The basic idea of the maximum interclass variance method [42,43,44] is to divide the image into background and foreground and to calculate the best threshold for separating the two pixel regions, so as to maximize the distinction between the two types of pixels. Assume that the gray-scale image $G(x,y)$ has $\kappa$ gray-scale levels, and let $N_{m}$ $(m=1,\dots,\kappa)$ denote the number of pixels at gray level $m$. The total pixel number of $G(x,y)$ is $N=\sum_{m=1}^{\kappa}N_{m}$, so the occurrence probability of gray level $m$ is $P_{m}=N_{m}/N$.

    If the threshold $\varpi$ is used as the limit to divide the image into two regions, with foreground pixel region $A$ and background pixel region $B$, then the probabilities of foreground $A$ and background $B$ are, respectively, $P_{A}=\sum_{m=1}^{\varpi}P_{m}$ and $P_{B}=\sum_{m=\varpi+1}^{\kappa}P_{m}$.

    Let the total gray mean of $G(x,y)$ be $s_{0}=\sum_{m=1}^{\kappa}mP_{m}$; then the gray means of $A$ and $B$ are, respectively, $s_{A}=\sum_{m=1}^{\varpi}mP_{m}$ and $s_{B}=\sum_{m=\varpi+1}^{\kappa}mP_{m}$. The interclass variance of $A$ and $B$ is

    $$\varepsilon(\varpi)=P_{A}(s_{0}-s_{A})^{2}+P_{B}(s_{0}-s_{B})^{2}. \qquad (2.12)$$

    When $\varepsilon(\varpi)$ reaches its maximum value, $\varpi$ is the optimal threshold, that is, the Otsu threshold.
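The threshold search can be sketched as an exhaustive scan over the histogram. A minimal sketch (NumPy; the function name is ours, and it follows the paper's formulation in which $s_{A}$ and $s_{B}$ are class first moments $\sum mP_{m}$ rather than normalized class means):

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu threshold for an image with values in [0, 1]: pick the cut
    that maximizes eps(t) = P_A*(s0-sA)^2 + P_B*(s0-sB)^2 (Eq 2.12)."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                      # occurrence probabilities P_m
    levels = (edges[:-1] + edges[1:]) / 2.0    # gray level of each bin
    s0 = (levels * p).sum()                    # total gray mean
    best_eps, best_t = -1.0, levels[0]
    for cut in range(1, bins):
        PA, PB = p[:cut].sum(), p[cut:].sum()
        sA = (levels[:cut] * p[:cut]).sum()    # class first moments, as in
        sB = (levels[cut:] * p[cut:]).sum()    # the paper's formulation
        eps = PA * (s0 - sA)**2 + PB * (s0 - sB)**2
        if eps > best_eps:
            best_eps, best_t = eps, levels[cut]
    return best_t
```

On a strongly bimodal image the maximizing cut falls between the two modes, which is the behavior the binarization step relies on.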

    Suppose the remote sensing image is RSI(R,G,B), and the process of edge detection using the FCM and Otsu threshold is as follows:

    Step 1. RSI(R,G,B) is preprocessed, and gray–scale image G(x,y) is obtained.

    Step 2. Image E(x,y) is obtained by convolution of image G(x,y) with FCM, using the appropriate values of ρ and γ.

    Step 3. By calculating the Otsu threshold ϖ of image G(x,y), image E(x,y) is binarized to obtain the edge detection image F(x,y).

    In this process, the selection of parameters ρ and γ determines the effect of edge detection, so they need to be adjusted for the best effect in the experiment.
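Steps 1-3 can be sketched end to end on a synthetic gray image. The sketch below (NumPy) makes several assumptions not fixed by the text: replicate padding at the border, cross-correlation with the 3×3 mask, direct thresholding of $E(x,y)$, and our reconstruction of the mask signs; the function names fcm_edge_map and binarize are ours, and in practice the threshold would come from the Otsu method applied to $G(x,y)$:

```python
import numpy as np

def fcm_edge_map(gray, rho=3.0, gam=0.5):
    """Step 2: filter a 2-D gray image with the 3x3 fractional convolution
    mask; replicate padding keeps the output the same size as the input."""
    A = rho * (gam**2 - gam - 2) / 2.0
    b = rho * gam * (gam - 1)
    fcm = np.array([[A, b, -A],          # corner signs follow our
                    [b, 0.0, 2 * rho],   # reconstruction of the mask
                    [A, 2 * rho, -A]])
    padded = np.pad(gray, 1, mode='edge')
    H, W = gray.shape
    E = np.zeros_like(gray, dtype=float)
    for i in range(3):                   # accumulate the 9 shifted,
        for j in range(3):               # weighted copies of the image
            E += fcm[i, j] * padded[i:i + H, j:j + W]
    return E

def binarize(E, threshold):
    """Step 3: binarize the filtered image with a given threshold."""
    return E > threshold
```

On a vertical step image the response in flat regions is constant, while pixels along the step boundary respond more strongly, so thresholding isolates the edge.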

    The remote sensing images used in this study are all from the NWPU VHR-10 dataset. This very-high-resolution (VHR) remote sensing image dataset was constructed by [40] from Northwestern Polytechnical University (NWPU). The dataset contains a total of 800 VHR remote sensing images, which were cropped from Google Earth and the Vaihingen dataset. The images cover 10 classes of geospatial objects: airplanes, ships, storage tanks, baseball diamonds, tennis courts, basketball courts, ground track fields, harbors, bridges and vehicles. Several of the images were selected for the edge detection research in this paper.

    In order to evaluate the performance [45] of the edge detection operator in this paper, we choose the following indicators:

    1) Suppose $B_{1}(x,y)$ and $B_{2}(x,y)$ are $M\times N$ binary graphs, let $C=\{(i,j)\in M\times N \mid B_{1}(i,j)=B_{2}(i,j)\}$, and let $num(C)$ be the number of elements in set $C$. The BGS is calculated as follows:

    $$BGS(B_{1},B_{2})=\frac{num(C)}{M\times N}. \qquad (3.1)$$

    2) Suppose B(x,y) is an M×N binary graph, and S is the number of edge pixels. The EPR is calculated as follows:

    $$EPR(B)=\frac{S}{M\times N}. \qquad (3.2)$$
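The two indicators translate directly from Eqs (3.1) and (3.2). A minimal sketch (NumPy; the function names bgs and epr are ours):

```python
import numpy as np

def bgs(b1, b2):
    """Binary graph similarity (Eq 3.1): fraction of the M*N positions
    where the two binary maps agree."""
    return float(np.mean(b1 == b2))

def epr(b):
    """Edge pixel ratio (Eq 3.2): fraction of pixels marked as edges."""
    return float(np.mean(b != 0))
```

A larger BGS means the detected edge map agrees more closely with the Otsu image, and a larger EPR means more pixels are marked as edges.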

    We used the proposed algorithm to detect the edges of an image in the NWPU VHR-10 dataset, and we compared the results with those of some edge detectors: Roberts, Sobel, LoG, Laplacian, Canny, [33] and [34]. The original image I1, the Otsu image and the edge images extracted by the eight detectors (the seven above plus the proposed FCM detector) are shown in Figure 1. I1 has a pixel size of 1073×705 and an Otsu threshold of 0.5725. In the FCM image, ρ = 4.5 and γ = 0.7.

    Figure 1.  Edge images extracted by eight detectors.

    Compared with the other seven detectors, the edge image extracted by the FCM detector has richer texture layers and more obvious details. In order to compare these eight detectors quantitatively, we calculated the BGS value, EPR value and required time, respectively, as shown in Table 2.

    Table 2.  BGS and EPR with different detectors.
    Roberts Sobel LoG Laplacian Canny [33] [34] FCM
    BGS 0.3390 0.3435 0.3722 0.3711 0.3691 0.3440 0.3529 0.4027
    EPR 0.0346 0.0434 0.0675 0.0397 0.1107 0.0432 0.0517 0.2129
    Time 0.609 s 0.586 s 0.804 s 0.677 s 0.717 s 0.557 s 0.738 s 0.622 s


    As can be seen from Table 2, the FCM detector extracts more edge information, the edge image it extracts is more similar to the Otsu image, and the time spent is moderate.

    In order to investigate the influence of the threshold, for the original image I2 we generated edge detection images with the FCM detector using different thresholds, as shown in Figure 2. I2 has a pixel size of 1259×820 and an Otsu threshold of 0.4941. In the FCM image, ρ = 3 and γ = 0.5. The BGS values of the edge images extracted by the FCM detector with different thresholds, computed relative to the Otsu image, and the corresponding EPR values are shown in Table 3.

    Figure 2.  Edge images extracted by FCM detector with different thresholds.
    Table 3.  BGS and EPR with different thresholds.
    Threshold BGS EPR
    0.1 0.3992 0.3987
    0.2 0.3613 0.3278
    0.3 0.3435 0.2737
    0.4 0.3721 0.2322
    Otsu 0.4494 0.2014
    0.5 0.4562 0.1998
    0.6 0.6214 0.1736
    0.7 0.7555 0.1522
    0.8 0.7896 0.1346
    0.9 0.8157 0.1196


    The differences presented by the images are not obvious, but it can be seen from Table 3 that as the threshold increases, BGS increases and EPR decreases. The method proposed in this paper uses the Otsu method to determine the threshold, which avoids the subjectivity of manual assignment and achieves a balance between BGS and EPR.

    In order to investigate the influence of the two parameters ρ and γ in the FCM detector, edge images corresponding to ρ=3 and different γ values are presented in Figure 3, and edge images corresponding to different ρ values and γ=0.6 are presented in Figure 4. I3 has a pixel size of 974×762 and Otsu threshold of 0.4196. Figures 3 and 4 show that when ρ is constant and γ is in the interval [0,1] or when γ is constant and ρ is larger, more background details and texture information are extracted.

    Figure 3.  Edge images extracted by the FCM detector with different γ.
    Figure 4.  Edge images extracted by the FCM detector with different ρ.

    The BGS and EPR values obtained with different values of the parameters ρ and γ are listed in Tables 4 and 5, respectively. As can be seen from Tables 4 and 5, if γ remains unchanged, both the BGS and EPR values increase with the increase of ρ. If ρ is unchanged, the BGS and EPR values increase as γ increases from -0.5 to 0.5, and they decrease as γ increases from 0.5 to 1.7. These features are consistent with the rules shown in Figures 3 and 4, so the values of BGS and EPR can reflect the extraction of edge information to a certain extent. The larger BGS or EPR is, the richer the extracted edge information.

    Table 4.  BGS with different γ and ρ.
    ρ \ γ    -1.5    -0.5    -0.1    0.3    0.5    0.9    1.4    1.7    2.1
    1 0.4116 0.4874 0.4906 0.4919 0.4920 0.4914 0.4884 0.4855 0.4847
    2 0.5020 0.4941 0.4974 0.4988 0.4990 0.4983 0.4952 0.4913 0.4891
    3 0.5116 0.4988 0.5035 0.5063 0.5067 0.5052 0.5001 0.4960 0.4948
    4 0.5198 0.5041 0.5104 0.5136 0.5140 0.5124 0.5058 0.5004 0.5006
    5 0.5259 0.5092 0.5162 0.5195 0.5199 0.5183 0.5113 0.5051 0.5063
    6 0.5309 0.5141 0.5212 0.5245 0.5249 0.5233 0.5164 0.5096 0.5113
    7 0.5343 0.5187 0.5254 0.5286 0.5289 0.5273 0.5209 0.5136 0.5157
    8 0.5367 0.5223 0.5287 0.5315 0.5319 0.5305 0.5244 0.5177 0.5195
    9 0.5392 0.5259 0.5313 0.5336 0.5340 0.5331 0.5274 0.5209 0.5229
    10 0.5401 0.5283 0.5337 0.5354 0.5355 0.5349 0.5301 0.5243 0.5257

    Table 5.  EPR with different γ and ρ.
    ρ \ γ    -1.5    -0.5    -0.1    0.3    0.5    0.9    1.4    1.7    2.1
    1 0.0274 0.0082 0.0162 0.0218 0.0225 0.0195 0.0101 0.0052 0.0068
    2 0.0877 0.0388 0.0634 0.0774 0.0792 0.0721 0.0454 0.0275 0.0322
    3 0.1381 0.0766 0.1108 0.1286 0.1308 0.1219 0.0862 0.0587 0.0643
    4 0.1776 0.1110 0.1504 0.1691 0.1715 0.1622 0.1226 0.0896 0.0947
    5 0.2089 0.1412 0.1823 0.2016 0.2038 0.1946 0.1537 0.1174 0.1222
    6 0.2346 0.1670 0.2088 0.2281 0.2305 0.2210 0.1798 0.1414 0.1462
    7 0.2561 0.1892 0.2310 0.2500 0.2523 0.2431 0.2019 0.1629 0.1670
    8 0.2743 0.2086 0.2498 0.2685 0.2708 0.2620 0.2212 0.1820 0.1856
    9 0.2895 0.2257 0.2661 0.2846 0.2863 0.2776 0.2381 0.1993 0.2019
    10 0.3030 0.2407 0.2804 0.2980 0.3001 0.2916 0.2531 0.2147 0.2167


    When ρ=3, the line graphs of BGS and EPR with respect to γ on the interval [-1.5, 2.5] are shown in Figure 5. Both BGS and EPR are symmetric with respect to γ=0.5. When γ=0.5, the line graphs of BGS and EPR with respect to ρ are shown in Figure 6. As ρ increases, both BGS and EPR increase.

    Figure 5.  Line graphs of BGS and EPR with respect to γ.
    Figure 6.  Line graphs of BGS and EPR with respect to ρ.

    Figure 7 presents the surface diagrams of BGS and EPR with respect to the two parameters and shows more intuitively how BGS and EPR change as the parameters change. Obviously, the shapes of the two surface diagrams are very similar; that is, BGS and EPR change with respect to the two parameters in almost the same way.

    Figure 7.  Surface diagrams of BGS and EPR with respect to γ and ρ.

    Based on the Grünwald–Letnikov fractional difference formula, a method for constructing a fractional convolution mask is proposed. The method is then combined with the Otsu threshold for edge detection of remote sensing images. We performed both visual and quantitative comparisons (BGS and EPR) with existing edge detectors and found that the proposed method better extracts the edge information of remote sensing images, reflects more texture details and does not need smoothing preprocessing. However, this study also has some shortcomings: First, the size of the mask is only 3×3, and other mask sizes are not considered; second, the detection effect on noisy images is not good. Haze and clouds seriously reduce the visibility of remote sensing images and are their main source of noise, so our next work is to remove haze [46] and clouds [47] from remote sensing images in combination with other mathematical theories, and then better carry out edge detection or target recognition.

    We are thankful to the anonymous reviewers and our colleagues (Ju Wu and Fang Liu) for their help. This work was funded by the National Natural Science Foundation of China (61872304, 11502121), the Project of the State Administration of Science, Technology and Industry for National Defence (20zg6108), the Application Basic Research Plan Project of Sichuan Province (No. 2021JY0108) and the Science and Technology Research Plan Project of Sichuan Province (No. NJFH20-003).

    All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Bin Wu and Hua Kong. The first draft of the manuscript was written by Chao Chen, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. The authors declare there is no conflict of interest.



    [1] C. He, S. L. Li, D. H. Xiong, P. Z. Fang, M. S. Liao, Remote sensing image semantic segmentation based on edge information guidance, Remote Sens., 12 (2020). https://doi.org/10.3390/rs12091501 doi: 10.3390/rs12091501
    [2] Z. Z. Tu, Y. Ma, C. L. Li, J. Tang, B. Luo, Edge-guided non-local fully convolutional network for salient object detection, IEEE Trans. Circuits Syst. Video Technol., 31 (2021), 582–593. https://doi.org/10.1109/TCSVT.2020.2980853 doi: 10.1109/TCSVT.2020.2980853
    [3] H. L. Zhao, B. Wu, Y. B. Guo, G. Chen, D. Ye, SSWS: An edge detection algorithm with strong semantics and high detectability for spacecraft, Optik, 247 (2021). https://doi.org/10.1016/j.ijleo.2021.168037 doi: 10.1016/j.ijleo.2021.168037
    [4] C. Xu, H. Liu, W. M. Cao, J. Q. Feng, Multispectral image edge detection via Clifford gradient, Sci. China-Inf. Sci., 55 (2012), 260–269. https://doi.org/10.1007/s11432-011-4540-0 doi: 10.1007/s11432-011-4540-0
    [5] S. Amstutz, J. Fehrenbach, Edge detection using topological gradients: A scale-space approach, J. Math. Imaging Vision, 52 (2015), 249–266. https://doi.org/10.1007/s10851-015-0558-z doi: 10.1007/s10851-015-0558-z
    [6] J. T. Tang, Q. B. Shi, S. G. Hu, Z. Y. Ren, Edge detection based on curvature of gravity gradient tensor, Chin. J. Geophys. Chin. Edit., 62 (2019), 1872–1884. https://doi.org/10.6038/cjg2019M0427 doi: 10.6038/cjg2019M0427
    [7] V. B. S. Prasath, D. N. H. Thanh, N. Q. Hung, L. M. Hieu, Multiscale gradient maps augmented fisher information-based image edge detection, IEEE Access, 8 (2020), 141104–141110. https://doi.org/10.1109/ACCESS.2020.3013888 doi: 10.1109/ACCESS.2020.3013888
    [8] H. H. Zhao, P. F. Xiao, X. Z. Feng, Optimal Gabor filter-based edge detection of high spatial resolution remotely sensed images, J. Appl. Remote Sens., 11 (2017). https://doi.org/10.1117/1.JRS.11.015019 doi: 10.1117/1.JRS.11.015019
    [9] W. C. Lin, J. W. Wang, Edge detection in medical images with quasi high-pass filter based on local statistics, Biomed. Signal Process. Control, 39 (2018), 294–302. https://doi.org/10.1016/j.bspc.2017.08.011 doi: 10.1016/j.bspc.2017.08.011
    [10] G. B. Chen, Z. W. Sun, Z. Li, Road identification algorithm for remote sensing images based on wavelet transform and recursive operator, IEEE Access, 8 (2020), 141824–141837. https://doi.org/10.1109/ACCESS.2020.3012997 doi: 10.1109/ACCESS.2020.3012997
    [11] A. Isar, C. Nafornita, G. Magu, Hyperanalytic wavelet-based robust edge detection, Remote Sens., 13 (2021), 141104–141110. https://doi.org/10.3390/rs13152888 doi: 10.3390/rs13152888
    [12] M. Han, X. Yang, E. Jiang, An extreme learning machine based on cellular automata of edge detection for remote sensing images, Neurocomputing, 19 (2015), 27–34. https://doi.org/10.1016/j.neucom.2015.08.121 doi: 10.1016/j.neucom.2015.08.121
    [13] L. Huang, X. Q. Yu, X. Q. Zuo, Edge detection in UAV remote sensing images using the method integrating zernike moments with clustering algorithms, Int. J. Aerosp. Eng., 2017 (2017), 141104–141110. https://doi.org/10.1109/ACCESS.2020.3013888 doi: 10.1109/ACCESS.2020.3013888
    [14] Z. Qu, S. Y. Wang, L. Liu, D. Y. Zhou, Visual cross-image fusion using deep neural networks for image edge detection, IEEE Access, 7 (2019), 57604–57615. https://doi.org/10.1109/ACCESS.2019.2914151 doi: 10.1109/ACCESS.2019.2914151
    [15] X. G. Zheng, GPNRBNN: A robot image edge detection method based on gaussian positive-negative radial basis neural network, Sens. Imaging, 22 (2021). https://doi.org/10.1007/s11220-021-00351-5 doi: 10.1007/s11220-021-00351-5
    [16] G. B. Chen, Z. Y. Jiang, M. M. Kamruzzaman, Radar remote sensing image retrieval algorithm based on improved Sobel operator, J. Visual Commun. Image Represent., 8 (2019), https://doi.org/10.1016/j.jvcir.2019.102720 doi: 10.1016/j.jvcir.2019.102720
    [17] M. Mohammadpour, A. Bahroudi, M. Abedi, Automatic lineament extraction method in mineral exploration using Canny algorithm and Hough transform, Geotectonics, 54 (2020), 366–382. https://doi.org/10.1134/S0016852120030085 doi: 10.1134/S0016852120030085
    [18] H. Q. Wu, J. Yan, The mechanism of digitized landscape architecture design under edge computing, Plos One, 16 (2021), 141104–141110. https://doi.org/10.1371/journal.pone.0252087 doi: 10.1371/journal.pone.0252087
    [19] G. Wang, C. Lopez-Molina, B. D. Baets, Automated blob detection using iterative Laplacian of Gaussian filtering and unilateral second-order Gaussian kernels, Digital Signal Process., 96 (2019). https://doi.org/10.1016/j.dsp.2019.102592 doi: 10.1016/j.dsp.2019.102592
    [20] R. Chetia, S. M. B. Boruah, P. P. Sahu, Quantum image edge detection using improved Sobel mask based on NEQR, Quantum Inf. Process., 20 (2021). https://doi.org/10.1007/s11128-020-02944-7 doi: 10.1007/s11128-020-02944-7
    [21] A. Jan, S. A. Parah, B. A. Malik, M. Rashid, Secure data transmission in IoTs based on CLoG edge detection, Future Generat. Comput. Syst. Int. J. Esci., 121 (2021), 59–73. https://doi.org/10.1016/j.future.2021.03.005 doi: 10.1016/j.future.2021.03.005
    [22] P. Amoako-Yirenkyi, J. K. Appati, I. K. Dontwi, A new construction of a fractional derivative mask for image edge analysis based on Riemann-Liouville fractional derivative, Adv. Differ. Equations, 2016 (2016). https://doi.org/10.1186/s13662-016-0946-8 doi: 10.1186/s13662-016-0946-8
    [23] Y. F. Pu, P. Siarry, A. Chatterjee, Z. N. Wang, Z. Yi, Y. G. Liu, et al., A fractional-order variational framework for retinex: fractional-order partial differential equation-based formulation for multi-scale nonlocal contrast enhancement with texture preserving, IEEE Trans. Image Process., 27 (2018), 1214–1229. https://doi.org/10.1109/TIP.2017.2779601 doi: 10.1109/TIP.2017.2779601
    [24] K. Liu, Y. Z. Tian, Research and analysis of deep learning image enhancement algorithm based on fractional differential, Chaos, Solitons Fractals, 131 (2020), 14–110. https://doi.org/10.1016/j.chaos.2019.109507 doi: 10.1016/j.chaos.2019.109507
    [25] Q. T. Ma, F. F. Dong, D. X. Kong, A fractional differential fidelity-based PDE model for image denoising, Mach. Vis. Appl., 28 (2017), 635–647. https://doi.org/10.1007/s00138-017-0857-z doi: 10.1007/s00138-017-0857-z
    [26] Q. Wang, J. Ma, S. Y. Yu, L. Y. Tan, Noise detection and image denoising based on fractional calculus, Chaos, Solitons Fractals, 131 (2020), 109463. https://doi.org/10.1016/j.chaos.2019.109463 doi: 10.1016/j.chaos.2019.109463
    [27] Y. S. Zhang, F. Zhang, B. Z. Li, Image restoration method based on fractional variable order differential, Multidimens. Syst. Signal Process., 29 (2018), 999–1024. https://doi.org/10.1007/s11045-017-0482-z doi: 10.1007/s11045-017-0482-z
    [28] F. F. Dong, Q. T. Ma, Single image blind deblurring based on the fractional-order differential, Comput. Math. Appl., 78 (2019), 1960–1977. https://doi.org/10.1016/j.camwa.2019.03.033 doi: 10.1016/j.camwa.2019.03.033
    [29] Y. S. Zhang, Y. R. Tian, A new active contour medical image segmentation method based on fractional varying-order differential, Mathematics, 10 (2022), 206. https://doi.org/10.3390/math10020206 doi: 10.3390/math10020206
    [30] J. E. Lavín-Delgado, J. E. Solís-Pérez, J. F. Gómez–Aguilar, R. F. Escobar–Jiménez, A new fractional–order mask for image edge detection based on Caputo-Fabrizio fractional-order derivative without singular kernel, Circuits, Syst. Signal Process., 39 (2020), 1419–1448. https://doi.org/10.1007/s00034-019-01200-3 doi: 10.1007/s00034-019-01200-3
    [31] A. Nandal, H. Gamboa-Rosales, A. Dhaka, J. M. Celaya-Padilla, Image edge detection using fractional calculus with feature and contrast enhancement, Circuits Syst. Signal Process., 37 (2018), 3946–3972. https://doi.org/10.1109/ACCESS.2020.3013888 doi: 10.1109/ACCESS.2020.3013888
    [32] M. Hacini, F. Hachouf, A. Charef, A bi-directional fractional-order derivative mask for image processing applications, IET Image Process., 14 (2020), 2512–2524. https://doi.org/10.1049/iet-ipr.2019.0467 doi: 10.1049/iet-ipr.2019.0467
    [33] S. K. Mishra, K. K. Singh, R. Dixit, M. K. Bajpai, Design of fractional calculus based differentiator for edge detection in color images, Multimedia Tools Appl., 80 (2021), 29965–29983. https://doi.org/10.1007/s11042-021-11187-2 doi: 10.1007/s11042-021-11187-2
    [34] N. R. Babu, K. Sanjay, P. Balasubramaniam, EED: Enhanced edge detection algorithm via generalized integer and fractional-order operators, Circuits, Syst. Signal Process., 41 (2022), 5492–5534. https://doi.org/10.1007/s00034-022-02028-0 doi: 10.1007/s00034-022-02028-0
    [35] C. P. Li, D. L. Qian, Y. Q. Chen, On Riemann–Liouville and Caputo derivatives, Discrete Dyn. Nat. Soc., 2011 (2011), 1–15. https://doi.org/10.1155/2011/562494 doi: 10.1155/2011/562494
    [36] G. Y. Zhang, J. P. Liu, J. Wang, Z. H. Tang, Y. F. Xie, FoGDbED: Fractional-order Gaussian derivatives-based edge-relevant structure detection using Caputo-Fabrizio definition, Digital Signal Process., 98 (2019), 102639. https://doi.org/10.1016/j.dsp.2019.102639 doi: 10.1016/j.dsp.2019.102639
    [37] N. Aboutabit, A new construction of an image edge detection mask based on Caputo–Fabrizio fractional derivative, Vis. Comput., 37 (2020), 1545–1557. https://doi.org/10.1007/s00371-020-01896-4 doi: 10.1007/s00371-020-01896-4
    [38] D. G. Shao, T. Zhou, F. Liu, S. L. Yi, Y. Xiang, L. Ma, et al., Ultrasound speckle reduction based on fractional order differentiation, J. Med. Ultrason., 44 (2016), 227–237. https://doi.org/10.1007/s10396-016-0763-4 doi: 10.1007/s10396-016-0763-4
    [39] S. Balochian, H. Baloochian, Edge detection on noisy images using Prewitt operator and fractional order differentiation, Multimedia Tools Appl., 81 (2022), 9759–9770. https://doi.org/10.1007/s11042-022-12011-1 doi: 10.1007/s11042-022-12011-1
    [40] G. Cheng, J. W. Han, P. C. Zhou, G. Lei, Multi-class geospatial object detection and geographic image classification based on collection of part detectors, ISPRS J. Photogramm. Remote Sens., 98 (2014), 119–132. https://doi.org/10.1016/j.isprsjprs.2014.10.002 doi: 10.1016/j.isprsjprs.2014.10.002
    [41] F. M. Atici, S. Chang, J. Jonnalagadda, Grünwald-Letnikov fractional operators: From past to present, Fract. Differ. Calculus, 11 (2021), 147–159. https://doi.org/10.7153/fdc-2021-11-10 doi: 10.7153/fdc-2021-11-10
    [42] Y. L. Song, J. F. Qu, C. M. Liu, Real-time registration of remote sensing images with a Markov chain model, J. Real-Time Image Process., 18 (2020), 1527–1540. https://doi.org/10.1007/s11554-020-01043-1 doi: 10.1007/s11554-020-01043-1
    [43] X. Lu, Y. J. Zhang, Human body flexibility fitness test based on image edge detection and feature point extraction, Soft Comput., 24 (2020), 8673–8683. https://doi.org/10.1007/s00500-020-04869-w doi: 10.1007/s00500-020-04869-w
    [44] S. Roy, D. Das, S. Lal, J. Kini, Novel edge detection method for nuclei segmentation of liver cancer histopathology images, J. Ambient Intell. Hum. Comput., 2021 (2021). https://doi.org/10.1007/s12652-021-03308-4 doi: 10.1007/s12652-021-03308-4
    [45] N. Tariq, R. A. Hamzah, T. F. Ng, S. L. Wang, H. Ibrahim, Quality assessment methods to evaluate the performance of edge detection algorithms for digital image: A systematic literature review, IEEE Access, 9 (2021), 87763–87776. https://doi.org/10.1109/ACCESS.2021.3089210 doi: 10.1109/ACCESS.2021.3089210
    [46] Y. Han, M. Yin, P. H. Duan, P. Ghamisi, Edge-preserving filtering-based dehazing for remote sensing images, IEEE Geosci. Remote Sens. Lett., 2021 (2021). https://doi.org/10.1109/LGRS.2021.3103381 doi: 10.1109/LGRS.2021.3103381
    [47] D. D. He, G. Wang, An algorithm of fuzzy edge detection for wetland remote sensing image based on fuzzy theory, Appl. Nanosci., 2022 (2022). https://doi.org/10.1007/s13204-021-02209-4 doi: 10.1007/s13204-021-02209-4
  • This article has been cited by:

    1. Ruini Zhao, Nanocrystalline SEM image restoration based on fractional-order TV and nuclear norm, 2024, 32, 2688-1594, 4954, 10.3934/era.2024228
    2. Ao Chen, Zehua Lv, Junbo Zhang, Gangyi Yu, Rong Wan, Review of the Accuracy of Satellite Remote Sensing Techniques in Identifying Coastal Aquaculture Facilities, 2024, 9, 2410-3888, 52, 10.3390/fishes9020052
    3. Hui Xu, Xinyang Zhao, Qiyun Yin, Junting Dou, Ruopeng Liu, Wengang Wang, Isolating switch state detection system based on depth information guidance, 2024, 32, 2688-1594, 836, 10.3934/era.2024040
    4. Zhenjing Xie, Jinran Wu, Weirui Tang, Yongna Liu, Khan Bahadar Khan, Advancing image segmentation with DBO-Otsu: Addressing rubber tree diseases through enhanced threshold techniques, 2024, 19, 1932-6203, e0297284, 10.1371/journal.pone.0297284
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)