Research article

Isolating switch state detection system based on depth information guidance

  • Received: 19 November 2023 Revised: 25 December 2023 Accepted: 04 January 2024 Published: 12 January 2024
  • This study addressed the critical role of isolating switches in controlling circuit connections for the stable operation of the substation. Our research introduced an innovative state detection system that utilized depth information guidance, comprising a controllable pan-tilt mechanism, a depth camera, and an industrial computer. The software component employed a two-stage strategy for precise isolating switch detection. Initially, the red green blue with depth (RGB-D) saliency network identified the approximate area of the isolating switch target. Subsequently, a fully connected conditional random field was applied to extract accurate detection results. The real-time state of the isolating switch was determined based on the geometric relationship between its arms. This approach enhanced the accuracy of isolating switch detection, ensuring practical applicability in engineering scenarios. The significance of this research lies in its contribution to advancing isolating switch monitoring through depth information guidance, promoting a more robust and reliable power system. The key improvement is implementing a two-stage strategy, combining RGB-D saliency analysis and conditional random field processing, resulting in enhanced accuracy in isolating switch detection. As validated through extensive experiments, the proposed system's successful application in practical engineering underscores its effectiveness in meeting the accuracy requirements for isolating switch detection and state detection. This innovation holds promise for broader applications in power systems, showcasing its potential to elevate the reliability and efficiency of electrical networks. Code of the proposed system is available at: https://github.com/miaomiao0909/Isolating-Switch-Detection/tree/master.

    Citation: Hui Xu, Xinyang Zhao, Qiyun Yin, Junting Dou, Ruopeng Liu, Wengang Wang. Isolating switch state detection system based on depth information guidance[J]. Electronic Research Archive, 2024, 32(2): 836-856. doi: 10.3934/era.2024040




    In the process of smart substation construction, automatic control can be realized by monitoring the real-time state of each piece of controllable power equipment [1,2,3]. The isolating switch is an important piece of equipment in the substation, used to switch lines and isolate high-voltage equipment [4]. Ensuring that the isolating switch is disconnected is essential, as it relates to the safety of workers during high-voltage line maintenance. However, most isolating switches operate outdoors, where the equipment can oxidize. The aging of transmission equipment may lead to abnormal states, such as improper opening and closing or blocking of the isolating switch. These abnormal cases prevent the equipment from working correctly and may even result in safety incidents. Therefore, it is important to detect the state of the isolating switch accurately and in real time.

    Currently, research on detecting the state of isolating switches in substations is divided into two categories. The first category uses sensors [5,6] to measure physical characteristics (e.g., voltage, current, resistance, magnetic induction [7]) to determine the current status of the isolating switch. Li et al. [8] proposed a noncontact current sensing and measurement method based on tunnel magnetoresistance sensors. This method employs highly sensitive linear sensors that exploit the resistance-changing characteristics of tunnel magnetoresistance sensors under variations in the direction and magnitude of the external magnetic field. When the transmission device drives the circuit breaker to rotate, the permanent magnet also rotates, changing the magnetic field; the resistance of the magnetoresistance sensor on one end changes accordingly, thereby enabling detection of the isolating switch state. Yuan et al. [9] achieved isolating switch state detection by installing three-direction pose sensors on the transmission device of the isolating switch. The sensors are mounted on the operating lever of the transmission device. When the operating lever undergoes an angle change, the pose sensors measure the rotational angle through pose changes, thereby detecting the isolating switch state.

    The second category is based on computer vision. Computer-vision-based detection of isolating switch states is efficient and flexible, and has become a new trend [10,11]. Specifically, computer vision performs remarkably well in many tasks, such as target extraction [12,13], target classification [14], edge detection [15], and pose recognition [16]. Varying lighting conditions and shooting angles increase the difficulty of isolating switch detection [17]. Hence, most works carried out edge detection and feature extraction based on the structural characteristics of the isolating switch, and judged the state of the isolating switch through the change of its position and posture during movement. Zhang et al. [18] used an improved Hough transform method to detect lines, identify the isolating switch, and then judge its state. The histogram of oriented gradients (HOG) and support vector machine (SVM) were used for detection and state judgment of the isolating switch in [19]. Wang et al. [20] realized detection and tracking of isolating switches through feature extraction and the AdaBoost classifier: by tracking the position of the isolating switch in real time, the state of the switch is determined according to the distance between its two arms. Yang et al. [21] combined scale invariance with the Hough transform and proposed an algorithm for automatic isolating switch recognition based on computer intelligent vision technology. Lu et al. [22] monitored substation switches based on image geometry: after obtaining the contour of the arm image, the angle of the arm is calculated by the cosine theorem, and the switch state is finally determined. Bin et al. [23] extracted and matched scale-invariant features of the isolating switch; their method performed well under image rotation and uneven illumination.
Some works introduce extra information (e.g., depth [24], temperature [25,26,27]) to detect the state of the isolating switch more comprehensively. Xu et al. [24] calculated the actual angle between the switch arm and the insulator by exploiting the depth map and feature information. Ullah et al. [26,27] utilized temperature information to classify high-voltage equipment and detect the state of the isolating switch.

    Although much work on isolating switch state detection has emerged, two main problems remain. First, most of the above methods combine prior knowledge and geometric principles to improve detection accuracy, and some assumptions are used to reduce the complexity of the scene. However, under the different shooting angles and lighting conditions of real scenes, isolating switches in images are extremely diverse in shape, appearance, and size, and their backgrounds also differ. Second, these methods can detect the state of only one group of isolating switches at a time. In practical applications, synchronous detection is required, i.e., detecting the states of multiple groups of isolating switches in the area simultaneously.

    To tackle the above problems, an isolating switch state detection system based on depth information guidance is developed in this paper, which can detect multiple groups of isolating switches and judge their states in real time. Specifically, first, a depth camera is used to obtain red green blue (RGB) images and depth images of the isolating switch scene; depth information is essential for exploiting three-dimensional data of the isolating switch. Second, a two-stage isolating switch state detection method is proposed. We use a saliency object detection algorithm [28,29,30] to obtain the position of the isolating switch, then use a fully connected conditional random field (CRF) [31,32,33] to reconstruct a clear edge. After obtaining the switch detection results, we use Hough transform line detection to find the longest boundary in the isolating switch area. The state of the isolating switch is judged by calculating the angle between the boundary lines of the switch arms and the distance between the arms. In summary, the main contributions of this paper are as follows.

    ● An automatic state detection system of isolating switch is developed, integrating a controllable pan tilt, a depth camera, and an industrial computer. The system aims to detect the state of isolating switches with high accuracy and speed.

    ● A two-stage method is proposed, which applies the saliency object detection method to the isolating switch state detection system first. We then use a conditional random field to optimize the edge of the detection results. This method has good accuracy performance for isolating switch segmentation.

    ● Our system can meet the needs of synchronous isolating switch detection and judge whether the states of multiple groups of isolating switches are the same. According to the results, it can be judged whether one of the isolating switches has malfunctioned.

    The rest of this paper is organized as follows. Section 2 introduces the hardware structure of the system and elaborates on isolating switch detection and state judgment. The experimental results are presented in Section 3. Finally, Section 4 concludes our work.

    The logical flowchart illustrating all parts of Section 2 is shown in Figure 1. In the hardware structure part, we will illustrate the three main components of the system and their respective functionalities. In the two-stage strategy part, we will describe how the two-stage method, composed of isolating switch segmentation and fully connected CRFs, is employed to achieve the segmentation and edge optimization of the isolating switch. Finally, the part on judgement will elucidate the process of isolating switch state judgement.

    Figure 1.  Logical flowchart of the developed isolating switch state detection system.

    In this paper, the investigated isolating switch is a type of horizontally rotating isolating switch, as illustrated in Figure 2. This isolating switch primarily consists of two isolating blades, with the rotational range of each isolating blade being 0–90°. When the isolating blades are in the closed position, the circuit is in a closed state, allowing current to flow. Conversely, when the isolating blades are in the open position, the circuit is disconnected, leading to the interruption of current flow.

    Figure 2.  The figures (a, b) and module (c) of the isolating switch.

    As shown in Figure 3, the system comprises an image acquisition module consisting of a pan tilt, a depth camera, and an industrial computer for image analysis.

    Figure 3.  The developed isolating switch state detection system.

    Table 1 shows the detailed information of the devices. The depth camera, also known as a 3D camera, is a sensor capable of acquiring depth information from a scene. Taking the ZED2 depth camera produced by StereoLabs Inc., as utilized in this paper, as an example: the ZED2 primarily obtains depth information through stereo vision with binocular cameras. By employing camera calibration, stereo rectification, and stereo matching, it calculates the depth of the captured scene, enabling the acquisition of both RGB and depth images. However, a depth camera imposes a considerable load on the system hardware, demanding higher hardware specifications. The controllable pan tilt can rotate horizontally and vertically, changing the shooting angle of the camera according to the needs of information acquisition. The depth camera collects depth and RGB images simultaneously, providing critical information for monitoring. The industrial computer deploys the software algorithms to process the collected images in real time and judge the state of the isolating switches. The remote server is responsible for controlling the pan tilt rotation and displaying the images collected by the camera and the processing results.

    Table 1.  Hardware devices used in this work for the isolating switch state detection system.

    Hardware device | Model | Specification
    Pan tilt | Three-axis pan tilt | Horizontal axis motor, rotation range: -90°–90°; horizontal roller motor, maintains lateral stability; pitch axis motor, rotation range: -60°–60°
    Depth camera | ZED2 | Dimensions: 175.25 mm × 30.25 mm × 43.10 mm; power: via USB, 5 V / 380 mA; image sensors: dual 4-megapixel sensors with 2-micron pixels; depth FOV: 110° (H) × 70° (V) × 120° (D) max
    Industrial computer | Data processing module | Operating system: Windows 10; CPU: Intel Core i7-10700F; GPU: RTX 3060 12 GB
    Remote server | Data terminal | Operating system: Windows 10; CPU: Intel Core i9-10900KF; GPU: RTX 3090


    The system has the following advantages:

    We innovatively combine a controllable pan tilt, a depth camera, and an industrial computer. By using a controllable pan tilt instead of a fixed bracket, the system can change the monitoring perspective and realize the monitoring of multiple groups of isolating switches in different areas.

    Each system has an industrial computer, so multiple systems can be combined into an Internet of Things network.

    In this paper, we propose an automatic detection method for the states of the isolating switch in the substation. The specific implementation process of the proposed method is shown in Figure 4.

    Figure 4.  The flow chart of the proposed method.

    First, the RGB images and depth images collected by the depth camera are taken as the inputs of the RGB with depth (RGB-D) saliency network. We use an RGB-D saliency object detection method instead of conventional object detection because, with depth information guidance, the target can be better separated from the background and from similar objects.

    Deep models with multiple max-pooling layers have proven very effective in object detection tasks. However, their increased invariance and large receptive fields make it more challenging to obtain refined target edges. Blurred edge segmentation makes subsequent processing complex and introduces large errors into the measurement results.

    We refine the target edge by adding a fully connected CRF to the model. Line detection is performed on the optimized segmentation results, and the angles between the corresponding isolating switches are calculated.

    Finally, the state of the isolating switch is judged according to the angle range.
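    As a minimal sketch (not the released implementation), the flow described above can be expressed as follows, where `saliency_net`, `crf_refine`, and `extract_arm_vectors` stand in for the HiDAnet model, the fully connected CRF, and the Hough-based line extraction; the angle thresholds follow the rule given later in this paper, while the distance threshold value is purely illustrative:

    ```python
    import numpy as np

    def detect_state(rgb, depth, saliency_net, crf_refine, extract_arm_vectors,
                     dist_thresh=50.0):
        """Two-stage pipeline sketch: coarse saliency -> CRF edge refinement ->
        arm direction vectors -> angle/distance based state judgment."""
        coarse = saliency_net(rgb, depth)            # stage 1: coarse saliency map
        mask = crf_refine(rgb, coarse)               # stage 2: edge-refined mask
        (v1, v2), dist = extract_arm_vectors(mask)   # longest line per arm region
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # in [0, 180]
        # Closed when the arms are (anti)parallel and close together.
        closed = (angle <= 7.0 or angle >= 173.0) and dist < dist_thresh
        return "closed" if closed else "open"
    ```

    The stages are passed in as callables so the geometric decision logic stays independent of any particular network or CRF implementation.
    
    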

    The output results of common target detection methods, such as you only look once (YOLO) and the fast region-based convolutional neural network (Fast R-CNN), are the coordinates of the located targets. These methods cannot achieve accurate segmentation of the target region boundary. Visual saliency refers to quickly detecting the most distinctive visual region by imitating the human visual system, completing the detection and boundary localization of salient objects at the same time. For a single-object detection scenario, training the salient object detection network on the single object can improve the robustness of the network to that specific target.

    Unlike RGB saliency object detection, the RGB-D saliency object detection model considers color and depth information together to identify the salient object. Generally speaking, spatial depth information can be one of the important salient features when detecting salient objects [34]. In addition, depth information can effectively eliminate the interference of complex background textures, making it easier to detect the isolating switch target in the image. In an RGB-D saliency network, the color image and depth image jointly improve the effect of isolating switch segmentation.

    This paper uses the hierarchical depth awareness network (HiDAnet) [35] for saliency object detection. As shown in Figure 5, the isolating switch segmentation network based on HiDAnet adopts a novel granularity-based attention scheme that attends to fine-grained details in order to strengthen the feature discriminability of each modality. HiDAnet includes four main modules, namely, granularity-based attention (GBA), the cross dual-attention module (CDA), efficient multi-input fusion (EMI), and the receptive field block (RFB [36]). White blocks denote the network backbone. The GBA strengthens the discriminative power of RGB and depth features separately. The CDA module takes advantage of cross-domain cues to attentively realize multimodal and multilevel fusion in a coarse-to-fine manner. The efficient fusion scheme effectively models the shared information from each modality. The shared features are further improved with skip connections for final saliency map generation.

    Figure 5.  The overall architecture of isolating switch segmentation network.

    The CRF is a probability graphical model based on an undirected graph, where nodes of the graph represent random variables and edges depict relationships between these variables. The essence lies in that, for any node in the undirected graph, its conditional probability distribution is only dependent on connected nodes and is independent of other nodes. In the context of a CRF, it is divided into observable nodes X and latent state nodes Y, interconnected in a certain pattern to represent explicit and implicit relationships between nodes.

    In this paper, to fully exploit the strong correlation among image pixels, a fully connected CRF is employed, where each pixel is connected to every other pixel, forming the connectivity edges. For pixel j, the color vector is denoted as I_j and the label as X_j. I_j belongs to the set of possible pixel intensities in the input image, denoted as I = \{I_1, \ldots, I_N\}, while X_j belongs to the set of possible labels assigned to pixels, denoted as X = \{X_1, \ldots, X_N\}, where N represents the size of the input image. Consequently, we can assume that the labeling follows a Gibbs distribution, as formulated in Eq (1).

    P(X|I) = \frac{1}{Z(I)} \exp(-E(X|I)) (1)

    where Z(I) is the normalization factor (partition function) that converts the output into a probability, and E(X|I) is the energy function.
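    As a small numerical illustration of Eq (1), the sketch below (a hypothetical helper, not from the released code) converts a set of candidate labeling energies into Gibbs probabilities, with the normalization playing the role of Z(I):

    ```python
    import numpy as np

    def gibbs_probabilities(energies):
        """P(x|I) = exp(-E(x|I)) / Z(I), with Z(I) the sum of exp(-E) over
        all candidate labelings (Eq 1). Energies are shifted by their minimum
        for numerical stability; this does not change the probabilities."""
        e = np.asarray(energies, dtype=float)
        unnorm = np.exp(-(e - e.min()))
        return unnorm / unnorm.sum()
    ```

    Lower-energy labelings receive higher probability, which is why the CRF inference below seeks to minimize E(X|I).
    
    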

    Based on the number of variables, the potential functions of CRF are typically categorized into unary potentials, pairwise potentials, and higher-order potentials. Although higher-order potentials can model more extensive node information, the computational complexity of such potential functions is higher. Therefore, we use unary and pairwise potentials to describe the energy function E(X|I), where the unary potential measures the category probability of a pixel, and the pairwise potential characterizes the relationships between pixels. This encourages similar pixels to be assigned the same label while discouraging significantly different pixels from being assigned the same label. The expression of the energy function E(X|I) is given by Eq (2).

    E(X|I) = \sum_{i} \theta_i(x_i) + \sum_{i<j} \theta_{ij}(x_i, x_j) (2)

    where the range of i and j are from 1 to N and xi or xj represents the label assignment of a pixel.

    We define the unary potential function as \theta_i(x_i) = -\log P(x_i), where P(x_i) is the label assignment probability at pixel i computed by the isolating switch segmentation network.
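    The unary potential, the negative log of the network's per-pixel label probability, can be computed as follows (a minimal sketch; the clipping floor is an assumption to avoid log(0)):

    ```python
    import numpy as np

    def unary_potential(prob):
        """theta_i(x_i) = -log P(x_i), where P(x_i) is the label probability
        produced by the segmentation network for pixel i."""
        return -np.log(np.clip(prob, 1e-12, 1.0))
    ```

    Confident predictions (P close to 1) thus contribute little energy, while unlikely label assignments are heavily penalized.
    
    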

    The pairwise potential function models spatial information between each pixel and its neighborhood by considering both the label field and the observation field. During the camera capture process, due to factors such as lighting variations or noise, prominent features between regions of the same category may exhibit differences and be identified as different categories in the saliency object detection process. However, the pairwise potential incorporates the impact of spatial correlations, simulating this smoothness, and taking into account label constraints. This is beneficial for edge smoothing between adjacent regions. The pairwise potential function can be expressed as Eq (3).

    \theta_{ij}(x_i, x_j) = \mu(x_i, x_j) \sum_{m=1}^{K} \omega_m k_m(f_i, f_j) (3)

    where \mu(x_i, x_j) models the label compatibility: following the Potts model, it takes 1 when x_i \neq x_j and 0 otherwise. k_m is a Gaussian kernel k_m(f_i, f_j) = \exp(-\frac{1}{2}(f_i - f_j)^T \Lambda_m (f_i - f_j)) that depends on features extracted for the pixels and is weighted by the linear combination weight \omega_m; the vectors f_i and f_j are feature vectors for pixels i and j in an arbitrary feature space. The model is fully connected, so there is a pairwise term between every pair of pixels in the graph, no matter how far apart they are. Each kernel k_m is characterized by a symmetric, positive-definite precision matrix \Lambda_m, which defines its shape.

    We employ a bilateral position and color term to compose the kernel function. The kernel function can be explicitly expressed as Eq (4).

    k(f_i, f_j) = \omega_1 \exp(R_1) + \omega_2 \exp(R_2), \quad R_1 = -\frac{|p_i - p_j|^2}{2\theta_\alpha^2} - \frac{|I_i - I_j|^2}{2\theta_\beta^2}, \quad R_2 = -\frac{|p_i - p_j|^2}{2\theta_\gamma^2} (4)

    where the first kernel function depends on both pixel position p and pixel color intensity I, while the second kernel function only depends on pixel position p. The hyperparameters θα, θβ, and θγ control the scale of the Gaussian kernels.
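    Eq (4) can be sketched for a single pixel pair as follows; the kernel weights and hyperparameter values here are illustrative defaults, not the settings used in this paper:

    ```python
    import numpy as np

    def pairwise_kernel(pi, pj, Ii, Ij, w1=1.0, w2=1.0,
                        theta_alpha=60.0, theta_beta=20.0, theta_gamma=3.0):
        """Bilateral (position + color) plus spatial Gaussian kernel of Eq (4).
        pi, pj are pixel positions; Ii, Ij are color vectors."""
        pi, pj = np.asarray(pi, float), np.asarray(pj, float)
        Ii, Ij = np.asarray(Ii, float), np.asarray(Ij, float)
        # Appearance term: nearby pixels with similar color are strongly coupled.
        r1 = (-np.sum((pi - pj) ** 2) / (2 * theta_alpha ** 2)
              - np.sum((Ii - Ij) ** 2) / (2 * theta_beta ** 2))
        # Smoothness term: depends on position only.
        r2 = -np.sum((pi - pj) ** 2) / (2 * theta_gamma ** 2)
        return w1 * np.exp(r1) + w2 * np.exp(r2)
    ```

    For identical pixels the kernel attains its maximum w1 + w2, and it decays as the pixels grow apart in position or color, which is what drives the edge-aware smoothing.
    
    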

    The sensitivity of edge contrast can be improved by using pairwise potential for edge optimization of the isolating switch segmentation results.

    After the edge optimization of a conditional random field, the edge of the isolating switch area is more precise. We extract the direction vector representing the direction of the switch arm and calculate the angle between the corresponding switch arms. We judge the state of the isolating switch at this time according to the angle. The specific steps are as follows.

    ● Hough line detection is performed on each connected domain, then we select the longest line and calculate the direction vector.

    ● Because the output of the saliency object detection method is a set of independent object regions, there is no direct matching relationship between the switch arm regions. According to the distance between the leftmost points of the connected regions, the two regions with the shortest distance are selected as a matching pair.

    ● We calculate the cosine value of the direction vector of the two corresponding regions and the angle formed by the edges of the two switch arms.

    ● Based on extensive observations, if the angle is between 0–7° or 173–180° and the pixel distance between the two regions is less than a set threshold, the isolating switch is considered closed; otherwise, it is open. The pixel distance is taken as a judgment factor because the rotation range of the switch arm is 0–90°: when the isolating switch is fully opened, the two switch arms become parallel, so the distance between them increases as they separate. However, the pixel distance is not accurate enough to judge the state on its own and can only serve as an auxiliary condition; moreover, shooting from different angles significantly changes the appropriate pixel distance threshold.
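    The region-matching step above (pairing the two connected regions whose leftmost points are closest) can be sketched as follows; `match_arm_regions` and its input format are assumptions for illustration, not the released implementation:

    ```python
    import numpy as np

    def match_arm_regions(leftmost_points):
        """Given the leftmost point (x, y) of each connected region, return the
        index pair of the two closest regions and their distance, as the
        matching rule for corresponding switch arms."""
        pts = np.asarray(leftmost_points, dtype=float)
        best, pair = np.inf, None
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = np.linalg.norm(pts[i] - pts[j])
                if d < best:
                    best, pair = d, (i, j)
        return pair, best
    ```

    The O(n^2) scan is acceptable here because a scene contains only a handful of arm regions.
    
    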

    This section conducts extensive experiments to prove the feasibility of the developed isolating switch state detection system. All experiments are carried out on the Windows 10 operating system, which adopts an Intel Core i7-10700F@2.9GHz central processing unit (CPU).

    Isolating switch segmentation networks require substantial dataset support during the training phase to achieve effective segmentation of isolating switches. Currently, there is a lack of publicly available RGB-D image datasets specifically tailored to isolating switch scenes. Without sufficient dataset support, neural network models may lack generalization capability for isolating switch scenes, and their segmentation performance may struggle to meet engineering requirements. To address this issue, this study constructed an RGB-D isolating switch segmentation dataset for substation scenes, which is utilized for training and testing the neural network models.

    Generally, isolating switches used in substations consist of horizontally unfolding iron double arms. When the isolating switch is fully closed, the iron double arms remain horizontal and parallel to each other with their inner contacts touching, forming a straight line. During the opening process, the iron double arms rotate horizontally outward. Isolating switches in substations are typically situated outdoors with the sky predominantly composing the background. Additionally, the isolating switch is mounted on an iron frame, connected and fixed to the iron support by cylindrical insulators. Typically, three sets of isolating switches are installed in parallel, several meters above the ground.

    To obtain a neural network model with optimal segmentation performance, multiple video segments of isolating switch scenes were recorded with a depth camera at the experimental site supervised by the Changzhou Power Company. To diversify the samples, video was recorded from various perspectives to obtain isolating switch images from different angles. The OpenCV library was employed to extract frames from the videos, capturing images from different perspectives. Each RGB and depth image has a resolution of 2208 × 1242 pixels. To prevent an abundance of highly similar images, an interval frame extraction strategy was adopted. Moreover, frames blurred by camera movement were manually excluded.

    The dataset was annotated with ground truth images using the Labelme tool. Each single-arm isolating switch's boundary was delineated using the straight-line option in Labelme, and the arm area was filled in white while other areas were set to black.

    Given the substantial similarity in video content, overfitting is prone to occur during network training. Therefore, data augmentation was applied to the existing isolating switch dataset to further expand its capacity, enhancing the generalization capability of the network model as much as possible. Various methods, including image mirroring, noise generation, image rotation, and image scaling, were employed, expanding the dataset to five times its original size. The augmented dataset reached a total of 15,000 images and was randomly divided into 10,000 training images, 2500 validation images, and 2500 test images, a ratio of 4:1:1.
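    The four augmentation methods above can be sketched with plain numpy as follows. This is a minimal illustration of the 5x expansion (original plus four variants); the actual pipeline presumably used OpenCV with finer-grained rotation angles and scale factors:

    ```python
    import numpy as np

    def augment(image, rng=None):
        """Return the original image plus four augmented variants: horizontal
        mirror, additive Gaussian noise, 90-degree rotation, and a naive
        0.5x downscale. Noise sigma and the fixed rotation/scale are
        illustrative choices."""
        rng = np.random.default_rng(0) if rng is None else rng
        mirrored = image[:, ::-1]                                  # mirror
        noisy = np.clip(image + rng.normal(0, 5, image.shape), 0, 255)  # noise
        rotated = np.rot90(image)                                  # rotation
        scaled = image[::2, ::2]                                   # scaling
        return [image, mirrored, noisy, rotated, scaled]
    ```

    Applying this to every source image turns a set of N frames into 5N training samples.
    
    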

    In this section, we use a depth camera to obtain RGB images and depth images of isolating switches in real-world substation scenes to evaluate the effectiveness and performance of the detection algorithm.

    The detection results of this algorithm are shown in Figure 6. The experimental results show that the RGB-D saliency object detection method HiDAnet used in this paper can detect all the isolating switches in the scene. In order to observe the final segmentation and optimization results more clearly, we crop the edge region of the resulting image. After edge optimization by the conditional random field, better isolating switch segmentation results are obtained. The output results show that a clearer segmentation effect can be achieved for isolating switches at different angles, with more precise edges. When the isolating switch is far away, the segmentation result worsens; although it is significantly improved after optimization by the conditional random field, it is still not as good as at a short distance.

    Figure 6.  Detection results of the isolating switches.

    To further verify the effectiveness and reliability of this method, we compare it with other object segmentation algorithms on the same dataset, as shown in Table 2 and Figure 7. To show the detection results more clearly, we crop the original images to highlight the isolating switch areas. Four evaluation metrics are used: mean absolute error (MAE), mean F-measure, enhanced alignment measure (E-measure), and structure measure (S-measure) [37].

    Table 2.  Quantitative results of proposed method and other representative SOTA methods.
    | Models | MAE ↓ | S-measure ↑ | F-measure ↑ | E-measure ↑ |
    |---|---|---|---|---|
    | ACNet [38] | 0.3602 | 0.5088 | 0.4515 | 0.1904 |
    | DeepLabV3 [39] | 0.4003 | 0.5023 | 0.2435 | 0.3382 |
    | ESANet [40] | 0.3668 | 0.5091 | 0.4950 | 0.4954 |
    | PSANet [41] | 0.3971 | 0.4929 | 0.3624 | 0.3944 |
    | PSPNet [42] | 0.3875 | 0.5059 | 0.2743 | 0.3604 |
    | RedNet [43] | 0.3645 | 0.5034 | 0.4348 | 0.4643 |
    | RefineNet [44] | 0.3688 | 0.5139 | 0.4940 | 0.4865 |
    | UCNet [45] | 0.3345 | 0.5144 | 0.6217 | 0.4780 |
    | STINet [46] | 0.3428 | 0.5244 | 0.6045 | 0.4952 |
    | TSFPNet [47] | 0.3157 | 0.5326 | 0.6657 | 0.4839 |
    | Flow-EdgeNet [48] | 0.3286 | 0.5261 | 0.6347 | 0.4816 |
    | Ours | 0.2947 | 0.5618 | 0.7162 | 0.5481 |

    Figure 7.  Detection results of the proposed method and other representative SOTA methods.

    The MAE is a linear evaluation metric that calculates the average absolute error, pixel-wise, between the saliency prediction map and the ground truth map. A lower MAE value signifies better predictive performance. The specific formulation is given in Eq (5).

    $$\mathrm{MAE}=\frac{1}{H\times W}\sum_{y=1}^{H}\sum_{x=1}^{W}\left|S(x,y)-G(x,y)\right|\tag{5}$$

    where $H$ and $W$ represent the height and width of the image, and $S$ and $G$ represent the saliency prediction map and the ground truth map, respectively.
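    Eq (5) is straightforward to verify with a minimal Python sketch, in which nested lists stand in for image arrays:

```python
def mae(pred, gt):
    """Eq (5): pixel-wise mean absolute error between a saliency
    prediction map S and a ground truth map G (values in [0, 1])."""
    h, w = len(pred), len(pred[0])
    total = sum(abs(pred[y][x] - gt[y][x]) for y in range(h) for x in range(w))
    return total / (h * w)

pred = [[0.9, 0.1], [0.8, 0.2]]  # S: predicted saliency
gt   = [[1.0, 0.0], [1.0, 0.0]]  # G: binary ground truth
print(mae(pred, gt))  # ≈ 0.15
```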

    The F-measure is a statistical metric in machine learning, and a higher value indicates better model performance. Before computing it, the predicted saliency maps are binarized at various thresholds. Each binary saliency map, together with the ground truth map, is then used to calculate the weighted harmonic mean of precision and recall, yielding the corresponding F-measure. The general formulation is Eq (6). The mean F-measure is computed by evaluating the F-measure at segmentation thresholds from 0 to 255 and averaging the results.

    $$F_{\beta}=\frac{(\beta^{2}+1)\cdot\mathrm{precision}\cdot\mathrm{recall}}{\beta^{2}\cdot\mathrm{precision}+\mathrm{recall}}\tag{6}$$

    where $\mathrm{precision}=\frac{TP}{TP+FP}$ and $\mathrm{recall}=\frac{TP}{TP+FN}$. TP (true positive), FP (false positive), and FN (false negative) respectively represent instances of correct positive predictions, incorrect positive predictions, and incorrect negative predictions, and $\beta^{2}$ is employed to balance the importance of precision and recall.
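    A single-threshold version of Eq (6) can be sketched as follows; the default $\beta^{2}=0.3$ is the value commonly used in saliency detection and is an assumption here, since the text does not state the paper's choice:

```python
def f_measure(pred_bin, gt_bin, beta2=0.3):
    """Eq (6) on flattened binary maps (0/1 values).
    beta2 = 0.3 is assumed, the value commonly used in saliency work."""
    tp = sum(1 for p, g in zip(pred_bin, gt_bin) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred_bin, gt_bin) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred_bin, gt_bin) if p == 0 and g == 1)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (beta2 + 1) * precision * recall / (beta2 * precision + recall)

# One threshold; the mean F-measure averages this over thresholds 0..255.
print(f_measure([1, 1, 0, 0], [1, 0, 1, 0]))  # precision = recall = 0.5
```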

    The E-measure combines local pixel-wise information with image-level statistics, providing a comprehensive evaluation that considers both local and global information. It is a binary foreground-map assessment metric, so the saliency maps must first be segmented into multiple binary saliency maps using various thresholds. The general formulation is typically expressed as Eq (7).

    $$E_{\phi}=\frac{1}{H\times W}\sum_{y=1}^{H}\sum_{x=1}^{W}\phi_{FM}(x,y)\tag{7}$$

    where $\phi_{FM}$ represents the enhanced alignment matrix, employed to capture two crucial attributes: pixel-level matching and image-level statistics.
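    One common construction of $\phi_{FM}$, taken from the original E-measure formulation, can be sketched in Python; whether the paper uses exactly this variant is an assumption:

```python
def e_measure(fm, gt, eps=1e-8):
    """Enhanced alignment measure over flattened binary maps (0/1 values),
    following the bias-aligned formulation of the original E-measure paper."""
    n = len(fm)
    mu_fm = sum(fm) / n
    mu_gt = sum(gt) / n
    total = 0.0
    for f, g in zip(fm, gt):
        df, dg = f - mu_fm, g - mu_gt           # mean-removed (bias) values
        align = 2 * df * dg / (df * df + dg * dg + eps)
        total += (1 + align) ** 2 / 4           # enhanced alignment phi
    return total / n

print(e_measure([1, 0, 1, 0], [1, 0, 1, 0]))  # ≈ 1.0: perfect agreement
print(e_measure([1, 0, 1, 0], [0, 1, 0, 1]))  # ≈ 0.0: complete disagreement
```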

    The S-measure is specifically designed for the evaluation of nonbinary images. Previous saliency evaluation metrics were primarily based on pixel-level errors, neglecting considerations of structural similarity within salient objects. Consequently, a new evaluation metric has been proposed by integrating both object structure similarity (S0) and region structure similarity (Sr). A higher S-measure indicates better predictive performance. The formulation is expressed as Eq (8).

    $$S=\alpha S_{0}+(1-\alpha)S_{r}\tag{8}$$

    where S0 represents object structure similarity and Sr represents region structure similarity. α is a parameter used to balance the importance of S0 and Sr.

    The test images include scenes with a single group of isolating switches and scenes with multiple groups. In Table 2, we quantitatively compare our approach with other state-of-the-art (SOTA) algorithms: it achieves the best S-measure, E-measure, and F-measure and the lowest MAE. Figure 7 shows the final segmentation results of the different algorithms more intuitively. Our method outperforms the other image segmentation methods on the isolating switch salient object detection task: for different distances and different numbers of isolating switches, the segmented regions are cleaner and their edges more distinct. Each isolating switch consists of two arms, and judging its state requires clearly separated arm regions, especially when the switch is closed and the arms are close to each other. To obtain the direction of an arm, the edge line of its detected region must be extracted; if the edge of the detection result is blurred, the line extraction will deviate. After CRF edge refinement of the salient object detection results, the regions of the different arms are separated with clear edges, as shown in Figure 7.
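    The arm-geometry state judgment described above can be illustrated with a minimal sketch: once the two arm edge lines are extracted, the difference of their direction angles and the gap between their inner endpoints decide the state. The endpoint ordering and the thresholds below are illustrative assumptions, not the system's calibrated values:

```python
import math

def arm_angle(p1, p2):
    """Direction angle in degrees of an arm edge line from p1 to p2."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def switch_state(arm_a, arm_b, angle_tol=10.0, gap_tol=15.0):
    """Judge open/closed from two arm segments ((x1, y1), (x2, y2)).
    Assumes each segment is ordered (outer end, inner end). Closed means
    the arms are nearly collinear and their inner ends nearly touch;
    angle_tol (degrees) and gap_tol (pixels) are illustrative values."""
    diff = abs(arm_angle(*arm_a) - arm_angle(*arm_b)) % 180.0
    diff = min(diff, 180.0 - diff)          # compare as undirected lines
    gap = math.dist(arm_a[1], arm_b[1])     # inner-endpoint distance
    return "closed" if diff < angle_tol and gap < gap_tol else "open"

print(switch_state(((0, 0), (50, 0)), ((105, 0), (55, 0))))    # closed
print(switch_state(((0, 0), (40, 30)), ((100, 0), (60, 30))))  # open
```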

    A substation contains many groups of isolating switches, often distributed across different areas. The camera's shooting angle is limited, so at a fixed angle it can only capture the isolating switches in one area. The controllable pan-tilt in the system, however, can change the camera's shooting angle; we can therefore steer the pan-tilt to viewpoints that reduce target occlusion and maximize the number of detected objects.

    To further validate the reliability of the proposed system in isolating switch state detection, we compared it with other isolating switch state detection methods on the same dataset, as shown in Table 3. The dataset was partitioned into two categories, open and closed states, with the accuracy of the final detection results as the evaluation metric. The test data included images of a single group of isolating switches as well as multiple groups. As Table 3 shows, our method exhibits superior accuracy on both open-state and closed-state data, indicating that it achieves more accurate state detection.

    Table 3.  Quantitative results of isolating switch state detection.
    | Methods | Isolating switch state | Accuracy |
    |---|---|---|
    | HOG + SVM [49] | Close | 96.3% |
    | | Open | 95.7% |
    | ISSSRNet [50] | Close | 98.2% |
    | | Open | 97.5% |
    | Ours | Close | 99.4% |
    | | Open | 99.1% |


    To comprehensively assess the effectiveness of the system in different environments, we also captured images of isolating switches in foggy weather with the depth camera and included these scenes in the dataset. Owing to the incorporated depth information, the proposed method delivers accurate detection results even in these foggy conditions, as illustrated in Figure 8.

    Figure 8.  RGB images, depth images, and detection results in foggy weather.

    In the actual detection process, the achievable frame rate depends on the number of isolating switches in the image: as the number of switches increases, the processed frame rate drops. The test scene contains three groups of isolating switches. Under the hardware specification used in this paper (the Windows 10 operating system, an Intel Core i9-10900KF CPU, and an NVIDIA RTX 3090 graphics processing unit), the ZED2 camera computes and outputs depth images at 40 frames per second, and the real-time response rate of the system reaches 15 frames per second. As shown in Figure 9, we chose a scene with three groups of isolating switches to test the performance of the system and recorded the process of the switches moving from closed to open. Figure 9 includes the original RGB images and the detection results. When an isolating switch starts to open, the system detects the state change in time.

    Figure 9.  Experimental results of the proposed isolation switch state detection system.

    To facilitate observation and control by the staff, we designed a graphical user interface, as shown in Figures 10 and 11. The staff can schedule system start-up and shut-down according to the monitoring requirements and control the rotation of the pan-tilt from the remote server; in this way, the camera's shooting angle can be changed to monitor isolating switches in different areas. During inspection, the real-time monitoring results processed by the industrial computer are displayed on the remote control terminal. The display includes the RGB scene, the depth scene, and the detection result image, and the detailed detection results are listed in a table.

    Figure 10.  Results of isolating switch state detection system (Synchronism).
    Figure 11.  Results of isolating switch state detection system (Asynchronism).

    In this paper, a new isolating switch state detection system was introduced that realizes detection, state judgment, and synchronization checking of isolating switches. The detection system consists of a controllable pan-tilt, a depth camera, and an industrial computer; a remote server controls the system and aggregates the detection information. First, the system obtained the isolating switch regions by RGB-D salient object detection. Second, to improve the accuracy of subsequent processing, we introduced the fully connected conditional random field to refine the detection results, yielding clearer edges. Finally, the edge lines of the switch arms were extracted, and the direction angle and distance between the arms were calculated to judge the state of the isolating switch. In the experiments, we compared the detection method used in this paper with SOTA methods from multiple perspectives; the proposed method obtained the best qualitative and quantitative performance. The system therefore meets the accuracy requirements for isolating switch detection and state identification in practical applications.

    The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

    Funding: This research was funded by the Science and Technology Project of State Grid Ningxia Electric Power Co., Ltd., grant number 5229CG230003.

    The authors declare there is no conflict of interest.



    [1] J. K. Pylvanainen, K. Nousiainen, P. Verho, Studies to utilize loading guides and ANN for oil-immersed distribution transformer condition monitoring, IEEE Trans. Power Delivery, 22 (2006), 201–207. https://doi.org/10.1109/TPWRD.2006.877075
    [2] L. Maraaba, Z. Al-Hamouz, H. Al-Duwaish, Estimation of high voltage insulator contamination using a combined image processing and artificial neural networks, in IEEE 8th International Power Engineering and Optimization Conference, (2014), 214–219. https://doi.org/10.1109/PEOCO.2014.6814428
    [3] J. Xie, C. Liu, M. Sforna, M. Bilek, R. Hamza, On-line physical security monitoring of power substations, Int. Trans. Electr. Energy Syst., 26 (2016), 1148–1170. https://doi.org/10.1002/etep.2122
    [4] D. Pernebayeva, M. Bagheri, A. James, High voltage insulator surface evaluation using image processing, in 2017 International Symposium on Electrical Insulating Materials (ISEIM), 2 (2017), 520–523. https://doi.org/10.23919/ISEIM.2017.8166540
    [5] J. Sui, S. Liao, B. Li, H. Zhang, High sensitivity multitasking non-reciprocity sensor using the photonic spin Hall effect, Opt. Lett., 47 (2022), 6065–6068. https://doi.org/10.1364/OL.476048
    [6] J. Sui, J. Zou, S. Liao, B. Li, H. Zhang, High sensitivity multiscale and multitasking terahertz Janus sensor based on photonic spin Hall effect, Appl. Phys. Lett., 23 (2023), 122. https://doi.org/10.1063/5.0153342
    [7] B. Wan, Q. Wang, H. Peng, H. Ye, H. Zhang, A late-model optical biochemical sensor based on OTS for methane gas and glucose solution concentration detection, IEEE Sens. J., 21 (2021), 21465–21472. https://doi.org/10.1109/JSEN.2021.3103548
    [8] J. Li, H. Liu, T. Bi, Tunnel magnetoresistance-based noncontact current sensing and measurement method, IEEE Trans. Instrum. Meas., 71 (2022), 1–9. https://doi.org/10.1109/TIM.2022.3152240
    [9] H. Yuan, Z. Sun, L. Wang, A. Yang, X. Wang, M. Rong, Fault diagnosis of disconnector based on attitude sensing system, High Voltage Eng., 1 (2022), 47–57. https://doi.org/10.13336/j.1003-6520.hve.20210884
    [10] L. Jin, D. Zhang, Contamination grades recognition of ceramic insulators using fused features of infrared and ultraviolet images, Energies, 8 (2015), 837–858. https://doi.org/10.3390/en8020837
    [11] S. Liao, J. An, A robust insulator detection algorithm based on local features and spatial orders for aerial images, IEEE Geosci. Remote Sens. Lett., 12 (2014), 963–967. https://doi.org/10.1109/LGRS.2014.2369525
    [12] W. Liu, G. Dong, M. Zou, Satellite road extraction method based on RFDNet neural network, Electron. Res. Arch., 31 (2023), 4362–4377. https://doi.org/10.3934/era.2023223
    [13] M. Bacco, P. Cassarà, A. Gotta, M. Puddu, A simulation framework for QoE-aware real-time video streaming in multipath scenarios, in Ad-Hoc, Mobile, and Wireless Networks: 19th International Conference on Ad-Hoc Networks and Wireless (ADHOC-NOW 2020), Springer International Publishing, (2020), 114–121.
    [14] Y. Dong, J. Liu, Y. Lan, A classification method for breast images based on an improved VGG16 network model, Electron. Res. Arch., 31 (2023), 2358–2373. https://doi.org/10.3934/era.2023120
    [15] C. Chen, H. Kong, B. Wu, Edge detection of remote sensing image based on Grünwald-Letnikov fractional difference and Otsu threshold, Electron. Res. Arch., 31 (2023), 1287–1302. https://doi.org/10.3934/era.2023066
    [16] L. Romeo, R. Marani, A. Petitti, A. Milella, T. D'Orazio, G. Cicirelli, Image-based mobility assessment in elderly people from low-cost systems of cameras: A skeletal dataset for experimental evaluations, in Ad-Hoc, Mobile, and Wireless Networks: 19th International Conference on Ad-Hoc Networks and Wireless (ADHOC-NOW 2020), Springer International Publishing, (2020), 125–130.
    [17] Y. Ma, Q. Li, L. Chu, Y. Zhou, C. Xu, Real-time detection and spatial localization of insulators for UAV inspection based on binocular stereo vision, Remote Sens., 13 (2021), 230. https://doi.org/10.3390/rs13020230
    [18] G. Zhang, D. Zhang, D. Li, L. Zhou, The automatic identification method of switch state, Int. J. Simul.: Syst., Sci. Technol., 17 (2016), 21.1–21.4.
    [19] Y. Teng, T. Y. Tan, C. Lei, J. Yang, Y. Ma, K. Zhao, et al., A novel method to recognize the state of high-voltage isolating switch, IEEE Trans. Power Delivery, 34 (2019), 1350–1356. https://doi.org/10.1109/TPWRD.2019.2897132
    [20] J. Wang, Q. Liu, K. Zhao, Y. Jiang, L. Cheng, Recognition of high voltage isolating switch's states based on object tracking, in 4th International Conference on Systems and Informatics (ICSAI), (2017), 474–478. https://doi.org/10.1109/ICSAI.2017.8248339
    [21] C. Yang, X. Wu, W. Gong, Q. Wang, L. Li, An intelligent identification algorithm for obtaining the state of power equipment in SIFT-based environments, Int. J. Performabil. Eng., 15 (2019), 2382. https://doi.org/10.23940/ijpe.19.09.p11.23822391
    [22] J. Lu, H. Lin, W. Zhang, X. Shi, A condition monitoring algorithm based on image geometric analysis for substation switch, in Proceedings of 2015 International Conference on Intelligent Computing and Internet of Things, (2015), 72–76. https://doi.org/10.1109/ICAIOT.2015.7111541
    [23] Y. Bin, C. Hao, H. Wenguang, Study on the method of switch state detection based on image recognition in substation sequence control, in 2014 International Conference on Power System Technology, (2014), 2504–2510. https://doi.org/10.1109/POWERCON.2014.6993781
    [24] J. Xu, Q. Li, Y. Luo, Y. Zhou, J. Wang, State measurement of isolating switch using cost fusion and smoothness prior based stereo matching, Int. J. Adv. Rob. Syst., 17 (2020), 1729881420925299. https://doi.org/10.1177/1729881420925299
    [25] M. S. Jadin, S. Taib, Recent progress in diagnosing the reliability of electrical equipment by using infrared thermography, Infrared Phys. Technol., 55 (2012), 236–245. https://doi.org/10.1016/j.infrared.2012.03.002
    [26] I. Ullah, F. Yang, R. Khan, L. Liu, H. Yang, B. Gao, et al., Predictive maintenance of power substation equipment by infrared thermography using a machine-learning approach, Energies, 10 (2017), 1987. https://doi.org/10.3390/en10121987
    [27] I. Ullah, R. U. Khan, F. Yang, L. Wuttisittikulkij, Deep learning image-based defect detection in high voltage electrical equipment, Energies, 13 (2020), 392. https://doi.org/10.3390/en13020392
    [28] T. Zhou, D. Fan, M. Cheng, J. Shen, L. Shao, RGB-D salient object detection: A survey, Comput. Visual Media, (2021), 37–69.
    [29] L. Junfeng, L. Min, W. Qinruo, A novel insulator detection method for aerial images, in Proceedings of the 9th International Conference on Computer and Automation Engineering, (2017), 141–144.
    [30] D. Gong, Z. He, X. Ye, Z. Fang, Visual saliency detection for over-temperature regions in 3D space via dual-source images, Sensors, 20 (2020), 3414. https://doi.org/10.3390/s20123414
    [31] C. Rother, V. Kolmogorov, A. Blake, "GrabCut": Interactive foreground extraction using iterated graph cuts, ACM Trans. Graphics (TOG), 23 (2004), 309–314. https://doi.org/10.1145/1015706.1015720
    [32] P. Krähenbühl, V. Koltun, Efficient inference in fully connected CRFs with Gaussian edge potentials, Adv. Neural Inf. Process. Syst., 24 (2011).
    [33] P. Krähenbühl, V. Koltun, Parameter learning and convergent inference for dense random fields, in International Conference on Machine Learning (PMLR), (2013), 213–521.
    [34] Y. Chen, W. Zhou, Hybrid-attention network for RGB-D salient object detection, Appl. Sci., 10 (2020), 5806. https://doi.org/10.3390/app10175806
    [35] Z. Wu, G. Allibert, F. Meriaudeau, C. Ma, C. Demonceaux, HiDAnet: RGB-D salient object detection via hierarchical depth awareness, IEEE Trans. Image Process., 32 (2023), 2160–2173. https://doi.org/10.1109/TIP.2023.3263111
    [36] S. Liu, D. Huang, Receptive field block net for accurate and fast object detection, in Proceedings of the European Conference on Computer Vision (ECCV), (2018), 385–400.
    [37] D. P. Fan, M. M. Cheng, Y. Liu, T. Li, A. Borji, Structure-measure: A new way to evaluate foreground maps, in Proceedings of the IEEE International Conference on Computer Vision, (2017), 4548–4557.
    [38] K. Min, G. H. Lee, S. W. Lee, ACNet: Mask-aware attention with dynamic context enhancement for robust acne detection, in IEEE International Conference on Systems, Man, and Cybernetics (SMC), (2021), 2724–2729. https://doi.org/10.1109/SMC52423.2021.9659243
    [39] M. N. Mahmud, M. H. Azim, M. Hisham, M. K. Osman, A. P. Ismail, F. Ahmad, et al., Altitude analysis of road segmentation from UAV images with DeepLab V3+, in IEEE 12th International Conference on Control System, Computing and Engineering (ICCSCE), (2022), 219–223. https://doi.org/10.1109/ICCSCE54767.2022.9935649
    [40] D. Seichter, M. Köhler, B. Lewandowski, T. Wengefeld, H. M. Gross, Efficient RGB-D semantic segmentation for indoor scene analysis, in IEEE International Conference on Robotics and Automation (ICRA), (2021), 13525–13531. https://doi.org/10.1109/ICRA48506.2021.9561675
    [41] H. Zhao, Y. Zhang, S. Liu, J. Shi, C. C. Loy, D. Lin, et al., PSANet: Point-wise spatial attention network for scene parsing, in Proceedings of the European Conference on Computer Vision (ECCV), (2018), 267–283.
    [42] P. Ravikiran, M. Chakkaravarthy, Improved efficiency of semantic segmentation using pyramid scene parsing deep learning network method, in Intelligent Systems and Sustainable Computing: Proceedings of ICISSC 2021, Springer Nature Singapore, (2022), 175–181.
    [43] J. Kim, H. Park, Reduced CNN model for face image detection with GAN oversampling, in Innovative Mobile and Internet Services in Ubiquitous Computing: Proceedings of the 15th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS-2021), Springer International Publishing, (2022), 232–241.
    [44] H. Zhou, H. Chen, Y. Zhang, M. Wei, H. Xie, J. Wang, et al., Refine-Net: Normal refinement neural network for noisy point clouds, IEEE Trans. Pattern Anal. Mach. Intell., 45 (2022), 946–963. https://doi.org/10.1109/TPAMI.2022.3145877
    [45] J. Zhang, D. P. Fan, Y. Dai, S. Anwar, F. S. Saleh, T. Zhang, et al., UC-Net: Uncertainty inspired RGB-D saliency detection via conditional variational autoencoders, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, (2020), 8582–8591.
    [46] X. Zhou, W. Cao, H. Gao, Z. Ming, J. Zhang, STI-Net: Spatiotemporal integration network for video saliency detection, Inf. Sci., 628 (2023), 134–147. https://doi.org/10.1016/j.ins.2023.01.106
    [47] Q. Chang, S. Zhu, Human vision attention mechanism-inspired temporal-spatial feature pyramid for video saliency detection, Cognit. Comput., (2023), 1–13. https://doi.org/10.1007/s12559-023-10114-x
    [48] M. Jian, X. Lu, X. Yu, Y. Ju, H. Yu, K. M. Lam, Flow-Edge-Net: Video saliency detection based on optical flow and edge-weighted balance loss, IEEE Trans. Comput. Social Syst., 2023. https://doi.org/10.1109/TCSS.2023.3270164
    [49] Y. Teng, T. Tan, C. Lei, J. Yang, Y. Ma, K. Zhao, et al., A novel method to recognize the state of high-voltage isolating switch, IEEE Trans. Power Delivery, 34 (2019), 1350–1356. https://doi.org/10.1109/TPWRD.2019.2897132
    [50] X. Lu, W. Quan, S. Gao, G. Zhang, K. Feng, G. Lin, et al., A segmentation-based multitask learning approach for isolating switch state recognition in high-speed railway traction substation, IEEE Trans. Intell. Transp. Syst., 23 (2022), 15922–15939. https://doi.org/10.1109/TITS.2022.3146338
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
