
In this paper, a new model, YOLO-v5, is applied to detect defects in printed circuit boards (PCBs). In the past, many models and approaches have been implemented for defect detection during PCB quality inspection. This algorithm is specifically selected for its efficiency, accuracy, and speed. The earlier YOLO models (YOLO, YOLO-v2, YOLO-v3, YOLO-v4, and Tiny-YOLO-v2) are well-established, state-of-the-art object detectors in the artificial intelligence industry. In the electronics industry, the PCB is the core and most basic component of any electronic product. PCBs are used in almost every electronic product of daily life, not only for commercial purposes but also in sensitive applications such as defense and space exploration. These PCBs must be inspected and quality checked to detect defects introduced during the manufacturing process. Most electronics manufacturers focus heavily on product quality, since a small error during the manufacture or inspection of a product such as a PCB can lead to catastrophic failure. A revolution is therefore under way in manufacturing, where object detection methods such as YOLO-v5 are a game changer for industries such as electronics.
Citation: Venkat Anil Adibhatla, Huan-Chuang Chih, Chi-Chang Hsu, Joseph Cheng, Maysam F. Abbod, Jiann-Shing Shieh. Applying deep learning to defect detection in printed circuit boards via a newest model of you-only-look-once[J]. Mathematical Biosciences and Engineering, 2021, 18(4): 4411-4428. doi: 10.3934/mbe.2021223
A printed circuit board (PCB) is the core and primary section of any electronic product. In daily life we rely heavily on electronic products; from communication to transport, almost every product contains a PCB. Because PCBs are the core and base of any electronic product, they must be designed and manufactured with high precision to cope with demand, and keeping quality up to the mark is a challenging job. A PCB is essentially a base for chips, transistors, capacitors, and other electronic components, and is made of fiberglass or composite epoxy with laminated materials [1,2,3,4]. During the manufacturing process, many PCBs are produced with defects due to mishandling or technical faults. These faulty or defective PCBs should be detected and segregated during the quality inspection process. In the past, there were many traditional and standard methods to detect defects [5,6], but they were neither efficient nor fast.
In recent years, much research has been done on detecting defects in PCBs, but the resulting methods were not very effective and, in particular, were unable to detect tiny defects [7]. Although PCB manufacturers now use image processing methods such as image subtraction, they are still unable to keep up with the demands of the quality inspection process. With the rising popularity of consumer electronics, accurate PCB manufacturing is important [8]. A few examples of PCB defect detection methods can be found in the literature [8,9,10]. In particular, the template-matching method [11] has been used to detect defects in PCBs, and image subtraction has been implemented using OpenCV [12]. However, these detection algorithms are specific to particular types of PCB defects. Remarkable progress has been made with convolutional neural networks (CNNs) in several applications [13,14], such as image recognition [15,16] and object detection. In particular, object detection has been achieved by implementing object recognition methods [17] and region-based CNNs [18].
These methods are used in conjunction with automated optical inspection (AOI) and automated visual inspection (AVI) machines. Apart from using such expensive machines, PCB manufacturers must train and employ a large workforce for quality inspection after the automated inspection process [19,20,21]. They rely on skilled manpower, which introduces inconsistency in accuracy and consumes time, delaying production. To overcome these difficulties, YOLO-v5 is introduced in this paper to address the issues experienced in previous research. Training skilled engineers requires substantial effort, and even after years of training, human beings make errors. The application of machine learning can reduce such errors: machines programmed with deep learning algorithms can verify defects that are hard to detect.
Compared with a skilled quality engineer, machine learning methods are more accurate and much faster. Thus, this research shows that machine learning can detect defects in PCBs and increase productivity with higher accuracy. Researchers have applied various You-Only-Look-Once (YOLO) approaches [22,23,24,25,26] to a range of datasets and achieved excellent accuracy. YOLO-v5 outperforms other object detection algorithms due to features such as Mosaic data enhancement and adaptive anchor frame calculation. Apart from its well-designed structure, it excels in speed and has a smaller model size. On an Ubuntu system running a Titan V GPU, we observed inference times as low as 0.007 seconds per image, i.e., 140 frames per second (FPS). By contrast, YOLO-v4 achieved 50 FPS after being converted to the same PyTorch library. In this research, we used the PyTorch library to deploy the YOLO-v5 model, which is considerably more developer-friendly.
The YOLO-v5 model can also be deployed on mobile devices, as it can be compiled to CoreML and ONNX. YOLO-v5 reaches 140 FPS; with a batch size of 1, YOLO-v5 outputs 30 FPS while YOLO-v4 achieves 10 FPS. In our tests, we achieved roughly 0.895 mean average precision (mAP) after training for the initial 100 epochs. Admittedly, EfficientDet and YOLO-v4 showed comparable performance, but it is rare to see such across-the-board speed improvements without any loss in accuracy. Finally, YOLO-v5 is small: its weight file is about 27 megabytes, whereas the YOLO-v4 weight file is 244 megabytes. YOLO-v5 is thus nearly 90% smaller than YOLO-v4, which means it can be deployed to embedded devices much more easily.
Initially, data are collected from the AVI machine, which provides panel images in RGB format from 4 different cameras. Images from the 4 cameras are extracted, cropped, and tested separately. The R, G, and B channels are combined, and the image is cropped at the exact defect location provided by the AOI machine in a text file. Each image is cropped to a size of 400 × 400 around that location and saved. Using this method, around 23,000 defective PCB images are collected. The images are then labelled using a tool created for the quality inspection engineers, which extracts images of defective PCBs from the AVI and AOI machines together with their coordinates. The quality inspection engineer draws a box around the defective region and labels it as DEFECT; the locations of the two corners of the box, (x1, y1) and (x2, y2), are saved in txt format for the training process. After collecting and labeling the images, three models, YOLO-v5 small, YOLO-v5 medium, and YOLO-v5 large, are trained and their results compared.
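The cropping and label-writing steps described above can be sketched as follows. This is a minimal illustration, not the authors' tool: `crop_defect`, `write_label`, the synthetic panel, and the label file layout are hypothetical names and assumptions.

```python
import numpy as np

def crop_defect(panel, cx, cy, size=400):
    """Crop a size x size window centred on the defect location (cx, cy),
    clamped so the window stays inside the panel, mirroring the
    400 x 400 crops described above."""
    h, w = panel.shape[:2]
    x0 = min(max(cx - size // 2, 0), w - size)
    y0 = min(max(cy - size // 2, 0), h - size)
    return panel[y0:y0 + size, x0:x0 + size]

def write_label(path, x1, y1, x2, y2, label="DEFECT"):
    # One line per defect: class name followed by the two box corners.
    with open(path, "w") as f:
        f.write(f"{label} {x1} {y1} {x2} {y2}\n")

# Example: a synthetic 2000 x 2000 RGB panel with a defect at (512, 640).
panel = np.zeros((2000, 2000, 3), dtype=np.uint8)
crop = crop_defect(panel, 512, 640)
```

The clamping means a defect near the panel edge still yields a full 400 × 400 crop rather than a truncated one.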
A total of 23,000 images are used in this experiment, divided into 20,700 images for training and 2300 images for testing. After training all three models with the 10-fold cross-validation method, 30 models are generated and tested. The quality control testing procedure is fully automated, with a user interface that reports the time and the numbers of negative (NG) and OK images. This visualization interface is effective and helps in monitoring the testing process without human interaction. The interface is developed using Python and the Tkinter library. Figure 1 shows a snapshot of the developed interface used by the quality inspection operators.
The structures of YOLO-v5 and YOLO-v4 are very similar, but there are some differences. Figure 2 shows the YOLO-v5 network structure diagram; it is divided into 4 parts, namely Input, Backbone, Neck, and Prediction.
a) Mosaic data enhancement
The input end of YOLO-v5 uses the Mosaic data enhancement method. In typical training projects, the average precision for small targets is generally lower than for medium and large targets. Our dataset also contains a large number of small targets, and, more troublesomely, the small targets are unevenly distributed. Mosaic enhancement offers several advantages: it enriches the dataset by randomly selecting, scaling, and distributing four pictures and splicing them together, and the random scaling in particular adds many small targets, making the network more robust. It also reduces GPU requirements. One might argue that random scaling in ordinary data enhancement achieves the same, but many users have only a single GPU; with Mosaic enhancement, the data of 4 pictures are computed at once, so the mini-batch size does not need to be large, and a single GPU can achieve good results.
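The splicing idea can be illustrated with a simplified sketch: a random centre point divides a canvas into four quadrants, and each of four input images is rescaled into one quadrant. The quadrant layout and the nearest-neighbour resize are illustrative assumptions; the real augmentation also remaps the box labels, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mosaic(images, out_size=608):
    """Stitch four images into one out_size x out_size training image
    around a random centre point (simplified Mosaic sketch)."""
    assert len(images) == 4
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)
    cx = int(rng.uniform(0.3, 0.7) * out_size)   # random split point
    cy = int(rng.uniform(0.3, 0.7) * out_size)
    quads = [(0, 0, cx, cy), (cx, 0, out_size, cy),
             (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x0, y0, x1, y1) in zip(images, quads):
        h, w = y1 - y0, x1 - x0
        # Nearest-neighbour "resize" via index sampling keeps the sketch
        # dependency-free; a real pipeline would use a proper resize.
        ys = np.linspace(0, img.shape[0] - 1, h).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, w).astype(int)
        canvas[y0:y1, x0:x1] = img[ys][:, xs]
    return canvas

imgs = [np.full((400, 400, 3), i * 60, dtype=np.uint8) for i in range(4)]
m = mosaic(imgs)
```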
b) Adaptive anchor frame calculation
In the YOLO algorithms, an anchor frame with an initial length and width is defined for each dataset. During training, the network outputs prediction frames based on these initial anchors; the difference between the predictions and the ground truth is computed as the loss, and the network parameters are then updated through backpropagation. In YOLO-v3 and YOLO-v4, when training on a new dataset, the initial anchor box values are calculated by a separate program, but in YOLO-v5 this function is embedded in the code. Hence, the best anchor box values are calculated and updated adaptively during each training run.
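The anchor estimation can be approximated with a plain k-means clustering over the labelled box widths and heights. This is an illustrative sketch only: YOLO-v5's built-in routine is a refined variant (it further evolves the clusters), and the function name and synthetic data here are assumptions.

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=20, seed=0):
    """Estimate k initial anchor sizes by clustering (w, h) pairs."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each labelled box to the nearest anchor centre.
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            pts = wh[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    # Return anchors sorted by area, small to large.
    return centers[np.argsort(centers.prod(axis=1))]

rng = np.random.default_rng(1)
boxes = rng.uniform(10, 300, size=(500, 2))   # synthetic (w, h) pairs
anchors = kmeans_anchors(boxes, k=9)
```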
c) Adaptive image scaling
In commonly used target detection algorithms, pictures have different lengths and widths, so the common approach is to uniformly scale the original pictures to a standard size before feeding them to the detection network. During inspection, many defect images have different aspect ratios, so after scaling and filling, the black borders differ in size. If more filling is needed, information redundancy is introduced, which slows inference. Therefore, in the YOLO-v5 code, the letterbox function is modified to add the least amount of black border adaptively, which differs from our previous study [27]. The black edges at both ends of the image height are reduced, which reduces the computation during inference and improves target detection speed. Through this simple improvement, the inference speed is increased by 37%, which is very effective.
a) Focus structure
Taking the structure of YOLO-v5 as an example, an original 608 × 608 × 3 image is fed into the Focus structure shown in Figure 3. A slicing operation first changes the image into a 304 × 304 × 12 feature map, and a convolution with 32 kernels then produces a final feature map of 304 × 304 × 32.
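The slicing step can be shown in a few lines: every second pixel in each spatial direction goes to its own channel group, so an (H, W, C) map becomes (H/2, W/2, 4C). A numpy sketch is used here instead of the PyTorch layer to keep the example self-contained; the following convolution is omitted.

```python
import numpy as np

def focus_slice(x):
    """Space-to-depth slicing used by the Focus structure:
    (H, W, C) -> (H/2, W/2, 4C)."""
    return np.concatenate(
        [x[::2, ::2], x[1::2, ::2], x[::2, 1::2], x[1::2, 1::2]], axis=-1)

x = np.zeros((608, 608, 3), dtype=np.float32)
y = focus_slice(x)   # the 32-kernel convolution would follow
```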
b) CSP structure
CSPNet (Cross Stage Partial Network) mainly addresses the large amount of inference computation from the perspective of network structure design. The authors of CSPNet attribute the excessive inference computation to the repetition of gradient information during network optimization. The CSP module therefore first divides the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy, which reduces the amount of computation while maintaining accuracy. Two CSP structures are designed into the YOLO-v5 network: the CSP1_X structure is applied to the Backbone, and the CSP2_X structure is applied to the Neck.
The current Neck of YOLO-v5, like YOLO-v4, uses the FPN + PAN structure; when YOLO-v5 was first released, only the FPN structure was used, and the PAN structure and other network adjustments were added later. In the YOLO-v5 Neck, shown in Figure 4, the CSP2 structure designed in CSPNet is adopted to strengthen the network's feature fusion ability.
a) Bounding box loss function
YOLO-v5 uses the Generalized Intersection over Union (GIoU) loss as the bounding box loss function. GIoU adds a measure of the enclosing scale, which addresses the limitations of plain Intersection over Union (IoU); in particular, building on IoU, it remains informative when the bounding boxes do not overlap.
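The GIoU computation can be written out directly for corner-format boxes. GIoU = IoU − |C \ (A ∪ B)| / |C|, where C is the smallest box enclosing both A and B, and the bounding-box loss is 1 − GIoU. A minimal sketch:

```python
def giou(a, b):
    """Generalized IoU for corner-format boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection area.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest enclosing box C.
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c - union) / c

loss = 1.0 - giou((0, 0, 2, 2), (1, 1, 3, 3))
```

For non-overlapping boxes, IoU is always 0, while GIoU goes negative as the boxes move apart, so the gradient still tells the prediction which way to move.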
b) Non-maximum suppression
In the post-processing stage of target detection, screening the many candidate target frames usually requires a non-maximum suppression (NMS) operation. In this research, the DIoU-NMS method is adopted, which differs from our previous study [27]. Under the same parameters, the IoU criterion in NMS is replaced by DIoU. For partially occluded, overlapping targets this indeed brings some improvement: with parameters consistent with ordinary IoU-NMS, switching to DIoU-NMS allows both overlapping targets to be detected. Although the effect is similar in most cases, there is a slight improvement without any increase in computational cost.
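A sketch of greedy NMS with the DIoU criterion follows. DIoU = IoU − d²/c², where d is the distance between box centres and c the diagonal of the smallest enclosing box, so two overlapping but clearly separate objects are less likely to suppress each other. The function name, box layout, and threshold are illustrative assumptions.

```python
import numpy as np

def diou_nms(boxes, scores, thresh=0.5):
    """Greedy NMS using DIoU instead of IoU as the suppression criterion.
    Boxes are (x1, y1, x2, y2); returns indices of the kept boxes."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        if not len(rest):
            break
        a, bs = boxes[i], boxes[rest]
        iw = np.maximum(0, np.minimum(a[2], bs[:, 2]) - np.maximum(a[0], bs[:, 0]))
        ih = np.maximum(0, np.minimum(a[3], bs[:, 3]) - np.maximum(a[1], bs[:, 1]))
        inter = iw * ih
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (bs[:, 2] - bs[:, 0]) * (bs[:, 3] - bs[:, 1])
        iou = inter / (area_a + area_b - inter)
        # Centre-distance penalty d^2 / c^2.
        d2 = ((a[0] + a[2] - bs[:, 0] - bs[:, 2]) ** 2 +
              (a[1] + a[3] - bs[:, 1] - bs[:, 3]) ** 2) / 4.0
        cw = np.maximum(a[2], bs[:, 2]) - np.minimum(a[0], bs[:, 0])
        ch = np.maximum(a[3], bs[:, 3]) - np.minimum(a[1], bs[:, 1])
        diou = iou - d2 / (cw ** 2 + ch ** 2)
        order = rest[diou <= thresh]          # suppress high-DIoU boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
kept = diou_nms(boxes, np.array([0.9, 0.8, 0.7]))
```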
Compared with other object detection algorithms, implementing YOLO-v5 on embedded devices is very easy. An Nvidia TITAN V GPU is used for this experiment, which reduces the training time to about 10% of the original (i.e., from 34 to 4 h). YOLO-v5 only requires the installation of PyTorch and some lightweight Python libraries; with the TITAN V GPU on a Linux operating system and with the help of PyTorch, the experimental costs are reduced. YOLO-v5 can infer on individual images, batches of images, video feeds, or webcam ports. The file folder layout is intuitive and easy to navigate during development, and YOLO-v5 weights can easily be translated from PyTorch to ONNX to CoreML to iOS. Three YOLO-v5 models, small, medium, and large, are used in this experiment and compared with each other. They are trained and tested, with the network settings and parameters adjusted and tuned gradually by trial and error. YOLO-v5 includes four different models, ranging from the smallest YOLO-v5s with 7.5 million parameters (plain 7 MB, COCO pre-trained 14 MB) and 140 layers to the largest YOLO-v5x with 89 million parameters and 284 layers (plain 85 MB, COCO pre-trained 170 MB). The approach considered in this paper is based on the pre-trained YOLO-v5x model.
The YOLO-v5x model consists of a Cross Stage Partial Network (CSPNet) backbone and a model head that uses a Path Aggregation Network (PANet). Each Bottleneck CSP unit consists of two convolutional layers with 1 × 1 and 3 × 3 filters. The backbone incorporates a Spatial Pyramid Pooling (SPP) network, which allows for dynamic input image sizes and is robust against object deformations. In this experiment, the 23,000 images are labeled by skilled quality control engineers, with all defect types labeled as a single class, DEFECT. The number of epochs is chosen based on the training dataset. After deciding the model parameters, training begins from this initial configuration. A 10-fold cross-validation method is used to validate the results: the data are divided into 10 parts, 9 of which are used for training and the remaining part for testing.
The 10-fold cross-validation method is applied to the 3 different models, generating 30 cross-validated models to support the results. The 10-fold cross-validation method [28] has been implemented to evaluate the performance of the trained models. The data are randomly divided into 10 equal parts; 9 parts are used for training and the remaining part for testing. In each run of the YOLO-v5 model, the data used to evaluate the model are never seen by the model during training. This procedure is repeated 10 times by shuffling the training and validation datasets. After the training process is completed, every model is tested on its corresponding test dataset.
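The fold construction described above can be sketched in pure Python: shuffle the sample indices once, then carve out one tenth as the test fold in each round, so every image appears in exactly one test fold (23,000 images give 10 folds of 2,300 test images and 20,700 training images each). The function name is illustrative.

```python
import random

def ten_fold_indices(n, folds=10, seed=0):
    """Yield (train, test) index lists for n samples so that each sample
    appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)      # shuffle once, up front
    size = n // folds
    for f in range(folds):
        test = idx[f * size:(f + 1) * size]
        train = idx[:f * size] + idx[(f + 1) * size:]
        yield train, test

splits = list(ten_fold_indices(23000))
```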
As demonstrated in Tables 1–3, the results gradually improve as the structure changes. This also shows that the YOLO-v5 model is more efficient than other YOLO models. After every epoch, the training accuracy increases, moving the model towards better performance, and the final model is saved once the accuracy reaches a stable state. The 10 cross-validation results are shown in Table 1 for the YOLO-v5 small model, in Table 2 for the YOLO-v5 medium model, and in Table 3 for the YOLO-v5 large model.
YOLO-v5 small model | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Crossvalidation 1 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 2 | 97.56% | 0.024 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 3 | 97.43% | 0.025 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 4 | 97.56% | 0.024 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 5 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 6 | 97.65% | 0.023 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 7 | 97.47% | 0.025 | 0.970 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 8 | 97.56% | 0.024 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 9 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 10 | 97.39% | 0.026 | 0.969 | 0.02 | 0.97 | 0.97 | 0.45 |
Mean ± SD | 97.52 ± 0.07 | ||||||
*Note. Accuracy: how often the detection is correct overall, (TP + TN)/total; Misclassification rate: how often it is wrong overall, (FP + FN)/total; True positive rate (also known as "Sensitivity" or "Recall"): TP/actual yes; False positive rate: FP/actual no; True negative rate (also known as "Specificity"): TN/actual no; Precision: TP/predicted yes; Prevalence: how often the yes condition occurs in the sample, actual yes/total. |
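The measures defined in the note can be computed directly from raw confusion-matrix counts. A minimal sketch, assuming the positive class is NG (defect) and reading off cross-validation 1 of the small model from Table 4 (TP = 1025, FP = 25, FN = 30, TN = 1220; the mapping of table cells to TP/FP/FN/TN is inferred from the table layout):

```python
def metrics(tp, fp, fn, tn):
    """Compute the columns of Tables 1-3 from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "misclassification_rate": (fp + fn) / total,
        "true_positive_rate": tp / (tp + fn),   # recall / sensitivity
        "false_positive_rate": fp / (fp + tn),
        "true_negative_rate": tn / (fp + tn),   # specificity
        "precision": tp / (tp + fp),
        "prevalence": (tp + fn) / total,
    }

m = metrics(1025, 25, 30, 1220)
```

These counts reproduce the first row of Table 1: accuracy ≈ 97.6%, true positive rate ≈ 0.971, prevalence ≈ 0.45.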
YOLO-v5 medium model | ||||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence | |
Crossvalidation 1 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 2 | 99.08% | 0.009 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 3 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 4 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 5 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 6 | 99.21% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 7 | 99.20% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 8 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 9 | 99.13% | 0.008 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 |
Crossvalidation 10 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 |
Mean ± SD | 99.16 ± 0.03 | |||||||
*Note. Similar to Table 1. |
YOLO-v5 large model | ||||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence | |
Crossvalidation 1 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 2 | 99.82% | 0.001 | 0.99 | 0.0016 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 3 | 99.82% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 4 | 99.78% | 0.002 | 0.99 | 0.0002 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 5 | 98.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 6 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
Crossvalidation 7 | 99.95% | 0.0004 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
Crossvalidation 8 | 99.78% | 0.002 | 0.99 | 0.0031 | 0.99 | 0.99 | 0.45 |
Crossvalidation 9 | 99.82% | 0.001 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
Crossvalidation 10 | 99.86% | 0.001 | 0.99 | 0.0023 | 0.99 | 0.99 | 0.45 |
Mean ± SD | 99.74 ± 0.29 | |||||||
* Note. Similar to Table 1. |
Table 4 displays the testing results in the form of confusion matrices. In detail, the YOLO-v5 small model's detection accuracy is approximately 97.52%, the YOLO-v5 medium model's accuracy is 99.16%, and the YOLO-v5 large model's is approximately 99.74%, as reported in Tables 1–3 respectively. In total, 30 cross-validation models have been trained across the 3 model sizes, as shown in Table 4. The cells are shaded red and green, representing True Positives and True Negatives respectively. The categories are labeled NG (not good/damaged) and OK.
| | S | | M | | L | |
Category | Predicted | Real NG | Real OK | Real NG | Real OK | Real NG | Real OK |
Cross validation 1 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 | ||
OK | 30 | 1220 | 9 | 1241 | 2 | 1248 | |||
Cross validation 2 | NG | 1023 | 27 | 1039 | 11 | 1048 | 2 | ||
OK | 29 | 1221 | 10 | 1240 | 2 | 1248 | |||
Cross validation 3 | NG | 1020 | 30 | 1041 | 9 | 1049 | 1 | ||
OK | 29 | 1221 | 10 | 1240 | 3 | 1247 | |||
Cross validation 4 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 | ||
OK | 28 | 1222 | 7 | 1243 | 2 | 1248 | |||
Cross validation 5 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 | ||
OK | 30 | 1220 | 9 | 1241 | 2 | 1248 | |||
Cross validation 6 | NG | 1024 | 26 | 1040 | 10 | 1049 | 1 | |
OK | 28 | 1222 | 8 | 1242 | 2 | 1248 | |||
Cross validation 7 | NG | 1023 | 27 | 1041 | 9 | 1050 | 0 | ||
OK | 31 | 1219 | 8 | 1242 | 1 | 1249 | |||
Cross validation 8 | NG | 1024 | 26 | 1040 | 10 | 1046 | 4 | ||
OK | 30 | 1220 | 9 | 1241 | 1 | 1249 | |||
Cross validation 9 | NG | 1025 | 25 | 1040 | 10 | 1050 | 0 | ||
OK | 30 | 1220 | 10 | 1240 | 4 | 1246 | |||
Cross validation 10 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 | ||
OK | 32 | 1218 | 7 | 1243 | 0 | 1250 |
According to the results, YOLO-v5 large provides the best output, with a highest accuracy of 99.95% in detecting defects and an average accuracy over the 10 cross-validations of 99.74% (Table 3). The other measures, misclassification rate, true positive rate, false positive rate, true negative rate, and prevalence, are also consistent for YOLO-v5 large; across all 10 cross-validations there is no large difference between them. Figure 5 shows sample True Positive images, in which the model detects the defects with high confidence.
Figure 6 displays sample False Negative images; these images contain a defective region that the model fails to detect.
Figure 7 shows False Positive detections. In these images, the model predicts a defect where there is none, although with low confidence. To avoid this kind of misclassification, the model needs to be fine-tuned by inspecting the size of the bounding boxes in the training data.
Figure 8 displays sample True Negative images; these samples have no defect and are detected as OK. As can be seen, the detection results of YOLO-v5 large are appreciable, with an average accuracy of 99.74% and an evaluation precision that is consistently 0.99 (Table 3). In addition, the misclassification rate, true positive rate, false positive rate, true negative rate, and prevalence are best for YOLO-v5 large compared with YOLO-v5 medium and small, indicating stability and consistency. As with most machine learning algorithms, a large and balanced dataset makes the difference in performance.
In order to compare with our previous study using Tiny-YOLO-v2 [27], five-fold cross-validation with a batch size of 32 was used in this experiment, with Tiny-YOLO-v2 tested on 765 images. The average defective PCB detection accuracy (batch size of 32) was 98.79%, with an evaluation precision consistently at 0.99. The other measures, misclassification rate, true positive rate, false positive rate, true negative rate, and prevalence, were not up to the mark in comparison with YOLO-v5 large, whose mean accuracy under the same five-fold protocol is 99.52%, as reported in Tables 5 and 6. Table 7 displays the confusion matrices of the five cross-validations, comparing the Tiny-YOLO-v2 and YOLO-v5 large methods.
YOLO-v5 large | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Cross validation 1 | 99.60% | 0.0039 | 0.99 | 0.013 | 0.98 | 0.99 | 0.80 |
Cross validation 2 | 99.35% | 0.0065 | 0.99 | 0.020 | 0.97 | 0.99 | 0.80 |
Cross validation 3 | 99.47% | 0.0052 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
Cross validation 4 | 99.47% | 0.0052 | 0.99 | 0 | 1 | 1 | 0.80 |
Cross validation 5 | 99.73% | 0.0026 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
Mean ± SD | 99.524 ± 0.12 | ||||||
*Note. Similar to Table 1. |
Tiny-YOLO-v2 Batch size = 32 | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Cross validation 1 | 98.82% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 2 | 98.95% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 3 | 98.82% | 0.01 | 0.98 | 0.01 | 0.98 | 0.99 | 0.80 |
Cross validation 4 | 99.21% | 0.007 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 5 | 98.16% | 0.01 | 0.98 | 0.04 | 0.95 | 0.99 | 0.80 |
Mean ± SD | 98.79 ± 0.346 | ||||||
*Note. Similar to Table 1. |
| | YOLO-v5 large | | Tiny-YOLO-v2 (batch size 32) | |
Category | Predicted | NG | OK | NG | OK |
Cross validation 1 | NG | 615 | 2 | 611 | 6 |
OK | 1 | 147 | 3 | 145 | |
Cross validation 2 | NG | 614 | 3 | 613 | 4 |
OK | 2 | 146 | 4 | 144 | |
Cross validation 3 | NG | 616 | 1 | 610 | 7 |
OK | 3 | 145 | 2 | 146 | |
Cross validation 4 | NG | 617 | 0 | 614 | 3 |
OK | 4 | 144 | 3 | 145 | |
Cross validation 5 | NG | 616 | 1 | 609 | 8 |
OK | 1 | 147 | 6 | 142 |
Finally, Table 8 compares the training time and memory size of the three proposed YOLO-v5 models (i.e., small, medium, and large) when running 150 epochs.
YOLO-v5 small | YOLO-v5 medium | YOLO-v5 large | |
Time (hrs.) | 3 hrs | 15 hrs | 31 hrs |
Memory (MB) | 27 MB | 50 MB | 192 MB |
*Note. Time is the training time required by each model, and memory is the size of the weight file produced by each model. |
One major advantage of YOLO-v5 over other models in the YOLO series is that it is coded in PyTorch from the ground up. This makes it convenient for machine learning engineers, as there is a large and active PyTorch community to support researchers. YOLO-v5 is also much faster than all previous versions of YOLO [29,30,31]. In addition, YOLO-v5 is nearly 90% smaller than YOLO-v4, so it can be deployed to embedded devices much more easily. Among its other advantages, mosaic augmentation is a technique included in the improved YOLO-v5. Previous YOLO models were developed using Darknet, which was not flexible for research work and less suitable for industrial use; iterating on YOLO-v5 is easier for the broader research community. Moreover, YOLO-v5 is fast: running on an NVidia Titan V it can reach 140 FPS, while the other YOLO models are restricted to 50 FPS. YOLO-v5 is also accurate: after training for 1000 epochs it can achieve roughly 0.935 mean average precision, a level of speed without loss of accuracy not usually seen in other YOLO or object detection models. Finally, regarding model size, the weight file of YOLO-v5 small is only 27 megabytes and that of YOLO-v5 large is 192 megabytes, while the YOLO-v4 weight file is 244 megabytes.
As mentioned in Section 2, adaptive image scaling and non-maximum suppression are the changes made relative to our previous work [27]. In this study we have also implemented an automatic testing process and a user interface that automatically extracts the images from the AVI machine and runs the tests in parallel, which saves time during the testing process and requires no human intervention. Furthermore, to illustrate the difference between the two models, Figure 9(a) shows the structure of the Tiny-YOLO-v2 used in the previous study, and Figure 9(b) shows the structure of the YOLO-v5 used in this research. As the figures show, the main difference between the two structures is the neck section. In the field of target detection, some layers are usually inserted between the backbone and the output layer to better extract and fuse features; this part is called the neck, and it is critical to the target detection network. YOLO-v5 uses CSPDarknet53 plus the SPP module as the backbone, PANet as the neck, and the head of YOLO-v3. Compared with Tiny-YOLO-v2, the structure of YOLO-v5 contains additional CSP and PAN structures. At first glance the structure appears convoluted, but the overall layout is the same as before; various new algorithmic ideas are simply used to improve each substructure. YOLO-v5 also adopts a neighboring positive-sample anchor matching strategy. Through flexible configuration parameters, models of different complexity can be obtained, and overall performance is improved through built-in hyper-parameter optimization strategies; for example, mosaic enhancement is used to improve the detection performance of small objects.
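Non-maximum suppression, one of the changes mentioned above, can be sketched as follows. This is a generic, minimal greedy NMS (our own illustration, not the exact YOLO-v5 code): detections are kept in descending confidence order, and any remaining box whose IoU with an already-kept box exceeds a threshold is discarded as a duplicate detection of the same defect.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]             # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        overlaps = iou(boxes[i], boxes[rest])
        order = rest[overlaps <= iou_threshold]  # drop near-duplicate boxes
    return keep
```

For example, two heavily overlapping detections of the same defect collapse to the higher-scoring one, while a box elsewhere on the board is kept; the 0.45 threshold here is an assumed illustrative value, not the setting used in our pipeline.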
The datasets used were collected by our research team, which has decades of experience in PCB quality inspection. Moreover, traditional deep learning methods [32,33,34,35] are based on classifying or detecting particular objects in an image. In this paper, three different models have been deployed and compared with one another. Comparing training times, YOLO-v5 small takes the least time: almost 3–4 hrs. to train, while YOLO-v5 medium takes 12–14 hrs. and YOLO-v5 large takes 31–32 hrs. However, with respect to overall accuracy, YOLO-v5 large is the most accurate of the three: YOLO-v5 small has an average accuracy of 97.52%, YOLO-v5 medium 99.16%, and YOLO-v5 large 99.74%. Comparing mean average precision (mAP) instead of accuracy, YOLO-v5 large again has the highest value at 94%, while YOLO-v5 medium and YOLO-v5 small reach 84 and 82%, respectively.
This research shows that YOLO-v5 large can detect defects in PCBs with a plausible accuracy of 99.74%, which saves a great deal of skilled manpower and time while also increasing accuracy. In future work, accuracy can be improved further by considering additional types of defects: the grouping of classes must be done in a more balanced way, and more defect types must be included along with an increase in data. Further, we will try to develop fully automatic training without human interference and use transfer learning and meta-learning to improve accuracy; eventually, a transfer learning approach [36,37] can be applied to a pre-trained YOLO model.
This work was financially supported by the Ministry of Science and Technology, Taiwan (Grant number: MOST 109-2622-E-155-007).
The authors declare no conflict of interest in this paper.
[1] | H. Suzuki, Junkosha Co Ltd., Printed Circuit Board, US 4640866, March 16, 1987. |
[2] | H. Matsubara, M. Itai, K. Kimura, NGK Spark Plug Co Ltd., Printed Circuit Board, US 6573458, September 12, 2003. |
[3] | J. A. Magera, G. J. Dunn, The Printed Circuit Designer's Guide to Flex and Rigid-Flex Fundamentals, Motorola Solutions Inc., Printed Circuit Board, US 7459202, August 21, 2008. |
[4] | H. S. Cho, J. G. Yoo, J. S. Kim, S. H. Kim, Official Gazette of the United states patent and trademark, Samsung Electro Mechanics Co Ltd., Printed Circuit Board, US 8159824, 2012. |
[5] | A. P. S. Chauhan, S. C. Bhardwaj, Detection of bare PCB defects by image subtraction method using machine vision, in Proceedings of the World Congress on Engineering, 2 (2011), 6-8. |
[6] | N. K. Khalid, Z. Ibrahim, An Image Processing Approach towards Classification of Defects on Printed Circuit Board, PhD thesis, University Technology Malaysia, Johor, Malaysia, 2007. |
[7] | Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE Trans. Pattern Anal. Mach. Intell., 35 (2013), 1798-828. doi: 10.1109/TPAMI.2013.50 |
[8] | P. S. Malge, PCB defect detection, classification and localization using mathematical morphology and image processing tools, Int. J. Comput. Appl., 87 (2014), 40-45. |
[9] | Y. Takada, T. Shiina, H. Usami, Y. Iwahori, Defect detection and classification of electronic circuit boards using keypoint extraction and CNN features, in The Ninth International Conferences on Pervasive Patterns and Applications Defect, 100 (2017), 113-116. |
[10] | D. B. Anitha, R. Mahesh, A survey on defect detection in bare PCB and assembled PCB using image processing techniques, in 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), (2017), 39-43. |
[11] | A. J. Crispin, V. Rankov, Automated inspection of PCB components using a genetic algorithm template-matching approach, Int. J. Adv. Manuf. Technol., 35 (2007), 293-300. doi: 10.1007/s00170-006-0730-0 |
[12] | F. Raihan, W. Ce, PCB defect detection using OpenCV with image subtraction method, in 2017 International Conference on Information Management and Technology (ICIMTech), (2017), 204-209. |
[13] | I. B. Basyigit, A. Genc, H. Dogan, F. A. Senel, S. Helhel, Deep learning for both broadband prediction of the radiated emission from heatsinks and heatsink optimization, Eng. Sci. Technol. Int. J., 24 (2021), 706-714. |
[14] | S. Metlek, K. Kayaalp, I. B. Basyigit, A. Genc, H. Doğan, The dielectric properties prediction of the vegetation depending on the moisture content using the deep neural network model, Int. J. RF Microwave Comput.-Aided Eng., 31 (2020), e22496. |
[15] | H. Hosseini, B. Xiao, M. Jaiswal, R. Poovendran, On the limitation of convolutional neural networks in recognizing negative images, in 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), (2017), 352-358. |
[16] | X. Tao, Z. Wang, Z. Zhang, D. Zhang, D. Xu, X. Gong, et al., Wire defect recognition of spring-wire socket using multitask convolutional neural networks, IEEE Trans. Compon. Package. Manuf. Technol., 8 (2018), 689-698. doi: 10.1109/TCPMT.2018.2794540 |
[17] | J. R. R. Uijlings, K. E. A. Van De Sande, T. Gevers, A. W. M. Smeulders, Selective search for object recognition, Int. J. Comput. Vision, 104 (2012), 154-171. |
[18] | R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), 580-587. |
[19] | K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, preprint, arXiv: 1512.03385. |
[20] | K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, preprint, arXiv: 1409.1556. |
[21] | C. Szegedy, S. Ioffe, V. Vanhoucke, Inception-v4, inception-resnet and the impact of residual connections on learning, preprint, arXiv: 1602.07261. |
[22] | C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, et al., Going deeper with convolutions, preprint, arXiv: 1409.4842. |
[23] | N. Suda, V. Chandra, G. Dasika, A. Mohanty, Y. Ma, S. Vrudhula, et al., Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks, in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (2016), 16-25. |
[24] | J. Zhang, J. Li, Improving the performance of OpenCL-based FPGA accelerator for convolutional neural network, in Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, (2017), 25-34. |
[25] | D. Wang, J. An, K. Xu, PipeCNN: An OpenCL-based FPGA accelerator for large-scale convolution neuron networks, preprint, arXiv: 1611.02450. |
[26] | J. Cong, B. Xiao, Minimizing computation in convolutional neural networks, in International conference on artificial neural networks, Springer, Cham, (2014), 281-290. |
[27] | V. A. Adibhatla, H. C. Chih, C. C. Hsu, J. Cheng, M. F. Abbod, J. S. Shieh, Defect detection in printed circuit boards using you-only-look-once convolutional neural networks, Electronics, 9 (2020), 1547. doi: 10.3390/electronics9091547 |
[28] | M. Pritt, G. Chern, Satellite image classification with deep learning, in 2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), (2017), 1-7. |
[29] |
X. S. Zhang, R. J. Roy, E. W. Jensen, EEG complexity as a measure of depth of anesthesia for patients, IEEE Trans. Biomed. Eng., 48 (2001), 1424-1433. doi: 10.1109/10.966601
![]() |
[30] | V. Lalitha, C. Eswaran, Automated detection of anesthetic depth levels using chaotic features with artificial neural networks, J. Med. Syst., 31 (2007), 445-452. doi: 10.1007/s10916-007-9083-y |
[31] | M. Peker, B. Sen, H. Gürüler, Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks, J. Med. Syst., 39 (2015), 1-11. |
[32] | P. L. Callet, C. Viard-Gaudin, D. Barba, A convolutional neural network approach for objective video quality assessment, IEEE Trans. Neural Networks, 17 (2006), 1316-1327. doi: 10.1109/TNN.2006.879766 |
[33] | D. C. Cireşan, U. Meier, L. M. Gambardella, J. Schmidhuber, Convolutional neural network committees for handwritten character classification, in Proceedings of the 2011 International Conference on Document Analysis and Recognition, (2011), 1135-1139. |
[34] | N. Kalchbrenner, E. Grefenstette, P. Blunsom, A convolutional neural network for modelling sentences, preprint, arXiv: 1404.2188. |
[35] | A. Devarakonda, M. Naumov, M. Garland, Adabatch: Adaptive batch sizes for training deep neural networks, preprint, arXiv: 1712.02029. |
[36] | T. Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection, in Proceedings of the IEEE international conference on computer vision, (2017), 2980-2988. |
[37] | L. Shao, F. Zhu, X. Li, Transfer learning for visual categorization: A survey, IEEE Trans. Neural Netw. Learn. Syst., 26 (2015), 1019-1034. doi: 10.1109/TNNLS.2014.2330900 |
YOLO-v5 small model | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Crossvalidation 1 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 2 | 97.56% | 0.024 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 3 | 97.43% | 0.025 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 4 | 97.56% | 0.024 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 5 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 6 | 97.65% | 0.023 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 7 | 97.47% | 0.025 | 0.970 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 8 | 97.56% | 0.024 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 9 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
Crossvalidation 10 | 97.39% | 0.026 | 0.969 | 0.02 | 0.97 | 0.97 | 0.45 |
Mean ± SD | 97.52 ± 0.07 | ||||||
*Note. Accuracy: Overall, how often is the detection correct. (TP + TN) / total; Misclassification Rate: Overall, how often is it wrong. (FP + FN) / total; True Positive Rate: When it's actually yes, also known as "Sensitivity" or "Recall"; False Positive Rate: When it's no. FP/actual no; True Negative Rate: When it's actually no, also known as "Specificity". TN/actual no; Precision: When it predicts yes. TP/predicted yes; Prevalence: It defines how often does the yes condition occurs in our sample. Actual yes/total. |
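The formulas in the note above can be made concrete with a small helper (our own sketch, not part of the original pipeline) that computes every metric from the four confusion-matrix counts. The counts in the usage line are the cross-validation 1 counts of the YOLO-v5 small model reported in this paper (predicted NG/actual NG = 1025, predicted NG/actual OK = 25, predicted OK/actual NG = 30, predicted OK/actual OK = 1220), where "yes" means a defective (NG) board.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Metrics from the note, computed from confusion-matrix counts."""
    total = tp + fp + tn + fn
    actual_yes = tp + fn          # boards that are actually defective
    actual_no = fp + tn           # boards that are actually OK
    predicted_yes = tp + fp       # boards predicted defective
    return {
        "accuracy": (tp + tn) / total,
        "misclassification_rate": (fp + fn) / total,
        "true_positive_rate": tp / actual_yes,   # sensitivity / recall
        "false_positive_rate": fp / actual_no,
        "true_negative_rate": tn / actual_no,    # specificity
        "precision": tp / predicted_yes,
        "prevalence": actual_yes / total,
    }

# Cross validation 1 of the YOLO-v5 small model:
m = confusion_metrics(tp=1025, fp=25, tn=1220, fn=30)
print(f"accuracy = {m['accuracy']:.4f}")  # ~0.976, the 97.60% in the table
```

Running the helper on the other cross-validation folds reproduces the remaining rows of the table in the same way.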
YOLO-v5 medium model | ||||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence | |
Crossvalidation 1 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 2 | 99.08% | 0.009 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 3 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 4 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 5 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 6 | 99.21% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 7 | 99.20% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 8 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
Crossvalidation 9 | 99.13% | 0.008 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 |
Crossvalidation 10 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 |
Mean ± SD | 99.16 ± 0.03 | |||||||
*Note. Similar to Table 1. |
YOLO-v5 large model | ||||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence | |
Crossvalidation 1 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 2 | 99.82% | 0.001 | 0.99 | 0.0016 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 3 | 99.82% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 4 | 99.78% | 0.002 | 0.99 | 0.0002 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 5 | 98.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 | |
Crossvalidation 6 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
Crossvalidation 7 | 99.95% | 0.0004 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
Crossvalidation 8 | 99.78% | 0.002 | 0.99 | 0.0031 | 0.99 | 0.99 | 0.45 |
Crossvalidation 9 | 99.82% | 0.001 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
Crossvalidation 10 | 99.86% | 0.001 | 0.99 | 0.0023 | 0.99 | 0.99 | 0.45 |
Mean ± SD | 99.74 ± 0.29 | |||||||
* Note. Similar to Table 1. |
Real | |||||||||
S | M | L | |||||||
Predicted model | NG | OK | NG | OK | NG | OK | |
Cross validation 1 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 | ||
OK | 30 | 1220 | 9 | 1241 | 2 | 1248 | |||
Cross validation 2 | NG | 1023 | 27 | 1039 | 11 | 1048 | 2 | ||
OK | 29 | 1221 | 10 | 1240 | 2 | 1248 | |||
Cross validation 3 | NG | 1020 | 30 | 1041 | 9 | 1049 | 1 | ||
OK | 29 | 1221 | 10 | 1240 | 3 | 1247 | |||
Cross validation 4 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 | ||
OK | 28 | 1222 | 7 | 1243 | 2 | 1248 | |||
Cross validation 5 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 | ||
OK | 30 | 1220 | 9 | 1241 | 2 | 1248 | |||
Cross validation 6 | NG | 1024 | 26 | 1040 | 10 | 1049 | 1 | |
OK | 28 | 1222 | 8 | 1242 | 2 | 1248 | |||
Cross validation 7 | NG | 1023 | 27 | 1041 | 9 | 1050 | 0 | ||
OK | 31 | 1219 | 8 | 1242 | 1 | 1249 | |||
Cross validation 8 | NG | 1024 | 26 | 1040 | 10 | 1046 | 4 | ||
OK | 30 | 1220 | 9 | 1241 | 1 | 1249 | |||
Cross validation 9 | NG | 1025 | 25 | 1040 | 10 | 1050 | 0 | ||
OK | 30 | 1220 | 10 | 1240 | 4 | 1246 | |||
Cross validation 10 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 | ||
OK | 32 | 1218 | 7 | 1243 | 0 | 1250 |
YOLO-v5 large | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Cross validation 1 | 99.60% | 0.0039 | 0.99 | 0.013 | 0.98 | 0.99 | 0.80 |
Cross validation 2 | 99.35% | 0.0065 | 0.99 | 0.020 | 0.97 | 0.99 | 0.80 |
Cross validation 3 | 99.47% | 0.0052 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
Cross validation 4 | 99.47% | 0.0052 | 0.99 | 0 | 1 | 1 | 0.80 |
Cross validation 5 | 99.73% | 0.0026 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
Mean ± SD | 99.52 ± 0.12 | ||||||
*Note. Similar to Table 1. |
Tiny-YOLO-v2 Batch size = 32 | |||||||
Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
Cross validation 1 | 98.82% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 2 | 98.95% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 3 | 98.82% | 0.01 | 0.98 | 0.01 | 0.98 | 0.99 | 0.80 |
Cross validation 4 | 99.21% | 0.007 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
Cross validation 5 | 98.16% | 0.01 | 0.98 | 0.04 | 0.95 | 0.99 | 0.80 |
Mean ± SD | 98.79 ± 0.346 | ||||||
*Note. Similar to Table 1. |
YOLO-v5 small | YOLO-v5 medium | OLO-v5 large | |
Time (hrs.) | 3 hrs | 15 hrs | 31 hrs |
Memory (MB) | 27 MB | 50 MB | 192 MB |
*Note. Time is the required time used by model for training process and memory is amount of memory of weight that is used by model. |
YOLO-v5 small model

| Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 2 | 97.56% | 0.024 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 3 | 97.43% | 0.025 | 0.972 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 4 | 97.56% | 0.024 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 5 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 6 | 97.65% | 0.023 | 0.973 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 7 | 97.47% | 0.025 | 0.970 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 8 | 97.56% | 0.024 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 9 | 97.60% | 0.023 | 0.971 | 0.02 | 0.97 | 0.97 | 0.45 |
| Cross validation 10 | 97.39% | 0.026 | 0.969 | 0.02 | 0.97 | 0.97 | 0.45 |
| Mean ± SD | 97.52 ± 0.07 | | | | | | |

*Note. Accuracy: how often the detection is correct overall, (TP + TN)/total. Misclassification rate: how often it is wrong overall, (FP + FN)/total. True positive rate (also called "sensitivity" or "recall"): when the answer is actually yes, how often it predicts yes, TP/actual yes. False positive rate: when the answer is actually no, how often it predicts yes, FP/actual no. True negative rate (also called "specificity"): when the answer is actually no, how often it predicts no, TN/actual no. Precision: when it predicts yes, how often it is correct, TP/predicted yes. Prevalence: how often the yes condition occurs in the sample, actual yes/total.
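The note's formulas apply directly to any of the 2 × 2 confusion matrices reported in this section. As a minimal sketch (ours, not the authors' code), the function below computes all seven metrics, taking "yes" to mean a defective (NG) board and using the cross validation 1 counts from the Tiny-YOLO-v2 confusion matrix as input.

```python
def confusion_metrics(tp, fn, fp, tn):
    """Compute the seven tabulated metrics from a 2x2 confusion matrix.

    tp = real NG predicted NG, fn = real NG predicted OK,
    fp = real OK predicted NG, tn = real OK predicted OK.
    """
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,                # (TP + TN) / total
        "misclassification_rate": (fp + fn) / total,  # (FP + FN) / total
        "true_positive_rate": tp / (tp + fn),         # recall: TP / actual yes
        "false_positive_rate": fp / (fp + tn),        # FP / actual no
        "true_negative_rate": tn / (fp + tn),         # specificity: TN / actual no
        "precision": tp / (tp + fp),                  # TP / predicted yes
        "prevalence": (tp + fn) / total,              # actual yes / total
    }

# Cross validation 1 of the Tiny-YOLO-v2 confusion matrix in this section
m = confusion_metrics(tp=611, fn=3, fp=6, tn=145)
print(f"{m['accuracy']:.2%}")  # 98.82%, matching the corresponding table row
```

With these counts the prevalence comes out to 614/765 ≈ 0.80, consistent with the tables for the 5-fold experiments.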
YOLO-v5 medium model

| Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 2 | 99.08% | 0.009 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 |
| Cross validation 3 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 4 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 |
| Cross validation 5 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 6 | 99.21% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 7 | 99.20% | 0.007 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 8 | 99.17% | 0.008 | 0.99 | 0.007 | 0.99 | 0.99 | 0.45 |
| Cross validation 9 | 99.13% | 0.008 | 0.99 | 0.008 | 0.99 | 0.99 | 0.45 |
| Cross validation 10 | 99.17% | 0.008 | 0.99 | 0.009 | 0.99 | 0.99 | 0.45 |
| Mean ± SD | 99.16 ± 0.03 | | | | | | |

*Note. Similar to Table 1.
YOLO-v5 large model

| Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
| Cross validation 2 | 99.82% | 0.001 | 0.99 | 0.0016 | 0.99 | 0.99 | 0.45 |
| Cross validation 3 | 99.82% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
| Cross validation 4 | 99.78% | 0.002 | 0.99 | 0.0002 | 0.99 | 0.99 | 0.45 |
| Cross validation 5 | 98.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
| Cross validation 6 | 99.86% | 0.001 | 0.99 | 0.0008 | 0.99 | 0.99 | 0.45 |
| Cross validation 7 | 99.95% | 0.0004 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
| Cross validation 8 | 99.78% | 0.002 | 0.99 | 0.0031 | 0.99 | 0.99 | 0.45 |
| Cross validation 9 | 99.82% | 0.001 | 0.99 | 0.0000 | 1.00 | 1.00 | 0.45 |
| Cross validation 10 | 99.86% | 0.001 | 0.99 | 0.0023 | 0.99 | 0.99 | 0.45 |
| Mean ± SD | 99.74 ± 0.29 | | | | | | |

*Note. Similar to Table 1.
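The Mean ± SD row can be reproduced from the ten per-fold accuracies; a quick check (our sketch, not the authors' script) using the YOLO-v5 large column:

```python
import statistics

# Per-fold accuracies (%) of the YOLO-v5 large model, cross validations 1-10
acc = [99.86, 99.82, 99.82, 99.78, 98.86, 99.86, 99.95, 99.78, 99.82, 99.86]

mean = statistics.fmean(acc)
sd = statistics.pstdev(acc)  # population SD; statistics.stdev (sample SD) gives ~0.31
print(f"{mean:.2f} ± {sd:.2f}")  # 99.74 ± 0.30; the table's 0.29 appears to be the SD 0.297 truncated
```

The mean of 99.74 matches the table exactly, which also confirms that the 98.86% entry for cross validation 5 is the value actually used rather than a typo.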
| | Predicted | YOLO-v5 S: Real NG | YOLO-v5 S: Real OK | YOLO-v5 M: Real NG | YOLO-v5 M: Real OK | YOLO-v5 L: Real NG | YOLO-v5 L: Real OK |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 |
| | OK | 30 | 1220 | 9 | 1241 | 2 | 1248 |
| Cross validation 2 | NG | 1023 | 27 | 1039 | 11 | 1048 | 2 |
| | OK | 29 | 1221 | 10 | 1240 | 2 | 1248 |
| Cross validation 3 | NG | 1020 | 30 | 1041 | 9 | 1049 | 1 |
| | OK | 29 | 1221 | 10 | 1240 | 3 | 1247 |
| Cross validation 4 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 |
| | OK | 28 | 1222 | 7 | 1243 | 2 | 1248 |
| Cross validation 5 | NG | 1025 | 25 | 1040 | 10 | 1049 | 1 |
| | OK | 30 | 1220 | 9 | 1241 | 2 | 1248 |
| Cross validation 6 | NG | 1024 | 26 | 1040 | 10 | 1049 | 1 |
| | OK | 28 | 1222 | 8 | 1242 | 2 | 1248 |
| Cross validation 7 | NG | 1023 | 27 | 1041 | 9 | 1050 | 0 |
| | OK | 31 | 1219 | 8 | 1242 | 1 | 1249 |
| Cross validation 8 | NG | 1024 | 26 | 1040 | 10 | 1046 | 4 |
| | OK | 30 | 1220 | 9 | 1241 | 1 | 1249 |
| Cross validation 9 | NG | 1025 | 25 | 1040 | 10 | 1050 | 0 |
| | OK | 30 | 1220 | 10 | 1240 | 4 | 1246 |
| Cross validation 10 | NG | 1022 | 28 | 1038 | 12 | 1047 | 3 |
| | OK | 32 | 1218 | 7 | 1243 | 0 | 1250 |
YOLO-v5 large

| Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | 99.60% | 0.0039 | 0.99 | 0.013 | 0.98 | 0.99 | 0.80 |
| Cross validation 2 | 99.35% | 0.0065 | 0.99 | 0.020 | 0.97 | 0.99 | 0.80 |
| Cross validation 3 | 99.47% | 0.0052 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
| Cross validation 4 | 99.47% | 0.0052 | 0.99 | 0 | 1 | 1 | 0.80 |
| Cross validation 5 | 99.73% | 0.0026 | 0.99 | 0.006 | 0.99 | 0.99 | 0.80 |
| Mean ± SD | 99.524 ± 0.12 | | | | | | |

*Note. Similar to Table 1.
Tiny-YOLO-v2 (batch size = 32)

| Category | Accuracy | Misclassification Rate | True Positive Rate | False Positive Rate | True Negative Rate | Precision | Prevalence |
|---|---|---|---|---|---|---|---|
| Cross validation 1 | 98.82% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
| Cross validation 2 | 98.95% | 0.01 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
| Cross validation 3 | 98.82% | 0.01 | 0.98 | 0.01 | 0.98 | 0.99 | 0.80 |
| Cross validation 4 | 99.21% | 0.007 | 0.99 | 0.02 | 0.97 | 0.99 | 0.80 |
| Cross validation 5 | 98.16% | 0.01 | 0.98 | 0.04 | 0.95 | 0.99 | 0.80 |
| Mean ± SD | 98.79 ± 0.346 | | | | | | |

*Note. Similar to Table 1.
| | Predicted | YOLO-v5(l): Real NG | YOLO-v5(l): Real OK | Tiny-YOLO-v2: Real NG | Tiny-YOLO-v2: Real OK |
|---|---|---|---|---|---|
| Cross validation 1 | NG | 615 | 2 | 611 | 6 |
| | OK | 1 | 147 | 3 | 145 |
| Cross validation 2 | NG | 614 | 3 | 613 | 4 |
| | OK | 2 | 146 | 4 | 144 |
| Cross validation 3 | NG | 616 | 1 | 610 | 7 |
| | OK | 3 | 145 | 2 | 146 |
| Cross validation 4 | NG | 617 | 0 | 614 | 3 |
| | OK | 4 | 144 | 3 | 145 |
| Cross validation 5 | NG | 616 | 1 | 609 | 8 |
| | OK | 1 | 147 | 6 | 142 |
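Pooling the five confusion matrices of each model gives an overall accuracy consistent with the per-fold means. A small sketch (ours, not the authors' code), with each fold written as its predicted-NG and predicted-OK rows:

```python
# Five-fold confusion matrices, transcribed from the comparison table:
# each fold is (predicted-NG row, predicted-OK row), columns ordered (real NG, real OK).
folds = {
    "YOLO-v5(l)": [
        ((615, 2), (1, 147)),
        ((614, 3), (2, 146)),
        ((616, 1), (3, 145)),
        ((617, 0), (4, 144)),
        ((616, 1), (1, 147)),
    ],
    "Tiny-YOLO-v2": [
        ((611, 6), (3, 145)),
        ((613, 4), (4, 144)),
        ((610, 7), (2, 146)),
        ((614, 3), (3, 145)),
        ((609, 8), (6, 142)),
    ],
}

def pooled_accuracy(matrices):
    # Correct detections sit on the diagonal: real NG predicted NG, real OK predicted OK.
    correct = sum(ng_row[0] + ok_row[1] for ng_row, ok_row in matrices)
    total = sum(sum(ng_row) + sum(ok_row) for ng_row, ok_row in matrices)
    return correct / total

for name, m in folds.items():
    print(f"{name}: {pooled_accuracy(m):.2%}")
# YOLO-v5(l): 99.53%, Tiny-YOLO-v2: 98.80% -- in line with the reported means
```

This matches the reported five-fold means (99.524% and 98.79%) up to rounding, since every fold here has the same number of boards (765).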
| | YOLO-v5 small | YOLO-v5 medium | YOLO-v5 large |
|---|---|---|---|
| Time (hrs) | 3 | 15 | 31 |
| Memory (MB) | 27 | 50 | 192 |

*Note. Time is the training time required by each model; memory is the size of the weight file the model produces.