Abstract

Information such as cracks and deflections is an important basis for assessing structural safety, but existing methods have not achieved simultaneous detection of both. In most existing computer vision measurement systems, the view is fixed because the camera position is fixed, so it is difficult to obtain the overall crack and deflection information of a structure. An automatic response measurement method is proposed in this study, including (1) a continuous image acquisition and signal transmission system based on a self-walking bracket and the Internet of Things (IoT), (2) an image splicing method based on feature matching, and (3) a crack and deflection measurement method. The self-walking bracket allows the industrial camera to move at a fixed distance to obtain continuous images of the beam. Next, the spliced image is obtained through the PCA-SIFT method with a screening mechanism. The crack information is acquired by the dual network model. The simplified AKAZE feature detection algorithm and the modified RANSAC are used to track the natural features of the structure. Curve fitting is performed to obtain the deflection curve of the beam under different loads. Experimental results show that the method can directly reflect the crack and deflection information of the beam. The average deviation of crack width is 11.76%, the average deviation of crack length is 8.18%, and the average deformation deviation is 4.50%, which verifies the practicability of the method and shows great potential for application to actual structures.

1. Introduction

Structures and infrastructure systems including bridges, buildings, dams, and pipelines are complex engineering systems that support a society’s economic prosperity and quality of life. Measuring the cracks and deflections of structures is routine work. The conventional practice of crack identification and deflection measurement based on periodic human inspection is inadequate. Nondestructive evaluation (NDE) has shown potential for detecting hidden damage, but the large size of structures presents a significant challenge to implementing such inspection methods. Over the past two decades, a significant amount of research has been conducted in the emerging field of structural health monitoring, aiming at objective and quantitative structural damage detection and integrity assessment based on measurements by sensors, mostly accelerometers [1–12]. Their wide deployment in realistic engineering structures is limited by cumbersome and expensive installation, maintenance of sensor networks, and data acquisition (DAQ) systems. To overcome these limitations, camera and computer vision-based techniques have emerged as a promising tool for noncontact measurement of structural responses.

In the field of crack detection, plenty of computer vision-based techniques have been implemented for detecting structural defects. These techniques are primarily used to manipulate images to extract defect features, such as cracks in concrete and steel surfaces. Crack identification based on computer vision mainly includes threshold segmentation, edge detection, and penetration models. Convolutional neural networks (CNNs) [13, 14] based on deep learning have achieved fruitful results and have been widely used in image recognition. A CNN can retain the spatial information of images and is highly robust to geometric transformation, deformation, illumination, and other factors, which ultimately enables the recognition of concrete crack areas. Cha et al. [15] proposed an innovative study using CNNs to build a classifier for detecting concrete cracks from images. The main advantage of Cha’s CNN-based detection of concrete cracks is that it requires no feature extraction and calculation compared to traditional approaches. This CNN method showed very robust performance compared to traditional, well-known edge detection methods. The fully convolutional network (FCN) [16, 17] was proposed to realize end-to-end semantic segmentation of images. FCN is a new method for image segmentation and extraction of crack information and has been applied in many studies to identify road cracks [16, 17]. Liang et al. [18] proposed a dual network model composed of a CNN and an FCN for crack identification in concrete bridges. Compared with conventional methods, high precision and low noise were achieved simultaneously, with high reliability, a high recognition rate, and less computation. Kang et al. [19] proposed an automatic crack detection, localization, and quantification method using the integration of a faster region proposal convolutional neural network (Faster R-CNN) algorithm to detect crack regions. The regions were located using various bounding boxes, and a modified tubularity flow field (TuFF) algorithm was used to segment the crack pixels from the detected crack regions. A modified distance transform method (DTM) was used to measure crack thickness and length in terms of pixels. Choi and Cha [20] proposed SDDNet for the real-time segmentation of superficial cracks in structures. SDDNet is composed of standard convolutions, several DenSep modules, a modified ASPP module, and a decoder module, and is expected to be one of the best options for crack segmentation. Jahanshahi et al. [21] and Beckman et al. [22] performed plane fitting using the random sample consensus (RANSAC) algorithm to quantify the standard surface of the spalling concrete. The RANSAC algorithm randomly selects points to fit a plane according to predefined parameters, such as a reference vector, a distance threshold, and the maximum angular distance between each point’s normal and the reference vector. The algorithm fits a plane to the greatest number of similar data points according to the assigned threshold values.

In the field of deflection measurement, Chu et al. [23] proposed an optical measurement technology using digital image correlation (DIC) to calculate displacement, deflection, and strain. Ye et al. [24] proposed a measurement method based on digital image processing technology. A template was selected in advance, and a normalized cross-correlation operation was performed with the original image containing the target point. When the result was the maximum, the best match was achieved, and the position of the target in the original image was obtained. Sladek et al. [25] introduced image registration techniques that allow images of the construction to be taken from different points of view during inspections. This simplifies the measurement procedure because the complicated, expensive, and time-consuming step of camera positioning is not necessary. Guo and Zhu [26] built a high-speed camera system to perform displacement measurement in real time and introduced the Lucas–Kanade template tracking algorithm from the field of computer vision. Additionally, a modified inverse compositional algorithm was proposed in that study to reduce the computing time of the original algorithm and further improve its efficiency. However, at some local positions of a structure, the available feature points are not sufficient, so it is difficult to find enough templates for measurement. In recent years, the feature tracking method, which can effectively detect and match stable features with both geometric (translation, rotation, and scale) invariance and luminosity invariance, has been widely used [27]. Yu and Zhang [28] detected and described the features through SURF (speeded up robust features) and BRISK (binary robust invariant scalable key points), respectively. Besides, KNN (K-nearest neighbor) matching was used, and the matching results were purified with RANSAC (random sample consensus) to finally obtain the deflection changes of the beam at several locations. Cha [29] proposed a novel vision-based detection method that detects the horizontal and vertical lengths of the bolt head; the method has excellent performance in detecting loosened bolts and can operate in quasi-real time.

Based on the above analysis, cracks and deflections need to be measured separately in existing research, and detection efficiency can still be improved. Moreover, the camera is fixed at a specific position in traditional computer vision measurement systems for structures, which leads to a fixed view. Thus, such systems cannot obtain information such as the overall cracks and deflection of the structure. However, there is a contradiction in the image content required for the dual measurement of cracks and deflection based on computer vision. Crack detection requires that the collected images clearly show the structure’s cracks, which limits the distance between the camera and the structure. On the contrary, overall deflection measurement requires images containing the entire structure. Therefore, to realize the dual measurement, the camera must move so that both image requirements are met at the same time. Thus, an algorithm that integrates image acquisition, image preprocessing, crack recognition, and deflection measurement needs to be built. Such an intelligent detection system for concrete beams will realize a more comprehensive detection of structures.

In summary, this study aims to provide an automatic response measurement method for structures, which integrates image collection, image splicing, crack identification, and deflection measurement. The study is organized as follows. In Section 2, the overall framework of the proposed method is presented. In Section 3, a continuous image acquisition and signal transmission system based on a self-walking bracket and IoT is designed. In Section 4, the algorithms of image splicing, crack identification, and deflection measurement are discussed. Section 5 presents the measurement results of the proposed method, and Section 6 concludes the study with a summary.

2. The Overall Framework of the Proposed System

The automatic measurement method proposed in this study can be divided into three parts: the continuous image acquisition and signal transmission system, the image splicing system, and the crack identification and deflection measurement system. This process is illustrated in Figure 1.

To obtain detailed images of the test beam, a self-walking bracket is designed. The bracket, serving as an industrial camera placement platform, enables the camera to keep a safe distance from the test beam and move laterally at a fixed distance to obtain continuous images of the test beam. Then, the complete images of the test beam are obtained through the image splicing system. Testers can remotely control the operational status of the self-walking bracket and the camera using the signal transmission technology of the IoT. The system is also equipped with monitoring equipment, which can monitor the devices in real time. For image splicing, the PCA-SIFT (principal component analysis scale invariant feature transform) technology with a screening mechanism improves the running speed and the splicing quality at cracks and the bottom edge of the beam. For crack identification, the dual network model is used to identify cracks in the whole beam automatically, and the width and length of the cracks are extracted at the same time. For deflection measurement, the AKAZE (accelerated-KAZE) feature detection method with a simplified scale space and the RANSAC algorithm with predetection and an adaptive number of iterations are used to track the structure’s natural features. Curve fitting is used to obtain the deflection curve of the beam under various loads. The detection process is operated entirely indoors, and the operational status is monitored in real time. The beam’s damage information can be obtained quickly and conveniently by the method proposed in this study.

3. Image Continuous Acquisition and Signal Transmission System

As illustrated in Figure 1, the framework of the proposed method mainly contains three steps: the first is to construct a continuous image acquisition and signal transmission system based on a self-walking bracket and IoT. The second step is to apply image splicing technology to stitch the acquired test beam images into undistorted panoramic images. The last step is to send the spliced images to the crack identification system and the deflection measurement system to detect the response of the beam. This section mainly introduces the proposed image continuous acquisition and signal transmission system.

3.1. Objects and Design of the Study

The simply supported test beams have rectangular sections of 20 cm × 30 cm. The length between the two supporting points is about 180 cm, as shown in Figure 2. The test beam is made of C50 concrete and is reinforced with three HRB400 longitudinal bars with a diameter of 16 mm. To obtain the cracks and deflections of the test beam under different loads, an electrohydraulic servo pressure testing machine (50 t) is used to apply graded loading to the test beam. The two loading points on the beam are located at the quarter points, and the loading speed is 0.5 kN/s. Three dial indicators are placed at the quarter points of the beam span, as shown in Figure 3. After the pressure has stabilized, the values of the dial indicators are recorded and image acquisition is started. The distance between the camera and the beam surface is about 35 cm, and the field of view of each image is about 38 cm. Repeated areas with a length of 12 cm between adjacent images are used for image splicing because the middle part of each image is the clearest due to the camera’s automatic focus. The long repeating area provides more basis for image splicing, making the spliced image more accurate and clearer. The 12 cm value was determined from multiple trials. Finally, a high-resolution panoramic image can be obtained.
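For reference, the physical scale factor implied by this setup can be estimated from the field of view and the camera resolution. The short sketch below uses the values given above (38 cm field of view across the 3840-pixel image width); the variable names and the calculation are illustrative only and are not part of the original procedure.

```python
# Rough estimate of the pixel-to-physical scale factor for this setup.
# Values are taken from the experimental description above; names are illustrative.

FIELD_OF_VIEW_MM = 380.0   # approx. 38 cm horizontal field of view per image
IMAGE_WIDTH_PX = 3840      # horizontal resolution of the industrial camera
OVERLAP_MM = 120.0         # repeated area between adjacent images

scale_factor = FIELD_OF_VIEW_MM / IMAGE_WIDTH_PX   # mm per pixel, ~0.099
overlap_px = OVERLAP_MM / scale_factor              # overlap in pixels, ~1213
new_content_mm = FIELD_OF_VIEW_MM - OVERLAP_MM      # fresh content per image, 260 mm

print(f"scale factor ~ {scale_factor:.3f} mm/px")
print(f"overlap ~ {overlap_px:.0f} px, new content per image ~ {new_content_mm:.0f} mm")
```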

3.2. Self-Walking Bracket

The self-walking bracket serves as the basis of the inspection system and enables the movement of the industrial camera. As shown in Figure 4, the self-walking bracket is mainly composed of five parts: the industrial camera, distance control, height control, position control, and PLC (Programmable Logic Controller). The industrial camera is an Eloncin EC1100 AF autofocus camera with a resolution of 3840 × 2880, which is fixed to the bracket by buckles. The distance and height controls are manual, allowing fine adjustment of the camera’s imaging range to meet the field-of-view requirements of the measurement. Once determined, these parameters are not changed during the experiment. The position control is automatic and is used to move the camera in the left and right directions. The bracket is controlled by the PLC, which receives signals from the computer. This arrangement ensures that the industrial camera keeps a safe distance from the structure and moves along the beam direction to obtain continuous images.

3.3. Data Acquisition and Transmission Method

The test beam response measurement requires a controller, a monitor, a human-computer interaction screen, and a power supply. As shown in Figure 5, the tester sends signals from the computer to the controller through the IoT, such as the movement status of the bracket, the camera’s working status, and the storage path of the acquired images. All images and monitoring screens are uploaded to the Operation Support Systems (OSS) via the wireless network.

Besides, this study’s experimental environment is as follows: the CPU is an Intel(R) Core(TM) i7 at 1.8 GHz with 8 GB of memory, the operating system is Windows 10, and the software platform is Python 3.6 (64 bit). The operation steps of the continuous image acquisition and signal transmission system are as follows:

Step 1: place the self-walking bracket parallel to the beam. Fine-tune the height of the camera and its distance from the beam so that the camera’s field of view covers the whole height of the beam. In this experiment, the camera was 35 cm away from the surface of the test beam, and the length of the repeated area between adjacent images was 12 cm.

Step 2: control the working status of the bracket and camera by computer; the detection status can be monitored in real time during this process. The camera takes a picture automatically after the bracket has stopped moving for 10 s. Once a photo is taken, the bracket continues to move while the image is uploaded, so images of the whole test beam can be obtained. Seven images were taken in this experiment. The computer control interface and monitoring system are shown in Figure 6.

Step 3: the computer automatically downloads the acquired images from the OSS and stores them on the specified path to complete the data collection. Parts of the images obtained are shown in Figure 7.

When using images to detect the beam’s response, the most fundamental step is camera calibration and image correction. The self-walking bracket and the test beam are placed strictly parallel to each other during image acquisition to ensure that the scale factors of the collected images are the same, which guarantees that the panoramic images obtained by subsequent processing have the correct perspective. After obtaining the series of beam images, image splicing, crack identification, and noncontact deflection measurement are carried out.

4. Image Splicing, Crack Identification, and Deflection Measurement Methods

4.1. Image Splicing Algorithm

In this study, an optimized PCA-SIFT image splicing algorithm [18] is adopted. The SIFT algorithm is used to extract feature points from the image. Simultaneously, the Harris algorithm is applied in the DoG (difference of Gaussians) scale space to extract key points at the cracks and the bottom edge. The feature points obtained by the two algorithms are merged and redundant points are eliminated, which achieves feature point matching between multiple images and completes the precise stitching of the whole beam image. The specific steps are shown in Figure 8.
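As a rough illustration of this kind of pipeline, the sketch below combines SIFT features, Harris-based corners, PCA compression of the descriptors, ratio-test matching, and RANSAC homography estimation using OpenCV and scikit-learn. It is a minimal sketch for a two-image horizontal stitch; it does not reproduce the paper's exact screening mechanism, and all function and parameter values are illustrative assumptions.

```python
# Minimal two-image stitching sketch: SIFT + Harris corners, PCA-compressed
# descriptors, ratio-test matching, and RANSAC homography (illustrative only).
import cv2
import numpy as np
from sklearn.decomposition import PCA

def detect_keypoints(gray):
    sift = cv2.SIFT_create()
    kps = list(sift.detect(gray, None))
    # Supplement with Harris-based corners (e.g., at cracks and the beam edge).
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True, k=0.04)
    if corners is not None:
        kps += [cv2.KeyPoint(float(x), float(y), 7.0) for x, y in corners.reshape(-1, 2)]
    # Compute SIFT descriptors for the merged key point set.
    kps, desc = sift.compute(gray, kps)
    return kps, desc

def stitch(img_left, img_right, n_components=32):
    g1 = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    kp1, d1 = detect_keypoints(g1)
    kp2, d2 = detect_keypoints(g2)

    # PCA-SIFT style dimensionality reduction of the 128-D descriptors.
    pca = PCA(n_components=n_components).fit(np.vstack([d1, d2]))
    d1r = pca.transform(d1).astype(np.float32)
    d2r = pca.transform(d2).astype(np.float32)

    # KNN matching with Lowe's ratio test as a simple screening step.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1r, d2r, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image into the left image's frame and paste the left image.
    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    pano[:h, :w] = img_left
    return pano
```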

This algorithm removes the redundant points to improve the running speed and matching rate significantly. Besides, the feature extraction processes are carried out in the DoG scale space, which can eliminate the splicing dislocation caused by different scales between images.

4.2. Crack Identification Method

The cracks produced in the test beam by the loads are slender and varied. Holes, graffiti, and light on the surface substantially interfere with the identification of small cracks. For crack identification, this study adopts the dual network model from our group’s previous research. This model comprises a CNN model and an FCN, which can identify cracks accurately, simply, and in real time.

The CNN model was trained on 9200 photos and verified on 2000 photos, and the verification accuracy reaches 98.6%, as shown in Figure 9(a). Since the images in the verification set do not participate in the training, the verification accuracy is the closest estimate of the true accuracy of the network model. Finally, the CNN model with the highest verification accuracy is saved. The FCN model is trained on 1700 photos and verified on another 368 photos, and the verification accuracy reaches 99.5%, as shown in Figure 9(b). Similarly, the FCN model with the highest verification accuracy is saved in the same way.

When performing image segmentation, it is difficult to determine whether a pixel belongs to the edge of a crack. Thus, probability is used to describe the attribution of each pixel. The dual network model outputs a probability matrix indicating the probability that the pixel at the corresponding position belongs to a crack. In the recognition function, the probabilities are mapped to background pixels, subpixels, and crack pixels with weights of 0, 0.5, and 1, respectively. When the result is output, background pixels are set to white, subpixels to gray, and crack pixels to black. In the related statistical calculations, the weighted sum of the three categories of pixels is used.
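The mapping from the network’s probability matrix to the three pixel classes can be expressed compactly; the sketch below is a minimal NumPy illustration, with probability thresholds (0.3 and 0.7) that are illustrative assumptions not specified in the text.

```python
import numpy as np

def classify_pixels(prob, low=0.3, high=0.7):
    """Map a crack-probability matrix to class weights and a display image.

    prob : 2-D array of crack probabilities from the dual network model.
    Thresholds `low` and `high` are illustrative; the paper does not give them.
    """
    weights = np.zeros_like(prob, dtype=np.float32)      # background -> 0
    weights[(prob >= low) & (prob < high)] = 0.5          # subpixel   -> 0.5
    weights[prob >= high] = 1.0                           # crack      -> 1

    display = np.full(prob.shape, 255, dtype=np.uint8)    # background: white
    display[weights == 0.5] = 128                         # subpixel:   gray
    display[weights == 1.0] = 0                           # crack:      black
    return weights, display
```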

To smooth the measurement results and simplify the procedure, the crack width is calculated as the mean value of local samples. First, the photo to be detected is cropped into small images of 128 × 128 pixels; photos whose size is not divisible by 128 are padded with boundary values in advance. Secondly, the small images are input into the network for identification, and the output images containing crack pixels are sampled with a sliding window [15] of 32 × 32 pixels. Since the sampling window is tiny, it is reasonable to consider only the case where the window contains a single, straight crack. Then, the numbers of crack pixels and subpixels in each sampling window are counted, and the projections in the width and height directions are calculated to obtain the slope angle, length, and width of the crack:

$$l = k\sqrt{p_w^2 + p_h^2},\qquad w = \frac{k\,(n_c + 0.5\,n_s)}{\sqrt{p_w^2 + p_h^2}},\qquad \theta = \arctan\frac{p_h}{p_w},$$

where $p_w$ is the projection in the width direction and $p_h$ is the projection in the height direction, both in pixels; $l$ is the crack length; $w$ is the crack width; $\theta$ is the slope angle; $k$ is the scale factor; $n_c$ is the number of crack pixels; and $n_s$ is the number of subpixels. Finally, the recognized 128 × 128-pixel images are spliced back in their original order and cropped to the original size, and the output is saved as the final result.
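A minimal sketch of this per-window measurement is given below, assuming the relations reconstructed above; the window handling and scale factor are illustrative assumptions.

```python
import numpy as np

def measure_window(weights, scale_factor):
    """Estimate crack length, width, and slope angle in one 32x32 sampling window.

    weights : 2-D array of pixel weights (0 background, 0.5 subpixel, 1 crack),
              assumed to contain a single, roughly straight crack segment.
    scale_factor : physical length per pixel (mm/px), e.g. ~0.099 for this setup.
    """
    crack = weights == 1.0
    sub = weights == 0.5
    if not crack.any():
        return None

    # Projections of the crack pixels onto the width (x) and height (y) axes.
    p_w = np.count_nonzero(crack.any(axis=0))   # columns containing crack pixels
    p_h = np.count_nonzero(crack.any(axis=1))   # rows containing crack pixels

    length_px = np.hypot(p_w, p_h)
    angle = np.degrees(np.arctan2(p_h, p_w))    # slope angle of the crack

    # Weighted crack area divided by length gives the mean width.
    area_px = crack.sum() + 0.5 * sub.sum()
    length = scale_factor * length_px
    width = scale_factor * area_px / length_px
    return length, width, angle
```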

4.3. Deflection Measurement Method

In order to measure the deflection after obtaining the spliced image, a feature tracking method based on simplified AKAZE and adaptive RANSAC is proposed in this study. Figure 10 shows the basic procedure (a pixel-to-physical conversion sketch for Step 3 is given after this list):

Step 0: extract the region of interest (ROI) in the spliced image, namely the edge region of the beam.

Step 1: feature extraction, that is, feature detection through the simplified AKAZE algorithm in scale space and description through BRIEF (Binary Robust Independent Elementary Features).

Step 2: feature matching based on the descriptors obtained in Step 1, using KNN [28] matching and improved RANSAC [29, 30] purification.

Step 3: transform the pixel displacements into physical displacements using a scale factor.

Step 4: fit the deflection curve of the test beam based on the displacements of the feature points.

Step 5: calculate the deflection value at each position according to the deflection curve obtained in Step 4.
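For Step 3, a minimal sketch of converting matched feature point displacements to physical deflections is shown below; the data layout, coordinate values, and scale factor are illustrative assumptions.

```python
import numpy as np

# Matched feature points before and after loading, as (x, y) pixel coordinates
# in the spliced image; layout and values are illustrative.
pts_unloaded = np.array([[1200.0, 845.2], [5400.0, 851.7], [9800.0, 848.9]])
pts_loaded = np.array([[1200.3, 846.0], [5400.1, 855.4], [9799.8, 851.2]])

SCALE_FACTOR = 0.099  # mm per pixel (see the estimate in Section 3.1)

# Vertical pixel displacement of each tracked point, converted to millimetres.
deflection_mm = (pts_loaded[:, 1] - pts_unloaded[:, 1]) * SCALE_FACTOR
position_mm = pts_unloaded[:, 0] * SCALE_FACTOR  # position along the beam axis

for x, d in zip(position_mm, deflection_mm):
    print(f"x = {x:7.1f} mm, deflection = {d:5.2f} mm")
```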

4.3.1. Feature Extraction

In this study, the position of the self-walking bracket is strictly controlled when collecting images, and the deflection measurement is based on the spliced image, which weakens the influence of scale factor variation. Therefore, an AKAZE algorithm with a simplified scale space is proposed.

The AKAZE algorithm constructs the scale space through a nonlinear diffusion filter [31]. The divergence of a flow function is used to describe the variation of the image brightness at different scales. The classic nonlinear diffusion equation is

$$\frac{\partial L}{\partial t} = \operatorname{div}\bigl(c(x, y, t)\cdot\nabla L\bigr),$$

where $\operatorname{div}$ and $\nabla$ represent divergence and gradient, respectively, $L$ is the image brightness matrix, $c$ is the conduction function, and $t$ is the scale parameter. The conduction function is computed from $\nabla L_{\sigma}$, the gradient of the original image after Gaussian smoothing. Fast Explicit Diffusion (FED) is used to solve the diffusion equation and obtain the nonlinear scale space of the image:

$$L^{i+1} = \bigl(I + \tau\, A(L^{i})\bigr)\, L^{i},$$

where $I$ is the identity matrix, $\tau$ is the step size, and $A(L^{i})$ is the conduction matrix of the image $L^{i}$.

The AKAZE algorithm with a simplified scale space only needs to build one octave, which contains five layers. The index of each layer within the octave and the scale parameter of the Gaussian filter have the following mapping relationship:

$$\sigma_i = \sigma_0\, 2^{\,i/S},\qquad i = 0, 1, \ldots, S-1,$$

where $\sigma_0$ is the base scale, $S$ is the number of layers in the octave (here $S = 5$), and $i$ is the layer index.

After the nonlinear scale space is built, the Hessian matrix is used for feature extraction. The response value of each sampling point is compared with its 26 neighboring points to determine whether it is a maximum. Then, a two-dimensional quadratic function is fitted to the determinant of the Hessian response in the 3 × 3-pixel neighborhood, and its maximum is used to locate the key point precisely.
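As a rough sketch of this feature extraction step in OpenCV, AKAZE can be restricted to a single octave and combined with BRIEF descriptors (BRIEF requires the opencv-contrib package); the parameter values below are illustrative and are not the paper's implementation.

```python
# Illustrative feature extraction: single-octave AKAZE detection + BRIEF
# description (requires opencv-contrib-python for xfeatures2d).
import cv2

def extract_features(gray_roi):
    # Restrict AKAZE to one octave to mimic the simplified scale space;
    # the layer count and detection threshold are illustrative values.
    akaze = cv2.AKAZE_create(threshold=0.001, nOctaves=1, nOctaveLayers=4)
    keypoints = akaze.detect(gray_roi, None)

    # Describe the detected key points with BRIEF binary descriptors.
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
    keypoints, descriptors = brief.compute(gray_roi, keypoints)
    return keypoints, descriptors
```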

4.3.2. Feature Matching

KNN matching based on the Hamming distance is adopted in this study, and the threshold of the nearest-neighbor to second-nearest-neighbor ratio is set to 0.7. Because wrong matches remain, RANSAC is adopted to purify the rough matching results [22, 29].

In the classical RANSAC, after each sample is drawn and a model is established, all samples in the set are checked one by one. The randomly selected samples greatly influence the result, and the number of iterations is determined based on experience, which leads to a large amount of calculation and a long operation time [32]. Before sampling and modeling, this study arranges the matching point pairs in ascending order of their Hamming distance deviation. Eight samples are randomly selected from the first 30% of the matching pairs, those with the smallest Hamming distances, to generate the model to be tested. Then, the first 70% of the matching pairs are pretested, and those with a deviation less than the determination threshold of 40 are regarded as inliers. Furthermore, the number of iterations is determined according to the number of inliers. The relationship between the inlier rate and the number of iterations is

$$N = \frac{\ln(1 - p)}{\ln\bigl(1 - \varepsilon^{m}\bigr)},$$

where $\varepsilon$ is the inlier rate, that is, the ratio of inliers to the total number of data points; $N$ is the number of iterations; $m$ is the number of samples drawn per iteration (here $m = 8$); and $p$ is the confidence probability, which is 99.99% in this study. The upper limit of the iteration number is set to 700; after repeated tests, 700 iterations can produce measurement results that meet the accuracy requirements.
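The sketch below illustrates this pre-sorted, adaptive-iteration RANSAC purification on matched point pairs. It follows the values quoted above (30% sampling pool, 70% pretest set, threshold 40, confidence 99.99%, cap of 700 iterations), but the homography model, the reprojection error, and all names are illustrative modelling assumptions rather than the paper's exact implementation.

```python
import numpy as np
import cv2

def adaptive_ransac(src_pts, dst_pts, distances, thresh=40.0, p=0.9999, max_iter=700):
    """Purify matched point pairs with pre-sorting and an adaptive iteration count.

    src_pts, dst_pts : (n, 2) arrays of matched coordinates in the two images.
    distances        : Hamming distances of the matches (used for pre-sorting).
    """
    order = np.argsort(distances)                 # ascending matching deviation
    src, dst = src_pts[order], dst_pts[order]
    n = len(src)
    pool = max(8, int(0.3 * n))                   # sample from the best 30%
    pretest = max(8, int(0.7 * n))                # pretest on the best 70%

    best_inliers, n_iter, i = None, max_iter, 0
    while i < n_iter and i < max_iter:
        idx = np.random.choice(pool, 8, replace=False)
        H, _ = cv2.findHomography(src[idx].astype(np.float32),
                                  dst[idx].astype(np.float32), 0)  # least-squares fit
        if H is None:
            i += 1
            continue
        proj = cv2.perspectiveTransform(
            src[:pretest].reshape(-1, 1, 2).astype(np.float32), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - dst[:pretest], axis=1)
        inliers = np.flatnonzero(err < thresh)
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
            eps = len(inliers) / pretest          # inlier rate
            if 0 < eps < 1:
                # Adaptive iteration count N = ln(1-p) / ln(1 - eps^8).
                n_iter = min(max_iter, int(np.log(1 - p) / np.log(1 - eps ** 8)) + 1)
        i += 1
    if best_inliers is None:
        return np.array([], dtype=int)
    return order[best_inliers]                    # indices of purified matches
```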

The above is the optimized feature tracking method proposed in this study. The AKAZE algorithm with simplified scale space and the BRIEF algorithm are used for feature point extraction. The RANSAC algorithm with precheck and an adaptive number of iterations purifies the KNN rough matching result to realize feature tracking and obtain deflection information at the feature points of the test beam.

4.3.3. Deflection Curve

In current deflection measurement systems based on computer vision, only the deformation information at specific positions can be obtained, which limits the available information. This study uses the large amount of feature point information obtained by feature tracking to build a more accurate deflection curve of the test beam through curve fitting. The deflection curve is generally continuous and smooth, and Calonder et al. [33] describe the deflection curve under load as a polynomial. In this study, the least squares method is used for polynomial fitting.
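A minimal sketch of this least-squares polynomial fitting with NumPy is shown below; the polynomial degree and the example positions are illustrative choices not specified in the text.

```python
import numpy as np

def fit_deflection_curve(position_mm, deflection_mm, degree=4):
    """Least-squares polynomial fit of the deflection curve.

    position_mm   : positions of tracked feature points along the beam axis.
    deflection_mm : measured vertical displacements at those points.
    degree        : polynomial degree (illustrative; not specified in the text).
    Returns a callable polynomial d(x).
    """
    coeffs = np.polyfit(position_mm, deflection_mm, degree)
    return np.poly1d(coeffs)

# Example: evaluate the fitted curve at the quarter points and midspan of an
# 1800 mm span (values illustrative).
# curve = fit_deflection_curve(position_mm, deflection_mm)
# print(curve(450.0), curve(900.0), curve(1350.0))
```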

4.4. Effect Evaluation of Deflection Measurement Method

The experimental setup is described in Section 3.1. The loads in the three experiments are set to 0 kN, 160 kN, and 225 kN, respectively, and three sets of images are taken. The collected image sequence of each group is spliced to obtain a panoramic image of 39783 × 2880 pixels, which is used for the deflection measurement.

4.4.1. Measurement Speed

The deflection measurement time can be divided into five stages [34]: (1) feature detection, (2) descriptor calculation, (3) feature matching, (4) matching purification, and (5) curve fitting.

The results are shown in Figures 11 and 12. It is evident that the method proposed in this study takes the shortest time, around 35 s, which is more than 40% shorter than SIFT. Especially in the feature description and matching purification stages, the time drops rapidly with more feature matching pairs. Therefore, the method in this study has a faster measurement speed. The processing times of each algorithm are summarized in Tables 1 and 2, for actuator pressures of 160 kN and 225 kN, respectively.

4.4.2. Measurement Accuracy

In this section, the measurements of the algorithm in this study and the above algorithms are compared with the actual measurement results. The root mean square error (RMSE) is used for quantification [26]:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(d_i^{v} - d_i^{d}\bigr)^2},$$

where $n$ is the number of data points, and $d_i^{v}$ and $d_i^{d}$ are the deflections measured by the vision sensor and the dial indicators, respectively. The results in Table 3 show that the algorithm proposed in this study has higher accuracy and is significantly better than the other three algorithms.
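For completeness, this RMSE comparison can be computed in a single NumPy expression; the array names and values below are illustrative.

```python
import numpy as np

# Deflections from the vision-based method and the dial indicators at the
# same measuring points (values and names are illustrative).
d_vision = np.array([3.12, 4.85, 3.05])
d_dial = np.array([3.20, 4.70, 3.10])

rmse = np.sqrt(np.mean((d_vision - d_dial) ** 2))
print(f"RMSE = {rmse:.3f} mm")
```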

5. Response Measurement Results of Test Beam Based on Spliced Image

5.1. Result Display Interface

To give the testers a visual testing result of the structure, the results are displayed in the computer control interface of the image continuous acquisition and transmission system, as shown in Figure 13. The interface shows the results of crack identification and deflection measurement. From top to bottom, the control interface displays the image splicing result, the feature point matching result, the deflection curve, and the crack distribution.

5.2. Laboratory Experiment

The image continuous acquisition and signal transmission system was used for laboratory tests. Loads of 0 kN, 50 kN, 100 kN, 150 kN, 200 kN, 225 kN, and 250 kN are applied to the test beam in stages. The images before and after splicing are shown in Figures 14(a) and 14(b), respectively. It can be seen that the cracks and the bottom edge of the beam have been correctly spliced. Figure 15 is the panoramic image, obtained by splicing all beam images, that covers the entire beam.

To verify the necessity of image splicing, the deflection results of the single image and the spliced image are compared with the result achieved by the dial indicators, under the load of 225 kN. The single image measurement uses the same scale factor as the spliced image. The deflections of the midspan and the quarter points measured by the three methods are compared in Figure 16.

When dial indicators are used, only the deflection values at the measuring positions can be obtained. In the measurement based on single images, only the deformation information within each image can be obtained. Both approaches have limitations. Moreover, due to the different scale factors between individual images, the measurement deviation is relatively large, and the scale factors must be adjusted to achieve ideal results. The method based on the spliced image uses the DoG scale space in the stitching process to fuse the scales of adjacent images; therefore, its deviation is smaller than that of the single-image method. The measurement results are shown in Table 4.

5.3. Results of Crack Identification

The crack regions intercepted from the beam’s panoramic image are sent to the trained dual network model. The recognition results are shown in Figures 17–19. As seen in these figures, when the load is 100 kN, there are only five tiny cracks whose widths are less than 0.130 mm, and the recognition effect is not ideal; from another perspective, this helps to ignore the small cracks produced by natural weathering. However, as the load increases, the crack width gradually increases, and the recognition effect improves as well. During the test, the width and length of the cracks were also measured with a crack meter and a ruler, respectively.

From Figures 18 and 19, three cracks with larger widths are selected. The width of each crack is measured at a clear boundary. With the increase of the load, the width and length of Crack ① and Crack ② increase significantly. The results for crack width and length are shown in Tables 5 and 6, respectively.

The length deviation of Crack ③ is relatively large because Crack ③ has two bifurcation points. In the dual network model, the crack lengths at the bifurcation points are all accumulated, whereas the researcher manually measured only the main branch’s length, following the usual practice. Overall, for crack width, the maximum deviation is 13.52% and the average deviation is 11.76%; for crack length, the maximum deviation is 14.29% and the average deviation is 8.18%.

5.4. Results of Deflection Measurement

The deflection measurement method based on the spliced image is adopted. The results are shown in Figure 20.

As shown in Figures 20 and 21, the deformation curve based on the stitched image is similar to the actual deformation shape, and the expression of the deflection curve can be obtained conveniently. Comparing the deflections at the midspan and quarter points with the dial indicators’ values, as shown in Table 7, the maximum deviation is 8.49% and the average deviation is 4.50%. As the load increases, the deviation gradually decreases.

6. Conclusion

Traditional crack and deflection measurement methods can only obtain information at specific points of the test. This study proposes an automatic measurement method for the response of the test beam based on a continuous image acquisition system, which integrates image collection, image splicing, crack identification, and deflection measurement. This method provides a new idea for the overall detection of frames, beams, and other structures. The main conclusions are as follows:

(1) The continuous image acquisition and signal transmission system uses a self-walking bracket, enabling the industrial camera to maintain a proper distance from the structure and move at a fixed distance along the beam direction. Signals are transmitted through the IoT to realize remote control.

(2) High-quality panoramic images of beams can be obtained using the PCA-SIFT image stitching technology with a screening mechanism. The spliced image can be used for crack identification and deflection measurement.

(3) The dual network model is built to identify cracks and measure their width and length. The CNN model can effectively eliminate most of the interference information while identifying the cracks in the image. The FCN model is used to extract features from the CNN results, accurately obtaining information such as the cracks’ width and length. The average deviation of the width is 11.76%, and the average deviation of the length is 8.18%, meeting the crack detection requirements. The method can also obtain the distribution of cracks, which is essential for the force analysis of the component.

(4) The optimized feature tracking method combines the simplified scale-space AKAZE algorithm and the BRIEF algorithm to extract feature points. The RANSAC algorithm with precheck and adaptive iteration times purifies the KNN rough matching results to achieve feature tracking. This method shortens the processing time by more than 40%.

(5) The large number of feature point positions and deformations obtained by feature tracking are used for curve fitting to obtain an accurate deflection curve of the beam under load, with an average deviation of 4.50%.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

D. Liang conceived the idea of this research. J. Liu and L. Wang wrote, revised, and polished this paper. C. Liu and J. Liu wrote this paper.

Acknowledgments

The work presented in this study was supported by the National Natural Science Foundation of China (no. 51978236), the Science and Technology Project of Hebei Provincial Department of Transportation (no. YC-201912), and the Science and Technology Development Project Plan of Tianjin Transport Commission (no. 2019-06).