In recent years, the deterioration of infrastructure facilities such as bridges has become a serious problem. Aging is currently countered by visual inspection and repair performed by humans; however, such inspections raise issues of both cost and safety. If robots could perform inspections instead, both aspects would improve, contributing significantly to the maintenance of infrastructure facilities. In this paper, we propose a composite image processing technique that specifies the locations of feature points as coordinates from smartphone camera images, providing the positional information needed to localize BIREM-IV-P, a robot developed to support bridge inspection. Corners found in the bridge inspection environment are used as feature points, and their positions are obtained with Harris corner detection, a conventional corner detection method. In addition, to compensate for the shortcomings of Harris corner detection, line segments in the image are detected using the Hough transform, and the intersections of these line segments are recognized as corners. By combining the results of the two detection methods, the target feature points can be specified accurately. The positions of the specified feature points are then transformed from the image coordinate system to the world coordinate system. As a result, the locations of the target feature points could be detected in a three-dimensional coordinate system.
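The Harris detector mentioned in the abstract scores each pixel by the structure tensor of the local gradients. The following is a minimal pure-NumPy sketch of that response, not the paper's implementation; the window size, the constant k = 0.04, and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients summed over a local window."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # Sum each gradient product over a (2*win+1) x (2*win+1) box.
    def box(a):
        out = np.zeros_like(a)
        h, w = a.shape
        for y in range(win, h - win):
            for x in range(win, w - win):
                out[y, x] = a[y - win:y + win + 1, x - win:x + win + 1].sum()
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2

# Synthetic image: a bright square whose top-left corner sits at (row 20, col 20).
img = np.zeros((40, 40))
img[20:, 20:] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)  # peak response near the corner
```

Edges yield a large response in only one gradient direction (so det(M) stays small), while a true corner makes both eigenvalues of M large, which is why the maximum of R lands near the square's corner.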
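The second step the abstract describes, recognizing intersections of Hough-detected line segments as corners, reduces to solving a 2x2 linear system once each line is in the Hough normal form x·cosθ + y·sinθ = ρ. A small sketch under that assumption (the example lines are hypothetical, not data from the paper):

```python
import numpy as np

def line_intersection(rho1, th1, rho2, th2):
    """Intersection of two lines given in Hough normal form
    x*cos(theta) + y*sin(theta) = rho. Returns None for (near-)parallel lines."""
    A = np.array([[np.cos(th1), np.sin(th1)],
                  [np.cos(th2), np.sin(th2)]])
    b = np.array([rho1, rho2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None  # parallel lines: no single intersection point
    return np.linalg.solve(A, b)

# A vertical line x = 20 (theta = 0) and a horizontal line y = 30 (theta = pi/2).
p = line_intersection(20.0, 0.0, 30.0, np.pi / 2)
# p ≈ (20, 30), the corner where the two lines cross
```

Because the Hough transform votes over many edge pixels, such intersection corners can survive the local noise or blur that makes a windowed detector like Harris miss a corner, which is the complementarity the abstract relies on.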
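The final step, converting detected feature points from image coordinates to world coordinates, can be sketched with the standard pinhole camera model. This is a generic back-projection under assumed intrinsics and a known depth, not the calibration or geometry actually used with BIREM-IV-P:

```python
import numpy as np

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_world(u, v, depth, K, R=np.eye(3), t=np.zeros(3)):
    """Back-project pixel (u, v) to world coordinates under the pinhole model,
    assuming the depth along the optical axis is known and the camera pose is
    given by rotation R and translation t (world -> camera)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray
    cam = ray * depth                               # 3D point in the camera frame
    return R.T @ (cam - t)                          # camera frame -> world frame

# A pixel 80 px right of the principal point, seen at 2 m depth.
P = pixel_to_world(400.0, 240.0, 2.0, K)
```

With the identity pose used here the world frame coincides with the camera frame, so P lies 0.2 m to the side and 2 m ahead; in practice R, t, and the depth would come from the robot's known geometry or calibration.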
Published in | Automation, Control and Intelligent Systems (Volume 9, Issue 1) |
DOI | 10.11648/j.acis.20210901.15 |
Page(s) | 34-45 |
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
Copyright | Copyright © The Author(s), 2021. Published by Science Publishing Group |
Keywords | Corner Detection, Hough Transform, Harris Corner Detection, Image Processing, Coordinate Transformation |
APA Style
Song, H., Nakahama, J., & Takada, Y. (2021). Localization Method Based on Image Processing for Autonomous Driving of Mobile Robot in the Linear Infrastructure. Automation, Control and Intelligent Systems, 9(1), 34-45. https://doi.org/10.11648/j.acis.20210901.15