Control Systems and Computers, N4, 2024, Article 6

https://doi.org/10.15407/csc.2024.04.050

Control Systems and Computers, 2024, Issue 4 (308), pp. 50-60.

UDC 004.8

Kashtan Vita Yu., PhD (Eng.), Associate Professor, Associate Professor of the Information Technology and Computer Engineering Department, Dnipro University of Technology, 19 Dmytro Yavornytsky Avenue, Dnipro 49005, Ukraine, E-mail: kashtan.v.yu@nmu.one

Kazymyrenko Oleksii V., graduate student of the Department of Information Technology and Computer Engineering, Dnipro University of Technology, 19 Dmytro Yavornytsky Avenue, Dnipro 49005, Ukraine, E-mail: kazymyrenko.o.v@nmu.one

Hnatushenko Volodymyr V., Dr. Sc., Professor, Head of the Information Technology and Computer Engineering Department, Dnipro University of Technology, 19 Dmytro Yavornytsky Avenue, Dnipro 49005, Ukraine, E-mail: hnatushenko.v.v@nmu.one

COMPARATIVE ANALYSIS OF THE VEHICLE RECOGNITION METHOD’S EFFECTIVENESS ON AERIAL IMAGES

Introduction. Object recognition in aerial images is a pressing task in modern conditions, especially in cases requiring accurate and fast vehicle recognition. Traditional contour extraction methods, such as Canny, Sobel, Laplacian, Prewitt, and Scharr, are based on gradient analysis and are known for their ease of implementation. Contour extraction is an essential step for further recognition, since correctly defined contours contribute to more accurate object identification. However, the effectiveness of these methods is limited, especially in complex environments with high object density, uneven brightness, and noise. Neural network models, such as YOLO (You Only Look Once), offer new possibilities, providing more accurate and reliable recognition even in difficult situations.

Purpose. This study compares the effectiveness of classical contour extraction methods and the YOLOv6n neural network model for vehicle recognition in aerial images. The accuracy of vehicle detection is evaluated using the main metrics: Precision, Recall, and F1-measure, which make it possible to assess each method's effectiveness under specific conditions.
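For clarity, these metrics are standard and follow directly from the counts of true positives (TP), false positives (FP), and false negatives (FN). The short Python sketch below is illustrative only (it is not the authors' evaluation code, and the example counts are hypothetical):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of reported detections that are real vehicles."""
    return tp / (tp + fp)


def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth vehicles that were detected."""
    return tp / (tp + fn)


def f1_score(p: float, r: float) -> float:
    """Harmonic mean of Precision and Recall."""
    return 2 * p * r / (p + r)


# Hypothetical counts for a single test image, for illustration only.
p, r = precision(tp=47, fp=3), recall(tp=47, fn=5)
print(f"Precision={p:.3f}, Recall={r:.3f}, F1={f1_score(p, r):.3f}")
```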

Methods. The study includes testing the classical Canny, Sobel, Laplacian, Prewitt, and Scharr algorithms for vehicle contour detection and analyzing the results of the YOLOv6n deep-learning object detection model. The classical methods identify contours from pixel gradients, which allows structures in an image to be extracted. The YOLOv6n model is based on a neural network approach that takes complex image features into account for more accurate and faster object detection.
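As an illustration of the classical gradient-based pipeline described above, the following minimal OpenCV sketch extracts edge maps with the Canny, Sobel, and Scharr operators. It is not the authors' implementation: the file name, smoothing kernel, and threshold values are assumed for the example, and the OpenCV 4.x API is assumed.

```python
import cv2
import numpy as np

# Assumed input: a single aerial image; the path is illustrative.
image = cv2.imread("aerial_scene.png", cv2.IMREAD_GRAYSCALE)

# Light smoothing reduces the noise that gradient operators amplify.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Canny: hysteresis thresholds chosen for illustration only.
canny_edges = cv2.Canny(blurred, 100, 200)

# Sobel: combine horizontal and vertical gradients into one magnitude map.
sobel_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(np.sqrt(sobel_x**2 + sobel_y**2))

# Scharr: a 3x3 operator with better rotational accuracy than Sobel.
scharr_x = cv2.Scharr(blurred, cv2.CV_64F, 1, 0)
scharr_y = cv2.Scharr(blurred, cv2.CV_64F, 0, 1)
scharr_edges = cv2.convertScaleAbs(np.sqrt(scharr_x**2 + scharr_y**2))

# Contours can then be extracted from any binary edge map
# (OpenCV 4.x returns contours and a hierarchy).
contours, _ = cv2.findContours(canny_edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"Canny produced {len(contours)} candidate contours")
```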

Results. The data analysis showed that the classical methods, although able to detect contours, have limited accuracy in conditions of high object density and sharp changes in brightness. Precision and F1 Score for the traditional methods were low, indicating a significant number of false positives and false negatives. In particular, the Sobel and Scharr methods showed the highest Recall but markedly lower Precision. In contrast, the YOLOv6n neural network model demonstrated high results on all primary metrics: Precision – 97.9%, Recall – 94.8%, F1 Score – 96.32%, and mAP – 97.6%, which confirms its advantages in providing accurate and reliable vehicle recognition in aerial images.
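The reported F1 Score is consistent with the listed Precision and Recall, since F1 is their harmonic mean; a one-line check reproduces the value:

```python
# Reported values from the Results section.
precision, recall = 0.979, 0.948

# F1 is the harmonic mean of Precision and Recall.
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # ≈ 0.9633, matching the reported 96.32% up to rounding
```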

Conclusions. The study has shown that traditional contour extraction methods can serve as auxiliary tools for image preprocessing, but they do not provide sufficient accuracy for the final stages of vehicle recognition. Neural network approaches, such as YOLOv6n, significantly outperform classical methods by providing high detection speed and accuracy, which makes them the recommended choice for high-precision object recognition tasks in aerial images.

Download full text! (In Ukrainian)

Keywords: aerial images, vehicle detection, YOLOv6n model, neural networks, Canny, Sobel, Laplacian, Prewitt, Scharr.

  1. Alsamhi, S.H., Ma, O., Ansari, M.S., Almalki, F.A. (2019). “Survey on collaborative smart drones and Internet of Things for improving smartness of smart cities”, IEEE Access, Vol. 7, pp. 128125-128152. https://doi.org/10.1109/ACCESS.2019.2934998
  2. Kashtan, V.Yu., Hnatushenko, V.V., Udovyk, I.M., Shevtsova, O.S. (2023). “Neiromerezheve rozpiznavannia obiektiv zabudovy na aerofotoznimkakh” [Neural network recognition of built-up objects in aerial images]. Systemni tekhnolohii. Rehionalnyi mizhvuzivskyi zbirnyk naukovykh robit [System Technologies. Regional Interuniversity Collection of Scientific Papers], Issue 1 (120). Dnipro, pp. 30-39. https://doi.org/10.32782/IT/2023-1-5
  3. Al-Kaff, A., Gómez-Silva, M., Moreno, F., de la Escalera, A., Armingol, J. (2019). “An appearance-based tracking algorithm for aerial search and rescue purposes”. Sensors, 19(3), 652.
    https://doi.org/10.3390/s19030652
  4. Ramachandran, A., Sangaiah, A.K. (2021). “A review on object detection in unmanned aerial vehicle surveillance”, International Journal of Cognitive Computing in Engineering, Vol. 2, pp. 215-228.
    https://doi.org/10.1016/j.ijcce.2021.11.005
  5. Vipul, G., Kapil, T., Pragya, G., Raj, K. (2017). “A Review Paper: On Various Edge Detection Techniques”. International Journal for Research in Applied Science and Engineering Technology, pp. 534-537.
    https://doi.org/10.22214/ijraset.2017.8074
  6. Puyi, S., Hong, C., Haobo, G. (2023). “Improved UAV object detection algorithm for YOLOv5s”. Comput. Eng., 59, pp. 108-116.
  7. Cariou, C., Le Moan, S., Chehdi, K. (2022). “A novel mean-shift algorithm for data clustering”. IEEE Access, 10, pp. 14575-14585.
    https://doi.org/10.1109/ACCESS.2022.3147951
  8. Umale, P. et al. (2022). “Planar object detection using SURF and SIFT method”. International Journal of Engineering Applied Sciences and Technology.
    https://doi.org/10.33564/IJEAST.2022.v06i11.008
  9. Aytekin, Ö., Zöngür, U., Halici, U. (2013). “Texture-based airport runway detection”, IEEE Geosci. Remote Sens. Lett. 10, pp. 471-475.
    https://doi.org/10.1109/LGRS.2012.2210189
  10. Ez-zahouani, B., Teodoro, A., El Kharki, O., Jianhua, L., Kotaridis, I., Yuan, X., Ma, L. (2023). “Remote sensing imagery segmentation in object-based analysis: review of methods, optimization, and quality evaluation over the past 20 years”, Remote Sensing Applications: Society and Environment, Vol. 32, 101031.
    https://doi.org/10.1016/j.rsase.2023.101031
  11. Draguţ, L., Csillik, O., Eisank, C., Tiede, D. (2014). “Automated parameterisation for multi-scale image segmentation on multiple layers”, ISPRS J. Photogramm. Remote Sens. 88, pp. 119-127.
    https://doi.org/10.1016/j.isprsjprs.2013.11.018
  12. Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke, Z., Li, Q., Cheng, M., Nie, W., et al. (2022). “YOLOv6: A single-stage object detection framework for industrial applications”. arXiv preprint arXiv:2209.02976.
  13. OpenAerialMap. [online]. Available at: <https://openaerialmap.org/> [Accessed 11 May 2024].
  14. COCO Datasets. [online]. Available at: <https://cocodataset.org/> [Accessed 11 May 2024].

Received 04.11.2024