SA-FRCNN: An Improved Object Detection Method for Airport Apron Scenes
Abstract:
The airport apron scene contains rich contextual information about the spatial position relationships between objects. Traditional object detectors consider only visual appearance and ignore this contextual information. In addition, the detection accuracy for some categories in the apron dataset is low. Therefore, an improved object detection method that exploits spatial-aware features in apron scenes, called SA-FRCNN, is presented. The method uses graph convolutional networks to capture the relative spatial relationships between objects in the apron scene and incorporates this spatial context into feature learning. Moreover, an attention mechanism is introduced into the feature extraction process to focus on spatial positions and key features, and the distance-IoU loss is used to achieve more accurate bounding-box regression. Experimental results show that the mean average precision of apron object detection based on SA-FRCNN reaches 95.75%, and the detection of some hard-to-detect categories is significantly improved. The proposed method effectively improves detection accuracy on the apron dataset and outperforms other methods.
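The distance-IoU (DIoU) loss mentioned in the abstract augments the standard IoU loss with a penalty on the normalized distance between box centers, which speeds up and stabilizes regression when boxes do not overlap. As a minimal sketch (not the authors' implementation), assuming axis-aligned boxes in (x1, y1, x2, y2) format:

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss between two boxes given as (x1, y1, x2, y2).

    L_DIoU = 1 - IoU + rho^2 / c^2, where rho is the distance between
    the two box centers and c is the diagonal length of the smallest
    box enclosing both.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area of the two boxes
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union area and IoU
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared distance between box centers (rho^2)
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
           ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2

    # Squared diagonal of the smallest enclosing box (c^2)
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2

    return 1.0 - iou + rho2 / c2
```

For identical boxes the loss is 0; for disjoint boxes it exceeds 1, since IoU is 0 and the center-distance penalty is positive, which is exactly what keeps the gradient informative where plain IoU loss saturates.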
Project Supported:
This work was supported by the Fundamental Research Funds for the Central Universities of the Civil Aviation University of China (No. 3122021088).
LYU Zonglei, CHEN Liyun. SA-FRCNN: An Improved Object Detection Method for Airport Apron Scenes[J]. Transactions of Nanjing University of Aeronautics & Astronautics, 2021, 38(4): 571-586.