Rotated Object Detection via Scale-Invariant Mahalanobis Distance in Aerial Images
2022
Rotated object detection in aerial images is a meaningful yet challenging task, as objects are densely arranged and have arbitrary orientations. Eight-parameter methods (which regress the coordinates of the box vertices) in rotated object detection usually use $l_{n}$-norm losses (L1 loss, L2 loss, and smooth L1 loss) as loss functions. As $l_{n}$-norm losses are based on the non-scale-invariant Minkowski distance, they lead to inconsistency with the detection metric, rotated Intersection-over-Union (IoU), and to training instability. To address these problems, we use the Mahalanobis distance to measure the discrepancy between the predicted and target box vertex vectors, proposing a new loss function called Mahalanobis distance loss (MDL) for eight-parameter rotated object detection. As the Mahalanobis distance is scale-invariant, MDL is more consistent with the detection metric and more stable during training than $l_{n}$-norm losses. To alleviate the boundary-discontinuity problem shared by all eight-parameter methods, we further take the minimum loss value over boundary cases, making MDL continuous there. With the proposed MDL, we achieve state-of-the-art performance on DOTA-v1.0. Furthermore, in comparative experiments against smooth L1 loss and approximate SkewIoU loss, MDL performs better in rotated object detection.
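To make the two key properties claimed above concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a Mahalanobis-distance loss between 8-dimensional vertex vectors. The inverse covariance here is estimated from a batch of target vectors, and the boundary-continuous variant simply takes the minimum over cyclic re-orderings of the predicted vertices; both choices are illustrative assumptions, not details confirmed by the abstract.

```python
import numpy as np

def mahalanobis_loss(pred, target, cov_inv):
    """Mahalanobis distance between an 8-d predicted and target vertex vector:
    sqrt((x - y)^T S^{-1} (x - y)), with S^{-1} the inverse covariance."""
    d = pred - target
    return float(np.sqrt(d @ cov_inv @ d))

def mdl(pred, target, cov_inv):
    """Boundary-continuous variant (illustrative assumption): take the minimum
    loss over cyclic re-orderings of the four predicted vertices, so that
    equivalent vertex orderings of the same box incur the same loss."""
    verts = pred.reshape(4, 2)
    candidates = [np.roll(verts, k, axis=0).reshape(-1) for k in range(4)]
    return min(mahalanobis_loss(c, target, cov_inv) for c in candidates)

# Toy setup: estimate the covariance from a synthetic batch of target vectors.
rng = np.random.default_rng(0)
targets = rng.normal(size=(100, 8))
cov_inv = np.linalg.inv(np.cov(targets.T))
```

Scale invariance follows because rescaling all vectors by a factor $s$ rescales the covariance by $s^2$, so the two factors cancel in the quadratic form; this is exactly why the loss stays consistent across object scales, unlike an $l_n$-norm.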