Embedded real-time infrared and visible image fusion for UAV surveillance

2021 
Infrared and visible image fusion is a beneficial processing task for Unmanned Aerial Vehicle (UAV) surveillance: it improves visibility by combining the complementary advantages of an infrared camera and a visible-light camera. An embedded onboard solution is necessary for UAV-based surveillance missions because it reduces the amount of data transmitted to the ground. In this paper, we propose an infrared and visible-light image fusion method, implement it on two platforms with hardware accelerators commonly used for embedded vision, the ZedBoard (ARM + FPGA) and the NVIDIA TX1 (ARM + GPU), and compare their performance. To verify the usefulness of image fusion, we carry out extensive experiments showing that fusion improves a UAV's target detection ability in different scenes; the detection rate reaches 0.926 in our experiments. The throughput on the ZedBoard and the TX1 is 205.3 FPS and 36.6 FPS, respectively (38× and 6.7× speedups over an ARM Cortex-A9 processor). Our results also show that the ZedBoard reduces energy per frame by 7.1× and 18.9× compared to the TX1 and the ARM CPU, respectively. This work is based on a UAV platform that we designed ourselves, and all image sets were captured from real scenes, demonstrating that the proposed method is viable and reflects the actual needs of real UAV surveillance systems.
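The abstract does not specify the fusion rule, so as a point of reference, the sketch below shows a generic pixel-wise weighted-average fusion of an infrared channel and a visible luminance channel in Q8 fixed point, a style that maps naturally onto FPGA and embedded ARM targets. The function name, the fixed weight alpha_q8, and the Q8 format are illustrative assumptions, not the paper's method.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch (not the paper's algorithm): fuse two 8-bit
 * grayscale images of equal size with a fixed weight.
 * alpha_q8 is a Q8 fixed-point weight in [0, 256], where 256 means
 * "use only the infrared pixel" and 0 means "use only the visible pixel". */
void fuse_ir_visible(const uint8_t *vis, const uint8_t *ir,
                     uint8_t *out, size_t n_pixels, uint16_t alpha_q8)
{
    for (size_t i = 0; i < n_pixels; ++i) {
        /* out = alpha*ir + (1 - alpha)*vis, computed in integer arithmetic */
        uint32_t fused = (uint32_t)alpha_q8 * ir[i]
                       + (uint32_t)(256 - alpha_q8) * vis[i];
        out[i] = (uint8_t)(fused >> 8);  /* divide by 256 to return to 8 bits */
    }
}
```

Fixed-point arithmetic of this kind avoids floating-point units and is easy to pipeline, which is one reason per-pixel fusion kernels tend to run efficiently on FPGA fabric and embedded GPUs.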