Abstract
Existing deep-learning-based object detection algorithms for low-light images often suffer from low accuracy, high computational complexity, and inadequate model generalization. This paper proposes an efficient end-to-end low-light image object detection network, called FE-YOLO, which exploits the amplitude and phase information obtained from the Fourier Transform together with the object detection ability of YOLO. First, a Fourier Enhanced Network (FENet) is proposed. It incorporates a Frequency Domain Processing Block (FPB), which precisely extracts frequency-domain information. By leveraging the positive correlation between amplitude and brightness, the FPB enhances image brightness and contrast by expanding the amplitude, effectively improving image quality in low-light conditions. Then, the enhancement loss and the detection loss are integrated into a joint loss function, enabling the parallel optimization of image enhancement and object detection. In particular, the paper introduces two loss functions as part of the enhancement loss: an amplitude difference loss and a phase similarity loss. These losses accurately constrain the amplitude and phase of the image and balance the trade-off between image enhancement and the preservation of structural information, thereby improving object detection performance. Finally, a comprehensive end-to-end joint training strategy is applied to FE-YOLO, improving the model's generalization capabilities. Experiments are conducted on low-light image datasets such as ExDark and DarkFace. The results indicate that FENet outperforms recent advanced enhancement models in low-light image enhancement, and that, compared with other low-light image object detection models, FE-YOLO detects objects in low-light environments more accurately while maintaining good real-time performance.
The detailed experimental results and program code have been made public at https://github.com/tgliyang1985/FE-YOLO.
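The core idea behind the FPB, expanding the Fourier amplitude while preserving the phase to brighten a low-light image, can be illustrated with a minimal sketch. This is not the paper's learned FPB module; it is an illustrative NumPy example with an assumed fixed gain factor, showing how scaling the amplitude spectrum raises overall brightness while the phase (which carries structural information) is left untouched:

```python
import numpy as np

def amplitude_boost(img, gain=1.5):
    """Illustrative sketch (not the paper's FPB): brighten an image by
    scaling the Fourier amplitude while preserving the phase, exploiting
    the positive correlation between amplitude and brightness."""
    f = np.fft.fft2(img)                      # 2-D Fourier transform
    amplitude, phase = np.abs(f), np.angle(f) # split into amplitude and phase
    boosted = (gain * amplitude) * np.exp(1j * phase)  # expand amplitude only
    out = np.real(np.fft.ifft2(boosted))      # back to the spatial domain
    return np.clip(out, 0.0, 1.0)             # keep intensities in [0, 1]

dark = np.full((8, 8), 0.2)                   # uniform dark test image
bright = amplitude_boost(dark, gain=1.5)
print(bright.mean())                          # mean brightness rises from 0.2 to 0.3
```

In FE-YOLO the expansion is learned rather than fixed, and the amplitude difference and phase similarity losses constrain exactly these two components during training.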