Abstract
Optical and synthetic aperture radar (SAR) remote sensing have established themselves as valuable tools for object detection. Optical images are weather-dependent but provide rich detail, whereas SAR images are weather-independent but suffer from speckle noise and weaker edge information. Combining the two can significantly improve object detection precision. However, existing fusion techniques often introduce SAR noise into the fused features, which degrades detection performance. In this study, we present a detail-guided object detection architecture. It extracts detail-richness maps from the optical images and uses them to recalibrate the spatial attention weights assigned to the optical and SAR features, thereby reducing the impact of SAR noise on the fused features in regions rich in optical detail. Moreover, existing public optical-SAR fusion datasets lack instance-level object annotations and are therefore unsuitable for object detection. We thus introduce two datasets: OPTSAR, with high registration accuracy, and QXS-PART, a counterpart with lower registration accuracy, to validate the effectiveness and generalization of our method. Both datasets provide instance-level labels for objects such as ships, aircraft, and storage tanks. Experiments on OPTSAR and QXS-PART show that our method markedly improves object detection precision in both well-aligned and poorly aligned optical-SAR fusion scenarios, surpassing single-source object detection methods and established fusion approaches in accuracy.
Supported by Natural Science Foundation of Anhui Province (Grant No. 2208085MF157).
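The detail-guided fusion described in the abstract can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the Sobel-based detail map, the 1x1 attention convolutions, and the `DetailGuidedFusion` module name are assumptions introduced here only to make the recalibration idea concrete (optical weights boosted and SAR weights suppressed where the optical image is detail-rich).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetailGuidedFusion(nn.Module):
    """Hypothetical sketch: fuse optical and SAR feature maps, down-weighting
    SAR contributions in regions where the optical image is rich in detail."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce per-pixel spatial attention logits per modality.
        self.opt_attn = nn.Conv2d(channels, 1, kernel_size=1)
        self.sar_attn = nn.Conv2d(channels, 1, kernel_size=1)

    @staticmethod
    def detail_map(optical_img: torch.Tensor) -> torch.Tensor:
        """Approximate a detail-richness map with Sobel gradient magnitude,
        normalized to [0, 1]; the paper's exact definition may differ."""
        gray = optical_img.mean(dim=1, keepdim=True)
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=gray.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(gray, kx, padding=1)
        gy = F.conv2d(gray, ky, padding=1)
        mag = torch.sqrt(gx ** 2 + gy ** 2)
        return mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-6)

    def forward(self, f_opt: torch.Tensor, f_sar: torch.Tensor,
                optical_img: torch.Tensor) -> torch.Tensor:
        # Resize the image-level detail map to the feature resolution.
        d = F.interpolate(self.detail_map(optical_img), size=f_opt.shape[-2:],
                          mode="bilinear", align_corners=False)
        # Per-modality spatial attention, recalibrated by the detail map:
        # detail-rich regions raise optical weights and lower SAR weights.
        w_opt = torch.sigmoid(self.opt_attn(f_opt)) * (1.0 + d)
        w_sar = torch.sigmoid(self.sar_attn(f_sar)) * (1.0 - d)
        w = torch.softmax(torch.cat([w_opt, w_sar], dim=1), dim=1)
        return w[:, :1] * f_opt + w[:, 1:] * f_sar
```

In this reading, the detail map acts purely as a spatial prior over the attention weights, so the fused feature can be dropped into any standard detection head without further changes; the actual network design in the paper may differ in where and how the recalibration is applied.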