
A Lightweight RGB-T Fusion Network for Practical Semantic Segmentation



Abstract:

Semantic segmentation of RGB-T images is a complex task due to the challenges involved in fusing information from multiple modalities, which requires significant computational resources. This paper presents a novel and lightweight network architecture for RGB-T semantic segmentation that incorporates a parameter-free feature fusion module to integrate complementary information from the different modalities. Furthermore, we propose a pretrained parameter selection strategy to improve convergence speed and accuracy. The network is designed to be computationally efficient and lightweight, making it well suited for real-time applications. Our architecture employs middle fusion: features are extracted from the two modalities with separate encoders, and a parameter-free cross-modal attention mechanism then selectively combines the most relevant information from each modality. Additionally, we investigate the impact of pretrained parameter selection on the performance of the network. Experimental results on an urban scene dataset demonstrate that our approach outperforms real-time state-of-the-art methods in the literature while showing performance comparable to state-of-the-art methods that require up to 100 times the computational complexity. Our findings highlight the potential of RGB-T fusion-based semantic segmentation for real-world applications.
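
The abstract does not give the exact formulation of the parameter-free fusion module, but a minimal sketch of such a module can be written in PyTorch as below. The pooling-based saliency and per-pixel softmax weighting are illustrative assumptions, as are the function and variable names; the point is only that the cross-modal attention involves no learnable parameters.

```python
import torch

def parameter_free_fusion(rgb_feat: torch.Tensor,
                          thermal_feat: torch.Tensor) -> torch.Tensor:
    """Fuse RGB and thermal feature maps of shape (N, C, H, W)
    without any learnable weights (hypothetical sketch).

    Each modality yields a spatial saliency map from its own channel
    statistics; a softmax across the two modalities turns these into
    competing attention weights that sum to 1 at every location.
    """
    # Per-location saliency: mean activation over channels -> (N, 1, H, W).
    rgb_sal = rgb_feat.mean(dim=1, keepdim=True)
    th_sal = thermal_feat.mean(dim=1, keepdim=True)

    # Softmax over the modality axis: no parameters involved.
    weights = torch.softmax(torch.cat([rgb_sal, th_sal], dim=1), dim=1)
    w_rgb, w_th = weights[:, 0:1], weights[:, 1:2]

    # Weighted sum favors the more salient modality at each pixel.
    return w_rgb * rgb_feat + w_th * thermal_feat

# Usage: fuse mid-level encoder features from the two branches.
rgb = torch.randn(2, 64, 60, 80)
thermal = torch.randn(2, 64, 60, 80)
fused = parameter_free_fusion(rgb, thermal)  # (2, 64, 60, 80)
```

In a middle-fusion design of this kind, such a module would sit between the two encoder streams at one or more stages, so the fused features feed a shared decoder while each branch keeps its own weights.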
Date of Conference: 24-28 September 2023
Date Added to IEEE Xplore: 13 February 2024
Conference Location: Bilbao, Spain
