Weighted Guided Optional Fusion Network for RGB-T Salient Object Detection

Published: 22 January 2024

Abstract

Rational and effective use of visible and thermal infrared image information to achieve cross-modal complementary fusion is key to improving the performance of RGB-T salient object detection (SOD). A close analysis of RGB-T SOD data reveals three main scenarios: both modalities (RGB and T) show a significant foreground, only the RGB modality is disturbed, or only the T modality is disturbed. Existing methods, however, pursue more effective cross-modal fusion while treating the two modalities as equivalent. This assumption of equivalence has two significant limitations. First, it cannot discriminate which modality makes the dominant contribution to performance: even when both modalities have visually significant foregrounds, differences in their imaging properties lead to distinct performance contributions. Second, in a specific acquisition scenario, a pair of two-modality images will contribute differently to the final detection performance owing to their varying sensitivity to the same background interference. For the RGB-T saliency detection task, it is therefore more reasonable to generate exclusive weights for the two modalities and to select specific fusion mechanisms, based on the weight configuration, to perform cross-modal complementary integration. Consequently, we propose a weighted guided optional fusion network (WGOFNet) for RGB-T SOD. Specifically, a feature refinement module first performs an initial refinement of the extracted multilevel features. A weight generation module (WGM) then generates exclusive network performance contribution weights for each of the two modalities, and an optional fusion module (OFM) relies on these weights to perform a particular integration of cross-modal information. Simple cross-level fusion finally yields the saliency prediction map. Comprehensive experiments on three publicly available benchmark datasets demonstrate that WGOFNet achieves superior performance compared with state-of-the-art RGB-T SOD methods. The source code is available at: https://github.com/WJ-CV/WGOFNet.
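To make the weight-then-select idea concrete, below is a minimal PyTorch sketch of the mechanism the abstract describes: a WGM-style module pools both modalities into per-modality contribution weights, and an OFM-style module switches between a symmetric fusion path and a dominant-modality-guided path depending on the weight configuration. The module names, layer sizes, and the balance threshold are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hedged sketch of weighted guided optional fusion (illustrative only).
# All names, layer sizes, and the balance threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGeneration(nn.Module):
    """Pool each modality's features and predict two softmax contribution weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 2),
        )

    def forward(self, f_rgb: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        # Global average pooling gives a (B, 2C) descriptor of both modalities.
        desc = torch.cat([
            F.adaptive_avg_pool2d(f_rgb, 1).flatten(1),
            F.adaptive_avg_pool2d(f_t, 1).flatten(1),
        ], dim=1)
        # (B, 2): one weight per modality, summing to 1.
        return torch.softmax(self.mlp(desc), dim=1)

class OptionalFusion(nn.Module):
    """Choose a fusion path from the weight split (threshold is assumed)."""
    def __init__(self, channels: int, balance_margin: float = 0.1):
        super().__init__()
        self.balance_margin = balance_margin
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f_rgb, f_t, w):
        w_rgb, w_t = w[:, 0], w[:, 1]
        # Batch-level switch for simplicity; a per-sample gate is also possible.
        if torch.all((w_rgb - w_t).abs() < self.balance_margin):
            # Both modalities contribute comparably: symmetric fusion.
            fused = self.fuse(torch.cat([f_rgb, f_t], dim=1))
        else:
            # One modality dominates: scale each stream by its weight so the
            # dominant modality guides the integration.
            wr = w_rgb.view(-1, 1, 1, 1)
            wt = w_t.view(-1, 1, 1, 1)
            fused = self.fuse(torch.cat([wr * f_rgb, wt * f_t], dim=1))
        return fused

if __name__ == "__main__":
    f_rgb = torch.randn(2, 64, 56, 56)   # refined RGB features
    f_t = torch.randn(2, 64, 56, 56)     # refined thermal features
    wgm, ofm = WeightGeneration(64), OptionalFusion(64)
    w = wgm(f_rgb, f_t)
    print(ofm(f_rgb, f_t, w).shape)      # torch.Size([2, 64, 56, 56])
```

A soft blend of the two paths, weighted by the gap between the two modality weights, would be a natural per-sample refinement of the hard switch sketched here.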


    Published In

    ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 20, Issue 5
    May 2024, 650 pages
    EISSN: 1551-6865
    DOI: 10.1145/3613634
    Editor: Abdulmotaleb El Saddik

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 January 2024
    Online AM: 13 October 2023
    Accepted: 10 September 2023
    Revised: 16 August 2023
    Received: 05 May 2023
    Published in TOMM Volume 20, Issue 5


    Author Tags

    1. Salient object detection
    2. RGB-T
    3. transformer
    4. modality contribution weights
    5. cross-modal fusion

    Qualifiers

    • Research-article

    Funding Sources

    • Scientific and Technological Innovation 2030

    Cited By

    • (2024) Network Information Security Monitoring Under Artificial Intelligence Environment. International Journal of Information Security and Privacy 18:1, 1–25. DOI: 10.4018/IJISP.345038. Online publication date: 21-Jun-2024
    • (2024) Heterogeneous Fusion and Integrity Learning Network for RGB-D Salient Object Detection. ACM Transactions on Multimedia Computing, Communications, and Applications 20:7, 1–24. DOI: 10.1145/3656476. Online publication date: 15-May-2024
    • (2024) MultiRider: Enabling Multi-Tag Concurrent OFDM Backscatter by Taming In-band Interference. Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services, 292–303. DOI: 10.1145/3643832.3661862. Online publication date: 3-Jun-2024
    • (2024) Depth-Assisted Semi-Supervised RGB-D Rail Surface Defect Inspection. IEEE Transactions on Intelligent Transportation Systems 25:7, 8042–8052. DOI: 10.1109/TITS.2024.3387949. Online publication date: Jul-2024
    • (2024) SPANet: Spatial perceptual activation network for camouflaged object detection. IET Computer Vision. DOI: 10.1049/cvi2.12310. Online publication date: 18-Sep-2024
    • (2024) Leveraging modality-specific and shared features for RGB-T salient object detection. IET Computer Vision. DOI: 10.1049/cvi2.12307. Online publication date: 25-Sep-2024
    • (2024) Driver intention prediction based on multi-dimensional cross-modality information interaction. Multimedia Systems 30:2. DOI: 10.1007/s00530-024-01282-3. Online publication date: 15-Mar-2024
