
RLLNet: a lightweight remaking learning network for saliency redetection on RGB-D images

  • Letter
  • Special Focus on Deep Learning for Computer Vision
  • Science China Information Sciences

Conclusion

We developed a lightweight learning framework for efficient RGB-D salient object detection (SOD). First, a hierarchical cross-modal principal network extracts complementary depth features by learning multiscale refinement features; the RGB and depth features are alternately fed into the CIGM for gradual refinement. This cross-modal interaction strategy prevents mutual degradation between the RGB and depth features and provides a preliminary suppression of background noise. Then, the GDAM receives the prior cues and imports them into a lightweight network to initialize its weights. Finally, the saliency map is predicted by a weighted addition of the initial saliency map and the reversed background map. Performance on eight benchmark datasets demonstrates the effectiveness and superiority of the proposed RLLNet in terms of efficiency and robustness.
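To make the final prediction step concrete, the following is a minimal PyTorch-style sketch of a weighted-addition fusion head. The module name WeightedFusion, the single learnable scalar weight, and the sigmoid activation are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class WeightedFusion(nn.Module):
        """Hypothetical fusion head: combines the initial saliency map with a
        reversed background map by weighted addition (names and weighting
        scheme are assumptions for illustration)."""
        def __init__(self):
            super().__init__()
            # single learnable fusion weight, initialized to equal contribution
            self.alpha = nn.Parameter(torch.tensor(0.5))

        def forward(self, init_sal, background):
            # reverse the background map so foreground regions are emphasized
            reversed_bg = 1.0 - background
            # weighted addition of the two cues, squashed to [0, 1]
            fused = self.alpha * init_sal + (1.0 - self.alpha) * reversed_bg
            return torch.sigmoid(fused)

In this sketch, init_sal and background are single-channel maps of the same spatial size; the learnable weight lets training decide how much each cue contributes to the final saliency map.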



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61502429, 61972357).

Author information


Corresponding author

Correspondence to Wujie Zhou.

Additional information

Supporting information

Appendixes A–C. The supporting information is available online at info.scichina.com and link.springer.com. The supporting materials are published as submitted, without typesetting or editing. The responsibility for scientific accuracy and content remains entirely with the authors.



Cite this article

Zhou, W., Liu, C., Lei, J. et al. RLLNet: a lightweight remaking learning network for saliency redetection on RGB-D images. Sci. China Inf. Sci. 65, 160107 (2022). https://doi.org/10.1007/s11432-020-3337-9
