
3D-B2U: Self-supervised Fluorescent Image Sequences Denoising

  • Conference paper
Artificial Intelligence (CICAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14473)


Abstract

Fluorescence imaging can reveal the spatiotemporal dynamics of life activities. However, fluorescence image data suffer from photon shot noise because the photon budget is limited, so denoising fluorescence image sequences is an important task. Existing self-supervised methods avoid both the complex parameter tuning of non-learning methods and the large number of noisy-clean image pairs required by supervised learning, and have become the state of the art for denoising fluorescent image sequences. However, they are designed for 2D data and therefore cannot exploit the temporal dimension that fluorescence sequences provide beyond single images. In addition, they still train on paired noisy data, and the strong prior carried by such pairs may cause the model to overfit. In this work, we extend existing self-supervised methods to 3D and propose a 3D global masker that introduces a visible blind-spot structure based on 3D convolutions, avoiding identity mapping while fully exploiting the information in the input data. Our method makes effective use of the temporal dimension and allows self-supervised denoising of fluorescent images to mine information from the input data itself. Experimental results show that our method achieves better denoising of fluorescent image sequences.
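
To make the idea of a 3D global masker concrete, the sketch below shows, in PyTorch, how a Blind2Unblind-style global blind-spot mask could be extended from 2D images to 3D fluorescence volumes (frames, height, width). It is a minimal illustration under stated assumptions, not the authors' implementation: the cell size, the zero-replacement rule at the blind spots, the placeholder network, and the loss assembly are all assumed for the example.

```python
# A minimal sketch (not the authors' released code) of a Blind2Unblind-style
# "global masker" extended from 2D images to 3D fluorescence volumes.
# Cell size, zero-replacement rule, and the toy network are assumptions.
import torch
import torch.nn as nn


def global_mask_3d(volume: torch.Tensor, cell: int = 2):
    """Create cell**3 masked copies of `volume` (shape N, C, T, H, W).

    Each copy hides a different voxel position inside every
    (cell x cell x cell) block, so across all copies every voxel is hidden
    exactly once -- the blind-spot structure that prevents identity mapping.
    """
    n, c, t, h, w = volume.shape
    copies, masks = [], []
    for dt in range(cell):
        for dy in range(cell):
            for dx in range(cell):
                mask = torch.zeros(1, 1, t, h, w, device=volume.device)
                mask[:, :, dt::cell, dy::cell, dx::cell] = 1.0
                copies.append(volume * (1.0 - mask))  # zero out the blind spots
                masks.append(mask)
    return copies, masks


class Tiny3DDenoiser(nn.Module):
    """Placeholder 3D convolutional denoiser; a real model would be far larger
    (e.g. a 3D U-Net-style network)."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    noisy = torch.randn(1, 1, 8, 32, 32)   # toy noisy sequence: 8 frames of 32x32
    model = Tiny3DDenoiser()
    copies, masks = global_mask_3d(noisy, cell=2)
    # Assemble a full prediction in which each voxel is taken from the copy
    # where it was hidden, then compare it with the noisy input: a
    # self-supervised loss that needs no clean target.
    blind_pred = sum(model(v) * m for v, m in zip(copies, masks))
    loss = ((blind_pred - noisy) ** 2).mean()
    loss.backward()
    print(f"self-supervised loss: {loss.item():.4f}")
```

In this scheme every voxel is hidden in exactly one masked copy, so the network never sees the value it must predict, yet the full spatiotemporal neighbourhood stays visible to the 3D convolutions; this is what allows a global masker to avoid identity mapping while still exploiting the time dimension.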



Acknowledgement

This work was supported by the National Key R&D Program of China (2022YFC3300704) and the National Natural Science Foundation of China under Grants 62171038, 62171042, and 62088101.

Author information


Corresponding author

Correspondence to Xiaoyong Wang.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Wang, J., Li, H., Wang, X., Fu, Y. (2024). 3D-B2U: Self-supervised Fluorescent Image Sequences Denoising. In: Fang, L., Pei, J., Zhai, G., Wang, R. (eds) Artificial Intelligence. CICAI 2023. Lecture Notes in Computer Science, vol 14473. Springer, Singapore. https://doi.org/10.1007/978-981-99-8850-1_11


  • DOI: https://doi.org/10.1007/978-981-99-8850-1_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8849-5

  • Online ISBN: 978-981-99-8850-1

  • eBook Packages: Computer Science, Computer Science (R0)
