
Deep learning based on parameterized physical forward model for adaptive holographic imaging with unpaired data

Abstract

Holographic imaging poses the ill-posed inverse mapping problem of retrieving complex amplitude maps from measured diffraction intensity patterns. The existing deep learning methods for holographic imaging often depend solely on the statistical relation between the given data distributions, compromising their reliability in practical imaging configurations where physical perturbations exist in various forms, such as mechanical movement and optical fluctuation. Here, we present a deep learning method based on a parameterized physical forward model that reconstructs both the complex amplitude and the range of objects under highly perturbative configurations where the object-to-sensor distance is set beyond the range of the given training data. To prove reliability in practical biomedical applications, we demonstrate holographic imaging of red blood cells flowing in a cluster and of diverse types of tissue sections presented without any ground truth data. Our results suggest that the proposed approach renders deep learning methods adaptable to deterministic perturbations, and therefore extends their applicability to a wide range of inverse problems in imaging.
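
The forward physics in this setting is free-space propagation of a complex field to the sensor plane followed by intensity detection, with the object-to-sensor distance entering as an explicit parameter. As a rough illustration only, the sketch below assumes an angular-spectrum propagation operator written in PyTorch with placeholder wavelength and pixel-size values; it is not the authors' implementation, but it shows how a predicted complex amplitude and a predicted distance can be mapped back to a synthetic hologram for comparison with the measured intensity.

```python
# Minimal sketch (not the authors' code): a physical forward model parameterized by the
# object-to-sensor distance z, based on angular spectrum propagation. The wavelength and
# pixel size are illustrative placeholders.
import torch


def angular_spectrum_propagate(field, z, wavelength, pixel_size):
    """Propagate a complex field over distance z (metres) with the angular spectrum method."""
    n = field.shape[-1]                      # assume a square n x n field
    fx = torch.fft.fftfreq(n, d=pixel_size)  # spatial frequencies (cycles per metre)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    # Keep propagating components only; evanescent components get zero phase here.
    arg = torch.clamp(1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2, min=0.0)
    phase = (2 * torch.pi / wavelength) * z * torch.sqrt(arg)
    transfer = torch.polar(torch.ones_like(phase), phase)  # exp(i * phase)
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)


def forward_model(complex_amplitude, z, wavelength=532e-9, pixel_size=2e-6):
    """Map a complex amplitude and a distance z to the diffraction intensity at the sensor."""
    sensor_field = angular_spectrum_propagate(complex_amplitude, z, wavelength, pixel_size)
    return sensor_field.abs() ** 2
```

Because the distance enters only through the transfer function, a reconstruction and a predicted distance can be validated by re-synthesizing the hologram with such an operator and comparing it with the measurement.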


Fig. 1: Overview of the proposed model.
Fig. 2: Demonstration of simultaneous reconstruction of complex amplitude and object distance.
Fig. 3: Demonstration of adaptive holographic imaging.
Fig. 4: Demonstration of holographic imaging of RBCs in a dynamic environment.
Fig. 5: Holographic imaging of histology slides without ground truth.

Data availability

Part of the 3 μm polystyrene microsphere, RBC and histology slide datasets (ref. 48) is available at https://doi.org/10.6084/m9.figshare.21378744. The rest of the data that support the findings of this study are available from the corresponding author upon reasonable request. The data used to generate the figures in this paper are also publicly available in the repository that hosts our code (ref. 49).

Code availability

The code (ref. 49) used in this study is available at https://doi.org/10.5281/zenodo.7220717 and https://github.com/csleemooo/Deep_learning_based_on_parameterized_physical_forward_model_for_adaptive_holographic_imaging.

References

  1. Zheng, G., Horstmeyer, R. & Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photon. 7, 739–745 (2013).

  2. Tian, L. & Waller, L. 3D intensity and phase imaging from light field measurements in an LED array microscope. Optica 2, 104–111 (2015).

  3. Sung, Y. et al. Optical diffraction tomography for high resolution live cell imaging. Opt. InfoBase Conf. Pap. 17, 1977–1979 (2009).

  4. Brady, D. J., Choi, K., Marks, D. L., Horisaki, R. & Lim, S. Compressive holography. Opt. Express 17, 13040–13049 (2009).

  5. Ozcan, A. & McLeod, E. Lensless imaging and sensing. Annu. Rev. Biomed. Eng. 18, 77–102 (2016).

  6. Gustafsson, M. G. L. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198, 82–87 (2000).

  7. Antipa, N. et al. DiffuserCam: lensless single-exposure 3D imaging. Optica 5, 1–9 (2018).

  8. Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982).

  9. Miao, J., Charalambous, P., Kirz, J. & Sayre, D. Extending the methodology of X-ray crystallography to non-crystalline specimens. AIP Conf. Proc. 521, 3–6 (2000).

  10. Chapman, H. N. & Nugent, K. A. Coherent lensless X-ray imaging. Nat. Photon. 4, 833–839 (2010).

  11. Wu, L. et al. Three-dimensional coherent X-ray diffraction imaging via deep convolutional neural networks. NPJ Comput. Mater. 7, 175 (2021).

  12. Ruder, S. An overview of gradient descent optimization algorithms. Preprint at https://arxiv.org/abs/1609.04747 (2017).

  13. Rivenson, Y., Stern, A. & Javidi, B. Overview of compressive sensing techniques applied in holography. Appl. Opt. 52, A423–A432 (2013).

  14. Huang, G., Jiang, H., Matthews, K. & Wilford, P. Lensless imaging by compressive sensing. In 2013 IEEE International Conference on Image Processing 2101–2105 (IEEE, 2013).

  15. Ren, Z., Xu, Z. & Lam, E. Y. Learning-based nonparametric autofocusing for digital holography. Optica 5, 337–344 (2018).

  16. Goy, A., Arthur, K., Li, S. & Barbastathis, G. Low photon count phase retrieval using deep learning. Phys. Rev. Lett. 121, 243902 (2018).

  17. Bostan, E., Heckel, R., Chen, M., Kellman, M. & Waller, L. Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network. Optica 7, 559–562 (2020).

  18. Zhang, Y. et al. PhaseGAN: a deep-learning phase-retrieval approach for unpaired datasets. Opt. Express 29, 19593–19604 (2021).

  19. Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. & Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 7, 17141 (2018).

  20. Li, Y., Xue, Y. & Tian, L. Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media. Optica 5, 1181–1190 (2018).

  21. Rivenson, Y., Wu, Y. & Ozcan, A. Deep learning in holography and coherent imaging. Light Sci. Appl. 8, 85 (2019).

  22. Wu, Y. et al. Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery. Optica 5, 704–710 (2018).

  23. Wang, H. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat. Methods 16, 103–110 (2019).

  24. Wu, Y. et al. Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram. Light Sci. Appl. 8, 25 (2019).

  25. Kim, H., Song, G., You, J.-i., Lee, C. & Jang, M. Deep learning for lensless imaging. J. Korean Phys. Soc. 81, 570–579 (2022).

  26. Lyu, M., Wang, H., Li, G. & Situ, G. eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction. Opt. Express 26, 22603–22614 (2018).

  27. Li, X. et al. Unsupervised content-preserving transformation for optical microscopy. Light Sci. Appl. 10, 44 (2021).

  28. Wang, F. et al. Phase imaging with an untrained neural network. Light Sci. Appl. 9, 77 (2020).

  29. Niknam, F., Qazvini, H. & Latifi, H. Holographic optical field recovery using a regularized untrained deep decoder network. Sci. Rep. 11, 10903 (2021).

  30. Zhang, X., Wang, F. & Situ, G. BlindNet: an untrained learning approach toward computational imaging with model uncertainty. J. Phys. D 55, 034001 (2022).

  31. Sim, B., Oh, G., Kim, J., Jung, C. & Ye, J. C. Optimal transport driven CycleGAN for unsupervised learning in inverse problems. SIAM J. Imaging Sci. 13, 2281–2306 (2020).

  32. Zhu, J. Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE International Conference on Computer Vision 2223–2232 (IEEE, 2017).

  33. Villani, C. Optimal Transport: Old and New (Springer, 2009).

  34. Goodman, J. Introduction to Fourier Optics (McGraw-Hill, 2008).

  35. Cuche, E., Marquet, P. & Depeursinge, C. Spatial filtering for zero-order and twin-image elimination in digital off-axis holography. Appl. Opt. 39, 4070–4075 (2000).

  36. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Navab, N. et al.) 234–241 (Springer, 2015).

  37. Park, Y. K., Depeursinge, C. & Popescu, G. Quantitative phase imaging in biomedicine. Nat. Photon. 12, 578–589 (2018).

  38. Langehanenberg, P., Kemper, B., Dirksen, D. & von Bally, G. Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging. Appl. Opt. 47, D176–D182 (2008).

  39. Ren, Z., Zhao, J. & Lam, E. Y. Automatic compensation of phase aberrations in digital holographic microscopy based on sparse optimization. APL Photon. 4, 110808 (2019).

  40. Wu, Y. & He, K. Group normalization. In Proc. European Conference on Computer Vision (ECCV) (eds Ferrari, V. et al.) 3–19 (Springer, 2018).

  41. Hu, J., Shen, L., Albanie, S., Sun, G. & Wu, E. Squeeze-and-excitation networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 7132–7141 (IEEE, 2018).

  42. Arjovsky, M., Chintala, S. & Bottou, L. Wasserstein generative adversarial networks. In Proc. International Conference on Machine Learning 214–223 (PMLR, 2017).

  43. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. & Courville, A. Improved training of Wasserstein GANs. Adv. Neural Inf. Process. Syst. 30, 5769–5779 (2017).

  44. Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  45. Glorot, X. & Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proc. International Conference on Artificial Intelligence and Statistics 249–256 (PMLR, 2010).

  46. Kingma, D. P. & Ba, J. L. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  47. Zhang, L., Zhang, L., Mou, X. & Zhang, D. FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 20, 2378–2386 (2011).

  48. Lee, C. et al. 3 μm polystyrene bead, red blood cell, and histological slide datasets. Figshare https://doi.org/10.6084/m9.figshare.21378744 (2022).

  49. Lee, C. et al. Deep learning based on parameterized physical forward model for adaptive holographic imaging with unpaired data (v1.0). Zenodo https://doi.org/10.5281/zenodo.7220717 (2022).

Acknowledgements

This work was supported by the Samsung Research Funding and Incubation Center of Samsung Electronics grant SRFC-IT2002-03 (to C.L., G.S., H.K. and M.J.), National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) NRF-2021R1A5A1032937 and 2021R1C1C1011307 (to C.L., G.S., H.K. and M.J.), KAIST Key Research Institute Interdisciplinary Research Group Project (to J.C.Y.) and National Research Foundation (NRF) of Korea government NRF-2020R1A2B5B03001980 (to J.C.Y.). We thank Small Machines for providing blood samples.

Author information

Contributions

C.L. and M.J. conceived the initial idea. C.L. performed the experiments with the help of G.S. and H.K. C.L. developed the network architecture and performed data analysis. C.L. and M.J. wrote the manuscript with the help of G.S., H.K. and J.C.Y. M.J. supervised the project.

Corresponding authors

Correspondence to Jong Chul Ye or Mooseok Jang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Pablo Villanueva Pérez, Yuhe Zhang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Demonstration of consistent reconstruction from diffraction intensity patterns measured at different object-to-sensor distances.

a, Input diffraction intensities measured at 9, 11, 13 and 15 mm. b, Reconstructed complex amplitudes; the predicted object-to-sensor distances are indicated by arrows. Regardless of the distance at which the diffraction intensities are measured, the complex amplitudes are consistently reconstructed. The corresponding measured ground truth is presented in the right column. Scale bar: 20 μm.

Extended Data Fig. 2 Comparison of reconstruction results from diffraction intensity pattern sampled from ID.

a, The input diffraction intensity (left), the reconstruction results from each method and the ground truth (right) are illustrated. Each method was trained using diffraction intensities measured at object-to-sensor distances of 7–17 mm with 2 mm spacing and unmatched complex amplitudes. Magnified views of the dashed boxes are presented below each reconstructed image. b,c, Amplitude (b) and phase (c) profiles along the dashed lines in the magnified views are compared. All reconstructed profiles match the ground truth well except for the reconstruction from CycleGAN. Scale bars for the whole FOV and the magnified ROI: 20 μm and 2 μm, respectively.

Extended Data Fig. 3 Comparison of reconstruction results from diffraction intensity pattern sampled from OOD.

a, The input diffraction intensity (left), the reconstruction results from each method and the ground truth (right) are illustrated. Each method was trained using diffraction intensities measured at a single object-to-sensor distance of 13 mm and unmatched complex amplitudes. Magnified views of the dashed boxes are presented below each reconstructed image. b,c, Amplitude (b) and phase (c) profiles along the dashed lines in the magnified views are compared. Only the reconstructed profile (red) from the proposed method matches the ground truth (blue) well. Scale bars for the whole FOV and the magnified ROI: 20 μm and 2 μm, respectively.

Extended Data Fig. 4 Ablation study on ground truth data for 3 μm polystyrene beads.

The proposed network was trained with the full diffraction intensity data (3,600 patches) while the proportion of complex amplitude data was decreased from 100% to 10%, 1% and 0.5%. a,b, Diffraction intensities measured at (a) 9 mm and (b) 15 mm are used as the network input. The proposed approach consistently generates the complex amplitude maps down to the level where only 1% (~60 patches) of the complex amplitude data are used for training. Scale bar: 20 μm.

Extended Data Fig. 5 Analysis of predicted object-to-sensor distances and reconstructed phase map of RBCs.

a, A series of 100 diffraction intensity patterns was measured with the sample stage tilted. Scale bar: 100 μm. b, Mean and standard deviation of the predicted distance over the 100 frames in a. The distance estimation error becomes larger for the null patches, which do not include any RBCs, as they present a uniform, flat intensity. c, Exemplary patches of the measured phase map (top) and the generated phase map (bottom) of RBCs. Magnified 3D phase maps of the dashed boxes (i)–(iv) are presented to the right of each 2D phase map. Scale bar: 20 μm.

Extended Data Fig. 6 Exemplary phase maps of tissue sections.

a, Exemplary phase maps of colon, rectum, small bowel, and appendix tissue used in ‘group 1’. b, Exemplary phase maps of duodenum, stomach (body), and stomach (antrum) tissue used in ‘group 2’. Scale bar: 20 μm.

Extended Data Fig. 7 Comparison of the reconstructed complex amplitude of rectum and small bowel.

Reconstruction results from three different methods (CycleGAN, PhaseGAN and the proposed model) are presented. a,b, Diffraction intensities of (a) rectum and (b) small bowel, measured at 14 mm and 24 mm, respectively, are used as the network input. The ground truth is also presented in the right column. The proposed model shows superior reconstruction results compared with the existing methods. Scale bar: 20 μm.

Extended Data Fig. 8 Ablation study on ground truth data for tissue sections.

The proposed network was trained with the full diffraction intensity data (2,800 patches) while the proportion of complex amplitude data was decreased from 100% to 10%, 1% and 0.1%. Note that only tissue 'group 1' was used for training. a,b, Diffraction intensities of (a) appendix measured at 13 mm and (b) colon measured at 22 mm are used as the network input. The proposed network consistently generates the complex amplitude maps down to the level where only 1% (~100 patches) of the complex amplitude data are used for training. Scale bars: 20 μm.

Extended Data Fig. 9 Robustness of the proposed model over the extended range of object-to-sensor distance.

a,b, Diffraction intensities of colon measured at 10, 20, 30 and 40 mm (a) are used as the network inputs, and the corresponding reconstructed complex amplitude maps are shown in b, with the predicted distances indicated by arrows. Even over this extremely large distance range, the proposed model consistently generates the complex amplitude maps. Scale bar: 20 μm.

Extended Data Fig. 10 Ablation study on discriminator.

Comparison of the reconstruction results between the networks trained with and without the adversarial loss. To train the network without the adversarial loss, λWGAN and λGP are set to 0. a,b, Diffraction intensities of (a) small bowel measured at 19 mm and (b) rectum measured at 25 mm are used as the network input. The complex amplitude maps reconstructed by the network trained without the adversarial loss show out-of-focus artifacts, even though the physical validation result agrees well with the input diffraction intensity. Scale bar: 20 μm.
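
For reference, the λWGAN and λGP weights ablated here suggest a Wasserstein adversarial objective with gradient penalty (refs. 42,43), in which λWGAN weights the adversarial term and λGP weights the penalty, so setting both to 0 leaves only the data-fidelity terms. The snippet below is a generic WGAN-GP sketch under those assumptions, with illustrative function and variable names; it is not the authors' exact loss code.

```python
# Generic WGAN-GP sketch (illustrative, not the authors' code): lambda_wgan and lambda_gp
# play the roles of the weights that this ablation sets to zero.
import torch


def gradient_penalty(critic, real, fake):
    """Gradient penalty evaluated on random interpolates between real and fake samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)  # assumes 4D image batches
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()


def critic_loss(critic, real, fake, lambda_gp=10.0):
    """Wasserstein critic objective with gradient penalty (fake is assumed detached)."""
    return critic(fake).mean() - critic(real).mean() + lambda_gp * gradient_penalty(critic, real, fake)


def generator_loss(critic, fake, fidelity_loss, lambda_wgan=1.0):
    """Data-fidelity term plus the adversarial term; lambda_wgan = 0 disables the latter."""
    return fidelity_loss - lambda_wgan * critic(fake).mean()
```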

Supplementary information

Supplementary Information

Supplementary Notes 1–5, Figs. 1–15 and Tables 1–3.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lee, C., Song, G., Kim, H. et al. Deep learning based on parameterized physical forward model for adaptive holographic imaging with unpaired data. Nat Mach Intell 5, 35–45 (2023). https://doi.org/10.1038/s42256-022-00584-3
