
Deep Unfolding for Snapshot Compressive Imaging

Published in: International Journal of Computer Vision

Abstract

Snapshot compressive imaging (SCI) systems aim to capture high-dimensional (\(\ge 3\)D) images in a single shot using 2D detectors. SCI devices consist of two main parts: a hardware encoder and a software decoder. The hardware encoder is typically an (optical) imaging system designed to capture compressed measurements. The software decoder, on the other hand, refers to a reconstruction algorithm that retrieves the desired high-dimensional signal from those measurements. In this paper, leveraging the idea of deep unrolling, we propose an SCI recovery algorithm, namely GAP-net, which unfolds the generalized alternating projection (GAP) algorithm. At each stage, GAP-net passes its current estimate of the desired signal through a trained convolutional neural network (CNN). The CNN operates as a denoiser, projecting the estimate back to the desired signal space. For the GAP-net that employs trained auto-encoder-based denoisers, we prove a probabilistic global convergence result. Finally, we investigate the performance of GAP-net in solving video SCI and spectral SCI problems. In both cases, GAP-net demonstrates competitive performance on both synthetic and real data. In addition to its high accuracy and speed, we show that GAP-net is flexible with respect to signal modulation, implying that a trained GAP-net decoder can be applied to different systems. Our code is available at https://github.com/mengziyi64/GAP-net.
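To make the unrolled structure described above concrete, one GAP-net stage alternates a Euclidean projection onto the measurement-consistent set with a learned denoiser. The following is a minimal NumPy sketch, not the trained network from the repository: `gap_net_stage` is an illustrative name, `denoiser` stands in for the stage's CNN, and the toy \(\pm 1\) diagonal-mask sensing matrix is a hypothetical construction exploiting the SCI property \(HH^T = B\,I_n\).

```python
import numpy as np

def gap_net_stage(v, y, H, denoiser, B):
    """One GAP-net stage (sketch): project the current estimate v onto
    the affine set {x : H x = y}, then apply a denoiser.

    For SCI sensing matrices H H^T = B I_n, so the projection
    x = v + H^T (H H^T)^{-1} (y - H v) reduces to a scaled
    residual back-projection.
    """
    x = v + H.T @ (y - H @ v) / B  # GAP projection step
    return denoiser(x)             # denoising step (a CNN in GAP-net)

# Toy SCI sensing matrix: H = [D_1, ..., D_B] with +/-1 diagonal masks,
# so that H H^T = B I_n.
rng = np.random.default_rng(0)
n, B = 16, 8
masks = rng.choice([-1.0, 1.0], size=(B, n))
H = np.hstack([np.diag(masks[b]) for b in range(B)])
y = H @ rng.standard_normal(B * n)

# With an identity "denoiser", one stage already satisfies H x = y.
x = gap_net_stage(np.zeros(B * n), y, H, lambda z: z, B)
assert np.allclose(H @ x, y)
```

In the actual algorithm the identity map is replaced by a trained CNN, and the stages are cascaded with stage-specific weights.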


Notes

  1. It has been observed in several papers (Jalali & Yuan, 2019) and experiments (Liu et al., 2019) that AMP does not converge well in SCI applications, due to the special structure of the sensing matrix in SCI, and that ADMM usually outperforms ISTA.

  2. We observed some errors in the proof of global convergence of PnP-GAP in Yuan et al. (2020). Specifically, the lower bound of the second term in Eq. (25) of Yuan et al. (2020) should be 0; therefore, the proof of global convergence for PnP-GAP presented there does not hold. The authors have updated the manuscript on arXiv with a local convergence result. In this paper, using concentration-of-measure tools, we prove global convergence of the GAP-net algorithm to the vicinity of the desired solution.

  3. For PnP-type algorithms, various conditions on the denoiser, such as contractive denoisers (Ryu et al., 2019), Lipschitz-continuous denoisers (Metzler et al., 2016), bounded denoisers (Chan et al., 2017), and non-expansive denoisers (Metzler et al., 2017), have been studied in the literature.

  4. Characterizing the Lipschitz constant of a given neural network is a challenging problem (NP-hard; Virmaux & Scaman, 2018) but an important one that has been studied extensively in recent years. Instead of directly calculating the Lipschitz constant, the main focus is typically on bounding it (see, for example, Fazlyab et al., 2019; Virmaux & Scaman, 2018).
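As a concrete illustration of the bounding approach mentioned in note 4, the Lipschitz constant of a feed-forward network with 1-Lipschitz activations (e.g., ReLU) is upper-bounded by the product of the spectral norms of its weight matrices; this is the naive bound that methods such as that of Fazlyab et al. (2019) tighten. A minimal sketch (the function name and toy weights are illustrative, not from the paper's code):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Product of spectral norms: an upper bound on the Lipschitz
    constant of x -> W_L sigma(... sigma(W_1 x)) when every
    activation sigma is 1-Lipschitz (e.g., ReLU)."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

# Two toy layers: scaling by 2 followed by scaling by 0.5.
print(lipschitz_upper_bound([2.0 * np.eye(3), 0.5 * np.eye(3)]))  # -> 1.0
```

This bound can be very loose for deep networks, which is why the cited works focus on tighter (e.g., SDP-based) estimates.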

References

  • Bioucas-Dias, J., & Figueiredo, M. (2007). A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Transactions on Image Processing, 16(12), 2992–3004.


  • Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.


  • Chan, S. H., Wang, X., & Elgendy, O. A. (2017). Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Transactions on Computational Imaging, 3, 84–98.


  • Chang, J.H.R., Li, C.L., Poczos, B., Kumar, B.V., Sankaranarayanan, A.C. (2017). One network to solve them all: Solving linear inverse problems using deep projection models. In: 2017 IEEE international conference on computer vision (ICCV), pp. 5889–5898. https://doi.org/10.1109/ICCV.2017.627.

  • Cheng, Z., Chen, B., Liu, G., Zhang, H., Lu, R., Wang, Z., Yuan, X. (2021). Memory-efficient network for large-scale video compressive sensing. In: IEEE/CVF conference on computer vision and pattern recognition (CVPR).

  • Cheng, Z., Chen, B., Lu, R., Wang, Z., Zhang, H., Meng, Z., & Yuan, X. (2022). Recurrent neural networks for snapshot compressive imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Cheng, Z., Lu, R., Wang, Z., Zhang, H., Chen, B., Meng, Z., Yuan, X. (2020). BIRNAT: Bidirectional recurrent neural networks with adversarial training for video snapshot compressive imaging. In: European conference on computer vision (ECCV)

  • Choi, I., Jeon, D. S., Nam, G., Gutierrez, D., & Kim, M. H. (2017). High-quality hyperspectral reconstruction using a spectral prior. ACM Transactions on Graphics, 36(6), 218.

  • Fazlyab, M., Robey, A., Hassani, H., Morari, M., & Pappas, G. (2019). Efficient and accurate estimation of Lipschitz constants for deep neural networks. NeurIPS, 32, 11427–11438.


  • Gehm, M. E., John, R., Brady, D. J., Willett, R. M., & Schulz, T. J. (2007). Single-shot compressive spectral imaging with a dual-disperser architecture. Optics Express, 15(21), 14013–14027. https://doi.org/10.1364/OE.15.014013


  • Gregor, K., LeCun, Y. (2010). Learning fast approximations of sparse coding. In: Proceedings of the 27th international conference on international conference on machine learning, pp. 399–406.

  • Gu, S., Zhang, L., Zuo, W., Feng, X. (2014). Weighted nuclear norm minimization with application to image denoising. In: IEEE conference on computer vision and pattern recognition (CVPR), pp. 2862–2869.

  • Han, X., Wu, B., Shou, Z., Liu, X.Y., Zhang, Y., Kong, L. (2020). Tensor fista-net for real-time snapshot compressive imaging. In: AAAI.

  • He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp. 770–778.

  • He, W., Yokoya, N., & Yuan, X. (2021). Fast hyperspectral image recovery via non-iterative fusion of dual-camera compressive hyperspectral imaging. IEEE Transactions on Image Processing, 30.

  • Hershey, J.R., Roux, J.L., Weninger, F. (2014). Deep unfolding: Model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574.

  • Hitomi, Y., Gu, J., Gupta, M., Mitsunaga, T., Nayar, S.K. (2011). Video from a single coded exposure photograph using a learned over-complete dictionary. In: 2011 international conference on computer vision, pp. 287–294. IEEE.

  • Hu, X., Cai, Y., Lin, J., Wang, H., Yuan, X., Zhang, Y., Timofte, R., Van Gool, L. (2022). Hdnet: High-resolution dual-domain learning for spectral compressive imaging. arXiv preprint arXiv:2203.02149.

  • Huang, T., Dong, W., Yuan, X., Wu, J., & Shi, G. (2021). Deep Gaussian scale mixture prior for spectral compressive imaging. In: IEEE/CVF conference on computer vision and pattern recognition (CVPR).

  • Huang, T., Yuan, X., Dong, W., Wu, J., & Shi, G. (2023). Deep Gaussian scale mixture prior for image reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Iliadis, M., Spinoulas, L., & Katsaggelos, A. K. (2018). Deep fully-connected networks for video compressive sensing. Digital Signal Processing, 72, 9–18. https://doi.org/10.1016/j.dsp.2017.09.010


  • Jalali, S., & Yuan, X. (2019). Snapshot compressed sensing: Performance bounds and algorithms. IEEE Transactions on Information Theory, 65(12), 8005–8024. https://doi.org/10.1109/TIT.2019.2940666


  • Jin, K. H., McCann, M. T., Froustey, E., & Unser, M. (2017). Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9), 4509–4522. https://doi.org/10.1109/TIP.2017.2713099


  • Kingma, D.P., Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

  • Kulkarni, K., Lohit, S., Turaga, P., Kerviche, R., Ashok, A. (2016). Reconnet: Non-iterative reconstruction of images from compressively sensed random measurements. In: CVPR

  • Li, Y., Qi, M., Gulve, R., Wei, M., Genov, R., Kutulakos, K.N., Heidrich, W. (2020). End-to-end video compressive sensing using Anderson-accelerated unrolled networks. In: 2020 IEEE international conference on computational photography (ICCP), pp. 1–12.

  • Liao, X., Li, H., & Carin, L. (2014). Generalized alternating projection for weighted-\(\ell _{2,1}\) minimization with applications to model-based compressive sensing. SIAM Journal on Imaging Sciences, 7(2), 797–823.

  • Lin, J., Cai, Y., Hu, X., Wang, H., Yuan, X., Zhang, Y., Timofte, R., Van Gool, L. (2022). Coarse-to-fine sparse transformer for hyperspectral image reconstruction. arXiv preprint arXiv:2203.04845.

  • Liu, Y., Yuan, X., Suo, J., Brady, D., & Dai, Q. (2019). Rank minimization for snapshot compressive imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(12), 2990–3006.

  • Llull, P., Liao, X., Yuan, X., Yang, J., Kittle, D., Carin, L., Sapiro, G., & Brady, D. J. (2013). Coded aperture compressive temporal imaging. Optics Express, 21(9), 10526–10545. https://doi.org/10.1364/OE.21.010526

  • Lu, R., Cheng, Z., Chen, B., Yuan, X. (2022). Motion-aware dynamic graph neural network for video compressive sensing. arXiv preprint arXiv:2203.00387.

  • Lu, S., Yuan, X., Shi, W. (2020). An integrated framework for compressive imaging processing on CAVs. In: ACM/IEEE Symposium on Edge Computing (SEC).

  • Ma, J., Liu, X., Shou, Z., Yuan, X. (2019). Deep tensor ADMM-net for snapshot compressive imaging. In: IEEE/CVF conference on computer vision (ICCV).

  • Meng, Z., Ma, J., Yuan, X. (2020). End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In: European conference on computer vision (ECCV).

  • Meng, Z., Qiao, M., Ma, J., Yu, Z., Xu, K., & Yuan, X. (2020). Snapshot multispectral endomicroscopy. Optics Letters 45(14), 3897–3900.

  • Meng, Z., Yu, Z., Xu, K., Yuan, X. (2021). Self-supervised neural networks for spectral snapshot compressive imaging. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pp. 2622–2631.


  • Metzler, C., Mousavi, A., Baraniuk, R. (2017). Learned d-amp: Principled neural network based compressive image recovery. In: I. Guyon, U.V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (eds.) Advances in neural information processing systems 30, pp. 1772–1783.

  • Metzler, C. A., Maleki, A., & Baraniuk, R. G. (2016). From denoising to compressed sensing. IEEE Transactions on Information Theory, 62(9), 5117–5144. https://doi.org/10.1109/tit.2016.2556683

  • Miao, X., Yuan, X., Pu, Y., Athitsos, V. (2019). \(\lambda \)-net: Reconstruct hyperspectral images from a snapshot measurement. In: IEEE/CVF conference on computer vision (ICCV).

  • Miao, X., Yuan, X., Wilford, P. (2019). Deep learning for compressive spectral imaging. In: Digital holography and three-dimensional imaging 2019, p. M3B.3. Optica Publishing Group.

  • Mousavi, A., Baraniuk, R.G. (2017). Learning to invert: Signal recovery via deep convolutional networks. In: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp. 2272–2276.

  • Pont-Tuset, J., Perazzi, F., Caelles, S., Arbeláez, P., Sorkine-Hornung, A., Van Gool, L. (2017). The 2017 Davis challenge on video object segmentation. arXiv preprint arXiv:1704.00675.

  • Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., Carin, L. (2016). Variational autoencoder for deep learning of images, labels and captions. In: Advances in neural information processing systems 29, pp. 2352–2360.

  • Qiao, M., Liu, X., & Yuan, X. (2020). Snapshot spatial-temporal compressive imaging. Optics Letters.

  • Qiao, M., Liu, X., & Yuan, X. (2021). Snapshot temporal compressive microscopy using an iterative algorithm with untrained neural networks. Optics Letters.

  • Qiao, M., Meng, Z., Ma, J., & Yuan, X. (2020). Deep learning for video compressive sensing. APL Photonics, 5(3), 030801.

  • Qiao, M., Sun, Y., Ma, J., Meng, Z., Liu, X., & Yuan, X. (2021). Snapshot coherence tomographic imaging. IEEE Transactions on Computational Imaging, 7, 624–637. https://doi.org/10.1109/TCI.2021.3089828.

  • Reddy, D., Veeraraghavan, A., Chellappa, R. (2011). P2C2: Programmable pixel compressive camera for high speed imaging. In: IEEE conference on computer vision and pattern recognition (CVPR), pp. 329–336. https://doi.org/10.1109/CVPR.2011.5995542

  • Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention (MICCAI), LNCS, vol. 9351, pp. 234–241. Springer.

  • Ryu, E. K., Liu, J., Wang, S., Chen, X., Wang, Z., & Yin, W. (2019). Plug-and-play methods provably converge with properly trained denoisers. In: International conference on machine learning (ICML).

  • Sinha, A., Lee, J., Li, S., & Barbastathis, G. (2017). Lensless computational imaging through deep learning. Optica, 4(9), 1117–1125.

  • Smith, T., & Guild, J. (1931). The C.I.E. colorimetric standards and their use. Transactions of the Optical Society, 33(3), 73.


  • Sreehari, S., Venkatakrishnan, S. V., Wohlberg, B., Buzzard, G. T., Drummy, L. F., Simmons, J. P., & Bouman, C. A. (2016). Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Transactions on Computational Imaging, 2(4), 408–423.


  • Venkatakrishnan, S.V., Bouman, C.A., Wohlberg, B. (2013). Plug-and-play priors for model based reconstruction. In: 2013 IEEE global conference on signal and information processing, pp. 945–948.

  • Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A. (2008). Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning.

  • Virmaux, A., Scaman, K. (2018). Lipschitz regularity of deep neural networks: Analysis and efficient estimation. In: NeurIPS.

  • Wagadarikar, A., John, R., Willett, R., & Brady, D. (2008). Single disperser design for coded aperture snapshot spectral imaging. Applied Optics, 47(10), B44–B51.


  • Wagadarikar, A. A., Pitsianis, N. P., Sun, X., & Brady, D. J. (2009). Video rate spectral imaging using a coded aperture snapshot spectral imager. Optics Express, 17(8), 6368–6388.


  • Wang, L., Cao, M., Yuan, X. (2023). Efficientsci: Densely connected network with space-time factorization for large-scale video snapshot compressive imaging. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 18477–18486.

  • Wang, L., Cao, M., Zhong, Y., & Yuan, X. (2022). Spatial-temporal transformer for video snapshot compressive imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2022.3225382

  • Wang, L., Sun, C., Fu, Y., Kim, M.H., Huang, H. (2019). Hyperspectral image reconstruction using a deep spatial-spectral prior. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 8024–8033. https://doi.org/10.1109/CVPR.2019.00822.

  • Wang, L., Wu, Z., Zhong, Y., & Yuan, X. (2022). Snapshot spectral compressive imaging reconstruction using convolution and contextual transformer. Photonics Research, 10(8), 1848–1858.

  • Wang, L., Zhang, T., Fu, Y., & Huang, H. (2019). Hyperreconnet: Joint coded aperture optimization and image reconstruction for compressive hyperspectral imaging. IEEE Transactions on Image Processing, 28(5), 2257–2270.


  • Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.

  • Wang, Z., Zhang, H., Cheng, Z., Chen, B., Yuan, X. (2021). Metasci: Scalable and adaptive reconstruction for video compressive sensing. In: IEEE/CVF conference on computer vision and pattern recognition (CVPR).

  • Wu, Z., Yang, C., Su, X., & Yuan, X. (2023). Adaptive deep PnP algorithm for video snapshot compressive imaging. International Journal of Computer Vision, pp. 1–18.

  • Xie, J., Xu, L., & Chen, E. (2012). Image denoising and inpainting with deep neural networks. Advances in Neural Information Processing Systems, 25, 341–349.


  • Yang, C., Zhang, S., Yuan, X. (2022). Ensemble learning priors driven deep unfolding for scalable video snapshot compressive imaging. In: Computer Vision–ECCV 2022: 17th European conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIII, pp. 600–618. Springer.

  • Yang, J., Liao, X., Yuan, X., Llull, P., Brady, D. J., Sapiro, G., & Carin, L. (2015). Compressive sensing by learning a Gaussian mixture model from measurements. IEEE Transaction on Image Processing, 24(1), 106–119.


  • Yang, J., Yuan, X., Liao, X., Llull, P., Sapiro, G., Brady, D. J., & Carin, L. (2014). Video compressive sensing using Gaussian mixture models. IEEE Transaction on Image Processing, 23(11), 4863–4878.


  • Yang, P., Kong, L., Liu, X., Yuan, X., & Chen, G. (2020). Shearlet enhanced snapshot compressive imaging. IEEE Transactions on Image Processing, 29, 6466–6481.


  • Yang, Y., Sun, J., Li, H., Xu, Z. (2016). Deep ADMM-net for compressive sensing MRI. In: Advances in Neural Information Processing Systems 29, pp. 10–18.

  • Yasuma, F., Mitsunaga, T., Iso, D., & Nayar, S. K. (2010). Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum. IEEE Transactions on Image Processing, 19(9), 2241–2253.

  • Yuan, X. (2016). Generalized alternating projection based total variation minimization for compressive sensing. In: 2016 IEEE international conference on image processing (ICIP), pp. 2539–2543.

  • Yuan, X., Brady, D. J., & Katsaggelos, A. K. (2021). Snapshot compressive imaging: Theory, algorithms, and applications. IEEE Signal Processing Magazine, 38(2), 65–88.


  • Yuan, X., Liu, Y., Suo, J., Dai, Q. (2020). Plug-and-play algorithms for large-scale snapshot compressive imaging. In: CVPR.

  • Yuan, X., Liu, Y., Suo, J., Durand, F., Dai, Q. (2021). Plug-and-play algorithms for video snapshot compressive imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Yuan, X., Llull, P., Liao, X., Yang, J., Brady, D.J., Sapiro, G., Carin, L. (2014). Low-cost compressive sensing for color video and depth. In: IEEE conference on computer vision and pattern recognition (CVPR), pp. 3318–3325. https://doi.org/10.1109/CVPR.2014.424.

  • Yuan, X., & Pang, S. (2016). Structured illumination temporal compressive microscopy. Biomedical Optics Express, 7, 746–758.


  • Yuan, X., Sun, Y., & Pang, S. (2017). Compressive video sensing with side information. Appl. Opt., 56(10), 2697–2704.


  • Yuan, X., Tsai, T. H., Zhu, R., Llull, P., Brady, D., & Carin, L. (2015). Compressive hyperspectral imaging with side information. IEEE Journal of Selected Topics in Signal Processing, 9(6), 964–976.


  • Zhang, J., Ghanem, B. (2018). Ista-net: Interpretable optimization-inspired deep network for image compressive sensing. In: CVPR, pp. 1828–1837.

  • Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7), 3142–3155. https://doi.org/10.1109/TIP.2017.2662206.

  • Zhao, Y., Zheng, S., Yuan, X. (2023). Deep equilibrium models for video snapshot compressive imaging. In: AAAI.

  • Zheng, S., Liu, Y., Meng, Z., Qiao, M., Tong, Z., Yang, X., Han, S., & Yuan, X. (2021). Deep plug-and-play priors for spectral snapshot compressive imaging. Photonics Research, 9(2), B18–B29.


  • Zheng, S., Wang, C., Yuan, X., Xin, H.L. (2021). Super-compression of large electron microscopy time series by deep compressive sensing learning. Patterns p. 100292.

  • Zheng, S., Yang, X., Yuan, X. (2022). Two-stage is enough: A concise deep unfolding reconstruction network for flexible video compressive sensing. arXiv preprint arXiv:2201.05810.


Acknowledgements

Xin Yuan was supported by the National Natural Science Foundation of China under Grant No. 62271414, Zhejiang Provincial Natural Science Foundation of China under Grant No. LR23F010001. Xin Yuan would like to thank Research Center for Industries of the Future (RCIF) at Westlake University for supporting this work.

Author information


Corresponding author

Correspondence to Xin Yuan.

Additional information

Communicated by Jiri Matas.


A: Proof of Theorem 2


All the steps of the proof of Theorem 3 (a more general version of Theorem 1) hold for the new class of sensing matrices as well, and in fact the proof becomes simpler. To see this, note that there are two key steps in the proof of Theorem 3 that use the i.i.d. Gaussianity assumption. The first is where we show that the random variables \(X_i\) defined in (44) are independent and bounded. Under the new model, the same is true and is more straightforward to establish: since \(R_i = \sum _{b=1}^B D_{i,i,b}^2 = B\) for all \(i = 1,\dots ,n\), we have \({{{\varvec{R}}}}={{{\varvec{H}}}}{{{\varvec{H}}}}^T=B {{\varvec{I}}}_n\), with \(n = n_x n_y\). Hence, the same conclusion holds with a shorter proof, and we can apply Hoeffding's inequality as before. The second step is where we bound \(\sigma _{\max }({{{\varvec{H}}}}^T {{{\varvec{R}}}}^{-1}{{{\varvec{H}}}})\). Again, the same result holds with a shorter proof, since \({{{\varvec{R}}}}\) is no longer random.
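The identity \({{{\varvec{R}}}}={{{\varvec{H}}}}{{{\varvec{H}}}}^T=B{{\varvec{I}}}_n\) used above can be checked numerically; the snippet below is an illustrative sketch with a hypothetical toy sensing matrix built from \(\pm 1\) diagonal masks (so \(D_{i,i,b}^2 = 1\) for every entry), not the paper's actual masks.

```python
import numpy as np

# Toy check of R = H H^T = B I_n for H = [D_1, ..., D_B] with
# +/-1 diagonal masks, so D_{i,i,b}^2 = 1 and R_i = B for all i.
rng = np.random.default_rng(1)
n, B = 9, 4                      # n = n_x * n_y pixels, B frames
D = rng.choice([-1.0, 1.0], size=(B, n))
H = np.hstack([np.diag(D[b]) for b in range(B)])
R = H @ H.T
assert np.allclose(R, B * np.eye(n))

# Since R = B I_n, H^T R^{-1} H = H^T H / B. Its nonzero eigenvalues
# all equal 1, because (H^T H)^2 = H^T (B I_n) H = B (H^T H), so
# sigma_max(H^T R^{-1} H) = 1 for this mask model.
M = H.T @ H / B
assert np.isclose(np.linalg.norm(M, ord=2), 1.0)
```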


Cite this article

Meng, Z., Yuan, X. & Jalali, S. Deep Unfolding for Snapshot Compressive Imaging. Int J Comput Vis 131, 2933–2958 (2023). https://doi.org/10.1007/s11263-023-01844-4

