
High-Quality Video Denoising for Motion-Based Exposure Control

Chapter in Mobile Cloud Visual Media Computing

Abstract

New digital cameras, such as the Canon SD1100 and Nikon COOLPIX S8100, have an auto-exposure (AE) function based on motion estimation. The motion estimate is used to set a short exposure and a high ISO for frames with fast motion, thereby avoiding most motion blur in recorded videos. This AE function largely turns video enhancement into a denoising problem. This chapter studies how to achieve high-quality video denoising in the context of motion-based exposure control. Unlike previous denoising work that either avoids motion estimation, such as BM3D [1], or assumes reliable motion estimation as input, such as Liu and Freeman [2], our method evaluates the reliability of the optical flow at each pixel and uses the "lifespan" of reliable flow trajectories as a weight to integrate spatial denoising and temporal denoising. This weighted combination makes the method robust to optical flow failures over regions with repetitive texture or uniform color, and it combines the advantages of both spatial and temporal denoising. Our method also exploits high-quality frames in a sequence to effectively enhance noisier frames. In experiments on both synthetic and real videos, our method outperforms the state of the art [1, 2].
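
The lifespan-weighted combination of spatial and temporal estimates described above can be illustrated with a minimal per-pixel blending sketch in Python. This is not the authors' implementation: the function and parameter names (blend_denoised, max_lifespan) and the linear weighting are illustrative assumptions only.

```python
import numpy as np

def blend_denoised(spatial_est, temporal_est, lifespan, max_lifespan=10):
    """Blend per-pixel spatial and temporal denoising estimates.

    Pixels whose optical-flow trajectories remain reliable for many
    frames (large lifespan) lean on the temporal estimate; pixels where
    flow fails quickly (e.g., repetitive texture or uniform color) fall
    back to the spatial estimate.

    spatial_est, temporal_est: float arrays of shape (H, W, C)
    lifespan: integer array of shape (H, W), frames of reliable flow
    """
    # Map lifespan to a weight in [0, 1]; the linear mapping is an
    # illustrative choice, not the weighting scheme used in the chapter.
    w = np.clip(lifespan.astype(np.float32) / float(max_lifespan), 0.0, 1.0)
    w = w[:, :, None]  # broadcast the weight over color channels
    return w * temporal_est + (1.0 - w) * spatial_est
```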


Notes

  1. For example, although it is hard to hold a camera perfectly still for a long period, it is also rare for our hands to shake a camera continuously; shaky intervals are always intermingled with steady moments.

References

  1. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)

  2. Liu, C., Freeman, W.T.: A high-quality video denoising algorithm based on reliable motion estimation. In: Proceedings of the 11th European Conference on Computer Vision (ECCV), Part III. Springer, Berlin (2010). http://portal.acm.org/citation.cfm?id=1927006.1927061

  3. Dabov, K., Foi, A., Egiazarian, K.: Video denoising by sparse 3D transform-domain collaborative filtering. In: Proceedings of the 15th European Signal Processing Conference (EUSIPCO) (2007)

  4. Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)

  5. Roth, S., Black, M.J.: Fields of experts: a framework for learning image priors. In: CVPR, vol. 2, pp. 860–867 (2005)

  6. Elad, M., Aharon, M.: Image denoising via learned dictionaries and sparse representation. In: CVPR, pp. 895–900 (2006)

  7. Lyu, S., Simoncelli, E.P.: Statistical modeling of images with fields of Gaussian scale mixtures. In: NIPS (2006)

  8. Tappen, M.F., Liu, C., Adelson, E.H., Freeman, W.T.: Learning Gaussian conditional random fields for low-level vision. In: CVPR, pp. 1–8. IEEE Computer Society (2007). http://doi.ieeecomputersociety.org/10.1109/CVPR.2007.382979

  9. Foi, A., Katkovnik, V., Egiazarian, K.: Pointwise shape-adaptive DCT for high-quality denoising and deblocking of grayscale and color images. IEEE Trans. Image Process. 16(5), 1395–1411 (2007)

  10. Bennett, E.P., McMillan, L.: Video enhancement using per-pixel virtual exposures. In: SIGGRAPH, pp. 845–852. ACM (2005). http://doi.acm.org/10.1145/1186822.1073272

  11. Chen, J., Tang, C.K.: Spatio-temporal Markov random field for video denoising. In: CVPR (2007)

  12. Vaish, V., Levoy, M., Szeliski, R., Zitnick, C.L., Kang, S.B.: Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures. In: CVPR, vol. 2. IEEE Computer Society (2006). http://doi.ieeecomputersociety.org/10.1109/CVPR.2006.244

  13. Heo, Y.S., Lee, K.M., Lee, S.U.: Simultaneous depth reconstruction and restoration of noisy stereo images using non-local pixel distribution. In: CVPR, pp. 1–8. IEEE Computer Society (2007)

  14. Zhang, L., Vaddadi, S., Jin, H., Nayar, S.: Multiple view image denoising. In: CVPR. IEEE Computer Society (2009). http://doi.ieeecomputersociety.org/10.1109/CVPRW.2009.5206836

  15. Bhat, P., Zitnick, C.L., Snavely, N., Agarwala, A., Agrawala, M., Curless, B., Cohen, M., Kang, S.B.: Using photographs to enhance videos of a static scene. In: Kautz, J., Pattanaik, S. (eds.) Proceedings of the Eurographics Symposium on Rendering (2007). http://www.cs.washington.edu/homes/pro/papers/videoEnhancement/videoEnhancement.htm

  16. Schubert, F., Mikolajczyk, K.: Combining high-resolution images with low-quality videos. In: BMVC, pp. 1–10 (2008). http://www.visionbib.com/bibliography/match-pl503.html#TT48849

  17. Gupta, A., Bhat, P., Dontcheva, M., Curless, B., Deussen, O., Cohen, M.: Enhancing and experiencing spacetime resolution with videos and stills. In: International Conference on Computational Photography (ICCP) (2009). http://grail.cs.washington.edu/projects/enhancing-spacetime/

  18. Watanabe, K., Iwai, Y., Nagahara, H., Yachida, M., Suzuki, T.: Video synthesis with high spatio-temporal resolution using motion compensation and spectral fusion. IEICE Trans. Inf. Syst. E89-D, 2186–2196 (2006). http://portal.acm.org/citation.cfm?id=1184860.1185056

  19. Nagahara, H., Matsunobu, T., Iwai, Y., Yachida, M., Suzuki, T.: High-resolution video generation using morphing. In: ICPR, vol. 4, pp. 338–341 (2006). doi:10.1109/ICPR.2006.626

  20. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. In: ICCV (2007)

  21. Liu, C.: Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA (2009)

Author information


Correspondence to Li Zhang.



Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Zhang, L., Portz, T., Jiang, H. (2015). High-Quality Video Denoising for Motion-Based Exposure Control. In: Hua, G., Hua, XS. (eds) Mobile Cloud Visual Media Computing. Springer, Cham. https://doi.org/10.1007/978-3-319-24702-1_2


  • DOI: https://doi.org/10.1007/978-3-319-24702-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24700-7

  • Online ISBN: 978-3-319-24702-1

