
Video Temporal Super-resolution Based on Self-similarity

Chapter

Part of the book series: Advances in Computer Vision and Pattern Recognition (ACVPR)

Abstract

We introduce a method for producing a temporally super-resolved video from a single video by exploiting the self-similarity that exists in its spatio-temporal domain. Temporal super-resolution is an inherently ill-posed problem because infinitely many high-temporal-resolution frame sequences can produce the same low-temporal-resolution frame. The key idea of this work is to resolve this ambiguity by exploiting self-similarity: similar motion-blur appearances often reappear across different temporal resolutions of the same video. Several existing methods generate plausible intermediate frames by interpolating the frames of a captured video whose exposure time is shorter than the inter-frame period. In contrast, our method can increase the temporal resolution of a video whose frame exposure time equals the inter-frame period, for instance by resolving each frame into two frames.
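
The observation model behind this setting can be sketched in a few lines; this is a minimal illustration assuming grayscale frames stored as NumPy float arrays, and the function name is ours, not the chapter's.

    # Simulate a lower-temporal-resolution, full-exposure video: each output
    # frame is the average of `factor` consecutive input frames, i.e. it
    # integrates light over the whole inter-frame period.
    import numpy as np

    def temporally_downsample(frames, factor=2):
        n = (len(frames) // factor) * factor
        stacked = np.stack(frames[:n]).reshape(-1, factor, *frames[0].shape)
        return list(stacked.mean(axis=1))

Temporal super-resolution inverts this mapping, recovering factor frames per observed frame; because many frame sequences average to the same frame, the inversion is ill-posed, and the chapter constrains it with self-similar motion-blur appearances found across temporal scales of the same video.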


Notes

  1.

    We confirmed that several consumer cameras automatically selected full-exposure mode for underexposed outdoor and indoor scenes.

  2.

    We assume that the gain of V_2 is half the gain of V_1. Therefore, an image frame of V_2 is equal to the average of the two corresponding image frames of V_1 rather than their sum.
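
    As a concrete instance (the symbols are illustrative, not the chapter's notation): if V_1 has frames I_1, I_2, …, then each full-exposure frame B_t of V_2 satisfies

        B_t = \tfrac{1}{2}\,( I_{2t-1} + I_{2t} ), \qquad t = 1, 2, \dots

    that is, halving the gain turns the photometric sum over the doubled exposure time into an average and keeps pixel values in the same range as V_1.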

  3.

    The algorithm for increasing the temporal resolution by a factor greater than two can be formulated in a straightforward manner.
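
    For example, to increase the temporal resolution by a factor of k, each observed full-exposure frame is constrained to be the average of k latent frames (again with illustrative notation):

        B_t = \frac{1}{k} \sum_{j=1}^{k} I_{k(t-1)+j}, \qquad t = 1, 2, \dots

    and the reconstruction recovers I_{k(t-1)+1}, …, I_{kt} from each B_t.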

  4.

    Strictly speaking, this constraint assumes a linear camera response function (CRF). However, because adjacent image frames usually have similar pixel values, it is approximately satisfied when the CRF is approximated by a piecewise-linear function. In addition, one could calibrate the CRF and convert the pixel values in advance.
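
    A hedged sketch of the pre-linearization this note mentions, assuming a simple gamma-type CRF purely for illustration (the chapter does not prescribe a particular model):

        import numpy as np

        def linearize(frame, gamma=2.2):
            # Invert an assumed gamma-type camera response so pixel values
            # become (approximately) proportional to scene radiance, which
            # makes the frame-averaging constraint hold exactly.
            return np.clip(frame / 255.0, 0.0, 1.0) ** gamma

        def delinearize(frame, gamma=2.2):
            # Map linearized values back through the assumed response for
            # display or comparison with the original 8-bit frames.
            return 255.0 * np.clip(frame, 0.0, 1.0) ** (1.0 / gamma)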

Author information

Corresponding author

Correspondence to Mihoko Shimano.

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Shimano, M., Okabe, T., Sato, I., Sato, Y. (2013). Video Temporal Super-resolution Based on Self-similarity. In: Farinella, G., Battiato, S., Cipolla, R. (eds) Advanced Topics in Computer Vision. Advances in Computer Vision and Pattern Recognition. Springer, London. https://doi.org/10.1007/978-1-4471-5520-1_14

  • DOI: https://doi.org/10.1007/978-1-4471-5520-1_14

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5519-5

  • Online ISBN: 978-1-4471-5520-1
