
Multi-Exposure Motion Estimation Based on Deep Convolutional Networks

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

In motion estimation, illumination change is a persistent obstacle that often causes severe degradation in optical flow computation. The root cause is that most estimation methods fail to formalize a unified definition, in either the color or the gradient domain, for diverse environmental changes. In this paper, we propose a new solution based on deep convolutional networks to address this key issue. Our idea is to train deep convolutional networks to represent the complex motion features that arise under illumination change, and then to predict the final optical flow fields. To this end, we construct a training dataset of multi-exposure image pairs by applying a series of non-linear adjustments to traditional optical flow datasets. Our multi-exposure flow network (MEFNet) model consists of three main components: a low-level feature network, a fusion feature network, and a motion estimation network. The first two components form the contracting part of the model, which extracts and represents multi-exposure motion features; the third component is the expanding part, which learns and predicts high-quality optical flow. Compared with many state-of-the-art methods, our approach overcomes the obstacle of illumination change and yields optical flow results with competitive accuracy and time efficiency. Its good performance is also demonstrated in multi-exposure video applications such as HDR (high dynamic range) composition and flicker removal.
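The dataset-construction step described above, deriving multi-exposure training pairs by non-linear adjustment of frames from standard optical flow datasets, can be sketched as follows. This is an illustrative approximation, not the authors' exact procedure: the `simulate_exposure` helper and the gain/gamma ranges are assumptions chosen for demonstration.

```python
import numpy as np

def simulate_exposure(image, gain, gamma):
    """Apply a simple non-linear exposure change to an image in [0, 1]:
    scale by a gain, clip, then gamma-correct. This mimics the kind of
    non-linear adjustment the paper uses to synthesize exposure variation."""
    return np.clip(image * gain, 0.0, 1.0) ** gamma

def make_multi_exposure_pair(frame1, frame2, rng=None):
    """Expose two consecutive frames differently, so a network trained on
    the pair must learn illumination-invariant motion features."""
    rng = np.random.default_rng() if rng is None else rng
    # Illustrative ranges: gains in [0.5, 2.0], gammas in [0.7, 1.4].
    g1, g2 = rng.uniform(0.5, 2.0, size=2)
    y1, y2 = rng.uniform(0.7, 1.4, size=2)
    return simulate_exposure(frame1, g1, y1), simulate_exposure(frame2, g2, y2)
```

Because the ground-truth flow between the two frames is unchanged by these photometric adjustments, the original flow labels of the source dataset can be reused for the synthesized multi-exposure pairs.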



Author information

Correspondence to Zhi-Feng Xie.

Electronic supplementary material

ESM 1 (PDF 251 kb)

Cite this article

Xie, ZF., Guo, YC., Zhang, SH. et al. Multi-Exposure Motion Estimation Based on Deep Convolutional Networks. J. Comput. Sci. Technol. 33, 487–501 (2018). https://doi.org/10.1007/s11390-018-1833-4
