Abstract
In this paper, we introduce an approach to removing flicker in videos, where the flicker arises from applying image-based processing methods to the original video frame by frame. First, we propose a multi-frame video flicker removal method that reconstructs each flickering frame from multiple temporally corresponding frames. Compared with traditional methods, which reconstruct a flickering frame from a single adjacent frame, reconstruction from multiple temporally corresponding frames reduces warping inaccuracy. We then optimize our method in two respects. On the one hand, we detect flickering frames in the video sequence using temporal consistency metrics; reconstructing only the flickering frames greatly accelerates the algorithm. On the other hand, we use only the preceding temporally corresponding frames to reconstruct each output frame. We further accelerate flicker removal with a GPU implementation. Qualitative experimental results demonstrate the effectiveness of the proposed method. With these algorithmic optimizations and GPU acceleration, our method also outperforms traditional video temporal coherence methods in running time.
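The two-stage pipeline described above (detect flickering frames via a temporal consistency metric, then reconstruct them from preceding frames) can be sketched as follows. This is a simplified illustration, not the paper's actual method: it uses a crude mean-intensity jump as the consistency metric and a plain average of preceding frames in place of warped temporal correspondences; the function names and the `threshold` and `window` parameters are hypothetical.

```python
import numpy as np

def detect_flicker(frames, threshold=10.0):
    """Flag frames whose mean intensity jumps sharply from the previous
    frame (a stand-in for a proper temporal consistency metric)."""
    means = [float(f.mean()) for f in frames]
    flags = [False]  # first frame has no predecessor to compare against
    for i in range(1, len(frames)):
        flags.append(abs(means[i] - means[i - 1]) > threshold)
    return flags

def reconstruct(frames, flags, window=2):
    """Replace each flagged frame with the average of up to `window`
    preceding output frames (a stand-in for reconstruction from multiple
    warped, temporally corresponding frames)."""
    out = [f.copy() for f in frames]
    for i, is_flicker in enumerate(flags):
        if not is_flicker:
            continue
        refs = [out[j] for j in range(max(0, i - window), i)]
        if refs:  # average the preceding (already corrected) frames
            out[i] = np.mean(refs, axis=0).astype(frames[i].dtype)
    return out
```

Because only flagged frames are reconstructed, temporally stable frames pass through untouched, which mirrors the acceleration strategy of processing flickering frames only; using already-corrected preceding frames as references mirrors the causal (previous-frames-only) reconstruction order.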
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grant 61672228, Grant 61872241, Grant 61572316, and Grant 61370174, in part by the National Key Research and Development Program of China under Grant 2017YFE0104000 and Grant 2016YFC1300302, in part by the Macau Science and Technology Development Fund under Grant 0027/2018/A1, in part by the Science and Technology Commission of Shanghai Municipality under Grant 18410750700, Grant 17411952600, and Grant 16DZ0501100, and in part by the Shanghai Automotive Industry Science and Technology Development Foundation under Grant 1837.
Cite this article
Li, C., Chen, Z., Sheng, B. et al. Video flickering removal using temporal reconstruction optimization. Multimed Tools Appl 79, 4661–4679 (2020). https://doi.org/10.1007/s11042-019-7413-y