
A Luminance-aware Model of Judder Perception

Published: 08 July 2019

Abstract

The perceived discrepancy between continuous motion as seen in nature and frame-by-frame exhibition on a display, sometimes termed judder, is an integral part of video presentation. Over time, content creators have developed rules and guidelines for maintaining a desirable cinematic look under the restrictions imposed by display technology without incurring prohibitive judder. With the advent of novel displays capable of high brightness, contrast, and frame rates, these guidelines no longer suffice to present audiences with a uniform viewing experience. In this work, we analyze the main factors behind perceptual motion artifacts in digital presentation and gather psychophysical data to build a model of judder perception. Our model enables applications such as matching perceived motion artifacts to a traditionally desirable level and maintaining a cinematic motion look.
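A minimal sketch of the core phenomenon the abstract describes: judder arises because sampled motion moves a tracked object in discrete steps rather than continuously, and the step size shrinks as the frame rate rises. The function and the example speed below are illustrative assumptions, not the paper's model.

```python
# Hypothetical illustration: per-frame displacement of a moving object,
# the basic quantity behind sampled-motion (judder) artifacts.
# A larger step per frame means coarser sampled motion on screen.

def per_frame_step_deg(speed_deg_per_s: float, frame_rate_hz: float) -> float:
    """Angular distance (degrees of visual angle) an object travels
    between successive frames at the given frame rate."""
    return speed_deg_per_s / frame_rate_hz

speed = 12.0  # deg/s, an assumed moderate pan speed
for fps in (24, 48, 60, 120):
    print(f"{fps:>3} fps -> {per_frame_step_deg(speed, fps):.2f} deg/frame")
# prints:
#  24 fps -> 0.50 deg/frame
#  48 fps -> 0.25 deg/frame
#  60 fps -> 0.20 deg/frame
# 120 fps -> 0.10 deg/frame
```

This only captures the geometric sampling step; the paper's contribution is modeling how the *perception* of that step also depends on luminance and contrast, which this sketch does not attempt.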

Supplementary Material

chapiro (chapiro.zip)
Supplemental movie, appendix, image, and software files for "A Luminance-aware Model of Judder Perception"




Published In

ACM Transactions on Graphics, Volume 38, Issue 5
October 2019
191 pages
ISSN:0730-0301
EISSN:1557-7368
DOI:10.1145/3341165

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 08 July 2019
Accepted: 01 May 2019
Revised: 01 May 2019
Received: 01 September 2018
Published in TOG Volume 38, Issue 5


Author Tags

  1. frame rate
  2. brightness
  3. contrast
  4. display technologies
  5. high dynamic range
  6. judder
  7. motion perception
  8. speed perception

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024)BiPMAP: a toolbox for predicting perceived motion artifacts on modern displaysOptics Express10.1364/OE.51098532:7(12181)Online publication date: 19-Mar-2024
  • (2024)Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front‐Facing ViewsComputer Graphics Forum10.1111/cgf.1503643:2Online publication date: 27-Apr-2024
  • (2024)Judder Modelling Framework with Perceptual Quality Score Prediction for HDR Videos2024 IEEE International Conference on Visual Communications and Image Processing (VCIP)10.1109/VCIP63160.2024.10849796(1-5)Online publication date: 8-Dec-2024
  • (2024)Variable Frame Timing Affects Perception of Smoothness in First-Person Gaming2024 IEEE Conference on Games (CoG)10.1109/CoG60054.2024.10645568(1-8)Online publication date: 5-Aug-2024
  • (2023)Foveated Walking: Translational Ego-Movement and Foveated RenderingACM Symposium on Applied Perception 202310.1145/3605495.3605798(1-8)Online publication date: 5-Aug-2023
  • (2023)Subjective video quality assessment of immersive HDR content on head-mounted displays2023 IEEE International Conference on Visual Communications and Image Processing (VCIP)10.1109/VCIP59821.2023.10402754(1-5)Online publication date: 4-Dec-2023
  • (2023)Perceptually-guided Dual-mode Virtual Reality System For Motion-adaptive DisplayIEEE Transactions on Visualization and Computer Graphics10.1109/TVCG.2023.324709729:5(2249-2257)Online publication date: May-2023
  • (2022)Assessing the Effect of the Refresh Rate of a Device on Various Motion Stimulation Frequencies Based on Steady-State Motion Visual Evoked PotentialsFrontiers in Neuroscience10.3389/fnins.2021.75767915Online publication date: 7-Jan-2022
  • (2022)Dark stereoACM Transactions on Graphics10.1145/3528223.353013641:4(1-12)Online publication date: 22-Jul-2022
  • (2022)Face deblurring using dual camera fusion on mobile phonesACM Transactions on Graphics10.1145/3528223.353013141:4(1-16)Online publication date: 22-Jul-2022
