An approach to automatic creation of cinemagraphs

ABSTRACT
A cinemagraph is a new type of medium that infuses a static image with the dynamics of one particular region. It is in many ways intermediate between a photograph and a video, and has a number of attractive potential applications, such as the creation of dynamic scenes for games and interactive environments. However, creating cinemagraphs is time consuming and requires a certain level of proficiency in photo editing techniques. In this paper, we present a fully automatic approach that creates cinemagraphs from video sequences. Specifically, we view cinemagraph construction as a constrained optimization problem that seeks the sub-volume of the video with the maximum cumulative flow field magnitude. The problem can be efficiently solved by a branch-and-bound search scheme. A user survey is conducted to understand user preferences and to demonstrate the performance of the proposed approach. The findings of this study should inform the design choices of an easy and versatile authoring tool for cinemagraphs.
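To make the optimization concrete, the core sub-problem can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a per-voxel score (e.g., flow magnitude minus a bias so that static regions score negatively) and finds the axis-aligned space-time sub-volume with maximum cumulative score. An integral volume gives O(1) sum queries; the exhaustive loops below enumerate all boxes for clarity, whereas a branch-and-bound scheme prunes this search space to make it efficient.

```python
import numpy as np

def best_subvolume(score, min_size=1):
    """Find the axis-aligned sub-volume of `score` (shape T x H x W)
    with the maximum cumulative score, via brute-force enumeration.
    Illustrative only: a branch-and-bound search over the same
    objective avoids visiting every box explicitly."""
    T, H, W = score.shape
    # 3-D integral volume: I[t, y, x] = sum of score[:t, :y, :x]
    I = np.zeros((T + 1, H + 1, W + 1))
    I[1:, 1:, 1:] = score.cumsum(0).cumsum(1).cumsum(2)

    def vol_sum(t0, t1, y0, y1, x0, x1):
        # Inclusion-exclusion over the 8 corners of the integral volume
        return (I[t1, y1, x1] - I[t0, y1, x1] - I[t1, y0, x1] - I[t1, y1, x0]
                + I[t0, y0, x1] + I[t0, y1, x0] + I[t1, y0, x0] - I[t0, y0, x0])

    best, best_box = -np.inf, None
    for t0 in range(T):
        for t1 in range(t0 + min_size, T + 1):
            for y0 in range(H):
                for y1 in range(y0 + min_size, H + 1):
                    for x0 in range(W):
                        for x1 in range(x0 + min_size, W + 1):
                            s = vol_sum(t0, t1, y0, y1, x0, x1)
                            if s > best:
                                best, best_box = s, (t0, t1, y0, y1, x0, x1)
    return best, best_box

# Toy example: a 2x3x3 block of dynamic voxels (+1) in a static volume (-1).
score = -np.ones((4, 5, 5))
score[1:3, 1:4, 2:5] = 1.0
value, box = best_subvolume(score)  # box covers exactly the dynamic block
```

Because flow magnitudes are nonnegative, some per-voxel penalty (the bias above) or a size constraint is needed; otherwise the trivial optimum is the whole video volume.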