Abstract:
4D effects are physical effects simulated in sync with videos, movies, and games to augment the events occurring in a story or a virtual world. Types of 4D effects commonly used for immersive media include seat motion, vibration, flash, wind, water, scent, thunderstorm, snow, and fog. Currently, the recognition of physical effects from a video is mainly conducted by human experts. Although 4D effects are promising for delivering immersive experience and entertainment, this manual production has been the main obstacle to faster and wider adoption of 4D effects. In this paper, we utilize pretrained Convolutional Neural Network (CNN) models to extract local visual features and propose a new representation method that combines the extracted features into video-level features. Classification is conducted with a Support Vector Machine (SVM). Comprehensive experiments are performed to investigate different CNN architectures and different types of features for the 4D effect classification task, and to compare the baseline average pooling method with our proposed video-level representation. Our framework outperforms the baseline by up to 2-3% in terms of mean average precision (mAP).
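The pipeline described above (per-frame CNN features pooled into a video-level vector, then classified with an SVM) can be sketched as follows. This is a minimal illustration of the baseline average-pooling variant only, not the paper's proposed representation; the frame features, dimensions, and labels here are hypothetical stand-ins for features extracted by a pretrained CNN.

```python
import numpy as np
from sklearn.svm import LinearSVC

def video_level_feature(frame_features):
    # Baseline: average-pool per-frame CNN features (shape: frames x dim)
    # into a single video-level vector of length dim.
    return frame_features.mean(axis=0)

# Hypothetical data: 20 videos, 30 frames each, 512-dim features per frame,
# as if produced by a pretrained CNN's intermediate layer.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(30, 512)) for _ in range(20)]
# Hypothetical binary labels, e.g. whether a "vibration" effect is present.
labels = np.array([i % 2 for i in range(20)])

# Pool each video to one vector and train a linear SVM on the result.
X = np.stack([video_level_feature(v) for v in videos])
clf = LinearSVC().fit(X, labels)
predictions = clf.predict(X)
```

In practice one classifier would be trained per effect type (motion, wind, flash, ...) and ranked predictions scored with mean average precision, as in the paper.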
Date of Conference: 17-20 September 2017
Date Added to IEEE Xplore: 22 February 2018
Electronic ISSN: 2381-8549