Paper
Spatial and temporal models for texture-based video coding
29 January 2007
Fengqing Zhu, Ka Ki Ng, Golnaz Abdollahian, and Edward J. Delp
Proceedings Volume 6508, Visual Communications and Image Processing 2007; 650806 (2007) https://doi.org/10.1117/12.705068
Event: Electronic Imaging 2007, San Jose, CA, United States
Abstract
In this paper, we investigate spatial and temporal models for texture analysis and synthesis. The goal is to use these models to increase the coding efficiency of video sequences containing textures. The models are used to segment texture regions in a frame at the encoder and to synthesize the textures at the decoder. These methods can be incorporated into a conventional video coder (e.g., H.264), where the texture regions are not coded in the usual manner; instead, texture model parameters are sent to the decoder as side information. We show that this approach can reduce the data rate by as much as 15%.
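To make the encoder/decoder split concrete, below is a minimal sketch (not the authors' implementation) of the idea described in the abstract: blocks classified as texture are replaced by model parameters sent as side information, and the decoder synthesizes those blocks instead of decoding them. The 16x16 block size, the variance-based texture test, and the per-block Gaussian model are illustrative assumptions; the paper's actual spatial and temporal texture models and its H.264 integration are more elaborate.

import numpy as np

BLOCK = 16           # block size in pixels (illustrative assumption)
VAR_THRESHOLD = 100  # texture-detection threshold (illustrative assumption)

def encode(frame):
    """Split a grayscale frame into conventionally coded blocks and
    texture side information (block position -> model parameters)."""
    coded_blocks, side_info = {}, {}
    h, w = frame.shape
    for y in range(0, h - h % BLOCK, BLOCK):
        for x in range(0, w - w % BLOCK, BLOCK):
            block = frame[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
            if block.var() > VAR_THRESHOLD:
                # "Texture" block: transmit only model parameters
                # (here a toy Gaussian model: mean and std).
                side_info[(y, x)] = (block.mean(), block.std())
            else:
                # Non-texture block: coded in the usual manner
                # (a real coder would hand this to H.264).
                coded_blocks[(y, x)] = block
    return coded_blocks, side_info

def decode(coded_blocks, side_info, shape, seed=0):
    """Reassemble the frame; texture blocks are synthesized from the
    transmitted parameters rather than decoded."""
    rng = np.random.default_rng(seed)
    frame = np.zeros(shape)
    for (y, x), block in coded_blocks.items():
        frame[y:y + BLOCK, x:x + BLOCK] = block
    for (y, x), (mu, sigma) in side_info.items():
        frame[y:y + BLOCK, x:x + BLOCK] = rng.normal(mu, sigma, (BLOCK, BLOCK))
    return np.clip(frame, 0, 255)

# Toy frame: a flat left half and a noisy ("textured") right half.
rng = np.random.default_rng(1)
frame = np.full((64, 64), 128.0)
frame[:, 32:] = rng.integers(0, 256, (64, 32))
coded, side = encode(frame)
reconstruction = decode(coded, side, frame.shape)
print(f"{len(side)} of {len(side) + len(coded)} blocks sent as side information")

In a real coder the side information would itself be entropy coded; the data-rate saving the abstract reports comes from the model parameters being far smaller than the conventionally coded data for the texture regions.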
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Fengqing Zhu, Ka Ki Ng, Golnaz Abdollahian, and Edward J. Delp "Spatial and temporal models for texture-based video coding", Proc. SPIE 6508, Visual Communications and Image Processing 2007, 650806 (29 January 2007); https://doi.org/10.1117/12.705068
CITATIONS
Cited by 9 scholarly publications and 4 patents.
KEYWORDS
Motion models, Image segmentation, Video, Video coding, Motion estimation, Computer programming, Quantization
