Abstract:
In the field of image and video coding, an effective compression algorithm should remove not only the spatial, temporal, and statistical redundancy but also the perceptual redundancy in the pictures. Many perceptual models have been proposed in the literature to work with video coding systems and obtain significant bit-rate reduction without perceptual distortion. A critical issue with these perceptual models is their high computational complexity, which makes them difficult to apply in real-time applications. To alleviate this problem, this paper presents the hardware architecture design of a perception engine for video coding applications. The adopted perceptual models include the structural similarity model, visual attention models, the just-noticeable-distortion model, and the contrast sensitivity function. These models are further developed and modified to make them suitable for hardware implementation. Macroblock-based processing with a data reuse scheme is used to save system bandwidth, and a parallel processing architecture in which the visual models share the on-chip memory and buffers is developed to reduce the chip area. Subjective experiment results show that the adopted models achieve about 7%-41% bit-rate saving in the QP range of 24-36 without visual quality degradation. The perception engine has been taped out in 0.18 μm technology; the chip size is about 3.3 × 3.3 mm², the power consumption is 83.9 mW, and the processing capability is HDTV 720p.
Published in: IEEE Transactions on Multimedia (Volume: 13, Issue: 6, December 2011)
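
For context (the abstract does not spell this out), the structural similarity model it refers to is conventionally the SSIM index of Wang et al.; a minimal statement of the standard index, assuming the usual local-statistics formulation rather than the hardware-oriented modification developed in the paper, is:

\[
\mathrm{SSIM}(x, y) =
  \frac{\bigl(2\mu_x \mu_y + C_1\bigr)\bigl(2\sigma_{xy} + C_2\bigr)}
       {\bigl(\mu_x^2 + \mu_y^2 + C_1\bigr)\bigl(\sigma_x^2 + \sigma_y^2 + C_2\bigr)}
\]

Here \(\mu_x, \mu_y\) are local means, \(\sigma_x^2, \sigma_y^2, \sigma_{xy}\) are local variances and covariance of the two image patches, and \(C_1, C_2\) are small stabilizing constants, typically \(C_1 = (0.01L)^2\) and \(C_2 = (0.03L)^2\) with \(L\) the pixel dynamic range (255 for 8-bit content).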