An Efficient Deep Learning Accelerator Architecture for Compressed Video Analysis


Abstract:

Previous neural network accelerators tailored to video analysis accept only RGB/YUV-domain data, so video, which is typically compressed before being transmitted from edge sensors, must first be decompressed. A compressed-video processing accelerator can eliminate this decoding overhead and gain further speedup by operating on more compact input data. This work proposes a novel deep learning accelerator architecture, Alchemist, which predicts results directly from the compressed video bitstream instead of reconstructing the full RGB frames. By exploiting the motion vectors and critical blocks extracted as metadata from the bitstream, Alchemist achieves a remarkable 5× performance speedup with negligible accuracy loss. Nevertheless, we find that video coded by standard algorithms such as H.264 is not well suited to direct manipulation, owing to its diverse compression structures: although the accelerator need not recover every RGB frame, it must still parse the entire compressed bitstream to locate reference frames and extract the useful metadata. Additional optimizations become possible if the video codec is co-designed with the proposed compressed-video analysis. Therefore, to resolve the mismatch between current video coding algorithms such as H.264 and neural-network-based video analysis, we propose a specialized coding strategy that generates compressed video bitstreams better suited to both transmission and analysis, which further simplifies the decoding stage of video analysis and achieves significant storage reduction.
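
The abstract does not disclose Alchemist's exact dataflow. As a rough illustration of the general compressed-domain technique it builds on, the Python/NumPy sketch below (the function name warp_features, the 16-pixel block size, and the random toy feature map are all assumptions, not the paper's design) shows how per-macroblock motion vectors decoded from a bitstream could approximate a P-frame's feature map by shifting blocks of the features already computed for the preceding intra-coded frame, so that full inference runs only on I-frames.

import numpy as np

def warp_features(prev_feat, motion_vectors, block=16):
    """Approximate a P-frame's feature map by shifting blocks of the
    previously computed feature map along the decoded motion vectors."""
    H, W, _ = prev_feat.shape
    warped = np.empty_like(prev_feat)
    for by in range(H // block):
        for bx in range(W // block):
            dy, dx = motion_vectors[by, bx]
            # The motion vector points at the matching block in the
            # reference frame; clamp so the source block stays in bounds.
            sy = int(np.clip(by * block + dy, 0, H - block))
            sx = int(np.clip(bx * block + dx, 0, W - block))
            warped[by * block:(by + 1) * block,
                   bx * block:(bx + 1) * block] = \
                prev_feat[sy:sy + block, sx:sx + block]
    return warped

# Toy usage: 64x64x8 I-frame features and a 4x4 macroblock motion field.
i_frame_feat = np.random.rand(64, 64, 8).astype(np.float32)
mv_field = np.random.randint(-8, 9, size=(4, 4, 2))
p_frame_feat = warp_features(i_frame_feat, mv_field)  # no full inference

In the actual design, one would expect such warped features to be refreshed only at the "critical blocks" the accelerator identifies; those selection details are internal to Alchemist and not shown here.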
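The specialized coding strategy itself is likewise not detailed in the abstract. As one hypothetical illustration of why it helps, a bitstream that prepends a frame-type/offset index would let an accelerator seek directly to reference frames and metadata instead of parsing every coded unit, which is precisely the mismatch the paper targets. The index layout below (build_index, reference_offsets, FRAME_I/FRAME_P) is invented for illustration and is not the paper's format.

import struct

# Hypothetical analysis-friendly side index: a table of
# (frame_type, byte_offset) records prepended to the stream.
FRAME_I, FRAME_P = 0, 1
RECORD = struct.Struct("<BQ")  # 1-byte frame type, 8-byte byte offset

def build_index(frames):
    """frames: list of (frame_type, byte_offset) tuples."""
    header = struct.pack("<I", len(frames))
    return header + b"".join(RECORD.pack(t, off) for t, off in frames)

def reference_offsets(index_blob):
    """Return byte offsets of intra-coded (reference) frames only."""
    (n,) = struct.unpack_from("<I", index_blob, 0)
    offsets = []
    for i in range(n):
        t, off = RECORD.unpack_from(index_blob, 4 + i * RECORD.size)
        if t == FRAME_I:
            offsets.append(off)
    return offsets

blob = build_index([(FRAME_I, 0), (FRAME_P, 4096), (FRAME_P, 6144),
                    (FRAME_I, 9216)])
print(reference_offsets(blob))  # -> [0, 9216]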
Page(s): 2808 - 2820
Date of Publication: 14 October 2021

