Dynamic Deep Pixel Distribution Learning for Background Subtraction | IEEE Journals & Magazine | IEEE Xplore

Abstract:
Previous approaches to background subtraction usually approximate the distribution of pixels with hand-crafted models. In this paper, we focus on learning the distribution automatically, using a novel background subtraction model named Dynamic Deep Pixel Distribution Learning (D-DPDL). In our D-DPDL model, a distribution descriptor named Random Permutation of Temporal Pixels (RPoTP) is dynamically generated as the input to a convolutional neural network for learning the statistical distribution, and a Bayesian refinement model is tailored to handle the random noise introduced by the permutation. Because the temporal pixels are randomly permuted to guarantee that only statistical information is retained in RPoTP features, the network is forced to learn the pixel distribution. Moreover, since the noise is random, Bayes' theorem is a natural choice for an empirical compensation model based on the similarity between pixels. Evaluations on standard benchmarks demonstrate the superiority of the proposed approach over the state-of-the-art, including both traditional and deep learning methods.
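The core of the RPoTP descriptor can be illustrated with a short sketch: a single pixel's intensity history is randomly permuted and reshaped into a 2-D grid, so the temporal ordering is destroyed and only the value distribution remains. Note this is a minimal illustration based on the abstract alone; the function name, grid size, and single-channel assumption are ours, not details from the paper.

```python
import numpy as np

def rpotp_descriptor(temporal_pixels, grid_size=8, rng=None):
    """Sketch of a Random Permutation of Temporal Pixels (RPoTP) feature.

    temporal_pixels: 1-D array holding one pixel's intensity over
    grid_size**2 frames (grid_size=8, i.e. 64 frames, is an assumed
    value for illustration). The samples are randomly permuted and
    reshaped into a 2-D grid, so only the statistical distribution --
    not the temporal order -- survives, which is what forces the
    network to learn the pixel distribution.
    """
    rng = np.random.default_rng(rng)
    values = np.asarray(temporal_pixels, dtype=np.float32)
    if values.size != grid_size * grid_size:
        raise ValueError("need exactly grid_size**2 temporal samples")
    permuted = rng.permutation(values)  # destroy temporal ordering
    return permuted.reshape(grid_size, grid_size)

# Example: 64 intensity samples of one pixel across 64 frames
history = np.random.default_rng(0).integers(0, 256, size=64)
patch = rpotp_descriptor(history, grid_size=8, rng=1)
```

The resulting grid would serve as one input patch to the CNN; because the permutation is drawn fresh each time, the same history can yield many distinct but distribution-equivalent inputs, which is the "dynamic" aspect the title refers to.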
Page(s): 4192 - 4206
Date of Publication: 06 November 2019

