Abstract:
Depth motion maps (DMMs), which contain abundant appearance and motion information, are computed by accumulating the absolute differences between consecutive frames of a depth video sequence. In this paper, each depth frame is first projected onto three orthogonal planes (front, side, top), and the corresponding DMMf, DMMs, and DMMt are generated under the three projection views, respectively. To describe the DMMs both locally and globally, a histogram of oriented gradients (HOG), local binary patterns (LBP), and a dense-grid Gist descriptor are computed for each map. Exploiting the advantages of feature fusion and of quantitative evaluation by information entropy within Principal Component Analysis (PCA), the three descriptors are weighted and fused using an information-entropy-improved PCA to represent the depth video. For action recognition, a collaborative classifier combines the l1-norm and l2-norm reconstruction errors with adaptive weights determined by the entropy method. Experimental results on the MSR Action3D dataset show that the proposed approach is robust, discriminative, and stable.
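The DMM construction described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the depth quantization into `n_bins` bins, the `max_depth` value, and the binary occupancy used for the side/top views are assumptions chosen for clarity.

```python
import numpy as np

def project_three_views(depth, n_bins=64, max_depth=4000.0):
    """Project one depth frame (H x W, depth values e.g. in mm) onto the
    three orthogonal planes: front (XY), side (YZ), top (XZ).
    n_bins and max_depth are illustrative assumptions."""
    H, W = depth.shape
    front = depth.astype(np.float32)           # front view: the depth map itself
    # Quantize depth into bins to index the side/top occupancy maps.
    valid = depth > 0
    z = np.clip((depth / max_depth * (n_bins - 1)).astype(int), 0, n_bins - 1)
    side = np.zeros((H, n_bins), np.float32)   # rows x depth bins
    top = np.zeros((n_bins, W), np.float32)    # depth bins x cols
    ys, xs = np.nonzero(valid)
    side[ys, z[ys, xs]] = 1.0
    top[z[ys, xs], xs] = 1.0
    return front, side, top

def depth_motion_maps(frames, **kw):
    """Accumulate |map_{t+1} - map_t| over the sequence for each view,
    yielding DMM_f, DMM_s, DMM_t."""
    views = [project_three_views(f, **kw) for f in frames]
    dmms = []
    for v in range(3):
        diffs = [np.abs(views[t + 1][v] - views[t][v])
                 for t in range(len(views) - 1)]
        dmms.append(np.sum(diffs, axis=0))
    return dmms
```

HOG, LBP, and Gist descriptors would then be extracted from each of the three accumulated maps before the entropy-weighted PCA fusion step.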
Date of Conference: 21-23 October 2018
Date Added to IEEE Xplore: 30 December 2018