Data-driven two-layer visual dictionary structure learning
Xiangchun Yu, Zhezhou Yu, Lei Wu, Wei Pang, Chenghua Lin
Abstract
An important issue in statistical modeling is to match the complexity of a model to the scale of the available data, so that overfitting can be mitigated even when big data are not available. We adopt a data-driven approach that automatically determines the number of components in the model. To extract more robust features, we propose a framework for data-driven two-layer structure visual dictionary learning (DTSVDL). It divides visual dictionary learning into two layers: an attribute layer and a detail layer. In the attribute layer, the attributes of the image dataset are learned by a data-driven Bayesian nonparametric model. In the detail layer, the detailed information within each attribute is further explored and refined, and each attribute is weighted by the number of effective observations associated with it. Our approach has three main advantages: (1) the two-layer structure makes the learned visual dictionary more expressive; (2) the number of components in the attribute layer is determined automatically from the data; and (3) because the components are determined by the scale of the visual words, the model mitigates overfitting well. In addition, compared with stacked autoencoders, stacked denoising autoencoders, LeNet-5, speeded-up robust features, and the pretrained deep learning model ImageNet-VGG-F, our approach achieves satisfactory image categorization results on two benchmark datasets; specifically, it achieves higher categorization performance than these classical approaches on the 15 scene categories and action datasets. We conclude that the resulting DTSVDL generalizes well, owing to the attribute information, and discriminates well, owing to the detailed information. In other words, the visual dictionary learned by our algorithm is more expressive and discriminative.
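The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the two-layer idea, not the authors' method. It assumes the Bayesian nonparametric attribute layer can be approximated by a truncated Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture prunes unsupported components automatically), and it sketches the detail layer as per-attribute k-means refinement weighted by the effective number of observations per attribute. The function name learn_two_layer_dictionary and the parameters max_attributes and details_per_attribute are hypothetical.

```python
# Illustrative sketch of a two-layer visual dictionary (NOT the authors' code).
# Rows of X are local image descriptors (e.g., SURF vectors).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.cluster import KMeans

def learn_two_layer_dictionary(X, max_attributes=50, details_per_attribute=8):
    # Attribute layer: the Dirichlet-process prior lets the data decide how
    # many of the max_attributes components receive appreciable weight.
    attr_model = BayesianGaussianMixture(
        n_components=max_attributes,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        random_state=0,
    ).fit(X)

    resp = attr_model.predict_proba(X)   # soft assignments to attributes
    n_eff = resp.sum(axis=0)             # effective observations per attribute
    active = np.where(n_eff > 1.0)[0]    # keep attributes the data supports

    # Detail layer: refine each active attribute with a small k-means
    # codebook; words inherit a weight from their attribute's n_eff.
    dictionary, weights = [], []
    labels = resp.argmax(axis=1)
    for k in active:
        Xk = X[labels == k]
        n_details = min(details_per_attribute, len(Xk))
        if n_details == 0:
            continue
        km = KMeans(n_clusters=n_details, n_init=10, random_state=0).fit(Xk)
        dictionary.append(km.cluster_centers_)
        weights.extend([n_eff[k]] * n_details)
    return np.vstack(dictionary), np.asarray(weights) / np.sum(weights)
```

Under these assumptions, an image would then be encoded against the returned dictionary (e.g., by weighted assignment of its local descriptors to the nearest visual words) before classification.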
© 2019 SPIE and IS&T. 1017-9909/2019/$25.00.
Xiangchun Yu, Zhezhou Yu, Lei Wu, Wei Pang, and Chenghua Lin "Data-driven two-layer visual dictionary structure learning," Journal of Electronic Imaging 28(2), 023006 (8 March 2019). https://doi.org/10.1117/1.JEI.28.2.023006
Received: 10 October 2018; Accepted: 14 February 2019; Published: 8 March 2019
KEYWORDS: Associative arrays, Visualization, Data modeling, Statistical modeling, Visual process modeling, Feature extraction, Detection and tracking algorithms