Abstract:
The introduction of visual attention models for data selection and feature selection in CNNs for the task of image classification is an active and interesting research topic. In CNNs, the strategy of dropping activations after the feature-extraction layers has been shown to reduce the generalization gap on large-scale datasets and to avoid over-fitting. Dropout has been studied in the literature as a fully randomized way to take down activations during training. In this paper, we introduce a saliency-based dropping strategy to take down activations in our AlexNet-like architecture. Our experiments are conducted on the specific task of Mexican architectural recognition, over 67 categories. The results are promising: the proposed approach outperformed other models, reducing training time and reaching higher accuracy.
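The abstract does not specify how saliency is computed, so the following is only a minimal NumPy sketch of what a saliency-guided dropout could look like, assuming saliency is approximated by activation magnitude and that the least-salient units are dropped; the function name `saliency_dropout`, the magnitude criterion, and the inverted-dropout rescaling are illustrative assumptions, not the paper's method.

```python
import numpy as np

def saliency_dropout(activations, drop_rate=0.5):
    """Illustrative saliency-guided dropout: zero the fraction `drop_rate`
    of activations with the lowest saliency, where saliency is approximated
    here by absolute magnitude (an assumption, not the paper's criterion)."""
    flat = activations.ravel().astype(float)
    saliency = np.abs(flat)
    k = int(drop_rate * flat.size)           # number of units to drop
    drop_idx = np.argsort(saliency)[:k]      # indices of least-salient units
    mask = np.ones_like(flat)
    mask[drop_idx] = 0.0
    keep = 1.0 - drop_rate
    # rescale survivors, as in inverted dropout, to preserve expected scale
    return (flat * mask / keep).reshape(activations.shape)
```

In contrast to standard dropout, which samples the mask uniformly at random, the mask here is deterministic given the activations: the survivors are always the most salient units.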
Date of Conference: 04-06 September 2019
Date Added to IEEE Xplore: 21 October 2019
ISBN Information: