
Group Ensemble Block: Subspace Diversity Improves Coarse-to-Fine Image Retrieval

Impact Statement:
The random subspace method is a promising ensemble learning technique, but its popularity does not match its potential because of its prohibitive cost and the lack of plug-and-play reusable modules. We fill this gap by proposing a random-subspace-based ensemble block. The proposed group ensemble block can effortlessly replace a linear layer in deep models while adding negligible overhead in parameters, computation, and training time. This is the first successful attempt to package random subspace techniques into a low-cost reusable module for deep learning models, and the first work to combine ensemble learning, class-level discrimination, and instance-level discrimination. It also advances the state-of-the-art accuracy on the coarse-to-fine image retrieval task.

Abstract:

The random subspace method is an ensemble learning technique of great potential. However, its popularity does not match that potential because of its prohibitive cost and the lack of plug-and-play reusable modules. To fill this gap, we propose the random-subspace-based group ensemble block. The proposed ensemble block increases the diversity among subspaces by masking each sampled subspace. It also exploits the diversity among a large number of subspaces, e.g., 1024, which is made feasible by combining the advantages of both the averaging and the concatenating strategies for merging the outcomes of different subspaces. In terms of cost, the proposed ensemble block adds few parameters through parameter sharing, negligible computation through its linear transformations, and runs fast by processing all subspaces in parallel. In terms of reusability, it encapsulates every necessary step in one small block and can be used interchangeably with a linear layer in neural networks. We show that our group ...
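The mechanism described in the abstract can be sketched in code. The following is an illustrative sketch only, not the paper's implementation: it assumes a single weight matrix shared by all subspaces (parameter sharing), binary masks defining each random subspace, and a merge step that averages subspace outputs within groups and concatenates the group means (combining the averaging and concatenating strategies). All names and shapes here are assumptions for illustration.

```python
import numpy as np

def group_ensemble_block(x, W, masks, num_groups):
    """Illustrative sketch of a random-subspace group ensemble block.

    x:      (batch, d) input features
    W:      (d, m) weight matrix shared by all subspaces (parameter sharing)
    masks:  (K, d) binary masks, one random subspace per row
    Merges the K subspace outputs by averaging within each of
    `num_groups` groups, then concatenating the group means.
    Returns: (batch, num_groups * m)
    """
    K = masks.shape[0]
    # One linear transform per subspace, computed in parallel:
    # outs[k, b, :] = (x[b] * masks[k]) @ W
    outs = np.einsum('bd,kd,dm->kbm', x, masks, W)
    # Average within each group of K // num_groups subspaces ...
    group_means = outs.reshape(num_groups, K // num_groups,
                               x.shape[0], -1).mean(axis=1)
    # ... then concatenate the group means along the feature axis.
    return np.concatenate(list(group_means), axis=-1)

# Example: 8 subspaces over 8-dim inputs, merged into 2 groups.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8))
W = rng.standard_normal((8, 4))
masks = (rng.random((8, 8)) < 0.5).astype(float)
out = group_ensemble_block(x, W, masks, num_groups=2)
print(out.shape)  # (3, 8) -- drop-in replacement for a (8 -> 8) linear layer
```

With all-ones masks, every subspace reduces to the same plain linear layer, so the output is just that layer's result tiled across the groups; the diversity comes entirely from the random masks.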
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 4, Issue: 1, February 2023)
Page(s): 60 - 70
Date of Publication: 13 September 2022
Electronic ISSN: 2691-4581
Publisher: IEEE
