Abstract
In this chapter, we provide an overview of the tools our research group has been exploiting to build general-purpose (GenP) classification systems. Although the “no free lunch” (NFL) theorem claims, in effect, that generating a universal classifier is impossible, the goals of GenP systems are more modest: they should require little to no parameter tuning while performing competitively across a range of tasks within a domain or with specific data types, such as images, that span several fields. The tools outlined here for building GenP systems include methods for building ensembles, matrix representations of data treated as images, deep learning approaches, data augmentation, and classification within dissimilarity spaces. Each of these tools is explained in detail and illustrated with a few examples taken from our work building GenP systems, which spans nearly fifteen years. We note both our successes and some of our limitations. This chapter ends by pointing out developments in quantum computing and quantum-inspired algorithms that may allow researchers to push the limits hypothesized by the NFL theorem even further.
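To make the last of these tools concrete: in a dissimilarity space, a sample is represented not by its raw features but by its distances to a set of prototype samples. The sketch below is illustrative only (the function name, toy data, and choice of Euclidean distance are ours, not the chapter's implementation):

```python
import math

def dissimilarity_space(samples, prototypes, dist=None):
    """Map each feature vector to the vector of its distances to the prototypes."""
    if dist is None:
        # Default to Euclidean distance; any dissimilarity measure could be used.
        dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [[dist(s, p) for p in prototypes] for s in samples]

# Toy example: two prototypes in 2-D; each sample becomes a 2-D distance vector.
protos = [(0.0, 0.0), (3.0, 4.0)]
samples = [(0.0, 0.0), (3.0, 4.0)]
ds = dissimilarity_space(samples, protos)  # → [[0.0, 5.0], [5.0, 0.0]]
```

Any standard classifier (e.g., an SVM) can then be trained on these distance vectors instead of the original features.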
Abbreviations
- ACT: Activation Layer
- CBD: Compact Binary Descriptor
- CLASS: Classification Layer
- CLBP: Complete LBP
- CNN: Convolutional Neural Network
- CONV: Convolutional Layer
- DCT: Discrete Cosine Transform
- DT: Decision Trees
- FC: Fully-Connected Layer
- GenP: General Purpose (classifier)
- GOLD: Gaussians of Local Descriptors
- GWT: Gabor Wavelet Transform
- HASC: Heterogeneous Auto-Similarities of Characteristics
- IDE: Input Decimated Ensemble
- INC: Inception Module
- LBP: Local Binary Pattern
- LDA: Linear Discriminant Analysis
- LPQ: Local Phase Quantization
- LTP: Local Ternary Pattern
- ML: Machine Learning
- MRELBP: Median Robust Extended LBP
- NFL: No Free Lunch (theorem)
- PCA: Principal Component Analysis
- PCAN: Principal Component Analysis Network
- PDV: Pixel Difference Vectors (generated in CBD)
- POOL: Pooling Layer
- QC: Quantum Computation
- QI: Quantum Inspired (algorithms)
- RES: Residual Layer
- RF: Rotation Forest
- RICLBP: Rotation Invariant Co-occurrence LBP
- RLBP: Rotated LBP
- RS: Random Subspace
- STFT: Short-Term Fourier Transform
- SVM: Support Vector Machine
- TL: Transfer Learning
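Many of the texture descriptors listed above (CLBP, MRELBP, RICLBP, RLBP) are variants of the Local Binary Pattern. As a minimal sketch of the basic 8-neighbour operator (our own illustration, not the chapter's code), each neighbour of a pixel is thresholded against the centre value and the resulting bits form an 8-bit code:

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch.
    Neighbours are compared to the centre clockwise from the top-left;
    each neighbour >= centre contributes one bit to the code."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << i
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
lbp_code(patch)  # → 241
```

A histogram of these codes over all pixels of an image yields the LBP feature vector; the listed variants change the neighbourhood geometry, the thresholding rule, or the code grouping.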
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Lumini, A., Nanni, L., Brahnam, S. (2022). Pushing the Limits Against the No Free Lunch Theorem: Towards Building General-Purpose (GenP) Classification Systems. In: Virvou, M., Tsihrintzis, G.A., Jain, L.C. (eds) Advances in Selected Artificial Intelligence Areas. Learning and Analytics in Intelligent Systems, vol 24. Springer, Cham. https://doi.org/10.1007/978-3-030-93052-3_5
Print ISBN: 978-3-030-93051-6
Online ISBN: 978-3-030-93052-3