Abstract
Our aim is to investigate the robustness of classifiers against class imbalance. From this point of view, we compare several of the most widely used classifiers, as well as a recently proposed one that is based on the assumption that the probability densities in the classes follow the matrix normal distribution. As the basis for comparison we take a sequence of images from a laser-based additive manufacturing process. Importantly, the classifiers are fed with raw images. The classifiers are compared according to several criteria, and the methodology of all pairwise comparisons is used to rank them.
References
Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001)
Devroye, L., Gyorfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, Berlin (2013). https://doi.org/10.1007/978-1-4612-0711-5
Haykin, S.S.: Neural Networks and Learning Machines. Pearson, Upper Saddle River (2009)
Kadziński, M., Słowiński, R.: Parametric evaluation of research units with respect to reference profiles. Decis. Support. Syst. 72, 33–43 (2015)
Krzyśko, M., Skorzybut, M., Wołyński, W.: Classifiers for doubly multivariate data. Discuss. Math. Probab. Stat. 31 (2011)
Manceur, A.M., Dutilleul, P.: Maximum likelihood estimation for the tensor normal distribution: algorithm, minimum sample size, and empirical bias and dispersion. J. Comput. Appl. Math. 239, 37–49 (2013)
Ohlson, M., Ahmad, M.R., Von Rosen, D.: The multi-linear normal distribution: introduction and some basic properties. J. Multivar. Anal. 113, 37–47 (2013)
Rafajłowicz, E.: Data structures for pattern and image recognition with application to quality control. Acta Polytechnica Hungarica, Informatics (accepted for publication)
Rafajłowicz, E.: Classifiers for matrix normal images: derivation and testing. In: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J.M. (eds.) ICAISC 2018. LNCS (LNAI), vol. 10841, pp. 668–679. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91253-0_62
Rafajłowicz, E., Rafajłowicz, W.: Image-driven decision making with application to control gas burners. In: Saeed, K., Homenda, W., Chaki, R. (eds.) CISIM 2017. LNCS, vol. 10244, pp. 436–446. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59105-6_37
Greco, S., Matarazzo, B., Słowiński, R.: Rough approximation of a preference relation by dominance relations. Eur. J. Oper. Res. 117(1), 63–83 (1999)
Schein, A.I., Ungar, L.H.: Active learning for logistic regression: an evaluation. Mach. Learn. 68(3), 235–265 (2007)
Skubalska-Rafajłowicz, E.: Sparse random projections of camera images for monitoring of a combustion process in a gas burner. In: Saeed, K., Homenda, W., Chaki, R. (eds.) CISIM 2017. LNCS, vol. 10244, pp. 447–456. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59105-6_38
Werner, K., Jansson, M., Stoica, P.: On estimation of covariance matrices with Kronecker product structure. IEEE Trans. Signal Process. 56(2), 478–491 (2008)
A Appendix
The densities of the matrix normal distribution (MND) are defined as follows:

\(f_j({\mathbf {X}}) = c_j^{-1}\, \exp \left[ -\frac{1}{2}\, {\text {tr}}\!\left( U_j^{-1}\, ({\mathbf {X}} - M_j)\, V_j^{-1}\, ({\mathbf {X}} - M_j)^T \right) \right],\)

where the normalization constants are given by:

\(c_j = (2\, \pi )^{n\, m/2}\, {\text {det}}[U_j]^{m/2}\, {\text {det}}[V_j]^{n/2},\)

where the \(n\times m\) matrices \(M_j\) denote the class mean matrices. The covariance structure of the MND class densities is as follows:
1. the \(n\times n\) matrix \(U_j\) denotes the covariance matrix between rows of an image from the j-th class,
2. the \(m\times m\) matrix \(V_j\) stands for the covariance matrix between columns of an image from the j-th class.
The above definitions are meaningful only when \({\text {det}}[U_j]>0\), \({\text {det}}[V_j]>0\).
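The log-density implied by the definition above can be evaluated directly from \(U_j\), \(V_j\) and \(M_j\). The following sketch (not part of the paper; parameter values are hypothetical, and it assumes SciPy's `matrix_normal` uses the same row/column covariance convention as the appendix) checks the trace formula against SciPy's implementation:

```python
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(0)
n, m = 3, 4

# Hypothetical class parameters: mean M_j and positive definite U_j (rows), V_j (columns).
M = rng.standard_normal((n, m))
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)   # n x n, det > 0
B = rng.standard_normal((m, m)); V = B @ B.T + m * np.eye(m)   # m x m, det > 0

X = rng.standard_normal((n, m))
D = X - M

# Quadratic form tr[ U^{-1} (X - M) V^{-1} (X - M)^T ], via linear solves
# rather than explicit inverses.
quad = np.trace(np.linalg.solve(U, D) @ np.linalg.solve(V, D.T))

# log c_j = (nm/2) log(2 pi) + (m/2) log det U + (n/2) log det V
log_c = 0.5 * (n * m * np.log(2 * np.pi)
               + m * np.linalg.slogdet(U)[1]
               + n * np.linalg.slogdet(V)[1])

logpdf = -log_c - 0.5 * quad

# Should coincide with SciPy's matrix normal log-density.
assert np.isclose(logpdf, matrix_normal(mean=M, rowcov=U, colcov=V).logpdf(X))
print("MND log-density:", logpdf)
```

Using `slogdet` and `solve` keeps the evaluation numerically stable even when \({\text {det}}[U_j]\) or \({\text {det}}[V_j]\) is close to zero.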
The equivalent description of the MND is the following:

\(\text {vec}{({\mathbf {X}})} \sim \mathcal {N}_{n\, m}\left( \text {vec}{(M_j)},\; \Sigma _j \right),\)     (20)

where \(\mathcal {N}_{K}\) stands for the classic (vector-valued) normal distribution with K components. In (20), \(\text {vec}{({\mathbf {X}})}\) is the operation of stacking the columns of matrix \({\mathbf {X}}\), while \(\Sigma _j\) is the \(n\, m\times n\, m\) covariance matrix of the j-th class, which is the Kronecker product (denoted as \(\otimes \)) of \(U_j\) and \(V_j\), i.e.,

\(\Sigma _j = V_j \otimes U_j.\)     (21)
Formulas (20) and (21) show clearly that MNDs form a subclass of all normal distributions. Namely, MNDs have the special structure of the covariance matrix given by (21) (see [7]). Thus, in practice, it suffices to estimate the two much smaller matrices \(U_j\) and \(V_j\) instead of a general covariance matrix, which is \(n\, m\times n\, m\). As a consequence, a much smaller number of training samples suffices for their estimation (see [6] for the minimum sample size and the proof).
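The saving in the number of covariance parameters is easy to quantify: a symmetric \(K\times K\) matrix has \(K(K+1)/2\) free entries. A minimal illustration (counts only; the image size 32x32 is an arbitrary example, not taken from the paper):

```python
# Free parameters of a general symmetric nm x nm covariance matrix
# versus the Kronecker-structured pair (U_j, V_j).
def full_cov_params(n: int, m: int) -> int:
    return (n * m) * (n * m + 1) // 2           # symmetric nm x nm

def mnd_cov_params(n: int, m: int) -> int:
    return n * (n + 1) // 2 + m * (m + 1) // 2  # symmetric U (n x n) plus V (m x m)

# Example: 32 x 32 images.
print(full_cov_params(32, 32))  # 524800
print(mnd_cov_params(32, 32))   # 1056
```

For 32x32 images the MND structure reduces the covariance parameter count from 524,800 to 1,056, which is what makes per-class covariance estimation feasible from modest samples.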
© 2018 Springer Nature Switzerland AG
Rafajłowicz, E. (2018). Robustness of Raw Images Classifiers Against the Class Imbalance – A Case Study. In: Saeed, K., Homenda, W. (eds) Computer Information Systems and Industrial Management. CISIM 2018. Lecture Notes in Computer Science(), vol 11127. Springer, Cham. https://doi.org/10.1007/978-3-319-99954-8_14
Print ISBN: 978-3-319-99953-1
Online ISBN: 978-3-319-99954-8