Abstract
As a widely used strategy among Kaggle competitors, adversarial validation provides a novel framework for selecting reasonable training and validation sets. Adversarial validation depends heavily on accurately identifying the difference between the distributions of the training and test sets released in a Kaggle competition. However, typical adversarial validation merely uses a K-fold cross-validated point estimator to measure this difference, ignoring the variation of the estimator. As a result, it tends to produce false positive conclusions. In this study, we reconsider adversarial validation from the perspective of algorithm comparison. Specifically, we formulate adversarial validation as the task of comparing a well-trained classifier with a random-guessing classifier on an adversarial data set. We then investigate state-of-the-art algorithm comparison methods to improve the adversarial validation method and reduce false positive conclusions. We conducted extensive simulated and real-world experiments, and we show that the recently proposed \(5\times 2\) BCV McNemar’s test can significantly improve the performance of the adversarial validation method.
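To make the formulation concrete, the sketch below illustrates the general idea of adversarial validation cast as an algorithm-comparison task: label training rows 0 and test rows 1, train a classifier to separate them, and use McNemar's test to compare its out-of-fold predictions against a random-guessing baseline. This is a minimal, hedged illustration of the standard cross-validated procedure, not the paper's exact \(5\times 2\) BCV McNemar's aggregation; the function name `adversarial_mcnemar` and the assumption that `train_df` and `test_df` are pandas DataFrames with identical feature columns are hypothetical choices made for this example.

```python
# Minimal sketch of adversarial validation framed as algorithm comparison.
# NOTE: this is an illustrative simplification, not the paper's 5x2 BCV
# McNemar's test. `train_df` / `test_df` are assumed pandas DataFrames with
# the same (purely numeric) feature columns and no target column.
import numpy as np
import pandas as pd
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold


def adversarial_mcnemar(train_df: pd.DataFrame, test_df: pd.DataFrame,
                        n_splits: int = 5, seed: int = 0) -> float:
    """Compare a trained train-vs-test classifier with random guessing via
    McNemar's test on out-of-fold predictions; returns the p-value."""
    # Build the adversarial data set: training rows get label 0, test rows 1.
    X = pd.concat([train_df, test_df], ignore_index=True).to_numpy()
    y = np.concatenate([np.zeros(len(train_df)), np.ones(len(test_df))])

    rng = np.random.default_rng(seed)
    clf_correct = np.zeros(len(y), dtype=bool)
    rand_correct = np.zeros(len(y), dtype=bool)

    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fit_idx, val_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[fit_idx], y[fit_idx])
        clf_correct[val_idx] = clf.predict(X[val_idx]) == y[val_idx]
        # Random-guessing baseline: predict 0 or 1 uniformly at random.
        rand_correct[val_idx] = rng.integers(0, 2, size=len(val_idx)) == y[val_idx]

    # McNemar's test on the discordant predictions of the two classifiers.
    b = int(np.sum(clf_correct & ~rand_correct))
    c = int(np.sum(~clf_correct & rand_correct))
    stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
    return float(chi2.sf(stat, df=1))  # small p-value => distributions differ
```

A small p-value suggests the classifier genuinely separates training from test rows, i.e. the two distributions differ; a large p-value means it performs no better than random guessing and the apparent shift may be a false positive of the point-estimate approach.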