Abstract
There is an enormous demand for Explainable Artificial Intelligence to obtain human-understandable models. For example, advertisers are keen to understand what makes video ads successful. In our investigation, we analysed heterogeneous visual, auditory, and textual content features from YouTube video ads. This paper proposes a two-stage anchoring-and-adjustment approach. In the first stage, we search the Lasso regularization path for the optimal penalty value that maximizes the number of Significant Features (SFs). In the second stage, we improve the quality of the SFs by dropping features with a high Variance Inflation Factor (VIF), because a high VIF often indicates a spurious set of SFs. Experiments show that, compared to a one-stage approach without the adjustment stage, our two-stage approach yields fewer SFs but, according to human evaluation, a higher ability to identify the true features that appeal to ad viewers. Furthermore, our approach can identify many more SFs than Lasso and Elastic-net while maintaining similar prediction accuracy.
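As a rough illustration of the two stages described above, the following Python sketch scans a grid of Lasso penalties, refits OLS on each selected subset, keeps the penalty that yields the most significant features, and then iteratively drops the SF with the largest VIF. The function names (anchoring_stage, adjustment_stage, significant_features), the lambda grid, the 0.05 significance level, and the VIF cut-off of 10 are illustrative assumptions for this sketch, not the exact settings or implementation used in the paper.

# Minimal sketch of the anchoring-and-adjustment idea on a generic dataset (X, y).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import Lasso
from statsmodels.stats.outliers_influence import variance_inflation_factor

def significant_features(X, y, cols, alpha=0.05):
    # Refit OLS on the Lasso-selected columns and keep coefficients with p < alpha.
    ols = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    pvals = ols.pvalues[1:]  # skip the intercept
    return [c for c, p in zip(cols, pvals) if p < alpha]

def anchoring_stage(X, y, lambdas):
    # Stage 1: scan the regularization path and keep the penalty that
    # maximizes the number of Significant Features (SFs).
    best_sfs = []
    for lam in lambdas:
        selected = np.flatnonzero(Lasso(alpha=lam, max_iter=10000).fit(X, y).coef_)
        if len(selected) == 0:
            continue
        sfs = significant_features(X, y, list(selected))
        if len(sfs) > len(best_sfs):
            best_sfs = sfs
    return best_sfs

def adjustment_stage(X, sfs, vif_threshold=10.0):
    # Stage 2: iteratively drop the SF with the largest VIF until all
    # remaining SFs fall below the threshold.
    sfs = list(sfs)
    while len(sfs) > 1:
        vifs = [variance_inflation_factor(X[:, sfs], i) for i in range(len(sfs))]
        worst = int(np.argmax(vifs))
        if vifs[worst] < vif_threshold:
            break
        sfs.pop(worst)
    return sfs

# Example usage on synthetic data (illustrative only):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
sfs = anchoring_stage(X, y, lambdas=np.logspace(-3, 0, 30))
sfs = adjustment_stage(X, sfs)
print("Significant features after adjustment:", sfs)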
Acknowledgements
This research is supported by the Australian Government Research Training Program Scholarship.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Park, E., Wong, R.K., Kwon, J., Chu, V.W. (2021). Anchoring-and-Adjustment to Improve the Quality of Significant Features. In: Zhang, W., Zou, L., Maamar, Z., Chen, L. (eds) Web Information Systems Engineering – WISE 2021. WISE 2021. Lecture Notes in Computer Science(), vol 13080. Springer, Cham. https://doi.org/10.1007/978-3-030-90888-1_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90887-4
Online ISBN: 978-3-030-90888-1
eBook Packages: Computer Science (R0)