
Anchoring-and-Adjustment to Improve the Quality of Significant Features

  • Conference paper
Web Information Systems Engineering – WISE 2021 (WISE 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13080)


Abstract

There is an enormous demand for Explainable Artificial Intelligence to obtain human-understandable models. For example, advertisers are keen to understand what makes video ads successful. In our investigation, we have analysed heterogeneous visual, auditory, and textual content features from YouTube video ads. This paper proposes a two-stage anchoring-and-adjustment approach. In the first stage, we search the regularization path of Lasso for the penalty value that maximizes the number of Significant Features (SFs). In the second stage, we improve the quality of the SFs by dropping features with a high Variance Inflation Factor (VIF), because high VIF often produces a spurious set of SFs. Experiments show that, compared with a one-stage approach without the adjustment stage, our two-stage approach yields fewer SFs but, according to human evaluation, a higher ability to identify the true features that appeal to ad viewers. Furthermore, our approach identifies many more SFs than Lasso and Elastic-net while maintaining similar prediction accuracy.
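The two stages described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: significance is approximated by nonzero Lasso coefficients rather than a formal significance test, and the VIF cutoff of 10 is an assumed convention.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

# Synthetic stand-in for the ad-content feature matrix.
X, y = make_regression(n_samples=200, n_features=30, n_informative=8,
                       noise=5.0, random_state=0)

# Stage 1 (anchoring): scan the Lasso regularization path and keep the
# penalty whose solution selects the largest number of features.
alphas, coefs, _ = lasso_path(X, y, n_alphas=100)
n_selected = (np.abs(coefs) > 1e-8).sum(axis=0)
best = int(np.argmax(n_selected))
selected = list(np.flatnonzero(np.abs(coefs[:, best]) > 1e-8))

def vif(X_sub, j):
    """VIF_j = 1/(1 - R^2) from regressing column j on the others."""
    y_j = X_sub[:, j]
    others = np.column_stack([np.ones(len(X_sub)),
                              np.delete(X_sub, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, y_j, rcond=None)
    resid = y_j - others @ beta
    r2 = 1.0 - resid.var() / y_j.var()
    return 1.0 / max(1.0 - r2, 1e-12)

# Stage 2 (adjustment): iteratively drop the feature with the highest
# VIF until all remaining features fall below the threshold, since
# strong collinearity tends to make the selected set spurious.
keep = list(selected)
while len(keep) > 1:
    vifs = [vif(X[:, keep], j) for j in range(len(keep))]
    worst = int(np.argmax(vifs))
    if vifs[worst] <= 10.0:  # assumed cutoff
        break
    keep.pop(worst)

print(f"stage 1 selected {len(selected)} features, "
      f"stage 2 kept {len(keep)}")
```

Because stage 1 deliberately favours the penalty that admits the most features, the pruning in stage 2 is what restores quality: only mutually low-collinearity features survive.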



Acknowledgements

This research is supported by the Australian Government Research Training Program Scholarship.

Author information


Corresponding author

Correspondence to Eunkyung Park.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Park, E., Wong, R.K., Kwon, J., Chu, V.W. (2021). Anchoring-and-Adjustment to Improve the Quality of Significant Features. In: Zhang, W., Zou, L., Maamar, Z., Chen, L. (eds) Web Information Systems Engineering – WISE 2021. WISE 2021. Lecture Notes in Computer Science, vol 13080. Springer, Cham. https://doi.org/10.1007/978-3-030-90888-1_15


  • DOI: https://doi.org/10.1007/978-3-030-90888-1_15


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90887-4

  • Online ISBN: 978-3-030-90888-1

  • eBook Packages: Computer Science (R0)
