Beyond Debiasing: Actively Steering Feature Selection via Loss Regularization

  • Conference paper
  • Pattern Recognition (DAGM GCPR 2023)

Abstract

It is common for domain experts like physicians in medical studies to examine features for their reliability with respect to a specific domain task. When introducing machine learning, a common expectation is that machine learning models use the same features as these human experts to solve a task, but that is not always the case. Moreover, datasets often contain features that are known from domain knowledge to generalize badly to the real world, referred to as biases. Current debiasing methods only remove such influences. To additionally integrate the domain knowledge about well-established features into the training of a model, their relevance should be increased. We present a method that allows the manipulation of the relevance of features by actively steering the model’s feature selection during the training process. That is, it allows both the discouragement of biases and encouragement of well-established features to incorporate domain knowledge about the feature reliability. We model our objectives for actively steering the feature selection process as a constrained optimization problem, which we implement via a loss regularization that is based on batch-wise feature attributions. We evaluate our approach on a novel synthetic regression dataset and a dataset from the computer vision domain. We observe that it successfully steers the features a model selects during the training process. This is a strong indicator that our method can be used to integrate domain knowledge about well-established features into a model.

Author information

Correspondence to Jan Blunk.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 471 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Blunk, J., Penzel, N., Bodesheim, P., Denzler, J. (2024). Beyond Debiasing: Actively Steering Feature Selection via Loss Regularization. In: Köthe, U., Rother, C. (eds) Pattern Recognition. DAGM GCPR 2023. Lecture Notes in Computer Science, vol 14264. Springer, Cham. https://doi.org/10.1007/978-3-031-54605-1_26

  • DOI: https://doi.org/10.1007/978-3-031-54605-1_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-54604-4

  • Online ISBN: 978-3-031-54605-1

  • eBook Packages: Computer Science (R0)
