
Representation-Induced Algorithmic Bias

An Empirical Assessment of Behavioural Equivalence over 14 Reinforcement Learning Algorithms Across 4 Isomorphic Gameform Representations

  • Conference paper
  • First Online:
AI 2021: Advances in Artificial Intelligence (AI 2022)

Abstract

In conceiving of autonomous agents able to employ adaptive cooperative behaviours, we identify the need to effectively assess the equivalence of agent behaviour under conditions of external change. Reinforcement learning algorithms rely on input from the environment as the sole means of informing, and so reifying, internal state. This paper investigates the assumption that isomorphic representations of an environment will lead to equivalent behaviour. To test this equivalence assumption we analyse the variance between behavioural profiles in a set of agents using fourteen foundational reinforcement-learning algorithms across four isomorphic representations of the classical Prisoner’s Dilemma gameform. A behavioural profile is the aggregated episode-mean distribution of the game outcomes CC, CD, DC, and DD generated from the symmetric self-play repeated stage game across a two-axis sweep of input parameters: the principal learning rate, \(\alpha \), and the discount factor, \(\gamma \). This sweep provides 100 observations of the frequency of the four game outcomes, per algorithm, per gameform representation. Equivalence is indicated by low variance between any two behavioural profiles generated by a single algorithm. Despite the representations being theoretically equivalent, analysis reveals significant variance in the behavioural profiles of the tested algorithms at both aggregate and individual-outcome scales. Given this result, we infer that the isomorphic representations tested in this study are not necessarily equivalent with respect to the reachable space they induce for any particular algorithm, which in turn can lead to unexpected agent behaviour. We therefore conclude that structure-preserving operations applied to environmental reward signals may introduce a vector for algorithmic bias.
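The experimental design in the abstract can be sketched in miniature. The following is a hypothetical illustration only: two identical tabular Q-learning agents (one of the fourteen algorithm families named in the title) play the repeated Prisoner's Dilemma in self-play, and outcome frequencies CC, CD, DC, DD are collected over a 10 × 10 grid of \(\alpha \) and \(\gamma \) values, yielding the 100 observations per algorithm per representation described above. The payoff values, episode length, exploration rate, and stateless Q-table are assumptions of this sketch, not details taken from the paper or its repository.

```python
import random
from collections import Counter

C, D = 0, 1
# Illustrative Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0); the
# paper's four isomorphic representations would transform this matrix.
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

class QAgent:
    """Stateless tabular Q-learner with epsilon-greedy action selection."""
    def __init__(self, alpha, gamma, eps=0.1):
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [0.0, 0.0]  # one value per action: cooperate, defect

    def act(self, rng):
        if rng.random() < self.eps:
            return rng.randrange(2)
        return max((C, D), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Standard Q-learning update collapsed to a single state.
        self.q[action] += self.alpha * (
            reward + self.gamma * max(self.q) - self.q[action]
        )

def episode_profile(alpha, gamma, steps=200, seed=0):
    """Outcome frequencies for one self-play episode at (alpha, gamma)."""
    rng = random.Random(seed)
    p1, p2 = QAgent(alpha, gamma), QAgent(alpha, gamma)
    counts = Counter()
    for _ in range(steps):
        a1, a2 = p1.act(rng), p2.act(rng)
        r1, r2 = PAYOFF[(a1, a2)]
        p1.update(a1, r1)
        p2.update(a2, r2)
        counts[(a1, a2)] += 1
    return {k: counts[k] / steps for k in PAYOFF}

def behavioural_profile():
    """Two-axis sweep: 10 alpha values x 10 gamma values = 100 points."""
    grid = [round(0.1 * i, 1) for i in range(1, 11)]
    return {(a, g): episode_profile(a, g) for a in grid for g in grid}

profile = behavioural_profile()
assert len(profile) == 100  # 100 observations per algorithm/representation
```

Comparing `profile` dictionaries computed under two different (isomorphic) payoff matrices, outcome by outcome, would give the variance measure of equivalence the abstract describes; the paper's actual algorithms, sweep ranges, and aggregation are in the linked repository.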


Code Availability

A repository of code used in this study, and further supplementary material, is available at https://github.com/simoncstanton/equivalence_study.


Acknowledgements

We would like to acknowledge the use of the high-performance computing facilities provided by the Tasmanian Partnership for Advanced Computing (TPAC) funded and hosted by the University of Tasmania. This research is supported by an Australian Government Research Training Program (RTP) Scholarship.

Author information


Correspondence to Simon C. Stanton.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Stanton, S.C., Dermoudy, J., Ollington, R. (2022). Representation-Induced Algorithmic Bias. In: Long, G., Yu, X., Wang, S. (eds) AI 2021: Advances in Artificial Intelligence. AI 2022. Lecture Notes in Computer Science, vol 13151. Springer, Cham. https://doi.org/10.1007/978-3-030-97546-3_9


  • DOI: https://doi.org/10.1007/978-3-030-97546-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-97545-6

  • Online ISBN: 978-3-030-97546-3

  • eBook Packages: Computer Science (R0)
