Abstract
In this letter, we summarize the research agenda that we survey in our recent book The Ethical Algorithm, which is intended for a general, nontechnical audience. At a high level, this agenda proposes formalizing the ethical and social values that we want our algorithms to maintain --- values including privacy, fairness, and explainability --- and then embedding those values directly into our algorithms as part of their design. This broad research area is most mature in the case of privacy, specifically differential privacy. It is off to a good start in emerging areas like algorithmic fairness, and seems promising for more nebulous goals like explainability, provided we can find the right definitions. Most work in this area to date analyzes algorithms as isolated components, but game-theoretic and economic analysis will become increasingly important as we try to study the effects of algorithmic interventions in larger sociotechnical systems.
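Differential privacy, the most mature of the formalizations mentioned above, can be made concrete with a minimal sketch of the classic Laplace mechanism (due to Dwork, McSherry, Nissim, and Smith) applied to a counting query. The function name and the example numbers below are illustrative, not taken from the book; the sketch assumes a query of sensitivity 1, i.e., one individual's data changes the true count by at most 1.

```python
import random

def laplace_mechanism(true_count, epsilon):
    """Release a count with epsilon-differential privacy via Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person's data
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    suffices for epsilon-differential privacy.
    """
    scale = 1.0 / epsilon
    # The difference of two i.i.d. Exp(1) draws is Laplace(0, 1);
    # rescaling gives Laplace noise with the required scale.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Illustrative use: privately release a count of 1042 survey respondents.
private_count = laplace_mechanism(1042, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers, which is exactly the kind of quantifiable trade-off between a social value and accuracy that the research agenda aims for.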
- Angwin, J., Larson, J., Mattu, S., and Kirchner, L. 2016. Machine bias. ProPublica.
- Benjamin, R. 2019. Race after technology: Abolitionist tools for the new Jim Code. John Wiley & Sons.
- Berk, R., Heidari, H., Jabbari, S., Kearns, M., and Roth, A. 2018. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 0049124118782533.
- Chouldechova, A. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2, 153--163.
- Chouldechova, A. and Roth, A. 2020. A snapshot of the frontiers of fairness in machine learning. Communications of the ACM 63, 5, 82--89.
- Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. 2012. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. 214--226.
- Dwork, C., Kim, M. P., Reingold, O., Rothblum, G. N., and Yona, G. 2019. Learning from outcomes: Evidence-based rankings. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 106--125.
- Dwork, C., McSherry, F., Nissim, K., and Smith, A. 2016. Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality 7, 3, 17--51.
- Eubanks, V. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
- Gillen, S., Jung, C., Kearns, M., and Roth, A. 2018. Online learning with an unknown fairness metric. In Advances in Neural Information Processing Systems. 2600--2609.
- Hebert-Johnson, U., Kim, M., Reingold, O., and Rothblum, G. 2018. Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning. 1939--1948.
- Hu, L. and Chen, Y. 2018. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference. 1389--1398.
- Ilvento, C. 2019. Metric learning for individual fairness. arXiv preprint arXiv:1906.00250.
- Jabbari, S., Joseph, M., Kearns, M., Morgenstern, J., and Roth, A. 2017. Fairness in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70. JMLR.org, 1617--1626.
- Joseph, M., Kearns, M., Morgenstern, J. H., and Roth, A. 2016. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems. 325--333.
- Jung, C., Kannan, S., Lee, C., Pai, M. M., Roth, A., and Vohra, R. 2020. Fair prediction with endogenous behavior. In The Twenty-First ACM Conference on Economics and Computation.
- Jung, C., Kearns, M., Neel, S., Roth, A., Stapleton, L., and Wu, Z. S. 2019. Eliciting and enforcing subjective individual fairness. arXiv preprint arXiv:1905.10660.
- Kannan, S., Roth, A., and Ziani, J. 2019. Downstream effects of affirmative action. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 240--248.
- Kearns, M., Neel, S., Roth, A., and Wu, Z. S. 2018. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning. 2564--2572.
- Kearns, M. and Roth, A. 2019. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press.
- Kim, M. P., Reingold, O., and Rothblum, G. N. 2018. Fairness through computationally-bounded awareness. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montreal, Canada, S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. 4847--4857.
- Kleinberg, J., Mullainathan, S., and Raghavan, M. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807.
- Koren, J. R. 2016. What does that web search say about your credit? Los Angeles Times. Retrieved 9/15/2016.
- Liu, L. T., Dean, S., Rolf, E., Simchowitz, M., and Hardt, M. 2018. Delayed impact of fair machine learning. arXiv preprint arXiv:1803.04383.
- Liu, L. T., Wilson, A., Haghtalab, N., Kalai, A. T., Borgs, C., and Chayes, J. 2020. The disparate equilibria of algorithmic decision making when individuals invest rationally. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 381--391.
- Miller, C. C. 2015. Can an algorithm hire better than a human? The New York Times. Retrieved 4/28/2016.
- Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464, 447--453.
- O'Neil, C. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
- Schneier, B. 2015. Data and Goliath: The hidden battles to collect your data and control your world. WW Norton & Company.
- Sharifi-Malvajerdi, S., Kearns, M. J., and Roth, A. 2019. Average individual fairness: Algorithms, generalization and experiments. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. B. Fox, and R. Garnett, Eds. 8240--8249.
- Shokri, R., Stronati, M., Song, C., and Shmatikov, V. 2017. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 3--18.
- Yona, G. and Rothblum, G. 2018. Probably approximately metric-fair learning. In International Conference on Machine Learning. 5680--5688.