
Training philosopher engineers for better AI

Open Forum | AI & SOCIETY

Abstract

There is a deluge of AI-assisted decision-making systems, in which our data serve as a proxy for our actions, as suggested by AI. The closer we investigate our data (the raw input, its learned representations, or the suggested actions), the more “bugs” we begin to discover. Outside of their controlled test environments, AI systems may encounter situations investigated primarily by those in other disciplines; but experts in those fields are typically excluded from the design process, and are only invited to attest to the ethical features of the resulting system, or to comment on demonstrations of intelligence and aspects of craftsmanship, after the fact. This communicative impasse must be overcome. Our idea is that philosophical and engineering considerations interact and can be fruitfully combined in the AI design process from the very beginning. We embody this idea in the role of a philosopher engineer. We discuss the role of philosopher engineers in the three main design stages of an AI system: deployment management (what is the system’s intended use, and in what environment?); objective setting (what should the system be trained to do, and how?); and training (what model should be used, and why?). We then exemplify the need for philosopher engineers with an illustrative example, investigating how the future decisions of an AI-based hiring system can be fairer than those contained in the biased input data on which it is trained; and we briefly sketch the kind of interdisciplinary education that we envision will help to bring about better AI.
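The hiring example admits one concrete illustration. Below is a minimal, hypothetical sketch, not the system studied in the article: a classifier is trained on historically biased hiring labels, and group-specific decision thresholds are then chosen so that future selection rates are equalized across groups, one simple way in which a system's decisions can be fairer than the data it learned from. All names, data, and the choice of demographic-parity post-processing are illustrative assumptions.

```python
# Hypothetical sketch: post-processing a hiring classifier toward
# demographic parity. Data, names, and method are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)          # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)        # latent qualification signal
# Historical decisions are biased: group 1 faced a higher hiring bar.
hired = (skill > np.where(group == 1, 0.5, 0.0)).astype(int)

X = skill.reshape(-1, 1)
model = LogisticRegression().fit(X, hired)   # learns the biased pattern
scores = model.predict_proba(X)[:, 1]

# Choose per-group score thresholds so both groups are selected at the
# same overall rate (demographic parity on future decisions).
target_rate = hired.mean()
thresholds = {g: np.quantile(scores[group == g], 1.0 - target_rate)
              for g in (0, 1)}
decision = scores >= np.array([thresholds[g] for g in group])

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"adjusted hire rate {decision[group == g].mean():.2f}")
```

In this sketch the historical labels hold group 1 to a stricter bar, the model reproduces that pattern in its scores, and the per-group thresholds correct it at decision time; constrained training objectives or data re-weighting would be alternative routes to the same end, and which notion of fairness to enforce is precisely the kind of question the article assigns to the philosopher engineer.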


Availability of data and materials

Not applicable.

Code availability

Not applicable.

Notes

  1. E. Musk, CEO of Tesla, changed his title to “technoking” (BBC News; March 15, 2021).


Acknowledgements

Thanks to Christo Wilson, Dimitrios Mylonas, Fintan Nagle and Sachi Arafat for their valuable feedback on earlier versions of this work, to NCH at Northeastern for support through its RLDI grant scheme, and to the Trans-Atlantic Information Ethics co-investigator Ron Sandler.

Funding

This article was supported by a New College of the Humanities (NCH) at Northeastern Research and Learning Development Initiative (RLDI) grant on Trans-Atlantic Information Ethics, with co-investigators Brian Ball and Ron Sandler as project leads.

Author information

Corresponding author

Correspondence to Brian Ball.

Ethics declarations

Conflict of interest

Not applicable.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Ball, B., Koliousis, A. Training philosopher engineers for better AI. AI & Soc 38, 861–868 (2023). https://doi.org/10.1007/s00146-022-01535-7

