Agency in augmented reality: exploring the ethics of Facebook’s AI-powered predictive recommendation system

  • Original Research
  • Published in: AI and Ethics

Abstract

The development of predictive algorithms for personalized recommendations that prioritize ads, filter content, and tailor our decision-making processes will increasingly impact our society in the upcoming years. One example of what this future might hold was recently presented by Facebook Reality Labs (FRL), which is working on augmented reality (AR) glasses powered by contextually aware AI that allow the user to “communicate, navigate, learn, share, and take action in the world” (Facebook Reality Labs 2021). A major feature of these glasses is “the intelligent click”, which presents action prompts to the user based on their personal history and previous choices. The user can accept or decline these suggested action prompts depending on individual preferences. Facebook/Meta presents this technology as a gateway to “increased agency”. However, Facebook’s claim presumes a simplistic view of agency according to which our agentive capacities increase in parallel with the ease with which our actions are carried out. Technologies that structure people’s lives need to be based on a deeper understanding of agency that serves as the conceptual basis on which predictive algorithms are developed. With the goal of mapping this emerging terrain, the aim of this paper is to offer a thorough analysis of the agency-limiting risks and the agency-enhancing potentials of Facebook’s “intelligent click” feature. Based on a concept of agency by Dignum (Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing, Cham, 2019), the three agential dimensions of autonomy (acting independently), adaptability (reacting to changes in the environment), and interactivity (interacting with other agents) are analyzed with respect to our ability to make self-determining choices.

Notes

  1. In a recent interview with The Verge [24], Facebook’s CEO Mark Zuckerberg revealed more about the company’s long-term goals by explaining his vision of what he calls a “metaverse”, i.e., a space in which the physical and the virtual world come together to build their own economy. Zuckerberg describes this as an “embodied internet where instead of just viewing content—you are in it.” This vision became more concrete with the recent announcement of the company’s name change from Facebook to Meta at its annual meeting “Connect 2021” [22].

  2. While many businesses struggled in 2020 due to the Covid-19 pandemic, Netflix grew its subscriber base to 203.67 million [20].

  3. There are many other ethical issues discussed in the literature, such as algorithmic privacy [9], bias [16], and trust [33]. While those issues are connected to agency in numerous direct and indirect ways, this paper will concentrate on the influence of AI on the agentive capacities of its users.

  4. It should be noted that the literature offers a variety of definitions, criteria, and viewpoints for human agency. In the philosophical subfield of action theory, agency is tied to the intentionality of a person performing an action [4, 25]. While the three dimensions introduced in this paper are not sufficient to capture the whole phenomenon of agency, they are exceptionally well suited to pointing at those agentive capacities that might be taken over by an artificial system. This makes them ideal candidates for analyzing the agentive relationship between human agents and AI-powered devices.

  5. The only exceptions are action prompts that are intentionally set by the user as a reminder to start the respective activity, or that offer more autonomy-preserving choices. In the former case, the user would be the decision-maker by setting up appropriate alarms, or by allowing the algorithm to send a notification if certain criteria are met. In the latter case, in addition to a simple affirmation by clicking “yes”, action prompts could also be accompanied by other agency-preserving options such as “no” or, if the system is constantly nagging, a “leave me alone” button.

  6. I want to thank my anonymous reviewer for suggesting this example.

References

  1. Adomavicius, G., Bockstedt, J.C., Curley, S.P., Zhang, J.: Do recommender systems manipulate consumer preferences? A study of anchoring effects. Inf. Syst. Res. 24, 956–975 (2013)

  2. Banker, S., Khetani, S.: Algorithm overdependence: how the use of algorithmic recommendation systems can increase risks to consumer well-being. J. Public Policy Mark. 38, 500–515 (2019)

  3. Boddington, P.: Towards a code of ethics for artificial intelligence research. Springer, Berlin Heidelberg, New York (2017)

  4. Bratman, M.: Intention, plans, and practical reason. Harvard University Press, Cambridge (1987)

  5. Bucher, T.: The friendship assemblage: investigating programmed sociality on Facebook. Television & New Media 14, 479–493 (2013)

  6. Chambers, D.: Networked intimacy: algorithmic friendship and scalable sociality. Eur. J. Commun. 32, 26–36 (2017)

  7. Crews, C., Colson, C., Elson, R.: It does matter who your friends are: a case study of Netflix and “friends” licensing. Global J. Bus. Pedagogy 4, 6–13 (2020)

  8. Dignum, V.: Responsible artificial intelligence: how to develop and use AI in a responsible way. Springer International Publishing, Cham (2019)

  9. Dilmaghani, S., Brust, M.R., Danoy, G., Cassagnes, N., Pecero, J., Bouvry, P.: Privacy and security of big data in AI systems: a research and standards perspective. In: 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA: IEEE, pp. 5737–5743. Available online: https://ieeexplore.ieee.org/document/9006283/ (2019)

  10. Facebook Reality Labs: Inside Facebook Reality Labs: Wrist-based interaction for the next computing platform. In: Tech@Facebook. Available online: https://tech.fb.com/inside-facebook-reality-labs-wrist-based-interaction-for-the-next-computing-platform/ (2021)

  11. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., Vayena, E.: AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind. Mach. 28, 689–707 (2018)

  12. Garzón, J., Pavón, J., Baldiris, S.: Systematic review and meta-analysis of augmented reality in educational settings. Virtual Reality 23, 447–459 (2019)

  13. Gomez-Uribe, C.A., Hunt, N.: The Netflix recommender system: algorithms, business value, and innovation. ACM Trans. Manag. Inf. Syst. 6, 1–19 (2016)

  14. Green, B., Chen, Y.: The principles and limits of algorithm-in-the-loop decision making. Proc. ACM Hum-Comput. Interact. 3, 1–24 (2019)

  15. Hancock, J.T., Naaman, M., Levy, K.: AI-mediated communication: definition, research agenda, and ethical considerations. J. Comput.-Mediat. Commun. 25, 89–100 (2020)

  16. Harris, C.: Mitigating cognitive biases in machine learning algorithms for decision making. In: Companion Proceedings of the Web Conference 2020, Taipei, Taiwan: ACM, pp. 775–781. Available online: https://doi.org/10.1145/3366424.3383562 (2020)

  17. Ibáñez, M.-B., Delgado-Kloos, C.: Augmented reality for STEM learning: a systematic review. Comput. Educ. 123, 109–123 (2018)

  18. Johnson, C.W., Shea, C., Holloway, C.M.: The role of trust and interaction in GPS related accidents: a human factors safety assessment of the global positioning system (GPS). Vancouver (2008)

  19. Logg, J.M., Minson, J.A., Moore, D.A.: Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151, 90–103 (2019)

  20. Lozic, J.: Financial analysis of Netflix platform at the time of Covid 19 pandemic. In: Economic and social development. Rabat (2021)

  21. Matthews, J.: Netflix and the design of the audience. MedieKultur 69, 52–70 (2020)

  22. Meta: Introducing Meta: A Social Technology Company. Available online: https://about.fb.com/news/2021/10/facebook-company-is-now-meta/ (2021)

  23. Miller, M.R., Jun, H., Herrera, F., Villa, Y., Jacob, W., Greg, B., Jeremy, N.: Social interaction in augmented reality. PLoS ONE 14(5), 2016290 (2019)

  24. Newton, C.: Mark in the metaverse. Facebook’s CEO on why the social network is becoming ‘a metaverse company’. In: The Verge (2021)

  25. Pacherie, E.: The phenomenology of action: a conceptual framework. Cognition 107, 179–217 (2008)

  26. Palmarini, R., Erkoyuncu, J.A., Roy, R., Torabmostaedi, H.: A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 49, 215–228 (2018)

  27. Robbins, J.: GPS navigation… but what is it doing to us? In: 2010 IEEE International Symposium on Technology and Society, Wollongong, Australia: IEEE, pp. 309–318. Available online: http://ieeexplore.ieee.org/document/5514623/ (2010)

  28. Siles, I., Espinoza-Rojas, J., Naranjo, A., Tristán, M. F.: The mutual domestication of users and algorithmic recommendations on Netflix. In: Communication, Culture and Critique (2019)

  29. Sundar, S.S.: Rise of Machine agency: a framework for studying the psychology of human–AI interaction (HAII). J. Comput.-Mediat. Commun. 25, 74–88 (2020)

  30. Susser, D.: Invisible influence: artificial intelligence and the ethics of adaptive choice architectures. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA: ACM, pp. 403–408. Available online: https://doi.org/10.1145/3306618.3314286 (2019)

  31. Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., Floridi, L.: The ethics of algorithms: key problems and solutions. AI & Soc. (2021). https://doi.org/10.2139/ssrn.3662302

  32. Vávra, P., Roman, J., Zonča, P., Ihnát, P., Němec, M., Kumar, J., Habib, N., El-Gendi, A.: Recent development of augmented reality in surgery: a review. J. Healthcare Eng. 2017, 1–9 (2017)

  33. Winfield, A.F.T., Jirotka, M.: Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 376, 20180085 (2018)

  34. Yung, R., Khoo-Lattimore, C.: New realities: a systematic literature review on virtual reality and augmented reality in tourism research. Curr. Issue Tour. 22, 2056–2081 (2019)

Funding

This work was supported by the National Science Foundation (EEC-1028725).

Author information

Corresponding author

Correspondence to Andreas Schönau.

Ethics declarations

Conflict of interest

The author has no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Schönau, A. Agency in augmented reality: exploring the ethics of Facebook’s AI-powered predictive recommendation system. AI Ethics 3, 407–417 (2023). https://doi.org/10.1007/s43681-022-00158-4
