
Integrating Behavioral, Economic, and Technical Insights to Understand and Address Algorithmic Bias: A Human-Centric Perspective

Published: 14 May 2022

Abstract

Many important decisions are increasingly being made with the help of information systems that use artificial intelligence and machine learning models. These computational models are designed to discover useful patterns in large amounts of data, augmenting human decision-making in various application domains. However, there are growing concerns about the ethical challenges raised by these automated decision-making (ADM) models, most notably algorithmic bias, in which a model systematically produces less favorable (i.e., unfair) decisions for certain groups of people. In this commentary, we argue that algorithmic bias is not just a technical (e.g., computational or statistical) problem, and that its successful resolution requires deep insight into individual and organizational behavior, economic incentives, and the complex dynamics of the sociotechnical systems in which ADM models are embedded. We discuss a human-centric, fairness-aware ADM framework that highlights the holistic involvement of human decision makers in each step of ADM. We review the emerging literature on fairness-aware machine learning and then discuss the various strategic decisions humans must make, such as formulating proper fairness objectives, recognizing fairness-induced trade-offs and their implications, utilizing machine learning model outputs, and managing and governing the decisions of ADM models. We further illustrate how these strategic decisions are jointly informed by the behavioral, economic, and design sciences. Our discussion reveals a number of future research opportunities uniquely suited for Management Information Systems (MIS) researchers to pursue.
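The abstract's central concern, a model producing systematically less favorable decisions for certain groups, can be made concrete with standard group-fairness metrics from the fairness-aware machine learning literature the commentary reviews. The sketch below is a minimal illustration under that framing, not code from the article; the function names and the toy loan-approval data are hypothetical.

```python
# Minimal sketch of two common group-fairness metrics; hypothetical toy data.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-decision rates between groups 1 and 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall among qualified cases)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

# Toy loan-approval example: 1 = approve; `group` is a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # who actually repays
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])  # model's decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))         # 0.25
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.33
```

A nonzero gap on either metric signals a disparity, but deciding which criterion (if any) to constrain is itself one of the strategic human decisions the commentary highlights, since such fairness criteria generally cannot all be satisfied simultaneously.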



Information

Published In

ACM Transactions on Management Information Systems, Volume 13, Issue 3
September 2022
312 pages
ISSN:2158-656X
EISSN:2158-6578
DOI:10.1145/3512349

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 14 May 2022
Online AM: 24 March 2022
Accepted: 01 February 2022
Revised: 01 February 2022
Received: 01 January 2021
Published in TMIS Volume 13, Issue 3


Author Tags

  1. Automated decision-making
  2. algorithmic bias
  3. ethics
  4. fairness
  5. augmented decision-making

Qualifiers

  • Opinion
  • Refereed


Cited By

  • (2025) Biological and Social Impacts of Implementing Artificial Intelligence-Based Economic Policies: A Discourse Analysis. Ilomata International Journal of Social Science 6, 1 (2025), 310–320. https://doi.org/10.61194/ijss.v6i1.1475. Online publication date: 31-Jan-2025.
  • (2025) Effect of Generative Artificial Intelligence on Strategic Decision-Making in Entrepreneurial Business Initiatives: A Systematic Literature Review. Administrative Sciences 15, 2 (2025), 66. https://doi.org/10.3390/admsci15020066. Online publication date: 18-Feb-2025.
  • (2024) Unleashing the Potential of Every Child. In Embracing Cutting-Edge Technology in Modern Educational Settings, 19–47. https://doi.org/10.4018/979-8-3693-1022-9.ch002. Online publication date: 23-Feb-2024.
  • (2024) Modeling Individual Fairness Beliefs and Its Applications. ACM Transactions on Management Information Systems 15, 3 (2024), 1–26. https://doi.org/10.1145/3682070. Online publication date: 2-Aug-2024.
  • (2023) Architectural Design of a Blockchain-Enabled, Federated Learning Platform for Algorithmic Fairness in Predictive Health Care: Design Science Study. Journal of Medical Internet Research 25 (2023), e46547. https://doi.org/10.2196/46547. Online publication date: 30-Oct-2023.
  • (2023) Research Challenges for the Design of Human-Artificial Intelligence Systems (HAIS). ACM Transactions on Management Information Systems 14, 1 (2023), 1–18. https://doi.org/10.1145/3549547. Online publication date: 16-Jan-2023.
  • (undated) Consumer and AI Co-creation: When and Why Human Participation Improves AI Creation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3929070.
