Research article · DOI: 10.1145/3308560.3317595

Quantifying the Impact of User Attention on Fair Group Representation in Ranked Lists

Published: 13 May 2019

Abstract

In this work, we introduce a novel metric for auditing group fairness in ranked lists. Our approach offers two benefits compared to the state of the art. First, we offer a blueprint for modeling user attention. Rather than assuming a logarithmic loss in importance as a function of rank, we can account for varying user behaviors through parametrization. For example, we expect a user to see more items when viewing a social media feed than when inspecting the results list of a single web search query. Second, we allow non-binary protected attributes, both to enable investigating inherently continuous attributes (e.g., political alignment on the liberal-to-conservative spectrum) and to facilitate measurements across aggregated sets of search results, rather than separately for each result list. By combining these two elements into our metric, we are able to better address the human factors inherent in this problem. We measure the whole sociotechnical system, consisting of a ranking algorithm and the individuals using it, instead of focusing exclusively on the ranking algorithm. Finally, we use our metric to perform three simulated fairness audits. We show that determining the fairness of a ranked output necessitates knowledge (or a model) of the end-users of the particular service. Depending on their attention distribution function, a fixed ranking of results can appear biased both in favor of and against a protected group.
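As a rough illustration of the kind of metric the abstract describes (the function names, the specific attention models, and the parameter choices below are assumptions for exposition, not the authors' actual formulation), one can weight each ranked item's protected-attribute value by a parametrized, normalized attention distribution over ranks, then compare the attention-weighted group share against the unweighted population share:

```python
import math

def attention_weights(n, model="geometric", p=0.5):
    """Normalized attention over ranks 0..n-1 under a chosen user model."""
    if model == "geometric":
        # Impatient user: attention decays geometrically with rank.
        w = [p * (1 - p) ** i for i in range(n)]
    elif model == "log":
        # DCG-style logarithmic discount, common in IR evaluation.
        w = [1.0 / math.log2(i + 2) for i in range(n)]
    else:
        # Exhaustive reader, e.g., someone scrolling an entire feed.
        w = [1.0] * n
    total = sum(w)
    return [x / total for x in w]

def attention_bias(attributes, model="geometric", p=0.5):
    """Attention-weighted mean of a (possibly continuous) protected
    attribute minus its unweighted mean. Positive values mean the
    attention model favors high-attribute items; zero means parity."""
    w = attention_weights(len(attributes), model, p)
    weighted = sum(wi * a for wi, a in zip(w, attributes))
    unweighted = sum(attributes) / len(attributes)
    return weighted - unweighted

# Same fixed ranking, two different user models (1 = protected group):
ranking = [1, 0, 1, 0, 0, 0, 1, 1]
print(attention_bias(ranking, model="geometric", p=0.6))  # impatient searcher
print(attention_bias(ranking, model="uniform"))           # exhaustive reader
```

In this toy example the protected group holds half the slots, so an exhaustive reader sees no bias, while an impatient user concentrating attention at the top sees the same ranking as favoring the protected group, matching the abstract's claim that fairness conclusions depend on the attention distribution function.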



        Published In

        WWW '19: Companion Proceedings of The 2019 World Wide Web Conference
        May 2019
        1331 pages
        ISBN:9781450366755
        DOI:10.1145/3308560

        In-Cooperation

        • IW3C2: International World Wide Web Conference Committee

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. group fairness
        2. information retrieval
        3. ranked lists

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        WWW '19
        WWW '19: The Web Conference
        May 13 - 17, 2019
        San Francisco, USA

        Acceptance Rates

        Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

