Position Bias in Recommender Systems for Digital Libraries

  • Conference paper
  • In: Transforming Digital Worlds (iConference 2018)

Abstract

“Position bias” describes the tendency of users to interact with items at the top of a list with higher probability than with items lower in the list, regardless of the items’ actual relevance. In the domain of recommender systems, and particularly recommender systems for digital libraries, position bias has received little attention. We conducted a study in a real-world recommender system that delivered ten million related-article recommendations to users of the digital library Sowiport and the reference manager JabRef. Recommendations were randomly chosen to be shuffled or non-shuffled, and we compared the click-through rate (CTR) at each rank. According to our analysis, the CTR for the highest rank in Sowiport is 53% higher than would be expected in a hypothetical non-biased situation (0.189% vs. 0.123%). Similarly, in JabRef the highest rank received a CTR of 1.276%, which is 87% higher than expected (0.683%). A chi-squared test confirms a strong relationship between the rank at which a recommendation is shown and whether the user clicks it (p < 0.01 for both JabRef and Sowiport). Our study confirms findings from other domains: recommendations in the top positions are clicked more often, regardless of their actual relevance.
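The per-rank CTR comparison and chi-squared test described in the abstract can be reproduced on any impression/click log. The Python sketch below (using SciPy's chi2_contingency) illustrates the general procedure; the impression and click counts are illustrative placeholders rather than the study's data, and taking the pooled CTR across all ranks as the "expected" non-biased baseline is an assumption about how that baseline is defined.

    # Illustrative sketch: per-rank CTR and a chi-squared test of independence
    # between recommendation rank and clicks. Counts are placeholders, not the
    # Sowiport/JabRef data from the paper.
    from scipy.stats import chi2_contingency

    # impressions[r] / clicks[r]: how often rank r was shown / clicked
    impressions = {1: 100_000, 2: 100_000, 3: 100_000, 4: 100_000, 5: 100_000}
    clicks      = {1: 189,     2: 150,     3: 132,     4: 118,     5: 101}

    # Assumed "non-biased" baseline: the pooled CTR across all ranks
    expected_ctr = sum(clicks.values()) / sum(impressions.values())

    for rank in sorted(impressions):
        ctr = clicks[rank] / impressions[rank]
        lift = ctr / expected_ctr - 1
        print(f"rank {rank}: CTR = {ctr:.3%} ({lift:+.0%} vs. expected {expected_ctr:.3%})")

    # Contingency table (rows = ranks, columns = [clicked, not clicked]);
    # a small p-value indicates rank and clicking are not independent.
    table = [[clicks[r], impressions[r] - clicks[r]] for r in sorted(impressions)]
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")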

This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under Grant Number 13/RC/2106. This work was also supported by a fellowship within the postdoc-program of the German Academic Exchange Service (DAAD).

Author information

Corresponding author

Correspondence to Andrew Collins.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Collins, A., Tkaczyk, D., Aizawa, A., Beel, J. (2018). Position Bias in Recommender Systems for Digital Libraries. In: Chowdhury, G., McLeod, J., Gillet, V., Willett, P. (eds) Transforming Digital Worlds. iConference 2018. Lecture Notes in Computer Science, vol. 10766. Springer, Cham. https://doi.org/10.1007/978-3-319-78105-1_37

  • DOI: https://doi.org/10.1007/978-3-319-78105-1_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78104-4

  • Online ISBN: 978-3-319-78105-1
