DOI: 10.1145/3529372.3533296 (JCDL '22, short paper)

BIP! SCHOLAR: A Service to Facilitate Fair Researcher Assessment

Published: 20 June 2022

ABSTRACT

In recent years, assessing the performance of researchers has become a burden due to the extensive volume of the existing research output. As a result, evaluators often end up relying heavily on a selection of performance indicators like the h-index. However, over-reliance on such indicators may result in reinforcing dubious research practices, while overlooking important aspects of a researcher's career, such as their exact role in the production of particular research works or their contribution to other important types of academic or research activities (e.g., production of datasets, peer reviewing). In response, a number of initiatives that attempt to provide guidelines towards fairer research assessment frameworks have been established. In this work, we present BIP! Scholar, a Web-based service that offers researchers the opportunity to set up profiles that summarise their research careers taking into consideration well-established guidelines for fair research assessment, facilitating the work of evaluators who want to be more compliant with the respective practices.
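The h-index mentioned above is the standard example of such an indicator: it is the largest value h such that the researcher has h publications with at least h citations each. A minimal sketch of its computation (the citation counts are made-up illustration data, not from the paper):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note how the second example illustrates one criticism raised in the abstract: a single highly cited paper (25 citations) leaves the h-index unchanged, so the indicator compresses very different career profiles into the same number.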


Published in

JCDL '22: Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries
June 2022, 392 pages
ISBN: 9781450393454
DOI: 10.1145/3529372
General Chairs: Akiko Aizawa, Thomas Mandl, Zeljko Carevic
Program Chairs: Annika Hinze, Philipp Mayr, Philipp Schaer

Copyright © 2022 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

JCDL '22 paper acceptance rate: 35 of 132 submissions (27%). Overall acceptance rate: 415 of 1,482 submissions (28%).
