Abstract
Peer review is the widely accepted method of research validation. However, the deluge of paper submissions, coupled with the rising number of venues, has put the paper-vetting system under considerable stress. Problems such as the dearth of adequate reviewers, the difficulty of finding appropriate expert reviewers, and maintaining the quality of reviews are steadily and strongly surfacing. To ease the peer-review workload to some extent, we investigate here what an Artificial Intelligence (AI)-powered review system might look like. We leverage the paper-review interaction to predict the decision of the reviewing process. We do not envisage an AI reviewing papers in the near future; rather, we seek to explore a human-AI collaboration in the decision-making process, where the AI leverages the human-written reviews and the paper full-text to predict the fate of the paper. The idea is to provide an assistive decision-making tool for chairs/editors that offers an additional layer of confidence, especially with borderline and contrastive reviews. We use cross-attention between the review text and the paper full-text to learn their interactions and thereby generate the decision. We also make use of the sentiment information encoded within peer-review texts to guide the outcome. Our initial results show encouraging performance on a dataset of papers and peer reviews curated from ICLR submissions on OpenReview. We make our code and dataset (https://github.com/PrabhatkrBharti/PEERAssist) public for further exploration. We reiterate that we are at an early stage of investigation and showcase our initial, encouraging results to justify our proposition.
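The abstract's core mechanism, cross-attention between review text and paper full-text, fused with a sentiment signal, can be illustrated with a minimal sketch. This is not the authors' exact architecture: it assumes pre-computed sentence embeddings for review and paper, a single scalar sentiment score, and an arbitrary linear classifier head, all hypothetical simplifications for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(review_emb, paper_emb):
    """Each review sentence attends over all paper sentences.

    review_emb: (n_review_sents, d), paper_emb: (n_paper_sents, d).
    Returns a (n_review_sents, d) matrix of paper context vectors,
    one per review sentence (scaled dot-product attention).
    """
    d = review_emb.shape[-1]
    scores = review_emb @ paper_emb.T / np.sqrt(d)   # (n_rev, n_pap)
    weights = softmax(scores, axis=-1)               # rows sum to 1
    return weights @ paper_emb

def predict_decision(review_emb, paper_emb, sentiment, w, b):
    """Toy accept-probability head: pool the attended paper context,
    append the review sentiment score, and apply a linear + sigmoid."""
    attended = cross_attention(review_emb, paper_emb)
    pooled = attended.mean(axis=0)                   # (d,)
    features = np.concatenate([pooled, [sentiment]]) # (d + 1,)
    logit = features @ w + b
    return 1.0 / (1.0 + np.exp(-logit))              # P(accept)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    review = rng.standard_normal((3, 8))   # 3 review sentences
    paper = rng.standard_normal((20, 8))   # 20 paper sentences
    w = rng.standard_normal(9)             # hypothetical trained weights
    p = predict_decision(review, paper, sentiment=0.4, w=w, b=0.0)
    print(f"P(accept) = {p:.3f}")
```

In the paper's setting the classifier weights would be learned end-to-end and the sentiment signal would come from the review text itself (e.g. a lexicon- or model-based scorer); here both are stand-ins to keep the sketch self-contained.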
Acknowledgement
Asif Ekbal is a recipient of the Visvesvaraya Young Faculty Award and acknowledges Digital India Corporation, Ministry of Electronics and Information Technology, Government of India for supporting this research.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Bharti, P.K., Ranjan, S., Ghosal, T., Agrawal, M., Ekbal, A. (2021). PEERAssist: Leveraging on Paper-Review Interactions to Predict Peer Review Decisions. In: Ke, HR., Lee, C.S., Sugiyama, K. (eds) Towards Open and Trustworthy Digital Societies. ICADL 2021. Lecture Notes in Computer Science(), vol 13133. Springer, Cham. https://doi.org/10.1007/978-3-030-91669-5_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91668-8
Online ISBN: 978-3-030-91669-5
eBook Packages: Computer Science, Computer Science (R0)