DOI: 10.1145/3230599.3230607

Content-based recommendation for Academic Expert finding

Published: 26 June 2018 Publication History

Abstract

Nowadays it is increasingly common for Web users to search for professionals who can help solve a problem in a given field. This is called expert finding. A particular case arises when users are interested in scientific researchers: the associated problem is, given a query that expresses a topic of interest for a user, to obtain a set of researchers who are experts in it. One of the difficulties in tackling this problem is identifying the topics in which a professional is an expert. In this paper, we approach the problem from a content-based recommendation perspective and present a method that, starting from the articles published by each researcher and a query, obtains the expert researchers. We also present a new document collection, called PMSC-UGR, specifically designed for evaluation in the fields of expert finding and document filtering.
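The content-based approach the abstract describes can be illustrated with a minimal sketch: build a term-frequency profile for each researcher from their published articles, then rank researchers by the cosine similarity between the query and each profile. This is only a rough illustration under simple bag-of-words assumptions (the names, corpus, and scoring details below are hypothetical, not the paper's actual model, which may use different weighting and profile construction).

```python
import math
from collections import Counter

def build_profile(articles):
    """Merge a researcher's published articles into one bag-of-words profile."""
    profile = Counter()
    for text in articles:
        profile.update(text.lower().split())
    return profile

def cosine(query, profile):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * profile.get(term, 0) for term, count in query.items())
    norm_q = math.sqrt(sum(c * c for c in query.values()))
    norm_p = math.sqrt(sum(c * c for c in profile.values()))
    return dot / (norm_q * norm_p) if norm_q and norm_p else 0.0

def rank_experts(query_text, researcher_articles):
    """Rank researchers by similarity of their article profile to the query."""
    query = Counter(query_text.lower().split())
    scores = {name: cosine(query, build_profile(arts))
              for name, arts in researcher_articles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A researcher whose articles share more vocabulary with the query is ranked higher; in practice, TF-IDF weighting or language models would replace the raw term counts used here.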



Published In

CERI '18: Proceedings of the 5th Spanish Conference on Information Retrieval
June 2018
91 pages
ISBN:9781450365437
DOI:10.1145/3230599
© 2018 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Expert finding
  2. Content-based recommendation
  3. Recommender system

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CERI '18

Acceptance Rates

CERI '18 Paper Acceptance Rate: 18 of 24 submissions, 75%
Overall Acceptance Rate: 36 of 51 submissions, 71%

Article Metrics

  • 0 total citations
  • 177 total downloads (4 in the last 12 months, 0 in the last 6 weeks)

Reflects downloads up to 16 Feb 2025
