Welcome to the 31st year of SIGIR, the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. The growth of SIGIR in recent years has been remarkable. SIGIR 2005 received a record 368 full paper submissions, SIGIR 2006 eclipsed that record with 399 submissions, and SIGIR 2007 exceeded all expectations with 490 submissions. Things stabilized a bit this year, with 497 full paper submissions from 35 countries. Notably, more submissions were received from Pacific Rim countries (182) than from any other region (North America 159, Europe 107, India 31, Middle East 9, South America 7, Africa 2). Of these submissions, we were able to accept 85 full papers (17%). These contributions span many long-standing areas of information retrieval research (e.g., indexing, evaluation, classification, user studies), several topics that have received increasing attention in recent years (e.g., learning to rank, social tagging), and some emerging topics that may inspire more work along similar lines in the future (e.g., collaborative search, sentiment analysis). Along with these full papers, we accepted 99 posters, 11 demonstrations, 9 tutorials, and 10 workshops, and selected 11 Ph.D. candidates to participate in our doctoral consortium.
Again this year, the selection of full papers was performed using a two-stage reviewing process. A total of 35 members of the Senior Program Committee (two more than in 2007) were selected based on their expertise, with significant attention to balancing continuity from previous years with the addition of new perspectives, and to reflecting the increasingly diverse global scope of the SIGIR membership. The PC co-chairs and Senior PC members nominated more than 400 primary reviewers, 337 of whom accepted our invitation and actually reviewed papers (an 8% increase over the 313 reviewers who performed that critical role for SIGIR 2007). For the first time this year, PC and Senior PC members bid for papers, which were then assigned by the PC chairs based on those bids, on reviewers' stated subject expertise, and on avoiding known conflicts of interest. Each paper was assigned to three primary reviewers (typically five papers per reviewer) and to one Senior PC member (typically 14 papers per Senior PC member). All reviewing was double blind: the identities of authors were known only to the PC chairs during the review process, and reviewer identities were known only to unconflicted members of the PC. The Senior PC member assigned to each paper encouraged reviewers to discuss substantive differences of opinion, requested additional reviews when needed, led the discussion of the paper at an in-person PC meeting (at the University of Maryland, March 27-28, 2008) at which final decisions were made, and produced a meta-review summarizing the basis for the Program Committee's decision. We wish to especially thank the members of the Senior PC for their outstanding work!
Similar processes were followed for selection of posters and demonstrations, tutorials, workshops, and doctoral consortium participants.
Delighting Chinese users: the Google China experience
Google entered the China market as a late-comer in late 2005, with no local employees, an inadequate product line, and small market share. This talk will discuss Google China's efforts to build up a team, learn about local user needs, apply its global ...
Guilt by association as a search principle
The exploitation of fundamental invariants is among the most elegant solutions to many computational problems in a wide variety of domains. One of the more powerful approaches to exploiting invariants is the principle of "guilt by association". In ...