Abstract
All search in the real world is inherently interactive. Information retrieval (IR) has a firm tradition of using simulation to evaluate IR systems, as embodied by the Cranfield paradigm. To a large extent, however, such system evaluations ignore user interaction, and simulations provide a way to go beyond this limitation. With an increasing number of researchers using simulation to evaluate interactive IR systems, it is now timely to discuss, develop, and advance this powerful methodology within the field of IR. During the SimInt 2010 workshop, around 40 participants discussed and presented their views on the simulation of interaction. The main conclusion and general consensus was that simulation offers great potential for the field of IR, and that simulations of user interaction can make the user and the user interface explicit while maintaining the advantages of the Cranfield paradigm.
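To make the idea concrete, a minimal sketch of the kind of simulation discussed at the workshop is shown below: a simulated user scans a ranked result list top-down, judges documents with some noise, and stops after a run of perceived non-relevant results, yielding a cumulated-gain score. All names, the patience-based stopping rule, and the 10% judgment-error rate are illustrative assumptions, not a method from any workshop paper.

```python
import random

def simulate_session(ranked_relevance, patience=3, error_rate=0.1, seed=0):
    """Simulate one user session over a ranked list of binary relevance
    labels. The simulated user scans top-down, occasionally misjudges a
    document (with probability `error_rate`), and abandons the session
    after `patience` consecutive documents perceived as non-relevant.
    Returns the cumulated gain (sum of true relevance of documents the
    user recognized as relevant). All parameters are illustrative."""
    rng = random.Random(seed)  # fixed seed keeps the simulation repeatable
    gain, misses = 0, 0
    for rel in ranked_relevance:
        # Noisy judgment: flip the true label with probability error_rate.
        judged_relevant = rel if rng.random() > error_rate else 1 - rel
        if judged_relevant:
            gain += rel   # credit only truly relevant documents
            misses = 0    # a perceived hit resets the user's patience
        else:
            misses += 1
            if misses >= patience:
                break     # user gives up on this result list
    return gain

# Example: a ranked list where relevant documents thin out further down.
print(simulate_session([1, 1, 0, 1, 0, 0, 0, 1]))
```

Because the user model (judgment noise, stopping rule) is explicit and parameterized, it can be varied systematically across runs over a fixed test collection, which is precisely the Cranfield-style repeatability the abstract refers to.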
References
- [1] O. Alonso and J. Pedersen. Recovering temporal context for relevance assessments. In Azzopardi et al. [3], pages 25--26.
- [2] P. Arvola and J. Kekäläinen. Simulating user interaction in result document browsing. In Azzopardi et al. [3], pages 27--28.
- [3] L. Azzopardi, K. Järvelin, J. Kamps, and M. D. Smucker, editors. Proceedings of the SIGIR 2010 Workshop on the Simulation of Interaction: Automated Evaluation of Interactive IR (SimInt 2010), 2010. ACM Press.
- [4] P. Clough, J. Gonzalo, and J. Karlgren. Creating re-useable log files for interactive CLIR. In Azzopardi et al. [3], pages 19--20.
- [5] M. J. Cole. Simulation of the IIR user: Beyond the automagic. In Azzopardi et al. [3], pages 1--2.
- [6] M. D. Cooper. A simulation model of an information retrieval system. Information Storage and Retrieval, 9:13--32, 1973.
- [7] S. Geva and T. Chappell. Focused relevance feedback evaluation. In Azzopardi et al. [3], pages 9--10.
- [8] D. Harman. Relevance feedback revisited. In SIGIR 1992, pages 1--10, 1992.
- [9] B. Huurnink, K. Hofmann, and M. de Rijke. Simulating searches from transaction logs. In Azzopardi et al. [3], pages 21--22.
- [10] C. Jethani and M. D. Smucker. Modeling the time to judge document relevance. In Azzopardi et al. [3], pages 11--12.
- [11] E. Kanoulas, P. Clough, B. Carterette, and M. Sanderson. Session track at TREC 2010. In Azzopardi et al. [3], pages 13--14.
- [12] T. Kato, M. Matsushita, and N. Kando. Bridging evaluations: Inspiration from dialog system research. In Azzopardi et al. [3], pages 3--4.
- [13] H. Keskustalo and K. Järvelin. Query and browsing-based interaction simulation in test collections. In Azzopardi et al. [3], pages 29--30.
- [14] H. Keskustalo, K. Järvelin, and A. Pirkola. Effectiveness of relevance feedback based on a user simulation model: Effects of a user scenario on cumulated gain value. Journal of Information Retrieval, 11:209--228, 2008.
- [15] H. Keskustalo, K. Järvelin, and A. Pirkola. Graph-based query session exploration based on facet analysis. In Azzopardi et al. [3], pages 15--16.
- [16] C. Mulwa, W. Li, S. Lawless, and G. Jones. A proposal for the evaluation of adaptive information retrieval systems using simulated interaction. In Azzopardi et al. [3], pages 5--6.
- [17] N. Nanas, U. Kruschwitz, M.-D. Albakour, M. Fasli, D. Song, Y. Kim, U. C. Beresi, and A. D. Roeck. A methodology for simulated experiments in interactive search. In Azzopardi et al. [3], pages 23--24.
- [18] M. Preminger. Evaluating a visualization approach by user simulation. In Azzopardi et al. [3], pages 31--32.
- [19] S. Stober and A. Nuernberger. Automatic evaluation of user adaptive interfaces for information organization and exploration. In Azzopardi et al. [3], pages 33--34.
- [20] J. Tague, M. Nelson, and H. Wu. Problems in the simulation of bibliographic retrieval systems. In SIGIR 1980, pages 236--255, 1980.
- [21] D. Tunkelang. Using QPP to simulate query refinement. In Azzopardi et al. [3], pages 7--8.
- [22] R. W. White, I. Ruthven, J. M. Jose, and C. J. Van Rijsbergen. Evaluating implicit feedback models using searcher simulations. ACM Transactions on Information Systems, 23:325--361, 2005.
- [23] P. Zhang, U. C. Beresi, D. Song, and Y. Hou. A probabilistic automaton for the dynamic relevance judgement of users. In Azzopardi et al. [3], pages 17--18.
Report on the SIGIR 2010 workshop on the simulation of interaction