
Report on the SIGIR 2010 workshop on the simulation of interaction

Published: 3 January 2011

Abstract

All search in the real world is inherently interactive. Information retrieval (IR) has a firm tradition of using simulation to evaluate IR systems, as embodied by the Cranfield paradigm. However, such system evaluations largely ignore user interaction; simulations provide a way to go beyond this limitation. With an increasing number of researchers using simulation to evaluate interactive IR systems, it is now timely to discuss, develop, and advance this powerful methodology within the field of IR. At the SimInt 2010 workshop, around 40 participants discussed and presented their views on the simulation of interaction. The main conclusion and general consensus was that simulation offers great potential for the field of IR, and that simulations of user interaction can make the user and the user interface explicit while maintaining the advantages of the Cranfield paradigm.

References

  1. O. Alonso and J. Pedersen. Recovering temporal context for relevance assessments. In Azzopardi et al. [3], pages 25--26.
  2. P. Arvola and J. Kekäläinen. Simulating user interaction in result document browsing. In Azzopardi et al. [3], pages 27--28.
  3. L. Azzopardi, K. Järvelin, J. Kamps, and M. D. Smucker, editors. Proceedings of the SIGIR 2010 Workshop on the Simulation of Interaction: Automated Evaluation of Interactive IR (SimInt 2010), 2010. ACM Press.
  4. P. Clough, J. Gonzalo, and J. Karlgren. Creating re-useable log files for interactive CLIR. In Azzopardi et al. [3], pages 19--20.
  5. M. J. Cole. Simulation of the IIR user: Beyond the automagic. In Azzopardi et al. [3], pages 1--2.
  6. M. D. Cooper. A simulation model of an information retrieval system. Information Storage and Retrieval, 9:13--32, 1973.
  7. S. Geva and T. Chappell. Focused relevance feedback evaluation. In Azzopardi et al. [3], pages 9--10.
  8. D. Harman. Relevance feedback revisited. In SIGIR 1992, pages 1--10, 1992.
  9. B. Huurnink, K. Hofmann, and M. de Rijke. Simulating searches from transaction logs. In Azzopardi et al. [3], pages 21--22.
  10. C. Jethani and M. D. Smucker. Modeling the time to judge document relevance. In Azzopardi et al. [3], pages 11--12.
  11. E. Kanoulas, P. Clough, B. Carterette, and M. Sanderson. Session track at TREC 2010. In Azzopardi et al. [3], pages 13--14.
  12. T. Kato, M. Matsushita, and N. Kando. Bridging evaluations: Inspiration from dialog system research. In Azzopardi et al. [3], pages 3--4.
  13. H. Keskustalo and K. Järvelin. Query and browsing-based interaction simulation in test collections. In Azzopardi et al. [3], pages 29--30.
  14. H. Keskustalo, K. Järvelin, and A. Pirkola. Effectiveness of relevance feedback based on a user simulation model: Effects of a user scenario on cumulated gain value. Journal of Information Retrieval, 11:209--228, 2008.
  15. H. Keskustalo, K. Järvelin, and A. Pirkola. Graph-based query session exploration based on facet analysis. In Azzopardi et al. [3], pages 15--16.
  16. C. Mulwa, W. Li, S. Lawless, and G. Jones. A proposal for the evaluation of adaptive information retrieval systems using simulated interaction. In Azzopardi et al. [3], pages 5--6.
  17. N. Nanas, U. Kruschwitz, M.-D. Albakour, M. Fasli, D. Song, Y. Kim, U. C. Beresi, and A. D. Roeck. A methodology for simulated experiments in interactive search. In Azzopardi et al. [3], pages 23--24.
  18. M. Preminger. Evaluating a visualization approach by user simulation. In Azzopardi et al. [3], pages 31--32.
  19. S. Stober and A. Nuernberger. Automatic evaluation of user adaptive interfaces for information organization and exploration. In Azzopardi et al. [3], pages 33--34.
  20. J. Tague, M. Nelson, and H. Wu. Problems in the simulation of bibliographic retrieval systems. In SIGIR 1980, pages 236--255, 1980.
  21. D. Tunkelang. Using QPP to simulate query refinement. In Azzopardi et al. [3], pages 7--8.
  22. R. W. White, I. Ruthven, J. M. Jose, and C. J. Van Rijsbergen. Evaluating implicit feedback models using searcher simulations. ACM Transactions on Information Systems, 23:325--361, 2005.
  23. P. Zhang, U. C. Beresi, D. Song, and Y. Hou. A probabilistic automaton for the dynamic relevance judgement of users. In Azzopardi et al. [3], pages 17--18.


Published in

ACM SIGIR Forum, Volume 44, Issue 2, December 2010, 83 pages
ISSN: 0163-5840
DOI: 10.1145/1924475
Copyright © 2011 Authors
Publisher: Association for Computing Machinery, New York, NY, United States
Published: 3 January 2011
Qualifier: review-article
