Article
DOI: 10.1145/383952.384015

Intelligent information triage

Published: 01 September 2001

ABSTRACT

In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. Unlike many information-retrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether a news story will be followed by many, very similar news stories, and also whether the stock price of one or more companies associated with a news story will move significantly following the appearance of that story. We conclude by discussing how the comprehensibility of the learned classifiers can be critical to success.
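
To make the labeling-and-training process concrete, the following is a minimal sketch of one of the prospective criteria described above: a story is labeled automatically by whether an associated company's stock price moves significantly after the story appears, and a standard text classifier is then trained to predict that label from the story text alone. This is an illustration, not the authors' implementation; the price_return_after and load_stories helpers are hypothetical placeholders for a market-data source and a news archive, and the scikit-learn pipeline is an arbitrary choice of classifier.

# Illustrative sketch only. Prospective labels are derived automatically from
# post-publication stock movements; no manual annotation is needed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def price_return_after(ticker, timestamp, horizon_hours=24):
    """Hypothetical market-data lookup: fractional price change of `ticker`
    over the `horizon_hours` following `timestamp`."""
    raise NotImplementedError("replace with a real price feed")


def prospective_label(story, threshold=0.05):
    """1 if any company mentioned in the story moved more than `threshold`
    (e.g. 5%) after the story appeared, else 0."""
    return int(any(abs(price_return_after(t, story["time"])) > threshold
                   for t in story["tickers"]))


def train_triage_model(stories):
    """Fit a text classifier that predicts the prospective label from the
    story text alone, so importance can be estimated at publication time."""
    texts = [s["text"] for s in stories]
    labels = [prospective_label(s) for s in stories]
    model = make_pipeline(TfidfVectorizer(stop_words="english"),
                          LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

# Usage, given a hypothetical corpus of {"text", "time", "tickers"} records:
#   model = train_triage_model(load_stories("news_archive"))
#   scores = model.predict_proba([incoming_story_text])[:, 1]  # rank by score

Because the labels come entirely from subsequent market data, arbitrarily large training corpora can be assembled without manual annotation, and the resulting scores can be used either to rank incoming stories or, with a threshold, to filter them.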


Published in
SIGIR '01: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval
September 2001, 454 pages
ISBN: 1581133316
DOI: 10.1145/383952
Copyright © 2001 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates
SIGIR '01 paper acceptance rate: 47 of 201 submissions, 23%. Overall acceptance rate: 792 of 3,983 submissions, 20%.
