Editorial

https://doi.org/10.1016/j.ipm.2007.05.002

Introduction

Text summarization was one of the earliest research areas in computer processing of text, driven by the hope that keywords could be mined or abstracts created automatically from scientific articles. Statistical approaches developed in the late 1950s showed promise, but the field was soon eclipsed by more fashionable areas such as machine translation and information retrieval.

The field has since enjoyed a rebirth, as evidenced by the number of summarization meetings held in recent years. The explosion of online information, whether on the web or in large organizational information stores, demands new ways of interacting with text, and summarization is an obvious technique.

Summarization is of interest to the natural language processing community and the information retrieval community, both of which have made significant contributions to this rebirth. Papers in this issue come from both communities.

The papers in this issue

The first paper in this issue, “Automatic summarizing: the state of the art”, by Karen Spärck Jones, is a personal view of the field of summarization, examining current evaluation practices and in particular looking at the state of the art with respect to factors such as the purpose of the summary, the input material, and the expected output. For each of these factors she identifies pertinent papers and points out major gaps in research. The final section of her paper discusses various system …
