
An n-gram-based approach for detecting approximately duplicate database records

Published in: International Journal on Digital Libraries

Abstract.

Detecting and eliminating duplicate records is one of the major tasks for improving data quality. The task, however, is not as trivial as it seems since various errors, such as character insertion, deletion, transposition, substitution, and word switching, are often present in real-world databases. This paper presents an n-gram-based approach for detecting duplicate records in large databases. Using the approach, records are first mapped to numbers based on the n-grams of their field values. The obtained numbers are then clustered, and records within a cluster are taken as potential duplicate records. Finally, record comparisons are performed within clusters to identify true duplicate records. The unique feature of this method is that it does not require preprocessing to correct syntactic or typographical errors in the source data in order to achieve high accuracy. Moreover, sorting the source data file is unnecessary. Only a fixed number of database scans is required. Therefore, compared with previous methods, the algorithm is more time efficient.
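To make the three-step pipeline described in the abstract concrete, the following is a minimal Python sketch of the general idea: map each record to a number derived from the n-grams of its field values, group records whose numbers are close, and run detailed pairwise comparisons only within a group. The numeric encoding, the integer bucketing, and the bucket_width and sim_threshold parameters are illustrative assumptions, not the paper's actual mapping or clustering method.

from collections import defaultdict
from difflib import SequenceMatcher


def ngrams(text, n=3):
    """Character n-grams of a lower-cased field value."""
    text = text.lower()
    return [text[i:i + n] for i in range(max(len(text) - n + 1, 1))]


def ngram_key(record, n=3):
    """Map a record (tuple of field strings) to a single number.

    Illustrative encoding only: each n-gram contributes the sum of its
    character codes, so records that differ in a few n-grams (for example,
    one insertion, deletion, or transposition) receive nearby keys.
    """
    return sum(
        sum(ord(c) for c in gram)
        for field in record
        for gram in ngrams(field, n)
    )


def detect_duplicates(records, n=3, bucket_width=300, sim_threshold=0.85):
    """Group records with similar keys, then compare pairs only within groups."""
    # Coarse grouping of the numeric keys. The paper clusters these numbers;
    # integer bucketing is used here only to keep the sketch short, and it
    # suffers from boundary effects (near-identical keys can straddle
    # adjacent buckets).
    buckets = defaultdict(list)
    for idx, rec in enumerate(records):
        buckets[ngram_key(rec, n) // bucket_width].append(idx)

    duplicates = []
    for members in buckets.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                sim = SequenceMatcher(
                    None, " ".join(records[a]), " ".join(records[b])
                ).ratio()
                if sim >= sim_threshold:
                    duplicates.append((a, b))
    return duplicates


if __name__ == "__main__":
    sample = [
        ("John Smith", "123 Main Street"),
        ("Jonh Smith", "123 Main Street"),   # character transposition
        ("Mary Jones", "45 Oak Avenue"),
    ]
    print(detect_duplicates(sample))  # prints [(0, 1)] with these settings

Because only records in the same group are compared, the number of expensive string comparisons grows with group sizes rather than with the square of the total number of records, which reflects the efficiency argument made in the abstract.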



Additional information

Published online: 22 August 2001



Cite this article

Tian, Z., Lu, H., Ji, W. et al. An n-gram-based approach for detecting approximately duplicate database records. Int J Digit Libr 3, 325–331 (2002). https://doi.org/10.1007/s007990100044

