anyOCR: A sequence learning based OCR system for unlabeled historical documents | IEEE Conference Publication | IEEE Xplore

anyOCR: A sequence learning based OCR system for unlabeled historical documents


Abstract:

Institutes and libraries around the globe are preserving literary heritage by digitizing historical documents. However, to make this data easily accessible, the scanned documents need to be transformed into searchable text. State-of-the-art OCR systems using Long Short-Term Memory (LSTM) networks have been applied successfully to recognize text in both printed and handwritten form. Besides the general challenges with historical documents, e.g. poor image quality, damaged characters, etc., unknown scripts and old fonts in particular make it difficult to provide the large amount of transcribed training data required for these methods to perform well. Transcribing the documents manually is very costly in terms of man-hours and requires language-specific expertise. The unknown fonts and the requirement for meaningful context also make the use of synthetic data unfeasible. We therefore propose an end-to-end framework, anyOCR, that cuts the required input from language experts to a minimum and is therefore easily extendable to other documents. Our approach combines the strengths of segmentation-based OCR methods, which cluster individual characters, and segmentation-free OCR methods, which use an LSTM architecture. The proposed approach is applied to a collection of 15th century Latin documents. Combining the initial clustering with segmentation-free OCR reduced the initial error rate of about 16% to less than 8%.
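The key labor-saving idea described above is that clustering segmented character glyphs lets a language expert label only one representative per cluster; the label then propagates to every glyph in that cluster, yielding pseudo-transcriptions that can train a segmentation-free LSTM recognizer. The following is a minimal sketch of that labeling step on synthetic 2-D glyph features with a hand-rolled k-means; the feature extraction, cluster count, and expert-labeling stand-in are all simplified assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np


def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all centers chosen so far
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return assign


# Synthetic stand-in for glyph feature vectors: three "characters",
# each a well-separated Gaussian blob of 50 segmented glyph images.
rng = np.random.default_rng(1)
chars = ["a", "b", "c"]
X = np.vstack([rng.normal(loc=i * 5.0, scale=0.3, size=(50, 2)) for i in range(3)])
true = np.repeat(np.arange(3), 50)

assign = kmeans(X, k=3)

# The expert labels ONE representative glyph per cluster (3 answers, not 150).
cluster_label = {}
for j in range(3):
    rep = np.where(assign == j)[0][0]    # any member serves as representative
    cluster_label[j] = chars[true[rep]]  # stands in for the expert's answer

# Pseudo-labels for all 150 glyphs, usable as training data downstream.
pseudo = [cluster_label[j] for j in assign]
```

On separable features like these, nearly all glyphs inherit the correct label, which is what makes the pseudo-transcriptions good enough to bootstrap the LSTM stage.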
Date of Conference: 04-08 December 2016
Date Added to IEEE Xplore: 24 April 2017
Conference Location: Cancun, Mexico

