Scene Text Deblurring in Non-stationary Video Sequences

https://doi.org/10.1016/j.procs.2016.08.259
Open access under a Creative Commons license

Abstract

Text detection in natural scenes under imperfect shooting conditions and blurring artifacts is the subject of the present paper. Text, as a linguistic component, carries a significant amount of information for scene understanding, scene categorization, image retrieval, and many other challenging problems. Real video sequences usually suffer from a superposition of complicated degradations that are often analyzed separately. The main attention is focused on text detection under geometric distortions, blurring, and camera shooting artifacts. The proposed methodology, based on the analysis of gradient sharpness profiles, includes automatic text detection in fully or partially blurred frames of a non-stationary video sequence. A blind technique for restoring blurred text is also discussed, and some text detection results are reported. The detection results for corrupted text fragments from the ICDAR 2015 test dataset reach 76–83%, exceeding by 40–52% the results for text fragments not processed by the deblurring procedure.
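As a rough illustration of the gradient-profile idea, the sketch below measures the width of the gradient peak across an intensity edge: a sharp text stroke produces a narrow, tall peak, while blur spreads it over several samples. This is a minimal heuristic under our own assumptions, not the paper's exact estimator; the function names, the box-blur stand-in for camera blur, and the half-maximum width measure are all illustrative choices.

```python
import numpy as np

def gradient_profile_width(profile, rel_thresh=0.5):
    """Width (in samples) of the gradient peak across a 1-D edge profile.

    Sharp edges yield a narrow, tall gradient peak; blur spreads it out.
    rel_thresh is the fraction of the peak gradient used to measure width.
    (Illustrative heuristic, not the paper's exact sharpness estimator.)
    """
    g = np.abs(np.diff(profile.astype(float)))
    peak = g.max()
    if peak == 0:
        return 0
    return int(np.count_nonzero(g >= rel_thresh * peak))

def box_blur(signal, k):
    """1-D box blur of kernel size k (a simple stand-in for motion blur)."""
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode="same")

# Synthetic step edge: dark background to bright text stroke.
edge = np.concatenate([np.zeros(20), np.ones(20)])

sharp_width = gradient_profile_width(edge)            # ideal step: width 1
blurred_width = gradient_profile_width(box_blur(edge, 7))
```

Comparing such widths across detected edges is one way a frame (or a region within it) could be classified as fully sharp, partially blurred, or fully blurred before any deblurring is attempted.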

Keywords

text deblurring
scene text detection
non-blind kernel
blind kernel
non-stationary video sequence

Peer-review under responsibility of KES International.