Abstract:
This paper proposes a novel attention mechanism and a new loss function for scene text detectors. Specifically, the attention mechanism effectively identifies text regions by automatically learning an attention mask. The fine-grained attention mask is directly incorporated into the convolutional feature maps of a neural network to produce graininess-aware feature maps, which suppress background interference and emphasize text regions. As a result, our graininess-aware feature maps concentrate on text regions, especially those of exceedingly small size. Additionally, to address the extreme text-background class imbalance during training, we propose a new loss function, named Focal Negative Loss (FNL), which down-weights the loss assigned to easy negative samples so that training focuses on hard negative samples. To evaluate the effectiveness of our text attention module and FNL, we integrate them into the efficient and accurate scene text detector (EAST). Comprehensive experimental results demonstrate that our text attention module and FNL improve the F-score of EAST by 3.98% on the ICDAR2015 dataset and by 1.87% on the MSRA-TD500 dataset.
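The sketch below illustrates the two ideas at a high level: weighting convolutional features with a learned text-attention mask, and a focal-style loss that down-weights easy negatives. The abstract does not give exact formulations, so the tensor shapes, the sigmoid mask, the gamma value, and the function names here are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of the two components described in the abstract.
# Shapes, gamma, and the masking scheme are assumed for illustration only.
import torch
import torch.nn.functional as F


def graininess_aware_features(feature_map, attention_logits):
    """Weight backbone features by a learned text-attention mask.

    feature_map:      (N, C, H, W) convolutional features
    attention_logits: (N, 1, H, W) predicted text-attention scores (pre-sigmoid)
    """
    mask = torch.sigmoid(attention_logits)   # soft text/background mask in [0, 1]
    return feature_map * mask                # emphasize text regions, suppress background


def focal_negative_loss(pred_logits, labels, gamma=2.0):
    """Focal-style binary loss that down-weights easy negative samples.

    pred_logits: (N,) raw text/non-text scores
    labels:      (N,) float tensor, 1 for text samples, 0 for background
    """
    p = torch.sigmoid(pred_logits)
    ce = F.binary_cross_entropy_with_logits(pred_logits, labels, reduction="none")
    # Positives keep the plain cross-entropy term; easy negatives (p close to 0)
    # are scaled by p**gamma so hard negatives dominate the gradient.
    weight = torch.where(labels > 0.5, torch.ones_like(p), p ** gamma)
    return (weight * ce).mean()
```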
Date of Conference: 14-19 July 2019
Date Added to IEEE Xplore: 30 September 2019