
Cross-Lingual Text Image Recognition via Multi-Hierarchy Cross-Modal Mimic


Abstract:

Optical character recognition and machine translation are usually studied and applied separately. In this paper, we consider a new problem named cross-lingual text image recognition (CLTIR) that integrates these two tasks. The core of this problem is to recognize source-language text shown in images and transcribe it into the target language in an end-to-end manner. Traditional cascaded systems perform text image recognition and text translation sequentially, which can lead to error accumulation and parameter redundancy. To overcome these problems, we propose a multi-hierarchy cross-modal mimic (MHCMM) framework for end-to-end CLTIR, which can be trained with a massive bilingual text corpus and a small number of bilingually annotated text images. In this framework, a plug-in machine translation model is used as a teacher to guide the CLTIR model toward representations compatible across the image and text modalities. Via adversarial learning and attention mechanisms, the proposed mimic method integrates both global and local information in the semantic space. Experiments on a newly collected dataset demonstrate the superiority of the proposed framework: our method outperforms other pipelines while containing fewer parameters. Additionally, the MHCMM framework can exploit a large-scale bilingual corpus to further improve performance efficiently. Visualization of attention scores indicates that the proposed model reads text images in a fashion similar to how the machine translation model reads text tokens.
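The teacher-student mimic objective described above can be sketched numerically. The abstract names two levels of guidance: global semantic alignment (realized in the paper via adversarial learning, approximated below with a simple mean-squared-error term) and local alignment of attention behavior. All function names, the weighting factor `lam`, and the loss forms here are illustrative assumptions for a minimal sketch, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Row-wise softmax, used to form attention distributions."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_mimic_loss(student_feat, teacher_feat):
    """Global-level mimic: match pooled sentence representations.

    The paper uses an adversarial discriminator for this alignment;
    a plain MSE between mean-pooled features stands in for it here.
    """
    return np.mean((student_feat.mean(axis=0) - teacher_feat.mean(axis=0)) ** 2)

def local_mimic_loss(student_attn, teacher_attn, eps=1e-8):
    """Local-level mimic: make the CLTIR model's decoder attention
    over image features imitate the MT teacher's attention over
    source tokens (KL divergence per decoding step, averaged)."""
    p = teacher_attn + eps
    q = student_attn + eps
    return np.mean(np.sum(p * np.log(p / q), axis=-1))

def mhcmm_loss(student_feat, teacher_feat, student_attn, teacher_attn, lam=0.5):
    """Combined multi-hierarchy mimic loss (illustrative weighting)."""
    return (global_mimic_loss(student_feat, teacher_feat)
            + lam * local_mimic_loss(student_attn, teacher_attn))
```

In training, this mimic loss would be added to the usual cross-entropy translation loss on the target-language transcription, so the image-side student inherits the semantic space of the text-side teacher.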
Published in: IEEE Transactions on Multimedia (Volume: 25)
Page(s): 4830 - 4841
Date of Publication: 16 June 2022