ABSTRACT
In this paper, we describe an experimental speech translation system built on small, PC-based hardware with a multi-modal user interface. The two major problems for users of an automatic speech translation device are speech recognition errors and language translation errors, and we focus on techniques to overcome them. These techniques include a new language translation approach based on example sentences, simplified-expression rules, and a multi-modal user interface that displays possible speech recognition candidates retrieved from the example sentences. Combined, the proposed techniques provide accurate language translation even when the speech recognition result contains errors. We propose to use keyword classes, exploiting the dependency between keywords, to detect misrecognized keywords and to search the example expressions. The user then selects the suitable example expression with a touch panel or push buttons, and the translation module outputs the paired expression in the other language, which is always grammatically correct. Simplified translated expressions are produced by speech-act-based simplifying rules, allowing the system to avoid various redundant expressions. A simple comparison study showed that the proposed method produces output almost 2 to 10 times faster than a conventional translation device.
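The example-based retrieval step described above can be illustrated with a minimal sketch. The example database, keyword classes, and scoring here are hypothetical stand-ins, not the paper's actual system: each example pairs a set of class-tagged keywords with a source sentence and its translation, and the recognized keywords are matched against the database so that a misrecognized keyword (one whose class does not fit any example) simply contributes no score.

```python
# Minimal sketch of example-based lookup for speech translation.
# All data, classes, and function names are illustrative assumptions,
# not the authors' actual implementation.

EXAMPLES = [
    # (keyword -> class mapping, source sentence, paired translation)
    ({"ticket": "OBJECT", "buy": "ACTION"},
     "I want to buy a ticket.", "Kippu wo kaitai no desu ga."),
    ({"station": "PLACE", "go": "ACTION"},
     "I want to go to the station.", "Eki ni ikitai no desu ga."),
    ({"hotel": "PLACE", "reserve": "ACTION"},
     "I want to reserve a hotel room.", "Hoteru wo yoyaku shitai no desu ga."),
]

def retrieve_candidates(recognized, top_n=2):
    """Rank example sentences by keyword/class overlap with the ASR output.

    A keyword whose class matches no example (e.g. a likely
    misrecognition) contributes no score, so a partially wrong
    hypothesis can still retrieve a usable example expression.
    """
    scored = []
    for classes, src, tgt in EXAMPLES:
        # Count keywords shared with the example AND agreeing on class.
        score = sum(1 for word, cls in recognized.items()
                    if classes.get(word) == cls)
        if score:
            scored.append((score, src, tgt))
    scored.sort(key=lambda item: -item[0])
    return scored[:top_n]

# ASR misrecognized one keyword ("bicycle"); the class check ignores it,
# and the matching example expression is still retrieved.
hypothesis = {"ticket": "OBJECT", "buy": "ACTION", "bicycle": "PLACE"}
for score, src, tgt in retrieve_candidates(hypothesis):
    print(score, src, "->", tgt)
```

In the full system, the ranked candidates would be shown on the multi-modal interface for the user to confirm by touch panel or buttons, and the paired target-language expression is emitted directly, which is why the output stays grammatically correct.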
An experimental multilingual speech translation system