Abstract:
Grapheme-to-phoneme (G2P) models convert a written word into its corresponding pronunciation and are essential components in automatic speech recognition and text-to-speech systems. Recently, the use of neural encoder-decoder architectures has substantially improved G2P accuracy for mono- and multi-lingual cases. However, most multilingual G2P studies focus on sets of languages that share similar graphemes, such as European languages. Multilingual G2P for languages from different writing systems, e.g. European and East Asian, remains an understudied area. In this work, we propose a multilingual G2P model with byte-level input representation to accommodate different grapheme systems, along with an attention-based Transformer architecture. We evaluate the performance of both character-level and byte-level G2P using data from multiple European and East Asian locales. Models using byte representation yield 16.2%–50.2% relative word error rate improvement over character-based counterparts for mono- and multi-lingual use cases. In addition, byte-level models are 15.0%–20.1% smaller in size. Our results show that bytes are an efficient representation for multilingual G2P with languages having large grapheme vocabularies.
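The core idea behind the byte-level representation can be illustrated with a small sketch (not from the paper): encoding input words as UTF-8 byte sequences caps the model's input vocabulary at 256 symbols regardless of script, whereas a character-level vocabulary grows with every added writing system. The function names and example words below are purely illustrative.

```python
def char_tokens(word: str) -> list[str]:
    """Character-level tokens: one symbol per Unicode code point.
    The vocabulary must cover every grapheme in every language."""
    return list(word)

def byte_tokens(word: str) -> list[int]:
    """Byte-level tokens: UTF-8 bytes, vocabulary capped at 256 symbols."""
    return list(word.encode("utf-8"))

# A Latin-script word and an East Asian word for comparison.
for w in ["phoneme", "音素"]:
    print(w, char_tokens(w), byte_tokens(w))

# "phoneme" yields 7 character tokens and 7 byte tokens (ASCII is 1 byte
# per character), while "音素" yields 2 character tokens but 6 byte tokens
# (each CJK character occupies 3 bytes in UTF-8): byte sequences are
# longer for East Asian scripts, but the symbol inventory stays fixed.
```

The trade-off this sketch highlights is sequence length versus vocabulary size: byte inputs produce longer sequences for multi-byte scripts, but the fixed 256-symbol inventory shrinks the embedding table, which is consistent with the 15.0%–20.1% model-size reduction reported in the abstract.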
Published in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 04-08 May 2020
Date Added to IEEE Xplore: 09 April 2020