Abstract
Currently, few tools are available for segmenting ancient Japanese sentences into words, which makes it difficult to extract archaic Japanese words from ancient Japanese writings. We propose a word segmentation method for ancient Japanese writings: we calculate the likelihood that each character n-gram is a word, and extract the character n-grams with the highest likelihood as archaic Japanese words. We conducted word segmentation experiments using this term likelihood with the proposed method.
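The abstract does not give the exact probability model, so the following is only a minimal sketch of the general idea: count character n-grams over a corpus and rank them by a simple frequency-based likelihood. The function names, the `max_n` and `min_count` parameters, and the placeholder corpus are all illustrative assumptions, not the authors' actual formulation.

```python
from collections import Counter

def char_ngrams(text, n):
    """Return all character n-grams of length n from the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def term_likelihood(corpus, max_n=4, min_count=2):
    """Score character n-grams by a simple frequency-based likelihood.

    Here the likelihood of an n-gram g is approximated by
    count(g) / total count of n-grams of the same length;
    the paper's actual probability model may differ.
    """
    scores = {}
    for n in range(1, max_n + 1):
        counts = Counter()
        for sentence in corpus:
            counts.update(char_ngrams(sentence, n))
        total = sum(counts.values())
        for gram, c in counts.items():
            if c >= min_count:
                scores[gram] = c / total
    return scores

# Hypothetical usage: extract the highest-scoring n-grams as word candidates.
corpus = ["いにしへのならのみやこのやへざくら"]  # placeholder text, not from the paper
candidates = sorted(term_likelihood(corpus, min_count=1).items(),
                    key=lambda kv: kv[1], reverse=True)[:10]
print(candidates)
```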
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Yoshimura, M., Kimura, F., Maeda, A. (2012). Word Segmentation for Text in Japanese Ancient Writings Based on Probability of Character N-Grams. In: Chen, HH., Chowdhury, G. (eds) The Outreach of Digital Libraries: A Globalized Resource Network. ICADL 2012. Lecture Notes in Computer Science, vol 7634. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34752-8_38
DOI: https://doi.org/10.1007/978-3-642-34752-8_38
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34751-1
Online ISBN: 978-3-642-34752-8
eBook Packages: Computer Science (R0)