
A Text-Independent Forced Alignment Method for Automatic Phoneme Segmentation

  • Conference paper
AI 2022: Advances in Artificial Intelligence (AI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13728)


Abstract

Phoneme segmentation is important for many healthcare applications, such as the diagnosis and monitoring of children with speech sound disorders (SSDs). It is usually addressed by forced alignment (FA), which annotates an audio file with information on what has been uttered and where. While many FA tools exist, very few work automatically without the assistance of a transcription. This work provides a novel text-independent FA tool built from two models: wav2vec 2.0 and an unsupervised segmentor known as UnsupSeg. To label the segments, class regions are first obtained by nearest-neighbour classification, using the wav2vec 2.0 labels before CTC collapse as reference points; each segment is then assigned the class with which it has maximal overlap. Additional post-processing steps, such as over-fitting cleaning and voice activity detection, further improve the segmentation performance. All models used in the tool are self-supervised and can therefore leverage large amounts of unlabelled data, reducing the need for labelled data. When evaluated on the TIMIT dataset, our implementation achieved a harmonic mean score of 76.88%, which is competitive with existing alternatives.
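The overlap-based labelling step lends itself to a short illustration. Below is a minimal sketch (not the authors' released code) of how segments from an unsupervised segmentor could be labelled by maximal overlap with frame-level wav2vec 2.0 predictions taken before CTC collapse. The names frame_labels and boundaries, and the fixed 20 ms frame duration, are assumptions made for illustration only.

    from collections import Counter

    FRAME_DUR = 0.02  # wav2vec 2.0 emits roughly one prediction per 20 ms of audio

    def label_segments(frame_labels, boundaries):
        """Assign each (start, end) segment the class that overlaps it most."""
        labelled = []
        for start, end in zip(boundaries[:-1], boundaries[1:]):
            # Convert the segment's time span to frame indices.
            first = int(start / FRAME_DUR)
            last = max(first + 1, int(end / FRAME_DUR))
            frames = frame_labels[first:last]
            if not frames:
                continue  # segment lies beyond the prediction stream
            # The class holding the most frames has maximal temporal overlap.
            label, _ = Counter(frames).most_common(1)[0]
            labelled.append((start, end, label))
        return labelled

    # Hypothetical usage: 0.2 s of frame predictions, boundaries at 0.0/0.1/0.2 s.
    print(label_segments(
        ["sil", "sil", "k", "k", "k", "ae", "ae", "ae", "t", "t"],
        [0.0, 0.1, 0.2]))  # -> [(0.0, 0.1, 'k'), (0.1, 0.2, 'ae')]

Counting frames per class inside a segment is equivalent, at frame resolution, to measuring the temporal overlap between the segment and each class region, which is the criterion described in the abstract.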


Notes

  1. This refers to conditional dependence on the previous states in a sequence.

References

  1. Speech sound disorders - articulation and phonology (2022). https://www.asha.org/practice-portal/clinical-topics/articulation-and-phonology/

  2. Baevski, A., Zhou, Y., Mohamed, A., Auli, M.: wav2vec 2.0: a framework for self-supervised learning of speech representations. In: Proceedings of Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460 (2020)

  3. Daniel, G.R., McLeod, S.: Children with speech sound disorders at school: challenges for children, parents and teachers. Aust. J. Teach. Educ. 42(2), 81–101 (2017)

  4. Furlong, L., Serry, T., Erickson, S., Morris, M.E.: Processes and challenges in clinical decision-making for children with speech-sound disorders. Int. J. Lang. Commun. Disord. 53(6), 1124–1138 (2018)

  5. Garofolo, J.S., et al.: TIMIT Acoustic-Phonetic Continuous Speech Corpus (1993). https://catalog.ldc.upenn.edu/LDC93S1

  6. Graves, A., Schmidhuber, J.: Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Netw. 18(5–6), 602–610 (2005)

  7. Konstantinidis, S.: Computing the edit distance of a regular language. Inf. Comput. 205(9), 1307–1316 (2007)

  8. Kreuk, F.: Self-supervised contrastive learning for unsupervised phoneme segmentation (INTERSPEECH 2020) (2021)

  9. Kreuk, F., Keshet, J., Adi, Y.: Self-supervised contrastive learning for unsupervised phoneme segmentation. arXiv:2007.13465 [cs, eess, stat] (2020)

  10. Kreuk, F., Sheena, Y., Keshet, J., Adi, Y.: Phoneme boundary detection using learnable segmental features. In: Proceedings of ICASSP, Barcelona, Spain, pp. 8089–8093. IEEE (2020)

  11. Mahr, T., Berisha, V., Kawabata, K., Liss, J., Hustad, K.: Performance of forced-alignment algorithms on children's speech. Technical report, PsyArXiv (2020)

  12. McKechnie, J., Ahmed, B., Gutierrez-Osuna, R., Monroe, P., McCabe, P., Ballard, K.J.: Automated speech analysis tools for children's speech production: a systematic literature review. Int. J. Speech Lang. Pathol. 20(6), 583–598 (2018)

  13. McKechnie, J.G.: Exploring the use of technology for assessment and intensive treatment of childhood apraxia of speech. Ph.D. thesis (2019)

  14. McLeod, S., Baker, E.: Speech-language pathologists' practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders. Clin. Linguist. Phon. 28(7–8), 508–531 (2014)

  15. McLeod, S., et al.: Profile of Australian preschool children with speech sound disorders at risk for literacy difficulties. Aust. J. Learn. Diffic. 22(1), 15–33 (2017)

  16. McLeod, S., et al.: Tutorial: speech assessment for multilingual children who do not speak the same language(s) as the speech-language pathologist. Am. J. Speech Lang. Pathol. 26(3), 691–708 (2017)

  17. Nelson, T.L., Mok, Z., Eecen, K.T.: Use of transcription when assessing children's speech: Australian speech-language pathologists' practices, challenges, and facilitators. Folia Phoniatr. Logop. 72(2), 131–142 (2020)

  18. Ochshorn, R., Hawkins, M.: Gentle (2017)

  19. von Platen, P.: Fine-tune Wav2Vec2 for English ASR in Hugging Face with huggingface Transformers (2021). https://huggingface.co/blog/fine-tune-wav2vec2-english

  20. Rosenfelder, I., et al.: FAVE (Forced Alignment and Vowel Extraction) suite version 1.1.3 (2014). https://doi.org/10.5281/zenodo.9846

  21. Schuster, M., Paliwal, K.: Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 45(11), 2673–2681 (1997)

  22. Zhu, J., Zhang, C., Jurgens, D.: Phone-to-audio alignment without text: a semi-supervised approach. In: Proceedings of ICASSP, pp. 8167–8171. IEEE (2022)


Acknowledgement

This work has been supported by the Western Australian Future Health Research and Innovation Fund, an initiative of the WA State Government. It informs a larger research program led by a team of researchers at Curtin University, focused on the development of an application that will provide objective kinematic and acoustic measurements to support speech-language pathologists in the diagnosis of speech sound disorders. The authors would also like to thank the Pawsey Supercomputing Centre for their support.

Author information

Correspondence to Duc-Son Pham.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wohlan, B., Pham, D.S., Chan, K.Y., Ward, R. (2022). A Text-Independent Forced Alignment Method for Automatic Phoneme Segmentation. In: Aziz, H., Corrêa, D., French, T. (eds) AI 2022: Advances in Artificial Intelligence. AI 2022. Lecture Notes in Computer Science (LNAI), vol. 13728. Springer, Cham. https://doi.org/10.1007/978-3-031-22695-3_41


  • DOI: https://doi.org/10.1007/978-3-031-22695-3_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22694-6

  • Online ISBN: 978-3-031-22695-3

  • eBook Packages: Computer Science, Computer Science (R0)
