Computational Cost Reduction by Selective Attention for Fast Speaker Adaptation in Multilayer Perceptron

  • Conference paper
Developments in Applied Artificial Intelligence (IEA/AIE 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2358)

Abstract

Selective attention learning is proposed to improve the speed of the error backpropagation algorithm for a multilayer perceptron. Class-selective relevance, computed in an off-line stage to evaluate the importance of each hidden node, and a node attention technique, which measures the local errors appearing at the output and hidden nodes during on-line learning, are employed to update the weights of the network selectively. Learning time is reduced by lowering the computational cost of each update, and combining this method with other improved learning algorithms yields a further gain in learning speed. The effectiveness of the proposed method is demonstrated on the speaker adaptation task of an isolated-word recognition system. The experimental results show that the proposed selective attention technique reduces adaptation time by more than 65% on average.
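
To make the mechanism concrete, the following is a minimal sketch of one selective-attention backpropagation step for a single-hidden-layer perceptron with logistic units. It is an illustration under stated assumptions, not the authors' implementation: the names `selective_update`, `relevance`, `theta_rel`, and `theta_err` are hypothetical, and the random `relevance` vector in the toy usage merely stands in for the class-selective relevance scores that the paper computes off-line.

```python
import numpy as np

# Minimal sketch, not the authors' code: the function and parameter names
# (selective_update, relevance, theta_rel, theta_err) are hypothetical.
def selective_update(x, t, W1, W2, relevance, lr=0.1,
                     theta_rel=0.01, theta_err=1e-3):
    """One backprop step that updates only the 'attended' hidden nodes."""
    # Forward pass with logistic (sigmoid) units.
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))      # hidden activations, (n_hid,)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h)))      # output activations, (n_out,)

    # Local errors (deltas), exactly as in standard error backpropagation.
    delta_out = (y - t) * y * (1.0 - y)              # output-layer delta
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # hidden-layer delta

    # Attention mask: a hidden node is updated only if it is relevant to
    # the task (off-line class-selective relevance) AND currently carries
    # a non-negligible local error (on-line node attention).
    active = (relevance >= theta_rel) & (np.abs(delta_hid) >= theta_err)

    # Gradient-descent updates restricted to the attended nodes.
    W2[:, active] -= lr * np.outer(delta_out, h[active])
    W1[active, :] -= lr * np.outer(delta_hid[active], x)
    return 0.5 * np.sum((y - t) ** 2)        # squared error, for monitoring

# Toy usage: 13 inputs (e.g. one MFCC frame), 20 hidden nodes, 5 word classes.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(20, 13))
W2 = rng.normal(scale=0.1, size=(5, 20))
relevance = rng.random(20)                   # placeholder for off-line scores
err = selective_update(rng.normal(size=13), np.eye(5)[2], W1, W2, relevance)
```

Restricting the weight updates to the attended nodes is what lowers the per-pattern cost: in a per-node loop implementation, the gradient terms for masked-out hidden nodes are never computed at all.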

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kim, I.C., Chien, S.I. (2002). Computational Cost Reduction by Selective Attention for Fast Speaker Adaptation in Multilayer Perceptron. In: Hendtlass, T., Ali, M. (eds) Developments in Applied Artificial Intelligence. IEA/AIE 2002. Lecture Notes in Computer Science (LNAI), vol 2358. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48035-8_3

  • DOI: https://doi.org/10.1007/3-540-48035-8_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43781-9

  • Online ISBN: 978-3-540-48035-8
