
Parallelism, Localization and Chain Gradient Tuning Combinations for Fast Scalable Probabilistic Neural Networks in Data Mining Applications

  • Conference paper

Artificial Intelligence: Theories and Applications (SETN 2012)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7297)

Abstract

This work investigates the scalability of Probabilistic Neural Networks (PNNs) through parallelization, localization, and chain gradient tuning. Since the PNN model is inherently parallel, three common parallel approaches are studied here: data parallelism, neuron parallelism, and pipelining. Localization methods based on clustering algorithms are used to reduce the size of the PNN hidden layer. Localization can be problematic in the multi-class case, and this paper proposes two simple, fast, approximate solutions. The first applies the sigma smoothing parameters obtained from the initial parallel PNN training directly to the clustering step; this achieves a substantial reduction in neurons without significant loss of recognition accuracy. The second is an additional tuning step: using confidence outputs, a chain training approach is employed to tune for the best possible PNN architecture.
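To make the two ideas in the abstract concrete, the following is a minimal sketch of a Gaussian (Parzen-window) PNN classifier with a clustering-based localization step that shrinks the hidden layer. It is an illustration of the general technique only, assuming a plain k-means reduction per class; the class and method names (`PNN`, `localize`) are hypothetical and not the paper's exact algorithm, which ties the clustering to the trained sigma parameters and adds chain gradient tuning.

```python
import numpy as np

class PNN:
    """Sketch of a Probabilistic Neural Network with Gaussian pattern units."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing parameter of the Gaussian kernels

    def fit(self, X, y):
        # "Training" stores every sample as one pattern-layer neuron,
        # grouped by class -- this is why the hidden layer grows with the data.
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def localize(self, n_centers):
        # Localization: replace each class's pattern neurons by k-means
        # centroids, reducing the hidden-layer size. Naive Lloyd's algorithm
        # is used here to keep the sketch self-contained.
        Xr, yr = [], []
        rng = np.random.default_rng(0)
        for c in self.classes:
            Xc = self.X[self.y == c]
            k = min(n_centers, len(Xc))
            centers = Xc[rng.choice(len(Xc), k, replace=False)]
            for _ in range(20):
                d = ((Xc[:, None] - centers[None]) ** 2).sum(-1)
                lab = d.argmin(1)
                centers = np.array([Xc[lab == j].mean(0) if (lab == j).any()
                                    else centers[j] for j in range(k)])
            Xr.append(centers)
            yr.extend([c] * k)
        self.X, self.y = np.vstack(Xr), np.array(yr)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        d2 = ((X[:, None] - self.X[None]) ** 2).sum(-1)   # squared distances
        k = np.exp(-d2 / (2 * self.sigma ** 2))           # Gaussian kernel
        # Summation layer: average kernel activation per class, then argmax.
        scores = np.stack([k[:, self.y == c].mean(1) for c in self.classes], 1)
        return self.classes[scores.argmax(1)]
```

The pattern layer's independence across samples and classes is also what makes the model "inherently parallel": the kernel evaluations in `predict` can be partitioned over data, over neurons, or pipelined, matching the three parallel approaches studied in the paper.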





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kokkinos, Y., Margaritis, K. (2012). Parallelism, Localization and Chain Gradient Tuning Combinations for Fast Scalable Probabilistic Neural Networks in Data Mining Applications. In: Maglogiannis, I., Plagianakos, V., Vlahavas, I. (eds) Artificial Intelligence: Theories and Applications. SETN 2012. Lecture Notes in Computer Science, vol 7297. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30448-4_6

  • DOI: https://doi.org/10.1007/978-3-642-30448-4_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-30447-7

  • Online ISBN: 978-3-642-30448-4

  • eBook Packages: Computer Science, Computer Science (R0)
