
Part of the book series: Informatik-Fachberichte (volume 291)

Abstract

Whereas the identification and control of linear systems are well understood, this does not hold in general for nonlinear systems. Here, neural nets open up new paths for treating multidimensional nonlinear systems, as well as the possibility of adaptive readjustment to changes in the environment and in the system parameters. The advantages of neural control are of particular value for robotics. On the subsymbolic level, the goal is a symbiosis of sensors and actuators with neural signal processing and control. However, we intend to use traditional AI techniques in cases where a robust knowledge representation beyond the subsymbolic level is required, e.g. for spatial representation. In many applications, the problem is to extract significant control parameters from visual sensor data in a robust and efficient manner; for this task, neural nets are particularly well suited. Mathematical models for machine learning, as well as unifying dynamical concepts, will be utilized to achieve quantitative, generalizable results on the efficiency of neural nets, taking into account the real-world requirements of control tasks with respect to performance, reliability and fault tolerance. Speech is of special significance for dialogue with autonomous systems. Since neural nets have led to encouraging results in speech processing, corresponding techniques will also be applied in robotics.
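The identification of a nonlinear system with a neural net, as discussed above, can be illustrated with a minimal sketch. The toy plant, the one-hidden-layer architecture, and the learning rate below are illustrative assumptions, not part of the original project; the net is trained to predict the plant output y(k+1) from the current state y(k) and input u(k):

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    # Hypothetical nonlinear plant to be identified (assumed for illustration).
    return y / (1.0 + y**2) + u**3

# Generate training data: random control inputs, recorded plant response.
u = rng.uniform(-1, 1, 2000)
y = np.zeros(len(u) + 1)
for k in range(len(u)):
    y[k + 1] = plant(y[k], u[k])

X = np.column_stack([y[:-1], u])   # net input: (y(k), u(k))
T = y[1:]                          # target:    y(k+1)

# One hidden layer of tanh units, trained by plain batch gradient descent.
H = 20
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H);      b2 = 0.0
lr = 0.05

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    p = h @ W2 + b2                # predicted y(k+1)
    e = p - T                      # prediction error
    # Backpropagate the mean squared error.
    gW2 = h.T @ e / len(T); gb2 = e.mean()
    dh  = np.outer(e, W2) * (1.0 - h**2)
    gW1 = X.T @ dh / len(T); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2)
```

After training, the net's one-step prediction error should fall well below the variance of the plant output, indicating that the nonlinear input-output map has been captured from data alone.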




Copyright information

© 1991 Springer-Verlag Berlin Heidelberg


Cite this paper

Schürmann, B., Hirzinger, G., Hernández, D., Simon, H.U., Hackbarth, H. (1991). Neural Control Within the BMFT-Project Neres. In: Brauer, W., Hernández, D. (eds) Verteilte Künstliche Intelligenz und kooperatives Arbeiten. Informatik-Fachberichte, vol 291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76980-1_51


  • DOI: https://doi.org/10.1007/978-3-642-76980-1_51

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-54617-7

  • Online ISBN: 978-3-642-76980-1

  • eBook Packages: Springer Book Archive
