Abstract
Whereas the identification and control of linear systems are well understood, this does not apply in general to nonlinear systems. Here, neural nets open up new paths for the treatment of multidimensional nonlinear systems, as well as the possibility of adaptive readjustment to changes in the environment and in the system parameters. The advantages of neural control are of particular value for robotics. On the subsymbolic level, the goal is a symbiosis between sensors and actuators on the one hand and neural signal processing and control on the other. However, we intend to use traditional AI techniques in cases where a robust knowledge representation is required that goes beyond the subsymbolic level, e.g. for the representation of space. In many applications, the problem is to extract significant control parameters from visual sensor data in a robust and efficient manner; for this task, neural nets are particularly well suited. Mathematical models of machine learning, as well as unifying dynamical concepts, will be utilized to achieve quantitative, generalizable results on the efficiency of neural nets, taking into account the real-world requirements of control tasks with respect to performance, reliability and fault tolerance. Speech is of special significance for the dialogue with autonomous systems. Since neural nets have led to encouraging results in speech processing, corresponding techniques will also be applied in robotics.
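The identification task mentioned above can be sketched in code. The following is a minimal illustration in the spirit of the series-parallel identification scheme of Narendra and Parthasarathy (cited in the references); the particular plant equation, network size, learning rate and training regime are assumptions made for the sake of example, not the project's actual setup. A one-hidden-layer net is trained by backpropagation to predict the next plant output from the current output and input.

```python
import numpy as np

# Hypothetical nonlinear plant (illustration only):
#   y(k+1) = y(k) / (1 + y(k)^2) + u(k)^3
def plant(y, u):
    return y / (1.0 + y * y) + u ** 3

rng = np.random.default_rng(0)

# Generate identification data: random excitation u(k),
# series-parallel regressors [y(k), u(k)] -> target y(k+1)
N = 2000
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = plant(y[k], u[k])
X = np.stack([y[:N], u], axis=1)      # network inputs
T = y[1:N + 1].reshape(-1, 1)         # targets

# One-hidden-layer tanh net, trained by plain per-sample gradient descent
H = 20
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse():
    _, out = forward(X)
    return float(np.mean((out - T) ** 2))

initial_error = mse()
for epoch in range(30):
    for i in rng.permutation(N):
        x, t = X[i:i + 1], T[i:i + 1]
        h, out = forward(x)
        e = out - t                    # gradient of squared error w.r.t. output
        gW2 = h.T @ e; gb2 = e[0]
        dh = (e @ W2.T) * (1 - h * h)  # backpropagate through tanh
        gW1 = x.T @ dh; gb1 = dh[0]
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
final_error = mse()
print(initial_error, final_error)
```

After training, the prediction error should have dropped well below its initial value, i.e. the net has identified the plant's input-output map on the excited region of state space; a controller can then be designed against this learned model.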
References
J.S. Albus. A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC). Transactions of the ASME, Journal of Dynamic Systems, Measurement, and Control: 221–227, Sept. 1975.
S. Annulova, J. Cuellar, K. U. Höffgen, and H. U. Simon. Probably almost optimal neural classifiers. In preparation.
H. Asada. Teaching and Learning of Compliance Using Neural Nets: Representation and Generation of Nonlinear Compliance. 1990 IEEE Int. Conf. Robotics and Automation, Cincinnati, May 13–18, 1990.
E. B. Baum. The perceptron algorithm is fast for non-malicious distributions. Neural Computation, 2: 249–261, 1990.
E. B. Baum. Polynomial time algorithms for learning neural nets. In M. A. Fulk and J. Case, editors, Proc. of the 3rd Annual Workshop on Computational Learning Theory, 258–273, San Mateo, California, Aug. 1990. Morgan Kaufmann.
E. B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1: 151–160, 1989.
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36 (4): 929–965, Oct. 1989.
L. Bottou, J.S. Liénard. Multispeaker Digit Recognition. Intl. Conf. on Connectionism in Perspective, Zurich, 38–44, 1988.
M. Codogno, R. Gemello, F. Mana, P. Demichelis, P. Laface, E. Piccolo. ESPRIT Project 2059 “Pygmalion”. Final Report on Task 4.3, 1990.
W. Finnoff, H.G. Zimmermann. Reducing complexity and improving generalization in neural networks by mixed strategies. Submitted to NIPS 91.
C. Freksa. Qualitative spatial reasoning. In Mark and Frank [30].
C. Freksa. Temporal reasoning based on semi-intervals. Technical Report TR-90-016, ICSI, Berkeley, CA, April 1990.
P. Fischer, S. Polt, and H. U. Simon. Probably almost bayes decisions. In Proc. of the 4th Annual Workshop on Computational Learning Theory, San Mateo, California, Aug. 1991. To appear.
H. Hackbarth, M. Immendörfer. Speaker-dependent isolated word recognition by artificial neural networks. Proc. VERBA 90 Intl. Conf. on Speech Technol., 91–98, 1990.
H. Hackbarth, J. Mantel. Neural subnet assembly for recognition from medium-sized vocabularies. ICANN-91 Neurocomputing Conf., Helsinki, 1991 (accepted).
S.J. Hanson, L.Y. Pratt. Comparing biases for minimal network construction with back-propagation. Advances in Neural Information Processing I, D. S. Touretzky, Ed., Morgan Kaufmann, 177–185, 1989.
D. Haussler. Generalizing the pac model for neural net and other learning applications. Research Report UCSC-CRL-89-30, University of California Santa Cruz, Sept. 1989.
D. Hernández. Relative Representation of Spatial Knowledge: The 2-D Case. In Mark and Frank [30]
G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 40: 185–235, 1989.
J. Hollatz, B. Schürmann. The “Detailed Balance” Net: A Stable Asymmetric Artificial Neural System for Unsupervised Learning. Proceedings of the IEEE International Conference on Neural Networks, San Diego Vol. III, 453–459, 1990.
R. Hofmann, M. Röscheisen, V. Tresp. Parsimonious Networks of Locally-Tuned Units. Submitted to NIPS 91.
B. Huberman, D. Rumelhart, A. Weigend. Generalization by weight elimination with application to forecasting. Advances in Neural Information Processing III, Ed. R. P. Lippman and J. Moody, Morgan Kaufmann, 1991.
E. Karnin. A simple procedure for pruning back-propagation trained neural networks. IEEE Trans. on Neural Networks, 1 (2): 239–242, June 1990.
M. Kearns, M. Li, L. Pitt, and L. Valiant. Recent results on boolean concept learning. In Workshop on Machine Learning, Irvine, 1987.
M. J. Kearns and R. E. Schapire. Efficient distribution-free learning of probabilistic concepts. In Proc. of the 31st Symposium on Foundations of Computer Science. IEEE Computer Society, Oct. 1990. To appear.
A. Krause, H. Hackbarth. Scaly artificial neural networks for speaker-independent recognition of isolated words. Proc. IEEE ICASSP ’89, 21–24, 1989.
F. Lange. A Learning Concept for Improving Robot Force Control, IFAC Symposium on Robot Control. Karlsruhe, Oct. 1988.
F. Lange. Schätzung und Darstellung von mehrdimensionalen Abbildungen. DLR-Mitteilung 90-06.
Y. Le Cun, J.S. Denker, S.A. Solla. Optimal Brain Damage, in: D.S. Touretzky (ed.), Neural Information Processing Systems, Morgan Kaufmann, 598–605, 1990.
D.M. Mark, A.U. Frank, editors. Cognitive and Linguistic Aspects of Geographic Space. NATO Advanced Studies Institute. Kluwer, Dordrecht, 1990.
S. Miesbach. Effective Gradient Computation for Continuous and Discrete Time-Dependent Neural Networks. Submitted to IJCNN-91, Singapore.
J. Moody, C. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, Vol. 1, 281–294, 1989.
M.C. Mozer, P. Smolensky. Skeletonization: A Technique for Trimming the Fat from a Network via Relevance Assessment, in: D.S. Touretzky (ed.), Neural Information Processing Systems, Morgan Kaufmann, 107–115, 1989.
K.S. Narendra, K. Parthasarathy. Identification and Control of Dynamical Systems Using Neural Networks. IEEE Transactions on Neural Networks, Vol.1, No.1, 4–27, 1990.
T. Poggio, F. Girosi. Networks for approximation and learning. Proceedings of the IEEE, Vol. 78, 1481–1497, 1990.
U. Ramacher, B. Schürmann. Unified Description of Neural Algorithms for Time-Independent Pattern Recognition, in: U. Ramacher, U. Rückert (ed.), VLSI Design of Neural Networks, Kluwer Academic Publishers, 255–270, 1990.
J.H. Schmidhuber. Learning to Control Fast-Weight Memories: An Alternative to Recurrent Nets. Technical Report FKI-147-91, Institut für Informatik, Technische Universität München, 1991.
J.H. Schmidhuber. Learning to Generate Sub-Goals for Action Sequences. Proceedings ICANN 91, Elsevier Science Publishers B.V., 1991, to appear.
J.H. Schmidhuber. Neural Sequence Chunkers. Technical Report FKI-148-91, Institut für Informatik, Technische Universität München, 1991.
J.H. Schmidhuber. Adaptive Curiosity and Adaptive Confidence. Technical Report FKI-149-91, Institut für Informatik, Technische Universität München, 1991.
J.H. Schmidhuber. An O(n³) Learning Algorithm for Fully Recurrent Networks. Technical Report FKI-151-91, Institut für Informatik, Technische Universität München, 1991.
B. Schürmann, J. Hollatz, D. Gawronska. Recurrent and Feedforward Multi Layer Perceptrons in Comparison. Submitted to NIPS 91.
H. U. Simon. Algorithmisches Lernen auf der Basis empirischer Daten. In Tagungsband des 4ten int. GI-Kongresses über wissensbasierte Systeme, Oct. 1991. These Proceedings.
L. G. Valiant. A theory of the learnable. Communications of the ACM, 27 (11): 1134–1142, Nov. 1984.
© 1991 Springer-Verlag Berlin Heidelberg
Schürmann, B., Hirzinger, G., Hernández, D., Simon, H.U., Hackbarth, H. (1991). Neural Control Within the BMFT-Project Neres. In: Brauer, W., Hernández, D. (eds) Verteilte Künstliche Intelligenz und kooperatives Arbeiten. Informatik-Fachberichte, vol 291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76980-1_51
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-54617-7
Online ISBN: 978-3-642-76980-1