Abstract
In this paper, we present a new model for time-series forecasting that uses radial basis functions (RBFs) as the units of artificial neural networks (ANNs) and allows the inclusion of exogenous information (EI) without additional pre-processing. We begin by summarizing the best-known ad hoc EI techniques, namely principal component analysis (PCA) and independent component analysis (ICA), and analyze their advantages and disadvantages for time-series forecasting using Spanish bank and company stocks. We then describe a new hybrid model for time-series forecasting that combines ANNs with genetic algorithms (GAs). Finally, we discuss the possibilities of implementing the model on parallel processing systems.
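As a rough illustration of the kind of model summarized above (a sketch, not the authors' exact formulation), the following Python code fits the output weights of an RBF network by least squares on lagged series values concatenated with a single exogenous regressor; all function names, parameters, and the choice of Gaussian basis are our own assumptions:

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activation of every sample against every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf_forecaster(series, exog, lags=4, n_centers=10, width=1.0, seed=0):
    """Least-squares fit of RBF output weights on lagged series values
    plus one exogenous regressor (a stand-in for the paper's EI inputs)."""
    rng = np.random.default_rng(seed)
    # Each row: [x(t-lags), ..., x(t-1), exog(t)]; target: x(t)
    X = np.column_stack(
        [series[i:len(series) - lags + i] for i in range(lags)] + [exog[lags:]]
    )
    y = series[lags:]
    # Centers chosen as a random subset of the training inputs
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return X, y, centers, w

def rbf_predict(X, centers, w, width=1.0):
    return rbf_design_matrix(X, centers, width) @ w
```

In the paper's setting the exogenous column would be replaced by PCA- or ICA-extracted components, and the centers and widths would be adapted sequentially rather than fixed in advance.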




Notes
Before discussing this problem, we prove the existence of an exact representation of continuous functions in terms of simpler functions, using Kolmogorov’s theorem.
The subindex k is related to the structure or subset of loss functions used in the approximation.
Roughly speaking, the VC dimension, h, measures how many training points can be separated under all possible labelings by functions of the class.
In the strict sense presented in [33], that is, they are bounded functions or satisfy a certain inequality.
For example, Vapnik’s ε-insensitive loss function [33]: \(L\bigl(f(x) - y\bigr) = \begin{cases} \left| f(x) - y \right| - \varepsilon & \text{for } \left| f(x) - y \right| \geq \varepsilon \\ 0 & \text{otherwise} \end{cases}\)
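The same loss can be written as a one-line Python function (the function name is ours, for illustration only):

```python
def eps_insensitive_loss(f_x, y, eps=0.1):
    """Vapnik's eps-insensitive loss: zero inside the eps-tube, linear outside."""
    r = abs(f_x - y)
    return r - eps if r >= eps else 0.0
```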
This calculation must be performed several times during the process.
Generalized differentiation of a functional: \(\mathrm{d}R[f] = \left[ \frac{\mathrm{d}}{\mathrm{d}\rho} R[f + \rho h] \right]_{\rho = 0}\), where h∈H.
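As a worked instance of this definition, take the quadratic functional \(R[f] = \int f(x)^2 \,\mathrm{d}x\) (an illustrative choice, not from the paper):

```latex
R[f + \rho h] = \int \bigl( f(x) + \rho h(x) \bigr)^2 \,\mathrm{d}x
\quad\Rightarrow\quad
\left. \frac{\mathrm{d}}{\mathrm{d}\rho} R[f + \rho h] \right|_{\rho = 0}
  = 2 \int f(x)\, h(x) \,\mathrm{d}x ,
```

so the generalized differential of \(R\) at \(f\) in the direction \(h \in H\) is \(2\int f h\,\mathrm{d}x\).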
The principal feature of these algorithms is the sequential adaptation of neural resources.
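A minimal sketch of such a sequential-allocation criterion, in the spirit of Platt's resource-allocating network [3] (thresholds and names are illustrative, not taken from the paper):

```python
import numpy as np

def ran_should_allocate(x, y_true, y_pred, centers,
                        err_thresh=0.05, dist_thresh=0.5):
    """Novelty test in the spirit of Platt's RAN: allocate a new RBF unit
    only when the current input is poorly predicted AND lies far from
    every existing center; otherwise the existing units are adapted."""
    if len(centers) == 0:
        return True
    error = abs(y_true - y_pred)
    nearest = min(float(np.linalg.norm(np.asarray(x) - np.asarray(c)))
                  for c in centers)
    return bool(error > err_thresh and nearest > dist_thresh)
```

When the test fails, a RAN-style learner would instead update the nearest unit's parameters (e.g. by gradient descent or, as in [5], orthogonal techniques).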
References
Pollock DSG (1999) A handbook of time series analysis, signal processing and dynamics. Academic Press, San Diego, California
Box GEP, Jenkins GM, Reinsel GC (1994) Time series analysis: forecasting and control, 3rd edn. Prentice Hall, Englewood Cliffs, New Jersey
Platt J (1991) A resource-allocating network for function interpolation. Neural Comput 3(2):213-225
Salmerón-Campos M (2001) Predicción de series temporales con redes neuronales de funciones radiales y técnicas de descomposición matricial. PhD thesis, Departamento de Arquitectura y Tecnología de Computadores, University of Granada
Salmerón M, Ortega J, Puntonet CG, Prieto A (2001) Improved RAN sequential prediction using orthogonal techniques. Neurocomputing 41:153-172
Masters T (1995) Neural, novel and hybrid algorithms for time series prediction. Wiley, New York
Back AD, Weigend AS (1997) Discovering structure in finance using independent component analysis. In: Proceedings of the 5th international conference on neural networks in the capital markets (Computational finance 1997), London, December 1997
Back AD, Trappenberg TP (2001) Selecting inputs for modelling using normalized higher order statistics and independent component analysis. IEEE Trans Neural Networ 12(3):612-617
Hyvärinen A, Oja E (2000) Independent component analysis: algorithms and applications. Neural Networks 13:411-430
Bell AJ, Sejnowski TJ (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Comput 7:1129-1159
Comon P (1994) Independent component analysis: a new concept. Signal Process 36:287-314
Amari S, Cichocki A, Yang HH (1996) A new learning algorithm for blind source separation. In: Advances in neural information processing systems 8. MIT Press, Cambridge, Massachusetts, pp 757-763
Puntonet CG (1994) Nuevos algoritmos de separación de fuentes en medios lineales. PhD thesis, Departamento de Arquitectura y Tecnología de Computadores, University of Granada
Theis FJ, Jung A, Puntonet C, Lang EW (2003) Linear geometric ICA: fundamentals and algorithms. Neural Comput 15(2):419-439
Puntonet CG, Mansour A, Ohnishi N (2002) Blind multiuser separation of instantaneous mixture algorithm based on geometrical concepts. Signal Process 82(8):1155-1175
Puntonet CG, Mansour A (2001) Blind separation of sources using density estimation and simulated annealing. IEICE Trans Fund Electr E84-A:2539-2547
Rodríguez-Álvarez M, Puntonet CG, Rojas I (2001) Separation of sources based on the partitioning of the space of observations. Lect Notes Comput Sci 2085:762-769
Górriz Sáez JM (2003) Algoritmos híbridos para la modelización de series temporales con técnicas AR-ICA. PhD thesis, Departamento de Ing de Sistemas y Aut, Tec Electrónica y Electrónica, University of Cádiz
Back AD, Weigend AS (1997) Discovering structure in finance using independent component analysis. In: Proceedings of the 5th international conference on neural networks in the capital markets (Computational finance 1997), London, December 1997
Moody J, Darken CJ (1989) Fast learning in networks of locally-tuned processing units. Neural Comput 1:284-294
Hastie T, Tibshirani R, Friedman J (2000) The elements of statistical learning. Springer, Berlin Heidelberg New York
Michalewicz Z (1992) Genetic algorithms + data structures = evolution programs. Springer, Berlin Heidelberg New York
Szapiro T, Matwin S, Haigh K (1991) Genetic algorithms approach to a negotiation support system. IEEE Trans Syst Man Cybern 21:102-114
Chen S, Wu Y (1997) Genetic algorithm optimization for blind channel identification with higher order cumulant fitting. IEEE Trans Evolut Comput 1:259-264
Chao L, Sethares W (1994) Nonlinear parameter estimation via the genetic algorithm. IEEE Trans Signal Proces 42:927-935
Häggström O (1998) Finite Markov chains and algorithmic applications. Cambridge University Press, Cambridge, UK
Schmitt LM, Nehaniv CL, Fujii RH (1998) Linear analysis of genetic algorithms. Theor Comput Sci 200:101-134
Suzuki J (1995) A Markov chain analysis on simple genetic algorithms. IEEE Trans Syst Man Cybern 25:655-659
Eiben AE, Aarts EHL, Van Hee KM (1991) Global convergence of genetic algorithms: a Markov chain analysis. In: Parallel problem solving from nature. Lect Notes Comput Sci 496:4-12
Schmitt LM (2001) Theory of genetic algorithms. Theoret Comput Sci 259:1-61
Lozano JA, Larrañaga P, Graña M, Albizuri FX (1999) Genetic algorithms: bridging the convergence gap. Theoret Comput Sci 229:11-22
Rudolph G (1994) Convergence analysis of canonical genetic algorithms. IEEE Trans Neural Networ 5:96-101
Vapnik V (1998) Statistical learning theory. Wiley, New York
Tikhonov AN, Arsenin VY (1977) Solutions of ill-posed problems. Winston, Washington, pp 415-438
Vapnik V, Chervonenkis A (1974) Theory of pattern recognition (in Russian). Nauka, Moscow
Müller KR, Smola AJ, Rätsch G, Schölkopf B, Kohlmorgen J (1999) Using support vector machines for time series prediction. In: Schölkopf B, Burges CJC, Smola AJ (eds) Advances in kernel methods-support vector learning. MIT Press, Cambridge, Massachusetts, pp 243-254
Müller KR, Smola AJ, Rätsch G, Schölkopf B, Kohlmorgen J, Vapnik V (1997) Predicting time series with support vector machines. In: Proceedings of the 7th international conference on artificial neural networks (ICANN’97), Lausanne, Switzerland, May 1997, pp 999-1004
Smola AJ, Schölkopf B, Müller KR (1998) The connection between regularization operators and support vector kernels. Neural Networks 11:637-649
Müller KR, Mika S, Rätsch G, Tsuda K, Schölkopf B (2001) An introduction to kernel-based learning algorithms. IEEE Trans Neural Networ 12(2):181-201
Vapnik V, Lerner A (1963) Pattern recognition using generalized portrait method. Automat Rem Contr 24:774-780
Kuhn HW, Tucker AW (1951) Nonlinear programming. In: Proceedings of the 2nd Berkeley symposium on mathematical statistics and probabilistics. University of California Press, pp 481-492
Kohonen T (1990) The self-organizing map. Proc IEEE 78(9):1464-1480
Cao L (2003) Support vector machines experts for time series forecasting. Neurocomputing 51:321-339
Cite this article
Górriz, J.M., Puntonet, C.G., Salmerón, M. et al. A new model for time-series forecasting using radial basis functions and exogenous data. Neural Comput & Applic 13, 101–111 (2004). https://doi.org/10.1007/s00521-004-0412-5