Abstract
Because of the curse of dimensionality, data in a high-dimensional space rarely provide sufficient information for training neural networks, which makes it difficult to approximate a high-dimensional space with neural networks directly. To address this, we propose a method in which a neural network approximates a high-dimensional function that, in turn, effectively approaches the high-dimensional space, rather than using the neural network to approximate the space directly. Accordingly, two error bounds are derived from the Lipschitz condition: one for the neural network's approximation of the high-dimensional function, and the other for the high-dimensional function's approach to the high-dimensional space. Experimental results on synthetic and real-world datasets show that our method is effective and outperforms competing methods in approximating the high-dimensional space. We find that using a neural network to approximate a high-dimensional function that effectively approaches the space is more resistant to the curse of dimensionality. In addition, the approximation ability of the proposed method depends on both the number of hidden layers and the choice of high-dimensional function, with the latter being the dominant factor. Our findings also show no obvious dependency between the number of hidden layers of the proposed method and the choice of high-dimensional function.
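The two-bound argument can be made concrete with a schematic error decomposition. The notation below is assumed for illustration and is not taken verbatim from the paper: with f_θ the network, g the chosen high-dimensional function, and s a point of the target space, the total error splits by the triangle inequality into exactly the two terms the abstract names, each of which is then bounded under a Lipschitz condition on g:

$$
\underbrace{\|f_\theta(x) - s\|}_{\text{total error}}
\;\le\;
\underbrace{\|f_\theta(x) - g(x)\|}_{\text{network} \to \text{function}}
\;+\;
\underbrace{\|g(x) - s\|}_{\text{function} \to \text{space}}.
$$

The following is a minimal sketch, not the authors' implementation, of the first stage of this idea: train a small network to fit an assumed high-dimensional function g (here an arbitrary smooth curve embedded in R^d), instead of fitting scattered d-dimensional data points directly. The function g, the architecture, and all hyperparameters are illustrative assumptions.

import math
import torch
import torch.nn as nn

d = 64  # ambient dimension of the target space (assumption)

def g(t: torch.Tensor) -> torch.Tensor:
    """An assumed smooth curve embedded in R^d, standing in for the
    'high-dimensional function' that approaches the data space."""
    freqs = torch.arange(1, d + 1, dtype=torch.float32)
    return torch.sin(t * freqs)  # (batch, 1) * (d,) -> (batch, d)

net = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, d),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(256, 1) * 2 * math.pi  # sample the low-dim parameter
    loss = nn.functional.mse_loss(net(t), g(t))  # network-to-function error
    opt.zero_grad()
    loss.backward()
    opt.step()

The sketch only addresses the first term of the bound; how well g itself covers the data space is the second, separate term, which in the proposed method is governed by the choice of g rather than by the network.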
Data availability
All real-world datasets in this work can be found at http://archive.ics.uci.edu/ml/
Acknowledgements
This work was supported by the Science and Technology Research Program of Chongqing Municipal Education Commission of China under Grants KJQN201903003 and KJQN202003001, by the Chongqing Municipal Education Commission of China under Grant 192072, and by the Higher Education of Chongqing Municipal Education Commission of China under Grant CQGJ20ZX021.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Cite this article
Zheng, J., Wang, J., Chen, Y. et al. Effective approximation of high-dimensional space using neural networks. J Supercomput 78, 4377–4397 (2022). https://doi.org/10.1007/s11227-021-04038-2