
Neural Network Pruning by Recurrent Weights for Finance Market

Published: 22 January 2022

Abstract

Convolutional Neural Networks (CNNs) and deep learning are now widely applied in financial markets and have accelerated the development of finance and the Internet economy. Building neural networks with more hidden layers improves performance but also increases computational complexity. Channel pruning methods are generally useful for compacting neural networks. However, typical channel pruning relies on a manually set, static pruning ratio, so it can remove entire layers by mistake and destroy the overall structure of the network. It is therefore difficult to raise the compression ratio by pruning channels alone while keeping the network structure intact.
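To make the structural risk concrete, the sketch below is a minimal, hypothetical PyTorch example in the style of network slimming, not code from this article: channels are masked by comparing BatchNorm scaling factors against a single global threshold derived from a manually set ratio. The helper name `global_channel_mask` and the `prune_ratio` default are assumptions. With such a static ratio, every channel of a layer can fall below the threshold at once, which removes the layer and breaks the network.

```python
import torch
import torch.nn as nn

def global_channel_mask(model, prune_ratio=0.7):
    """Hypothetical sketch: mask channels whose BatchNorm scaling factor
    falls below one global threshold derived from a manually set ratio."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    # A single static threshold for the whole network.
    threshold = gammas.sort().values[int(prune_ratio * gammas.numel())]

    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            mask = m.weight.data.abs() > threshold
            masks[name] = mask
            if mask.sum() == 0:
                # All channels of this layer fell below the global threshold:
                # pruning them would delete the layer and break the network.
                print(f"warning: layer {name} would be removed entirely")
    return masks
```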
Therefore, we propose neural Network Pruning by Recurrent Weights (NPRW), which repeatedly evaluates the significance of weights and adaptively adjusts them to compress neural networks within an acceptable loss of accuracy. Recurrent weights with low sensitivity, identified by their magnitudes, are forced to zero, so the pruned network uses only a small number of significant weights. We then add regularization to the scaling factors of the network, under which recurrent weights with high sensitivity are updated dynamically while weights with low sensitivity remain zero. In this way, the significance of channels can be evaluated quantitatively through recurrent weights. The method has been verified with LeNet, VGGNet, and ResNet on several benchmark datasets covering stock index futures, digit recognition, and image classification. The pruned LeNet-5 achieves a 58.9% reduction in parameters with a 0.29% loss of accuracy on Shanghai and Shenzhen 300 stock index futures. On CIFAR-10, the pruned VGG-19 reduces FLOPs by more than 50% with an accuracy drop of less than 0.5%. In addition, the pruned ResNet-164 tested on SVHN reduces FLOPs by more than 58% with a relative accuracy improvement of 0.11%.
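As an illustration of the recurrent procedure described above, the following is a minimal, hypothetical sketch rather than the authors' implementation: each round forces low-magnitude, low-sensitivity weights to zero, retrains with an L1 penalty on the BatchNorm scaling factors, and keeps pruned weights at zero so that only high-sensitivity weights are updated. The function name `nprw_round`, the threshold value, and the regularization strength are assumptions.

```python
import torch
import torch.nn as nn

def nprw_round(model, loss_fn, data_loader, optimizer,
               sensitivity_threshold=1e-3, l1_lambda=1e-4):
    """Hypothetical sketch of one recurrent pruning round: zero the
    low-sensitivity weights, then retrain with an L1 penalty on the
    BatchNorm scaling factors so channel significance can be read off."""
    # 1. Evaluate significance by weight magnitude and force the
    #    low-sensitivity weights to zero.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                       # convolution / linear weights
            masks[name] = (p.data.abs() > sensitivity_threshold).float()
            p.data.mul_(masks[name])

    # 2. Retrain: high-sensitivity weights are updated dynamically,
    #    pruned weights are held at zero after every step.
    model.train()
    for x, y in data_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        l1 = sum(m.weight.abs().sum() for m in model.modules()
                 if isinstance(m, nn.BatchNorm2d))
        (loss + l1_lambda * l1).backward()
        optimizer.step()
        for name, p in model.named_parameters():
            if name in masks:
                p.data.mul_(masks[name])
    return masks
```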


Cited By

  • (2024) A roulette wheel-based pruning method to simplify cumbersome deep neural networks. Neural Computing and Applications 36, 22 (13915–13933). DOI: 10.1007/s00521-024-09719-6. Online publication date: 1-Aug-2024.
  • (2023) M-E-AWA: A Novel Task Scheduling Approach Based on Weight Vector Adaptive Updating for Fog Computing. Processes 11, 4 (1053). DOI: 10.3390/pr11041053. Online publication date: 31-Mar-2023.
  • (2023) Joint Architecture Design and Workload Partitioning for DNN Inference on Industrial IoT Clusters. ACM Transactions on Internet Technology 23, 1 (1–21). DOI: 10.1145/3551638. Online publication date: 23-Feb-2023.
  • (2023) Convolutional neural network pruning based on misclassification cost. The Journal of Supercomputing 79, 18 (21185–21234). DOI: 10.1007/s11227-023-05487-7. Online publication date: 19-Jun-2023.

    Published In

    ACM Transactions on Internet Technology, Volume 22, Issue 3
    August 2022
    631 pages
    ISSN: 1533-5399
    EISSN: 1557-6051
    DOI: 10.1145/3498359
    Editor: Ling Liu

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 January 2022
    Accepted: 01 November 2021
    Revised: 01 October 2020
    Received: 01 August 2020
    Published in TOIT Volume 22, Issue 3


    Author Tags

    1. Channel pruning
    2. neural networks
    3. recurrent weights
    4. finance market

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • National Natural Science Foundation of China
    • Shanghai Natural Science Foundation
    • Open Project Program of Shanghai Key Laboratory of Data Science
    • State Key Lab of Computer Architecture, ICT, CAS
