DOI: 10.1145/3512388.3512425
Research Article

A Discrete Matrix-product Operation

Published: 28 March 2022

Abstract

Convolutional neural networks, built on the discrete convolution operation, have played an indelible role in the breakthroughs and development of deep learning. However, the convolutional neural network also has shortcomings: it tends to memorize the attributes of the object it learns, while failing to learn the directional information of those attributes. To address these shortcomings, this paper builds on the discrete convolution operation and proposes a new operation, the discrete matrix-product operation. The paper focuses on the definition and properties of the discrete matrix-product operation, as well as the definition and properties of its discrete Fourier transform and the corresponding matrix-product theorem. This provides a theoretical basis for the matrix-product neural network and is intended to promote the further development of deep learning.
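
As background for the abstract, the following is a minimal sketch (not from the paper) of the standard 2-D circular discrete convolution and the classical convolution theorem under the discrete Fourier transform, which the abstract's matrix-product theorem is described as paralleling for the proposed discrete matrix-product operation. The helper name circular_conv2d and the use of NumPy are illustrative assumptions, not the authors' formulation.

import numpy as np

def circular_conv2d(x, h):
    # Standard 2-D circular (discrete) convolution of two equally shaped arrays:
    # y[m, n] = sum over p, q of x[p, q] * h[(m - p) mod M, (n - q) mod N]
    M, N = x.shape
    y = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            for p in range(M):
                for q in range(N):
                    y[m, n] += x[p, q] * h[(m - p) % M, (n - q) % N]
    return y

# Classical convolution theorem: the DFT of a circular convolution equals the
# element-wise product of the DFTs of the operands.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
h = rng.standard_normal((4, 4))
lhs = np.fft.fft2(circular_conv2d(x, h))
rhs = np.fft.fft2(x) * np.fft.fft2(h)
assert np.allclose(lhs, rhs)

Per the abstract, the paper's matrix-product theorem plays an analogous role for the discrete matrix-product operation; the operation's definition is given in the paper and is not reproduced here.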

Published In

ICIGP '22: Proceedings of the 2022 5th International Conference on Image and Graphics Processing
January 2022
391 pages
ISBN:9781450395465
DOI:10.1145/3512388

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. discrete Fourier transform
  2. discrete matrix-product operation
  3. matrix-product theorem
  4. properties of discrete Fourier transform
  5. properties of discrete matrix-product operation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Anhui Polytechnic University Introduced Talent Research Startup Fund

Conference

ICIGP 2022

Bibliometrics & Citations

Article Metrics

  • Total Citations: 0
  • Total Downloads: 17
  • Downloads (Last 12 months): 0
  • Downloads (Last 6 weeks): 0

Reflects downloads up to 07 Mar 2025
