Stable and compact design of Memristive GoogLeNet Neural Network
Preface
In recent years, benefiting from progress in algorithms, computing power, and data sets, deep learning has developed tremendously in fields such as security, e-commerce, manufacturing, agriculture, and smart homes, and has greatly improved the efficiency of human production and life [1]. Current intelligent applications based on deep learning usually rely on cloud data centers with powerful computing capabilities because of their heavy computational requirements [2], [3].
Related work
The memristor, with its variable resistance, non-volatility, low power consumption, and high integration density, has very promising applications in the fields of storage, artificial neural networks, and logic computing.
The development of neuromorphic computing circuits based on memristors can be divided into three stages. The first stage is the development of a single device: in 2008, the team of Stanley Williams at Hewlett–Packard Labs fabricated a memristor in the laboratory for the first
Model of memristor
Leon Chua of UC Berkeley first proposed the concept of the memristor in 1976 [13]. Since Hewlett–Packard Labs reported the physical realization and mathematical model of the memristor in 2008 [46], [47], memristor research has become a hotspot in both academia and industry. Research teams around the world are trying to use various materials to fabricate new devices with memristive characteristics. With the production of memristors with different materials and
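The linear ion-drift model published by HP Labs [46] is a common starting point for circuit-level simulation. Below is a minimal Euler-integration sketch of that model; the device parameters (D, R_on, R_off, mu_v) are illustrative values chosen for this sketch, not taken from this article:

```python
import numpy as np

def hp_memristor_step(w, v, dt, D=10e-9, R_on=100.0, R_off=16e3, mu_v=1e-14):
    """One Euler step of the linear ion-drift HP memristor model.

    w: internal state (doped-region width, m); v: applied voltage (V).
    Returns the updated state and the instantaneous memristance M.
    """
    x = w / D                           # normalized state in [0, 1]
    M = R_on * x + R_off * (1.0 - x)    # memristance (Ohm)
    i = v / M                           # current through the device
    dw = mu_v * R_on / D * i * dt       # linear drift of the doped region
    w = min(max(w + dw, 0.0), D)        # hard bounds keep the state physical
    return w, M

# Sweep a sinusoidal voltage; the memristance drifts with the charge history,
# which is what produces the device's characteristic pinched hysteresis loop.
w = 5e-9
t = np.linspace(0.0, 2.0, 2000)
dt = t[1] - t[0]
Ms = []
for tk in t:
    v = 1.0 * np.sin(2 * np.pi * tk)
    w, M = hp_memristor_step(w, v, dt)
    Ms.append(M)
```

Because the state w is bounded in [0, D], the memristance always stays between R_on and R_off, which is also the property that lets a crossbar cell store a bounded synaptic weight.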
Overview of MGNN
Deep learning networks such as AlexNet and the Visual Geometry Group network (VGG) obtain better recognition results by increasing the depth of the network. However, increasing the number of layers brings problems such as over-fitting, vanishing gradients, and exploding gradients. GoogLeNet instead improves the training result by using computing resources more efficiently, extracting more features for the same amount of computation. Since GoogLeNet
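The efficiency gain behind GoogLeNet's inception modules comes largely from 1×1 "bottleneck" convolutions that reduce channel count before the expensive large kernels. A back-of-the-envelope weight count makes this concrete; the channel sizes below are loosely based on GoogLeNet's first inception module and are illustrative only:

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution (biases ignored)."""
    return in_ch * out_ch * k * k

in_ch = 192
# Naive 5x5 branch: map 192 channels to 32 channels directly.
naive = conv_params(in_ch, 32, 5)
# Inception-style branch: 1x1 reduction to 16 channels, then 5x5 to 32.
reduced = conv_params(in_ch, 16, 1) + conv_params(16, 32, 5)
print(naive, reduced)  # the 1x1 bottleneck cuts the weight count several-fold
```

For a memristive implementation this matters doubly: fewer weights means fewer crossbar cells, so the same trick that saves computation on a GPU also shrinks the circuit.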
Optimization of Batch Normalization layer
Memristive Batch Normalization (MBN) layers usually follow the memristive convolution layers. MBN layers are used to speed up the training process, reduce over-fitting, and make the network insensitive to conductance initialization [52]. MBN layers normalize the output of each memristive convolution layer to data with mean 0 and variance 1. The output of the kth MBN layer can be defined as
y^(k) = γ^(k) · (x^(k) − μ^(k)) / √((σ^(k))² + ε) + β^(k). (13)
In Eq. (13), the features extracted by the previous memristive
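For reference, the plain (non-memristive) batch-normalization computation that an MBN layer realizes in circuitry can be sketched as follows, where gamma and beta are the usual learned scale and shift, and eps is a small constant for numerical stability:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch to zero mean / unit variance,
    then apply the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))  # batch of 64, 8 features
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

With gamma = 1 and beta = 0 the output is simply the standardized feature map, which is the "mean 0, variance 1" property the text describes.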
Experiment overview
First, we established the MGNN model using the TensorFlow framework on a conventional Graphics Processing Unit (GPU) server. We then trained the model on the CIFAR-10 data set; during training, regularization constraints are used to prune the model. The trained parameters are imported into the MGNN circuit model built in MATLAB Simulink, and the circuit model is used to analyze the image recognition accuracy, the required memristor crossbars, the power consumption, and the
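The pruning step can be illustrated with a simple structured-pruning sketch: zeroing the weakest rows (word lines) and columns (bit lines) of a weight matrix so that whole crossbar lines can be dropped from the circuit. This is an illustrative sketch under assumed L1-norm ranking and 25% pruning fractions, not the authors' exact regularization-based algorithm:

```python
import numpy as np

def prune_crossbar(W, row_frac=0.25, col_frac=0.25):
    """Zero the rows (word lines) and columns (bit lines) of a weight
    matrix with the smallest L1 norms, emulating structured pruning of
    a memristor crossbar."""
    W = W.copy()
    n_rows = int(W.shape[0] * row_frac)
    n_cols = int(W.shape[1] * col_frac)
    row_idx = np.argsort(np.abs(W).sum(axis=1))[:n_rows]  # weakest word lines
    col_idx = np.argsort(np.abs(W).sum(axis=0))[:n_cols]  # weakest bit lines
    W[row_idx, :] = 0.0
    W[:, col_idx] = 0.0
    return W

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))
Wp = prune_crossbar(W)
```

Unlike unstructured (per-weight) pruning, zeroing entire lines maps directly to removing physical rows and columns of the crossbar, which is what reduces the circuit area and power.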
Conclusion
In this article, a new type of passive device with integrated storage and computation, the memristor, is used to design a compact and stable MGNN circuit, which adopts convolution and multi-scale feature fusion in its structure to reduce the number of memristive neural network layers while maintaining the recognition accuracy of the circuit. To obtain a more compact circuit, this article designs word-line and bit-line pruning methods for the memristive convolution layers, which
CRediT authorship contribution statement
Huanhuan Ran: Writing - original draft, Resources. Shiping Wen: Conceptualization. Kaibo Shi: Investigation. Tingwen Huang: Supervision.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (55)
- et al., "All one needs to know about fog computing and related edge computing paradigms: A complete survey," Journal of Systems Architecture, 2019.
- et al., "An efficient memristor-based circuit implementation of squeeze-and-excitation fully convolutional neural networks," IEEE Transactions on Neural Networks and Learning Systems, 2020.
- et al., "Event-based sliding-mode synchronization of delayed memristive neural networks via continuous/periodic sampling algorithm," Applied Mathematics and Computation, 2020.
- et al., "Event-triggered synchronization of multiple memristive neural networks with cyber-physical attacks," Information Sciences, 2020.
- et al., "Global exponential synchronization of delayed memristive neural networks with reaction-diffusion terms," Neural Networks, 2020.
- et al., "Memristor standard cellular neural networks computing in the flux–charge domain," Neural Networks, 2017.
- et al., "Memristive LSTM network for sentiment analysis," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019.
- et al., "Neuromemristive circuits for edge computing: A review," IEEE Transactions on Neural Networks and Learning Systems, 2020.
- et al., "Bringing deep learning at the edge of information-centric internet of things," IEEE Communications Letters, 2018.
- "A comprehensive review on emerging artificial neuromorphic devices," Applied Physics Reviews, 2020.
- "Secure and efficient vehicle-to-grid energy trading in cyber physical systems: Integration of blockchain and edge computing," IEEE Transactions on Systems, Man, and Cybernetics: Systems.
- "Observer-based adaptive control for multiagent systems with unknown parameters under attacks," IEEE Transactions on Neural Networks and Learning Systems.
- "Edge intelligence: Paving the last mile of artificial intelligence with edge computing," Proceedings of the IEEE.
- "Deep learning with edge computing: A review," Proceedings of the IEEE.
- "Memristive devices and systems," Proceedings of the IEEE.
- "Cascaded architecture for memristor crossbar array based larger-scale neuromorphic computing," IEEE Access.
- "Cascaded neural network for memristor based neuromorphic computing."
- "Nanoscale memristor device as synapse in neuromorphic systems," Nano Letters.
- "PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory," ACM SIGARCH Computer Architecture News.
- "CKFO: Convolutional kernel first operated algorithm with applications in memristor-based convolutional neural networks," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
- "Memristor-based edge computing of blaze block for image recognition," IEEE Transactions on Neural Networks and Learning Systems.
- "Efficient training algorithms for neural networks based on memristive crossbar circuits."
Huanhuan Ran is with the Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054, China.
Shiping Wen is with the Australian Artificial Intelligence Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia.
Kaibo Shi is with the School of Information Science and Engineering, Chengdu University, Chengdu, Sichuan, China.
Tingwen Huang is with the Science Program, Texas A&M University at Qatar, 23874, Doha, Qatar.