Original contribution: Weight quantization in Boltzmann machines
Cited by (34)
- Robust open-set classification for encrypted traffic fingerprinting (2023, Computer Networks)
- Single-shot pruning and quantization for hardware-friendly neural network acceleration (2023, Engineering Applications of Artificial Intelligence)
- Pruning and quantization for deep neural network acceleration: A survey (2021, Neurocomputing)
  Citation excerpt: "Compressing CNNs by reducing precision values has been previously proposed. Converting floating-point parameters into low numerical precision datatypes for quantizing neural networks was proposed as far back as the 1990s [67,14]. Renewed interest in quantization began in the 2010s when 8-bit weight values were shown to accelerate inference without a significant drop in accuracy [233]."
- Graph Structure Learning-Based Compression Method for Convolutional Neural Networks (2024, Lecture Notes in Computer Science, including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- Lightweight network with masks for light field image super-resolution based on swin attention (2024, Multimedia Tools and Applications)
- A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification (2023, ACM Transactions on Intelligent Systems and Technology)
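The citation excerpt above mentions converting floating-point parameters to low-precision datatypes, with 8-bit weights later shown to accelerate inference at little accuracy cost. As a rough illustration of that idea (a minimal sketch of symmetric per-tensor int8 quantization; this is not the scheme used in the 1991 paper, and the function names are my own), weights can be mapped to 8-bit integers with a single scale factor:

```python
import numpy as np

def quantize_weights_int8(w):
    """Symmetric per-tensor quantization: map float weights to int8
    using one scale factor so that max|w| lands on +/-127.
    Illustrative only -- not the method of the cited paper."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([-0.5, 0.0, 0.25, 0.5], dtype=np.float32)
q, s = quantize_weights_int8(w)
w_hat = dequantize(q, s)
```

Each dequantized weight differs from the original by at most half a quantization step (scale / 2), which is the kind of bounded rounding error these precision-reduction schemes trade against inference speed and storage.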
Copyright © 1991 Published by Elsevier Ltd.