Abstract
With the advent of the Internet of Everything, the proliferation of data has placed a huge burden on data centers and network bandwidth. To ease the pressure on data centers, edge computing, a new computing paradigm, is gradually gaining attention. Meanwhile, artificial intelligence services based on deep learning are also thriving. However, such intelligent services are usually deployed in data centers, which causes high latency. The combination of edge computing and artificial intelligence provides an effective solution to this problem. This new intelligence paradigm is called edge intelligence. In this paper, we focus on edge training and edge inference: the former trains models using local data on resource-constrained edge devices, while the latter deploys models on edge devices through model compression and inference acceleration. This paper provides a comprehensive survey of existing architectures, technologies, frameworks, and implementations in these two areas, and discusses existing challenges, possible solutions, and future directions. We believe that this survey will make more researchers aware of edge intelligence.
Shahmohammadi F, Hosseini A, King CE, et al (2017) Smartwatch based activity recognition using active learning. In: Bonato P, Wang H (eds) Proceedings of the 2nd IEEE/ACM international conference on connected health: applications, systems and engineering technologies, CHASE 2017, Philadelphia, PA, USA, July 17–19, 2017. IEEE Computer Society/ACM, pp 321–329. https://doi.org/10.1109/CHASE.2017.115
Sheller MJ, Reina GA, Edwards B, et al (2018) Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. In: International MICCAI Brainlesion workshop. Springer, pp 92–104. https://doi.org/10.1007/978-3-030-11723-8_9
Shi W, Cao J, Zhang Q et al (2016) Edge computing: vision and challenges. IEEE Internet Things J 3(5):637–646. https://doi.org/10.1109/JIOT.2016.2579198
Shokri R, Shmatikov V (2015) Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pp 1310–1321. https://doi.org/10.1145/2810103.2813687
Silver D, Huang A, Maddison CJ, et al (2016) Mastering the game of go with deep neural networks and tree search. Nature 529(7587):484–489. https://doi.org/10.1038/nature16961
Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. ArXiv preprint arXiv:1409.1556
Smola A, Narayanamurthy S (2010) An architecture for parallel topic models. Proc VLDB Endow 3(1-2):703–710. https://doi.org/10.14778/1920841.1920931
Soudry D, Hubara I, Meir R (2014) Expectation backpropagation: parameter-free training of multilayer neural networks with continuous or discrete weights. In: Advances in neural information processing systems (NIPS). https://proceedings.neurips.cc/paper/2014/hash/076a0c97d09cf1a0ec3e19c7f2529f2b-Abstract.html
Srivastava N, Hinton G, Krizhevsky A, et al (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15(1):1929–1958. http://dl.acm.org/citation.cfm?id=2670313
Stahl R, Hoffman A, Mueller-Gritschneder D, et al (2021) Deeperthings: fully distributed CNN inference on resource-constrained edge devices. Int J Parallel Progr. https://doi.org/10.1007/s10766-021-00712-3
Stamoulis D, Chin T, Prakash AK, et al (2019) Designing adaptive neural networks for energy-constrained image classification. In: 2018 IEEE/ACM international conference on computer-aided design (ICCAD). https://doi.org/10.1145/3240765.3240796
Swaminathan S, Garg D, Kannan R et al (2020) Sparse low rank factorization for deep neural network compression. Neurocomputing 398:185–196. https://doi.org/10.1016/j.neucom.2020.02.035
Szegedy C, Liu W, Jia Y, et al (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1–9. https://doi.org/10.1109/CVPR.2015.7298594
Tan M, Le Q (2019) Efficientnet: rethinking model scaling for convolutional neural networks. In: International conference on machine learning. PMLR, pp 6105–6114. http://proceedings.mlr.press/v97/tan19a.html
Tan M, Le QV (2021) Efficientnetv2: smaller models and faster training. ArXiv preprint arXiv:2104.00298
Tang Z, Shi S, Chu X, et al (2020) Communication-efficient distributed deep learning: a comprehensive survey. ArXiv preprint arXiv:2003.06307
Tann H, Hashemi S, Bahar RI et al (2016) Runtime configurable deep neural networks for energy-accuracy trade-off. In: Proceedings of the international conference on hardware/software codesign and system synthesis (CODES+ISSS). ACM. https://doi.org/10.1145/2968456.2968458
Taylor B, Marco VS, Wolff W et al (2018) Adaptive deep learning model selection on embedded systems. ACM SIGPLAN Notices 53(6):31–43. https://doi.org/10.1145/3299710.3211336
Teerapittayanon S, McDanel B, Kung HT (2016) Branchynet: fast inference via early exiting from deep neural networks. In: 2016 23rd international conference on pattern recognition (ICPR). IEEE, pp 2464–2469. https://doi.org/10.1109/ICPR.2016.7900006
Tian X, Zhu J, Xu T et al (2021) Mobility-included DNN partition offloading from mobile devices to edge clouds. Sensors 21(1):229. https://doi.org/10.3390/s21010229
Touvron H, Cord M, Douze M, et al (2020) Training data-efficient image transformers & distillation through attention. ArXiv preprint arXiv:2012.12877
Truex S, Baracaldo N, Anwar A, et al (2019) A hybrid approach to privacy-preserving federated learning. In: Proceedings of the 12th ACM workshop on artificial intelligence and security, pp 1–11. https://doi.org/10.1145/3338501.3357370
Vaswani A, Shazeer N, Parmar N, et al (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998–6008. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Wang J, Feng Z, Chen Z, et al (2018a) Bandwidth-efficient live video analytics for drones via edge computing. In: 2018 IEEE/ACM symposium on edge computing (SEC). IEEE, pp 159–173. https://doi.org/10.1109/SEC.2018.00019
Wang P, Cheng J (2016) Accelerating convolutional neural networks for mobile applications. In: Proceedings of the 24th ACM international conference on multimedia, pp 541–545. https://doi.org/10.1145/2964284.2967280
Wang S, Tuor T, Salonidis T et al (2019) Adaptive federated learning in resource constrained edge computing systems. IEEE J Sel Areas Commun 37(6):1205–1221. https://doi.org/10.1109/JSAC.2019.2904348
Wang X, Yu F, Dou ZY, et al (2018b) Skipnet: learning dynamic routing in convolutional networks. In: Proceedings of the European conference on computer vision (ECCV), pp 409–424, https://doi.org/10.1007/978-3-030-01261-8_25
Wang X, Han Y, Leung V et al (2020) Convergence of edge computing and deep learning: a comprehensive survey. IEEE Commun Surv Tutor 22(99):869–904. https://doi.org/10.1109/COMST.2020.2970550
Wang X, Yang Z, Wu J, et al (2021) Edgeduet: Tiling small object detection for edge assisted autonomous mobile vision. In: IEEE INFOCOM 2021—IEEE conference on computer communications, pp 1–10. https://doi.org/10.1109/INFOCOM42981.2021.9488843
Wei K, Li J, Ding M et al (2020) Federated learning with differential privacy: algorithms and performance analysis. IEEE Trans Inf Forensics Secur 15:3454–3469. https://doi.org/10.1109/TIFS.2020.2988575
Wen W, Wu C, Wang Y, et al (2016) Learning structured sparsity in deep neural networks. Adv Neural Inf Process Syst 29:2074–2082. https://proceedings.neurips.cc/paper/2016/file/41bfd20a38bb1b0bec75acf0845530a7-Paper.pdf
Weng J, Weng J, Zhang J et al (2021) Deepchain: auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Trans Dependable Secur Comput 18(5):2438–2455. https://doi.org/10.1109/TDSC.2019.2952332
Wistuba M, Rawat A, Pedapati T (2019) A survey on neural architecture search. ArXiv preprint arXiv:1905.01392
Wu J, Leng C, Wang Y, et al (2016) Quantized convolutional neural networks for mobile devices. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4820–4828. https://doi.org/10.1109/CVPR.2016.521
Xie S, Girshick R, Dollár P, et al (2017) Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1492–1500. https://doi.org/10.1109/CVPR.2017.634
Xu D, Li T, Li Y, et al (2020) Edge intelligence: architectures, challenges, and applications. ArXiv preprint arXiv:2003.12172
Xu M, Zhu M, Liu Y, et al (2018) Deepcache: principled cache for mobile deep vision. In: Proceedings of the 24th annual international conference on mobile computing and networking, pp 129–144. https://doi.org/10.1145/3241539.3241563
Xue J, Li J, Gong Y (2013) Restructuring of deep neural network acoustic models with singular value decomposition. In: Interspeech, pp 2365–2369. http://www.isca-speech.org/archive/interspeech_2013/i13_2365.html
Yang L, Chen X, Perlaza SM et al (2020) Special issue on artificial-intelligence-powered edge computing for internet of things. IEEE Internet Things J 7(10):9224–9226. https://doi.org/10.1109/JIOT.2020.3019948
Yang L, Han Y, Chen X, et al (2020b) Resolution adaptive networks for efficient inference. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 2369–2378. https://doi.org/10.1109/CVPR42600.2020.00244
Yang Q, Liu Y, Chen T et al (2019) Federated machine learning: concept and applications. ACM Trans Intell Syst Technol 10(2):1–19. https://doi.org/10.1145/3298981
Yang TJ, Chen YH, Sze V (2017) Designing energy-efficient convolutional neural networks using energy-aware pruning. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5687–5695. https://doi.org/10.1109/CVPR.2017.643
Yi J, Choi S, Lee Y (2020) Eagleeye: wearable camera-based person identification in crowded urban spaces. In: MobiCom ’20: The 26th annual international conference on mobile computing and networking, London, United Kingdom, September 21–25, 2020. ACM, pp 4:1–4:14. https://doi.org/10.1145/3372224.3380881
Yim J, Joo D, Bae J, et al (2017) A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4133–4141. https://doi.org/10.1109/CVPR.2017.754
Yoshida N, Nishio T, Morikura M, et al (2020) Hybrid-fl for wireless networks: cooperative learning mechanism using non-IID data. In: ICC 2020-2020 IEEE international conference on communications (ICC). IEEE, pp 1–7. https://doi.org/10.1109/ICC40277.2020.9149323
You Z, Yan K, Ye J, et al (2019) Gate decorator: global filter pruning method for accelerating deep convolutional neural networks. ArXiv preprint arXiv:1909.08174
Yu R, Li A, Chen CF, et al (2018) Nisp: pruning networks using neuron importance score propagation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9194–9203. https://doi.org/10.1109/CVPR.2018.00958
Zagoruyko S, Komodakis N (2016a) Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. ArXiv preprint arXiv:1612.03928
Zagoruyko S, Komodakis N (2016b) Wide residual networks. ArXiv preprint arXiv:1605.07146
Zeng L, Li E, Zhou Z et al (2019) Boomerang: on-demand cooperative deep neural network inference for edge intelligence on the industrial internet of things. IEEE Netw 33(5):96–103. https://doi.org/10.1109/MNET.001.1800506
Zeng X, Cao K, Zhang M (2017) MobileDeepPill: a small-footprint mobile deep learning system for recognizing unconstrained pill images. In: Choudhury T, Ko SY, Campbell A, et al (eds) Proceedings of the 15th annual international conference on mobile systems, applications, and services, MobiSys’17, Niagara Falls, NY, USA, June 19–23, 2017. ACM, pp 56–67. https://doi.org/10.1145/3081333.3081336
Zhang C, Cao Q, Jiang H, et al (2018a) Ffs-va: a fast filtering system for large-scale video analytics. In: Proceedings of the 47th international conference on parallel processing, pp 1–10. https://doi.org/10.1145/3225058.3225103
Zhang C, Cao Q, Jiang H et al (2020) A fast filtering mechanism to improve efficiency of large-scale video analytics. IEEE Trans Comput 69(6):914–928. https://doi.org/10.1109/TC.2020.2970413
Zhang L, Song J, Gao A, et al (2019) Be your own teacher: improve the performance of convolutional neural networks via self distillation. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 3713–3722. https://doi.org/10.1109/ICCV.2019.00381
Zhang W, Li X, Ma H et al (2021) Federated learning for machinery fault diagnosis with dynamic validation and self-supervision. Knowl Based Syst 213:106679. https://doi.org/10.1016/j.knosys.2020.106679
Zhang W, Wang X, Zhou P, et al (2021b) Client selection for federated learning with non-IID data in mobile edge computing. IEEE Access 9:24462–24474. https://doi.org/10.1109/ACCESS.2021.3056919
Zhang X, Zou J, He K et al (2015) Accelerating very deep convolutional networks for classification and detection. IEEE Trans Pattern Anal Mach Intell 38(10):1943–1955. https://doi.org/10.1109/TPAMI.2015.2502579
Zhang X, Zhou X, Lin M, et al (2018b) Shufflenet: an extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 6848–6856. https://doi.org/10.1109/CVPR.2018.00716
Zhang Y, Wallace B (2015) A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. ArXiv preprint arXiv:1510.03820
Zhang Y, Xiang T, Hospedales TM, et al (2018c) Deep mutual learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4320–4328. http://arxiv.org/abs/1706.00384
Zhao Y, Li M, Lai L, et al (2018a) Federated learning with non-IID data. ArXiv preprint arXiv:1806.00582
Zhao Y, Zhao J, Jiang L et al (2020) Privacy-preserving blockchain-based federated learning for IoT devices. IEEE Internet Things J 8(3):1817–1829. https://doi.org/10.1109/JIOT.2020.3017377
Zhao Z, Barijough KM, Gerstlauer A (2018) Deepthings: distributed adaptive deep learning inference on resource-constrained IoT edge clusters. IEEE Trans Comput Aided Des Integr Circuits Syst 37(11):2348–2359. https://doi.org/10.1109/TCAD.2018.2858384
Zhou A, Yao A, Guo Y, et al (2017) Incremental network quantization: towards lossless CNNs with low-precision weights. ArXiv preprint arXiv:1702.03044
Zhou G, Fan Y, Cui R, et al (2018) Rocket launching: a universal and efficient framework for training well-performing light net. In: 32nd AAAI conference on artificial intelligence. https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16090
Zhou J, Wang Y, Ota K et al (2019) Aaiot: accelerating artificial intelligence in IoT systems. IEEE Wirel Commun Lett 8(3):825–828. https://doi.org/10.1109/LWC.2019.2894703
Zhou S, Wu Y, Ni Z, et al (2016) Dorefa-net: training low bitwidth convolutional neural networks with low bitwidth gradients. ArXiv preprint arXiv:1606.06160
Zhou Z, Chen X, Li E et al (2019) Edge intelligence: paving the last mile of artificial intelligence with edge computing. Proc IEEE 107(8):1738–1762. https://doi.org/10.1109/JPROC.2019.2918951
Zhu J, Zhao Y, Pei J (2021) Progressive kernel pruning based on the information mapping sparse index for CNN compression. IEEE Access 9:10974–10987. https://doi.org/10.1109/ACCESS.2021.3051504
Zuo Y, Chen B, Shi T, et al (2020) Filter pruning without damaging networks capacity. IEEE Access 8:90924–90930. https://doi.org/10.1109/ACCESS.2020.2993932
Acknowledgements
The authors are grateful to the reviewers for their valuable comments, which greatly improved the quality of this paper. This work was supported in part by the National Key R&D Program of China (2018YFB1701802); the National Natural Science Foundation of China (61802280, 61806143, 61772365, 41772123); and the Tianjin Technology Innovation Guide Special (21YDTPJC00130).
Cite this article
Su, W., Li, L., Liu, F. et al. AI on the edge: a comprehensive review. Artif Intell Rev 55, 6125–6183 (2022). https://doi.org/10.1007/s10462-022-10141-4