DOI: 10.1145/3412382.3458266
Research Article

Deep Functional Network (DFN): Functional Interpretation of Deep Neural Networks for Intelligent Sensing Systems

Published: 20 May 2021

ABSTRACT

We introduce the Deep Functional Network (DFN), which approximates a black-box Deep Neural Network (DNN) with a functional program consisting of a set of well-known functions and the data flows among them. A DFN not only provides a semantic interpretation of a DNN but also enables easy deployment and optimization of the translated program according to the requirements and constraints of the target intelligent sensing system. To interpret a DNN, we propose the DFN framework consisting of two steps: 1) function estimation, which estimates the distribution of functions likely to be used in the source DNN, and 2) network formation, which finds a functional network in the form of a directed acyclic graph (DAG) given the estimated function distribution. Our empirical study conducted with 16 state-of-the-art DNNs demonstrates that the generated DFNs provide a semantic understanding of the DNNs along with classification accuracy comparable to the source DNNs. We implement two intelligent sensing systems that use the proposed DFN: 1) a mobile robot that avoids obstacles detected by a camera and 2) a smartphone-based human activity recognizer using IMU sensors, where DFNs of different sizes are generated to complete the task under various resource budgets, i.e., execution time and energy consumption dynamically imposed by run-time scenarios. The experimental results demonstrate that a set of DFNs generated from a single DNN enables both systems to achieve the desired performance under various resource constraints based on semantic understanding of the DNNs.
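To make the core idea concrete, the kind of artifact the DFN framework produces can be pictured as a DAG whose nodes apply well-known functions to the outputs of their predecessors. The sketch below is a minimal, hypothetical illustration of that representation and its evaluation; the node names, function set, and graph structure are invented here and are not taken from the paper.

```python
import math

# Hypothetical library of "well-known functions" a DFN node may apply.
FUNCTIONS = {
    "square": lambda x: x * x,
    "add": lambda x, y: x + y,
    "sqrt": math.sqrt,
}

# A functional network as a DAG: each node maps to (function name, input node names).
# "x" and "y" are the network's external inputs; nodes are listed in topological order.
DAG = {
    "sq_x": ("square", ["x"]),
    "sq_y": ("square", ["y"]),
    "sum":  ("add",    ["sq_x", "sq_y"]),
    "out":  ("sqrt",   ["sum"]),
}

def evaluate(dag, inputs):
    """Evaluate the DAG node by node (assumes nodes are already topologically sorted)."""
    values = dict(inputs)
    for node, (fn_name, arg_nodes) in dag.items():
        values[node] = FUNCTIONS[fn_name](*(values[a] for a in arg_nodes))
    return values["out"]

print(evaluate(DAG, {"x": 3.0, "y": 4.0}))  # Euclidean norm of (3, 4): 5.0
```

Unlike a black-box DNN, every intermediate node here has a readable meaning ("square the x input", "sum the squares"), which is the kind of semantic interpretation the abstract describes; the actual function estimation and network formation steps of the DFN framework are what discover such a graph automatically.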


Published in: IPSN '21: Proceedings of the 20th International Conference on Information Processing in Sensor Networks (co-located with CPS-IoT Week 2021), May 2021, 423 pages. ISBN: 9781450380980. DOI: 10.1145/3412382. Copyright © 2021 ACM.


Publisher: Association for Computing Machinery, New York, NY, United States.


Overall acceptance rate: 143 of 593 submissions (24%).