DOI: 10.1145/3408308.3427977

EdgeNILM: Towards NILM on Edge devices

Published: 18 November 2020

ABSTRACT

Non-intrusive load monitoring (NILM), or energy disaggregation, refers to the task of estimating individual appliance power consumption given the aggregate power consumption readings. Recent state-of-the-art neural-network-based methods are computation- and memory-intensive, and thus not suitable to run on "edge devices". Recent research has proposed various methods to compress neural networks without significantly impacting accuracy. In this work, we study different neural network compression schemes and their efficacy on the state-of-the-art neural NILM method. We additionally propose a multi-task learning-based architecture to compress models further. We perform an extensive evaluation of these techniques on two publicly available datasets and find that we can reduce the memory and compute footprint by a factor of up to 100 without significantly impacting predictive performance.
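
To make the setting concrete, below is a minimal sketch, not the authors' exact method, of the kind of pipeline the abstract describes: a small seq2point-style 1-D CNN (one common neural NILM architecture, not necessarily the paper's exact model) that maps a window of aggregate power readings to one appliance's power, compressed with L1 magnitude pruning via PyTorch's torch.nn.utils.prune. The layer sizes, the 99-sample window, and the 50% sparsity target are illustrative assumptions.

```python
# Illustrative sketch only: architecture, window length, and sparsity
# level are assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

WINDOW = 99  # assumed input window of aggregate readings

class Seq2PointCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * WINDOW, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted appliance power for the window
        )

    def forward(self, x):  # x: (batch, 1, WINDOW) aggregate power
        return self.head(self.features(x))

model = Seq2PointCNN()

# Zero out the 50% smallest-magnitude weights in every conv/linear layer.
for module in model.modules():
    if isinstance(module, (nn.Conv1d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weights

zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")

# One disaggregation step on a dummy batch of aggregate windows.
y = model(torch.randn(8, 1, WINDOW))
print(y.shape)  # torch.Size([8, 1])
```

Note that unstructured pruning only zeroes individual weights; realizing memory and compute savings of the magnitude the abstract reports would additionally require structured pruning or a sparse storage format, which this sketch omits.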


Published in

BuildSys '20: Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
November 2020, 361 pages
ISBN: 9781450380614
DOI: 10.1145/3408308
Copyright © 2020 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Qualifiers

• research-article
• Research
• Refereed limited

Acceptance Rates

BuildSys '20 paper acceptance rate: 38 of 139 submissions, 27%
Overall acceptance rate: 148 of 500 submissions, 30%
