Co-active: an efficient selective relabeling model for resource constrained edge AI

Published in: Wireless Networks

Abstract

Fueled by high-quality annotated data, edge AI has emerged as a pivotal technology in various domains. Unfortunately, owing to sensor errors and discrepancies in data collection, datasets often suffer from noisy labels. Identifying and relabeling all the noisy data is imperative, yet labor-intensive and time-consuming. To ensure the robustness of resource-constrained edge AI models trained with noisy labels, we propose in this paper an efficient selective relabeling method that leverages expert knowledge, termed "Co-active". The method comprises three steps: noisy data identification, informative data selection, and model retraining. First, we pre-train an encoder model with early stopping and detect noisy instances by analyzing the prediction discrepancies between two differently initialized classifiers. We then select the most informative of these noisy instances using a novel priority scorer that combines prediction entropy with dynamic loss variations. Finally, we retrain the model with the mixup technique on a composite dataset of clean, relabeled, and potentially clean data, guided by a novel loss function that balances classification accuracy against regularization terms. Additionally, we introduce strategies for dynamically adjusting the size of the relabeled dataset to optimize the labeling budget and enhance model robustness. Extensive experiments across four datasets demonstrate that Co-active consistently achieves the best performance, with an average improvement of 18.67%.
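The selection and retraining steps above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the additive mixing weight `alpha`, the min-max normalization, and all function names are assumptions; "dynamic loss variations" is approximated here by the variance of each sample's loss across epochs.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of per-sample predictive distributions, shape (N, C) -> (N,)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def priority_scores(probs, loss_history, alpha=0.5):
    """Hypothetical priority scorer for suspected-noisy samples.

    probs:        (N, C) softmax outputs
    loss_history: (N, T) per-sample loss recorded over T training epochs
    alpha:        assumed weight between the uncertainty and instability signals
    Higher score = more informative, i.e. more worth an expert's relabeling budget.
    """
    ent = entropy(probs)                # uncertainty signal
    var = np.var(loss_history, axis=1)  # instability signal (loss variation proxy)
    # min-max normalize both to [0, 1] so the weighted sum is scale-free
    ent = (ent - ent.min()) / (np.ptp(ent) + 1e-12)
    var = (var - var.min()) / (np.ptp(var) + 1e-12)
    return alpha * ent + (1 - alpha) * var

def select_for_relabeling(probs, loss_history, k):
    """Pick the top-k highest-priority samples to send to the expert."""
    scores = priority_scores(probs, loss_history)
    return np.argsort(scores)[::-1][:k]

def mixup_batch(x, y, lam):
    """Classic mixup (Zhang et al., 2018): convex combination of inputs and
    one-hot labels with a shuffled copy of the batch, used in retraining."""
    idx = np.random.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]
```

For example, a sample whose prediction is near-uniform and whose loss oscillates across epochs scores highest and is queued for relabeling first, which is the intuition behind spending the limited labeling budget on the most informative noisy data.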


Data Availability

The data and materials for this study are readily available upon request, ensuring transparency and reproducibility for interested researchers.


Acknowledgements

Not applicable.

Funding

This research is partially supported by China Postdoctoral Science Foundation (No. 2023M730347).

Author information

Contributions

Chenyu Hou designed the study and wrote the main manuscript text; Kai Jiang conducted the experiments and collected and analyzed the data; Tiantian Li prepared some of the experimental materials and carried out specific experimental procedures; Meng Zhou and Jun Jiang collected the data and performed the statistical analysis. All authors reviewed the manuscript.

Corresponding author

Correspondence to Kai Jiang.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hou, C., Jiang, K., Li, T. et al. Co-active: an efficient selective relabeling model for resource constrained edge AI. Wireless Netw 31, 2653–2666 (2025). https://doi.org/10.1007/s11276-025-03903-9
