
Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness

Published: 29 April 2024

Abstract

The fairness-aware online learning framework has emerged as a powerful tool for continual lifelong learning. In this setting, the learner's goal is to sequentially acquire new tasks as they arrive over time while guaranteeing statistical parity across protected sub-populations, such as race and gender, on each newly introduced task. A significant limitation of existing approaches is their heavy reliance on the i.i.d. (independent and identically distributed) assumption on the data, which leads to a static regret analysis of the framework. However, low static regret does not imply strong performance in dynamic environments where tasks are sampled from heterogeneous distributions. To address fairness-aware online learning in evolving settings, this article introduces a novel regret measure, FairSAR, which incorporates long-term fairness constraints into a strongly adapted loss regret. Moreover, to determine a good model parameter at each time step, it proposes an adaptive fairness-aware online meta-learning algorithm, FairSAOML, which adapts to dynamic environments by jointly controlling bias and model accuracy. The problem is formulated as a bi-level convex-concave optimization over the model's primal and dual parameters, which are associated with its accuracy and fairness, respectively. Theoretical analysis yields sub-linear upper bounds on both the loss regret and the cumulative violation of the fairness constraints. Experimental evaluation on several real-world datasets in dynamic environments demonstrates that FairSAOML consistently outperforms alternatives built on state-of-the-art prior online learning methods.
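The primal–dual structure the abstract alludes to can be illustrated with a minimal sketch of an online Lagrangian update for a single long-term constraint g(x) ≤ 0. This is a generic sketch of the standard primal–dual scheme for online learning with long-term constraints, not the paper's actual FairSAOML procedure; the function names, step size, and the toy loss/constraint below are all illustrative assumptions.

```python
def primal_dual_round(x, lam, grad_f, g, grad_g, eta=0.1):
    """One round of a generic primal-dual online update with a
    long-term constraint g(x) <= 0 (illustrative sketch only)."""
    # Primal step: gradient descent on the Lagrangian f(x) + lam * g(x),
    # trading off loss (accuracy) against constraint violation (fairness).
    x = x - eta * (grad_f(x) + lam * grad_g(x))
    # Dual step: gradient ascent in lam, projected onto lam >= 0,
    # so lam accumulates pressure whenever the constraint is violated.
    lam = max(0.0, lam + eta * g(x))
    return x, lam

# Toy instance: loss f(x) = (x - 2)^2 with a constraint x <= 1.
grad_f = lambda x: 2.0 * (x - 2.0)   # unconstrained optimum at x = 2
g = lambda x: x - 1.0                # constraint g(x) = x - 1 <= 0
grad_g = lambda x: 1.0

x, lam = 0.0, 0.0
for _ in range(500):
    x, lam = primal_dual_round(x, lam, grad_f, g, grad_g)
# x is driven toward the constrained optimum x = 1, with lam >= 0
# settling near the multiplier that balances loss and constraint.
```

The dual variable plays the role of the fairness-side parameter in the abstract's bi-level formulation: it grows while the constraint is violated and pulls the primal iterate back toward the feasible region.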



• Published in

  ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 6 (July 2024), 760 pages
  ISSN: 1556-4681
  EISSN: 1556-472X
  DOI: 10.1145/3613684


Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 29 April 2024
• Online AM: 20 February 2024
• Accepted: 15 December 2023
• Received: 15 September 2023
