DOI: 10.1145/3488560.3502211
Extended Abstract · Public Access

Trustworthy Machine Learning: Fairness and Robustness

Published: 15 February 2022

Abstract

In recent years, machine learning (ML) technologies have developed rapidly and attracted extensive attention from both academia and industry. ML applications now span multiple domains, from computer vision and text processing to recommendation. However, recent studies have uncovered an untrustworthy side of these applications. For example, ML algorithms can exhibit human-like discrimination against certain individuals or groups, or make unreliable decisions in safety-critical scenarios, implying a lack of fairness and robustness, respectively. Consequently, building trustworthy machine learning systems has become an urgent need. My research strives to help meet this demand. In particular, it focuses on designing trustworthy ML models and spans three main areas: (1) fairness in ML, where we aim to detect and eliminate bias and ensure fairness in various ML applications; (2) robustness in ML, where we seek to make certain ML applications robust against adversarial attacks; and (3) specific applications of ML, where my research involves the development of ML-based natural language processing (NLP) models and recommender systems.
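As an illustration of the fairness theme above (a minimal sketch, not from the paper itself): one common way to detect bias is demographic parity, which compares a model's positive-prediction rate across demographic groups. The function names and toy data below are hypothetical.

```python
# Demographic parity: does the model predict the positive class at
# (roughly) the same rate for every demographic group?

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions among members of `group`."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate across groups.
    A gap of 0 means perfect demographic parity."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: binary predictions for individuals from groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A large gap flags potential group-level discrimination; debiasing methods then try to shrink it without sacrificing too much accuracy.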

Supplementary Material

MP4 File (WSDM22-dc01.mp4)
The presentation video of the WSDM 2022 doctoral consortium research statement "Trustworthy Machine Learning: Fairness and Robustness".


Cited By

  • Study on Network Importance for ML End Application Robustness. In ICC 2023 - IEEE International Conference on Communications, pp. 6627-6632. DOI: 10.1109/ICC45041.2023.10279698. Online publication date: 28 May 2023.

Index Terms

  1. Trustworthy Machine Learning: Fairness and Robustness

Published In

WSDM '22: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining
February 2022, 1690 pages
ISBN: 9781450391320
DOI: 10.1145/3488560

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. fairness
    2. machine learning
    3. robustness

    Qualifiers

    • Extended-abstract

    Conference

    WSDM '22

    Acceptance Rates

Overall Acceptance Rate: 498 of 2,863 submissions, 17%

Article Metrics

  • Downloads (last 12 months): 217
  • Downloads (last 6 weeks): 20
Reflects downloads up to 13 February 2025.
