ABSTRACT
In recent years, machine learning (ML) technologies have developed rapidly and attracted extensive attention from both academia and industry. ML applications now span multiple domains, including computer vision, text processing, and recommendation. However, recent studies have uncovered the untrustworthy side of these applications. For example, ML algorithms can exhibit human-like discrimination against certain individuals or groups, or make unreliable decisions in safety-critical scenarios, indicating a lack of fairness and robustness, respectively. Consequently, building trustworthy machine learning systems has become an urgent need, and my research strives to help meet this demand. In particular, my research focuses on designing trustworthy ML models and spans three main areas: (1) fairness in ML, where we aim to detect and eliminate bias and ensure fairness in various ML applications; (2) robustness in ML, where we seek to make certain ML applications robust against adversarial attacks; and (3) specific applications of ML, where my research involves the development of ML-based natural language processing (NLP) models and recommendation systems.
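As a minimal illustration of the kind of bias detection the first research area refers to, the sketch below computes the demographic parity gap, a standard group fairness metric: the difference in positive-prediction rates between two demographic groups. The function name and the toy data are illustrative assumptions, not taken from the paper.

```python
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates across two groups.

    preds:  list of 0/1 model predictions
    groups: list of group labels (e.g. "A"/"B"), aligned with preds
    """
    rates = {}
    for g in set(groups):
        # positive-prediction rate within group g
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Toy example: group A receives positive predictions 75% of the time,
# group B only 25% of the time, so the gap is 0.5.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near zero suggests the model treats the two groups similarly under this particular fairness notion; other notions (e.g., equalized odds) condition on the true labels as well.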
Index Terms
- Trustworthy Machine Learning: Fairness and Robustness