DOI: 10.1145/3510003.3510087

FairNeuron: improving deep neural network fairness with adversary games on selective neurons

Published: 05 July 2022

Abstract

With Deep Neural Networks (DNNs) being integrated into a growing number of critical systems with far-reaching impacts on society, there are increasing concerns about their ethical performance, such as fairness. Unfortunately, model fairness and accuracy are in many cases contradictory goals to optimize during training. To address this issue, a number of works have tried to improve model fairness by formalizing an adversarial game at the model level. This approach introduces an adversary that evaluates the fairness of a model alongside its prediction accuracy on the main task, and performs joint optimization to achieve a balanced result. In this paper, we observe that during backward-propagation-based training, the same contradictory phenomenon is also observable at the level of individual neurons. Based on this observation, we propose FairNeuron, an automatic DNN model repair tool that mitigates fairness concerns and balances the accuracy-fairness trade-off without introducing another model. It detects neurons whose optimization directions under the accuracy and fairness training goals contradict each other, and achieves a trade-off through selective dropout. Compared with state-of-the-art methods, our approach is lightweight, scales to large models, and is more efficient. Our evaluation on three datasets shows that FairNeuron can effectively improve the fairness of all models while maintaining stable utility.
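To make the mechanism concrete, here is a minimal sketch of the neuron-level conflict detection the abstract describes, written in PyTorch. This is not the authors' implementation (their released tool is at https://github.com/Antimony5292/FairNeuron); the names `demographic_parity_gap` and `conflicting_neurons`, and the choice of a demographic-parity gap as the fairness signal, are illustrative assumptions. The sketch scores each hidden unit by the dot product of the gradients it receives from the task loss and from the fairness loss, and flags units where the two objectives pull the unit's incoming weights in opposite directions as candidates for selective dropout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy binary classifier; layer sizes are arbitrary for the illustration.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

def demographic_parity_gap(logits, group):
    # Illustrative (assumed) fairness surrogate: absolute difference in mean
    # predicted probability between two protected groups. Assumes both
    # groups are present in the batch.
    probs = torch.sigmoid(logits).squeeze(-1)
    return (probs[group == 1].mean() - probs[group == 0].mean()).abs()

def conflicting_neurons(model, layer, x, y, group):
    # Gradient of the task (accuracy) loss w.r.t. the layer's weights.
    acc_loss = F.binary_cross_entropy_with_logits(
        model(x).squeeze(-1), y.float())
    g_acc, = torch.autograd.grad(acc_loss, layer.weight)
    # Gradient of the fairness surrogate w.r.t. the same weights
    # (fresh forward pass, so each grad call gets its own graph).
    fair_loss = demographic_parity_gap(model(x), group)
    g_fair, = torch.autograd.grad(fair_loss, layer.weight)
    # One score per output neuron of `layer` (row-wise dot product); a
    # negative score means the two objectives push that neuron's incoming
    # weights in opposite directions (the contradictory case).
    agreement = (g_acc * g_fair).sum(dim=1)
    return agreement < 0

x = torch.randn(256, 32)             # synthetic features
y = torch.randint(0, 2, (256,))      # binary task labels
group = torch.randint(0, 2, (256,))  # binary protected attribute
conflicted = conflicting_neurons(model, model[0], x, y, group)
print(f"{int(conflicted.sum())} of {conflicted.numel()} neurons conflict")
```

During training, one would then drop (zero out) only the flagged neurons' activations with some probability, rather than applying uniform dropout; per the abstract, this avoids the extra adversary model that model-level fairness games require.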


Published In

ICSE '22: Proceedings of the 44th International Conference on Software Engineering
May 2022
2508 pages
ISBN: 9781450392211
DOI: 10.1145/3510003
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 July 2022

Author Tags

  1. fairness
  2. neural networks
  3. path analysis

Qualifiers

  • Research-article

Conference

ICSE '22

Acceptance Rates

Overall acceptance rate: 276 of 1,856 submissions (15%)

Cited By

  • (2025) Architectural tactics to achieve quality attributes of machine-learning-enabled systems: a systematic literature review. Journal of Systems and Software 223, 112373. DOI: 10.1016/j.jss.2025.112373. Online publication date: May 2025.
  • (2024) FIPSER: Improving Fairness Testing of DNN by Seed Prioritization. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 1069-1081. DOI: 10.1145/3691620.3695486. Online publication date: 27 Oct 2024.
  • (2024) AutoRIC: Automated Neural Network Repairing Based on Constrained Optimization. ACM Transactions on Software Engineering and Methodology 34, 2, 1-29. DOI: 10.1145/3690634. Online publication date: 4 Sep 2024.
  • (2024) Enhancing Algorithmic Fairness: Integrative Approaches and Multi-Objective Optimization Application in Recidivism Models. In Proceedings of the 19th International Conference on Availability, Reliability and Security, 1-10. DOI: 10.1145/3664476.3669978. Online publication date: 30 Jul 2024.
  • (2024) MirrorFair: Fixing Fairness Bugs in Machine Learning Software via Counterfactual Predictions. Proceedings of the ACM on Software Engineering 1, FSE, 2121-2143. DOI: 10.1145/3660801. Online publication date: 12 Jul 2024.
  • (2024) Fairness Testing: A Comprehensive Survey and Analysis of Trends. ACM Transactions on Software Engineering and Methodology 33, 5, 1-59. DOI: 10.1145/3652155. Online publication date: 4 Jun 2024.
  • (2024) NeuFair: Neural Network Fairness Repair with Dropout. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 1541-1553. DOI: 10.1145/3650212.3680380. Online publication date: 11 Sep 2024.
  • (2024) Efficient DNN-Powered Software with Fair Sparse Models. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 983-995. DOI: 10.1145/3650212.3680336. Online publication date: 11 Sep 2024.
  • (2024) Interpretability Based Neural Network Repair. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 908-919. DOI: 10.1145/3650212.3680330. Online publication date: 11 Sep 2024.
  • (2024) COSTELLO: Contrastive Testing for Embedding-Based Large Language Model as a Service Embeddings. Proceedings of the ACM on Software Engineering 1, FSE, 906-928. DOI: 10.1145/3643767. Online publication date: 12 Jul 2024.
