Abstract
Decisions are best made with every relevant variable in view, yet this situation almost never occurs. Counterfactuals have been studied as a way to build more explainable systems and models. In furtherance of that research, this paper proposes CORFAD (Counterfactual Retrieval for Augmentation and Decisions). We explore user-generated counterfactual tweets; by aggregating counterfactual statements that mention pre-determined keywords, CORFAD simplifies data analysis, suggesting the variables on which future actions are likely to have greater or lesser effect with respect to a defined goal. This serves a dual purpose: it makes synthetic counterfactual data generation more focused and less likely to produce unhelpful explanations, and it can stand alone as an aid to decision makers. As a test case, the paper examines counterfactual statements about the Tesla Model 3 to extract insights that can guide decision-making in situations where multiple variables are in play.
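The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the keyword set, the sample tweets, and the function name `rank_variables` are all assumptions, and the tweets are presumed to have already been classified as counterfactual by an upstream model.

```python
# Hypothetical sketch of CORFAD-style aggregation: given statements
# already classified as counterfactual, count how often each
# pre-determined keyword (candidate decision variable) is mentioned,
# and rank variables by mention frequency.
from collections import Counter

# Assumed candidate variables for the Tesla Model 3 test case.
KEYWORDS = {"battery", "price", "autopilot", "interior"}

# Illustrative stand-ins for retrieved counterfactual tweets.
counterfactual_tweets = [
    "if the model 3 had a bigger battery i would have bought it",
    "would have ordered one if the price were lower",
    "i wish the interior had more buttons",
    "if the price dropped i might have considered the battery upgrade",
]

def rank_variables(tweets, keywords):
    """Count keyword mentions across counterfactual statements."""
    counts = Counter()
    for tweet in tweets:
        tokens = set(tweet.lower().split())  # naive whitespace tokenizer
        counts.update(tokens & keywords)
    return counts.most_common()

print(rank_variables(counterfactual_tweets, KEYWORDS))
```

On this toy data, "battery" and "price" surface as the variables users most often counterfactualize about, which is the kind of ranking a decision maker could act on.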
This work was supported by the National Natural Science Foundation of China [71901150], the China Postdoctoral Science Foundation Grant [2019M663083], and the Guangdong Province Postgraduate Education Innovation Plan (2019SFKC46).
Notes
- 1.
For intuition, 1 GB of text data is roughly equivalent to reading 1,000 books of 600 pages each, with 300 words per page.
- 6.
This hinges on improvements to the underlying model, which achieves an F1 score of 86.9% on the SemEval post-evaluation leaderboard.
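The figure in note 1 above can be checked with quick arithmetic. The ~6 bytes per word (average English word plus trailing space) is an assumed average, not a figure from the paper:

```python
# Back-of-the-envelope check of note 1: 1,000 books x 600 pages
# x 300 words/page, at an assumed ~6 bytes per word.
books, pages, words_per_page = 1000, 600, 300
total_words = books * pages * words_per_page   # 180,000,000 words
approx_bytes = total_words * 6                 # ~1.08e9 bytes, i.e. ~1 GB
print(total_words, approx_bytes)
```

So the stated book count does indeed land in the neighborhood of one gigabyte of raw text.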
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Kelechi, N., Geng, S. (2020). Counterfactual Retrieval for Augmentation and Decisions. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol. 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_30
DOI: https://doi.org/10.1007/978-3-030-62460-6_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62459-0
Online ISBN: 978-3-030-62460-6
eBook Packages: Computer Science (R0)