Abstract
Remote work due to the COVID-19 pandemic is expected to become the new normal, a situation in which people use their personal computers at home for several activities such as reading emails, surfing the web, and chatting with friends. While doing so, users are not focused on securing their systems, and they often lack the skills and knowledge to defend against cybercrime. In this paper, we present the design and evaluation of a novel interface that warns users against phishing attacks. This interface resembles the ones shown by browsers such as Chrome and Firefox when a suspicious phishing website is opened, but it includes information explaining why the website might be a scam. These explanations are based on website features commonly used by AI-based systems to classify a website as phishing or not, and they aim to help users detect phishing websites. To ensure that the explanations are highly understandable and effective, the C-HIP model was adopted to design the messages, which were iteratively refined through a static analysis of their comprehension, sentiment, and readability.
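The static readability analysis mentioned above can be illustrated with two classic readability formulas, Flesch Reading Ease and SMOG grading. The sketch below is illustrative only: the sample warning text and the naive vowel-group syllable counter are assumptions for demonstration, not the paper's actual procedure or message wording.

```python
import math
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels,
    # with a minimum of one syllable per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    # Higher scores mean easier text (90-100 ~ very easy, 0-30 ~ very hard).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def smog_grade(text):
    # McLaughlin's SMOG (1969): estimates the school grade needed,
    # based on the density of words with three or more syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291

# Hypothetical warning message, used only to exercise the formulas.
warning = ("This website may be a scam. Its address imitates a well-known "
           "bank. Do not enter your password here.")
print("Flesch Reading Ease:", round(flesch_reading_ease(warning), 1))
print("SMOG grade:", round(smog_grade(warning), 1))
```

A short, plainly worded warning like the one above scores in the "fairly easy" Flesch band, which is the kind of target such a static analysis would check each candidate message against.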
© 2021 Springer Nature Switzerland AG
Cite this paper
Aneke, J., Ardito, C., Desolda, G. (2021). Help the User Recognize a Phishing Scam: Design of Explanation Messages in Warning Interfaces for Phishing Attacks. In: Moallem, A. (eds) HCI for Cybersecurity, Privacy and Trust. HCII 2021. Lecture Notes in Computer Science(), vol 12788. Springer, Cham. https://doi.org/10.1007/978-3-030-77392-2_26
Print ISBN: 978-3-030-77391-5
Online ISBN: 978-3-030-77392-2
eBook Packages: Computer Science, Computer Science (R0)