DOI: 10.1145/3539618.3591917

On Stance Detection in Image Retrieval for Argumentation

Published: 18 July 2023

ABSTRACT

Given a text query on a controversial topic, the task of Image Retrieval for Argumentation is to rank images according to how well they can be used to support a discussion of the topic. An important subtask therein is to determine the stance of a retrieved image, i.e., whether the image supports the pro or the con side of the topic. In this paper, we conduct a comprehensive reproducibility study of the state of the art as represented by the CLEF'22 Touché lab and an in-house extension of it. Based on the submitted approaches, we develop a unified and modular retrieval process and reimplement each submitted approach within this process. Through this unified reproduction, which also includes models not previously considered, we improve argumentative image detection to a precision@10 of up to 0.832. Despite this reproduction success, however, our study also reveals a previously unknown negative result: for stance detection, none of the reproduced or new approaches convincingly beats a random baseline. To understand the challenges inherent to image stance detection, we conduct a thorough error analysis and provide insights into potential new ways to approach this task.
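To make the two evaluation settings mentioned in the abstract concrete, here is a minimal Python sketch (not the authors' implementation; all image IDs, relevance judgments, and stance labels are hypothetical) of how precision@10 is computed over a ranked image list and how a random pro/con stance baseline, the baseline that none of the reproduced approaches convincingly beats, can be simulated.

```python
import random

def precision_at_k(ranked_image_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved images judged relevant (argumentative)."""
    top_k = ranked_image_ids[:k]
    return sum(1 for image_id in top_k if image_id in relevant_ids) / k

def random_stance_baseline(image_ids, seed=0):
    """Assign 'pro' or 'con' uniformly at random to each image."""
    rng = random.Random(seed)
    return {image_id: rng.choice(["pro", "con"]) for image_id in image_ids}

# Toy data for illustration only (hypothetical IDs and judgments).
ranking = [f"img{i}" for i in range(20)]                         # system output for one topic
relevant = {f"img{i}" for i in range(20) if i % 3 != 0}          # images judged argumentative
gold_stance = {f"img{i}": "pro" if i % 2 == 0 else "con" for i in range(20)}

p_at_10 = precision_at_k(ranking, relevant, k=10)
predicted = random_stance_baseline(ranking)
stance_accuracy = sum(predicted[i] == gold_stance[i] for i in ranking) / len(ranking)

print(f"precision@10           = {p_at_10:.3f}")          # 0.600 on this toy ranking
print(f"random stance accuracy = {stance_accuracy:.3f}")  # expected ~0.5 on balanced topics
```

On topics with a roughly balanced pro/con distribution, the random baseline's expected stance accuracy is about 0.5, which is the bar the paper finds current approaches fail to clear convincingly.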


Supplemental Material

SIGIR23-rep3789.mp4 (MP4 video, 139.1 MB)


Published in

SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2023, 3567 pages
ISBN: 9781450394086
DOI: 10.1145/3539618

Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article

Acceptance Rates: overall acceptance rate of 792 out of 3,983 submissions (20%)
