DOI: 10.1145/3654823.3654862
Research article

Semantic Driven Stylized Image Captioning For Artworks

Published: 29 May 2024

Abstract

Most existing image captioning methods perform impressively on real-life images, but few explore generating stylized captions for artificial images such as artworks. This task poses two challenges. First, modern techniques usually extract fine-grained visual features with pretrained object detectors, which perform poorly on artworks because of the domain gap and thus limit overall model performance. Second, different emotional inclinations draw attention to different semantics in an image, making it difficult to establish a clear correlation between emotion and visual input. To address these issues, we propose an object-detector-free Semantic-Driven Stylized Image Captioning Network (SD-Net). Specifically, we first use an encoder to obtain visual tokens from CLIP image features and enhance them with an additional emotion token. A semantic retrieval module then directly reconstructs emotion-related semantic words. Finally, we feed the visual tokens and semantic words into a sentence decoder to generate the stylized caption. Experimental results demonstrate that our approach achieves competitive performance on the ArtEmis dataset across various benchmarks.
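To make the described pipeline concrete, below is a minimal PyTorch sketch of the three stages the abstract outlines: a transformer encoder over precomputed CLIP image features with a learnable emotion token, a semantic retrieval head that scores emotion-related words, and a sentence decoder that attends over both streams. Every name, dimension, and design detail here (`SDNetSketch`, `semantic_head`, the top-k retrieval-by-classification step) is an assumption for illustration, not the authors' implementation.

```python
# A minimal sketch of an SD-Net-style pipeline, assuming PyTorch and
# precomputed CLIP grid features. All module choices are illustrative.
import torch
import torch.nn as nn


class SDNetSketch(nn.Module):
    def __init__(self, d_model=512, vocab_size=10000, num_semantic_words=5):
        super().__init__()
        # Visual encoder over CLIP image features (object-detector-free).
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=3,
        )
        # Learnable emotion token appended to the visual tokens.
        self.emotion_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Semantic retrieval head: scores vocabulary words from the
        # emotion-enhanced tokens; the top-k words stand in for the
        # reconstructed emotion-related semantics.
        self.semantic_head = nn.Linear(d_model, vocab_size)
        self.word_embed = nn.Embedding(vocab_size, d_model)
        self.num_semantic_words = num_semantic_words
        # Sentence decoder attends over visual tokens + semantic words.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=3,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, clip_feats, caption_embeds):
        # clip_feats: (B, N, d_model) CLIP image features.
        B = clip_feats.size(0)
        tokens = torch.cat(
            [clip_feats, self.emotion_token.expand(B, -1, -1)], dim=1
        )
        visual_tokens = self.encoder(tokens)
        # Retrieve top-k emotion-related semantic words from pooled tokens.
        word_logits = self.semantic_head(visual_tokens.mean(dim=1))
        topk_ids = word_logits.topk(self.num_semantic_words, dim=-1).indices
        semantic_words = self.word_embed(topk_ids)  # (B, k, d_model)
        # Decode the stylized caption conditioned on both streams.
        memory = torch.cat([visual_tokens, semantic_words], dim=1)
        hidden = self.decoder(caption_embeds, memory)
        return self.out(hidden), word_logits


if __name__ == "__main__":
    model = SDNetSketch()
    clip_feats = torch.randn(2, 50, 512)      # e.g. CLIP ViT grid features
    caption_embeds = torch.randn(2, 20, 512)  # embedded caption prefix
    logits, word_logits = model(clip_feats, caption_embeds)
    print(logits.shape)  # torch.Size([2, 20, 10000])
```

In this hypothetical formulation, the retrieval head would be supervised with the emotion-related words of the ground-truth caption, so the decoder receives explicit semantic anchors rather than relying on detector-based region features.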


Published In

CACML '24: Proceedings of the 2024 3rd Asia Conference on Algorithms, Computing and Machine Learning
March 2024, 478 pages
ISBN: 979-8-4007-1641-6
DOI: 10.1145/3654823

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. multi-modal learning
  2. neural networks
  3. stylized image captioning
  4. visual language model


Conference

CACML 2024

Acceptance Rates

Overall acceptance rate: 93 of 241 submissions (39%)
