A Fine-Grained Network for Joint Multimodal Entity-Relation Extraction



Abstract:

Joint multimodal entity-relation extraction (JMERE) is a challenging task that involves two joint subtasks, namely named entity recognition and relation extraction, over multimodal data such as text sentences paired with images. Previous JMERE methods have primarily employed 1) pipeline models, which apply pre-trained unimodal models separately and ignore the interaction between tasks, or 2) word-pair relation tagging methods, which neglect neighboring word pairs. To address these limitations, we propose a fine-grained network for JMERE. Specifically, we introduce a fine-grained alignment module that establishes phrase-patch connections between text phrases and visual objects, enabling the model to learn consistent multimodal representations from multimodal data. To handle task-irrelevant image information, we propose a gate fusion module that mitigates the impact of image noise and balances the contributions of image objects and text representations. In addition, we design a multi-word decoder that enables ensemble prediction of tags for each word pair; by leveraging the predicted results of neighboring word pairs, it improves the extraction of multi-word entities. Evaluation results from a series of experiments demonstrate the superiority of our proposed model over state-of-the-art models in JMERE.
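The gate fusion idea described above can be sketched with a simple gating mechanism: a learned sigmoid gate weighs text features against image-object features dimension by dimension, damping noisy or task-irrelevant visual information. This is a minimal illustrative sketch under assumed shapes and names (`gate_fusion`, `W_g`, `b_g` are hypothetical), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fusion(text_feat, image_feat, W_g, b_g):
    """Fuse text and image features with a per-dimension gate in (0, 1).

    text_feat, image_feat: (d,) feature vectors
    W_g: (d, 2d) gate weights, b_g: (d,) gate bias -- illustrative shapes.
    """
    concat = np.concatenate([text_feat, image_feat])   # (2d,)
    g = sigmoid(W_g @ concat + b_g)                    # (d,), each entry in (0, 1)
    # Convex combination per dimension: g -> 1 favors text, g -> 0 favors image.
    return g * text_feat + (1.0 - g) * image_feat

# Toy usage with random features and small random gate weights.
rng = np.random.default_rng(0)
d = 4
t = rng.normal(size=d)                 # text representation
v = rng.normal(size=d)                 # image-object representation
W = rng.normal(size=(d, 2 * d)) * 0.1  # gate weights (untrained, illustrative)
b = np.zeros(d)
fused = gate_fusion(t, v, W, b)
print(fused.shape)
```

Because the gate output lies in (0, 1), each fused dimension is a convex mixture of the corresponding text and image values, which is one common way such a module can keep a balanced representation between the two modalities.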
Published in: IEEE Transactions on Knowledge and Data Engineering ( Volume: 37, Issue: 1, January 2025)
Page(s): 1 - 14
Date of Publication: 25 October 2024

