
Summarize While Translating: Universal Model With Parallel Decoding for Summarization and Translation


Abstract:

Recently, multi-decoder and universal models have attracted increasing interest in speech and language processing because they allow learning common representations across tasks. These models learn a common representation by sharing some or all of their network parameters. Moreover, such a universal model can handle tasks unseen during training (zero-shot tasks). However, these models do not fully exploit inter-dependencies between tasks during decoding, since they usually decode each task independently. In this paper, we address this issue by extending the universal model to perform multi-task parallel decoding, with a cross-attention module between decoders that captures task inter-dependencies explicitly. We also introduce a novel multi-stream beam search algorithm to enable such parallel decoding. We test the proposed model on multilingual (English and Portuguese) text/speech translation and summarization, confirming its potential, especially on zero-shot tasks.
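To make the two ideas in the abstract concrete, here is a minimal, schematic Python sketch of (a) a single-head dot-product cross-attention step in which one decoder's state attends over the other decoder's states, and (b) one synchronized step of a multi-stream beam search in which each hypothesis carries one partial output per task and extensions are scored jointly. All function names, the toy vocabulary, and the toy scoring rule are illustrative assumptions, not the paper's actual algorithm or implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def cross_attend(query, keys, values):
    """Single-head dot-product cross-attention: one decoder's current state
    (query) attends over the other decoder's states (keys/values).
    Hypothetical helper, not the paper's module."""
    scores = keys @ query / np.sqrt(query.shape[0])
    return softmax(scores) @ values

def multi_stream_beam_step(beams, expand_fn, beam_size):
    """One synchronized step of a multi-stream beam search: each hypothesis
    is (joint_score, (stream_a_tokens, stream_b_tokens)); expand_fn proposes
    jointly scored extensions of both streams at once (in the full model,
    cross-attention between decoder states would inform this joint score)."""
    candidates = []
    for score, streams in beams:
        candidates.extend(expand_fn(score, streams))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return candidates[:beam_size]

# Toy joint expansion: extend both streams over a 3-token vocabulary,
# with a made-up deterministic log-probability for illustration.
VOCAB = 3
def toy_expand(score, streams):
    return [
        (score - 0.1 * (a + b), (streams[0] + [a], streams[1] + [b]))
        for a in range(VOCAB) for b in range(VOCAB)
    ]

beams = [(0.0, ([], []))]          # one empty hypothesis per task pair
beams = multi_stream_beam_step(beams, toy_expand, beam_size=3)
context = cross_attend(np.ones(4), np.eye(4), np.arange(16.0).reshape(4, 4))
```

The key difference from ordinary beam search is that pruning happens over joint (summary, translation) hypothesis pairs rather than over each task's beam separately, which is what lets one task's partial output influence the other's.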
Date of Conference: 16-20 December 2023
Date Added to IEEE Xplore: 19 January 2024
Conference Location: Taipei, Taiwan

