
Truth or Fiction: Multimodal Learning Applied to Earnings Calls


Abstract:

A significant amount of resources has been devoted in both academia and industry to studying the impact of financial text on company perception and performance. To mitigate potential adverse outcomes, companies have begun to regulate word usage based on perceived sentiment, making conventional text-based analysis less reliable. To address this, we present a multimodal bidirectional Long Short-Term Memory (LSTM) framework augmented with a cross-attention fusion mechanism, trained on audio and text data obtained from quarterly earnings conference calls. The framework is applied to two tasks: financial restatement prediction and market movement prediction. We compare the proposed model against several baseline methods and find that, while it does not achieve uniformly superior performance, utilizing multimodal data leads to a substantial increase in model accuracy for restatement prediction. Furthermore, we gain insight into the effectiveness of semantic- and emotion-related features for these tasks.
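The abstract's core architectural idea is cross-attention fusion: hidden states from one modality (e.g., text) attend over hidden states of the other (e.g., audio), and the attended context is combined with the original features. The sketch below is not the paper's implementation; it is a minimal NumPy illustration of scaled dot-product cross-attention between two hypothetical BiLSTM output sequences, with function and variable names invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(text_h, audio_h):
    """Fuse two modality sequences with cross-attention.

    text_h:  (T_text, d)  text-side hidden states (queries)
    audio_h: (T_audio, d) audio-side hidden states (keys/values)
    Returns (T_text, 2*d): each text state concatenated with its
    attention-weighted audio context.
    """
    d = text_h.shape[1]
    scores = text_h @ audio_h.T / np.sqrt(d)   # (T_text, T_audio)
    attn = softmax(scores, axis=-1)            # rows sum to 1
    context = attn @ audio_h                   # (T_text, d)
    return np.concatenate([text_h, context], axis=-1)

rng = np.random.default_rng(0)
fused = cross_attention_fuse(rng.standard_normal((5, 8)),
                             rng.standard_normal((7, 8)))
print(fused.shape)  # (5, 16)
```

In a full model such as the one described here, the fused features would typically be pooled and passed to a classifier head for the restatement or market-movement label; attention can also be applied symmetrically (audio attending over text) and the two fused streams combined.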
Date of Conference: 17-20 December 2022
Date Added to IEEE Xplore: 26 January 2023
Conference Location: Osaka, Japan

