
Text-Visualizing Neural Network Model: Understanding Online Financial Textual Data

  • Conference paper

In: Advances in Knowledge Discovery and Data Mining (PAKDD 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10939)

Abstract

This study aims to visualize financial documents to swiftly obtain market sentiment information from these documents and determine the reason for which sentiment decisions are made. This type of visualization is considered helpful for nonexperts to easily understand technical documents such as financial reports. To achieve this, we propose a novel interpretable neural network (NN) architecture called gradient interpretable NN (GINN). GINN can visualize both the market sentiment score from a whole financial document and the sentiment gradient scores in concept units. We experimentally demonstrate the validity of text visualization produced by GINN using a real textual dataset.
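To make the mechanism concrete, the following is a minimal sketch, in PyTorch, of the general input-gradient style of word-level sentiment attribution on which GINN builds: a document-level sentiment score is differentiated with respect to the word inputs, and the resulting gradients are read as per-word contribution scores. This illustrates the technique only, not the authors' implementation; the model `SentimentNet`, the function `word_scores`, and all dimensions are hypothetical.

    # Minimal sketch of input-gradient word attribution (illustrative only;
    # not the authors' GINN code). A bag-of-embeddings classifier is
    # differentiated with respect to its word embeddings, and the
    # gradient-times-input values serve as word-level sentiment scores.
    import torch
    import torch.nn as nn

    class SentimentNet(nn.Module):           # hypothetical stand-in for GINN
        def __init__(self, vocab_size=1000, emb_dim=50):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.fc = nn.Linear(emb_dim, 1)  # document-level sentiment score

        def forward(self, vecs):             # vecs: (n_words, emb_dim)
            return self.fc(vecs.mean(dim=0))

    def word_scores(model, word_ids):
        """Return a signed contribution score for each word in a document."""
        vecs = model.emb(word_ids).detach().requires_grad_(True)
        model(vecs).backward()
        return (vecs.grad * vecs).sum(dim=1)  # gradient . embedding, per word

    model = SentimentNet()
    doc = torch.tensor([3, 17, 42, 7])        # toy word-id sequence
    print(word_scores(model, doc))            # one score per word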


Notes

  1. http://textream.yahoo.co.jp/category/1834773.

  2. http://socsim.t.u-tokyo.ac.jp/wp/index.php/2017/11/15/titoh/ginn/.


Acknowledgment

This work was supported in part by JSPS Fellows Grant Number 17J04768.

Author information

Correspondence to Tomoki Ito.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 133 KB)

Theoretical Analysis of the II Algorithm

Let \(\varOmega _{dw}^{(k)}\) be the set of words included both in the kth cluster and in the polarity dictionary, \(D^{(p)}\) and \(D^{(n)}\) be the positive and negative document sets, \(\partial {w}_{k, i}^{(2)*}\) be the ith component of \(\partial \varvec{w}_{k}^{(2)*}\), \(p^{-}(w_{k,i}) = p \left( j \in D^{(n)} \mid z_{j, i}^{(1, k)} > 0\right) \), \(p^{+}(w_{k,i}) = 1 - p^{-}(w_{k,i})\), and \(\partial \varvec{H}^{(j, t)}\) be the gradient value of \(\varvec{H}^{(j, t)}\) in Update. Then, the following holds.

Proposition 1

If we utilize Update for the parameter updates, then

$$\begin{aligned} \begin{cases} E[\partial {w}_{k, i}^{(2)*}] < 0 &{} \left( \frac{p^{+}(w_{k,i})}{p^{-}(w_{k,i})} > \frac{E[|\varDelta ^{(2)*}_{j, k}| \mid z_{j, i}^{(1, k)} = 1 \cap j \in D^{(n)}]}{E[|\varDelta ^{(2)*}_{j, k}| \mid z_{j, i}^{(1, k)} = 1 \cap j \in D^{(p)}]}\right) \\ E[\partial {w}_{k, i}^{(2)*}] > 0 &{} \left( \frac{p^{+}(w_{k,i})}{p^{-}(w_{k,i})} < \frac{E[|\varDelta ^{(2)*}_{j, k}| \mid z_{j, i}^{(1, k)} = 1 \cap j \in D^{(n)}]}{E[|\varDelta ^{(2)*}_{j, k}| \mid z_{j, i}^{(1, k)} = 1 \cap j \in D^{(p)}]}\right) \end{cases} \end{aligned}$$
(1)

holds. Proposition 1 indicates that the II algorithm is expected to assign each positive word \(\in \varOmega _{pw}^{(k)}\) a positive sentiment score and each negative word \(\in \varOmega _{nw}^{(k)}\) a negative sentiment score, provided that the following two conditions are met for every k: (Cond 1) the values of \(t^+\) and \(t^-\) are sufficiently large; and (Cond 2) for every word \(w_{k,i^{+}} \in \varOmega _{dw}^{(k)} \cap \varOmega _{pw}^{(k)}\) and every word \(w_{k,i^{-}} \in \varOmega _{dw}^{(k)} \cap \varOmega _{nw}^{(k)}\), the initial value of \(w^{(2)}_{k,i^{+}}\) given by Init is positive and sufficiently large, and the initial value of \(w^{(2)}_{k,i^{-}}\) is negative and sufficiently small.
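As a concrete, hedged illustration of how the sign condition in Eq. (1) could be checked empirically, the sketch below estimates, for a single word, the probability ratio \(p^{+}/p^{-}\) and the ratio of the two conditional expectations of \(|\varDelta ^{(2)*}_{j, k}|\) from logged per-document quantities, then reports the predicted sign of \(E[\partial {w}_{k, i}^{(2)*}]\). The data layout and all variable names are assumptions made for illustration; here the logged quantities are simply simulated.

    # Illustrative empirical check of the sign condition in Eq. (1) for one
    # word (assumed data layout; not the authors' code). Each record holds:
    # whether document j is positive, whether the word fired
    # (z_{j,i}^{(1,k)} > 0), and the observed magnitude |Delta^{(2)*}_{j,k}|.
    import numpy as np

    rng = np.random.default_rng(0)
    is_positive = rng.random(10_000) < 0.6      # j in D^(p)?
    fired = rng.random(10_000) < 0.3            # z_{j,i}^{(1,k)} > 0?
    delta_mag = rng.exponential(1.0, 10_000)    # simulated |Delta^{(2)*}_{j,k}|

    p_plus = is_positive[fired].mean()          # estimate of p^+(w_{k,i})
    p_minus = 1.0 - p_plus                      # estimate of p^-(w_{k,i})
    e_neg = delta_mag[fired & ~is_positive].mean()  # E[|Delta| | fired, D^(n)]
    e_pos = delta_mag[fired & is_positive].mean()   # E[|Delta| | fired, D^(p)]

    lhs, rhs = p_plus / p_minus, e_neg / e_pos
    sign = "negative" if lhs > rhs else "positive"
    print(f"p+/p- = {lhs:.3f}, expectation ratio = {rhs:.3f}; "
          f"E[dw] expected {sign}")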

Let \({\varvec{H}^{d}}^{(j, t)} = \varvec{H}^{(j, t)} - {\varvec{H}^{*}}^{(j, t)}\). Then, the following propositions, which are important for explaining the market mood predictability of GINN, hold.

Proposition 2

If the initial values of \(|\varvec{W}^{(3)}|\) and \(|\varvec{W}^{(4)}|\) are sufficiently small (Cond 3) and, for every \(j \in \varOmega ^{(t)}_m\), the values of \(\varvec{z}^{(2)}_{j}\) are positive when \(j \in D^{(p)}\) and negative when \(j \in D^{(n)}\), then the first and second row vector values of \(\partial \varvec{H}^{(j, t)}\) are positive and negative, respectively, and
$$\begin{aligned} \frac{\sum _{j \in \varOmega ^{(t+1)}_m} \Vert {\varvec{H}^{d}}^{(j, t+1)} \Vert _{1} }{\sum _{j \in \varOmega ^{(t+1)}_m} \Vert \varvec{H}^{(j, t+1)}\Vert _{1}} \le \frac{\sum _{j \in \varOmega ^{(t+1)}_m} \Vert {\varvec{H}^{d}}^{(j, t)} \Vert _{1} }{\sum _{j \in \varOmega ^{(t+1)}_m} \Vert \varvec{H}^{(j, t)}\Vert _{1}}. \end{aligned}$$

Proposition 3

If Cond 1–3 are established for every k, and the values \(|\varOmega _{pw}^{(k, t^+)}|\), \(|\varOmega _{nw}^{(k, t^-)}|\), and \(|\varOmega _m|\) are sufficiently large, then \(\lim _{t \rightarrow \infty } \frac{\sum _{j \in \varOmega ^{(t)}_m} \Vert {\varvec{H}^{d}}^{(j, t)} \Vert _{1} }{\sum _{j \in \varOmega ^{(t)}_m} \Vert \varvec{H}^{(j, t)}\Vert _{1}} = 0.\)
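Propositions 2 and 3 together suggest a simple training diagnostic: the relative deviation \(\sum _j \Vert {\varvec{H}^{d}}^{(j, t)} \Vert _{1} / \sum _j \Vert \varvec{H}^{(j, t)}\Vert _{1}\) should decrease toward zero as training proceeds. Below is a minimal sketch of such a monitor, assuming the matrices \(\varvec{H}^{(j, t)}\) and \({\varvec{H}^{*}}^{(j, t)}\) are available as arrays at each epoch and reading \(\Vert \cdot \Vert _{1}\) as the entrywise L1 norm; the function name and toy data are hypothetical.

    # Hypothetical monitor for the ratio in Propositions 2-3:
    # sum_j ||H^{(j,t)} - H*^{(j,t)}||_1 / sum_j ||H^{(j,t)}||_1 per epoch,
    # with ||.||_1 read as the entrywise L1 norm (an assumption).
    import numpy as np

    def deviation_ratio(H, H_star):
        """H, H_star: lists of per-document matrices at one epoch t."""
        num = sum(np.abs(h - hs).sum() for h, hs in zip(H, H_star))
        den = sum(np.abs(h).sum() for h in H)
        return num / den

    # Toy usage: the ratio should trend toward 0 when training behaves as
    # Proposition 3 predicts (Cond 1-3 met, enough labeled words/documents).
    rng = np.random.default_rng(1)
    H_star = [rng.normal(size=(2, 8)) for _ in range(5)]
    for t in range(3):
        noise = 1.0 / (t + 1)                  # shrinking deviation over epochs
        H = [hs + noise * rng.normal(size=hs.shape) for hs in H_star]
        print(f"epoch {t}: ratio = {deviation_ratio(H, H_star):.3f}")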

See the supplementary material (footnote 2) for the proofs and further details.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Ito, T., Sakaji, H., Tsubouchi, K., Izumi, K., Yamashita, T. (2018). Text-Visualizing Neural Network Model: Understanding Online Financial Textual Data. In: Phung, D., Tseng, V., Webb, G., Ho, B., Ganji, M., Rashidi, L. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2018. Lecture Notes in Computer Science, vol 10939. Springer, Cham. https://doi.org/10.1007/978-3-319-93040-4_20

  • DOI: https://doi.org/10.1007/978-3-319-93040-4_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93039-8

  • Online ISBN: 978-3-319-93040-4

  • eBook Packages: Computer Science, Computer Science (R0)
