
Context Relevance for Text Analysis and Enhancement for Soft Information Fusion


Abstract

Soft information fusion, that is, fusing information from natural language messages with other soft information and with information from physical sensors, is facilitated by representing the information in the messages as a formally defined propositional graph that abides by the uniqueness principle: the principle that every entity or event mentioned in a message is represented by a unique node in the graph or, at worst, by several nodes connected by co-referentiality relations. To further facilitate fusion, the information from a message is enhanced with relevant information from background knowledge sources. What knowledge is relevant is determined by also representing the background knowledge as a propositional graph, embedding the message graph into the background knowledge graph by using the uniqueness principle to fuse each message-graph node with a corresponding background-knowledge node, and then using spreading activation to find subgraphs of the background knowledge graph. The combination of the message graph with the retrieved subgraphs constitutes the "relevant information." In this chapter, we discuss, evaluate, and compare two techniques for spreading activation.
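To make the retrieval step concrete, the following is a minimal Python sketch of threshold-and-decay spreading activation over a background knowledge graph, in the spirit of the parameters mentioned in notes 3 and 4 below (initial activation 1.0, threshold 0.5, decay 0.9). The graph fragment, node names, and the spread_activation function are illustrative assumptions, not the chapter's implementation.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.9, threshold=0.5, max_steps=10):
    """Toy threshold-and-decay spreading activation over a propositional graph.

    graph: adjacency dict mapping each node to a list of neighbouring nodes.
    seeds: background-knowledge nodes fused with message-graph nodes; each
           starts at activation 1.0 (cf. note 3).
    Returns the nodes whose activation reaches the threshold, i.e. the node
    set of the retrieved "relevant" subgraph.
    """
    activation = defaultdict(float)
    for s in seeds:
        activation[s] = 1.0
    frontier = set(seeds)
    for _ in range(max_steps):
        next_frontier = set()
        for node in frontier:
            passed = activation[node] * decay   # attenuate at each hop (cf. note 4)
            if passed < threshold:
                continue
            for nbr in graph.get(node, ()):
                if passed > activation[nbr]:    # keep the strongest activation seen
                    activation[nbr] = passed
                    next_frontier.add(nbr)
        if not next_frontier:
            break
        frontier = next_frontier
    return {n for n, a in activation.items() if a >= threshold}

# Hypothetical background-knowledge fragment (node names are invented).
background = {
    "person1": ["memberOf1"],
    "memberOf1": ["person1", "group1"],
    "group1": ["memberOf1", "Organization"],
    "Organization": ["group1"],
}
print(spread_activation(background, seeds=["person1"]))
# With decay 0.9 and threshold 0.5, all four nodes are retrieved.
```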


Notes

  1. Node wft4! has been left uncollapsed in preparation for Fig. 14.3.

  2. More appropriately, this can be viewed as constraining the memory encoding process of some agent, though a knowledge engineer is performing the encoding process.

  3. A value of 1.0 is used for the initial activation value because it is the value recommended in the Texai approach and is always greater than or equal to the activation threshold.

  4. The activation threshold was 0.5 and the decay was 0.9 (cf. Sect. 14.5.1.1).

  5. These rules are provisional ones created for testing purposes and are probably not those an SME would come up with for this domain.

  6. The actual representation of this reasoning rule is much more specific.

  7. ACT-R representations typically use smaller knowledge bases than large-scale systems, such as Cyc [36], that require information retrieval techniques. As such, the activation calculation used in ACT-R ranks all the information in declarative memory and then selects the best-ranked results as a match.

  8. The ACT-R specification [29] recommends a value of 0.5 for d based on numerous tests, but that recommendation is for the retrieval of a single chunk and may not carry over to using the spreading activation algorithm for information retrieval.

  9. The equation provided in [29] calculates the associative strength as \(S_{ji} \approx \ln(\mathrm{prob}(i \mid j)/\mathrm{prob}(i))\), but provides no specification for calculating these probabilities in a propositional graph (see the illustrative sketch following these notes).

  10. At the time of this study, Tractor was in its infancy, so the messages had to be manually translated into SNePS 3 propositional graphs. This limited the number of examples we could use in the evaluation.

  11. The SNePS 3 KRR system, the background knowledge sources and the means of loading them into SNePS 3, the message representations, and the code for evaluating the algorithms are available at http://www.cse.buffalo.edu/~mwk3/Papers/evaluation.html.

  12. An origin set for a proposition is the set of propositions used in the derivation of that proposition. Origin sets originate in relevance logic proof theory [40].
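As a purely illustrative companion to notes 8 and 9, the sketch below evaluates the two ACT-R quantities they mention: the base-level activation with decay d (the value 0.5 recommended in [29]) and the associative strength S_ji ≈ ln(prob(i|j)/prob(i)). The numeric inputs are invented; as note 9 points out, [29] gives no recipe for estimating these probabilities in a propositional graph.

```python
import math

def base_level_activation(lags, d=0.5):
    # ACT-R base-level learning: B_i = ln( sum_k t_k^(-d) ), where each t_k is
    # the time since the k-th use of chunk i and d is the decay (note 8).
    return math.log(sum(t ** (-d) for t in lags))

def associative_strength(p_i_given_j, p_i):
    # S_ji ~ ln( prob(i|j) / prob(i) )  (note 9, after [29]).
    return math.log(p_i_given_j / p_i)

# Invented example values, for illustration only.
print(base_level_activation([1.0, 5.0, 20.0]))  # chunk used 1, 5, and 20 time units ago
print(associative_strength(0.40, 0.02))         # ~ ln(20) ~ 3.0
```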

References

  1. G.A. Gross, R. Nagi, K. Sambhoos, D.R. Schlegel, S.C. Shapiro, G. Tauer, Towards hard + soft data fusion: processing architecture and implementation for the joint fusion and analysis of hard and soft intelligence data, in Proceedings of the 15th International Conference on Information Fusion (Fusion 2012) (ISIF, 2012), pp. 955–962

  2. M. Prentice, M. Kandefer, S.C. Shapiro, Tractor: a framework for soft information fusion, in Proceedings of the 13th International Conference on Information Fusion (Fusion 2010) (2010), paper Th3.2.2

  3. S.C. Shapiro, D.R. Schlegel, Natural language understanding for soft information fusion, in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013) (ISIF, 2013), 9 pp. (unpaginated)

  4. A.B. Poore, S. Lu, B.J. Suchomel, Data association using multiple frame assignments, in Handbook of Multisensor Data Fusion, chapter 13, 2nd edn., ed. by M. Liggins, D. Hall, J. Llinas (CRC Press, 2009), pp. 299–318

  5. J.L. Graham, A new synthetic dataset for evaluating soft and hard fusion algorithms, in Proceedings of the SPIE Defense, Security, and Sensing Symposium: Defense Transformation and Net-Centric Systems 2011 (2011), pp. 25–29

  6. D.R. Schlegel, S.C. Shapiro, Visually interacting with a knowledge base using frames, logic, and propositional graphs, in Graph Structures for Knowledge Representation and Reasoning, vol. 7205, Lecture Notes in Artificial Intelligence, ed. by M. Croitoru, S. Rudolph, N. Wilson, J. Howse, O. Corby (Springer, Berlin, 2012), pp. 188–207

  7. S.C. Shapiro, Belief spaces as sets of propositions. J. Exp. Theor. Artif. Intell. (JETAI) 5(2–3), 225–235 (1993)

  8. S.C. Shapiro, W.J. Rapaport, The SNePS family. Comput. Math. Appl. 23(2–5), 243–275 (1992). Reprinted in [44, pp. 243–275]

  9. S.C. Shapiro, Symmetric relations, intensional individuals, and variable binding. Proc. IEEE 74(10), 1354–1363 (1986)

  10. A.S. Maida, S.C. Shapiro, Intensional concepts in propositional semantic networks. Cogn. Sci. 6(4), 291–330 (1982). Reprinted in [43, pp. 170–189]

  11. S.C. Shapiro, Cables, paths and “subconscious” reasoning in propositional semantic networks, in Principles of Semantic Networks: Explorations in the Representation of Knowledge, ed. by J. Sowa (Morgan Kaufmann, Los Altos, CA, 1991), pp. 137–156

  12. A.N. Steinberg, G. Rogova, Situation and context in data fusion and natural language understanding, in Proceedings of the 11th International Conference on Information Fusion (Fusion 2008) (IEEE, 2008), pp. 1–8

  13. S.C. Shapiro, An introduction to SNePS 3, in Conceptual Structures: Logical, Linguistic, and Computational Issues, vol. 1867, Lecture Notes in Artificial Intelligence, ed. by B. Ganter, G.W. Mineau (Springer, Berlin, 2000), pp. 510–524

  14. N.A. Bradley, M.D. Dunlop, Toward a multidisciplinary model of context to support context-aware computing. Hum. Comput. Interact. 20(4), 403–446 (2005)

  15. D. Godden, A. Baddeley, Context-dependent memory in two natural environments: on land and underwater. Br. J. Psychol. 66, 325–332 (1975)

  16. S. Smith, E. Vela, Environmental context-dependent memory. Psychon. Bull. Rev. 8(2), 203–220 (2001)

  17. S. Smith, Remembering in and out of context. J. Exp. Psychol. Hum. Learn. Memory 5(5), 460–471 (1979)

  18. A.K. Dey, Understanding and using context. Pers. Ubiquit. Comput. 5(1), 4–7 (2001)

  19. G. Chen, D. Kotz, A survey of context-aware mobile computing research. Technical Report TR2000-381, Department of Computer Science, Dartmouth College, Hanover, NH, November 2000

  20. J. McCarthy, Generality in artificial intelligence. Commun. ACM 30(12), 1030–1035 (1987)

  21. M. Benerecetti, P. Bouquet, C. Ghidini, Contextual reasoning distilled. JETAI 12(3), 279–305 (2000)

  22. P. Bouquet, C. Ghidini, F. Giunchiglia, E. Blanzieri, Theories and uses of context in knowledge representation and reasoning. J. Pragmat. 35(3), 403–446 (2003)

  23. S. Buvač, Quantificational logic of context, in Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), Menlo Park, CA (AAAI Press, 1996), pp. 600–606

  24. P. Brézillon, Context in artificial intelligence: I. A survey of the literature. Comput. Artif. Intell. 18(4), 321–340 (1999)

  25. P. Brézillon, Context in problem solving. Knowl. Eng. Rev. 14(1), 47–80 (1999)

  26. P. Brézillon, Context in artificial intelligence: II. Key elements of contexts. Comput. Artif. Intell. 18(5), 425–446 (1999)

  27. D. Lenat, The dimensions of context-space. Technical report, Cycorp, October 1998

  28. R.P. Arritt, R.M. Turner, Situation assessment for autonomous underwater vehicles using a priori contextual knowledge, in Proceedings of the Thirteenth International Symposium on Unmanned Untethered Submersible Technology (UUST) (2003)

  29. J.R. Anderson, Human associative memory, in How Can the Human Mind Occur in the Physical Universe? (Oxford University Press, New York, 2007), pp. 91–134

  30. A.M. Collins, E.F. Loftus, A spreading activation theory of semantic processing. Psychol. Rev. 82(6), 407–428 (1975)

  31. F. Crestani, Application of spreading activation techniques in information retrieval. Artif. Intell. Rev. 11(6), 453–482 (1997)

  32. S.L. Reed, Texai (2010). http://sourceforge.net/projects/texai/

  33. J.R. Anderson, Cognitive architecture, in How Can the Human Mind Occur in the Physical Universe? (Oxford University Press, New York, 2007), pp. 3–43

  34. M.B. Howes, Long-term memory: ongoing research, in Human Memory: Structures and Images (SAGE Publications, Thousand Oaks, 2007)

  35. D. Bothell, ACT-R 6.0 Reference Manual (2010)

  36. D.B. Lenat, Cyc: a large-scale investment in knowledge infrastructure. Commun. ACM 38(11), 33–38 (1995)

  37. National Geospatial-Intelligence Agency, NGA GEOnet Names Server (2014). http://earth-info.nga.mil/gns/html/

  38. C.J. van Rijsbergen, Information Retrieval, 2nd edn. (Butterworths, London, 1979)

  39. M. Kandefer, S.C. Shapiro, An F-measure for context-based information retrieval, in Commonsense 2009: Proceedings of the Ninth International Symposium on Logical Formalizations of Commonsense Reasoning, ed. by G. Lakemeyer, L. Morgenstern, M.-A. Williams (The Fields Institute, Toronto, Canada, 2009), pp. 79–84

  40. S.C. Shapiro, Relevance logic in computer science, in Entailment, vol. II, ed. by A.R. Anderson, N.D. Belnap Jr., M. Dunn (Princeton University Press, Princeton, 1992), pp. 553–563

  41. M. Kandefer, S.C. Shapiro, Evaluating spreading activation for soft information fusion, in Proceedings of the 14th International Conference on Information Fusion (Fusion 2011) (ISIF, 2011), pp. 498–505

  42. M.W. Kandefer, S.C. Shapiro, A categorization of contextual constraints, in Biologically Inspired Cognitive Architectures: Papers from the 2008 AAAI Fall Symposium, ed. by A. Samsonovich (AAAI Press, Menlo Park, CA, 2008), pp. 88–93

  43. R.J. Brachman, H.J. Levesque (eds.), Readings in Knowledge Representation (Morgan Kaufmann, San Mateo, 1985)

  44. F. Lehmann (ed.), Semantic Networks in Artificial Intelligence (Pergamon Press, Oxford, 1992)


Acknowledgements

This work was supported in part by the Office of Naval Research under contract N00173-08-C-4004 and by a Multidisciplinary University Research Initiative (MURI) grant (Number W911NF-09-1-0392) for "Unified Research on Network-based Hard/Soft Information Fusion," issued by the US Army Research Office (ARO) under the program management of Dr. John Lavery. The work described here was done while both authors were in the Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY. Parts of this chapter were taken from [39, 41, 42].

Author information


Corresponding author

Correspondence to Stuart C. Shapiro.


Copyright information

© 2016 Springer International Publishing Switzerland (outside the USA)

About this chapter

Cite this chapter

Kandefer, M., Shapiro, S.C. (2016). Context Relevance for Text Analysis and Enhancement for Soft Information Fusion. In: Snidaro, L., García, J., Llinas, J., Blasch, E. (eds) Context-Enhanced Information Fusion. Advances in Computer Vision and Pattern Recognition. Springer, Cham. https://doi.org/10.1007/978-3-319-28971-7_14

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-28971-7_14


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28969-4

  • Online ISBN: 978-3-319-28971-7

  • eBook Packages: Computer Science, Computer Science (R0)
