Sequential inference with reliable observations: Learning to construct force-dynamic models

https://doi.org/10.1016/j.artint.2006.08.003

Abstract

We present a trainable sequential-inference technique for processes with large state and observation spaces and relational structure. We apply our technique to the problem of force-dynamic state inference from video, which is a critical component of the LEONARD [J.M. Siskind, Grounding lexical semantics of verbs in visual perception using force dynamics and event logic, Journal of Artificial Intelligence Research 15 (2001) 31–90] visual-event recognition system. LEONARD uses event definitions that are grounded in force-dynamic primitives—making robust and efficient force-dynamic inference critical to good performance. Our sequential-inference method assumes “reliable observations”, i.e., that each process state (e.g., force-dynamic state) persists long enough to be reliably inferred from the observations (e.g., video frames) it generates. We introduce the idea of a “state-inference function” (from observation sequences to underlying hidden states) for representing knowledge about a process and develop an efficient sequential-inference algorithm, utilizing this function, that is correct for processes that generate reliable observations consistent with the state-inference function. We describe a representation for state-inference functions in relational domains and give a corresponding supervised learning algorithm. Our experiments in force-dynamic state inference show that our technique provides significantly improved accuracy and speed relative to a variety of recent, hand-coded, non-trainable systems, and a trainable system based on probabilistic modeling.
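To make the "reliable observations" assumption concrete, the following is a minimal, hypothetical Python sketch (not the paper's implementation): if each hidden state persists long enough that some window of consecutive observations unambiguously determines it, then a state-inference function mapping observation windows to states (or to "don't know" near transitions) suffices to recover the state sequence by scanning the stream. The `unanimous` function below is an illustrative stand-in for a learned state-inference function.

```python
def infer_states(observations, state_fn, window=3):
    """Sequentially infer the hidden state sequence from observations.

    Assumes "reliable observations": each hidden state persists long
    enough that some window of consecutive observations determines it.
    `state_fn` maps a tuple of consecutive observations to a state, or
    to None when the window is ambiguous (e.g., spans a transition).
    """
    obs = list(observations)
    states = []
    for i in range(len(obs) - window + 1):
        s = state_fn(tuple(obs[i:i + window]))
        # Emit a state only when it is determined and differs from the
        # most recently inferred state.
        if s is not None and (not states or states[-1] != s):
            states.append(s)
    return states

# Toy state-inference function: a window determines a state only when
# all its observations agree (windows straddling a transition yield None).
def unanimous(window):
    return window[0] if all(o == window[0] for o in window) else None

print(infer_states("aaaabbbbaaa", unanimous, window=3))  # ['a', 'b', 'a']
```

Note the single linear scan: correctness rests entirely on the persistence assumption, with no probabilistic smoothing over competing state hypotheses as in an HMM-style approach.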

Keywords

Sequence learning
Relational learning
Event recognition
Temporal learning
Inductive logic programming
