8th International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS)

Research Article

On event-based motion detection and integration

  • @INPROCEEDINGS{10.4108/icst.bict.2014.257904,
        author={Tobias Brosch and Stephan Tschechne and Roman Sailer and Nora von Egloffstein and Luma Issa Abdul-Kreem and Heiko Neumann},
        title={On event-based motion detection and integration},
        proceedings={8th International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS)},
        publisher={ICST},
        proceedings_a={BICT},
        year={2015},
        month={2},
        keywords={neuromorphic hardware; bio-inspired modelling; event vision; motion estimation},
        doi={10.4108/icst.bict.2014.257904}
    }
    
Tobias Brosch¹, Stephan Tschechne¹,*, Roman Sailer¹, Nora von Egloffstein¹, Luma Issa Abdul-Kreem¹, Heiko Neumann¹
  • 1: Institute of Neural Information Processing, Ulm University
*Contact email: stephan.tschechne@uni-ulm.de

Abstract

Event-based vision sensors sample individual pixels at a much higher temporal resolution than frame-based cameras and represent the visual input in their receptive fields in a way that is temporally independent of neighboring pixels. The information available at the pixel level for subsequent processing stages is thus reduced to representations of changes in the local intensity function. In this paper we present theoretical implications of this condition with respect to the structure of light fields for stationary observers and locally moving contrasts in the luminance function. On this basis we derive several constraints on the kind of information that can be extracted from event-based sensory acquisition using the address-event representation (AER) principle. We discuss how subsequent visual mechanisms can build upon such representations to integrate motion and static shape information. On this foundation we present approaches to motion detection and integration in a neurally inspired model that demonstrates the interaction of early and intermediate stages of visual processing. Results replicating experimental findings demonstrate the capabilities of the initial and subsequent stages of the model in the domain of motion processing.
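To make the AER principle referred to above concrete, the following minimal sketch (not the authors' implementation; the threshold value, the synthetic stimulus, and the least-squares recovery of normal speed are illustrative assumptions) emulates DVS-style event generation from an intensity sequence and shows how motion can be read out of the resulting (x, y, timestamp, polarity) stream:

    import numpy as np

    def generate_events(frames, times, threshold=0.15):
        """Emit an event whenever the log intensity at a pixel has changed
        by more than `threshold` since the last event at that pixel."""
        log_ref = np.log(frames[0] + 1e-6)   # per-pixel reference log intensity
        events = []                          # AER tuples (x, y, t, polarity)
        for frame, t in zip(frames[1:], times[1:]):
            log_new = np.log(frame + 1e-6)
            diff = log_new - log_ref
            ys, xs = np.nonzero(np.abs(diff) >= threshold)
            for x, y in zip(xs, ys):
                events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
                log_ref[y, x] = log_new[y, x]   # reset reference at firing pixels
        return events

    # Synthetic stimulus: a vertical bright edge drifting rightward, 1 px/step.
    H, W, T = 16, 32, 20
    frames = np.full((T, H, W), 0.2)
    for k in range(T):
        frames[k, :, :4 + k] = 0.8           # bright region grows with k
    events = generate_events(frames, times=np.arange(T) * 1e-3)

    # Each pixel fires independently, so motion must be recovered from the
    # spatio-temporal structure of the event stream: along one row, event
    # timestamps increase linearly with x, and the slope of x over t gives
    # the speed of the moving contrast.
    row = [(x, t) for x, y, t, p in events if p > 0 and y == H // 2]
    xs = np.array([x for x, _ in row], dtype=float)
    ts = np.array([t for _, t in row])
    speed = np.polyfit(ts, xs, 1)[0]         # px per second, least-squares fit
    print(f"estimated normal speed: {speed:.0f} px/s (ground truth: 1000 px/s)")

Note that such a local readout only recovers the motion component normal to the contrast, a standard limitation of local motion measurement that motivates the subsequent integration stages described in the abstract.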