Abstract
We present a multi-modal video analysis framework for life-logging research. We reference domain-specific approaches and alternative software solutions, then briefly outline the concept and realization of our OS X-based software for experimental research on the segmentation of continuous video using sensor context. The framework facilitates visual inspection, basic data annotation, and the development of sensor fusion-based machine learning algorithms.
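The abstract does not specify how sensor context is used to segment the video. As a minimal, hypothetical illustration of the general idea (not the authors' method), the Python sketch below proposes segment boundaries for a continuous recording by detecting changes in a synchronized accelerometer stream; the function name, window length, and thresholding scheme are all assumptions.

```python
import numpy as np

def propose_boundaries(accel, fps_ratio, window=50, threshold=2.0):
    """Propose video segment boundaries from a synchronized
    accelerometer stream (hypothetical illustration only).

    accel:     (N, 3) array of accelerometer samples
    fps_ratio: video frames per sensor sample, for index conversion
    window:    sliding-window length in sensor samples
    threshold: z-score above which a context change is declared
    """
    # Magnitude of acceleration as a simple 1-D context feature.
    mag = np.linalg.norm(accel, axis=1)

    # Sliding-window mean; a jump between adjacent positions
    # suggests a change in the wearer's activity context.
    kernel = np.ones(window) / window
    smooth = np.convolve(mag, kernel, mode="same")
    diff = np.abs(np.diff(smooth))

    # Normalize and threshold to obtain candidate change points.
    z = (diff - diff.mean()) / (diff.std() + 1e-9)
    change_points = np.flatnonzero(z > threshold)

    # Map sensor-sample indices to video frame indices.
    return (change_points * fps_ratio).astype(int)
```

In a framework such as the one described, candidate boundaries of this kind would presumably then be reviewed, corrected, and annotated by the researcher through the visual inspection tools.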
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lensing, T., Dickmann, L., Beauregard, S. (2009). A Framework for Review, Annotation, and Classification of Continuous Video in Context. In: Butz, A., Fisher, B., Christie, M., Krüger, A., Olivier, P., Therón, R. (eds) Smart Graphics. SG 2009. Lecture Notes in Computer Science, vol 5531. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02115-2_28
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02114-5
Online ISBN: 978-3-642-02115-2
eBook Packages: Computer Science (R0)