Pattern Recognition Letters

Volume 31, Issue 12, 1 September 2010, Pages 1627-1632
DynTex: A comprehensive database of dynamic textures

https://doi.org/10.1016/j.patrec.2010.05.009

Abstract

We present the DynTex database of high-quality dynamic texture videos. It consists of over 650 sequences of dynamic textures, mostly in everyday surroundings. Additionally, we propose a scheme for the manual annotation of the sequences based on a detailed analysis of the physical processes underlying the dynamic textures. Using this scheme we describe the texture sequences in terms of both visual structure and semantic content. The videos and annotations are made publicly available for scientific research.

Introduction

In recent years we have witnessed a rapid growth of interest in the study of dynamic texture (DT). This new field of research offers an extension of the study of static texture into the temporal domain. Dynamic texture phenomena can be observed all around us in daily life, e.g. moving trees in the wind, rippling water, fluttering sails and flags, or moving crowds of people in a shopping street.

Just like static textures, dynamic textures can be studied in a wide variety of ways. Research on texture synthesis aims to model dynamic textures for realistic rendering (e.g. Szummer and Picard, 1996, Saisan et al., 2001 or Filip et al., 2006). A related area is texture analysis, where the problem is to characterize the visual properties of the texture, e.g. its regularity or granularity; this can also be useful for texture retrieval. For texture detection the goal is to detect when a certain type of dynamic texture appears in a video sequence, e.g. to detect fire (Dedeoglu et al., 2005). Another interesting problem is to distinguish dynamic texture from camera motion (Amiaz et al., 2007). Texture segmentation concerns the accurate localization of textures in space and, possibly, in time (e.g. Doretto et al., 2003, Chan and Vasconcelos, 2005). For texture recognition the aim is to recognize the type of dynamic texture, possibly from among several others (Péteri and Chetverikov, 2005, Zhao and Pietikainen, 2007); a review of DT description and recognition is presented in (Chetverikov and Péteri, 2005). In most cases, works on this topic consider well-segmented sequences of dynamic textures. A final topic, irregularity detection, has, to our knowledge, not been investigated so far. It aims at detecting irregular motions in sequences consisting of pure dynamic textures (for instance the detection of a piece of wood drifting on a river surface), and can be seen as the counterpart of defect detection for static textures (see Chetverikov and Hanbury, 2002).

With regard to the task of setting up test sets for performance evaluation, each of these areas of study has its own specific requirements. For instance, testing the performance of texture synthesis methods is best done on close-up sequences of the texture, whereas texture detection requires dynamic texture shown in context. For texture segmentation it may be beneficial to offer sequences in which several dynamic texture phenomena are present in the same sequence.

In this paper we present the DynTex database. It consists of over 650 high-quality sequences of dynamic texture. It aims to serve as a standard database for dynamic texture research and to accommodate the needs of the different research areas mentioned above. So far no other databases suitable for this purpose are available. One pioneering database worth mentioning is a dataset compiled at MIT (Szummer, 1995). It is composed of around 25 black-and-white segmented sequences of dimension 170 × 115 × 120. However, this collection has a number of drawbacks: the video dimensions are small (especially in the temporal direction); there is only a single occurrence per class, and not enough classes are available for practical classification purposes; finally, some of the sequences show undesirable camera motion. Other datasets of dynamic textures used in research papers have been shot by the authors themselves and not publicly released, which prevents their use for comparison in other research works.

The need for a standard database is clearly demonstrated by the interest of the research community: at the time of writing of this paper, DynTex already has over 300 registered researchers (about half of whom are PhD candidates) who use the database for their studies.

The structure of the paper is as follows. In Section 2 we briefly review our understanding of dynamic textures as extensions of static textures to the temporal domain, and discuss some implications for the compilation of the DynTex database. In Section 3 we describe the acquisition protocol of the texture sequences and the video formats in which the sequences are made available. In Section 4 we propose a video annotation scheme based on the physical processes underlying the dynamic textures. This scheme has been used for the manual annotation of the full collection of DynTex sequences. In Section 5 we discuss the two primary means by which the annotations are made available. We also discuss how users of the database can create their own test subset selections for their specific research purposes. Finally, in Section 6 we present our conclusions and discuss a number of prospects for future extensions.

Section snippets

Dynamic textures

Giving a proper definition of texture is a notoriously difficult problem. One reason is that texture has so many aspects influencing its perception as a coherent pattern. To some extent it may possess a predictable, regular organization but it will also often display a strong stochastic component. This applies to qualities such as size, orientation (or lack thereof in the case of isotropy), shape and the layout of the constituent parts. Each of these and their variations may exhibit themselves

Video acquisition protocol

As much as practically feasible, the shots of dynamic textures were taken in the surroundings and circumstances in which they occur in daily life. Generally we have aimed to supply both a close-up shot of the texture, and a shot of the texture in its natural context.

The dynamic texture sequences have been acquired using a SONY 3 CCD camera mounted on a tripod. All sequences are recorded in PAL format (720 × 576), 25 fps, interlaced. Before each shot the white balance was calibrated by means of a
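Since the sequences are interlaced PAL, each 720 × 576 frame contains two temporally distinct fields; motion-sensitive analyses often separate these first. A minimal sketch of field separation in plain Python (frames represented as lists of rows; a real pipeline would use a video library such as OpenCV or FFmpeg for decoding):

```python
def split_fields(frame):
    """Split an interlaced frame (a list of rows) into its two fields.

    PAL interlacing stores two fields per frame: the even rows form one
    field and the odd rows the other. Separating them yields 50 fields
    per second at half vertical resolution, avoiding the combing
    artifacts that appear when moving content is treated as progressive.
    """
    top = frame[0::2]     # even rows
    bottom = frame[1::2]  # odd rows
    return top, bottom

# A PAL frame has 576 rows of 720 pixels; each field has 288 rows.
frame = [[0] * 720 for _ in range(576)]
top, bottom = split_fields(frame)
```

Which field is temporally first depends on the encoder's field order, so this should be verified per sequence rather than assumed.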

Dynamic texture annotation

We have manually annotated the DynTex database by means of a description scheme based on the physical texture processes occurring in the sequences. The descriptors are divided into three main categories: content management descriptors, structural descriptors and semantic descriptors. The annotations serve at least three purposes: (i) they can assist users in retrieving particular dynamic textures; for example, when looking for trees in heavy wind, we may filter by selecting oscillating motions
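The three descriptor categories can be pictured as one record per sequence. The sketch below is illustrative only: all field names and values are hypothetical stand-ins, not the actual DynTex descriptors.

```python
from dataclasses import dataclass

@dataclass
class DynTexAnnotation:
    """Illustrative annotation record mirroring the three categories.

    Every field name here is a hypothetical example, not the real scheme.
    """
    # Content management descriptors: identification and provenance.
    sequence_id: str
    # Structural descriptors: the visual structure of the texture.
    motion_type: str          # e.g. "oscillating", "flowing", "turbulent"
    # Semantic descriptors: what the texture depicts.
    subject: str              # e.g. "tree", "water surface", "flag"
    in_context: bool = False  # close-up shot vs. texture in its context

ann = DynTexAnnotation("644a510", motion_type="oscillating", subject="tree")
```

A query such as "trees in heavy wind" then reduces to filtering records on the structural and semantic fields.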

Querying and browsing DynTex

The DynTex database is located at the following URL: http://projects.cwi.nl/dyntex/. After registration, the sequences can be downloaded and used for research purposes under a Creative Commons Attribution-NonCommercial-ShareAlike license.

A Microsoft Access database with a customized interface is made available for quick browsing of the data set and convenient selection of dynamic textures of interest. A screenshot of the browsing interface is shown in Fig. 3.
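Creating a custom test subset then amounts to filtering the annotation records on the descriptors of interest. A hypothetical sketch (the record layout, ids, and values are illustrative, not the actual database schema):

```python
# Hypothetical annotation records: (sequence_id, motion_type, subject).
ANNOTATIONS = [
    ("644a510", "oscillating", "tree"),
    ("645c610", "flowing", "water"),
    ("649h420", "oscillating", "flag"),
]

def select_subset(records, **criteria):
    """Return the ids of all records matching every given criterion."""
    fields = ("sequence_id", "motion_type", "subject")
    subset = []
    for rec in records:
        named = dict(zip(fields, rec))
        if all(named.get(k) == v for k, v in criteria.items()):
            subset.append(named["sequence_id"])
    return subset

oscillating = select_subset(ANNOTATIONS, motion_type="oscillating")
# oscillating == ["644a510", "649h420"]
```

The same filtering logic applies whether the annotations come from the Access database or from an exported flat file.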

Conclusion and future work

Motivated by the increasing interest of the computer vision community in the study of dynamic textures and the lack of freely available databases, we present the DynTex dynamic texture database. With more than 650 sequences of dynamic textures, shot in different conditions, DynTex targets many computer vision applications (e.g. recognition of dynamic textures, spatio-temporal segmentation, synthesis). It provides a set of annotated sequences designed to serve for testing and comparing methods.

Acknowledgments

The authors would like to thank Dr. Eric Pauwels for his help and the Centrum Wiskunde & Informatica (CWI), Amsterdam, the Netherlands, for hosting the DynTex database.

References (14)

  • Amiaz, T., Fazekas, S., Chetverikov, D., Kiryati, N., 2007. Detecting regions of dynamic texture. In: 1st Int. Conf. on...
  • Chan, A.B., Vasconcelos, N., 2005. Mixtures of dynamic textures. In: Proc. Int. Conf. Computer Vision, pp. I:...
  • D. Chetverikov et al., 2002. Finding defects in texture using regularity and local orientation. Pattern Recognit.
  • D. Chetverikov et al. A brief survey of dynamic texture description and recognition.
  • Dedeoglu, Y., Toreyin, B.U., Gudukbay, U., Cetin, A.E., 2005. Real-time fire and flame detection in video....
  • Doretto, G., Cremers, D., Favaro, P., Soatto, S., 2003. Dynamic texture segmentation. In: Proc. Int. Conf. Computer...
  • Filip, J., Haindl, M., Chetverikov, D., 2006. Fast synthesis of dynamic colour textures. In: Proc. 18th IAPR Int. Conf....
