Motion synthesis through 1D affine matching

  • Theoretical Advances
  • Published in Pattern Analysis and Applications

Abstract

We present a study of a data-driven motion synthesis approach based on a 1D affine image-matching equation. We start by deriving the relevant properties of the exact matching operator, such as the existence of a singular point. Next, we approximate this operator by the Green’s function of a second-order differential equation, and find that the approximation leads to a more compelling motion impression, owing to the blur it incorporates. We then show that, by judicious choice of the matching parameters, the 1D affine Green’s filter allows the simulation of a broad class of effects, such as zoom-in and zoom-out, and of complex nonrigid motions such as that of a pulsating heart.

References

  1. Shinya M, Fournier A (1992) Stochastic motion – motion under the influence of wind. Comput Graph Forum 11(3):119–128

  2. Oziem D, Campbell N, Dalton C, Gibson D, Thomas B (2004) Combining sampling and autoregression for motion synthesis. Proc. of the Computer Graphics International Conference, pp. 510–513

  3. Foster N, Metaxas D (1996) Realistic animation of liquids. CVGIP 58(5):471–483

  4. Freeman W, Adelson E (1991) The design and use of steerable filters. IEEE Trans PAMI 13(9):891–906

  5. Freeman W, Adelson E, Heeger D (1991) Motion without movement. Comput Graph 25(4):27–30

  6. Brostow GJ, Essa I (2001) Image-based motion blur for stop motion animation. Proc. of the 28th annual conference on Computer Graphics and Interactive Techniques, pp. 561–566

  7. Glassner A (1999) An open and shut case. IEEE Comput Graph Appl 19:82–92

  8. Potmesil M, Chakravarty I (1983) Modeling motion blur in computer generated images. Comput Graph 17:389–399

  9. Max NL, Lerner DM (1985) A two-and-a-half-D motion blur algorithm. Comput Graph 19:85–93

  10. Horn B, Schunck B (1981) Determining optical flow. Artif Intell 17:185–203

  11. Lucas BD, Kanade T (1981) An iterative image registration technique with an application to stereo vision. In: Proc Seventh IJCAI, Vancouver, pp. 674–679

  12. Black M, Anandan P (1996) The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Comput Vis Image Underst 63(1):75–104

  13. Torreão JRA (2001) A Green’s function approach to shape from shading. Pattern Recognit 34:2367–2382

  14. Torreão JRA (2003) Geometric-photometric approach to monocular shape estimation. Image Vis Comput 21:1045–1061

  15. Rav-Acha A, Peleg S (2000) Restoration of multiple images with motion blur in different directions. Workshop on applications of computer vision, Palm Springs, pp. 22–28

  16. Martinsen T, Quintero FM, Skarda D (1996) Plug-in to the GIMP (Open Source Code version 1.22)

  17. GIMP (2001) The GIMP Team, web site available at http://www.gimp.org

  18. Zill DG, Cullen MR (1993) Differential equations with boundary-value problems. PWS Publishing Company

  19. Evans LC (1997) Partial differential equations. American Mathematical Society

  20. Verri A, Girosi F, Torre V (1989) Mathematical properties of the 2D motion field: from singular points to motion parameters. J Opt Soc Am A 6(5):698–712

  21. Corpetti T, Mémin E, Pérez P (2003) Extraction of singular points from dense motion fields: an analytic approach. J Math Imaging Vis 19(3):175–198

  22. Ford R, Strickland R (1995) Representing and visualizing fluid flow images and velocimetry data by nonlinear dynamical systems. CVGIP: Graph Models Image Process 57(6):462–482

  23. Nogawa H, Nakajima Y, Sato Y (1997) Acquisition of symbolic description from flow fields: a new approach based on a fluid model. IEEE Trans Pattern Anal Mach Intell 19(1):58–63

  24. Wohn K, Waxman A (1990) The analytic structure of image flows: deformation and segmentation. Comput Vis Graph Image Process 49(2):127–151

  25. Maurizot M, Bouthemy P, Delyon B, Juditski A, Odobez J (1995) Determination of singular points in 2D deformable flow fields. Proc 2nd IEEE Int Conf Image Process 3:488–491

  26. Foley J, Van Dam A, Feiner S, Hughes J (1990) Computer graphics: principles and practice in C, 2nd Edition, Addison-Wesley systems programming series

  27. Rekleitis IM (1995) Visual motion estimation based on motion blur interpretation. MSc thesis, School of Computer Science, McGill University, Montreal

  28. Rekleitis IM (1996) Steerable filters and cepstral analysis for optical flow calculation from a single blurred image. Vision Interface, pp. 159–166

  29. Rekleitis IM (1996) Optical flow recognition from the power spectrum of a single blurred image. Proc of IEEE International Conference on Image Processing

  30. Ferreira Jr PE, Torreão JRA, Carvalho PCP (2004) Data-based motion simulation through a Green’s function approach. Proc of XVII SIBGRAPI, pp. 193–199

  31. Jähne B, Haußecker H, Geißler P (1999) Handbook of computer vision and applications, vol 2. Academic, London

  32. Ferreira Jr PE, Torreão JRA, Carvalho PCP, Velho L (2005) Video interpolation through Green’s functions of matching equations. Proc of IEEE International Conference on Image Processing

  33. Beylkin G (1993) In: Wavelets: mathematics and applications. CRC Press

  34. Black M (1996) Area-based optical flow: robust affine regression, Software available on-line at http://www.cs.brown.edu/people/black/

Acknowledgments

The research reported here has been partially developed at IMPA’s VISGRAF Laboratory, with the sponsorship of CAPES, and at the Department of Computer Science at UFBA, with the sponsorship of FAPESB. The first author would like to thank Professor Augusto C. P. L. da Costa, and the secretary Dilson Anunciação, for their support of his work at UFBA. J.R.A. Torreão acknowledges a grant from CNPq-Brasil. The authors would also like to thank Professor Michael Black for allowing the use of his affine optical flow code [34].

Author information

Correspondence to Perfilino E. Ferreira Jr.

Appendix: Numerical validation of the experiments

To validate the experiments in Section 4, we have used motion-estimation software based on affine regression [34], kindly provided to us by Professor Michael Black. In it, the 2D affine model is expressed as

$$ \left[\begin{array}{c} \tilde{U} \\ \tilde{V} \end{array}\right] = \left[\begin{array}{c} \tilde{u}_0 \\ -\tilde{v}_0 \end{array}\right] + \left[\begin{array}{cc} \tilde{u}_1 & \tilde{u}_2 \\ -\tilde{v}_1 & \tilde{v}_2 \end{array}\right] \left[\begin{array}{c} x - c_x \\ y - c_y \end{array}\right], $$
(23)

where \((c_x, c_y)^T\) denotes the coordinates of the central image point. Comparing the above with Eq. (16), we find the relations

$$ \left\{\begin{array}{l} u_0 = \tilde{u}_0 - \tilde{u}_1 c_x - \tilde{u}_2 c_y \equiv u_0^{*} \\ u_1 = \tilde{u}_1 \\ u_2 = \tilde{u}_2 \\ v_0 = -\tilde{v}_0 + \tilde{v}_1 c_x - \tilde{v}_2 c_y \equiv v_0^{*} \\ v_1 = -\tilde{v}_1 \\ v_2 = \tilde{v}_2 \end{array}\right. $$
(24)

Taking the above into account, we have employed Michael Black’s program to estimate the affine motion components and compare them with the input parameters of the Green’s filter. In each sequence considered, the input image and a synthesized one have been used for this purpose. Note that, since we have restricted ourselves here to a separable 2D affine model, we expect \(\tilde{u}_2\approx\tilde{v}_1\approx 0\) in all the experiments. Below, we present the validation results only for a subset of the more complex simulated sequences, namely those illustrated in Figs. 12–16.
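For concreteness, the conversion of Eq. (24) can be coded in a few lines. The sketch below is ours, not part of the software of [34]; the variable names simply mirror Eqs. (23) and (24), and the final check reflects the separability expectation \(\tilde{u}_2\approx\tilde{v}_1\approx 0\) stated above.

```python
# A minimal sketch (ours, not the software of [34]): map the parameters
# of Eq. (23), estimated about the image center (cx, cy), to the
# convention of Eq. (16), via the relations of Eq. (24).

def convert_affine_params(ut0, ut1, ut2, vt0, vt1, vt2, cx, cy, tol=1e-2):
    u0 = ut0 - ut1 * cx - ut2 * cy    # u0 = ~u0 - ~u1 cx - ~u2 cy  (= u0*)
    u1 = ut1
    u2 = ut2
    v0 = -vt0 + vt1 * cx - vt2 * cy   # v0 = -~v0 + ~v1 cx - ~v2 cy (= v0*)
    v1 = -vt1
    v2 = vt2
    # For the separable 2D affine model used here, the cross terms
    # should be negligible: ~u2 and ~v1 both close to 0.
    if abs(ut2) > tol or abs(vt1) > tol:
        print("warning: cross terms not negligible; separability assumption violated")
    return u0, u1, u2, v0, v1, v2
```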

Zoom-out. Table 1 presents the optical flow parameters yielded by [34], along with those used as input to the Green’s filter. The third frame in Fig. 13 has been used.

Table 1 Zoom-out (Fig. 13)

We see that a very good correspondence is obtained in this case, for all the parameters.

Zoom-in. Again, the estimated and input parameters are very consistent, as shown by Table 2. The second frame in Fig. 14 has been used.

Table 2 Zoom-in (Fig. 14)

Funny eye. The second frame in Fig. 16 has been used. Table 3 shows the estimated and input parameters. Again, the correspondence is fairly good.

Table 3 Funny eye (Fig. 16)

Next, we discuss two examples where validation through Michael Black’s program has not been possible: the pulsating heart and the deforming ball simulations.

Pulsating heart. Table 4 shows the input parameters and those estimated from the third frame of Fig. 12. The data are inconsistent.

Table 4 Pulsating heart (Fig. 12)

A similar situation occurs with the deforming ball, as shown below:

Deforming ball. Table 5 shows the input parameters and those estimated from the second frame of Fig. 15. Again, the data are not consistent, although the errors are somewhat smaller than in the pulsating heart experiment.

Table 5 Deforming ball (Fig. 15)

We conjecture that the problem in the above simulations may arise from the fact that, in both cases, the input images consist of a central object superposed over a dark background, which could induce errors in the estimation process. To check this hypothesis, we performed an additional test based on an image where such a clear-cut figure/background segmentation is not present. For this purpose, we chose the input image to the zoom experiments, applying to it a Green’s filter with parameters \((u_0, u_1, x_U) = (2, -0.031, 64)\), in order to simulate only horizontal motion, as in the deforming ball and pulsating heart examples. The generated pair appears in Fig. 17.

Fig. 17 Additional test: a original image; b G-filtered image, for parameters \((u_0, u_1, x_U) = (2, -0.031, 64)\)
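As a side note, if the horizontal component of the affine flow of Eq. (16) is written as \(U(x) = u_0 + u_1 x\) (the equation is not reproduced in this appendix, so this form is our assumption), the quoted parameters place the point of vanishing flow at \(x = -u_0/u_1 = 2/0.031 \approx 64.5\), close to the quoted \(x_U = 64\). The sketch below, which is ours and implements plain affine warping rather than the paper’s Green’s filter (whose blurring kernel is likewise not reproduced here), illustrates the motion field these parameters encode:

```python
# A minimal sketch (ours): warp a grayscale image row-wise by the 1D
# affine flow U(x) = u0 + u1*x. This is plain affine warping, NOT the
# paper's Green's filter, which additionally incorporates blur.
import numpy as np

def warp_horizontal_affine(img, u0=2.0, u1=-0.031):
    # The flow vanishes near x = -u0/u1 (about 64.5 for these defaults),
    # so columns near that point stay put while the rest shift.
    w = img.shape[1]
    x = np.arange(w, dtype=float)
    src = np.clip(x - (u0 + u1 * x), 0.0, w - 1.0)  # backward mapping: sample at x - U(x)
    x0 = np.floor(src).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    a = src - x0
    return ((1.0 - a) * img[:, x0] + a * img[:, x1]).astype(img.dtype)
```

Applying warp_horizontal_affine to the original image yields the displaced frame of such a pair, minus the blur that the Green’s filter would add.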

Table 6, below, shows the estimated and input parameters, which, in this case, prove fairly consistent.

Table 6 Additional test (Fig. 17)

From the foregoing discussion, we may conclude that, except in the case of images with the characteristics of Figs. 12 and 15—i.e., with a sharp figure/background separation—the Green’s function simulations can be numerically validated by the motion estimation algorithm of [34].

About this article

Cite this article

Ferreira, P.E., Torreão, J.R.A., Carvalho, P.C.P. et al. Motion synthesis through 1D affine matching. Pattern Anal Applic 11, 45–58 (2008). https://doi.org/10.1007/s10044-007-0078-6
