
Synthetic data generation for classification via uni-modal cluster interpolation


Abstract

The observations used to classify data from real systems often vary as a result of changing operating conditions (e.g., velocity, load, or temperature). Hence, to create accurate classification algorithms for these systems, observations from a large number of operating conditions must be used in algorithm training. This can be an arduous, expensive, and even dangerous task. Treating an operating condition as an inherently metric continuous variable (e.g., velocity, load, or temperature) and recognizing that the observations at a single operating condition can be viewed as a data cluster enables the formulation of interpolation techniques. This paper presents a method that uses data clusters at operating conditions where data has been collected to estimate data clusters at other operating conditions, thereby enabling classification. The mathematical tools that are key to the proposed data cluster interpolation method are Catmull–Rom splines, the Schur decomposition, the singular value decomposition, and a special matrix interpolation function. The ability of this method to accurately estimate the distribution, orientation, and location of data clusters in the feature space is first shown through three benchmark problems involving 2D feature vectors. The method is then applied to empirical data from vibration-based terrain classification for an autonomous robot, using a feature vector of dimension 300, to show that the estimated data clusters are more effective for classification than known data clusters corresponding to different operating conditions. Ultimately, it is concluded that although collecting real data is ideal, estimated data clusters can improve classification accuracy when collecting additional data is inconvenient or difficult.
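To make the general idea concrete, the following is a minimal sketch, not the paper's exact algorithm: clusters observed at known operating conditions are summarized by a mean and covariance, and a cluster at an unseen condition is estimated by interpolating those statistics. Means are interpolated with a uniform Catmull–Rom spline; for the covariances, a log-Euclidean blend is used here as a simple stand-in for the paper's Schur/SVD-based matrix interpolation function. The helper names (`catmull_rom`, `interpolate_cluster`) are illustrative, and NumPy/SciPy are assumed.

```python
# A minimal sketch of the idea (not the paper's exact algorithm): estimate
# the data cluster at an unseen operating condition by interpolating the
# means and covariances of clusters collected at known conditions.
import numpy as np
from scipy.linalg import expm, logm

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom spline between knots p1 and p2, t in [0, 1]."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

def interpolate_cluster(means, covs, t):
    """Estimate (mean, covariance) between the middle two of four knots.

    means, covs: cluster statistics at four increasing operating
    conditions; t in [0, 1] locates the unseen condition between the
    second and third knots.
    """
    mu = catmull_rom(*means, t)
    # Log-Euclidean blend of the two bracketing covariances; this keeps
    # the estimate symmetric positive definite, and stands in for the
    # paper's Schur/SVD-based matrix interpolation function.
    cov = expm((1 - t) * logm(covs[1]) + t * logm(covs[2]))
    return mu, cov
```

A synthetic training set at the unseen condition could then be drawn from, e.g., `np.random.multivariate_normal(mu, cov)`.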




Acknowledgments

This work was prepared through collaborative participation in the Robotics Consortium, which is sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD 19-01-2-0012. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. Funding for this research was also provided by the National Science Foundation, Project EEC-0540865.

Author information

Corresponding author

Correspondence to Eric J. Coyle.

Appendix: Proof of Theorem 1

Theorem 1 of Sect. 2.2.2 presented the canonical form (20). It is well known that the eigenvalues of a real orthogonal matrix have unit magnitude, so the first part of the theorem, concerning the form of the eigenvalues, is clear. Before proving the remainder of the theorem, two simple lemmas on complex vectors are presented. The first concerns an effect that scalar multiplication can have on the dot product of the real and imaginary parts of a complex vector.

Lemma 1

Given a vector \(\mathbf{w} \in \mathcal{C}^n\), there exists a scalar \(z \in \mathcal{C}\) such that the real and imaginary parts of \(z\mathbf{w}\) are perpendicular to each other.

Proof

For notational convenience, write \(z=a+jb\) and \(\mathbf{w} = \mathbf{u} + j\mathbf{v}\), where \(a, b \in \mathcal{R}\) and \(\mathbf{u}, \mathbf{v} \in \mathcal{R}^n\). Then \(z\mathbf{w}=a\mathbf{u}-b\mathbf{v}+j(b\mathbf{u}+a\mathbf{v})\), and the dot product of the real and imaginary parts of \(z\mathbf{w}\) is

$$
(a\mathbf{u}-b\mathbf{v}) \cdot (b\mathbf{u}+a\mathbf{v}) = a^2 \, \mathbf{u} \cdot \mathbf{v} + ab \, (\Vert \mathbf{u}\Vert^2-\Vert \mathbf{v}\Vert^2) - b^2 \, \mathbf{u} \cdot \mathbf{v}. \tag{30}
$$

The goal is to find \(a\) and \(b\) so that this dot product is zero. If \(\mathbf{u} \cdot \mathbf{v} = 0\), then \(z=1\) will suffice; otherwise, set the real part \(a=1\) and choose \(b \in \mathcal{R}\) so that \((\mathbf{u} \cdot \mathbf{v})\, b^2 + (\Vert \mathbf{v}\Vert^2-\Vert \mathbf{u}\Vert^2)\, b - \mathbf{u} \cdot \mathbf{v} = 0\). This can be done precisely when the discriminant is nonnegative, i.e., when \((\Vert \mathbf{v}\Vert^2-\Vert \mathbf{u}\Vert^2)^2+4(\mathbf{u} \cdot \mathbf{v})^2 \ge 0\), which is clearly true. This yields a suitable complex scalar \(z=a+jb\).\(\square \)
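As a quick numerical illustration of Lemma 1 (not part of the paper; NumPy is assumed), the following snippet solves the quadratic from the proof for a random complex vector and verifies that the real and imaginary parts of \(z\mathbf{w}\) are perpendicular:

```python
# Numerical check of Lemma 1: with a = 1, solve
#   (u.v) b^2 + (||v||^2 - ||u||^2) b - (u.v) = 0
# and verify that Re(zw) and Im(zw) are perpendicular.
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)

uv = u @ v
if uv == 0:
    z = 1.0                      # parts already perpendicular
else:
    d = v @ v - u @ u
    b = (-d + np.sqrt(d**2 + 4 * uv**2)) / (2 * uv)
    z = 1 + 1j * b

zw = z * (u + 1j * v)
print(np.real(zw) @ np.imag(zw))   # ~0 up to rounding
```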

The next lemma concerns the nature of the complex eigenvectors of an orthogonal matrix when the real and imaginary parts are perpendicular to each other.

Lemma 2

Suppose that \(\lambda \) is a non-real eigenvalue of an orthogonal matrix \(U\) and that \(\mathbf{w} = \mathbf{u} + j\mathbf{v}\) is a corresponding eigenvector, where the real vectors \(\mathbf{u}\) and \(\mathbf{v}\) are perpendicular to each other. Then \(\Vert \mathbf{u}\Vert = \Vert \mathbf{v}\Vert \).

Proof

The scalar \(\lambda \) has the form \(\lambda = e^{j \theta } = \cos \theta + j \sin \theta \) where \(\sin \theta \ne 0\). By equating the real parts of the expressions \(U\mathbf{w} = U\mathbf{u} + j U\mathbf{v}\) and

$$
U\mathbf{w} = (\cos \theta + j \sin \theta ) (\mathbf{u} + j\mathbf{v}) = \cos \theta \, \mathbf{u} - \sin \theta \, \mathbf{v} + j (\sin \theta \, \mathbf{u} + \cos \theta \, \mathbf{v}) \tag{31}
$$

and using the facts that \(\Vert U\mathbf{u}\Vert =\Vert \mathbf{u}\Vert \) and \(\mathbf{u} \cdot \mathbf{v} = 0\), the result \(\Vert \mathbf{u}\Vert^2 = \Vert U \mathbf{u}\Vert^2 = \cos^2\theta \, \Vert \mathbf{u}\Vert^2 + \sin^2\theta \, \Vert \mathbf{v}\Vert^2\) is obtained, which simplifies to \(\sin^2\theta \, (\Vert \mathbf{u}\Vert^2-\Vert \mathbf{v}\Vert^2)=0\). Equating the imaginary parts yields the same expression. Since \(\sin \theta \ne 0\), it follows that \(\Vert \mathbf{u}\Vert^2=\Vert \mathbf{v}\Vert^2\), which proves the result.\(\square \)
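The two lemmas can likewise be checked together numerically (again an illustration, not part of the paper; `scipy.stats.ortho_group` is used here to draw a random orthogonal matrix):

```python
# For a complex eigenvector w of an orthogonal matrix, rescaling w as in
# Lemma 1 makes Re(w) and Im(w) perpendicular, and Lemma 2 then forces
# them to have equal norms.
import numpy as np
from scipy.stats import ortho_group

U = ortho_group.rvs(dim=4, random_state=1)   # random orthogonal matrix
lam, W = np.linalg.eig(U)
w = W[:, np.argmax(np.abs(lam.imag))]        # a non-real eigenpair

u, v = w.real, w.imag
uv, d = u @ v, v @ v - u @ u
b = 0.0 if uv == 0 else (-d + np.sqrt(d**2 + 4 * uv**2)) / (2 * uv)
zw = (1 + 1j * b) * w                        # Lemma 1 rescaling

print(np.real(zw) @ np.imag(zw))             # ~0 (perpendicular)
print(np.linalg.norm(zw.real) - np.linalg.norm(zw.imag))  # ~0 (equal norms)
```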

Theorem 1 can now be proven.

Proof

Once again, it is well known that the eigenvalues of a real orthogonal matrix have unit magnitude, i.e., the eigenvalues have the form \(1\), \(-1\), or \(e^{j \theta }\). Suppose that \(\mathbf{q}\) is a unit-length eigenvector of \(U\) corresponding to \(1\) or \(-1\). Letting \(Q_1\) be an orthogonal matrix with \(\mathbf{q}\) as its first column, \(Q_1^T U Q_1\) is an orthogonal matrix whose first column equals \(\begin{bmatrix} \pm 1&0&\cdots&0 \end{bmatrix}^T\). Since \(Q_1^T U Q_1\) is an orthogonal matrix with \(\pm 1\) in its \((1,1)\) element, the remaining elements in its first row must all be zero. Consequently, \(U\) can be written in the following block diagonal representation:

$$
U = Q_1 \begin{bmatrix} \pm 1 & \\ & U_2 \end{bmatrix} Q_1^T. \tag{32}
$$

The situation is somewhat more involved for complex eigenvalues. Since \(U\) is a real matrix, its complex eigenvalues appear as complex conjugate pairs, and the corresponding eigenvectors can also be chosen to be complex conjugates of each other. Let \(\lambda = e^{j \theta } = \cos \theta + j \sin \theta \) be a non-real eigenvalue, where \(\sin \theta \) is necessarily nonzero. Next, note that by Lemmas 1 and 2 the corresponding eigenvector \(\mathbf{w}=\mathbf{u}+j\mathbf{v}\) can be chosen so that its real and imaginary parts have unit norm and are perpendicular to each other. It can then be shown that

$$
\begin{bmatrix} \mathbf{u}&-\mathbf{v} \end{bmatrix}^T U \begin{bmatrix} \mathbf{u}&-\mathbf{v} \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}. \tag{33}
$$

Letting \(Q_1\) be an orthogonal matrix whose first two columns are \(\mathbf{u}\) and \(-\mathbf{v}\) yields

$$
U = Q_1 \begin{bmatrix} \begin{matrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{matrix} & \\ & U_2 \end{bmatrix} Q_1^T. \tag{34}
$$

It is then possible to apply (32) and (34) iteratively to obtain the form given in (20). For example, the orthogonal matrix \(U_2\) in (32) has the same eigenvalues as \(U\), except that the first eigenvalue of \(U\) is removed. So, if there is another real eigenvalue, then one can choose an \((n-1) \times (n-1)\) orthogonal matrix \(Q_2\) such that \(Q_2^T U_2 Q_2\) also has the form of (32). Hence, for the orthogonal matrix \(Q=Q_1 \, \mathrm{diag}(1,Q_2)\),

$$
U = Q \begin{bmatrix} \pm 1 & & \\ & \pm 1 & \\ & & U_3 \end{bmatrix} Q^T. \tag{35}
$$

Thus, iteratively applying the above procedures results in the desired form.\(\square \)
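In practice, the canonical form of Theorem 1 is what the real Schur decomposition returns for an orthogonal (hence normal) matrix: a block diagonal factor, up to rounding, containing scalar \(\pm 1\) blocks and \(2 \times 2\) rotation blocks. A short illustration (using `scipy.linalg.schur`; the dimension and seed are arbitrary):

```python
# Real Schur form of an orthogonal matrix: U = Q T Q^T with Q orthogonal
# and T block diagonal (up to rounding), containing +/-1 entries and
# 2x2 blocks of the form [[cos t, -sin t], [sin t, cos t]].
import numpy as np
from scipy.linalg import schur
from scipy.stats import ortho_group

U = ortho_group.rvs(dim=5, random_state=2)
T, Q = schur(U, output='real')

print(np.round(T, 3))                     # rotation blocks and +/-1 entries
print(np.allclose(Q @ T @ Q.T, U))        # True
print(np.allclose(Q.T @ Q, np.eye(5)))    # True
```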

About this article

Cite this article

Coyle, E.J., Roberts, R.G., Collins, E.G. et al. Synthetic data generation for classification via uni-modal cluster interpolation. Auton Robot 37, 27–45 (2014). https://doi.org/10.1007/s10514-013-9373-9
