
A Robust Feature Matching Approach for Photography Originality Test

Journal of Mathematical Imaging and Vision

Abstract

With today’s photo-processing software, it is easy to ‘create’ a photo from other people’s photographic works, and unoriginal entries have become increasingly common in photography contests. To protect copyright and maintain fairness, it is crucial to detect such plagiarism. This is difficult, however, because most plagiarized photos have undergone copying, scaling, cropping, and other processing, and, even worse, most originals carry no digital watermark.

In this paper, we propose a novel learning-based feature matching approach to address this problem. It uses affine-invariant features to identify bogus photos. First, we adopt an extremely fast algorithm to extract keypoints. Then, using color and texture representations, keypoints belonging to different objects or to the background are clustered into corresponding groups. Next, based on a partition of the deformation space, a multilayer ferns model is trained to recognize local patches and, at the same time, produce coarse pose estimates. Finally, a linear predictor refines the estimate to obtain an accurate homography. We test our approach on several public datasets and on a special dataset from a national photography database. The experimental results demonstrate that our method provides robust and powerful matching; in particular, it performs remarkably well under difficult matching conditions in which other state-of-the-art methods fail to yield good results. Furthermore, since there is no need to compute complicated descriptors, our method is very fast at run time.
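
For illustration only, the following Python sketch assembles a comparable matching pipeline from standard OpenCV components: FAST keypoint extraction, k-means grouping of keypoints by local mean color (a crude stand-in for the color-and-texture clustering described above), and ORB matching with a RANSAC homography in place of the multilayer ferns and linear-predictor stages. It is a minimal sketch under those substitutions, not the authors' implementation.

```python
# Minimal sketch of a comparable matching pipeline using off-the-shelf OpenCV
# components. ORB descriptors and RANSAC stand in for the paper's multilayer
# ferns and linear-predictor refinement; keypoint grouping uses k-means on
# local mean color, a simplification of the color/texture clustering step.
import cv2
import numpy as np


def group_keypoints_by_color(image, keypoints, n_groups=3, patch=9):
    """Cluster keypoints into groups using the mean color of a small patch."""
    h, w = image.shape[:2]
    feats = []
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        x0, x1 = max(0, x - patch // 2), min(w, x + patch // 2 + 1)
        y0, y1 = max(0, y - patch // 2), min(h, y + patch // 2 + 1)
        feats.append(image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
    feats = np.float32(feats)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(feats, n_groups, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    return labels.ravel()  # group index per keypoint


def match_and_estimate_homography(img_a, img_b):
    """Detect FAST keypoints, match them, and estimate a homography."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    # Extremely fast keypoint extraction (FAST corner detector).
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kps_a = fast.detect(gray_a, None)
    kps_b = fast.detect(gray_b, None)

    # The paper avoids explicit descriptors; ORB is used here purely so that
    # the sketch has a concrete way to put keypoints into correspondence.
    orb = cv2.ORB_create()
    kps_a, des_a = orb.compute(gray_a, kps_a)
    kps_b, des_b = orb.compute(gray_b, kps_b)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kps_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC replaces the coarse-to-fine pose estimation of the paper.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    n_inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    return H, n_inliers
```

In an originality test of this kind, a homography between a contest entry and an archived original that is supported by many inliers would flag the entry as a likely derived copy; the per-keypoint group labels could further restrict matching to keypoints drawn from the same object or background region.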



Author information

Correspondence to Ce Gao.

Cite this article

Gao, C., Song, Y. & Jia, P. A Robust Feature Matching Approach for Photography Originality Test. J Math Imaging Vis 44, 185–194 (2012). https://doi.org/10.1007/s10851-011-0321-z
