ABSTRACT
Face annotation makes it easy to share and manage digital photos and videos. While state-of-the-art face recognition algorithms can achieve high accuracy to support automatic face annotation, their implementations on embedded platforms cannot achieve real-time performance due to the demanding computational requirements. However, the availability of an embedded GPU in most smartphones offers the opportunity to use it as an accelerator for the face recognition task. In this demonstration, we show that, with acceleration by the embedded low-power GPU, a real-time face annotation system can be realized on an existing off-the-shelf smartphone.
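The annotation task described above reduces to a detect-extract-match pipeline: detect faces in a frame, compute a feature vector for each detected face, and label each face with the nearest enrolled identity (or "unknown" if nothing is close enough). The following is a minimal sketch of only the matching stage; the feature vectors, gallery names, and the distance threshold are illustrative stand-ins, and a real system like the one demonstrated would run the detection and feature-extraction stages (the computationally heavy parts) on the embedded GPU.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def annotate(face_features, gallery, threshold=0.3):
    """Label each detected face with the nearest gallery identity,
    or 'unknown' if no enrolled identity is within the threshold.

    face_features: list of feature vectors, one per detected face
    gallery: dict mapping identity name -> enrolled feature vector
    """
    labels = []
    for feat in face_features:
        best_name, best_dist = "unknown", threshold
        for name, ref in gallery.items():
            d = euclidean(feat, ref)
            if d < best_dist:
                best_name, best_dist = name, d
        labels.append(best_name)
    return labels

# Toy example: one face close to an enrolled identity, one that is not.
gallery = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
print(annotate([[0.12, 0.88], [0.5, 0.5]], gallery))  # ['alice', 'unknown']
```

Nearest-neighbor matching with a rejection threshold is only one plausible back end; the threshold governs the trade-off between mislabeling strangers and failing to recognize enrolled users.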
Index Terms
- A GPU-accelerated face annotation system for smartphones