
LAR: a low-power, high-precision mobile phone-based AR system

  • Original Article
  • Published in: Personal and Ubiquitous Computing

Abstract

Mobile crowd sensing is a novel large-scale sensing paradigm that uses people's smart devices to analyze social context and human activity, extracting intelligence that serves various innovative services. Augmented reality (AR) systems can bring a more authentic experience to everyday life, and a growing number of scholars are interested in this technology. However, previous approaches suffer from problems such as expensive deployment, low accuracy, and high latency, which greatly limit the application of AR systems. In this paper, we design a lightweight augmented reality system, called LAR, which recognizes the target object quickly and precisely using a feature matching algorithm. In LAR, the target object is photographed twice to obtain the distance between the user and the object, and this distance is used as a feature of the target object. Furthermore, a feature extraction algorithm, named s-SURF, is designed to extract image features. LAR combines the distance with the image features and matches them against images stored in a database. In addition, a flutter-free algorithm is used to denoise and obtain a clearer image. Finally, we build a prototype system to evaluate LAR's performance: LAR achieves an accuracy of 86% with a time delay of 141 ms.
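The two-shot distance measurement described above can be sketched under a pinhole-camera assumption: if the camera moves a known step toward the object between the two shots, the object's apparent size grows, and similar triangles recover the distance. This is an illustrative sketch, not the paper's exact method; the function name and parameters are assumptions.

```python
def distance_from_two_shots(step_m: float, width1_px: float, width2_px: float) -> float:
    """Estimate the object's distance (at the first shot) from two photos.

    Pinhole model: apparent width w = f * W / D, where f is focal length,
    W the object's real width, and D its distance. Taking the second shot
    `step_m` metres closer gives w1 = f*W/D and w2 = f*W/(D - step_m),
    which solves to D = step_m * w2 / (w2 - w1).
    """
    if width2_px <= width1_px:
        raise ValueError("second shot must be closer (larger apparent width)")
    return step_m * width2_px / (width2_px - width1_px)

# Example: the object appears 100 px wide, then 125 px after moving 0.5 m closer,
# so the object was 0.5 * 125 / 25 = 2.5 m away at the first shot.
print(distance_from_two_shots(0.5, 100, 125))  # → 2.5
```

In practice the apparent widths would come from matched feature keypoints (e.g., the bounding box of s-SURF matches) rather than a manually measured pixel width, and the camera step could be estimated from inertial sensors.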


Figures 1–13 (available in the full article)



Funding

This work was supported in part by the China Postdoctoral Science Foundation (No. 2017M613187), the International Cooperation Project of Shaanxi Province (No. 2020KW-004), the Key Research and Development Project of Shaanxi Province (No. 2018SF-369), and the Shaanxi Science and Technology Innovation Team Support Project (No. 2018TD-026).

Author information


Corresponding author

Correspondence to Tianzhang Xing.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dai, X., Shang, F., Xing, T. et al. LAR: a low-power, high-precision mobile phone-based AR system. Pers Ubiquit Comput 27, 509–521 (2023). https://doi.org/10.1007/s00779-020-01421-3

