
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13521)


Abstract

In this paper, we design a voice-assisted food recall tool that can be implemented on the voice assistants of smart speakers and smartphones, enabling frequent, quick, real-time, self-administered food recall. We envision that voice-assisted food recall can improve the accuracy and usability of the web-based Automated Self-Administered 24-h Assessment (ASA-24). ASA-24 was developed by the National Cancer Institute in 2010 and has been widely used in clinical and research settings, but it has low compliance and completion rates among at-home users. Specifically, we designed a prototype using nine ASA-24 general questions, two free-recall questions, five ASA-24 detailed questions, and three clarifying strategies. Integrating the ASA-24 questions ensures that the prototype's output aligns with that of ASA-24, so it can connect to ASA-24 for nutrition profile analysis. We recruited twenty young adults and twenty older adults to evaluate this prototype, each using it to recall three meals. We evaluated participants' performance on the different types of questions and strategies, and analyzed the strengths and weaknesses of voice-assisted food recall. The mean success rate and single-meal session time were 96.4% and 141.4 s for young adults, and 88.6% and 165.4 s for older adults. The voice-assisted recall session time is significantly shorter than the ASA-24 single-meal session time, and the relevance of the voice responses was determined to be high. We conducted questionnaires and interviews to obtain participants' feedback on the feasibility and acceptability of the prototype: 65% of young and 60% of older participants preferred voice-assisted food recall over web-based food recall, showing promising feasibility and acceptance of our initial voice-assisted food recall prototype.
Future work includes validation of the food recall content, development of food-customized speech recognition and natural language understanding techniques to enhance accuracy, and integration of human-like assistance to improve usability.
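The single-meal flow summarized above (general questions, free recall of foods, per-food detailed follow-ups) can be sketched as a scripted dialogue loop. Everything below is a hypothetical illustration: the question wording, function names, and the comma-splitting food parser are our assumptions, not the paper's implementation, which uses nine ASA-24 general questions, two free-recall questions, five ASA-24 detailed questions, and three clarifying strategies (the clarifying strategies are not shown here).

```python
# Hypothetical sketch of a single-meal voice recall session.

GENERAL_QUESTIONS = [  # stand-ins for the nine ASA-24 general questions
    "What time did you start eating this meal?",
    "Where did you eat this meal?",
]

FREE_RECALL_QUESTIONS = [  # the two free-recall questions (wording assumed)
    "Please tell me everything you ate and drank in this meal.",
    "Did you have anything else?",
]

DETAIL_QUESTIONS = [  # stand-ins for the five ASA-24 detailed questions
    "How was the {food} prepared?",
    "How much {food} did you have?",
]


def run_session(ask):
    """Run one single-meal recall. `ask` plays a prompt and returns the
    transcribed voice response (here, any callable str -> str)."""
    record = {"general": [], "foods": [], "details": {}}
    for q in GENERAL_QUESTIONS:
        record["general"].append((q, ask(q)))
    for q in FREE_RECALL_QUESTIONS:
        # naive parser: split a spoken food list on commas
        record["foods"] += [f.strip() for f in ask(q).split(",") if f.strip()]
    for food in record["foods"]:
        record["details"][food] = [
            (q.format(food=food), ask(q.format(food=food)))
            for q in DETAIL_QUESTIONS
        ]
    return record


# Scripted stand-in for the voice interface, to exercise the flow:
answers = iter(["8 am", "at home", "oatmeal, coffee", "",
                "boiled", "one bowl", "black", "one cup"])
session = run_session(lambda prompt: next(answers))
print(session["foods"])  # ['oatmeal', 'coffee']
```

Structuring the session this way keeps the output keyed to the same question categories as ASA-24, which is what allows the recorded answers to feed into ASA-24's nutrition profile analysis.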

Supported by the US National Institutes of Health, National Institute on Aging, under grant No. 1R01AG067416.


Notes

  1. https://epi.grants.cancer.gov/asa24/.


Author information


Corresponding author

Correspondence to Xiaohui Liang.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Liang, X., Batsis, J.A., Yuan, J., Zhu, Y., Driesse, T.M., Schultz, J. (2022). Voice-Assisted Food Recall Using Voice Assistants. In: Duffy, V.G., Gao, Q., Zhou, J., Antona, M., Stephanidis, C. (eds) HCI International 2022 – Late Breaking Papers: HCI for Health, Well-being, Universal Access and Healthy Aging. HCII 2022. Lecture Notes in Computer Science, vol 13521. Springer, Cham. https://doi.org/10.1007/978-3-031-17902-0_7


  • DOI: https://doi.org/10.1007/978-3-031-17902-0_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17901-3

  • Online ISBN: 978-3-031-17902-0

