Research article
DOI: 10.1145/3384657.3384798

Facilitating Experiential Knowledge Sharing through Situated Conversations

Published: 06 June 2020

Abstract

This paper proposes a system that facilitates knowledge sharing among people in similar situations by providing audio of past conversations. Our system records all conversations among users in specific fields, such as tourist spots, museums, and digital fabrication studios, and then provides users in a similar situation with fragments of the accumulated conversations at appropriate moments. To segment and retrieve past conversations from the vast amount of captured data, we focus on non-verbal contextual information: the location, attention targets, and hand operations of the conversation participants. All conversation audio is recorded without any selection or classification. The delivery of audio to a user is determined not by the content of the conversation but by the similarity of situations between the original conversation participants and the user. To demonstrate the concept of the proposed system, we performed a series of experiments at a digital fabrication workshop to observe how users' behavior changed when presented with past conversations related to their situation. Since we have not yet achieved a satisfactory implementation for sensing the user's situation, we used the Wizard of Oz (WOZ) method: the experimenter visually judges changes in the user's situation and inputs them to the system, which then automatically provides the user with past conversation audio corresponding to that situation. Experimental results show that most of the conversations presented when the situation matched perfectly were related to the user's situation, and some of them effectively prompted users to change their behavior. Interestingly, we observed that conversations recorded in the same area but unrelated to the current task also had the effect of expanding the user's knowledge. We also observed a case in which a conversation highly related to the user's situation was presented at the right time, yet the user could not apply the knowledge to solve the problem of the current task. This reveals a limitation of our system: even if a relevant conversation is provided at the right time, it is useless unless it fits the user's knowledge level.
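To make the situation-based retrieval concrete, the following is a minimal sketch, not taken from the paper: the Situation and Fragment classes, the equal-weight scoring over the three contextual cues, and the match threshold are all illustrative assumptions. Past conversation fragments are indexed by the participants' location, attention target, and hand operation, and a fragment is selected for playback when the current user's situation matches closely enough.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Situation:
    """Non-verbal context of a conversation participant (hypothetical schema)."""
    location: str          # e.g. area of the fabrication studio
    attention_target: str  # e.g. machine or object being looked at
    hand_operation: str    # e.g. tool operation being performed

@dataclass
class Fragment:
    """A segmented piece of recorded conversation plus the situation it was captured in."""
    audio_path: str
    situation: Situation

def situation_similarity(a: Situation, b: Situation) -> float:
    """Score how closely two situations match (0.0 to 1.0).
    Illustrative equal-weight matching on the three contextual cues."""
    matches = [
        a.location == b.location,
        a.attention_target == b.attention_target,
        a.hand_operation == b.hand_operation,
    ]
    return sum(matches) / len(matches)

def select_fragment(current: Situation, archive: List[Fragment],
                    threshold: float = 1.0) -> Optional[Fragment]:
    """Return the best-matching past conversation fragment, if any.
    A threshold of 1.0 corresponds to the 'perfect match' condition described
    in the abstract; lowering it admits partially related conversations."""
    best = max(archive,
               key=lambda f: situation_similarity(current, f.situation),
               default=None)
    if best and situation_similarity(current, best.situation) >= threshold:
        return best
    return None

# Example: a user drilling at the workbench triggers playback of a past
# conversation recorded in the same situation.
if __name__ == "__main__":
    archive = [
        Fragment("conv_012.wav", Situation("workbench", "drill press", "drilling")),
        Fragment("conv_038.wav", Situation("laser area", "laser cutter", "focusing")),
    ]
    user_now = Situation("workbench", "drill press", "drilling")
    hit = select_fragment(user_now, archive)
    print(hit.audio_path if hit else "no matching conversation")
```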

    Published In

    AHs '20: Proceedings of the Augmented Humans International Conference
    March 2020, 296 pages
    ISBN: 9781450376037
    DOI: 10.1145/3384657

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 06 June 2020


    Author Tags

    1. conversation
    2. experience sharing
    3. situation

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    AHs '20: Augmented Humans International Conference
    March 16-17, 2020
    Kaiserslautern, Germany

