Sharing Gesture and Gaze Cues for Enhancing AR Collaboration

This project explores how gaze and gesture cues can be used to enhance collaboration in Mixed Reality environments. Gaze and gesture provide important cues for face-to-face collaboration, but these cues are difficult to convey in current teleconferencing systems. However, Augmented Reality (AR) and Virtual Reality (VR) technology can be used to share hand- and eye-tracking information. For example, a remote user in VR could have their hands tracked and shared with a local user in AR, who sees virtual hands appearing over their workspace showing them what to do. Similarly, eye tracking can be used to share the gaze of a remote helper with a local worker to help them perform better on a real-world task. Our research has shown that sharing a wide range of virtual gaze and gesture cues can significantly enhance remote collaboration in Mixed Reality systems.
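
To make this concrete, the sketch below shows one way a remote helper's gaze ray and hand pose could be packaged and streamed to the local AR user's view. This is a minimal Python sketch assuming a simple JSON-over-socket transport; the names (GazeRay, HandPose, CuePacket, send_cues) are invented for illustration and are not taken from the systems described below.

    # Illustrative sketch only: a minimal data model for sharing a remote helper's
    # gaze ray and hand pose with a local AR user. All names and the transport
    # choice are assumptions, not the implementation of the systems below.
    import json
    import socket
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class GazeRay:
        origin: tuple      # (x, y, z) head position in the shared coordinate frame
        direction: tuple   # unit vector of the gaze direction

    @dataclass
    class HandPose:
        hand: str          # "left" or "right"
        joints: list       # list of (x, y, z) joint positions from the hand tracker

    @dataclass
    class CuePacket:
        timestamp: float
        gaze: GazeRay
        hands: list        # list of HandPose

    def send_cues(sock: socket.socket, packet: CuePacket) -> None:
        # Serialise the cue packet as one JSON line and send it to the local AR client.
        sock.sendall((json.dumps(asdict(packet)) + "\n").encode("utf-8"))

    if __name__ == "__main__":
        # Example packet: the remote helper looks forward and points with the right hand.
        packet = CuePacket(
            timestamp=time.time(),
            gaze=GazeRay(origin=(0.0, 1.6, 0.0), direction=(0.0, 0.0, 1.0)),
            hands=[HandPose(hand="right", joints=[(0.2, 1.2, 0.4), (0.22, 1.21, 0.45)])],
        )
        print(json.dumps(asdict(packet), indent=2))  # stand-in for a network send

On the receiving side, the local AR client would deserialise each JSON line and render the gaze ray and hand joints as virtual overlays registered to the shared workspace.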

Publications

  • CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues
    Thammathip Piumsomboon, Arindam Dey, Barrett Ens, Gun Lee, Mark Billinghurst

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2017, October). [POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 218-219). IEEE.

    @inproceedings{piumsomboon2017poster,
    title={[POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={218--219},
    year={2017},
    organization={IEEE}
    }
    We present CoVAR, a novel Virtual Reality (VR) and Augmented Reality (AR) system for remote collaboration. It supports collaboration between AR and VR users by sharing a 3D reconstruction of the AR user's environment. To enhance this mixed-platform collaboration, it provides natural inputs such as eye gaze and hand gestures, remote embodiment through an avatar's head and hands, and awareness cues showing field of view and gaze. In this paper, we describe the system architecture, setup and calibration procedures, input methods and interaction, and collaboration enhancement features.
  • A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing
    Huidong Bai, Prasanth Sasikumar, Jing Yang, Mark Billinghurst

    Huidong Bai, Prasanth Sasikumar, Jing Yang, and Mark Billinghurst. 2020. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376550

    @inproceedings{bai2020user,
    title={A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing},
    author={Bai, Huidong and Sasikumar, Prasanth and Yang, Jing and Billinghurst, Mark},
    booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--13},
    year={2020}
    }
    Supporting natural communication cues is critical for people to work together remotely and face-to-face. In this paper we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of his/her surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert to the local worker could affect the overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone. The combined cues were also rated significantly higher than the gaze in terms of ease of conveying spatial actions.
  • Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration
    Allison Jing, Kieran May, Gun Lee, Mark Billinghurst

    Jing, A., May, K., Lee, G., & Billinghurst, M. (2021). Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration. Frontiers in Virtual Reality, 2, 79.

    @article{jing2021eye,
    title={Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration},
    author={Jing, Allison and May, Kieran and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={79},
    year={2021},
    publisher={Frontiers}
    }
    Gaze is one of the predominant communication cues and can provide valuable implicit information such as intention or focus when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics impact real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Using three independent tasks, we found that all BDC visualisations are rated significantly better at representing joint attention and user intention compared to a non-collaborative (NC) condition, and hence are considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, is perceived as significantly more effective as it encourages mutual gaze awareness with a relatively low mental effort in a less constrained workspace. In addition, by offering additional virtual representation that compensates for verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.
  • Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration
    Allison Jing, Gun Lee, Mark Billinghurst

    Jing, A., Lee, G., & Billinghurst, M. (2022, March). Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 250-259). IEEE.

    @inproceedings{jing2022using,
    title={Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration},
    author={Jing, Allison and Lee, Gun and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={250--259},
    year={2022},
    organization={IEEE}
    }
    In this paper, we present a 360° panoramic Mixed Reality (MR) system that visualises shared gaze cues using contextual speech input to improve task coordination. We conducted two studies to evaluate the design of the MR gaze-speech interface, exploring the combinations of visualisation style and context control level. Findings from the first study suggest that an explicit visual form that directly connects the collaborators' shared gaze to the contextual conversation is preferred. The second study indicates that the gaze-speech modality shortens the coordination time to attend to the shared interest, making the communication more natural and the collaboration more effective. Qualitative feedback also suggests that having a constant joint gaze indicator provides a consistent bi-directional view while establishing a sense of co-presence during task collaboration. We discuss the implications for the design of collaborative MR systems and directions for future research.
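
As a rough illustration of the gaze-speech idea in the last paper above, the following Python sketch shows one possible trigger rule: watch the recognised speech for deictic references and only then highlight the shared gaze cue. The keyword list and function names are assumptions chosen for illustration, not the published system's implementation.

    # Illustrative sketch only: decide when to highlight the shared gaze cue by
    # checking recognised speech for deictic references. The keyword list and
    # names are assumptions, not the published gaze-speech interface.
    DEICTIC_WORDS = {"this", "that", "here", "there", "these", "those"}

    def should_highlight_gaze(transcript: str) -> bool:
        # True when the latest utterance contains a deictic reference.
        words = {w.strip(".,!?").lower() for w in transcript.split()}
        return bool(words & DEICTIC_WORDS)

    if __name__ == "__main__":
        for utterance in ["Pass me that one", "Tighten the bolt here", "It looks finished"]:
            action = "show joint gaze cue" if should_highlight_gaze(utterance) else "keep default view"
            print(f"{utterance!r} -> {action}")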