Allison Jing

PhD Student

Allison is a PhD student at the Empathic Computing Lab. She received her Master’s degree in Human-Computer Interaction from the University of St Andrews in 2018. She currently works under the supervision of Professor Mark Billinghurst and Dr Gun Lee, focusing on research into sharing non-verbal gaze and gesture cues for XR remote collaboration.

Projects

  • Sharing Gesture and Gaze Cues for Enhancing AR Collaboration

    This project explores how gaze and gestures can be used to enhance collaboration in Mixed Reality environments. Gaze and gesture provide important cues for face-to-face collaboration, but it can be difficult to convey those same cues in current teleconferencing systems. However, Augmented Reality and Virtual Reality technology can be used to share hand and eye-tracking information. For example, a remote user in VR could have their hands tracked and shared with a local user in AR, who would see virtual hands appearing over their workspace showing them what to do. In a similar way, eye-tracking technology can be used to share the gaze of a remote helper with a local worker to help them perform better on a real-world task. Our research has shown that sharing a wide range of different virtual gaze and gesture cues can significantly enhance remote collaboration in Mixed Reality systems (a sketch of the kind of data such a system might share is given below).
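
    To make this concrete, here is a minimal illustrative sketch in Python (not the lab's actual implementation; all names and values are hypothetical) of the kind of per-frame gaze and hand data a remote helper's headset might stream to a local worker's AR display:

    # Minimal sketch of streaming non-verbal cues from a remote VR helper to a
    # local AR worker. Hypothetical names and values, for illustration only.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class GazeCue:
        origin: tuple      # (x, y, z) eye position in the shared coordinate space
        direction: tuple   # normalised (x, y, z) gaze direction
        timestamp: float   # capture time, used to keep cues in sync across users

    @dataclass
    class HandCue:
        joints: list       # list of (x, y, z) joint positions from the hand tracker
        timestamp: float

    def encode_cues(gaze: GazeCue, hand: HandCue) -> bytes:
        """Pack one frame of cues for sending over the network to the AR client."""
        return json.dumps({"gaze": asdict(gaze), "hand": asdict(hand)}).encode()

    # Example frame; in a real system these values would come from the headset's
    # eye tracker and hand tracker every frame.
    frame = encode_cues(
        GazeCue(origin=(0.0, 1.6, 0.0), direction=(0.0, -0.2, 1.0), timestamp=time.time()),
        HandCue(joints=[(0.1, 1.2, 0.4)] * 21, timestamp=time.time()),
    )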

Publications

  • Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration
    Allison Jing, Kieran May, Gun Lee, Mark Billinghurst.

    Jing, A., May, K., Lee, G., & Billinghurst, M. (2021). Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration. Frontiers in Virtual Reality, 2, 79.

    @article{jing2021eye,
    title={Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration},
    author={Jing, Allison and May, Kieran and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={79},
    year={2021},
    publisher={Frontiers}
    }
    Gaze is one of the predominant communication cues and can provide valuable implicit information such as intention or focus when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics impact real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Using three independent tasks, we found that all BDC visualisations are rated significantly better at representing joint attention and user intention compared to a non-collaborative (NC) condition, and hence are considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, is perceived as significantly more effective as it encourages mutual gaze awareness with relatively low mental effort in a less constrained workspace. In addition, by offering an additional virtual representation that compensates for verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.
  • Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration
    Allison Jing, Gun Lee, Mark Billinghurst.

    Jing, A., Lee, G., & Billinghurst, M. (2022, March). Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 250-259). IEEE.

    @inproceedings{jing2022using,
    title={Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration},
    author={Jing, Allison and Lee, Gun and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={250--259},
    year={2022},
    organization={IEEE}
    }
    In this paper, we present a 360° panoramic Mixed Reality (MR) system that visualises shared gaze cues using contextual speech input to improve task coordination. We conducted two studies to evaluate the design of the MR gaze-speech interface, exploring combinations of visualisation style and context control level. Findings from the first study suggest that an explicit visual form that directly connects the collaborators’ shared gaze to the contextual conversation is preferred. The second study indicates that the gaze-speech modality shortens the coordination time to attend to the shared interest, making the communication more natural and the collaboration more effective. Qualitative feedback also suggests that having a constant joint gaze indicator provides a consistent bi-directional view while establishing a sense of co-presence during task collaboration. We discuss the implications for the design of collaborative MR systems and directions for future research.