Allison Jing

PhD Student

Allison is a PhD student at the Empathic Computing Lab. She received her Master’s degree in Human-Computer Interaction from the University of St Andrews in 2018. She currently works under the supervision of Professor Mark Billinghurst and Dr Gun Lee, focusing on research into sharing non-verbal gaze and gesture cues for XR remote collaboration.

Projects

  • Sharing Gesture and Gaze Cues for Enhancing AR Collaboration

    This project explores how gaze and gesture cues could be used to enhance collaboration in Mixed Reality environments. Gaze and gesture provide important cues in face-to-face collaboration, but it can be difficult to convey those same cues in current teleconferencing systems. However, Augmented Reality (AR) and Virtual Reality (VR) technology can be used to share hand-tracking and eye-tracking information. For example, a remote user in VR could have their hands tracked and shared with a local user in AR, who would see virtual hands appearing over their workspace showing them what to do. Similarly, eye-tracking technology can be used to share the gaze of a remote helper with a local worker, helping them perform better on a real-world task; a simplified sketch of this idea follows below. Our research has shown that sharing a wide range of virtual gaze and gesture cues can significantly enhance remote collaboration in Mixed Reality systems.
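
    To make the idea concrete, here is a minimal Python sketch of how a remote helper's gaze ray and tracked hand joints might be streamed to a local AR client. This is an illustration only, not the lab's actual system: the message format, the names (GazeSample, HandSample, stream_cues), and the UDP address are all assumptions.

    # Hypothetical sketch: stream a remote helper's gaze and hand cues to a
    # local AR client as JSON over UDP. Not the lab's actual implementation.
    import json
    import socket
    from dataclasses import dataclass, asdict

    LOCAL_AR_CLIENT = ("127.0.0.1", 9099)  # assumed address of the AR client

    @dataclass
    class GazeSample:
        """One eye-tracking sample: a ray in the shared world coordinate frame."""
        origin: tuple        # (x, y, z) position of the helper's eyes
        direction: tuple     # unit vector of the gaze direction
        timestamp_ms: int

    @dataclass
    class HandSample:
        """Tracked hand joints, e.g. from a VR hand tracker."""
        joints: dict         # joint name -> (x, y, z)
        timestamp_ms: int

    def stream_cues(sock: socket.socket, gaze: GazeSample, hand: HandSample) -> None:
        """Pack both non-verbal cues into one datagram so they stay in sync."""
        packet = json.dumps({"gaze": asdict(gaze), "hand": asdict(hand)})
        sock.sendto(packet.encode("utf-8"), LOCAL_AR_CLIENT)

    if __name__ == "__main__":
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        gaze = GazeSample(origin=(0.0, 1.6, 0.0), direction=(0.0, -0.2, -0.98), timestamp_ms=0)
        hand = HandSample(joints={"index_tip": (0.1, 1.2, -0.4)}, timestamp_ms=0)
        stream_cues(sock, gaze, hand)

    On the receiving side, the AR client would deserialise each packet and render the gaze as a ray and the joints as a virtual hand overlaid on the workspace.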

Publications

  • Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration
    Allison Jing, Kieran May, Gun Lee, Mark Billinghurst.

    Jing, A., May, K., Lee, G., & Billinghurst, M. (2021). Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration. Frontiers in Virtual Reality, 2, 79.

    @article{jing2021eye,
      title={Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration},
      author={Jing, Allison and May, Kieran and Lee, Gun and Billinghurst, Mark},
      journal={Frontiers in Virtual Reality},
      volume={2},
      pages={79},
      year={2021},
      publisher={Frontiers}
    }

    Gaze is one of the predominant communication cues and can provide valuable implicit information, such as intention or focus, when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics affect real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Across three independent tasks, we found that all BDC visualisations were rated significantly better at representing joint attention and user intention than a non-collaborative (NC) condition, and hence were considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, was perceived as significantly more effective, as it encourages mutual gaze awareness with relatively low mental effort in a less constrained workspace. In addition, by offering an additional virtual representation that complements verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.