Mark Billinghurst

Director

Prof. Mark Billinghurst has a wealth of knowledge and expertise in human-computer interface technology, particularly in the area of Augmented Reality (the overlay of three-dimensional images on the real world).

In 2002, the former HIT Lab US Research Associate completed his PhD in Electrical Engineering at the University of Washington, under the supervision of Professor Thomas Furness III and Professor Linda Shapiro. As part of the research for his thesis, titled Shared Space: Exploration in Collaborative Augmented Reality, Dr Billinghurst invented the Magic Book – an animated children’s book that comes to life when viewed through a lightweight head-mounted display (HMD).

Not surprisingly, Dr Billinghurst has received several accolades in recent years for his contribution to Human Interface Technology research. In 2001 he was awarded the Discover Magazine Award for Entertainment for creating the Magic Book technology. He was selected as one of eight leading New Zealand innovators and entrepreneurs to be showcased at the Carter Holt Harvey New Zealand Innovation Pavilion at the America’s Cup Village from November 2002 until March 2003. In 2004 he was nominated for a prestigious World Technology Network (WTN) World Technology Award in the education category, and in 2005 he was appointed to the New Zealand Government’s Growth and Innovation Advisory Board.

Originally educated in New Zealand, Dr Billinghurst is a two-time graduate of Waikato University, where he completed a BCMS (Bachelor of Computing and Mathematical Science) with first class honours in 1990 and a Master of Philosophy (Applied Mathematics & Physics) in 1992.

Research interests: Dr. Billinghurst’s research focuses primarily on advanced 3D user interfaces such as:

  • Wearable Computing – Spatial and collaborative interfaces for small wearable computers. These interfaces explore what becomes possible when ubiquitous computing and communication are merged on the body.
  • Shared Space – An interface that demonstrates how augmented reality, the overlaying of virtual objects on the real world, can radically enhance face-to-face and remote collaboration.
  • Multimodal Input – Combining natural language and artificial intelligence techniques to allow human-computer interaction with an intuitive mix of voice, gesture, speech, gaze and body motion.

Projects

  • SharedSphere

    SharedSphere is a Mixed Reality-based remote collaboration system that not only allows sharing a live-captured immersive 360 panorama, but also supports enriched two-way communication and collaboration by sharing non-verbal communication cues such as view awareness cues, drawn annotations, and hand gestures.
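
    As an illustration of how a view awareness cue might be placed on a shared equirectangular panorama (the yaw/pitch conventions and image size below are assumptions for this sketch, not the SharedSphere implementation), a collaborator's view direction can be mapped to the pixel position where the cue is drawn:

    import math

    def view_direction_to_pixel(yaw_deg, pitch_deg, pano_width, pano_height):
        """Map a collaborator's view direction (yaw/pitch in degrees) to the pixel
        of an equirectangular panorama where a view awareness cue could be drawn.
        Yaw 0 is assumed to face the panorama centre and pitch is positive upwards;
        both conventions are assumptions of this sketch."""
        u = ((yaw_deg + 180.0) % 360.0) / 360.0      # horizontal position, 0..1
        v = (90.0 - pitch_deg) / 180.0               # vertical position, 0..1
        return int(u * (pano_width - 1)), int(v * (pano_height - 1))

    # Example: on a 4096x2048 panorama, the remote user looks 30 deg right and 10 deg up.
    print(view_direction_to_pixel(30.0, 10.0, 4096, 2048))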

  • Augmented Mirrors

    Mirrors are physical displays that show our real world in reflection. While physical mirrors simply show what is in the real-world scene, with the help of digital technology we can also alter the reality reflected in the mirror. The Augmented Mirrors project explores visualisation and interaction techniques for exploiting mirrors as Augmented Reality (AR) displays. The project focuses especially on using user interface agents to guide user interaction with Augmented Mirrors.

  • Mini-Me

    Mini-Me is an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user’s gaze direction and body gestures while it transforms in size and orientation to stay within the AR user’s field of view. We tested Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
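
    The following sketch illustrates the kind of geometry behind keeping an adaptive avatar visible (the camera conventions, field-of-view angle and placement distance are assumed for illustration and are not taken from the published system): if the remote user's direction falls outside the AR user's viewing cone, the avatar direction is clamped onto the cone boundary.

    import math

    def _normalised(v):
        n = math.sqrt(sum(x * x for x in v)) or 1e-9
        return [x / n for x in v]

    def place_avatar(camera_pos, camera_forward, vr_user_pos,
                     half_fov_deg=25.0, avatar_dist=1.5):
        """Return a position for a Mini-Me style avatar that stays inside the AR
        user's view cone: if the remote VR user is already within half_fov_deg of
        the view axis the avatar points straight at them, otherwise the direction
        is clamped onto the cone boundary. Illustrative geometry only."""
        fwd = _normalised(camera_forward)
        to_user = _normalised([u - c for u, c in zip(vr_user_pos, camera_pos)])
        cos_a = sum(f * t for f, t in zip(fwd, to_user))
        cos_limit = math.cos(math.radians(half_fov_deg))
        if cos_a < cos_limit:                          # outside the view cone: clamp
            perp = _normalised([t - cos_a * f for t, f in zip(to_user, fwd)])
            sin_limit = math.sin(math.radians(half_fov_deg))
            to_user = [cos_limit * f + sin_limit * p for f, p in zip(fwd, perp)]
        return [c + avatar_dist * d for c, d in zip(camera_pos, to_user)]

    # Example: the AR camera looks along +Z while the remote user stands far off to the left.
    print(place_avatar([0, 0, 0], [0, 0, 1], [-4, 0, 1]))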

  • Pinpointing

    Head and eye movement can be leveraged to improve the user’s interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
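
    As a rough sketch of how the two modalities can be combined (the coordinate conventions and the degrees-to-screen gain are assumptions, not the study's implementation), the fast eye-gaze estimate can place a coarse cursor that small head rotations then refine:

    def refined_selection_point(gaze_point, head_delta_yaw_deg, head_delta_pitch_deg,
                                gain=0.02):
        """Combine coarse eye-gaze pointing with fine head-motion refinement:
        gaze_point is the (x, y) gaze cursor in normalised screen coordinates, and
        the head deltas are the small rotations (degrees) made since the gaze cursor
        was locked. The degrees-to-screen gain is an assumption of this sketch."""
        x, y = gaze_point
        x += gain * head_delta_yaw_deg          # yaw nudges the cursor horizontally
        y += gain * head_delta_pitch_deg        # pitch nudges it vertically
        return min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)

    # Example: gaze lands near a small target, then 3 deg of head yaw fine-tunes it.
    print(refined_selection_point((0.48, 0.52), head_delta_yaw_deg=3.0,
                                  head_delta_pitch_deg=-1.0))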

  • Empathy Glasses

    We have been developing a remote collaboration system with Empathy Glasses, a head-worn display designed to create a stronger feeling of empathy between remote collaborators. To do this, we combined a head-mounted see-through display with a facial expression recognition system, a heart rate sensor, and an eye tracker. The goal is to enable a remote person to see and hear from another person's perspective and to understand how they are feeling. In this way, the system shares non-verbal cues that could help increase empathy between remote collaborators.

  • Empathy in Virtual Reality

    Virtual Reality (VR) is an influential medium for triggering emotional changes in humans. However, there has been little research on making users of VR interfaces aware of their own emotional state or, in collaborative interfaces, one another's. In this project, through a series of system developments and user evaluations, we are investigating how physiological data such as heart rate, galvanic skin response, pupil dilation, and EEG can be used as a medium to communicate emotional states either to oneself (single-user interfaces) or to a collaborator (collaborative interfaces). The overarching goal is to make VR environments more empathetic and collaborators more aware of each other's emotional state.
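
    As a toy illustration of the kind of mapping involved (the resting baselines, ranges and equal weighting are invented for this sketch and are not the project's model), heart rate and galvanic skin response can be normalised against a resting baseline and combined into a single arousal cue that could be shared with a collaborator:

    def arousal_index(heart_rate_bpm, gsr_microsiemens,
                      baseline_hr=65.0, baseline_gsr=2.0,
                      hr_range=60.0, gsr_range=8.0):
        """Return a 0..1 arousal estimate from heart rate and galvanic skin response
        relative to a per-user resting baseline. The baselines, ranges and equal
        weighting are illustrative assumptions, not a validated model."""
        hr_term = max(0.0, min(1.0, (heart_rate_bpm - baseline_hr) / hr_range))
        gsr_term = max(0.0, min(1.0, (gsr_microsiemens - baseline_gsr) / gsr_range))
        return 0.5 * hr_term + 0.5 * gsr_term

    # Example: an elevated heart rate and skin response produce a raised arousal cue.
    print(arousal_index(95.0, 5.0))    # -> 0.4375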

  • Sharing Gesture and Gaze Cues for Enhancing MR Collaboration

    This research focuses on visualizing shared gaze cues, designing interfaces for collaborative experiences, and incorporating multimodal interaction techniques and physiological cues to support empathic Mixed Reality (MR) remote collaboration, using HoloLens 2, Vive Pro Eye, Meta Pro, HP Omnicept, Theta V 360 camera, Windows Speech Recognition, Leap Motion hand tracking, and Zephyr/Shimmer sensing technologies.

  • Using a Mobile Phone in VR

    Virtual Reality (VR) Head-Mounted Display (HMD) technology immerses a user in a computer-generated virtual environment. However, a VR HMD also blocks the user’s view of their physical surroundings, and so prevents them from using their mobile phone in a natural manner. In this project, we present a novel Augmented Virtuality (AV) interface that enables people to naturally interact with a mobile phone in real time in a virtual environment. The system allows the user to wear a VR HMD while seeing their 3D hands, captured by a depth sensor and rendered in different styles, and enables the user to operate a virtual mobile phone aligned with their real phone.
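
    The registration step at the heart of such an interface can be pictured as a simple transform composition (the 4x4 homogeneous matrices and coordinate conventions below are assumptions of this sketch, not the project's code): the real phone's tracked pose is re-expressed in VR world coordinates so the rendered phone model overlaps it.

    import numpy as np

    def virtual_phone_pose(tracker_to_world, phone_in_tracker):
        """Given the phone tracker's pose in VR world coordinates and the real
        phone's pose in tracker coordinates (both 4x4 homogeneous matrices), return
        the pose at which to render the virtual phone so it overlaps the real one."""
        return tracker_to_world @ phone_in_tracker

    # Example: the tracker origin sits 1 m in front of the VR world origin (+Z),
    # and the phone is 20 cm to the tracker's right (+X).
    tracker_to_world = np.eye(4); tracker_to_world[2, 3] = 1.0
    phone_in_tracker = np.eye(4); phone_in_tracker[0, 3] = 0.2
    print(virtual_phone_pose(tracker_to_world, phone_in_tracker)[:3, 3])   # [0.2 0.  1. ]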

  • Show Me Around

    This project introduces an immersive way to experience a conference call: using a 360° camera to live-stream a person’s surroundings to remote viewers. Viewers can freely look around the host’s video and gain a better understanding of the host’s surroundings. Viewers can also observe where the other participants are looking, helping them better understand the conversation and what people are paying attention to. In a user study of the system, people found it much more immersive than a traditional video conferencing call and reported feeling that they were transported to the remote location. Possible applications of this system include virtual tourism, education, industrial monitoring, entertainment, and more.

  • TBI Cafe

    Over 36,000 Kiwis experience Traumatic Brain Injury (TBI) per year. TBI patients often experience severe cognitive fatigue, which impairs their ability to cope well in public/social settings. Rehabilitation can involve taking people into social settings with a therapist so that they can relearn how to interact in these environments. However, this is a time-consuming, expensive and difficult process. To address this, we've created the TBI Cafe, a VR tool designed to help TBI patients cope with their injury and practice interacting in a cafe. In this application, people in VR practice ordering food and drink while interacting with virtual characters. Different types of distractions are introduced, such as a crying baby and loud conversations, which are designed to make the experience more stressful, and let the user practice managing stressful situations. Clinical trials with the software are currently underway.

  • Haptic Hongi

    This project explores whether XR technologies can help overcome intercultural discomfort, using Augmented Reality (AR) and haptic feedback to present a traditional Māori greeting. Using a HoloLens 2 AR headset, guests see a pre-recorded volumetric video of Tania, a Māori woman, who greets them in a re-imagined, contemporary first encounter between indigenous Māori and newcomers. The visitors, manuhiri, consider their response in the absence of usual social pressures. After a brief introduction, the virtual Tania slowly leans forward, inviting the visitor to ‘hongi’, a pressing together of noses and foreheads in a gesture symbolising “...peace and oneness of thought, purpose, desire, and hope”. This is felt as a haptic response delivered via a custom-made actuator built into the visitor's AR headset.

  • RadarHand

    RadarHand is a wrist-worn wearable system that uses radar sensing to detect on-skin proprioceptive hand gestures, making it easy to interact using simple finger motions. Radar has the advantages of being robust, private and small, of penetrating materials, and of requiring little computation. In this project, we first evaluated the proprioceptive nature of the back of the hand and found that the thumb is the most proprioceptive of all the finger joints, followed by the index finger, middle finger, ring finger and pinky finger. This helped determine the types of gestures most suitable for the system. Next, we trained deep-learning models for gesture classification. Out of 27 possible gesture groups, we achieved 92% accuracy for a generic set of seven gestures and 93% accuracy for a proprioceptive set of eight gestures. We also evaluated RadarHand's performance in real time and achieved an accuracy of between 74% and 91%, depending on whether the system or the user initiates the gesture first. This research could contribute to a new generation of radar-based interfaces that allow people to interact with computers in a more natural way.
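
    For flavour, a radar gesture classifier can be sketched as a small convolutional network over windows of radar feature frames (the layer sizes, input shape and class count below are assumptions, not the published RadarHand model):

    import torch
    import torch.nn as nn

    class RadarGestureNet(nn.Module):
        """Tiny 1D CNN that classifies a window of radar feature frames into one of
        n_gestures classes. All shapes and hyperparameters are illustrative."""
        def __init__(self, n_channels=32, n_gestures=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(64, n_gestures)

        def forward(self, x):                  # x: (batch, n_channels, n_frames)
            return self.classifier(self.features(x).squeeze(-1))

    # Example: one window of 40 radar frames, each described by 32 features.
    logits = RadarGestureNet()(torch.randn(1, 32, 40))
    print(logits.argmax(dim=1))                # predicted gesture index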

  • Asymmetric Interaction for VR sketching

    This project explores how tool-based asymmetric VR interfaces can be used by artists to create immersive artwork more effectively. Most VR interfaces use two input methods of the same type, such as two handheld controllers or two bare-hand gestures. However, it is common for artists to use different tools in each hand, such as a pencil and sketch pad. The research involves developing interaction methods that use a different input method in each hand, such as a stylus and gesture. Using this interface, artists can rapidly sketch their designs in VR. User studies are being conducted to compare asymmetric and symmetric interfaces to see which provides the best performance and which users prefer.

  • Detecting the Onset of Cybersickness using Physiological Cues

    In this project we explore whether the onset of cybersickness can be detected by considering multiple physiological signals simultaneously from users in VR. We are particularly interested in physiological cues that can be collected from the current generation of VR HMDs, such as eye gaze and heart rate. We are also interested in exploring other physiological cues that could be available in VR HMDs in the near future, such as GSR and EEG.
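
    One simple way to picture multi-signal onset detection (the features, baselines, weights and threshold below are purely illustrative, not the project's method) is to score how far windowed physiological averages deviate from a resting baseline and flag a possible onset when the combined score crosses a threshold:

    def cybersickness_onset(pupil_mm, heart_rate_bpm, blink_rate_hz,
                            baseline=(3.5, 70.0, 0.3),
                            weights=(0.4, 0.4, 0.2), threshold=0.5):
        """Combine the deviations of windowed physiological averages from resting
        baselines into a single score and flag a possible cybersickness onset when
        it crosses a threshold. All features, baselines, weights and the threshold
        are assumptions of this sketch."""
        deviations = (
            abs(pupil_mm - baseline[0]) / baseline[0],
            abs(heart_rate_bpm - baseline[1]) / baseline[1],
            abs(blink_rate_hz - baseline[2]) / baseline[2],
        )
        score = sum(w * d for w, d in zip(weights, deviations))
        return score >= threshold, round(score, 3)

    # Example: dilated pupils, elevated heart rate and suppressed blinking.
    print(cybersickness_onset(pupil_mm=5.5, heart_rate_bpm=105.0, blink_rate_hz=0.1))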

  • KiwiRescuer: A new interactive exhibition using an asymmetric interaction

    This research demo aims to address the problem of passive and dull museum exhibition experiences that many audiences still encounter. Current exhibition approaches are typically not very interactive and mostly provide information through a single sensory channel (e.g., visual, auditory, or haptic) in a one-to-one experience.

  • Tangible Augmented Reality for Learning Programming

    This project explores how tangible Augmented Reality (AR) can be used to teach computer programming. We have developed TARPLE, a Tangible Augmented Reality Programming Learning Environment, and are studying its efficacy for teaching text-based programming languages to novice learners. TARPLE uses physical blocks to represent programming functions and overlays virtual imagery on the blocks to show the programming code. Users can arrange the blocks by moving them with their hands, and see the AR content either through a Microsoft HoloLens 2 AR display or a handheld tablet. This research project expands upon the broader question of educational AR as well as the questions of tangible programming languages and tangible learning mediums. When supported by the embodied learning and natural interaction affordances of AR, physical objects may hold the key to developing fundamental knowledge of abstract, complex subjects, for younger learners in particular. They may also serve as a powerful future tool for advancing early computational thinking skills in novices. Evaluation of such learning environments addresses the hypothesis that hybrid tangible AR mediums can support an extended learning taxonomy both within the classroom and beyond.
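
    To picture the tangible-to-code mapping (the block IDs and code fragments below are invented for illustration and are not TARPLE's actual vocabulary), each physical block detected by the AR tracker can contribute a source fragment, and the ordered arrangement is assembled into a runnable program:

    # Hypothetical mapping from tangible block IDs to code fragments.
    BLOCK_SNIPPETS = {
        "loop3":   "for i in range(3):",
        "print_i": "    print(i)",
        "done":    "print('done')",
    }

    def blocks_to_program(detected_block_ids):
        """Turn the ordered list of block IDs reported by the AR tracker into a
        program string that can be overlaid on the blocks and executed."""
        return "\n".join(BLOCK_SNIPPETS[block_id] for block_id in detected_block_ids)

    program = blocks_to_program(["loop3", "print_i", "done"])
    print(program)
    exec(program)          # prints 0, 1, 2 and then 'done'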

  • Using 3D Spaces and 360 Video Content for Collaboration

    This project explores techniques to enhance collaborative experience in Mixed Reality environments using 3D reconstructions, 360 videos and 2D images. Previous research has shown that 360 video can provide a high resolution immersive visual space for collaboration, but little spatial information. Conversely, 3D scanned environments can provide high quality spatial cues, but with poor visual resolution. This project combines both approaches, enabling users to switch between a 3D view and a 360 video of a collaborative space. In this hybrid interface, users can pick the representation of space best suited to the needs of the collaborative task. The project seeks to provide design guidelines for collaboration systems to enable empathic collaboration by sharing visual cues and environments across time and space.

  • MPConnect: A Mixed Presence Mixed Reality System

    This project explores how a Mixed Presence Mixed Reality System can enhance remote collaboration. Collaborative Mixed Reality (MR) is a popular area of research, but most work has focused on one-to-one systems where either both collaborators are co-located or the collaborators are remote from one another. For example, remote users might collaborate in a shared Virtual Reality (VR) system, or a local worker might use an Augmented Reality (AR) display to connect with a remote expert to help them complete a task.

  • AR-based spatiotemporal interface and visualization for physical tasks

    The proposed study aims to help users solve physical tasks such as mechanical assembly or collaborative design efficiently by using augmented reality-based space-time visualization techniques. In particular, when disassembly and reassembly are required, 3D recording of past actions and playback visualization are used to help memorise the exact assembly order and the positions of objects in the task. This study proposes a novel method that employs 3D spatial information recording and augmented reality-based playback to effectively support these types of physical tasks.
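
    One way to picture the record-and-playback idea (the data structures below are assumptions of this sketch, not the proposed system) is to log timestamped object poses during disassembly and replay them in reverse order as AR guidance for reassembly:

    import bisect

    class PoseRecorder:
        """Log timestamped object poses during disassembly and replay them for AR
        guidance. The pose format (position + quaternion) is an assumption."""
        def __init__(self):
            self.log = []                                   # kept sorted by timestamp

        def record(self, timestamp, object_id, pose):
            bisect.insort(self.log, (timestamp, object_id, pose))

        def playback(self, reverse=True):
            """Yield recorded steps, reversed by default so the last part removed is
            the first one shown being put back during reassembly."""
            steps = reversed(self.log) if reverse else iter(self.log)
            for _timestamp, object_id, pose in steps:
                yield object_id, pose

    recorder = PoseRecorder()
    recorder.record(0.0, "cover", ((0.0, 0.0, 0.0), (0, 0, 0, 1)))
    recorder.record(4.2, "bolt_a", ((0.1, 0.0, 0.0), (0, 0, 0, 1)))
    for object_id, pose in recorder.playback():
        print("re-attach", object_id, "at", pose)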

Publications

  • Mixed Reality Collaboration through Sharing a Live Panorama
    Gun A. Lee, Theophilus Teo, Seungwon Kim, Mark Billinghurst

    Gun A. Lee, Theophilus Teo, Seungwon Kim, and Mark Billinghurst. 2017. Mixed reality collaboration through sharing a live panorama. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (SA '17). ACM, New York, NY, USA, Article 14, 4 pages. http://doi.acm.org/10.1145/3132787.3139203

    @inproceedings{Lee:2017:MRC:3132787.3139203,
    author = {Lee, Gun A. and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    title = {Mixed Reality Collaboration Through Sharing a Live Panorama},
    booktitle = {SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    series = {SA '17},
    year = {2017},
    isbn = {978-1-4503-5410-3},
    location = {Bangkok, Thailand},
    pages = {14:1--14:4},
    articleno = {14},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3132787.3139203},
    doi = {10.1145/3132787.3139203},
    acmid = {3139203},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {panorama, remote collaboration, shared experience},
    }
    One of the popular features on modern social networking platforms is sharing live 360 panorama video. This research investigates how to further improve shared live panorama based collaborative experiences by applying Mixed Reality (MR) technology. SharedSphere is a wearable MR remote collaboration system. In addition to sharing a live captured immersive panorama, SharedSphere enriches the collaboration through overlaying MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). User feedback collected through a preliminary user study indicated that sharing of live 360 panorama video was beneficial by providing a more immersive experience and supporting view independence. Users also felt that the view awareness cues were helpful for understanding the remote collaborator’s focus.
  • Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Jonathon D Hart, Barrett Ens, Robert W Lindeman, Bruce H Thomas, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 46, 13 pages. DOI: https://doi.org/10.1145/3173574.3173620

    @inproceedings{Piumsomboon:2018:MAA:3173574.3173620,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Hart, Jonathon D. and Ens, Barrett and Lindeman, Robert W. and Thomas, Bruce H. and Billinghurst, Mark},
    title = {Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {46:1--46:13},
    articleno = {46},
    numpages = {13},
    url = {http://doi.acm.org/10.1145/3173574.3173620},
    doi = {10.1145/3173574.3173620},
    acmid = {3173620},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, awareness, gaze, gesture, mixed reality, redirected, remote collaboration, remote embodiment, virtual reality},
    }
    We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
  • Pinpointing: Precise Head-and Eye-Based Target Selection for Augmented Reality
    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 81, 14 pages. DOI: https://doi.org/10.1145/3173574.3173655

    @inproceedings{Kyto:2018:PPH:3173574.3173655,
    author = {Kyt\"{o}, Mikko and Ens, Barrett and Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {81:1--81:14},
    articleno = {81},
    numpages = {14},
    url = {http://doi.acm.org/10.1145/3173574.3173655},
    doi = {10.1145/3173574.3173655},
    acmid = {3173655},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, eye tracking, gaze interaction, head-worn display, refinement techniques, target selection},
    }
    Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
  • Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications
    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, Mark Billinghurst

    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, and Mark Billinghurst. 2018. Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW120, 6 pages. DOI: https://doi.org/10.1145/3170427.3188513

    @inproceedings{Ens:2018:CEM:3170427.3188513,
    author = {Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    title = {Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW120:1--LBW120:6},
    articleno = {LBW120},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188513},
    doi = {10.1145/3170427.3188513},
    acmid = {3188513},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, gesture interaction, wearable computing},
    }
    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans toward evaluation of mixed-scale gesture design.
  • Levity: A Virtual Reality System that Responds to Cognitive Load
    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, Mark Billinghurst

    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, and Mark Billinghurst. 2018. Levity: A Virtual Reality System that Responds to Cognitive Load. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW610, 6 pages. DOI: https://doi.org/10.1145/3170427.3188479

    @inproceedings{Gerry:2018:LVR:3170427.3188479,
    author = {Gerry, Lynda and Ens, Barrett and Drogemuller, Adam and Thomas, Bruce and Billinghurst, Mark},
    title = {Levity: A Virtual Reality System That Responds to Cognitive Load},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW610:1--LBW610:6},
    articleno = {LBW610},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188479},
    doi = {10.1145/3170427.3188479},
    acmid = {3188479},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {brain computer interface, cognitive load, virtual reality, visual search task},
    }
    This paper presents the ongoing development of a proof-of-concept, adaptive system that uses a neurocognitive signal to facilitate efficient performance in a Virtual Reality visual search task. The Levity system measures and interactively adjusts the display of a visual array during a visual search task based on the user's level of cognitive load, measured with a 16-channel EEG device. Future developments will validate the system and evaluate its ability to improve search efficiency by detecting and adapting to a user's cognitive demands.
  • Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper D115, 4 pages. DOI: https://doi.org/10.1145/3170427.3186495

    @inproceedings{Piumsomboon:2018:SDM:3170427.3186495,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {D115:1--D115:4},
    articleno = {D115},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3170427.3186495},
    doi = {10.1145/3170427.3186495},
    acmid = {3186495},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, mixed reality, multiple, remote collaboration, remote embodiment, scale, virtual reality},
    }
    We present Snow Dome, a Mixed Reality (MR) remote collaboration application that supports a multi-scale interaction for a Virtual Reality (VR) user. We share a local Augmented Reality (AR) user's reconstructed space with a remote VR user who has an ability to scale themselves up into a giant or down into a miniature for different perspectives and interaction at that scale within the shared space.
  • Filtering Shared Social Data in AR
    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, Robert W Lindeman

    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, and Robert W. Lindeman. 2018. Filtering Shared Social Data in AR. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW100, 6 pages. DOI: https://doi.org/10.1145/3170427.3188609

    @inproceedings{Nassani:2018:FSS:3170427.3188609,
    author = {Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W.},
    title = {Filtering Shared Social Data in AR},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW100:1--LBW100:6},
    articleno = {LBW100},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188609},
    doi = {10.1145/3170427.3188609},
    acmid = {3188609},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {360 panoramas, augmented reality, live video stream, sharing social experiences, virtual avatars},
    }
    We describe a method and a prototype implementation for filtering shared social data (e.g., 360 video) in a wearable Augmented Reality (e.g., HoloLens) application. The data filtering is based on user-viewer relationships. For example, when sharing a 360 video, if the user has an intimate relationship with the viewer, then full fidelity (i.e., the 360 video) of the user's environment is visible, but if the two are strangers then only a snapshot image is shared. By varying the fidelity of the shared content, the viewer is able to focus more on the data shared by their close relations and differentiate it from other content. The approach also enables the sharing user to have more control over the fidelity of the content shared with their contacts, for privacy.
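
    A minimal sketch of the relationship-to-fidelity mapping described above (the tier names and content types are assumptions for illustration, not taken from the paper's implementation):

    # Hypothetical relationship tiers mapped to the fidelity of the shared content.
    FIDELITY_BY_RELATIONSHIP = {
        "intimate": "live_360_video",
        "friend":   "live_2d_video",
        "stranger": "snapshot_image",
    }

    def shared_content_fidelity(relationship):
        """Choose how much of the sharer's environment a viewer gets to see, based
        on the sharer-viewer relationship; unknown relationships fall back to the
        lowest-fidelity tier."""
        return FIDELITY_BY_RELATIONSHIP.get(relationship, "snapshot_image")

    print(shared_content_fidelity("intimate"))    # live_360_video
    print(shared_content_fidelity("stranger"))    # snapshot_image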
  • A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014
    Arindam Dey, Mark Billinghurst, Robert W Lindeman, J Swan

    Dey A, Billinghurst M, Lindeman RW and Swan JE II (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 5:37. doi: 10.3389/frobt.2018.00037

    @ARTICLE{10.3389/frobt.2018.00037,
    AUTHOR={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W. and Swan, J. Edward},
    TITLE={A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014},
    JOURNAL={Frontiers in Robotics and AI},
    VOLUME={5},
    PAGES={37},
    YEAR={2018},
    URL={https://www.frontiersin.org/article/10.3389/frobt.2018.00037},
    DOI={10.3389/frobt.2018.00037},
    ISSN={2296-9144},
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
  • He who hesitates is lost (... in thoughts over a robot)
    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, Victor Finomore

    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, and Victor Finomore. 2018. He who hesitates is lost (...in thoughts over a robot). In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). ACM, New York, NY, USA, Article 43, 6 pages. DOI: https://doi.org/10.1145/3183654.3183703

    @inproceedings{Wen:2018:HHL:3183654.3183703,
    author = {Wen, James and Stewart, Amanda and Billinghurst, Mark and Dey, Arindam and Tossell, Chad and Finomore, Victor},
    title = {He Who Hesitates is Lost (...In Thoughts over a Robot)},
    booktitle = {Proceedings of the Technology, Mind, and Society},
    series = {TechMindSociety '18},
    year = {2018},
    isbn = {978-1-4503-5420-2},
    location = {Washington, DC, USA},
    pages = {43:1--43:6},
    articleno = {43},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3183654.3183703},
    doi = {10.1145/3183654.3183703},
    acmid = {3183703},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {Anthropomorphism, Empathy, Human Machine Team, Robotics, User Study},
    }
    In a team, the strong bonds that can form between teammates are often seen as critical for reaching peak performance. This perspective may need to be reconsidered, however, if some team members are autonomous robots since establishing bonds with fundamentally inanimate and expendable objects may prove counterproductive. Previous work has measured empathic responses towards robots as singular events at the conclusion of experimental sessions. As relationships extend over long periods of time, sustained empathic behavior towards robots would be of interest. In order to measure user actions that may vary over time and are affected by empathy towards a robot teammate, we created the TEAMMATE simulation system. Our findings suggest that inducing empathy through a back story narrative can significantly change participant decisions in actions that may have consequences for a robot companion over time. The results of our study can have strong implications for the overall performance of human machine teams.
  • A hybrid 2D/3D user Interface for radiological diagnosis
    Veera Bhadra Harish Mandalika, Alexander I Chernoglazov, Mark Billinghurst, Christoph Bartneck, Michael A Hurrell, Niels de Ruiter, Anthony PH Butler, Philip H Butler

    Mandalika, V. B. H., Chernoglazov, A. I., Billinghurst, M., Bartneck, C., Hurrell, M. A., de Ruiter, N., Butler, A. P. H., & Butler, P. H. (2018). A hybrid 2D/3D user interface for radiological diagnosis. Journal of Digital Imaging, 31(1), 56-73.

    @Article{Mandalika2018,
    author="Mandalika, Veera Bhadra Harish
    and Chernoglazov, Alexander I.
    and Billinghurst, Mark
    and Bartneck, Christoph
    and Hurrell, Michael A.
    and Ruiter, Niels de
    and Butler, Anthony P. H.
    and Butler, Philip H.",
    title="A Hybrid 2D/3D User Interface for Radiological Diagnosis",
    journal="Journal of Digital Imaging",
    year="2018",
    month="Feb",
    day="01",
    volume="31",
    number="1",
    pages="56--73",
    abstract="This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.",
    issn="1618-727X",
    doi="10.1007/s10278-017-0002-6",
    url="https://doi.org/10.1007/s10278-017-0002-6"
    }
    This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.
  • The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration
    Seungwon Kim, Mark Billinghurst, Gun Lee

    Kim, S., Billinghurst, M., & Lee, G. (2018). The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration. Computer Supported Cooperative Work (CSCW), 1-39.

    @Article{Kim2018,
    author="Kim, Seungwon
    and Billinghurst, Mark
    and Lee, Gun",
    title="The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration",
    journal="Computer Supported Cooperative Work (CSCW)",
    year="2018",
    month="Jun",
    day="02",
    abstract="This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user's real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants' feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants' end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.",
    issn="1573-7551",
    doi="10.1007/s10606-018-9324-2",
    url="https://doi.org/10.1007/s10606-018-9324-2"
    }
    This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user’s real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants’ feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants’ end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.
  • Robust tracking through the design of high quality fiducial markers: An optimization tool for ARToolKit
    Dawar Khan, Sehat Ullah, Dong-Ming Yan, Ihsan Rabbi, Paul Richard, Thuong Hoang, Mark Billinghurst, Xiaopeng Zhang

    D. Khan et al., "Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit," in IEEE Access, vol. 6, pp. 22421-22433, 2018. doi: 10.1109/ACCESS.2018.2801028

    @ARTICLE{8287815,
    author={D. Khan and S. Ullah and D. M. Yan and I. Rabbi and P. Richard and T. Hoang and M. Billinghurst and X. Zhang},
    journal={IEEE Access},
    title={Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit},
    year={2018},
    volume={6},
    number={},
    pages={22421-22433},
    keywords={augmented reality;image recognition;object tracking;optical tracking;pose estimation;ARToolKit markers;B:W;augmented reality applications;camera tracking;edge sharpness;fiducial marker optimizer;high quality fiducial markers;optimization tool;pose estimation;robust tracking;specialized image processing algorithms;Cameras;Complexity theory;Fiducial markers;Libraries;Robustness;Tools;ARToolKit;Fiducial markers;augmented reality;marker tracking;robust recognition},
    doi={10.1109/ACCESS.2018.2801028},
    ISSN={},
    month={},}
    Fiducial markers are images or landmarks placed in the real environment, typically used for pose estimation and camera tracking. Reliable fiducials are strongly desired for many augmented reality (AR) applications, but currently there is no systematic method to design highly reliable fiducials. In this paper, we present the fiducial marker optimizer (FMO), a tool to optimize the design attributes of ARToolKit markers, including the black-to-white (B:W) ratio, edge sharpness, and information complexity, and to reduce inter-marker confusion. For these operations, FMO provides a user-friendly interface at the front end and specialized image processing algorithms at the back end. We tested manually designed markers and FMO-optimized markers in ARToolKit and found that the latter were more robust. FMO can thus be used to design highly reliable fiducials in an easy-to-use fashion, improving the performance of applications that use them.
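
    For example, one of the attributes FMO optimises, the black-to-white ratio of a marker pattern, can be measured from a binarised marker image as in the following sketch (numpy-based, with an assumed binarisation threshold; not FMO's own code):

    import numpy as np

    def black_to_white_ratio(marker_gray, threshold=128):
        """Return the black:white pixel ratio of a greyscale marker pattern (a uint8
        array), one of the attributes a marker optimiser can balance. The
        binarisation threshold is an assumption."""
        black = np.count_nonzero(marker_gray < threshold)
        white = marker_gray.size - black
        return black / max(white, 1)

    # Example: a toy 4x4 pattern with 6 black and 10 white cells gives a ratio of 0.6.
    pattern = np.array([[0, 255, 255, 0],
                        [255, 0, 255, 255],
                        [0, 255, 0, 255],
                        [255, 255, 0, 255]], dtype=np.uint8)
    print(black_to_white_ratio(pattern))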
  • User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors
    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst

    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst. User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 109-116. http://dx.doi.org/10.2312/egve.20171347

    @inproceedings {egve.20171347,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors}},
    author = {Lee, Gun A. and Rudhru, Omprakash and Park, Hye Sun and Kim, Ho Won and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171347}
    }
    This research investigates using user interface (UI) agents for guiding gesture based interaction with Augmented Virtual Mirrors. Compared to prior work in gesture interaction, where graphical symbols are used for guiding user interaction, we propose using UI agents. We explore two approaches for using UI agents: 1) using a UI agent as a delayed cursor and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results from the user studies show that UI agents are effective for guiding user interactions in a similar way as a traditional graphical user interface providing visual cues, while they are useful in emotionally engaging with users.
  • Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze
    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst

    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst. 2017. Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 197-204. http://dx.doi.org/10.2312/egve.20171359

    @inproceedings {egve.20171359,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze}},
    author = {Lee, Gun A. and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171359}
    }
    To improve remote collaboration in video conferencing systems, researchers have been investigating augmenting visual cues onto a shared live video stream. In such systems, a person wearing a head-mounted display (HMD) and camera can share her view of the surrounding real-world with a remote collaborator to receive assistance on a real-world task. While this concept of augmented video conferencing (AVC) has been actively investigated, there has been little research on how sharing gaze cues might affect the collaboration in video conferencing. This paper investigates how sharing gaze in both directions between a local worker and remote helper in an AVC system affects the collaboration and communication. Using a prototype AVC system that shares the eye gaze of both users, we conducted a user study that compares four conditions with different combinations of eye gaze sharing between the two users. The results showed that sharing each other’s gaze significantly improved collaboration and communication.
  • Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst

    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst. 2017. Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 36-39. https://doi.org/10.1109/3DUI.2017.7893315

    @INPROCEEDINGS{7893315,
    author={T. Piumsomboon and G. Lee and R. W. Lindeman and M. Billinghurst},
    booktitle={2017 IEEE Symposium on 3D User Interfaces (3DUI)},
    title={Exploring natural eye-gaze-based interaction for immersive virtual reality},
    year={2017},
    volume={},
    number={},
    pages={36-39},
    keywords={gaze tracking;gesture recognition;helmet mounted displays;virtual reality;Duo-Reticles;Nod and Roll;Radial Pursuit;cluttered-object selection;eye tracking technology;eye-gaze selection;head-gesture-based interaction;head-mounted display;immersive virtual reality;inertial reticles;natural eye movements;natural eye-gaze-based interaction;smooth pursuit;vestibulo-ocular reflex;Electronic mail;Erbium;Gaze tracking;Painting;Portable computers;Resists;Two dimensional displays;H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction styles},
    doi={10.1109/3DUI.2017.7893315},
    ISSN={},
    month={March},}
    Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
  • Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration
    Kunal Gupta, Gun A. Lee and Mark Billinghurst

    Kunal Gupta, Gun A. Lee and Mark Billinghurst. 2016. Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration. IEEE Transactions on Visualization and Computer Graphics Vol.22, No.11, pp.2413-2422. https://doi.org/10.1109/TVCG.2016.2593778

    @ARTICLE{7523400,
    author={K. Gupta and G. A. Lee and M. Billinghurst},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    title={Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration},
    year={2016},
    volume={22},
    number={11},
    pages={2413-2422},
    keywords={cameras;gaze tracking;helmet mounted displays;eye-tracking camera;gaze tracking;head-mounted camera;head-mounted display;remote collaboration;task space remote collaboration;virtual gaze information;virtual pointer;wearable interface;Cameras;Collaboration;Computers;Gaze tracking;Head;Prototypes;Teleconferencing;Computer conferencing;Computer-supported collaborative work;teleconferencing;videoconferencing},
    doi={10.1109/TVCG.2016.2593778},
    ISSN={1077-2626},
    month={Nov},}
    We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated with a formal user study. Comparing four conditions, (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER and (4) BOTH (both pointer and eye-tracker cues), we observed that task completion performance was best in the BOTH condition, significantly better than with the POINTER or EYE-TRACKER cues individually. The use of eye tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be addressed in further work.
  • A Remote Collaboration System with Empathy Glasses
    Youngho Lee, Katsutoshi Masai, Kai Kunze, Maki Sugimoto, Mark Billinghurst

    Y. Lee, K. Masai, K. Kunze, M. Sugimoto and M. Billinghurst. 2016. A Remote Collaboration System with Empathy Glasses. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW), Merida, pp. 342-343. http://doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112

    @INPROCEEDINGS{7836533,
    author = {Y. Lee and K. Masai and K. Kunze and M. Sugimoto and M. Billinghurst},
    booktitle = {2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW)},
    title = {A Remote Collaboration System with Empathy Glasses},
    year = {2017},
    volume = {00},
    number = {},
    pages = {342-343},
    keywords={Collaboration;Glass;Heart rate;Biomedical monitoring;Cameras;Hardware;Computers},
    doi = {10.1109/ISMAR-Adjunct.2016.0112},
    url = {doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112},
    ISSN = {},
    month={Sept.}
    }
    In this paper, we describe a demonstration of a remote collaboration system using Empathy Glasses. Using our system, a local worker can share a view of their environment with a remote helper, as well as their gaze, facial expressions, and physiological signals. The remote user can send back visual cues via a see-through head-mounted display to help the local worker perform better on a real-world task. The system also provides some indication of the remote user's facial expression using face tracking technology.
  • Empathy Glasses
    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst

    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst. 2016. Empathy Glasses. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). ACM, New York, NY, USA, 1257-1263. https://doi.org/10.1145/2851581.2892370

    @inproceedings{Masai:2016:EG:2851581.2892370,
    author = {Masai, Katsutoshi and Kunze, Kai and sugimoto, Maki and Billinghurst, Mark},
    title = {Empathy Glasses},
    booktitle = {Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
    series = {CHI EA '16},
    year = {2016},
    isbn = {978-1-4503-4082-3},
    location = {San Jose, California, USA},
    pages = {1257--1263},
    numpages = {7},
    url = {http://doi.acm.org/10.1145/2851581.2892370},
    doi = {10.1145/2851581.2892370},
    acmid = {2892370},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {emotional interface, facial expression, remote collaboration, wearables},
    }
    In this paper, we describe Empathy Glasses, a head worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies together: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a head worn camera, and (4) a see-through head mounted display, with a focus on remote collaboration. Using the system, a local user can send their information and a view of their environment to a remote helper who can send back visual cues on the local user's see-through display to help them perform a real world task. A pilot user study was conducted to explore how effective the Empathy Glasses were at supporting remote collaboration. We describe the implications that can be drawn from this user study.
  • Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration
    Theophilus Teo, Gun A. Lee, Mark Billinghurst, Matt Adcock

    Theophilus Teo, Gun A. Lee, Mark Billinghurst, and Matt Adcock. 2018. Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI '18). ACM, New York, NY, USA, 406-410. DOI: https://doi.org/10.1145/3292147.3292200

    @inproceedings{Teo:2018:HGV:3292147.3292200,
    author = {Teo, Theophilus and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {Hand Gestures and Visual Annotation in Live 360 Panorama-based Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    series = {OzCHI '18},
    year = {2018},
    isbn = {978-1-4503-6188-0},
    location = {Melbourne, Australia},
    pages = {406--410},
    numpages = {5},
    url = {http://doi.acm.org/10.1145/3292147.3292200},
    doi = {10.1145/3292147.3292200},
    acmid = {3292200},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {gesture communication, mixed reality, remote collaboration},
    }
    In this paper, we investigate hand gestures and visual annotation cues overlaid in a live 360 panorama-based Mixed Reality remote collaboration. The prototype system captures live 360 panorama video of the surroundings of a local user and shares it with another person in a remote location. The two users, wearing Augmented Reality or Virtual Reality head-mounted displays, can collaborate using augmented visual communication cues such as virtual hand gestures, ray pointing, and drawing annotations. Our preliminary user evaluation comparing these cues found that visual annotation cues (ray pointing and drawing annotation) help local users perform collaborative tasks faster and more easily, with fewer errors and better understanding, compared to using only virtual hand gestures.
  • Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects
    E Ibili, M Billinghurst

    Ibili, E., & Billinghurst, M. (2019). Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects. International Journal of Assessment Tools in Education, 6(3), 378-395.

    @article{ibili2019assessing,
    title={Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={International Journal of Assessment Tools in Education},
    volume={6},
    number={3},
    pages={378--395},
    year={2019}
    }
    In this study, the relationship between the usability of a mobile Augmented Reality (AR) tutorial system and cognitive load was examined. In this context, the relationships between perceived usefulness, perceived ease of use, and perceived natural interaction on the one hand and intrinsic, extraneous, and germane cognitive load on the other were investigated, along with the effect of gender on these relationships. The results show a strong relationship between perceived ease of use and extraneous load in males, and a strong relationship between perceived usefulness and intrinsic load in females. Both perceived usefulness and perceived ease of use had a strong relationship with germane cognitive load. Moreover, perceived natural interaction had a strong relationship with perceived usefulness in females and with perceived ease of use in males. This research provides significant clues to AR software developers and researchers to help reduce or control cognitive load in the development of AR-based instructional software.
  • Sharing hand gesture and sketch cues in remote collaboration
    W. Huang, S. Kim, M. Billinghurst, L. Alem

    Huang, W., Kim, S., Billinghurst, M., & Alem, L. (2019). Sharing hand gesture and sketch cues in remote collaboration. Journal of Visual Communication and Image Representation, 58, 428-438.

    @article{huang2019sharing,
    title={Sharing hand gesture and sketch cues in remote collaboration},
    author={Huang, Weidong and Kim, Seungwon and Billinghurst, Mark and Alem, Leila},
    journal={Journal of Visual Communication and Image Representation},
    volume={58},
    pages={428--438},
    year={2019},
    publisher={Elsevier}
    }
    Many systems have been developed to support remote guidance, where a local worker manipulates objects under the guidance of a remote expert helper. These systems typically use speech and visual cues between the local worker and the remote helper, where the visual cues could be pointers, hand gestures, or sketches. However, the effect of combining these visual cues in remote collaboration has not been fully explored. We conducted a user study comparing remote collaboration with an interface that combined hand gestures and sketching (the HandsInTouch interface) against one that used only hand gestures, on two tasks: Lego assembly and laptop repair. We found that (1) adding sketch cues improved task completion time, but only for the repair task, which involved complex object manipulation, and (2) using gestures and sketching together created a higher task load for the user.
  • 2.5 DHANDS: a gesture-based MR remote collaborative platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M., ... & Ji, H. (2019). 2.5 DHANDS: a gesture-based MR remote collaborative platform. The International Journal of Advanced Manufacturing Technology, 102(5-8), 1339-1353.

    @article{wang20192,
    title={2.5 DHANDS: a gesture-based MR remote collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Sun, Mengmeng and Chen, Yongxing and Lv, Hao and Ji, Hongyu},
    journal={The International Journal of Advanced Manufacturing Technology},
    volume={102},
    number={5-8},
    pages={1339--1353},
    year={2019},
    publisher={Springer}
    }
    Current remote collaborative systems in manufacturing are mainly based on video-conferencing technology. Their primary aim is to transmit manufacturing process knowledge between remote experts and local workers, but they do not provide the experts with the same hands-on experience as working on site in person. Mixed Reality (MR) and improving network performance have the capacity to enhance the experience and communication between collaborators in geographically distributed locations. In this paper, therefore, we propose a new gesture-based remote collaborative platform using MR technology that enables a remote expert to collaborate with local workers on physical tasks, and we concentrate on collaborative remote assembly as an illustrative use case. The key advantage compared to other remote collaborative MR interfaces is that it projects the remote expert's gestures into the real worksite to improve performance, co-presence awareness, and the user collaboration experience. We aim to study the effects of sharing the remote expert's gestures in remote collaboration using a projector-based MR system in manufacturing. Furthermore, we show the capabilities of our framework on a prototype consisting of a VR HMD, a Leap Motion, and a projector. The prototype system was evaluated with a pilot study comparing it against POINTER (adding AR annotations to the task-space view with a mouse), currently the most popular method for augmenting remote collaboration. The assessment considered performance, user satisfaction, and user-perceived collaboration quality in terms of interaction and cooperation. Our results demonstrate a clear difference between the POINTER and 2.5DHANDS interfaces in performance time. Additionally, the 2.5DHANDS interface was rated significantly higher than the POINTER interface in terms of awareness of the partner's attention, manipulation, self-confidence, and co-presence.
  • The effects of sharing awareness cues in collaborative mixed reality
    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2019). The effects of sharing awareness cues in collaborative mixed reality. Front. Rob, 6(5).

    @article{piumsomboon2019effects,
    title={The effects of sharing awareness cues in collaborative mixed reality},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    year={2019}
    }
    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
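
    As an illustration of how such a cue can be generated, the sketch below computes the corners of a collaborator's view frustum at a chosen distance from their tracked head pose, from which the frustum (or a head-gaze ray) could be drawn for the other user. The FOV angles, distance, and pose are illustrative, not taken from the paper.

    # Minimal sketch of a FoV-frustum awareness cue: compute the four corners of
    # the collaborator's view frustum at a chosen distance from their head pose.
    # FOV angles, distance, and the example pose are illustrative.
    import numpy as np

    def frustum_corners(head_pose: np.ndarray, h_fov_deg=90.0, v_fov_deg=60.0,
                        distance=2.0) -> np.ndarray:
        """head_pose: 4x4 head-to-world matrix; returns 4 world-space corners."""
        half_w = distance * np.tan(np.radians(h_fov_deg) / 2)
        half_h = distance * np.tan(np.radians(v_fov_deg) / 2)
        # Corners in the head frame (looking down -Z), homogeneous coordinates
        local = np.array([[ half_w,  half_h, -distance, 1],
                          [-half_w,  half_h, -distance, 1],
                          [-half_w, -half_h, -distance, 1],
                          [ half_w, -half_h, -distance, 1]], dtype=float)
        return (head_pose @ local.T).T[:, :3]

    head_pose = np.eye(4)
    head_pose[:3, 3] = [0.0, 1.6, 0.0]
    print(frustum_corners(head_pose))  # join the head position to these corners to draw the cue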
  • Revisiting collaboration through mixed reality: The evolution of groupware
    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M. (2019). Revisiting collaboration through mixed reality: The evolution of groupware. International Journal of Human-Computer Studies.

    @article{ens2019revisiting,
    title={Revisiting collaboration through mixed reality: The evolution of groupware},
    author={Ens, Barrett and Lanir, Joel and Tang, Anthony and Bateman, Scott and Lee, Gun and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Collaborative Mixed Reality (MR) systems are at a critical point in time as they are soon to become more commonplace. However, MR technology has only recently matured to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus on creating the enabling technology. In parallel, but largely independently, the field of Computer Supported Cooperative Work (CSCW) has focused on the fundamental concerns that underlie human communication and collaboration over the past 30-plus years. Since MR research is now on the brink of moving into the real world, we reflect on three decades of collaborative MR research and try to reconcile it with existing theory from CSCW, to help position MR researchers to pursue fruitful directions for their work. To do this, we review the history of collaborative MR systems, investigating how the common taxonomies and frameworks in CSCW and MR research can be applied to existing work on collaborative MR systems, exploring where they have fallen behind, and look for new ways to describe current trends. Through identifying emergent trends, we suggest future directions for MR, and also find where CSCW researchers can explore new theory that more fully represents the future of working, playing and being with others.
  • WARPING DEIXIS: Distorting Gestures to Enhance Collaboration
    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J.

    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J. (2019, April). WARPING DEIXIS: Distorting Gestures to Enhance Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 608). ACM.

    @inproceedings{sousa2019warping,
    title={WARPING DEIXIS: Distorting Gestures to Enhance Collaboration},
    author={Sousa, Maur{\'\i}cio and dos Anjos, Rafael Kufner and Mendes, Daniel and Billinghurst, Mark and Jorge, Joaquim},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={608},
    year={2019},
    organization={ACM}
    }
    When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
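
    The geometry behind this mismatch can be sketched as follows: the intended referent lies roughly on the eye-to-finger ray, while observers extrapolate an arm-to-finger ray, so a warped representation can move the rendered finger onto the shoulder-to-referent ray. The sketch below only illustrates that idea; it is not the authors' exact warping method, and the scene geometry is invented.

    # Simplified geometric sketch of the pointing mismatch: the intended referent
    # sits on the eye->finger ray, but observers extrapolate the shoulder->finger
    # ray. The warp here moves the rendered finger onto the shoulder->referent
    # ray, preserving arm length. Illustration only, not the authors' method.
    import numpy as np

    def referent_on_wall(eye, finger, wall_z=-4.0):
        """Intended referent: intersection of the eye->finger ray with a wall
        plane at z = wall_z (hypothetical scene geometry)."""
        d = finger - eye
        t = (wall_z - eye[2]) / d[2]
        return eye + t * d

    def warped_finger(shoulder, finger, referent):
        """Move the rendered finger onto the shoulder->referent ray, keeping arm length."""
        arm_length = np.linalg.norm(finger - shoulder)
        direction = (referent - shoulder) / np.linalg.norm(referent - shoulder)
        return shoulder + arm_length * direction

    eye      = np.array([0.0, 1.6, 0.0])
    shoulder = np.array([0.2, 1.4, 0.0])
    finger   = np.array([0.5, 1.5, -0.6])
    referent = referent_on_wall(eye, finger)
    print(warped_finger(shoulder, finger, referent))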
  • Getting your game on: Using virtual reality to improve real table tennis skills
    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T.

    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T. (2019). Getting your game on: Using virtual reality to improve real table tennis skills. PloS one, 14(9).

    @article{michalski2019getting,
    title={Getting your game on: Using virtual reality to improve real table tennis skills},
    author={Michalski, Stefan Carlo and Szpak, Ancret and Saredakis, Dimitrios and Ross, Tyler James and Billinghurst, Mark and Loetscher, Tobias},
    journal={PloS one},
    volume={14},
    number={9},
    year={2019},
    publisher={Public Library of Science}
    }
    Background: A key assumption of VR training is that the learned skills and experiences transfer to the real world. Yet, in certain application areas, such as VR sports training, the research testing this assumption is sparse. Design: Real-world table tennis performance was assessed using a mixed-model analysis of variance. The analysis comprised a between-subjects factor (VR training group vs control group) and a within-subjects factor (pre- and post-training). Method: Fifty-seven participants (23 females) were assigned to either a VR training group (n = 29) or a no-training control group (n = 28). During VR training, participants were immersed in competitive table tennis matches against an artificial intelligence opponent. An expert table tennis coach evaluated participants on real-world table tennis playing before and after the training phase. Blinded to participants' group assignment, the expert assessed participants' backhand, forehand and serving on quantitative aspects (e.g. count of rallies without errors) and quality of skill aspects (e.g. technique and consistency). Results: VR training significantly improved participants' real-world table tennis performance compared to the no-training control group in both quantitative (p < .001, Cohen's d = 1.08) and quality of skill assessments (p < .001, Cohen's d = 1.10). Conclusions: This study adds to a sparse yet expanding literature, demonstrating real-world skill transfer from Virtual Reality in an athletic task.
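
    For readers unfamiliar with this design, an analysis of this shape (one between-subjects factor, one within-subjects factor, plus Cohen's d for the group difference) can be sketched with standard statistics tooling. The following uses the pingouin library on synthetic data; it is not the authors' analysis script, and all numbers are invented.

    # Sketch of a mixed-model ANOVA like the one described above, on synthetic
    # long-format data (columns: subject, group, time, score). Illustrative only.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    rows = []
    for subj in range(6):
        group = "VR" if subj < 3 else "control"
        for time_, bump in (("pre", 0.0), ("post", 3.0 if group == "VR" else 0.5)):
            rows.append({"subject": subj, "group": group, "time": time_,
                         "score": 10 + bump + rng.normal(0, 1)})
    df = pd.DataFrame(rows)

    # Between-subjects factor: group; within-subjects factor: time
    aov = pg.mixed_anova(data=df, dv="score", within="time",
                         between="group", subject="subject")
    print(aov)

    # Effect size (Cohen's d) for the post-training difference between groups
    post = df[df["time"] == "post"]
    d = pg.compute_effsize(post.loc[post["group"] == "VR", "score"],
                           post.loc[post["group"] == "control", "score"],
                           eftype="cohen")
    print(f"Cohen's d (post-training, VR vs control) = {d:.2f}")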
  • On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M. (2019, April). On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 228). ACM.

    @inproceedings{piumsomboon2019shoulder,
    title={On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction},
    author={Piumsomboon, Thammathip and Lee, Gun A and Irlitti, Andrew and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={228},
    year={2019},
    organization={ACM}
    }
    We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
  • Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M.

    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M. (2019, April). Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 173). ACM.

    @inproceedings{kim2019evaluating,
    title={Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration},
    author={Kim, Seungwon and Lee, Gun and Huang, Weidong and Kim, Hayun and Woo, Woontack and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={173},
    year={2019},
    organization={ACM}
    }
    Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and reported significantly higher usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
  • Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
    Teo, T., Lawrence, L., Lee, G. A., Billinghurst, M., & Adcock, M.

    Teo, T., Lawrence, L., Lee, G. A., Billinghurst, M., & Adcock, M. (2019, April). Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 201). ACM.

    @inproceedings{teo2019mixed,
    title={Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction},
    author={Teo, Theophilus and Lawrence, Louise and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={201},
    year={2019},
    organization={ACM}
    }
    Remote collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people in different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, preserving the benefits of both approaches while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks and felt higher social presence in 360, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.
  • Using Augmented Reality with Speech Input for Non-Native Children's Language Learning
    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M.

    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M. (2019). Using Augmented Reality with Speech Input for Non-Native Children's Language Learning. International Journal of Human-Computer Studies.

    @article{dalim2019using,
    title={Using Augmented Reality with Speech Input for Non-Native Children's Language Learning},
    author={Dalim, Che Samihah Che and Sunar, Mohd Shahrizal and Dey, Arindam and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Augmented Reality (AR) offers an enhanced learning environment that could potentially influence children's experience and knowledge gain during the language learning process. Teaching English or other foreign languages to children with a different native language can be difficult and requires an effective strategy to avoid boredom and detachment from the learning activities. With the growing number of AR education applications and the increasing pervasiveness of speech recognition, we are keen to understand how these technologies benefit non-native young children in learning English. In this paper, we explore children's experience in terms of knowledge gain and enjoyment when learning through a combination of AR and speech recognition technologies. We developed a prototype AR interface called TeachAR, and ran two experiments to investigate how effective the combination of AR and speech recognition was for learning 1) English terms for colors and shapes, and 2) English words for spatial relationships. We found encouraging results for a novel teaching strategy using these two technologies: compared with a traditional strategy, it not only increased knowledge gain and enjoyment, but also enabled young children to finish certain tasks faster and more easily.
  • Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System
    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J.

    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J. (2019, July). Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System. In 2019 23rd International Conference in Information Visualization–Part II (pp. 86-91). IEEE.

    @inproceedings{kim2019sharing,
    title={Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Gun and Norman, Mitchell and Huang, Weidong and He, Jian},
    booktitle={2019 23rd International Conference in Information Visualization--Part II},
    pages={86--91},
    year={2019},
    organization={IEEE}
    }
    In this paper, we explore the effect of showing a remote partner close to the user's gaze point in a teleconferencing system. We implemented a gaze-following function in a teleconferencing system and investigated whether this improves the user's feeling of emotional interdependence. We developed a prototype system that shows a remote partner close to the user's current gaze point and conducted a user study comparing it to a condition displaying the partner fixed in the corner of the screen. Our results showed that showing a partner close to the gaze point helped users feel a higher level of emotional interdependence. In addition, we compared the effect of our method between small and large displays, but there was no significant difference in the users' feeling of emotional interdependence even though the large display was preferred.
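
    Gaze-following behaviour of this kind can be approximated by moving the partner's window toward the current gaze point with a little smoothing so it does not jitter. The sketch below is a minimal 2D illustration with invented coordinates, offset, and smoothing factor; it is not the paper's implementation.

    # Minimal 2D sketch of a gaze-following partner window: each frame, move the
    # window part of the way toward the current gaze point (exponential
    # smoothing), with a small offset so the window sits near, not on, the gaze.
    # Coordinates, offset, and smoothing factor are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class PartnerWindow:
        x: float = 0.0
        y: float = 0.0
        offset_x: float = 120.0   # keep the window just beside the gaze point
        offset_y: float = 0.0
        alpha: float = 0.2        # smoothing factor per frame (0..1)

        def update(self, gaze_x: float, gaze_y: float):
            target_x = gaze_x + self.offset_x
            target_y = gaze_y + self.offset_y
            self.x += self.alpha * (target_x - self.x)
            self.y += self.alpha * (target_y - self.y)
            return self.x, self.y

    win = PartnerWindow()
    print(win.update(640.0, 360.0))  # gaze at the centre of a 1280x720 display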
  • Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration
    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M.

    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M. (2019, March). Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1187-1188). IEEE.

    @inproceedings{teo2019supporting,
    title={Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration},
    author={Teo, Theophilus and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1187--1188},
    year={2019},
    organization={IEEE}
    }
    We propose enhancing live 360 panorama-based Mixed Reality (MR) remote collaboration by supporting visual annotation cues. Prior work on live 360 panorama-based collaboration used MR visualization to overlay visual cues, such as view frames and virtual hands, yet these cues were not registered onto the shared physical workspace and so were limited in accuracy when pointing at or marking objects. Our prototype system uses the spatial mapping and tracking features of an Augmented Reality head-mounted display to show visual annotation cues accurately registered onto the physical environment. We describe the design and implementation details of our prototype system, and discuss how such features could help improve MR remote collaboration.
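
    Registering an annotation onto the physical environment essentially means converting a point picked in the moving head/camera frame into a fixed world frame using the HMD's tracked pose, so the mark stays put as the head moves. A minimal homogeneous-transform sketch follows; the pose values are hypothetical and this is not the prototype's code.

    # Minimal sketch of anchoring an annotation in the world frame: a point picked
    # in the head/camera frame is transformed by the tracked head pose (a 4x4
    # matrix) once, then rendered from the fixed world coordinates thereafter.
    # The example pose is hypothetical.
    import numpy as np

    def to_world(point_head: np.ndarray, head_pose_world: np.ndarray) -> np.ndarray:
        """point_head: (3,) point in head frame; head_pose_world: 4x4 head-to-world."""
        p = np.append(point_head, 1.0)          # homogeneous coordinates
        return (head_pose_world @ p)[:3]

    # Example: head 1.6 m above the floor, annotation picked 0.5 m in front of it
    head_pose = np.eye(4)
    head_pose[:3, 3] = [0.0, 1.6, 0.0]
    annotation_world = to_world(np.array([0.0, 0.0, -0.5]), head_pose)
    print(annotation_world)   # stays fixed even as the head pose changes later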
  • Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality
    Dey, A., Chatburn, A., & Billinghurst, M.

    Dey, A., Chatburn, A., & Billinghurst, M. (2019, March). Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 220-226). IEEE.

    @inproceedings{dey2019exploration,
    title={Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality},
    author={Dey, Arindam and Chatburn, Alex and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={220--226},
    year={2019},
    organization={IEEE}
    }
    Virtual Reality (VR) is effective in various training scenarios across multiple domains, such as education, health and defense. However, most of those applications are not adaptive to the real-time cognitive or subjectively experienced load placed on the trainee. In this paper, we explore a cognitively adaptive training system based on real-time measurement of task-related alpha activity in the brain. This measurement was made by a 32-channel mobile Electroencephalography (EEG) system and was used to adapt the task difficulty to a level that challenged our participants and thus, theoretically, induced the best performance gains from training. Our system required participants to select target objects in VR, and the complexity of the task adapted to the alpha activity in the brain. A total of 14 participants undertook our training and completed 20 levels of increasing complexity. Our study identified significant differences in brain activity in response to increasing levels of task complexity, but response time did not alter as a function of task difficulty. Collectively, we interpret this to indicate the brain's ability to compensate for higher task load without affecting behaviourally measured visuomotor performance.
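
    As a rough illustration of an adaptation loop of this kind, alpha-band power can be estimated from a short EEG window and compared against a calibration baseline to decide whether to raise or lower task difficulty. The sketch below is illustrative only, with an assumed sampling rate and invented thresholds; it is not the authors' implementation.

    # Illustrative sketch: estimate alpha-band (8-12 Hz) power from a short EEG
    # window and adapt task difficulty relative to a calibration baseline.
    # Sampling rate, window lengths, and thresholds are hypothetical.
    import numpy as np
    from scipy.signal import welch

    FS = 250  # Hz, assumed sampling rate

    def alpha_power(window: np.ndarray) -> float:
        """Mean power spectral density in the 8-12 Hz alpha band."""
        freqs, psd = welch(window, fs=FS, nperseg=FS * 2)
        band = (freqs >= 8) & (freqs <= 12)
        return float(np.mean(psd[band]))

    def next_difficulty(level: int, window: np.ndarray, baseline: float) -> int:
        """Illustrative rule: ease the task when alpha drops well below baseline
        (suggesting overload), raise it when alpha sits well above baseline
        (suggesting spare capacity), otherwise hold the current level."""
        ratio = alpha_power(window) / baseline
        if ratio < 0.8:
            return max(1, level - 1)
        if ratio > 1.2:
            return level + 1
        return level

    rng = np.random.default_rng(0)
    baseline = alpha_power(rng.normal(size=FS * 10))   # fake 10 s calibration signal
    print(next_difficulty(5, rng.normal(size=FS * 4), baseline))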
  • Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation
    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M.

    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M. (2019, August). Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation. In Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY. Audio Engineering Society.

    @inproceedings{barde2019binaural,
    title={Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation},
    author={Barde, Amit and Lindeman, Robert W and Lee, Gun and Billinghurst, Mark},
    booktitle={Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY},
    year={2019},
    organization={Audio Engineering Society}
    }
    Binaural spatialization over a bone conduction headset in the vertical plane was investigated using inexpensive and commercially available hardware and software components. The aim of the study was to assess the acuity of binaurally spatialized presentations in the vertical plane. The level of externalization achievable was also explored. Results demonstrate good correlation with established perceptual traits for headphone-based auditory localization using non-individualized HRTFs, although localization accuracy appears to be significantly worse. A distinct pattern of compressed localization judgments was observed, with participants tending to localize the presented stimulus within an approximately 20° range on either side of the inter-aural plane. Localization error was approximately 21° in the vertical plane. Participants reported a good level of externalization. We have demonstrated that an acceptable level of spatial resolution and externalization is achievable using an inexpensive bone conduction headset and software components.
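
    The elevation error quoted above is the kind of summary statistic obtained by comparing judged and presented elevation angles; a trivial sketch with invented angles follows, which also shows the compression pattern described.

    # Trivial sketch: mean absolute elevation error from presented vs judged
    # elevation angles in degrees (values invented for illustration).
    import numpy as np

    presented = np.array([-30, -10, 0, 10, 30, 45], dtype=float)
    judged    = np.array([-18, -8, 2, 9, 17, 20], dtype=float)   # compressed toward 0

    errors = np.abs(judged - presented)
    print(f"mean absolute elevation error = {errors.mean():.1f} deg")
    print(f"judged range = {judged.min():.0f} to {judged.max():.0f} deg")  # shows compression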
  • Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., & Chen, Y.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., ... & Chen, Y. (2019, March). Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1219-1220). IEEE.

    @inproceedings{wang2019head,
    title={Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Wang, Shuxia and Zhang, Xiaokun and Du, Jiaxiang and Chen, Yongxing},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1219--1220},
    year={2019},
    organization={IEEE}
    }
    This paper investigates how two different gaze visualizations, the head pointer (HP) and eye gaze (EG), affect table-sized physical tasks in Mixed Reality (MR) remote collaboration. We developed a remote collaborative MR platform that supports sharing of the remote expert's HP and EG. The prototype was evaluated with a user study comparing the two conditions, sharing HP versus EG, with respect to their effectiveness in the performance and quality of cooperation. There was a statistically significant difference between the two conditions in performance time, and HP proved to be a good proxy for EG in remote collaboration.
  • The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.
    Ibili, E., & Billinghurst, M.

    Ibili, E., & Billinghurst, M. (2019). The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors. Malaysian Online Journal of Educational Technology, 7(3), 39-56.

    @article{ibili2019relationship,
    title={The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={Malaysian Online Journal of Educational Technology},
    volume={7},
    number={3},
    pages={39--56},
    year={2019},
    publisher={ERIC}
    }
    In this study, the relationship between self-esteem and loneliness in social networks was investigated among students in a guidance and psychological counselling teaching department. The study was conducted during the 2017-2018 academic year with 312 trainee school counsellors from Turkey. Data were collected using the Social Network Loneliness Scale and the Self-esteem Scale, and a statistical analysis of the data was conducted. We found a negative relationship between self-esteem and loneliness as experienced in social networks, although neither differs according to sex, age or class level. It was also found that those who use the Internet for communication purposes have high levels of both loneliness and self-esteem in social networks. While self-esteem levels among users of the Internet are high, those who use it to read about or watch the news have high levels of loneliness. No relationship was found between self-esteem and social network loneliness levels among those who use the Internet for playing games. Regular sporting habits were found to have a positive effect on self-esteem, but no effect on the level of loneliness in social networks.
  • A comprehensive survey of AR/MR-based co-design in manufacturing
    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., Zhang, X.

    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., ... & Zhang, X. (2019). A comprehensive survey of AR/MR-based co-design in manufacturing. Engineering with Computers, 1-24.

    @article{wang2019comprehensive,
    title={A comprehensive survey of AR/MR-based co-design in manufacturing},
    author={Wang, Peng and Zhang, Shusheng and Billinghurst, Mark and Bai, Xiaoliang and He, Weiping and Wang, Shuxia and Sun, Mengmeng and Zhang, Xu},
    journal={Engineering with Computers},
    pages={1--24},
    year={2019},
    publisher={Springer}
    }
    For more than two decades, Augmented Reality (AR) and Mixed Reality (MR) have received increasing attention from researchers and practitioners in the manufacturing community, because they have applications in many fields, such as product design, training, maintenance, assembly, and other manufacturing operations. However, to the best of our knowledge, there has been no comprehensive review of AR-based co-design in manufacturing. This paper presents a comprehensive survey of existing research, projects, and technical characteristics between 1990 and 2017 in the domain of co-design based on AR technology. More than 90% of these papers were published between 2000 and 2017, and these recent relevant works are discussed at length. The paper provides a comprehensive academic roadmap and useful insight into the state of the art of AR-based co-design systems and developments in manufacturing for future researchers. This work will be useful to researchers who plan to utilize AR as a tool for design research.
  • Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system
    Ibili, E., Resnyansky, D., & Billinghurst, M.

    Ibili, E., Resnyansky, D., & Billinghurst, M. (2019). Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system. Education and Information Technologies, 1-23.

    @article{ibili2019applying,
    title={Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system},
    author={Ibili, Emin and Resnyansky, Dmitry and Billinghurst, Mark},
    journal={Education and Information Technologies},
    pages={1--23},
    year={2019},
    publisher={Springer}
    }
    This paper examines mathematics teachers' level of acceptance and intention to use the Augmented Reality Geometry Tutorial System (ARGTS), a mobile Augmented Reality (AR) application developed to enhance students' 3D geometric thinking skills. ARGTS was shared with mathematics teachers, who were then surveyed using the Technology Acceptance Model (TAM) to understand their acceptance of the technology. We also examined the external variables of Anxiety, Social Norms and Satisfaction, as well as the effects of the teacher's gender, graduate degree status and number of years of teaching experience on the subscales of the TAM. We found that Perceived Ease of Use (PEU) had a direct effect on Perceived Usefulness (PU), in accordance with the TAM. Both variables together affect Satisfaction (SF); however, PEU had no direct effect on Attitude (AT). In addition, while Social Norms (SN) had a direct effect on PU and PEU, there was no direct effect on Behavioural Intention (BI). Anxiety (ANX) had a direct effect on PEU, but no effect on PU and SF. While SF had a direct effect on PEU, no direct effect was found on BI. We explain how the results of this study could help improve the understanding of AR acceptance by teachers and provide important guidelines for AR researchers, developers and practitioners.
  • An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills
    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M.

    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M. (2019). An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills. International Journal of Mathematical Education in Science and Technology, 1-23.

    @article{ibili2019assessment,
    title={An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills},
    author={{\.I}bili, Emin and {\c{C}}at, Mevl{\"u}t and Resnyansky, Dmitry and {\c{S}}ahin, Sami and Billinghurst, Mark},
    journal={International Journal of Mathematical Education in Science and Technology},
    pages={1--23},
    year={2019},
    publisher={Taylor \& Francis}
    }
    The aim of this research was to examine the effect of Augmented Reality (AR) supported geometry teaching on students' 3D thinking skills. The research consisted of three steps: (i) developing a 3D thinking ability scale, (ii) designing and developing an AR Geometry Tutorial System (ARGTS), and (iii) implementing and assessing geometry teaching supported with ARGTS. The 3D thinking ability scale was developed and tested with experimental and control groups as a pre- and post-test evaluation, and ARGTS together with AR teaching materials and environments was developed to enhance 3D thinking skills. A user study with these materials found that geometry teaching supported by ARGTS significantly increased the students' 3D thinking skills. The increase in average scores for the Structuring 3D arrays of cubes and Calculation of the volume and the area of solids subfactors was not statistically significant (p > 0.05). For the other subfactors of the 3D geometric thinking scale, a statistically significant difference was found in favour of the experimental group between pre-test and post-test scores (p < 0.05). The biggest difference was found in the ability to recognize and create 3D shapes (p < 0.01). The results of this research are particularly important for identifying individual differences in the 3D thinking skills of secondary school students and creating personalized dynamic intelligent learning environments.
  • The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training.
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1-2). IEEE.

    @inproceedings{clifford2018effect,
    title={The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1--2},
    year={2018},
    organization={IEEE}
    }
    Situation Awareness (SA) is an essential skill in Air Attack Supervision (AAS) for aerial wildfire firefighting. The display types used for Virtual Reality Training Systems (VRTS) afford different visual SA depending on the Field of View (FoV), as well as the sense of presence users can obtain in the virtual environment. We conducted a study with 36 participants to evaluate SA acquisition with three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD) and a 270° cylindrical simulation projection display called the SimPit. We found a significant difference between the HMD and the HDTV, as well as between the SimPit and the HDTV, for the three levels of SA.
  • Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017)
    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F.

    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F. (2018). Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017). IEEE transactions on visualization and computer graphics, 24(11), 2947-2962.

    @article{kim2018revisiting,
    title={Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008--2017)},
    author={Kim, Kangsoo and Billinghurst, Mark and Bruder, Gerd and Duh, Henry Been-Lirn and Welch, Gregory F},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2947--2962},
    year={2018},
    publisher={IEEE}
    }
    In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends associated with that time period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period 2008-2017, in the context of the first ten years. The number of papers on different research topics and their impact by citations were analyzed during the review, revealing a sharp increase in AR evaluation and rendering research. Based on this review we offer some observations related to potential future research areas and trends, which could be helpful to AR researchers and industry members looking ahead.
  • Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment
    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H.

    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H. (2018). Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment. IEEE transactions on visualization and computer graphics, 24(11), 2917-2926.

    @article{reichherzer2018narrative,
    title={Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment},
    author={Reichherzer, Carolin and Cunningham, Andrew and Walsh, James and Kohler, Mark and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2917--2926},
    year={2018},
    publisher={IEEE}
    }
    This paper showcases one way virtual reconstruction can be used in a courtroom. The results of a pilot study on narrative and spatial memory are presented in the context of viewing real and virtual copies of a simulated crime scene. Based on current court procedures, three different viewing options were compared: photographs, a real-life visit, and a 3D virtual reconstruction of the scene viewed in a Virtual Reality headset. Participants were also given a written narrative that included the spatial locations of stolen goods and were measured on their ability to recall and understand the spatial relationships of those stolen items. The results suggest that Virtual Reality is more reliable for spatial memory than photographs, and that Virtual Reality provides a compromise for when physical viewing of a crime scene is not possible. We conclude that Virtual Reality is a promising medium for the court.
  • A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks
    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H.

    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H. (2018). A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks. IEEE transactions on visualization and computer graphics, 24(11), 2846-2856.

    @article{volmer2018comparison,
    title={A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks},
    author={Volmer, Benjamin and Baumeister, James and Von Itzstein, Stewart and Bornkessel-Schlesewsky, Ina and Schlesewsky, Matthias and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2846--2856},
    year={2018},
    publisher={IEEE}
    }
    Previous research has demonstrated that Augmented Reality can reduce a user's task response time and mental effort when completing a procedural task. This paper investigates techniques to improve user performance and reduce mental effort by providing projector-based Spatial Augmented Reality predictive cues for future responses. The objective of the two experiments conducted in this study was to isolate the performance and mental effort differences from several different annotation cueing techniques for simple (Experiment 1) and complex (Experiment 2) button-pressing tasks. Comporting with existing cognitive neuroscience literature on prediction, attentional orienting, and interference, we hypothesized that for both simple procedural tasks and complex search-based tasks, having a visual cue guiding to the next task's location would positively impact performance relative to a baseline, no-cue condition. Additionally, we predicted that direction-based cues would provide a more significant positive impact than target-based cues. The results indicated that providing a line to the next task was the most effective technique for improving the users' task time and mental effort in both the simple and complex tasks.
  • Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface
    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M. (2018). Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface. IEEE transactions on visualization and computer graphics, 24(11), 2974-2982.

    @article{piumsomboon2018superman,
    title={Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface},
    author={Piumsomboon, Thammathip and Lee, Gun A and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2974--2982},
    year={2018},
    publisher={IEEE}
    }
    The advancements in Mixed Reality (MR), Unmanned Aerial Vehicle, and multi-scale collaborative virtual environments have led to new interface opportunities for remote collaboration. This paper explores a novel concept of flying telepresence for multi-scale mixed reality remote collaboration. This work could enable remote collaboration at a larger scale such as building construction. We conducted a user study with three experiments. The first experiment compared two interfaces, static and dynamic IPD, on simulator sickness and body size perception. The second experiment tested the user perception of a virtual object size under three levels of IPD and movement gain manipulation with a fixed eye height in a virtual environment having reduced or rich visual cues. Our last experiment investigated the participant’s body size perception for two levels of manipulation of the IPDs and heights using stereo video footage to simulate a flying telepresence experience. The studies found that manipulating IPDs and eye height influenced the user’s size perception. We present our findings and share the recommendations for designing a multi-scale MR flying telepresence interface.
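
    The manipulations studied here amount to scaling the virtual camera rig: scaling the IPD, eye height, and movement gain together by one factor makes the user read the world as miniature (feeling like a giant), while a factor below one does the opposite. The sketch below illustrates that idea with invented baseline values; it is not the study's apparatus code.

    # Minimal sketch of multi-scale rig settings: scaling IPD, eye height, and
    # movement gain together by one factor changes the user's apparent size.
    # scale > 1 -> user feels like a giant; scale < 1 -> miniature.
    # All names and baseline values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class CameraRig:
        ipd_m: float = 0.064        # physical inter-pupillary distance (m)
        eye_height_m: float = 1.70  # physical eye height (m)
        movement_gain: float = 1.0  # virtual metres moved per physical metre

    def scaled_rig(scale: float) -> CameraRig:
        base = CameraRig()
        return CameraRig(ipd_m=base.ipd_m * scale,
                         eye_height_m=base.eye_height_m * scale,
                         movement_gain=base.movement_gain * scale)

    print(scaled_rig(10.0))   # "giant" settings
    print(scaled_rig(0.1))    # "miniature" settings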
  • Design considerations for combining augmented reality with intelligent tutors
    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G.

    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G. (2018). Design considerations for combining augmented reality with intelligent tutors. Computers & Graphics, 77, 166-182.

    @article{herbert2018design,
    title={Design considerations for combining augmented reality with intelligent tutors},
    author={Herbert, Bradley and Ens, Barrett and Weerasinghe, Amali and Billinghurst, Mark and Wigley, Grant},
    journal={Computers \& Graphics},
    volume={77},
    pages={166--182},
    year={2018},
    publisher={Elsevier}
    }
    Augmented Reality (AR) overlays virtual objects on the real world in real time and has the potential to enhance education; however, few AR training systems provide personalised learning support. Combining AR with intelligent tutoring systems (ITSs) has the potential to improve training outcomes by providing personalised learner support, such as feedback on the AR environment. This paper reviews the current state of AR training systems combined with ITSs and proposes a series of requirements for combining the two paradigms. In addition, this paper identifies a growing need for research on the design and implementation of adaptive augmented reality tutors (ARATs), including evaluation of ARAT user interfaces and the domains where an ARAT might be most effective.
  • Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression. In 2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good) (pp. 1-5). IEEE.

    @inproceedings{clifford2018development,
    title={Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good)},
    pages={1--5},
    year={2018},
    organization={IEEE}
    }
    Wildfire firefighting is difficult to train for in the real world for a variety of reasons, with cost and environmental impact being the major barriers to effective training. Virtual Reality offers greater opportunities to practice crucial skills that are difficult to obtain without experiencing the actual environment. Situation Awareness (SA) is a critical aspect of Air Attack Supervision (AAS): timely decisions need to be made by the AAS based on information gathered while airborne. The type of display used in virtual reality training systems affords different levels of SA due to factors such as field of view, as well as presence within the virtual environment and the system. We conducted a study with 36 participants to evaluate SA acquisition and immersion with three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD) and a 270° cylindrical projection system (SimPit). We found a significant difference between the HMD and the HDTV, as well as between the SimPit and the HDTV, for SA levels. Participants preferred the HMD for immersion and portability, but the SimPit provided the best environment for the actual role.
  • Collaborative immersive analytics.
    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T.

    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T. (2018). Collaborative immersive analytics. In Immersive Analytics (pp. 221-257). Springer, Cham.

    @incollection{billinghurst2018collaborative,
    title={Collaborative immersive analytics},
    author={Billinghurst, Mark and Cordeil, Maxime and Bezerianos, Anastasia and Margolis, Todd},
    booktitle={Immersive Analytics},
    pages={221--257},
    year={2018},
    publisher={Springer}
    }
    Many of the problems being addressed by Immersive Analytics require groups of people to solve. This chapter introduces the concept of Collaborative Immersive Analytics (CIA) and reviews how immersive technologies can be combined with Visual Analytics to facilitate co-located and remote collaboration. We provide a definition of Collaborative Immersive Analytics and then an overview of the different types of possible collaboration. The chapter also discusses the various roles in collaborative systems, and how to support shared interaction with the data being presented. Finally, we summarize the opportunities for future research in this domain. The aim of the chapter is to provide enough of an introduction to CIA and key directions for future research, so that practitioners will be able to begin working in the field.
  • Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting
    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W. (2018, September). Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting. In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) (pp. 1-8). IEEE.

    @inproceedings{clifford2018evaluating,
    title={Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting},
    author={Clifford, Rory MS and Hoermann, Simon and Marcadet, Nicolas and Oliver, Hamish and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
    pages={1--8},
    year={2018},
    organization={IEEE}
    }
    Aerial firefighting takes place in stressful environments where decision making and communication are paramount, and skills need to be practiced and trained regularly. An experiment was performed to test the effects of disrupting the communications ability of the users on their stress levels in a noisy environment. The goal of this research is to investigate how realistic disruption of communication systems can be simulated in a virtual environment and to what extent they induce stress. We found that aerial firefighting experts maintained a better Heart Rate Variability (HRV) during disruptions than novices. Experts showed better ability to manage stress based on the change in HRV during the experiment. Our main finding is that communication disruptions in virtual reality (e.g., broken transmissions) significantly impacted the level of stress experienced by participants.
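
    Heart Rate Variability as used here is commonly summarised by time-domain measures such as RMSSD over the inter-beat (RR) intervals. The sketch below shows that computation on invented values; it is not the study's exact HRV pipeline, which is not specified in the abstract.

    # Minimal sketch of a common time-domain HRV measure (RMSSD) computed from
    # inter-beat (RR) intervals in milliseconds. Illustrative only.
    import numpy as np

    def rmssd(rr_intervals_ms: np.ndarray) -> float:
        """Root mean square of successive differences of RR intervals."""
        diffs = np.diff(rr_intervals_ms)
        return float(np.sqrt(np.mean(diffs ** 2)))

    # Example: RR intervals around 800 ms (75 bpm) with small variability
    rr = np.array([812, 790, 805, 798, 820, 801, 795], dtype=float)
    print(f"RMSSD = {rmssd(rr):.1f} ms")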
  • TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams
    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C. (2018, August). TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 991-996). IEEE.

    @inproceedings{wen2018teammate,
    title={TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossel, Chad},
    booktitle={2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)},
    pages={991--996},
    year={2018},
    organization={IEEE}
    }
    Strong empathic bonding between members of a team can elevate team performance tremendously, but it is not clear how such bonding within human-machine teams may impact mission success. Prior work using self-report surveys and end-of-task metrics does not capture how such bonding may evolve over time and impact task fulfillment. Furthermore, sensor-based measures do not scale easily to facilitate the need to collect substantial data for measuring potentially subtle effects. We introduce TEAMMATE, a system designed to provide insights into the emotional dynamics humans may form for machine teammates, which could critically impact the design of human-machine teams.
  • Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques
    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S.

    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S. (2018, September). Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques. In Proceedings of SAI Intelligent Systems Conference (pp. 309-322). Springer, Cham.

    @inproceedings{ismail2018designing,
    title={Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques},
    author={Ismail, Ajune Wanis and Billinghurst, Mark and Sunar, Mohd Shahrizal and Yusof, Cik Suhaimi},
    booktitle={Proceedings of SAI Intelligent Systems Conference},
    pages={309--322},
    year={2018},
    organization={Springer}
    }
    Augmented Reality (AR) supports natural interaction in physical and virtual worlds, so it has recently given rise to a number of novel interaction modalities. This paper presents a method for using hand-gestures with speech input for multimodal interaction in AR. It focuses on providing an intuitive AR environment which supports natural interaction with virtual objects while sustaining accessible real tasks and interaction mechanisms. The paper reviews previous multimodal interfaces and describes recent studies in AR that employ gesture and speech inputs for multimodal input. It describes an implementation of gesture interaction with speech input in AR for virtual object manipulation. Finally, the paper presents a user evaluation of the technique, showing that it can be used to improve the interaction between virtual and physical elements in an AR environment.
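    As a rough illustration of the speech-plus-gesture fusion described in this abstract, the sketch below lets a spoken command select the manipulation while the hand motion supplies its magnitude. The command names and data structures are hypothetical and are not taken from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualObject:
        position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        rotation_deg: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
        scale: float = 1.0

    def apply_multimodal_command(obj, speech_command, hand_delta):
        # Speech selects *what* to do; the tracked hand motion supplies *how much*.
        if speech_command == "move":
            obj.position = [p + d for p, d in zip(obj.position, hand_delta)]
        elif speech_command == "rotate":
            obj.rotation_deg = [r + d for r, d in zip(obj.rotation_deg, hand_delta)]
        elif speech_command == "scale":
            obj.scale *= 1.0 + hand_delta[0]  # use displacement along one axis

    obj = VirtualObject()
    apply_multimodal_command(obj, "move", [0.1, 0.0, 0.2])     # "move" + hand drag
    apply_multimodal_command(obj, "rotate", [0.0, 45.0, 0.0])  # "rotate" + wrist turn
    print(obj)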
  • Emotion Sharing and Augmentation in Cooperative Virtual Reality Games
    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M. (2018, October). Emotion Sharing and Augmentation in Cooperative Virtual Reality Games. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts (pp. 453-460). ACM.

    @inproceedings{hart2018emotion,
    title={Emotion Sharing and Augmentation in Cooperative Virtual Reality Games},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lawrence, Louise and Lee, Gun A and Smith, Ross T and Billinghurst, Mark},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts},
    pages={453--460},
    year={2018},
    organization={ACM}
    }
    We present preliminary findings from sharing and augmenting facial expression in cooperative social Virtual Reality (VR) games. We implemented a prototype system for capturing and sharing facial expression between VR players through their avatar. We describe our current prototype system and how it could be assimilated into a system for enhancing social VR experience. Two social VR games were created for a preliminary user study. We discuss our findings from the user study, potential games for this system, and future directions for this research.
  • Effects of Manipulating Physiological Feedback in Immersive Virtual Environments
    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Manipulating Physiological Feedback in Immersive Virtual Environments. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (pp. 101-111). ACM.

    @inproceedings{dey2018effects,
    title={Effects of Manipulating Physiological Feedback in Immersive Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play},
    pages={101--111},
    year={2018},
    organization={ACM}
    }

    Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user. Being able to see one’s physiological feedback in a virtual environment has proven to make the application more enjoyable. In this paper, we have investigated the effects of manipulating heart rate feedback provided to the participants in a single user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants’ emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
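    As a concrete illustration of the manipulation described in this abstract, the feedback conditions can be modelled as a multiplier applied to the measured heart rate before it is rendered to the participant. The multiplier values and names below are illustrative assumptions, not the values used in the study.

    FEEDBACK_CONDITIONS = {
        "unmodified": 1.00,  # veridical feedback
        "faster": 1.15,      # hypothetical +15% manipulation
        "slower": 0.85,      # hypothetical -15% manipulation
    }

    def displayed_heart_rate(measured_bpm, condition):
        # Only the presentation is altered; the sensor reading itself is untouched.
        return measured_bpm * FEEDBACK_CONDITIONS[condition]

    for condition in FEEDBACK_CONDITIONS:
        print(condition, round(displayed_heart_rate(72.0, condition), 1), "bpm shown")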

  • Real-time visual representations for mobile mixed reality remote collaboration.
    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W.

    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W. (2018, December). Real-time visual representations for mobile mixed reality remote collaboration. In SIGGRAPH Asia 2018 Virtual & Augmented Reality (p. 15). ACM.

    @inproceedings{gao2018real,
    title={Real-time visual representations for mobile mixed reality remote collaboration},
    author={Gao, Lei and Bai, Huidong and He, Weiping and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2018 Virtual \& Augmented Reality},
    pages={15},
    year={2018},
    organization={ACM}
    }
    In this study we present a Mixed Reality based mobile remote collaboration system that enables an expert to provide real-time assistance over a physical distance. Using Google ARCore position tracking, we integrate the keyframes captured by an external depth sensor attached to the mobile phone into a single 3D point-cloud data set that represents the local physical environment in the VR world. This captured local scene is then wirelessly streamed to the remote side, where the expert views it in a mobile VR headset (HTC VIVE Focus). The remote expert can thus immerse himself/herself in the VR scene and provide guidance as if sharing the same work environment with the local worker. In addition, the remote guidance is streamed back to the local side as an AR cue overlaid on the local video see-through display. Our proposed system allows a remote expert to guide a local worker through physical tasks in a large-scale workspace in a more natural and efficient way, simulating the face-to-face co-working experience through Mixed Reality.
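    The Python sketch below illustrates the general shape of the keyframe-fusion step described in this abstract: depth keyframes are back-projected and transformed into a common world frame using the 6DOF camera pose reported by a tracker such as ARCore. The intrinsics, data, and function names are illustrative assumptions, not the system's actual implementation.

    import numpy as np

    def backproject(depth, fx, fy, cx, cy):
        # Convert a depth image (metres) into homogeneous camera-space points.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.reshape(-1)
        x = (u.reshape(-1) - cx) * z / fx
        y = (v.reshape(-1) - cy) * z / fy
        pts = np.stack([x, y, z, np.ones_like(z)], axis=1)
        return pts[z > 0]  # drop invalid (zero-depth) pixels

    def fuse_keyframes(keyframes, intrinsics):
        # Each keyframe is (depth_image, 4x4 camera-to-world pose matrix);
        # the result is a single point cloud in world coordinates.
        fx, fy, cx, cy = intrinsics
        clouds = []
        for depth, cam_to_world in keyframes:
            pts_cam = backproject(depth, fx, fy, cx, cy)
            clouds.append((cam_to_world @ pts_cam.T).T[:, :3])
        return np.concatenate(clouds, axis=0)

    # Hypothetical example: two tiny 2x2 depth frames with different poses.
    intrinsics = (1.0, 1.0, 0.5, 0.5)
    frame = np.full((2, 2), 1.5)
    pose_a = np.eye(4)
    pose_b = np.eye(4)
    pose_b[0, 3] = 0.2  # second keyframe taken 20 cm further along x
    print(fuse_keyframes([(frame, pose_a), (frame, pose_b)], intrinsics).shape)  # (8, 3)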
  • Band of Brothers and Bolts: Caring About Your Robot Teammate
    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C. (2018, October). Band of Brothers and Bolts: Caring About Your Robot Teammate. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1853-1858). IEEE.

    @inproceedings{wen2018band,
    title={Band of Brothers and Bolts: Caring About Your Robot Teammate},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossell, Chad},
    booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages={1853--1858},
    year={2018},
    organization={IEEE}
    }
    It has been observed that a robot shown as suffering is enough to cause an empathic response from a person. Whether the response is a fleeting reaction with no consequences or a meaningful perspective change with associated behavior modifications is not clear. Existing work has been limited to measurements made at the end of empathy-inducing experimental trials rather than measurements made over time to capture consequential behavioral patterns. We report on preliminary results collected from a study that attempts to measure how the actions of a participant may be altered by empathy for a robot companion. Our findings suggest that induced empathy can in fact have a significant impact on a person's behavior, to the extent that the ability to fulfill a mission may be affected.
  • The effect of video placement in AR conferencing applications
    Lawrence, L., Dey, A., & Billinghurst, M.

    Lawrence, L., Dey, A., & Billinghurst, M. (2018, December). The effect of video placement in AR conferencing applications. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 453-457). ACM.

    @inproceedings{lawrence2018effect,
    title={The effect of video placement in AR conferencing applications},
    author={Lawrence, Louise and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={453--457},
    year={2018},
    organization={ACM}
    }
    We ran a pilot study to investigate the impact of video placement in augmented reality conferencing on communication, social presence and user preference. In addition, we explored the influence of two different tasks, assembly and negotiation. We found a correlation between video placement and the type of task, with some significant results in social presence indicators.
  • HandsInTouch: sharing gestures in remote collaboration
    Huang, W., Billinghurst, M., Alem, L., & Kim, S.

    Huang, W., Billinghurst, M., Alem, L., & Kim, S. (2018, December). HandsInTouch: sharing gestures in remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 396-400). ACM.

    @inproceedings{huang2018handsintouch,
    title={HandsInTouch: sharing gestures in remote collaboration},
    author={Huang, Weidong and Billinghurst, Mark and Alem, Leila and Kim, Seungwon},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={396--400},
    year={2018},
    organization={ACM}
    }
    Many systems have been developed to support remote collaboration, where hand gestures or sketches can be shared. However, the effect of combining gestures and sketching has not been fully explored or understood. In this paper we describe HandsInTouch, a system in which both hand gestures and sketches made by a remote helper are shown to a local user in real time. We conducted a user study to test the usability of the system and the usefulness of combining gesture and sketching for remote collaboration. We discuss the results and make recommendations for system design and future work.
  • A generalized, rapid authoring tool for intelligent tutoring systems
    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G.

    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G. (2018, December). A generalized, rapid authoring tool for intelligent tutoring systems. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 368-373). ACM.

    @inproceedings{herbert2018generalized,
    title={A generalized, rapid authoring tool for intelligent tutoring systems},
    author={Herbert, Bradley and Billinghurst, Mark and Weerasinghe, Amali and Ens, Barret and Wigley, Grant},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={368--373},
    year={2018},
    organization={ACM}
    }
    As computer-based training systems become increasingly integrated into real-world training, tools which rapidly author courses for such systems are emerging. However, inconsistent user interface design and limited support for a variety of domains make them time-consuming and difficult to use. We present a Generalized, Rapid Authoring Tool (GRAT), which simplifies the creation of Intelligent Tutoring Systems (ITSs) using a unified web-based, wizard-style graphical user interface and programming-by-demonstration approaches to reduce the technical knowledge needed to author ITS logic. We implemented a prototype that authors courses for two kinds of tasks, a network cabling task and a console device configuration task, to demonstrate the tool's potential. We describe the limitations of our prototype and present opportunities for evaluating the tool's usability and perceived effectiveness.
  • Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.
    Kim, S., Billinghurst, M., Lee, C., & Lee, G

    Kim, S., Billinghurst, M., Lee, C., & Lee, G. (2018). Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration. KSII Transactions on Internet & Information Systems, 12(12).

    @article{kim2018using,
    title={Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Chilwoo and Lee, Gun},
    journal={KSII Transactions on Internet \& Information Systems},
    volume={12},
    number={12},
    year={2018}
    }

    This paper describes two user studies of remote collaboration between two users, using a video conferencing system in which a remote user can draw annotations on the live video of the local user's workspace. In these two studies, the local user had control of the view when sharing the first-person view, but our interfaces provided instant control of the shared view to the remote users. The first study investigates methods for assisting drawing annotations. The auto-freeze method, a novel solution for drawing annotations, is compared to a prior solution (the manual freeze method) and a baseline (non-freeze) condition. Results show that both local and remote users preferred the auto-freeze method, which is easy to use and allows users to quickly draw annotations. The manual-freeze method supported precise drawing, but was less preferred because of the need for manual input. The second study explores visual notifications for better local user awareness. We propose two designs, the red-box and both-freeze notifications, and compare these to a baseline no-notification condition. Users preferred the less obtrusive red-box notification, which improved awareness of when annotations were made by remote users and had a significantly lower level of interruption compared to the both-freeze condition.
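    To make the auto-freeze behaviour described above concrete, here is a minimal state-machine sketch. It is my own simplification with hypothetical class and method names, not the study software: the shared frame freezes when the remote user starts drawing and live video resumes automatically when the annotation is finished.

    class AutoFreezeView:
        def __init__(self):
            self.frozen_frame = None  # None means live video is being shared
            self.shared_frame = None  # what the remote user currently sees

        def on_new_local_frame(self, frame):
            # Called for every live frame from the local user's camera.
            if self.frozen_frame is None:
                self.shared_frame = frame  # pass live video through
            # While frozen, the remote side keeps seeing the frozen frame.

        def on_remote_draw_start(self, current_frame):
            # Auto-freeze on the first stroke so the annotation stays anchored.
            self.frozen_frame = current_frame
            self.shared_frame = current_frame

        def on_remote_draw_end(self):
            # Resume live video automatically; no manual unfreeze is needed.
            self.frozen_frame = None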

  • The Potential of Augmented Reality for Computer Science Education
    Resnyansky, D., İbili, E., & Billinghurst, M.

    Resnyansky, D., İbili, E., & Billinghurst, M. (2018, December). The Potential of Augmented Reality for Computer Science Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 350-356). IEEE.

    @inproceedings{resnyansky2018potential,
    title={The Potential of Augmented Reality for Computer Science Education},
    author={Resnyansky, Dmitry and {\.I}bili, Emin and Billinghurst, Mark},
    booktitle={2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    pages={350--356},
    year={2018},
    organization={IEEE}
    }
    Innovative approaches to the teaching of computer science are required to address the needs of diverse target audiences, including groups with minimal mathematical background and insufficient abstract thinking ability. To tackle this problem, new pedagogical approaches are needed, such as the use of new technologies like Virtual and Augmented Reality, Tangible User Interfaces, and 3D graphics. This paper draws upon relevant pedagogical and technological literature to determine how Augmented Reality can be more fully applied to computer science education.
  • Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments
    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 165-173). IEEE.

    @inproceedings{dey2018sharing,
    title={Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Zhuang, Chang and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={165--173},
    year={2018},
    organization={IEEE}
    }
    Collaboration is an important application area for virtual reality (VR). However, unlike in the real world, collaboration in VR misses important empathetic cues that can make collaborators aware of each other's emotional states. Providing physiological feedback, such as heart rate or respiration rate, to users in VR has been shown to create a positive impact in single user environments. In this paper, through a rigorous mixed-factorial user experiment, we evaluated how providing heart rate feedback to collaborators influences their collaboration in three different environments requiring different kinds of collaboration. We found that when provided with real-time heart rate feedback, participants felt the presence of the collaborator more and felt that they better understood their collaborator's emotional state. Heart rate feedback also made participants feel more dominant when performing the task. We discuss the implications of this research for collaborative VR environments, provide design guidelines, and suggest directions for future research.
  • Sharing and Augmenting Emotion in Collaborative Mixed Reality
    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M. (2018, October). Sharing and Augmenting Emotion in Collaborative Mixed Reality. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 212-213). IEEE.

    @inproceedings{hart2018sharing,
    title={Sharing and Augmenting Emotion in Collaborative Mixed Reality},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lee, Gun and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={212--213},
    year={2018},
    organization={IEEE}
    }
    We present a concept of emotion sharing and augmentation for collaborative mixed reality. To depict the ideal use case of such a system, we give two example scenarios. We describe our prototype system for capturing and augmenting emotion through facial expression, eye gaze, voice, and physiological data, sharing them through the users' virtual representations, and discuss future research directions with potential applications.
  • Filtering 3D Shared Surrounding Environments by Social Proximity in AR
    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W.

    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W. (2018, October). Filtering 3D Shared Surrounding Environments by Social Proximity in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 123-124). IEEE.

    @inproceedings{nassani2018filtering,
    title={Filtering 3D Shared Surrounding Environments by Social Proximity in AR},
    author={Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Langlotz, Tobias and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={123--124},
    year={2018},
    organization={IEEE}
    }
    In this poster, we explore the social sharing of surrounding environments on wearable Augmented Reality (AR) devices. In particular, we propose filtering the level of detail at which the surrounding environment is shared based on the social proximity between the viewer and the sharer. We tested the effect of this filter (varying levels of detail) on the sense of privacy, from both viewer and sharer perspectives, in a pilot study using HoloLens. We report on semi-structured questionnaire results and suggest future directions in the social sharing of surrounding environments.
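    A hedged sketch of the proximity-to-detail mapping suggested in this poster is shown below; the specific proximity categories and representations are assumptions for illustration, not the levels used in the study. Closer social circles receive richer reconstructions of the sharer's surroundings.

    LEVEL_OF_DETAIL = {
        "family": "full_3d_point_cloud",
        "friend": "blurred_3d_reconstruction",
        "acquaintance": "2d_panorama_thumbnail",
        "public": "avatar_only",
    }

    def shared_environment(social_proximity):
        # Return which representation of the surroundings a viewer receives;
        # unknown contacts fall back to the most private option.
        return LEVEL_OF_DETAIL.get(social_proximity, "avatar_only")

    print(shared_environment("friend"))  # blurred_3d_reconstruction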
  • The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation
    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M.

    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M. (2018, October). The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 216-221). IEEE.

    @inproceedings{zhang2018effect,
    title={The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation},
    author={Zhang, Li and Ha, Weiping and Bai, Xiaoliang and Chen, Yongxing and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={216--221},
    year={2018},
    organization={IEEE}
    }
    In this paper, we explore how Augmented Reality (AR) and anthropomorphism can be used to assign emotions to common physical objects based on their needs. We developed a novel emotional interaction model among personified physical objects so that they could react to other objects by changing virtual facial expressions. To explore the effect of such an emotional interface, we conducted a user study comparing three types of virtual cues shown on the real objects: (1) information only, (2) emotion only and (3) both information and emotional cues. A significant difference was found in task completion time and the quality of work when adding emotional cues to an informational AR-based guiding system. This implies that adding emotion feedback to informational cues may produce better task results than using informational cues alone.
  • Do You Know What I Mean? An MR-Based Collaborative Platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., Wang, S.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., ... & Wang, S. (2018, October). Do you know what I mean? An MR-based collaborative platform. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 77-78). IEEE.

    @inproceedings{wang2018you,
    title={Do you know what i mean? an mr-based collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Zhang, Li and Du, Jiaxiang and Wang, Shuxia},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={77--78},
    year={2018},
    organization={IEEE}
    }
    Mixed Reality (MR) technology can be used to create unique collaborative experiences. In this paper, we propose a new remote collaboration platform using MR and eye-tracking that enables a remote helper to assist a local worker in an assembly task. We present results from research exploring the effect of sharing virtual gaze and annotation cues in an MR-based projector interface for remote collaboration. The key advantage compared to other remote collaborative MR interfaces is that it projects the remote expert's eye gaze into the real worksite to improve co-presence. The prototype system was evaluated in a pilot study comparing two conditions: POINTER and ET (eye-tracker cues). We observed that task completion performance was better in the ET condition, and that sharing gaze significantly improved the awareness of each other's focus and co-presence.
  • Enhancing player engagement through game balancing in digitally augmented physical games
    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C.

    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C. (2017). Enhancing player engagement through game balancing in digitally augmented physical games. International Journal of Human-Computer Studies, 103, 35-47.

    @article{altimira2017enhancing,
    title={Enhancing player engagement through game balancing in digitally augmented physical games},
    author={Altimira, David and Clarke, Jenny and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph and others},
    journal={International Journal of Human-Computer Studies},
    volume={103},
    pages={35--47},
    year={2017},
    publisher={Elsevier}
    }
    Game balancing can be used to compensate for differences in players' skills, in particular in games where players compete against each other. It can help provide the right level of challenge and hence enhance engagement. However, there is a lack of understanding of game balancing design and how different game adjustments affect player engagement. This understanding is important for the design of balanced physical games. In this paper we report on how altering the game equipment in a digitally augmented table tennis game, such as changing the table size and bat-head size statically and dynamically, can affect game balancing and player engagement. We found these adjustments enhanced player engagement compared to the no-adjustment condition. Understanding how the adjustments impacted player engagement helped us derive a set of balancing strategies to facilitate engaging game experiences. We hope that this understanding can contribute to improving physical activity experiences and encourage people to engage in physical activity.
  • Effects of sharing physiological states of players in a collaborative virtual reality gameplay
    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M.

    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M. (2017, May). Effects of sharing physiological states of players in a collaborative virtual reality gameplay. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4045-4056). ACM.

    @inproceedings{dey2017effects,
    title={Effects of sharing physiological states of players in a collaborative virtual reality gameplay},
    author={Dey, Arindam and Piumsomboon, Thammathip and Lee, Youngho and Billinghurst, Mark},
    booktitle={Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems},
    pages={4045--4056},
    year={2017},
    organization={ACM}
    }
    Interfaces for collaborative tasks, such as multiplayer games, can enable more effective and enjoyable collaboration. However, in these systems, the emotional states of the users are often not communicated properly due to their remoteness from one another. In this paper, we investigate the effects of showing the emotional state of one collaborator to the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. The games had significant main effects on the overall emotional experience.
  • User evaluation of hand gestures for designing an intelligent in-vehicle interface
    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M.

    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M. (2017, May). User evaluation of hand gestures for designing an intelligent in-vehicle interface. In International Conference on Design Science Research in Information System and Technology (pp. 104-121). Springer, Cham.

    @inproceedings{jahani2017user,
    title={User evaluation of hand gestures for designing an intelligent in-vehicle interface},
    author={Jahani, Hessam and Alyamani, Hasan J and Kavakli, Manolya and Dey, Arindam and Billinghurst, Mark},
    booktitle={International Conference on Design Science Research in Information System and Technology},
    pages={104--121},
    year={2017},
    organization={Springer}
    }
    Driving a car is a high cognitive-load task requiring full attention behind the wheel. Intelligent navigation, transportation, and in-vehicle interfaces have introduced a safer and less demanding driving experience. However, there is still a gap between existing interaction systems and the requirements of the actual user experience. Hand gestures are a natural interaction medium that is less visually demanding while driving. This paper presents a user study with 79 participants to validate mid-air gestures for 18 major in-vehicle secondary tasks. We provide a detailed analysis of 900 mid-air gestures, investigating gesture preferences for in-vehicle tasks, their physical affordances, and driving errors. The outcomes demonstrate that employing mid-air gestures reduces driving errors by up to 50% compared to traditional air-conditioning control. The results can be used for the development of vision-based in-vehicle gestural interfaces.
  • Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals
    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M.

    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M. (2017, June). Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals. In International Conference on Artificial Intelligence in Education (pp. 450-454). Springer, Cham.

    @inproceedings{almiyad2017intelligent,
    title={Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals},
    author={Almiyad, Mohammed A and Oakden-Rayner, Luke and Weerasinghe, Amali and Billinghurst, Mark},
    booktitle={International Conference on Artificial Intelligence in Education},
    pages={450--454},
    year={2017},
    organization={Springer}
    }
    Percutaneous radiology procedures often require the repeated use of medical radiation in the form of computed tomography (CT) scanning, to demonstrate the position of the needle in the underlying tissues. The angle of the insertion and the distance travelled by the needle inside the patient play a major role in successful procedures, and must be estimated by the practitioner and confirmed periodically by the use of the scanner. Junior radiology trainees, who are already highly trained professionals, currently learn this task “on-the-job” by performing the procedures on real patients with varying levels of guidance. Therefore, we present a novel Augmented Reality (AR)-based system that provides multiple layers of intuitive and adaptive feedback to assist junior radiologists in achieving competency in image-guided procedures.
  • Augmented reality entertainment: taking gaming out of the box
    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H.

    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H. (2017). Augmented reality entertainment: taking gaming out of the box. Encyclopedia of Computer Graphics and Games, 1-9.

    @article{von2017augmented,
    title={Augmented reality entertainment: taking gaming out of the box},
    author={Von Itzstein, G Stewart and Billinghurst, Mark and Smith, Ross T and Thomas, Bruce H},
    journal={Encyclopedia of Computer Graphics and Games},
    pages={1--9},
    year={2017},
    publisher={Springer}
    }
    This chapter provides an overview of using AR for gaming and entertainment, one of the most popular AR application areas. There are many possible AR entertainment applications. For example, the Pokémon Go mobile phone game has an AR element that allows people to see virtual Pokémon appear in the live camera view, seemingly inhabiting the real world. In this case, Pokémon Go satisfies Azuma's three AR criteria: the virtual Pokémon appear in the real world, the user can interact with them, and they appear fixed in space.
  • Estimating Gaze Depth Using Multi-Layer Perceptron
    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., Lee, G., Kim, S., & Billinghurst, M.

    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., ... & Billinghurst, M. (2017, June). Estimating Gaze Depth Using Multi-Layer Perceptron. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 26-29). IEEE.

    @inproceedings{lee2017estimating,
    title={Estimating Gaze Depth Using Multi-Layer Perceptron},
    author={Lee, Youngho and Shin, Choonsung and Plopski, Alexander and Itoh, Yuta and Piumsomboon, Thammathip and Dey, Arindam and Lee, Gun and Kim, Seungwon and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={26--29},
    year={2017},
    organization={IEEE}
    }
    In this paper we describe a new method for determining gaze depth in a head-mounted eye tracker. Eye trackers are being incorporated into head mounted displays (HMDs), and eye gaze is being used for interaction in Virtual and Augmented Reality. For some interaction methods, it is important to accurately measure the x- and y-direction of the eye gaze, and especially the focal depth information. Generally, eye tracking technology has high accuracy in the x- and y-directions, but not in depth. We used a binocular gaze tracker with two eye cameras, and the gaze vector was input to an MLP neural network for training and estimation. For the performance evaluation, data was obtained from 13 people gazing at fixed points at distances from 1 m to 5 m. The classification of gaze into fixed distances produced an average classification error of nearly 10%, and an average error distance of 0.42 m. This is sufficient for some Augmented Reality applications, but more research is needed to provide an estimate of a user's gaze moving in continuous space.
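    The sketch below shows the general shape of such a classifier using scikit-learn's MLP on synthetic vergence-like features derived from a nominal inter-pupillary distance. The data model, network size, and resulting accuracy are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    distances = np.array([1, 2, 3, 4, 5])  # metres, the fixation targets
    n_per_class = 200
    ipd = 0.064  # assumed inter-pupillary distance (m)

    # Toy model of vergence: eyes rotate inward more for near targets; add noise.
    X, y = [], []
    for d in distances:
        vergence = np.arctan2(ipd / 2, d)  # half-vergence angle per eye
        X.append(rng.normal(vergence, 0.002, size=(n_per_class, 2)))  # left/right eye
        y.append(np.full(n_per_class, d))
    X, y = np.vstack(X), np.concatenate(y)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)
    print(f"held-out classification accuracy: {mlp.score(X_test, y_test):.2f}")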
  • Empathic mixed reality: Sharing what you feel and interacting with what you see
    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M.

    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M. (2017, June). Empathic mixed reality: Sharing what you feel and interacting with what you see. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 38-41). IEEE.

    @inproceedings{piumsomboon2017empathic,
    title={Empathic mixed reality: Sharing what you feel and interacting with what you see},
    author={Piumsomboon, Thammathip and Lee, Youngho and Lee, Gun A and Dey, Arindam and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={38--41},
    year={2017},
    organization={IEEE}
    }
    Empathic Computing is a research field that aims to use technology to create deeper shared understanding or empathy between people. At the same time, Mixed Reality (MR) technology provides an immersive experience that can make an ideal interface for collaboration. In this paper, we present some of our research into how MR technology can be applied to creating Empathic Computing experiences. This includes exploring how to share gaze in a remote collaboration between Augmented Reality (AR) and Virtual Reality (VR) environments, using physiological signals to enhance collaborative VR, and supporting interaction through eye-gaze in VR. Early outcomes indicate that as we design collaborative interfaces to enhance empathy between people, this could also benefit the personal experience of the individual interacting with the interface.
  • The Social AR Continuum: Concept and User Study
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W. (2017, October). [POSTER] The Social AR Continuum: Concept and User Study. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 7-8). IEEE.

    @inproceedings{nassani2017poster,
    title={[POSTER] The Social AR Continuum: Concept and User Study},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Hoermann, Simon and Lindeman, Robert W},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={7--8},
    year={2017},
    organization={IEEE}
    }
    In this poster, we describe The Social AR Continuum, a space that encompasses different dimensions of Augmented Reality (AR) for sharing social experiences. We explore various dimensions, discuss options for each dimension, and brainstorm possible scenarios where these options might be useful. We describe a prototype interface using the contact placement dimension, and report on feedback from potential users which supports its usefulness for visualising social contacts. Based on this concept work, we suggest user studies in the social AR space, and give insights into future directions.
  • Mutually Shared Gaze in Augmented Video Conference
    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M.

    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M. (2017, October). Mutually Shared Gaze in Augmented Video Conference. In Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017 (pp. 79-80). Institute of Electrical and Electronics Engineers Inc.

    @inproceedings{lee2017mutually,
    title={Mutually Shared Gaze in Augmented Video Conference},
    author={Lee, Gun and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    booktitle={Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017},
    pages={79--80},
    year={2017},
    organization={Institute of Electrical and Electronics Engineers Inc.}
    }
    Augmenting video conference with additional visual cues has been studied to improve remote collaboration. A common setup is a person wearing a head-mounted display (HMD) and camera sharing her view of the workspace with a remote collaborator and getting assistance on a real-world task. While this configuration has been extensively studied, there has been little research on how sharing gaze cues might affect the collaboration. This research investigates how sharing gaze in both directions between a local worker and remote helper affects the collaboration and communication. We developed a prototype system that shares the eye gaze of both users, and conducted a user study. Preliminary results showed that sharing gaze significantly improves the awareness of each other's focus, hence improving collaboration.
  • The effect of user embodiment in AV cinematic experience
    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., and Bartneck, C.

    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., & Bartneck, C. (2017). The effect of user embodiment in AV cinematic experience.

    @article{chen2017effect,
    title={The effect of user embodiment in AV cinematic experience},
    author={Chen, Joshua and Lee, Gun and Billinghurst, Mark and Lindeman, Robert W and Bartneck, Christoph},
    year={2017}
    }
    Virtual Reality (VR) is becoming a popular medium for viewing immersive cinematic experiences using 360° panoramic movies and head mounted displays. There is previous research on user embodiment in real-time rendered VR, but not in relation to cinematic VR based on 360° panoramic video. In this paper we explore the effects of introducing the user's real body into cinematic VR experiences. We conducted a study evaluating how the type of movie and user embodiment affect the sense of presence and user engagement. We found that when participants were able to see their own body in the VR movie, there was a significant increase in the sense of presence, yet user engagement was not significantly affected. We discuss the implications of the results and how this work can be expanded in the future.
  • A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs
    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M.

    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M. (2017, November). A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos (pp. 1-2). Eurographics Association.

    @inproceedings{lee2017gaze,
    title={A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs},
    author={Lee, Youngho and Piumsomboon, Thammathip and Ens, Barrett and Lee, Gun and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos},
    pages={1--2},
    year={2017},
    organization={Eurographics Association}
    }

    The rapid development of machine learning algorithms can be leveraged for potential software solutions in many domains, including techniques for depth estimation of human eye gaze. In this paper, we propose an implicit and continuous data acquisition method for 3D gaze depth estimation for an optical see-through head mounted display (OST-HMD) equipped with an eye tracker. Our method constantly monitors and generates user gaze data for training our machine learning algorithm. The gaze data acquired through the eye tracker include the inter-pupillary distance (IPD) and the gaze distance to the real and virtual target for each eye.

  • Exploring pupil dilation in emotional virtual reality environments.
    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W.

    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W. (2017, November). Exploring pupil dilation in emotional virtual reality environments. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 169-176). Eurographics Association.

    @inproceedings{chen2017exploring,
    title={Exploring pupil dilation in emotional virtual reality environments},
    author={Chen, Hao and Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={169--176},
    year={2017},
    organization={Eurographics Association}
    }
    Previous investigations have shown that pupil dilation can be affected by emotive pictures, audio clips, and videos. In this paper, we explore how emotive Virtual Reality (VR) content can also cause pupil dilation. VR has been shown to be able to evoke negative and positive arousal in users when they are immersed in different virtual scenes. In our research, VR scenes were used as emotional triggers. Five emotional VR scenes were designed in our study, and each scene had five emotion segments: happiness, fear, anxiety, sadness, and disgust. While participants experienced the VR scenes, their pupil dilation and the brightness in the headset were captured. We found that both the negative and positive emotion segments produced pupil dilation in the VR environments. We also explored the effect of showing heart beat cues to the users, and whether this could cause a difference in pupil dilation. In our study, three different heart beat cues were shown to users using a combination of three channels: haptic, audio, and visual. The results showed that the haptic-visual cue caused the most significant pupil dilation change from the baseline.
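    A simplified sketch of the kind of measure compared across the emotion segments is shown below: baseline-relative pupil dilation per segment. The traces and numbers are hypothetical; this is not the study's analysis code, which also had to account for headset brightness.

    import numpy as np

    def relative_dilation(baseline_mm, segment_mm):
        # Mean pupil diameter change of a segment relative to a resting baseline.
        return float(np.mean(segment_mm) - np.mean(baseline_mm))

    # Hypothetical pupil diameter traces in millimetres.
    baseline = np.array([3.1, 3.0, 3.2, 3.1])
    segments = {
        "happiness": np.array([3.4, 3.5, 3.3]),
        "fear": np.array([3.8, 3.9, 3.7]),
    }
    for name, trace in segments.items():
        print(f"{name}: {relative_dilation(baseline, trace):+.2f} mm")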
  • Collaborative View Configurations for Multi-user Interaction with a Wall-size Display
    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C.

    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C. (2017, November). Collaborative view configurations for multi-user interaction with a wall-size display. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 189-196). Eurographics Association.

    @inproceedings{kim2017collaborative,
    title={Collaborative view configurations for multi-user interaction with a wall-size display},
    author={Kim, Hyungon and Kim, Yeongmi and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={189--196},
    year={2017},
    organization={Eurographics Association}
    }
    This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction. Three different view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experimental results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflicts and can provide effective collaboration. The split screen view configuration can support independent collaboration, while it can take up users' attention. The navigation information can reduce the interaction required for the navigational task, although overall interaction performance may not increase.
  • Towards Optimization of Mid-air Gestures for In-vehicle Interactions
    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M.

    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M. (2017, November). Towards optimization of mid-air gestures for in-vehicle interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (pp. 126-134). ACM.

    @inproceedings{hessam2017towards,
    title={Towards optimization of mid-air gestures for in-vehicle interactions},
    author={Hessam, Jahani F and Zancanaro, Massimo and Kavakli, Manolya and Billinghurst, Mark},
    booktitle={Proceedings of the 29th Australian Conference on Computer-Human Interaction},
    pages={126--134},
    year={2017},
    organization={ACM}
    }
    A mid-air gesture-based interface could provide a less cumbersome in-vehicle interface for a safer driving experience. Despite recent developments in gesture-driven technologies facilitating multi-touch and mid-air gestures, interface safety requirements, as well as an evaluation of gesture characteristics and functions, still need to be explored. This paper describes an optimization study of the previously developed GestDrive gesture vocabulary for in-vehicle secondary tasks. We investigate mid-air gestures and secondary tasks, their correlation, confusions, unintentional inputs, and consequential safety risks. Building upon a statistical analysis, the results provide an optimized taxonomy breakdown for a user-centered gestural interface design that considers user preferences, requirements, performance, and safety issues.
  • Exploring Mixed-Scale Gesture Interaction
    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M. (2017). Exploring mixed-scale gesture interaction. SA'17 SIGGRAPH Asia 2017 Posters.

    @article{ens2017exploring,
    title={Exploring mixed-scale gesture interaction},
    author={Ens, Barrett and Quigley, Aaron John and Yeo, Hui Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={SA'17 SIGGRAPH Asia 2017 Posters},
    year={2017},
    publisher={ACM}
    }
    This paper presents ongoing work toward a design exploration for combining microgestures with other types of gestures within the greater lexicon of gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors.
  • Multi-Scale Gestural Interaction for Augmented Reality
    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M.

    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M. (2017, November). Multi-scale gestural interaction for augmented reality. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 11). ACM.

    @inproceedings{ens2017multi,
    title={Multi-scale gestural interaction for augmented reality},
    author={Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={11},
    year={2017},
    organization={ACM}
    }

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward, and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro-scale), and finger gestures (micro-scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with microgestures for precise interaction, beyond the capabilities of direct manipulation alone.

  • Static local environment capturing and sharing for MR remote collaboration
    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M.

    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M. (2017, November). Static local environment capturing and sharing for MR remote collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 17). ACM.

    @inproceedings{gao2017static,
    title={Static local environment capturing and sharing for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lindeman, Rob and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={17},
    year={2017},
    organization={ACM}
    }
    We present a Mixed Reality (MR) system that supports capturing the entire local physical work environment for remote collaboration in a large-scale workspace. By integrating the key-frames captured with an external depth sensor into a single 3D point-cloud data set, our system can reconstruct the entire local physical workspace in the VR world. In this case, the remote helper can observe the local scene independently of the local user's current head and camera position, and provide gesture guidance even before the local user looks at the target object. We conducted a pilot study to evaluate the usability of the system by comparing it with our previous oriented-view system, which only shared the current camera view together with real-time head orientation data. Our results indicate that this entire-scene capturing and sharing system can significantly increase the remote helper's spatial awareness of the local work environment, especially in a large-scale workspace, and it was strongly preferred by users (80%) over the previous system.
  • Exploring enhancements for remote mixed reality collaboration
    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M. (2017, November). Exploring enhancements for remote mixed reality collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 16). ACM.

    @inproceedings{piumsomboon2017exploring,
    title={Exploring enhancements for remote mixed reality collaboration},
    author={Piumsomboon, Thammathip and Day, Arindam and Ens, Barrett and Lee, Youngho and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={16},
    year={2017},
    organization={ACM}
    }
    In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between an Augmented Reality (AR) user and an Augmented Virtuality (AV) user. Awareness cues and an AV-Snap-to-AR interface were proposed for enhancing communication. Collaborative natural interaction and AV-User-Body-Scaling were implemented for enhancing interaction. We conducted an exploratory study examining the awareness cues and collaborative gaze, and the results showed the benefits of the proposed techniques for enhancing communication and interaction.
  • AR social continuum: representing social contacts
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2017, November). AR social continuum: representing social contacts. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 6). ACM.

    @inproceedings{nassani2017ar,
    title={AR social continuum: representing social contacts},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={6},
    year={2017},
    organization={ACM}
    }
    One of the key problems with representing social networks in Augmented Reality (AR) is how to differentiate between contacts. In this paper we explore how visual and spatial cues based on social relationships can be used to represent contacts in social AR applications, making it easier to distinguish between them. Previous implementations of social AR have mostly focused on location-based visualisation, with no focus on the social relationship to the user. In contrast, we explore how to visualise social relationships in mobile AR environments using proximity and visual fidelity filters. We ran a focus group to explore different options for representing social contacts in a mobile AR application. We also conducted a user study to test a head-worn AR prototype using proximity and visual fidelity filters. We found that filtering social contacts on wearable AR is preferred and useful. We discuss the results of the focus group and the user study, and provide insights into directions for future work.
  • If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking
    Wen, J., Helton, W. S., & Billinghurst, M.

    Wen, J., Helton, W. S., & Billinghurst, M. (2015, March). If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 3). ACM.

    @inproceedings{wen2015if,
    title={If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking},
    author={Wen, James and Helton, William S and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={3},
    year={2015},
    organization={ACM}
    }
    Augmented Reality (AR) on smartphones can be used to overlay virtual tags on the real world to show points of interest that people may want to visit. However, field tests have failed to validate the belief that AR-based tools would outperform map-based tools for such pedestrian navigation tasks. Assuming this is due to inaccuracies in the consumer GPS tracking used in handheld AR, we created a simulated environment that provided perfect tracking for AR and conducted experiments based on real-world navigation studies. We measured time-on-task performance for guided traversals on both desktop and head-mounted display systems and found that accurate tracking did validate the superior performance of AR-based navigation tools. We also measured performance for unguided recall traversals of previously traversed paths in order to investigate how navigation tools impact route memory.
  • Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization.
    Kim, H., Lee, G., & Billinghurst, M.

    Kim, H., Lee, G., & Billinghurst, M. (2015, March). Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 2). ACM.

    @inproceedings{kim2015adaptive,
    title={Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization},
    author={Kim, Hyungon and Lee, Gun and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={2},
    year={2015},
    organization={ACM}
    }
    Stereoscopic visualization creates illusions of depth through disparity between the images shown to the left and right eyes of the viewer. While stereoscopic visualization is widely adopted in immersive visualization systems to improve user experience, it can also cause visual discomfort if the stereoscopic viewing parameters are not adjusted appropriately. These parameters are usually adjusted manually, based on human factors and the empirical knowledge of the developer or even the user. However, scenes with dynamic changes in scale and configuration can require continuous adjustment of these parameters while viewing. In this paper, we propose a method to adjust the interpupillary distance adaptively and automatically according to the configuration of the 3D scene, so that the visualized scene maintains a sufficient stereo effect while reducing visual discomfort.
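    The abstract above does not spell out the adjustment rule itself, so the following is only a minimal sketch of one plausible approach, not the paper's algorithm: scale a default interpupillary distance so that the on-screen parallax of the nearest and farthest visible points stays within a comfort budget. All function names, parameters and the comfort threshold are illustrative assumptions (Python).

      # Sketch only (assumption, not the paper's method): shrink the rendering IPD
      # when the scene would otherwise produce uncomfortably large screen parallax.
      # For a point at depth z, eye separation e and convergence distance d,
      # horizontal screen parallax is p = e * (z - d) / z, which is linear in e.

      def parallax(e, d, z):
          """Horizontal screen parallax (same units as e) of a point at depth z."""
          return e * (z - d) / z

      def adaptive_ipd(default_ipd, screen_dist, z_near, z_far, max_parallax):
          """Largest IPD <= default_ipd keeping |parallax| <= max_parallax."""
          worst = max(abs(parallax(default_ipd, screen_dist, z)) for z in (z_near, z_far))
          if worst <= max_parallax:
              return default_ipd                          # scene is already comfortable
          return default_ipd * max_parallax / worst       # scale down proportionally

      # Example: a very close object forces the effective IPD below the real 6.3 cm.
      print(adaptive_ipd(default_ipd=0.063, screen_dist=2.0,
                         z_near=0.3, z_far=50.0, max_parallax=0.02))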
  • Intelligent Augmented Reality Training for Motherboard Assembly
    Westerfield, G., Mitrovic, A., & Billinghurst, M.

    Westerfield, G., Mitrovic, A., & Billinghurst, M. (2015). Intelligent augmented reality training for motherboard assembly. International Journal of Artificial Intelligence in Education, 25(1), 157-172.

    @article{westerfield2015intelligent,
    title={Intelligent augmented reality training for motherboard assembly},
    author={Westerfield, Giles and Mitrovic, Antonija and Billinghurst, Mark},
    journal={International Journal of Artificial Intelligence in Education},
    volume={25},
    number={1},
    pages={157--172},
    year={2015},
    publisher={Springer}
    }
    We investigate the combination of Augmented Reality (AR) with Intelligent Tutoring Systems (ITS) to assist with training for manual assembly tasks. Our approach combines AR graphics with adaptive guidance from the ITS to provide a more effective learning experience. We have developed a modular software framework for intelligent AR training systems, and a prototype based on this framework that teaches novice users how to assemble a computer motherboard. An evaluation found that our intelligent AR system improved test scores by 25 % and that task performance was 30 % faster compared to the same AR training system without intelligent support. We conclude that using an intelligent AR tutor can significantly improve learning compared to more traditional AR training.
  • User Defined Gestures for Augmented Virtual Mirrors: A Guessability Study
    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M.

    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M. (2015, April). User defined gestures for augmented virtual mirrors: a guessability study. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 959-964). ACM.

    @inproceedings{lee2015user,
    title={User defined gestures for augmented virtual mirrors: a guessability study},
    author={Lee, Gun A and Wong, Jonathan and Park, Hye Sun and Choi, Jin Sung and Park, Chang Joon and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={959--964},
    year={2015},
    organization={ACM}
    }
    Public information displays are evolving from passive screens into more interactive and smarter ubiquitous computing platforms. In this research we investigate applying gesture interaction and Augmented Reality (AR) technologies to make public information displays more intuitive and easier to use. We focus especially on designing intuitive gesture-based interaction methods to use in combination with an augmented virtual mirror interface. As an initial step, we conducted a user study to identify the gestures that users feel are natural for performing common tasks when interacting with augmented virtual mirror displays. We report initial findings from the study, discuss design guidelines, and suggest future research directions.
  • Automatically Freezing Live Video for Annotation during Remote Collaboration
    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M.

    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M. (2015, April). Automatically freezing live video for annotation during remote collaboration. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1669-1674). ACM.

    @inproceedings{kim2015automatically,
    title={Automatically freezing live video for annotation during remote collaboration},
    author={Kim, Seungwon and Lee, Gun A and Ha, Sangtae and Sakata, Nobuchika and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={1669--1674},
    year={2015},
    organization={ACM}
    }
    Drawing annotations on shared live video has been investigated as a tool for remote collaboration. However, if a local user changes the viewpoint of the shared live video while a remote user is drawing an annotation, the annotation is projected and drawn in the wrong place. Prior work suggested manually freezing the video while annotating to solve this issue, but this requires additional user input. We introduce a solution that automatically freezes the video, and present the results of a user study comparing it with manual-freeze and no-freeze conditions. Auto-freeze was most preferred by both remote and local participants, who felt it best solved the issue of annotations appearing in the wrong place. With auto-freeze, remote users were able to draw annotations more quickly, while local users were able to understand the annotations more clearly.
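    As a rough illustration of the auto-freeze idea described above (an assumption about the mechanism, not the authors' implementation; the class and method names are invented for this sketch), the shared view can simply stop forwarding live camera frames while a remote annotation stroke is in progress:

      # Illustrative sketch of auto-freeze (not the paper's code): while the remote
      # user is drawing, keep sharing the frozen frame so the annotation stays
      # anchored to the view it was drawn on; resume live video when the stroke ends.

      class AutoFreezeVideo:
          def __init__(self):
              self.frozen_frame = None           # frame captured when drawing started

          def on_stroke_start(self, current_frame):
              self.frozen_frame = current_frame  # freeze automatically, no extra user input

          def on_stroke_end(self):
              self.frozen_frame = None           # unfreeze, return to live video

          def frame_to_share(self, live_frame):
              return self.frozen_frame if self.frozen_frame is not None else live_frame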
  • A comparative study of simulated augmented reality displays for vehicle navigation
    Jose, R., Lee, G. A., & Billinghurst, M.

    Jose, R., Lee, G. A., & Billinghurst, M. (2016, November). A comparative study of simulated augmented reality displays for vehicle navigation. In Proceedings of the 28th Australian conference on computer-human interaction (pp. 40-48). ACM.

    @inproceedings{jose2016comparative,
    title={A comparative study of simulated augmented reality displays for vehicle navigation},
    author={Jose, Richie and Lee, Gun A and Billinghurst, Mark},
    booktitle={Proceedings of the 28th Australian conference on computer-human interaction},
    pages={40--48},
    year={2016},
    organization={ACM}
    }
    In this paper we report on a user study in a simulated environment that compares three types of Augmented Reality (AR) displays for assisting with car navigation: a Heads Up Display (HUD), a Head Mounted Display (HMD) and a Heads Down Display (HDD). The virtual cues shown on each interface were the same, but there was a significant difference in driver behaviour and preference between interfaces. Overall, users performed better with and preferred the HUD over the HDD, and the HMD was ranked lowest. These results have implications for people wanting to use AR cues for car navigation.
  • A Systematic Review of Usability Studies in Augmented Reality between 2005 and 2014
    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E.

    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E. (2016, September). A systematic review of usability studies in augmented reality between 2005 and 2014. In 2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct) (pp. 49-50). IEEE.

    @inproceedings{dey2016systematic,
    title={A systematic review of usability studies in augmented reality between 2005 and 2014},
    author={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W and Swan II, J Edward},
    booktitle={2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct)},
    pages={49--50},
    year={2016},
    organization={IEEE}
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review most AR papers published between 2005 and 2014 that include user studies. A total of 291 papers have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We also identify areas where there have been few user studies, and opportunities for future research. This poster describes the methodology of the review and the classifications of AR research that have emerged.
  • Augmented Reality Annotation for Social Video Sharing
    Nassani, A., Kim, H., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W.

    Nassani, A., Kim, H., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2016, November). Augmented reality annotation for social video sharing. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 9). ACM.

    @inproceedings{nassani2016augmented,
    title={Augmented reality annotation for social video sharing},
    author={Nassani, Alaeddin and Kim, Hyungon and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={9},
    year={2016},
    organization={ACM}
    }
    This paper explores different visual interfaces for sharing comments on social live video streaming platforms. So far, comments are displayed separately from the video, making it hard to relate the comments to events in the video. In this work we investigate an Augmented Reality (AR) interface that displays comments directly on the streamed live video. Our prototype allows remote spectators to view the streamed live video with different interfaces for displaying the comments. We conducted a user study to compare different ways of visualising comments and found that users prefer having comments in the AR view rather than in a separate list. We discuss the implications of this research and directions for future work.
  • An oriented point-cloud view for MR remote collaboration
    Gao, L., Bai, H., Lee, G., & Billinghurst, M.

    Gao, L., Bai, H., Lee, G., & Billinghurst, M. (2016, November). An oriented point-cloud view for MR remote collaboration. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 8). ACM.

    @inproceedings{gao2016oriented,
    title={An oriented point-cloud view for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={8},
    year={2016},
    organization={ACM}
    }
    We present a Mixed Reality system for remote collaboration using Virtual Reality (VR) headsets with external depth cameras attached. By wirelessly sharing 3D point-cloud data of a local worker's workspace with a remote helper, and sharing the remote helper's hand gestures back to the local worker, the remote helper is able to assist the worker in performing manual tasks. Displaying the point-cloud video in a conventional way, such as a static front view in the VR headset, does not give helpers sufficient understanding of the spatial relationships between their hands and the remote surroundings. In contrast, we propose a Mixed Reality (MR) system that shares with the remote helper not only the 3D captured environment data but also real-time orientation information about the worker's viewpoint. We conducted a pilot study to evaluate the usability of the system, and found that the extra synchronized orientation data can make collaborators feel more connected spatially and mentally.
  • Sharing Manipulated Heart Rate Feedback in Collaborative Virtual Environments
    Arindam Dey, Hao Chen, Ashkan Hayati, Mark Billinghurst, Robert W. Lindeman

    @inproceedings{dey2019sharing,
    title={Sharing Manipulated Heart Rate Feedback in Collaborative Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Hayati, Ashkan and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={248--257},
    year={2019},
    organization={IEEE}
    }
    We have explored the effects of sharing manipulated heart rate feedback in collaborative virtual environments. In our study, we created two types of virtual environments (active and passive) with different levels of interaction, and provided three levels of manipulated heart rate feedback (decreased, unchanged, and increased). We measured the effects of the manipulated feedback on Social Presence, affect, physical heart rate, and overall experience. We noticed a significant effect of the manipulated heart rate feedback on scariness and nervousness. The perception of the collaborator's valence and arousal was also affected: increased heart rate feedback was perceived as higher valence and lower arousal. Increased heart rate feedback also decreased the real heart rate. The type of virtual environment had a significant effect on Social Presence, heart rate, and affect, with the active environment performing better across these measures. We discuss the implications of this and directions for future research.
  • A Technique for Mixed Reality Remote Collaboration using 360 Panoramas in 3D Reconstructed Scenes
    Theophilus Teo, Ashkan F. Hayati, Gun A. Lee, Mark Billinghurst, Matt Adcock

    @inproceedings{teo2019technique,
    title={A Technique for Mixed Reality Remote Collaboration using 360 Panoramas in 3D Reconstructed Scenes},
    author={Teo, Theophilus and F. Hayati, Ashkan and A. Lee, Gun and Billinghurst, Mark and Adcock, Matt},
    booktitle={25th ACM Symposium on Virtual Reality Software and Technology},
    pages={1--11},
    year={2019}
    }
    Mixed Reality (MR) remote collaboration provides an enhanced immersive experience where a remote user can provide verbal and nonverbal assistance to a local user to increase the efficiency and performance of the collaboration. This is usually achieved by sharing the local user's environment through live 360 video or a 3D scene, and using visual cues to gesture or point at real objects, allowing for better understanding and collaborative task performance. While most prior work used one of these methods, there may be situations where users have to choose between 360 panoramas and 3D scene reconstruction to collaborate, as each has unique benefits and limitations. In this paper we designed a prototype system that combines 360 panoramas into a 3D scene to introduce a novel way for users to interact and collaborate with each other. We evaluated the prototype through a user study comparing the usability and performance of our proposed approach to a live 360 video collaborative system, and found that participants enjoyed using different ways to access the local user's environment, although it took them longer to learn to use our system. We also collected subjective feedback for future improvements and provide directions for future research.
  • Time to Get Personal: Individualised Virtual Reality for Mental Health
    Nilufar Baghaei, Lehan Stemmet, Andrej Hlasnik, Konstantin Emanov, Sylvia Hach, John A. Naslund, Mark Billinghurst, Imran Khaliq, Hai-Ning Liang

    Nilufar Baghaei, Lehan Stemmet, Andrej Hlasnik, Konstantin Emanov, Sylvia Hach, John A. Naslund, Mark Billinghurst, Imran Khaliq, and Hai-Ning Liang. 2020. Time to Get Personal: Individualised Virtual Reality for Mental Health. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–9. DOI:https://doi.org/10.1145/3334480.3382932

    @inproceedings{baghaei2020time,
    title={Time to Get Personal: Individualised Virtual Reality for Mental Health},
    author={Baghaei, Nilufar and Stemmet, Lehan and Hlasnik, Andrej and Emanov, Konstantin and Hach, Sylvia and Naslund, John A and Billinghurst, Mark and Khaliq, Imran and Liang, Hai-Ning},
    booktitle={Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--9},
    year={2020}
    }
    Mental health conditions pose a major challenge to healthcare providers and society at large. Early intervention can have a significant positive impact on a person's prognosis, which is particularly important in improving mental health outcomes and functioning for young people. Virtual Reality (VR) in mental health is an emerging and innovative field. Recent studies support the use of VR technology in the treatment of anxiety, phobia, eating disorders, addiction, and pain management. However, there is little research on using VR for the support, treatment and prevention of depression, a field that is very much emerging. There is also very little work on offering individualised VR experiences to users with mental health issues. This paper proposes iVR, a novel individualised VR approach for improving users' self-compassion and, in the long run, their positive mental health. We describe the concept, design, architecture and implementation of iVR and outline future work. We believe this contribution will pave the way for large-scale efficacy testing, clinical use, and potentially cost-effective delivery of VR technology for mental health therapy in the future.
  • A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing
    Huidong Bai, Prasanth Sasikumar, Jing Yang, Mark Billinghurst

    Huidong Bai, Prasanth Sasikumar, Jing Yang, and Mark Billinghurst. 2020. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376550

    @inproceedings{bai2020user,
    title={A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing},
    author={Bai, Huidong and Sasikumar, Prasanth and Yang, Jing and Billinghurst, Mark},
    booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--13},
    year={2020}
    }
    Supporting natural communication cues is critical for people to work together remotely and face-to-face. In this paper we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of his/her surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert to the local worker could affect the overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone. The combined cues were also rated significantly higher than gaze alone in terms of ease of conveying spatial actions.
  • A Constrained Path Redirection for Passive Haptics
    Lili Wang, Zixiang Zhao, Xuefeng Yang, Huidong Bai, Amit Barde, Mark Billinghurst

    L. Wang, Z. Zhao, X. Yang, H. Bai, A. Barde and M. Billinghurst, "A Constrained Path Redirection for Passive Haptics," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 2020, pp. 651-652, doi: 10.1109/VRW50115.2020.00176.

    @inproceedings{wang2020constrained,
    title={A Constrained Path Redirection for Passive Haptics},
    author={Wang, Lili and Zhao, Zixiang and Yang, Xuefeng and Bai, Huidong and Barde, Amit and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={651--652},
    year={2020},
    organization={IEEE}
    }
    Navigation with passive haptic feedback can enhance users' immersion in virtual environments. We propose a constrained path redirection method to provide users with corresponding haptic feedback at the right time and place. We quantified the practicality of VR exploration in a user study, and the results show advantages over the steer-to-center method in terms of presence, and over Steinicke's method in terms of matching errors and presence.
  • Neurophysiological Effects of Presence in Calm Virtual Environments
    Arindam Dey, Jane Phoon, Shuvodeep Saha, Chelsea Dobbins, Mark Billinghurst

    A. Dey, J. Phoon, S. Saha, C. Dobbins and M. Billinghurst, "Neurophysiological Effects of Presence in Calm Virtual Environments," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 2020, pp. 745-746, doi: 10.1109/VRW50115.2020.00223.

    @inproceedings{dey2020neurophysiological,
    title={Neurophysiological Effects of Presence in Calm Virtual Environments},
    author={Dey, Arindam and Phoon, Jane and Saha, Shuvodeep and Dobbins, Chelsea and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={745--746},
    year={2020},
    organization={IEEE}
    }
    Presence, the feeling of being there, is an important factor that affects the overall experience of virtual reality. Presence is measured through post-experience subjective questionnaires. While questionnaires are a widely used method in human-based research, they suffer from participant biases, dishonest answers, and fatigue. In this paper, we measured the effects of different levels of presence (high and low) in virtual environments using physiological and neurological signals as an alternative method. Results indicated a significant effect of presence on both physiological and neurological signals.
  • Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality
    Kunal Gupta, Ryo Hajika, Yun Suen Pai, Andreas Duenser, Martin Lochner, Mark Billinghurst

    K. Gupta, R. Hajika, Y. S. Pai, A. Duenser, M. Lochner and M. Billinghurst, "Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 2020, pp. 756-765, doi: 10.1109/VR46266.2020.1581313729558.

    @inproceedings{gupta2020measuring,
    title={Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality},
    author={Gupta, Kunal and Hajika, Ryo and Pai, Yun Suen and Duenser, Andreas and Lochner, Martin and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={756--765},
    year={2020},
    organization={IEEE}
    }
    With the advancement of Artificial Intelligence technology for smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. We report on a novel methodology for investigating users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), the Subjective Mental Effort Questionnaire (SMEQ) and NASA-TLX; and a behavioural measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these measures, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
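    Of the physiological signals listed above, heart-rate variability is the simplest to illustrate in isolation. A common HRV summary statistic is RMSSD over successive inter-beat (RR) intervals; the sketch below is a generic computation and makes no claim about the processing pipeline actually used in the paper.

      import math

      def rmssd(rr_intervals_ms):
          """Root mean square of successive differences of RR intervals (ms).
          Generic HRV feature for illustration; not necessarily the paper's metric."""
          diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
          return math.sqrt(sum(d * d for d in diffs) / len(diffs))

      # Example with made-up RR intervals (milliseconds between heartbeats):
      print(rmssd([812, 790, 805, 830, 815, 800]))   # ~18.9 ms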
  • Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction
    Peng Wang, Xiaoliang Bai, Mark Billinghurst, Shusheng Zhang, Dechuan Han, Mengmeng Sun, Zhuo Wang, Hao Lv, Shu Han

    Wang, Peng, et al. "Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction." International Journal of Human–Computer Interaction (2020): 1-16.

    @article{wang2020haptic,
    title={Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction},
    author={Wang, Peng and Bai, Xiaoliang and Billinghurst, Mark and Zhang, Shusheng and Han, Dechuan and Sun, Mengmeng and Wang, Zhuo and Lv, Hao and Han, Shu},
    journal={International Journal of Human--Computer Interaction},
    pages={1--16},
    year={2020},
    publisher={Taylor \& Francis}
    }
    Research on Augmented Reality (AR)/Mixed Reality (MR) remote collaboration for physical tasks remains a compelling and dynamic area of study. AR systems have been developed which transmit virtual annotations between remote collaborators, but there has been little research on how haptic feedback can also be shared. In this paper, we present a Virtual Reality (VR)-Spatial Augmented Reality (SAR) remote collaborative system that provides haptic feedback with tangible interaction between a local worker and a remote expert helper. Using this system, we conducted a within-subject user study to compare two interfaces for remote collaboration between a local worker and an expert helper: one with mid-air free drawing (MFD) and one with tangible physical drawing (TPD). The results showed no significant differences with respect to performance time and operation errors. However, users felt that the TPD interface, which supports passive haptic feedback, could significantly improve the remote experts' user experience in VR. Our research provides useful information for designing gesture- and gaze-based multimodal interaction with haptic feedback for AR/MR remote collaboration on physical tasks.
  • Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision
    Rory M. S. Clifford, Hendrik Engelbrecht, Sungchul Jung, Hamish Oliver, Mark Billinghurst, Robert W. Lindeman & Simon Hoermann

    Clifford, Rory MS, et al. "Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision." The Visual Computer (2020): 1-14.

    @article{clifford2020aerial,
    title={Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision},
    author={Clifford, Rory MS and Engelbrecht, Hendrik and Jung, Sungchul and Oliver, Hamish and Billinghurst, Mark and Lindeman, Robert W and Hoermann, Simon},
    journal={The Visual Computer},
    pages={1--14},
    year={2020},
    publisher={Springer}
    }
    Communication disruptions are frequent in aerial firefighting. Information is more easily lost over multiple radio channels that are busy with simultaneous conversations, and such a high bandwidth of information throughput creates mental overload. Further problems with hardware, or with radio signals disrupted over long distances or by mountainous terrain, make it difficult to coordinate firefighting efforts. This creates stressful conditions and requires particular expertise to manage effectively. An experiment was conducted which tested the effects of disrupting users' communications equipment and measured their stress levels as well as their communication performance. The research investigated how realistic communication disruptions affect behavioural changes in communication frequency, as well as physiological stress, measured by means of heart rate variability (HRV). Broken radio transmissions created a greater degree of stress than background chatter alone. Experts could maintain a more stable HRV during disruptions than novices, calculated from the change in HRV during the experiment. From this, we deduce that experts have a better ability to manage stress. We also noted strategies employed by experts, such as relaying to overcome the radio challenges, as opposed to novices, who would not find a solution and effectively gave up.
  • An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills
    Emin İbili, Mevlüt Çat, Dmitry Resnyansky, Sami Şahin & Mark Billinghurst

    İbili, Emin, et al. "An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills." International Journal of Mathematical Education in Science and Technology 51.2 (2020): 224-246.

    @article{ibili2020assessment,
    title={An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills},
    author={{\.I}bili, Emin and {\c{C}}at, Mevl{\"u}t and Resnyansky, Dmitry and {\c{S}}ahin, Sami and Billinghurst, Mark},
    journal={International Journal of Mathematical Education in Science and Technology},
    volume={51},
    number={2},
    pages={224--246},
    year={2020},
    publisher={Taylor \& Francis}
    }
    The aim of this research was to examine the effect of Augmented Reality (AR) supported geometry teaching on students' 3D thinking skills. The research consisted of three steps: (i) developing a 3D thinking ability scale, (ii) designing and developing an AR Geometry Tutorial System (ARGTS), and (iii) implementing and assessing geometry teaching supported with ARGTS. The 3D thinking ability scale was developed and tested with experimental and control groups as a pre- and post-test evaluation, and ARGTS with AR teaching materials and environments was developed to enhance 3D thinking skills. A user study with these materials found that geometry teaching supported by ARGTS significantly increased the students' 3D thinking skills. The increase in average scores for the 'Structuring 3D arrays of cubes' and 'Calculating the volume and area of solids' thinking skills was not statistically significant (p > 0.05). For the other 3D geometric thinking subfactors of the scale, a statistically significant difference was found in favour of the experimental group between pre-test and post-test scores (p < 0.05). The biggest difference was found in the ability to recognize and create 3D shapes (p < 0.01). The results of this research are particularly important for identifying individual differences in the 3D thinking skills of secondary school students and creating personalized dynamic intelligent learning environments.
  • Using augmented reality with speech input for non-native children's language learning
    Che Samihah Che Dalim, Mohd Shahrizal Sunar, Arindam Dey, Mark Billinghurst

    Dalim, Che Samihah Che, et al. "Using augmented reality with speech input for non-native children's language learning." International Journal of Human-Computer Studies 134 (2020): 44-64.

    @article{dalim2020using,
    title={Using augmented reality with speech input for non-native children's language learning},
    author={Dalim, Che Samihah Che and Sunar, Mohd Shahrizal and Dey, Arindam and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    volume={134},
    pages={44--64},
    year={2020},
    publisher={Elsevier}
    }
    Augmented Reality (AR) offers an enhanced learning environment which could potentially influence children's experience and knowledge gain during the language learning process. Teaching English or other foreign languages to children with a different native language can be difficult and requires an effective strategy to avoid boredom and detachment from the learning activities. With the growing number of AR education applications and the increasing pervasiveness of speech recognition, we are keen to understand how these technologies benefit non-native young children in learning English. In this paper, we explore children's experience in terms of knowledge gain and enjoyment when learning through a combination of AR and speech recognition technologies. We developed a prototype AR interface called TeachAR, and ran two experiments to investigate how effective the combination of AR and speech recognition was for learning 1) English terms for color and shapes, and 2) English words for spatial relationships. We found encouraging results for a novel teaching strategy using these two technologies: not only did it increase knowledge gain and enjoyment compared with a traditional strategy, it also enabled young children to finish certain tasks faster and more easily.
  • A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning
    Ihshan Gumilar, Ekansh Sareen, Reed Bell, Augustus Stone, Ashkan Hayati, Jingwen Mao, Amit Barde, Anubha Gupta, Arindam Dey, Gun Lee, Mark Billinghurst

    Gumilar, I., Sareen, E., Bell, R., Stone, A., Hayati, A., Mao, J., ... & Billinghurst, M. (2021). A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning. Computers & Graphics, 94, 62-75.

    @article{gumilar2021comparative,
    title={A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning},
    author={Gumilar, Ihshan and Sareen, Ekansh and Bell, Reed and Stone, Augustus and Hayati, Ashkan and Mao, Jingwen and Barde, Amit and Gupta, Anubha and Dey, Arindam and Lee, Gun and others},
    journal={Computers \& Graphics},
    volume={94},
    pages={62--75},
    year={2021},
    publisher={Elsevier}
    }
    Researchers have employed hyperscanning, a technique used to simultaneously record neural activity from multiple participants, in real-world collaborations. However, to the best of our knowledge, no prior study has used hyperscanning in Virtual Reality (VR). The aims of this study were: firstly, to replicate results of inter-brain synchrony reported in the existing literature for a real-world task; and secondly, to explore whether inter-brain synchrony could be elicited in a Virtual Environment (VE). This paper reports on three pilot studies in two different settings (real-world and VR). Paired participants performed two sessions of a finger-pointing exercise separated by a finger-tracking exercise, during which their neural activity was simultaneously recorded by electroencephalography (EEG) hardware. Using Phase Locking Value (PLV) analysis, VR was found to induce inter-brain synchrony similar to that seen in the real world. Further, it was observed that the finger-pointing exercise activated the same neural areas in both the real world and VR. Based on these results, we infer that VR can be used to enhance inter-brain synchrony in collaborative tasks carried out in a VE. In particular, we have been able to demonstrate that changing visual perspective in VR is capable of eliciting inter-brain synchrony. This demonstrates that VR could be an exciting platform for exploring the phenomenon of inter-brain synchrony further and providing a deeper understanding of the neuroscience of human communication.
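    The Phase Locking Value (PLV) mentioned above measures how consistently the phase difference between two signals is maintained over time: a value of 1 means a perfectly constant phase difference, 0 means no phase relationship. The snippet below is a generic illustration of that computation (band-pass filtering, channel selection and epoching omitted); it is not the authors' analysis code.

      import numpy as np
      from scipy.signal import hilbert

      def plv(x, y):
          """Phase Locking Value between two equally long 1-D signals."""
          phase_x = np.angle(hilbert(x))   # instantaneous phase of x
          phase_y = np.angle(hilbert(y))   # instantaneous phase of y
          return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

      # Example: two noisy 10 Hz signals with a fixed phase offset give a PLV near 1.
      t = np.linspace(0, 2, 512)
      a = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
      b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * np.random.randn(t.size)
      print(plv(a, b))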
  • Grand Challenges for Augmented Reality
    Mark Billinghurst

    Billinghurst, M. (2021). Grand Challenges for Augmented Reality. Frontiers in Virtual Reality, 2, 12.

    @article{billinghurst2021grand,
    title={Grand Challenges for Augmented Reality},
    author={Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={12},
    year={2021},
    publisher={Frontiers}
    }
  • Bringing full-featured mobile phone interaction into virtual reality
    H. Bai, L. Zhang, J. Yang, M. Billinghurst

    Bai, H., Zhang, L., Yang, J., & Billinghurst, M. (2021). Bringing full-featured mobile phone interaction into virtual reality. Computers & Graphics, 97, 42-53.

    @article{bai2021bringing,
    title={Bringing full-featured mobile phone interaction into virtual reality},
    author={Bai, Huidong and Zhang, Li and Yang, Jing and Billinghurst, Mark},
    journal={Computers \& Graphics},
    volume={97},
    pages={42--53},
    year={2021},
    publisher={Elsevier}
    }

    Virtual Reality (VR) Head-Mounted Display (HMD) technology immerses a user in a computer generated virtual environment. However, a VR HMD also blocks the users’ view of their physical surroundings, and so prevents them from using their mobile phones in a natural manner. In this paper, we present a novel Augmented Virtuality (AV) interface that enables people to naturally interact with a mobile phone in real time in a virtual environment. The system allows the user to wear a VR HMD while seeing his/her 3D hands captured by a depth sensor and rendered in different styles, and enables the user to operate a virtual mobile phone aligned with their real phone. We conducted a formal user study to compare the AV interface with physical touch interaction on user experience in five mobile applications. Participants reported that our system brought the real mobile phone into the virtual world. Unfortunately, the experiment results indicated that using a phone with our AV interfaces in VR was more difficult than the regular smartphone touch interaction, with increased workload and lower system usability, especially for a typing task. We ran a follow-up study to compare different hand visualizations for text typing using the AV interface. Participants felt that a skin-colored hand visualization method provided better usability and immersiveness than other hand rendering styles.

  • SecondSight: A Framework for Cross-Device Augmented Reality Interfaces
    Carolin Reichherzer, Jack Fraser, Damien Constantine Rompapas, Mark Billinghurst.

    Reichherzer, C., Fraser, J., Rompapas, D. C., & Billinghurst, M. (2021, May). SecondSight: A Framework for Cross-Device Augmented Reality Interfaces. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-6).

    @inproceedings{reichherzer2021secondsight,
    title={SecondSight: A Framework for Cross-Device Augmented Reality Interfaces},
    author={Reichherzer, Carolin and Fraser, Jack and Rompapas, Damien Constantine and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--6},
    year={2021}
    }
    This paper describes a modular framework developed to facilitate design space exploration of cross-device Augmented Reality (AR) interfaces that combine an AR head-mounted display (HMD) with a smartphone. Currently, there is growing interest in how AR HMDs can be used with smartphones to improve the user's AR experience. In this work, we describe a framework that enables rapid prototyping and evaluation of such interfaces. Our system supports different modes of interaction, content placement, and simulated AR HMD field of view, to assess which combination is best suited and to inform future researchers with design recommendations. We provide examples of how the framework could be used to create sample applications, the types of studies that could be supported, and example results from a simple pilot study.
  • Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration
    Allison Jing, Kieran May, Gun Lee, Mark Billinghurst.

    Jing, A., May, K., Lee, G., & Billinghurst, M. (2021). Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration. Frontiers in Virtual Reality, 2, 79.

    @article{jing2021eye,
    title={Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration},
    author={Jing, Allison and May, Kieran and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={79},
    year={2021},
    publisher={Frontiers}
    }
    Gaze is one of the predominant communication cues and can provide valuable implicit information, such as intention or focus, when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics impact real-life physical tasks during face-to-face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Using three independent tasks, we found that all BDC visualisations are rated significantly better at representing joint attention and user intention compared to a non-collaborative (NC) condition, and hence are considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, is perceived as significantly more effective, as it encourages mutual gaze awareness with relatively low mental effort in a less constrained workspace. In addition, by offering an additional virtual representation that compensates for verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.
  • First Contact‐Take 2: Using XR technology as a bridge between Māori, Pākehā and people from other cultures in Aotearoa, New Zealand
    Mairi Gunn, Mark Billinghurst, Huidong Bai, Prasanth Sasikumar.

    Gunn, M., Billinghurst, M., Bai, H., & Sasikumar, P. (2021). First Contact‐Take 2: Using XR technology as a bridge between Māori, Pākehā and people from other cultures in Aotearoa, New Zealand. Virtual Creativity, 11(1), 67-90.

    @article{gunn2021first,
    title={First Contact-Take 2: Using XR technology as a bridge between M{\=a}ori, P{\=a}keh{\=a} and people from other cultures in Aotearoa, New Zealand},
    author={Gunn, Mairi and Billinghurst, Mark and Bai, Huidong and Sasikumar, Prasanth},
    journal={Virtual Creativity},
    volume={11},
    number={1},
    pages={67--90},
    year={2021},
    publisher={Intellect}
    }
    The art installation common/room explores human‐digital‐human encounter across cultural differences. It comprises a suite of extended reality (XR) experiences that use technology as a bridge to help support human connections with a view to overcoming intercultural discomfort (racism). The installations are exhibited as an informal dining room, where each table hosts a distinct experience designed to bring people together in a playful yet meaningful way. Each experience uses different technologies, including 360° 3D virtual reality (VR) in a headset (common/place), 180° 3D projection (Common Sense) and augmented reality (AR) (Come to the Table! and First Contact ‐ Take 2). This article focuses on the latter, First Contact ‐ Take 2, in which visitors are invited to sit at a dining table, wear an AR head-mounted display and encounter a recorded volumetric representation of an Indigenous Māori woman seated opposite them. She speaks directly to the visitor out of a culture that has refined collective endeavour and relational psychology over millennia. The contextual and methodological framework for this research is international commons scholarship and practice that sits within a set of relationships outlined by the Mātike Mai Report on constitutional transformation for Aotearoa, New Zealand. The goal is to practise and build new relationships between Māori and Tauiwi, including Pākehā.
  • ShowMeAround: Giving Virtual Tours Using Live 360 Video
    Alaeddin Nassani, Li Zhang, Huidong Bai, Mark Billinghurst.

    Nassani, A., Zhang, L., Bai, H., & Billinghurst, M. (2021, May). ShowMeAround: Giving Virtual Tours Using Live 360 Video. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-4).

    @inproceedings{nassani2021showmearound,
    title={ShowMeAround: Giving Virtual Tours Using Live 360 Video},
    author={Nassani, Alaeddin and Zhang, Li and Bai, Huidong and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--4},
    year={2021}
    }
    This demonstration presents ShowMeAround, a video conferencing system designed to allow people to give virtual tours over live 360-video. Using ShowMeAround a host presenter walks through a real space and can live stream a 360-video view to a small group of remote viewers. The ShowMeAround interface has features such as remote pointing and viewpoint awareness to support natural collaboration between the viewers and host presenter. The system also enables sharing of pre-recorded high resolution 360 video and still images to further enhance the virtual tour experience.
  • Manipulating Avatars for Enhanced Communication in Extended Reality
    Jonathon Hart, Thammathip Piumsomboon, Gun A. Lee, Ross T. Smith, Mark Billinghurst.

    Hart, J. D., Piumsomboon, T., Lee, G. A., Smith, R. T., & Billinghurst, M. (2021, May). Manipulating Avatars for Enhanced Communication in Extended Reality. In 2021 IEEE International Conference on Intelligent Reality (ICIR) (pp. 9-16). IEEE.

    @inproceedings{hart2021manipulating,
    title={Manipulating Avatars for Enhanced Communication in Extended Reality},
    author={Hart, Jonathon Derek and Piumsomboon, Thammathip and Lee, Gun A and Smith, Ross T and Billinghurst, Mark},
    booktitle={2021 IEEE International Conference on Intelligent Reality (ICIR)},
    pages={9--16},
    year={2021},
    organization={IEEE}
    }
    Avatars are common virtual representations used in Extended Reality (XR) to support interaction and communication between remote collaborators. Recent advancements in wearable displays provide features such as eye and face-tracking, to enable avatars to express non-verbal cues in XR. The research in this paper investigates the impact of avatar visualization on Social Presence and user’s preference by simulating face tracking in an asymmetric XR remote collaboration between a desktop user and a Virtual Reality (VR) user. Our study was conducted between pairs of participants, one on a laptop computer supporting face tracking and the other being immersed in VR, experiencing different visualization conditions. They worked together to complete an island survival task. We found that the users preferred 3D avatars with facial expressions placed in the scene, compared to 2D screen attached avatars without facial expressions. Participants felt that the presence of the collaborator’s avatar improved overall communication, yet Social Presence was not significantly different between conditions as they mainly relied on audio for communication.
  • Adapting Fitts’ Law and N-Back to Assess Hand Proprioception
    Tamil Gunasekaran, Ryo Hajika, Chloe Dolma Si Ying Haigh, Yun Suen Pai, Danielle Lottridge, Mark Billinghurst.

    Gunasekaran, T. S., Hajika, R., Haigh, C. D. S. Y., Pai, Y. S., Lottridge, D., & Billinghurst, M. (2021, May). Adapting Fitts’ Law and N-Back to Assess Hand Proprioception. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-7).

    @inproceedings{gunasekaran2021adapting,
    title={Adapting Fitts’ Law and N-Back to Assess Hand Proprioception},
    author={Gunasekaran, Tamil Selvan and Hajika, Ryo and Haigh, Chloe Dolma Si Ying and Pai, Yun Suen and Lottridge, Danielle and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--7},
    year={2021}
    }
    Proprioception is the body's ability to sense the position and movement of each limb, as well as the amount of effort exerted on or by them. Methods to assess proprioception have been introduced before, yet there is little to no work on assessing the degree of proprioception of body parts for use cases like gesture recognition in wearable computing. We propose the use of Fitts' law coupled with the N-Back task to evaluate proprioception of the hand. We evaluate 15 distinct points on the back of the hand and assess them using an extended 3D Fitts' law. Our results show that the index of difficulty of tapping points from thumb to pinky increases gradually, with a linear regression factor of 0.1144. Additionally, participants perform the tap before performing the N-Back task. From these results, we discuss the fundamental limitations and suggest how Fitts' law can be further extended to assess proprioception.
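    For context, Fitts' index of difficulty is most often computed with the Shannon formulation ID = log2(D/W + 1), where D is the distance to the target and W its (effective) width. The abstract does not state which 3D extension the authors used, so the snippet below is only the generic 1-D form for illustration.

      import math

      def index_of_difficulty(distance, width):
          """Shannon formulation of Fitts' index of difficulty, in bits."""
          return math.log2(distance / width + 1)

      # Example: tapping a target 80 mm away with a 10 mm effective width.
      print(index_of_difficulty(distance=80, width=10))   # ~3.17 bits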
  • XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing
    Prasanth Sasikumar, Max Collins, Huidong Bai, Mark Billinghurst.

    Sasikumar, P., Collins, M., Bai, H., & Billinghurst, M. (2021, May). XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-4).

    @inproceedings{sasikumar2021xrtb,
    title={XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing},
    author={Sasikumar, Prasanth and Collins, Max and Bai, Huidong and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--4},
    year={2021}
    }
    We present XRTeleBridge (XRTB), an application that integrates a Mixed Reality (MR) interface into existing teleconferencing solutions like Zoom. Unlike a conventional webcam, XRTB provides a window into the virtual world to demonstrate and visualize content. Participants can join via webcam or via a head mounted display (HMD) in a Virtual Reality (VR) environment, and users can embody 3D avatars with natural gestures and eye gaze. A camera in the virtual environment acts as the video feed to the teleconferencing software, and an interface resembling a tablet mirrors the teleconferencing window inside the virtual environment, enabling the participant in the VR environment to see the webcam participants in real time. This allows the presenter to view and interact with other participants seamlessly. To demonstrate the system's functionality, we created a virtual chemistry lab environment and presented an example lesson using the virtual space, virtual objects and effects.
  • Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR
    Ihshan Gumilar, Amit Barde, Ashkan Hayati, Mark Billinghurst, Gun Lee, Abdul Momin, Charles Averill, Arindam Dey.

    Gumilar, I., Barde, A., Hayati, A. F., Billinghurst, M., Lee, G., Momin, A., ... & Dey, A. (2021, May). Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-7).

    @inproceedings{gumilar2021connecting,
    title={Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR},
    author={Gumilar, Ihshan and Barde, Amit and Hayati, Ashkan F and Billinghurst, Mark and Lee, Gun and Momin, Abdul and Averill, Charles and Dey, Arindam},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--7},
    year={2021}
    }
    Hyperscanning is an emerging method for measuring two or more brains simultaneously. This method allows researchers to simultaneously record neural activity from two or more people. While this method has been extensively implemented over the last five years in the real world to study inter-brain synchrony, little work has been undertaken on the use of hyperscanning in virtual environments. Preliminary research in the area demonstrates that inter-brain synchrony in virtual environments can be achieved in a manner similar to that seen in the real world. The study described in this paper proposes to further research in the area by studying how non-verbal communication cues in social interactions in virtual environments can affect inter-brain synchrony. In particular, we concentrate on the role eye gaze plays in inter-brain synchrony. The aim of this research is to explore how eye gaze affects inter-brain synchrony between users in a collaborative virtual environment.
  • A Review of Hyperscanning and Its Use in Virtual Environments
    Amit Barde, Ihshan Gumilar, Ashkan F. Hayati, Arindam Dey, Gun Lee, Mark Billinghurst

    Barde, A., Gumilar, I., Hayati, A. F., Dey, A., Lee, G., & Billinghurst, M. (2020, December). A Review of Hyperscanning and Its Use in Virtual Environments. In Informatics (Vol. 7, No. 4, p. 55). Multidisciplinary Digital Publishing Institute.

    @inproceedings{barde2020review,
    title={A Review of Hyperscanning and Its Use in Virtual Environments},
    author={Barde, Amit and Gumilar, Ihshan and Hayati, Ashkan F and Dey, Arindam and Lee, Gun and Billinghurst, Mark},
    booktitle={Informatics},
    volume={7},
    number={4},
    pages={55},
    year={2020},
    organization={Multidisciplinary Digital Publishing Institute}
    }
  • Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning
    Amit Barde, Nastaran Saffaryazdi, P. Withana, N. Patel, Prasanth Sasikumar, Mark Billinghurst

    Barde, A., Saffaryazdi, N., Withana, P., Patel, N., Sasikumar, P., & Billinghurst, M. (2019, October). Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning. In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 338-339). IEEE.

    @inproceedings{barde2019inter,
    title={Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning},
    author={Barde, Amit and Saffaryazdi, Nastaran and Withana, Pawan and Patel, Nakul and Sasikumar, Prasanth and Billinghurst, Mark},
    booktitle={2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={338--339},
    year={2019},
    organization={IEEE}
    }
    Inter-brain connectivity between pairs of people was explored during a finger tracking task in the real-world and in Virtual Reality (VR). This was facilitated by the use of a dual EEG set-up that allowed us to use hyperscanning to simultaneously record the neural activity of both participants. We found that similar levels of inter-brain synchrony can be elicited in the real-world and VR for the same task. This is the first time that hyperscanning has been used to compare brain activity for the same task performed in real and virtual environments.
  • NeuralDrum: Perceiving Brain Synchronicity in XR Drumming
    Yun Suen Pai, Ryo Hajika, Kunal Gupta, Prasanth Sasikumar, Mark Billinghurst.

    Pai, Y. S., Hajika, R., Gupta, K., Sasikumar, P., & Billinghurst, M. (2020). NeuralDrum: Perceiving Brain Synchronicity in XR Drumming. In SIGGRAPH Asia 2020 Technical Communications (pp. 1-4).

    @incollection{pai2020neuraldrum,
    title={NeuralDrum: Perceiving Brain Synchronicity in XR Drumming},
    author={Pai, Yun Suen and Hajika, Ryo and Gupta, Kunal and Sasikumar, Prasanth and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2020 Technical Communications},
    pages={1--4},
    year={2020}
    }
    Brain synchronicity is a neurological phenomenon in which two or more individuals have their brain activation in phase while performing a shared activity. We present NeuralDrum, an extended reality (XR) drumming experience that allows two players to drum together while their brain signals are simultaneously measured. We calculate the Phase Locking Value (PLV) to determine their brain synchronicity and use this to directly affect their visual and auditory experience in the game, creating a closed feedback loop. In a pilot study, we logged and analysed the users’ brain signals and had them answer a subjective questionnaire regarding their perception of synchronicity with their partner and the overall experience. From the results, we discuss design implications to further improve NeuralDrum and propose methods to integrate brain synchronicity into interactive experiences.
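    The closed feedback loop described above can be sketched as a simple mapping from a sliding-window PLV estimate to rendering parameters. The function and the example mappings below (fog density, reverb mix) are illustrative assumptions, not the published implementation.

    def synchrony_to_level(plv_value, lo=0.2, hi=0.8):
        """Map a PLV estimate in [0, 1] to a normalised feedback level, clamped to [0, 1]."""
        return max(0.0, min(1.0, (plv_value - lo) / (hi - lo)))

    # Hypothetical mappings: the scene clears and the drums sound richer as players synchronise.
    level = synchrony_to_level(0.65)
    fog_density = 1.0 - level        # assumed visual mapping
    reverb_mix = 0.2 + 0.6 * level   # assumed audio mapping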
  • NapWell: An EOG-based Sleep Assistant Exploring the Effects of Virtual Reality on Sleep Onset
    Yun Suen Pai, Marsel L. Bait, Juyoung Lee, Jingjing Xu, Roshan L Peiris, Woontack Woo, Mark Billinghurst & Kai Kunze

    Pai, Y. S., Bait, M. L., Lee, J., Xu, J., Peiris, R. L., Woo, W., ... & Kunze, K. (2022). NapWell: an EOG-based sleep assistant exploring the effects of virtual reality on sleep onset. Virtual Reality, 26(2), 437-451.

    @article{pai2022napwell,
    title={NapWell: an EOG-based sleep assistant exploring the effects of virtual reality on sleep onset},
    author={Pai, Yun Suen and Bait, Marsel L and Lee, Juyoung and Xu, Jingjing and Peiris, Roshan L and Woo, Woontack and Billinghurst, Mark and Kunze, Kai},
    journal={Virtual Reality},
    volume={26},
    number={2},
    pages={437--451},
    year={2022},
    publisher={Springer}
    }
    We present NapWell, a sleep assistant that uses virtual reality (VR) to decrease sleep onset latency by providing realistic imagery distraction prior to sleep onset. Our prototype was built with commercial hardware at relatively low cost, making it replicable for future work and paving the way for more low-cost EOG-VR devices for sleep assistance. We conducted a user study (n=20) comparing different sleep conditions: no devices, a sleeping mask, a VR environment of the study room, and a VR environment preferred by the participant. During this period, we recorded the electrooculography (EOG) signal and sleep onset time using a finger tapping task (FTT). We found that VR was able to significantly decrease sleep onset latency. We also developed a machine learning model based on EOG signals that can predict sleep onset with a cross-validated accuracy of 70.03%. The study demonstrates the feasibility of VR as a tool to decrease sleep onset latency, as well as the use of embedded EOG sensors with VR for automatic sleep detection.
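    The sketch below shows the kind of cross-validated EOG classifier the abstract refers to: simple per-window features fed into a scikit-learn model. The feature choices, window length, and placeholder data are assumptions for illustration; the paper's model and its 70.03% accuracy come from real recordings.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def eog_window_features(window):
        """Per-window features: variance, line length, and a crude blink-like peak count."""
        diff = np.diff(window)
        return [np.var(window), np.sum(np.abs(diff)), np.sum(np.abs(window) > 3 * np.std(window))]

    rng = np.random.default_rng(0)
    windows = rng.standard_normal((200, 512))   # placeholder EOG windows
    labels = rng.integers(0, 2, 200)            # placeholder labels: 1 = sleep onset, 0 = awake
    X = np.array([eog_window_features(w) for w in windows])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, labels, cv=5).mean())  # cross-validated accuracy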
  • RaITIn: Radar-Based Identification for Tangible Interactions
    Tamil Selvan Gunasekaran, Ryo Hajika, Yun Suen Pai, Eiji Hayashi, Mark Billinghurst

    Gunasekaran, T. S., Hajika, R., Pai, Y. S., Hayashi, E., & Billinghurst, M. (2022, April). RaITIn: Radar-Based Identification for Tangible Interactions. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-7).

    @inproceedings{gunasekaran2022raitin,
    title={RaITIn: Radar-Based Identification for Tangible Interactions},
    author={Gunasekaran, Tamil Selvan and Hajika, Ryo and Pai, Yun Suen and Hayashi, Eiji and Billinghurst, Mark},
    booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--7},
    year={2022}
    }
    Radar is primarily used for applications like tracking and large-scale ranging, and its use for object identification has rarely been explored. This paper introduces RaITIn, a radar-based identification (ID) method for tangible interactions. Unlike conventional radar solutions, RaITIn can track and identify objects on a tabletop scale. We use frequency modulated continuous wave (FMCW) radar sensors to classify different objects embedded with low-cost radar reflectors of varying sizes on a tabletop setup. We also introduce Stackable IDs, where different objects can be stacked and combined to produce unique IDs. As a result, RaITIn can accurately identify visually identical objects embedded with different low-cost reflector configurations. When combined with radar’s tracking capability, this enables novel tabletop interaction modalities. We discuss possible applications and areas for future work.
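    As a rough illustration of radar-based identification, the sketch below turns an FMCW beat signal into a range profile with an FFT and matches it against stored reflector signatures by cosine similarity. This is a conceptual sketch only; RaITIn uses a commercial FMCW sensor and a trained classifier.

    import numpy as np

    def range_profile(beat_signal):
        """Magnitude spectrum of the de-chirped (beat) signal; peaks correspond to reflector ranges."""
        return np.abs(np.fft.rfft(beat_signal * np.hanning(beat_signal.size)))

    def identify(profile, signatures):
        """Return the label whose stored signature has the highest cosine similarity with the profile."""
        scores = {label: np.dot(profile, sig) / (np.linalg.norm(profile) * np.linalg.norm(sig))
                  for label, sig in signatures.items()}
        return max(scores, key=scores.get)

    # Toy example with synthetic beat signals standing in for two reflector configurations.
    n = np.arange(256)
    signatures = {"tokenA": range_profile(np.sin(0.20 * n)), "tokenB": range_profile(np.sin(0.40 * n))}
    print(identify(range_profile(np.sin(0.21 * n)), signatures))  # expected: tokenA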
  • Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR
    Ihshan Gumilar, Amit Barde, Prasanth Sasikumar, Mark Billinghurst, Ashkan F. Hayati, Gun Lee, Yuda Munarko, Sanjit Singh, Abdul Momin

    Gumilar, I., Barde, A., Sasikumar, P., Billinghurst, M., Hayati, A. F., Lee, G., ... & Momin, A. (2022, April). Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-7).

    @inproceedings{gumilar2022inter,
    title={Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR},
    author={Gumilar, Ihshan and Barde, Amit and Sasikumar, Prasanth and Billinghurst, Mark and Hayati, Ashkan F and Lee, Gun and Munarko, Yuda and Singh, Sanjit and Momin, Abdul},
    booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--7},
    year={2022}
    }
    Brain activity sometimes synchronises when people collaborate on real-world tasks. Understanding this process could lead to improvements in face-to-face and remote collaboration. In this paper we report on an experiment exploring the relationship between eye gaze and inter-brain synchrony in Virtual Reality (VR). The experiment recruited pairs who were asked to perform finger-tracking exercises in VR under three different gaze conditions: averted, direct, and natural, while their brain activity was recorded. We found that gaze direction has a significant effect on inter-brain synchrony during collaboration on this task in VR. This shows that representing natural gaze could influence inter-brain synchrony in VR, which may have implications for avatar design in social VR. We discuss the implications of our research and possible directions for future work.
  • A review on communication cues for augmented reality based remote guidance
    Weidong Huang, Mathew Wakefield, Troels Ammitsbøl Rasmussen, Seungwon Kim & Mark Billinghurst

    Huang, W., Wakefield, M., Rasmussen, T. A., Kim, S., & Billinghurst, M. (2022). A review on communication cues for augmented reality based remote guidance. Journal on Multimodal User Interfaces, 1-18.

    @article{huang2022review,
    title={A review on communication cues for augmented reality based remote guidance},
    author={Huang, Weidong and Wakefield, Mathew and Rasmussen, Troels Ammitsb{\o}l and Kim, Seungwon and Billinghurst, Mark},
    journal={Journal on Multimodal User Interfaces},
    pages={1--18},
    year={2022},
    publisher={Springer}
    }
    Remote guidance on physical tasks is a type of collaboration in which a local worker is guided by a remote helper to operate on a set of physical objects. It has many applications in industrial sectors, such as remote maintenance, and how to support this type of remote collaboration has been researched for almost three decades. Although a range of modern computing tools and systems have been proposed, developed and used to support remote guidance in different application scenarios, it is essential to provide communication cues in a shared visual space to achieve common ground for effective communication and collaboration. In this paper, we conduct a selective review to summarize communication cues, the approaches that implement them, and their effects on augmented reality based remote guidance. We also discuss challenges and propose possible future research and development directions.
  • Seeing is believing: AR-assisted blind area assembly to support hand–eye coordination
    Shuo Feng, Weiping He, Shaohua Zhang & Mark Billinghurst

    Feng, S., He, W., Zhang, S., & Billinghurst, M. (2022). Seeing is believing: AR-assisted blind area assembly to support hand–eye coordination. The International Journal of Advanced Manufacturing Technology, 119(11), 8149-8158.

    @article{feng2022seeing,
    title={Seeing is believing: AR-assisted blind area assembly to support hand--eye coordination},
    author={Feng, Shuo and He, Weiping and Zhang, Shaohua and Billinghurst, Mark},
    journal={The International Journal of Advanced Manufacturing Technology},
    volume={119},
    number={11},
    pages={8149--8158},
    year={2022},
    publisher={Springer}
    }
    The assembly stage is a vital phase of the production process, and there are still many manual tasks in assembly operations. One of the challenges of manual assembly is blind-area assembly, since visual obstruction by the hands or a part can lead to more errors and lower assembly efficiency. In this study, we developed an AR-assisted assembly system that solves the occlusion problem. Assembly workers can use the system to achieve comprehensive and precise hand–eye coordination (HEC). Additionally, we designed and conducted a user evaluation experiment to measure the learnability, usability, and mental effort of the system under different HEC modes. Results indicate that hand position is the first piece of visual information that should be considered in blind areas. In addition, the Intact HEC mode can effectively reduce the difficulty of learning and the mental burden of operation, while at the same time improving efficiency.
  • Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals
    Arindam Dey, Amit Barde, Bowen Yuan, Ekansh Sareen, Chelsea Dobbins, Aaron Goh, Gaurav Gupta, Anubha Gupta, Mark Billinghurst

    Dey, A., Barde, A., Yuan, B., Sareen, E., Dobbins, C., Goh, A., ... & Billinghurst, M. (2022). Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals. International Journal of Human-Computer Studies, 160, 102762.

    @article{dey2022effects,
    title={Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals},
    author={Dey, Arindam and Barde, Amit and Yuan, Bowen and Sareen, Ekansh and Dobbins, Chelsea and Goh, Aaron and Gupta, Gaurav and Gupta, Anubha and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    volume={160},
    pages={102762},
    year={2022},
    publisher={Elsevier}
    }
    Virtual Reality (VR) interfaces provide an immersive medium to interact with the digital world. Most VR interfaces require physical interaction using handheld controllers, but there are alternative interaction methods that can support different use cases and users. Interaction methods in VR are primarily evaluated based on their usability; however, their differences in neurological and physiological effects remain less investigated. In this paper, along with traditional qualitative metrics such as presence, affect, and system usability, we explore the neurophysiological effects (brain signals and electrodermal activity) of using an alternative facial expression interaction method to interact with VR interfaces. This form of interaction was also compared with traditional handheld controllers. Three different environments, each with a different experience to interact with, were used: happy (butterfly catching), neutral (object picking), and scary (zombie shooting). Overall, we noticed an effect of interaction method on gamma activity in the brain and on skin conductance. For some aspects of presence, facial expressions outperformed controllers, but controllers were found to be better than facial expressions in terms of usability.
  • HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality
    Zhang, L., He, W., Cao, Z., Wang, S., Bai, H., & Billinghurst, M.

    Zhang, L., He, W., Cao, Z., Wang, S., Bai, H., & Billinghurst, M. (2022). HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality. International Journal of Human–Computer Interaction, 1-15.

    @article{zhang2022hapticproxy,
    title={HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality},
    author={Zhang, Li and He, Weiping and Cao, Zhiwei and Wang, Shuxia and Bai, Huidong and Billinghurst, Mark},
    journal={International Journal of Human--Computer Interaction},
    pages={1--15},
    year={2022},
    publisher={Taylor \& Francis}
    }
    Consistent visual and haptic feedback is an important way to improve the user experience when interacting with virtual objects. However, the perception provided in Augmented Reality (AR) mainly comes from visual cues and amorphous tactile feedback. This work explores how to simulate positional vibrotactile feedback (PVF) with multiple vibration motors when colliding with virtual objects in AR. By attaching spatially distributed vibration motors to a physical haptic proxy, users can obtain an augmented collision experience with positional vibration sensations at the contact point with virtual objects. We first developed a prototype system and conducted a user study to optimize the design parameters. We then investigated the effect of PVF on user performance and experience in a virtual and real object alignment task in an AR environment. We found that this approach could significantly reduce the alignment offset between virtual and physical objects with a tolerable increase in task completion time. With the PVF cue, participants obtained a more comprehensive perception of the offset direction, more useful information, and a more authentic AR experience.
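    A minimal sketch of the positional vibrotactile feedback idea, assuming a set of motor positions on the proxy and inverse-distance weighting of intensity around the virtual contact point; the motor layout and falloff constant are illustrative, not the paper's calibrated parameters.

    import numpy as np

    def motor_intensities(contact_point, motor_positions, falloff=0.05):
        """Distribute vibration intensity across motors by inverse distance to the contact point."""
        d = np.linalg.norm(motor_positions - contact_point, axis=1)
        w = 1.0 / (d + falloff)
        return w / w.max()   # normalised to [0, 1], strongest motor at full intensity

    motors = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]])
    print(motor_intensities(np.array([0.02, 0.03, 0.0]), motors))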
  • Octopus Sensing: A Python library for human behavior studies
    Nastaran Saffaryazdi, Aidin Gharibnavaz, Mark Billinghurst

    Saffaryazdi, N., Gharibnavaz, A., & Billinghurst, M. (2022). Octopus Sensing: A Python library for human behavior studies. Journal of Open Source Software, 7(71), 4045.

    @article{saffaryazdi2022octopus,
    title={Octopus Sensing: A Python library for human behavior studies},
    author={Saffaryazdi, Nastaran and Gharibnavaz, Aidin and Billinghurst, Mark},
    journal={Journal of Open Source Software},
    volume={7},
    number={71},
    pages={4045},
    year={2022}
    }
    Designing user studies and collecting data are critical to exploring and automatically recognizing human behavior. It is currently possible to use a range of sensors to capture heart rate, brain activity, skin conductance, and a variety of other physiological cues. These data can be combined to provide information about a user’s emotional state, cognitive load, or other factors. However, even when data are collected correctly, synchronizing data from multiple sensors is time-consuming and prone to errors. Failure to record and synchronize data is likely to result in errors in analysis and results, as well as the need to repeat time-consuming experiments several times. To overcome these challenges, Octopus Sensing facilitates synchronous data acquisition from various sources and provides utilities for designing user studies, real-time monitoring, and offline data visualization.
    The primary aim of Octopus Sensing is to provide a simple scripting interface so that people with basic or no software development skills can define sensor-based experiment scenarios with less effort.
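    The sketch below illustrates the synchronisation problem the library addresses, using one shared clock and a common event marker across sensor threads. It is a conceptual example in plain Python, not the Octopus Sensing API.

    import threading, time, queue

    class SensorStream(threading.Thread):
        """Simulated sensor that stamps every sample against a shared experiment clock."""
        def __init__(self, name, rate_hz, clock, out):
            super().__init__(daemon=True)
            self.name, self.period, self.clock, self.out = name, 1.0 / rate_hz, clock, out
            self.running = True

        def run(self):
            while self.running:
                sample = 0.0                                     # placeholder for a real device read
                self.out.put((self.name, self.clock(), sample))  # shared timestamp
                time.sleep(self.period)

    start = time.monotonic()
    clock = lambda: time.monotonic() - start
    data = queue.Queue()
    streams = [SensorStream("EEG", 250, clock, data), SensorStream("GSR", 32, clock, data)]
    for s in streams:
        s.start()
    data.put(("MARKER", clock(), "stimulus_onset"))  # one event marker shared by all streams
    time.sleep(0.1)
    for s in streams:
        s.running = False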
  • Emotion Recognition in Conversations Using Brain and Physiological Signals
    Nastaran Saffaryazdi, Yenushka Goonesekera, Nafiseh Saffaryazdi, Nebiyou Daniel Hailemariam, Ebasa Girma Temesgen, Suranga Nanayakkara, Elizabeth Broadbent, Mark Billinghurst

    Saffaryazdi, N., Goonesekera, Y., Saffaryazdi, N., Hailemariam, N. D., Temesgen, E. G., Nanayakkara, S., ... & Billinghurst, M. (2022, March). Emotion Recognition in Conversations Using Brain and Physiological Signals. In 27th International Conference on Intelligent User Interfaces (pp. 229-242).

    @inproceedings{saffaryazdi2022emotion,
    title={Emotion recognition in conversations using brain and physiological signals},
    author={Saffaryazdi, Nastaran and Goonesekera, Yenushka and Saffaryazdi, Nafiseh and Hailemariam, Nebiyou Daniel and Temesgen, Ebasa Girma and Nanayakkara, Suranga and Broadbent, Elizabeth and Billinghurst, Mark},
    booktitle={27th International Conference on Intelligent User Interfaces},
    pages={229--242},
    year={2022}
    }
    Emotions are complicated psycho-physiological processes that are related to numerous external and internal changes in the body. They play an essential role in human-human interaction and can be important for human-machine interfaces. Automatically recognizing emotions in conversation could be applied in many domains such as healthcare, education, social interaction, and entertainment. Facial expressions, speech, and body gestures are primary cues that have been widely used for recognizing emotions in conversation. However, these cues can be ineffective because they cannot reveal underlying emotions when people involuntarily or deliberately conceal them. Researchers have shown that analyzing brain activity and physiological signals can lead to more reliable emotion recognition since these signals generally cannot be controlled. However, these body responses in emotional situations have rarely been explored in interactive tasks like conversations. This paper explores and discusses the performance and challenges of using brain activity and other physiological signals to recognize emotions in a face-to-face conversation. We present an experimental setup for stimulating spontaneous emotions using a face-to-face conversation and creating a dataset of brain and physiological activity. We then describe our analysis strategies for recognizing emotions using Electroencephalography (EEG), Photoplethysmography (PPG), and Galvanic Skin Response (GSR) signals in subject-dependent and subject-independent approaches. Finally, we describe new directions for future research in conversational emotion recognition and the limitations and challenges of our approach.
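    The subject-independent analysis mentioned above is commonly implemented as leave-one-subject-out cross-validation, as sketched below with scikit-learn. The synthetic fused features, labels, and classifier choice are placeholders, not the paper's configuration.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.standard_normal((120, 16))        # placeholder fused EEG/PPG/GSR features
    y = rng.integers(0, 2, 120)               # placeholder binary emotion labels
    subjects = np.repeat(np.arange(12), 10)   # 12 participants, 10 trials each

    scores = cross_val_score(SVC(kernel="rbf"), X, y, groups=subjects, cv=LeaveOneGroupOut())
    print(f"Subject-independent accuracy: {scores.mean():.2f}")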
  • Asymmetric interfaces with stylus and gesture for VR sketching
    Qianyuan Zou; Huidong Bai; Lei Gao; Allan Fowler; Mark Billinghurst

    Zou, Q., Bai, H., Gao, L., Fowler, A., & Billinghurst, M. (2022, March). Asymmetric interfaces with stylus and gesture for VR sketching. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 968-969). IEEE.

    @inproceedings{zou2022asymmetric,
    title={Asymmetric interfaces with stylus and gesture for VR sketching},
    author={Zou, Qianyuan and Bai, Huidong and Gao, Lei and Fowler, Allan and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={968--969},
    year={2022},
    organization={IEEE}
    }
    Virtual Reality (VR) can be used for design and artistic applications. However, traditional symmetrical input devices are not specifically designed as creative tools and may not fully meet artists’ needs. In this demonstration, we present a variety of tool-based asymmetric VR interfaces to help artists create artwork with better performance and less effort. These interaction methods allow artists to hold different tools in their hands, such as wearing a data glove on the left hand and holding a stylus in the right hand. We demonstrate this with a stylus and glove based sketching interface. We conducted a pilot study showing that most users prefer to create art with different tools in each hand.
  • Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration
    Allison Jing; Gun Lee; Mark Billinghurst

    Jing, A., Lee, G., & Billinghurst, M. (2022, March). Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 250-259). IEEE.

    @inproceedings{jing2022using,
    title={Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration},
    author={Jing, Allison and Lee, Gun and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={250--259},
    year={2022},
    organization={IEEE}
    }
    In this paper, we present a 360° panoramic Mixed Reality (MR) system that visualises shared gaze cues using contextual speech input to improve task coordination. We conducted two studies to evaluate the design of the MR gaze-speech interface, exploring combinations of visualisation style and context control level. Findings from the first study suggest that an explicit visual form that directly connects the collaborators’ shared gaze to the contextual conversation is preferred. The second study indicates that the gaze-speech modality shortens the coordination time needed to attend to the shared interest, making communication more natural and collaboration more effective. Qualitative feedback also suggests that having a constant joint gaze indicator provides a consistent bi-directional view while establishing a sense of co-presence during task collaboration. We discuss the implications for the design of collaborative MR systems and directions for future research.
  • Jamming in MR: Towards Real-Time Music Collaboration in Mixed Reality
    Ruben Schlagowski; Kunal Gupta; Silvan Mertes; Mark Billinghurst; Susanne Metzner; Elisabeth André

    Schlagowski, R., Gupta, K., Mertes, S., Billinghurst, M., Metzner, S., & André, E. (2022, March). Jamming in MR: Towards Real-Time Music Collaboration in Mixed Reality. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 854-855). IEEE.

    @inproceedings{schlagowski2022jamming,
    title={Jamming in MR: towards real-time music collaboration in mixed reality},
    author={Schlagowski, Ruben and Gupta, Kunal and Mertes, Silvan and Billinghurst, Mark and Metzner, Susanne and Andr{\'e}, Elisabeth},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={854--855},
    year={2022},
    organization={IEEE}
    }
    Recent pandemic-related contact restrictions have made it difficult for musicians to meet in person to make music. As a result, there has been an increased demand for applications that enable remote and real-time music collaboration. One desirable goal here is to give musicians a sense of social presence, to make them feel that they are “on site” with their musical partners. We conducted a focus group study to investigate the impact of remote jamming on users' affect. Further, we gathered user requirements for a Mixed Reality system that enables real-time jamming and developed a prototype based on these findings.
  • Supporting Jury Understanding of Expert Evidence in a Virtual Environment
    Carolin Reichherzer; Andrew Cunningham; Jason Barr; Tracey Coleman; Kurt McManus; Dion Sheppard; Scott Coussens; Mark Kohler; Mark Billinghurst; Bruce H. Thomas

    Reichherzer, C., Cunningham, A., Barr, J., Coleman, T., McManus, K., Sheppard, D., ... & Thomas, B. H. (2022, March). Supporting Jury Understanding of Expert Evidence in a Virtual Environment. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 615-624). IEEE.

    @inproceedings{reichherzer2022supporting,
    title={Supporting Jury Understanding of Expert Evidence in a Virtual Environment},
    author={Reichherzer, Carolin and Cunningham, Andrew and Barr, Jason and Coleman, Tracey and McManus, Kurt and Sheppard, Dion and Coussens, Scott and Kohler, Mark and Billinghurst, Mark and Thomas, Bruce H},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={615--624},
    year={2022},
    organization={IEEE}
    }
    This work investigates the use of Virtual Reality (VR) to present forensic evidence to the jury in a courtroom trial. The findings of a between-participants user study on comprehension of an expert statement are presented, examining the benefits and issues of using VR compared to traditional courtroom presentation (still images). Participants listened to a forensic scientist explain bloodstain spatter patterns while viewing a mock crime scene either in VR or as still images in video format. Under these conditions, we compared understanding of the expert domain, mental effort and content recall. We found that VR significantly improves the understanding of spatial information and knowledge acquisition. We also identify different patterns of user behaviour depending on the display method. We conclude with suggestions on how to best adapt evidence presentation to VR.
  • Tool-based asymmetric interaction for selection in VR.
    Qianyuan Zou; Huidong Bai; Gun Lee; Allan Fowler; Mark Billinghurst

    Zou, Q., Bai, H., Zhang, Y., Lee, G., Fowler, A., & Billinghurst, M. (2021). Tool-based asymmetric interaction for selection in VR. In SIGGRAPH Asia 2021 Technical Communications (pp. 1-4).

    @incollection{zou2021tool,
    title={Tool-based asymmetric interaction for selection in vr},
    author={Zou, Qianyuan and Bai, Huidong and Zhang, Yuewei and Lee, Gun and Fowler, Allan and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2021 Technical Communications},
    pages={1--4},
    year={2021}
    }
    Most mainstream Virtual Reality (VR) devices use a symmetric interaction design for input, yet common practice among artists suggests that asymmetric interaction, using different input tools in each hand, could be a better alternative for 3D modeling tasks in VR. In this paper, we explore the performance and usability of a tool-based asymmetric interaction method for a 3D object selection task in VR and compare it with a symmetric interface. The symmetric VR interface uses two identical handheld controllers to select points on a sphere, while the asymmetric interface uses a handheld controller and a stylus. We conducted a user study to compare these two interfaces and found that the asymmetric system was faster, required less workload, and was rated as having better usability. We also discuss opportunities for tool-based asymmetric input to optimize VR art workflows and directions for future research.
  • haptic HONGI: Reflections on collaboration in the transdisciplinary creation of an AR artwork in Creating Digitally
    Gunn, M., Campbell, A., Billinghurst, M., Sasikumar, P., Lawn, W., Muthukumarana, S.

  • Jitsi360: Using 360 images for live tours.
    Nassani, A., Bai, H., & Billinghurst, M.

  • Designing, Prototyping and Testing of 360-degree Spatial Audio Conferencing for Virtual Tours.
    Nassani, A., Barde, A., Bai, H., Nanayakkara, S., & Billinghurst, M.

  • Implementation of Attention-Based Spatial Audio for 360° Environments.
    Nassani, A., Barde, A., Bai, H., Nanayakkara, S., & Billinghurst, M.

  • The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality
    Allison Jing, Kieran May, Brandon Matthews, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran May, Brandon Matthews, Gun Lee, and Mark Billinghurst. 2022. The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 463 (November 2022), 27 pages. https://doi.org/10.1145/3555564

    @article{10.1145/3555564,
    author = {Jing, Allison and May, Kieran and Matthews, Brandon and Lee, Gun and Billinghurst, Mark},
    title = {The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality},
    year = {2022},
    issue_date = {November 2022},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    volume = {6},
    number = {CSCW2},
    url = {https://doi.org/10.1145/3555564},
    doi = {10.1145/3555564},
    abstract = {In a remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no gaze condition. Gaze behaviour visualisations enable communication to be less verbally complex therefore lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, showing both collaborator's gaze state, was the preferred condition since it enabled easy identification of shared interests and task progress.},
    journal = {Proc. ACM Hum.-Comput. Interact.},
    month = {nov},
    articleno = {463},
    numpages = {27},
    keywords = {gaze visualization, mixed reality remote collaboration, human-computer interaction}
    }
    In a remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no gaze condition. Gaze behaviour visualisations enable communication to be less verbally complex therefore lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, showing both collaborator's gaze state, was the preferred condition since it enabled easy identification of shared interests and task progress.
  • Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration
    Allison Jing; Kunal Gupta; Jeremy McDade; Gun A. Lee; Mark Billinghurst

    A. Jing, K. Gupta, J. McDade, G. A. Lee and M. Billinghurst, "Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration," 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Singapore, Singapore, 2022, pp. 837-846, doi: 10.1109/ISMAR55827.2022.00102.

    @INPROCEEDINGS{9995367,
    author={Jing, Allison and Gupta, Kunal and McDade, Jeremy and Lee, Gun A. and Billinghurst, Mark},
    booktitle={2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    title={Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration},
    year={2022},
    volume={},
    number={},
    pages={837-846},
    doi={10.1109/ISMAR55827.2022.00102}}
    In this paper, we share real-time collaborative gaze behaviours, hand pointing, gesturing, and heart rate visualisations between remote collaborators using a live 360° panoramic-video-based Mixed Reality (MR) system. We first ran a pilot study to explore visual designs that combine communication cues with biofeedback (heart rate), aiming to understand user perceptions of empathic collaboration. We then conducted a formal study to investigate the effect of modality (Gaze+Hand, Hand-only) and interface (Near-Gaze, Embodied). The results show that the Gaze+Hand modality in a Near-Gaze interface is significantly better at reducing task load, improving co-presence, enhancing understanding and tightening collaborative behaviours compared to the conventional Embodied hand-only experience. Ranked as the most preferred condition, the Gaze+Hand in Near-Gaze condition is perceived to reduce the need to divide attention to the collaborator’s physical location, although it feels slightly less natural compared to the embodied visualisations. In addition, the Gaze+Hand conditions also led to more joint attention and less hand pointing to align mutual understanding. Lastly, we provide a design guideline summarizing what we have learned from the studies about the relationship between modality, interface, and biofeedback.
  • Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration
    Allison Jing; Kunal Gupta; Jeremy McDade; Gun A. Lee; Mark Billinghurst

    Allison Jing, Kunal Gupta, Jeremy McDade, Gun Lee, and Mark Billinghurst. 2022. Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration. In ACM SIGGRAPH 2022 Posters (SIGGRAPH '22). Association for Computing Machinery, New York, NY, USA, Article 29, 1–2. https://doi.org/10.1145/3532719.3543213

    @inproceedings{10.1145/3532719.3543213,
    author = {Jing, Allison and Gupta, Kunal and McDade, Jeremy and Lee, Gun and Billinghurst, Mark},
    title = {Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration},
    year = {2022},
    isbn = {9781450393614},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3532719.3543213},
    doi = {10.1145/3532719.3543213},
    abstract = {In this poster, we present a live 360° panoramic-video based empathic Mixed Reality (MR) collaboration system that shares various Near-Gaze non-verbal communication cues including gaze, hand pointing, gesturing, and heart rate visualisations in real-time. The preliminary results indicate that the interface with the partner’s communication cues visualised close to the gaze point allows users to focus without dividing attention to the collaborator’s physical body movements yet still effectively communicate. Shared gaze visualisations coupled with deictic languages are primarily used to affirm joint attention and mutual understanding, while hand pointing and gesturing are used as secondary. Our approach provides a new way to help enable effective remote collaboration through varied empathic communication visualisations and modalities which covers different task properties and spatial setups.},
    booktitle = {ACM SIGGRAPH 2022 Posters},
    articleno = {29},
    numpages = {2},
    location = {Vancouver, BC, Canada},
    series = {SIGGRAPH '22}
    }
    In this poster, we present a live 360° panoramic-video based empathic Mixed Reality (MR) collaboration system that shares various Near-Gaze non-verbal communication cues including gaze, hand pointing, gesturing, and heart rate visualisations in real-time. The preliminary results indicate that the interface with the partner’s communication cues visualised close to the gaze point allows users to focus without dividing attention to the collaborator’s physical body movements yet still effectively communicate. Shared gaze visualisations coupled with deictic languages are primarily used to affirm joint attention and mutual understanding, while hand pointing and gesturing are used as secondary. Our approach provides a new way to help enable effective remote collaboration through varied empathic communication visualisations and modalities which covers different task properties and spatial setups.
  • eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues
    Allison Jing, Brandon Matthews, Kieran May, Thomas Clarke, Gun Lee, Mark Billinghurst

    Allison Jing, Brandon Matthews, Kieran May, Thomas Clarke, Gun Lee, and Mark Billinghurst. 2021. EyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues. In SIGGRAPH Asia 2021 Posters (SA '21 Posters). Association for Computing Machinery, New York, NY, USA, Article 16, 1–2. https://doi.org/10.1145/3476124.3488618

    @inproceedings{10.1145/3476124.3488618,
    author = {Jing, Allison and Matthews, Brandon and May, Kieran and Clarke, Thomas and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues},
    year = {2021},
    isbn = {9781450386876},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3476124.3488618},
    doi = {10.1145/3476124.3488618},
    abstract = {In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.},
    booktitle = {SIGGRAPH Asia 2021 Posters},
    articleno = {16},
    numpages = {2},
    keywords = {Mixed Reality remote collaboration, gaze visualization, speech input},
    location = {Tokyo, Japan},
    series = {SA '21 Posters}
    }
    In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.
  • eyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration
    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, and Mark Billinghurst. 2021. EyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21). Association for Computing Machinery, New York, NY, USA, Article 283, 1–7. https://doi.org/10.1145/3411763.3451844

    @inproceedings{10.1145/3411763.3451844,
    author = {Jing, Allison and May, Kieran William and Naeem, Mahnoor and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration},
    year = {2021},
    isbn = {9781450380959},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3411763.3451844},
    doi = {10.1145/3411763.3451844},
    abstract = {Gaze is one of the most important communication cues in face-to-face collaboration. However, in remote collaboration, sharing dynamic gaze information is more difficult. In this research, we investigate how sharing gaze behavioural cues can improve remote collaboration in a Mixed Reality (MR) environment. To do this, we developed eyemR-Vis, a 360 panoramic Mixed Reality remote collaboration system that shows gaze behavioural cues as bi-directional spatial virtual visualisations shared between a local host and a remote collaborator. Preliminary results from an exploratory study indicate that using virtual cues to visualise gaze behaviour has the potential to increase co-presence, improve gaze awareness, encourage collaboration, and is inclined to be less physically demanding or mentally distracting.},
    booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    articleno = {283},
    numpages = {7},
    keywords = {Human-Computer Interaction, Gaze Visualisation, Mixed Reality Remote Collaboration, CSCW},
    location = {Yokohama, Japan},
    series = {CHI EA '21}
    }
    Gaze is one of the most important communication cues in face-to-face collaboration. However, in remote collaboration, sharing dynamic gaze information is more difficult. In this research, we investigate how sharing gaze behavioural cues can improve remote collaboration in a Mixed Reality (MR) environment. To do this, we developed eyemR-Vis, a 360 panoramic Mixed Reality remote collaboration system that shows gaze behavioural cues as bi-directional spatial virtual visualisations shared between a local host and a remote collaborator. Preliminary results from an exploratory study indicate that using virtual cues to visualise gaze behaviour has the potential to increase co-presence, improve gaze awareness, encourage collaboration, and is inclined to be less physically demanding or mentally distracting.
  • eyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators
    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, and Mark Billinghurst. 2021. EyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21). Association for Computing Machinery, New York, NY, USA, Article 188, 1–4. https://doi.org/10.1145/3411763.3451545

    @inproceedings{10.1145/3411763.3451545,
    author = {Jing, Allison and May, Kieran William and Naeem, Mahnoor and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators},
    year = {2021},
    isbn = {9781450380959},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3411763.3451545},
    doi = {10.1145/3411763.3451545},
    abstract = {This demonstration shows eyemR-Vis, a 360 panoramic Mixed Reality collaboration system that translates gaze behavioural cues to bi-directional visualisations between a local host (AR) and a remote collaborator (VR). The system is designed to share dynamic gaze behavioural cues as bi-directional spatial virtual visualisations between a local host and a remote collaborator. This enables richer communication of gaze through four visualisation techniques: browse, focus, mutual-gaze, and fixated circle-map. Additionally, our system supports simple bi-directional avatar interaction as well as panoramic video zoom. This makes interaction in the normally constrained remote task space more flexible and relatively natural. By showing visual communication cues that are physically inaccessible in the remote task space through reallocating and visualising the existing ones, our system aims to provide a more engaging and effective remote collaboration experience.},
    booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    articleno = {188},
    numpages = {4},
    keywords = {Gaze Visualisation, Human-Computer Interaction, Mixed Reality Remote Collaboration, CSCW},
    location = {Yokohama, Japan},
    series = {CHI EA '21}
    }
    This demonstration shows eyemR-Vis, a 360 panoramic Mixed Reality collaboration system that translates gaze behavioural cues to bi-directional visualisations between a local host (AR) and a remote collaborator (VR). The system is designed to share dynamic gaze behavioural cues as bi-directional spatial virtual visualisations between a local host and a remote collaborator. This enables richer communication of gaze through four visualisation techniques: browse, focus, mutual-gaze, and fixated circle-map. Additionally, our system supports simple bi-directional avatar interaction as well as panoramic video zoom. This makes interaction in the normally constrained remote task space more flexible and relatively natural. By showing visual communication cues that are physically inaccessible in the remote task space through reallocating and visualising the existing ones, our system aims to provide a more engaging and effective remote collaboration experience.
  • Brain activity during cybersickness: a scoping review
    Eunhee Chang, Mark Billinghurst, Byounghyun Yoo

    Chang, E., Billinghurst, M., & Yoo, B. (2023). Brain activity during cybersickness: a scoping review. Virtual Reality, 1-25.

    @article{chang2023brain,
    title={Brain activity during cybersickness: a scoping review},
    author={Chang, Eunhee and Billinghurst, Mark and Yoo, Byounghyun},
    journal={Virtual Reality},
    pages={1--25},
    year={2023},
    publisher={Springer}
    }
    Virtual reality (VR) experiences can cause a range of negative symptoms such as nausea, disorientation, and oculomotor discomfort, which are collectively called cybersickness. Previous studies have attempted to develop a reliable measure for detecting cybersickness instead of using questionnaires, and the electroencephalogram (EEG) has been regarded as one of the possible alternatives. However, despite the increasing interest, little is known about which brain activities are consistently associated with cybersickness and what types of methods should be adopted for measuring discomfort through brain activity. We conducted a scoping review of 33 experimental studies on cybersickness and EEG found through database searches and screening. To understand these studies, we organized the pipeline of EEG analysis into four steps (preprocessing, feature extraction, feature selection, classification) and surveyed the characteristics of each step. The results showed that most studies performed frequency or time-frequency analysis for EEG feature extraction. Some of the studies applied a classification model to predict cybersickness, reporting accuracies between 79 and 100%. These studies tended to use HMD-based VR with a portable EEG headset for measuring brain activity. Most of the VR content shown involved scenic views such as driving or navigating a road, and the age of participants was limited to people in their 20s. This scoping review contributes an overview of cybersickness-related EEG research and establishes directions for future work.
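    As an illustration of the most common feature-extraction step reported in the review, the sketch below computes band-power features from a single-channel EEG epoch with Welch's method; the band limits are conventional values, and the classifier that would follow is omitted.

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(epoch, fs=256):
        """Mean power per frequency band for one EEG epoch (single channel)."""
        freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
        return {name: psd[(freqs >= lo) & (freqs < hi)].mean() for name, (lo, hi) in BANDS.items()}

    print(band_powers(np.random.randn(256 * 10)))  # 10-second synthetic epoch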
  • An AR/TUI-supported Debugging Teaching Environment
    Dmitry Resnyansky, Mark Billinghurst, Arindam Dey

    Resnyansky, D., Billinghurst, M., & Dey, A. (2019, December). An AR/TUI-supported debugging teaching environment. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction (pp. 590-594).

    @inproceedings{10.1145/3369457.3369538,
    author = {Resnyansky, Dmitry and Billinghurst, Mark and Dey, Arindam},
    title = {An AR/TUI-Supported Debugging Teaching Environment},
    year = {2020},
    isbn = {9781450376969},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3369457.3369538},
    doi = {10.1145/3369457.3369538},
    abstract = {This paper presents research on the potential application of Tangible and Augmented Reality (AR) technology to computer science education and the teaching of programming in tertiary settings. An approach to an AR-supported debugging-teaching prototype is outlined, focusing on the design of an AR workspace that uses physical markers to interact with content (code). We describe a prototype which has been designed to actively scaffold the student's development of the two primary abilities necessary for effective debugging: (1) the ability to read not just the code syntax, but to understand the overall program structure behind the code; and (2) the ability to independently recall and apply the new knowledge to produce new, working code structures.},
    booktitle = {Proceedings of the 31st Australian Conference on Human-Computer-Interaction},
    pages = {590–594},
    numpages = {5},
    keywords = {tangible user interface, tertiary education, debugging, Human-computer interaction, augmented reality},
    location = {Fremantle, WA, Australia},
    series = {OzCHI '19}
    }
    This paper presents research on the potential application of Tangible and Augmented Reality (AR) technology to computer science education and the teaching of programming in tertiary settings. An approach to an AR-supported debugging-teaching prototype is outlined, focusing on the design of an AR workspace that uses physical markers to interact with content (code). We describe a prototype which has been designed to actively scaffold the student's development of the two primary abilities necessary for effective debugging: (1) the ability to read not just the code syntax, but to understand the overall program structure behind the code; and (2) the ability to independently recall and apply the new knowledge to produce new, working code structures.
  • The potential of augmented reality for computer science education
    Dmitry Resnyansky; Emin İbili; Mark Billinghurst

    Resnyansky, D., Ibili, E., & Billinghurst, M. (2018, December). The potential of augmented reality for computer science education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 350-356). IEEE.

    @INPROCEEDINGS{8615331,
    author={Resnyansky, Dmitry and İbili, Emin and Billinghurst, Mark},
    booktitle={2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    title={The Potential of Augmented Reality for Computer Science Education},
    year={2018},
    volume={},
    number={},
    pages={350-356},
    doi={10.1109/TALE.2018.8615331}}
    Innovative approaches in the teaching of computer science are required to address the needs of diverse target audiences, including groups with minimal mathematical background and insufficient abstract thinking ability. In order to tackle this problem, new pedagogical approaches that make use of technologies such as Virtual and Augmented Reality, Tangible User Interfaces, and 3D graphics are needed. This paper draws upon relevant pedagogical and technological literature to determine how Augmented Reality can be more fully applied to computer science education.
  • Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration
    Theophilus Teo, Mitchell Norman, Gun A. Lee, Mark Billinghurst & Matt Adcock

    T. Teo, M. Norman, G. A. Lee, M. Billinghurst and M. Adcock. “Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration.” In: J Multimodal User Interfaces. (JMUI), 2020.

    @article{teo2020exploring,
    title={Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration},
    author={Teo, Theophilus and Norman, Mitchell and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    journal={Journal on Multimodal User Interfaces},
    volume={14},
    pages={373--385},
    year={2020},
    publisher={Springer}
    }
    Remote collaboration using mixed reality (MR) enables two separated workers to collaborate by sharing visual cues. A local worker can share his/her environment with the remote worker for better contextual understanding. However, prior techniques used either 360 video sharing or a complicated 3D reconstruction configuration, which limits the interactivity and practicality of such systems. In this paper we present an interactive and easy-to-configure MR remote collaboration technique that enables a local worker to easily share his/her environment by integrating 360 panorama images into a low-cost 3D reconstructed scene as photo-bubbles and projective textures. This enables the remote worker to view past scenes as either an immersive 360 panorama or an interactive 3D environment. We developed a prototype and conducted a user study comparing the two modes of how 360 panorama images could be used in a remote collaboration system. Results suggested that both photo-bubbles and projective textures can provide high social presence and co-presence and low cognitive load for solving tasks, while each has its own advantages and limitations. For example, photo-bubbles are good for quick navigation inside the 3D environment without depth perception, while projective textures are good for spatial understanding but require physical effort.
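    The photo-bubble idea relies on mapping each viewing direction to equirectangular texture coordinates on a panorama sphere, as sketched below. This illustrates only the projection math under standard conventions, not the paper's system.

    import numpy as np

    def direction_to_equirect_uv(d):
        """Map a 3D view direction to equirectangular UV coordinates in [0, 1]^2."""
        x, y, z = d / np.linalg.norm(d)
        u = 0.5 + np.arctan2(x, z) / (2 * np.pi)   # longitude
        v = 0.5 - np.arcsin(y) / np.pi             # latitude
        return u, v

    print(direction_to_equirect_uv(np.array([0.0, 0.0, 1.0])))  # looking forward -> (0.5, 0.5)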
  • OmniGlobeVR: A Collaborative 360-Degree Communication System for VR
    Zhengqing Li, Theophilus Teo, Liwei Chan, Gun Lee, Matt Adcock, Mark Billinghurst, Hideki Koike

    Z. Li, T. Teo, L. Chan, G. Lee, M. Adcock, M. Billinghurst, H. Koike. “OmniGlobeVR: A Collaborative 360-Degree Communication System for VR”. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS '20). ACM, 2020.

    @inproceedings{10.1145/3357236.3395429,
    author = {Li, Zhengqing and Teo, Theophilus and Chan, Liwei and Lee, Gun and Adcock, Matt and Billinghurst, Mark and Koike, Hideki},
    title = {OmniGlobeVR: A Collaborative 360-Degree Communication System for VR},
    year = {2020},
    isbn = {9781450369749},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3357236.3395429},
    doi = {10.1145/3357236.3395429},
    abstract = {In this paper, we present a novel collaboration tool, OmniGlobeVR, which is an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platform. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate the OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner's focus. We discuss the design implications of these results and directions for future research.},
    booktitle = {Proceedings of the 2020 ACM Designing Interactive Systems Conference},
    pages = {615–625},
    numpages = {11},
    keywords = {virtual reality, communication, collaboration, mixed reality, spherical display, 360-degree camera},
    location = {Eindhoven, Netherlands},
    series = {DIS '20}
    }
    In this paper, we present a novel collaboration tool, OmniGlobeVR, which is an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platform. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate the OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner's focus. We discuss the design implications of these results and directions for future research.
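The shared gaze awareness cue needs a point on the display surface where the occupant's gaze lands. The paper does not detail this computation, so the sketch below (with names of our own choosing) shows one standard way to obtain it: intersecting the gaze ray with a sphere representing the 360 sphere or globe surface.

```python
import numpy as np

def gaze_hit_on_sphere(origin, direction, centre, radius):
    """Return the nearest intersection of a gaze ray with a sphere,
    or None if the ray misses; used to place a gaze cue on a 360
    sphere or globe surface."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    c = np.asarray(centre, float)
    oc = o - c
    b = 2.0 * np.dot(d, oc)
    disc = b * b - 4.0 * (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None                      # gaze ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0       # nearer of the two roots
    if t < 0.0:
        t = (-b + np.sqrt(disc)) / 2.0   # gaze origin lies inside the sphere
    return None if t < 0.0 else o + t * d

# Hypothetical example: occupant at the centre of a 2 m radius 360 sphere.
print(gaze_hit_on_sphere([0, 0, 0], [0.2, 0.1, 1.0], [0, 0, 0], 2.0))
```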
  • 360Drops System Overview
    360Drops: Mixed Reality Remote Collaboration using 360 Panoramas within the 3D Scene
    Theophilus Teo, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, G. A. Lee, M. Billinghurst and M. Adcock. “360Drops: Mixed Reality Remote Collaboration using 360° Panoramas within the 3D Scene.” In: ACM SIGGRAPH Conference and Exhibition on Computer Graphics & Interactive Technologies in Asia. (SA 2019), Brisbane, Australia, 2019.

    @inproceedings{10.1145/3355049.3360517,
    author = {Teo, Theophilus and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {360Drops: Mixed Reality Remote Collaboration Using 360 Panoramas within the 3D Scene},
    year = {2019},
    isbn = {9781450369428},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3355049.3360517},
    doi = {10.1145/3355049.3360517},
    abstract = {Mixed Reality (MR) remote guidance has become a practical solution for collaboration that includes nonverbal communication. This research focuses on integrating different types of MR remote collaboration systems together allowing a new variety for remote collaboration to extend its features and user experience. In this demonstration, we present 360Drops, a MR remote collaboration system that uses 360 panorama images within 3D reconstructed scenes. We introduce a new technique to interact with multiple 360 Panorama Spheres in an immersive 3D reconstructed scene. This allows a remote user to switch between multiple 360 scenes “live/static, past/present,” placed in a 3D reconstructed scene to promote a better understanding of space and interactivity through verbal and nonverbal communication. We present the system features and user experience to the attendees of SIGGRAPH Asia 2019 through a live demonstration.},
    booktitle = {SIGGRAPH Asia 2019 Emerging Technologies},
    pages = {1–2},
    numpages = {2},
    keywords = {Remote Collaboration, Shared Experience, Mixed Reality},
    location = {Brisbane, QLD, Australia},
    series = {SA '19}
    }
    Mixed Reality (MR) remote guidance has become a practical solution for collaboration that includes nonverbal communication. This research focuses on integrating different types of MR remote collaboration systems together, extending their features and user experience in new ways. In this demonstration, we present 360Drops, an MR remote collaboration system that uses 360 panorama images within 3D reconstructed scenes. We introduce a new technique to interact with multiple 360 Panorama Spheres in an immersive 3D reconstructed scene. This allows a remote user to switch between multiple 360 scenes “live/static, past/present,” placed in a 3D reconstructed scene to promote a better understanding of space and interactivity through verbal and nonverbal communication. We present the system features and user experience to the attendees of SIGGRAPH Asia 2019 through a live demonstration.
  • Prototype system overview
    A Technique for Mixed Reality Remote Collaboration using 360° Panoramas in 3D Reconstructed Scenes
    Theophilus Teo, Ashkan F. Hayati, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, A. F. Hayati, G. A. Lee, M. Billinghurst and M. Adcock. “A Technique for Mixed Reality Remote Collaboration using 360° Panoramas in 3D Reconstructed Scenes.” In: ACM Symposium on Virtual Reality Software and Technology. (VRST), Sydney, Australia, 2019.

    @inproceedings{teo2019technique,
    title={A technique for mixed reality remote collaboration using 360 panoramas in 3d reconstructed scenes},
    author={Teo, Theophilus and Hayati, Ashkan F. and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    booktitle={Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology},
    pages={1--11},
    year={2019}
    }
    Mixed Reality (MR) remote collaboration provides an enhanced immersive experience where a remote user can provide verbal and nonverbal assistance to a local user to increase the efficiency and performance of the collaboration. This is usually achieved by sharing the local user's environment through live 360 video or a 3D scene, and using visual cues to gesture or point at real objects, allowing for better understanding and collaborative task performance. While most prior work used one of these methods to capture the surrounding environment, there may be situations where users have to choose between 360 panoramas or 3D scene reconstruction to collaborate, as each has unique benefits and limitations. In this paper we designed a prototype system that combines 360 panoramas with a 3D scene to introduce a novel way for users to interact and collaborate with each other. We evaluated the prototype through a user study which compared the usability and performance of our proposed approach to a live 360 video collaborative system, and we found that participants enjoyed using different ways to access the local user's environment, although it took them longer to learn to use our system. We also collected subjective feedback for future improvements and provide directions for future research.
  • MR remote collaboration system overview
    Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
    Theophilus Teo, Louise Lawrence, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, L. Lawrence, G. A. Lee, M. Billinghurst, and M. Adcock. (2019). “Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction”. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, Paper 201, 14 pages.

    @inproceedings{10.1145/3290605.3300431,
    author = {Teo, Theophilus and Lawrence, Louise and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction},
    year = {2019},
    isbn = {9781450359702},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3290605.3300431},
    doi = {10.1145/3290605.3300431},
    abstract = {Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, by preserving benefits of both systems while reducing drawbacks of each. We developed a hybrid prototype and conducted user study to compare benefits and problems of using 360 or 3D alone to clarify the needs for mixing the two, and also to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement. Participant feedback collected after trying our hybrid system provided directions for improvement.},
    booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages = {1–14},
    numpages = {14},
    keywords = {interaction methods, remote collaboration, 3d scene reconstruction, mixed reality, virtual reality, 360 panorama},
    location = {Glasgow, Scotland Uk},
    series = {CHI '19}
    }
    Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, preserving the benefits of both systems while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement the 360 view. Participant feedback collected after trying our hybrid system provided directions for improvement.
  • SharedSphere system overview
    Mixed reality collaboration through sharing a live panorama
    Gun A. Lee, Theophilus Teo, Seungwon Kim, Mark Billinghurst

    G. A. Lee, T. Teo, S. Kim, and M. Billinghurst. (2017). “Mixed reality collaboration through sharing a live panorama”. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (SA 2017). ACM, New York, NY, USA, Article 14, 4 pages.

    @inproceedings{10.1145/3132787.3139203,
    author = {Lee, Gun A. and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    title = {Mixed Reality Collaboration through Sharing a Live Panorama},
    year = {2017},
    isbn = {9781450354103},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3132787.3139203},
    doi = {10.1145/3132787.3139203},
    abstract = {One of the popular features on modern social networking platforms is sharing live 360 panorama video. This research investigates on how to further improve shared live panorama based collaborative experiences by applying Mixed Reality (MR) technology. Shared-Sphere is a wearable MR remote collaboration system. In addition to sharing a live captured immersive panorama, SharedSphere enriches the collaboration through overlaying MR visualisation of non-verbal communication cues (e.g., view awareness and gestures cues). User feedback collected through a preliminary user study indicated that sharing of live 360 panorama video was beneficial by providing a more immersive experience and supporting view independence. Users also felt that the view awareness cues were helpful for understanding the remote collaborator's focus.},
    booktitle = {SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    articleno = {14},
    numpages = {4},
    keywords = {shared experience, panorama, remote collaboration},
    location = {Bangkok, Thailand},
    series = {SA '17}
    }
    One of the popular features on modern social networking platforms is sharing live 360 panorama video. This research investigates how to further improve shared live panorama based collaborative experiences by applying Mixed Reality (MR) technology. SharedSphere is a wearable MR remote collaboration system. In addition to sharing a live captured immersive panorama, SharedSphere enriches the collaboration by overlaying MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). User feedback collected through a preliminary user study indicated that sharing of live 360 panorama video was beneficial by providing a more immersive experience and supporting view independence. Users also felt that the view awareness cues were helpful for understanding the remote collaborator's focus.
  • Prototype mixed presence collaborative Mixed Reality System
    A Mixed Presence Collaborative Mixed Reality System
    Mitchell Norman; Gun Lee; Ross T. Smith; Mark Billinghurst

    M. Norman, G. Lee, R. T. Smith and M. Billinghurst, "A Mixed Presence Collaborative Mixed Reality System," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 1106-1107, doi: 10.1109/VR.2019.8797966.

    @INPROCEEDINGS{8797966,
    author={Norman, Mitchell and Lee, Gun and Smith, Ross T. and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    title={A Mixed Presence Collaborative Mixed Reality System},
    year={2019},
    volume={},
    number={},
    pages={1106-1107},
    doi={10.1109/VR.2019.8797966}}
    Research has shown that Mixed Presence Groupware (MPG) systems are a valuable collaboration tool. However, research into MPG systems has been limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Augmented Reality (AR) solutions. We present a new system with two local users and one remote user using HMD based AR interfaces. Our system provides tools allowing users to lay out a room with the help of a remote user. The remote user has access to marker and pointer tools to assist in directing the local users. Feedback collected from several groups of users showed that our system is easy to learn but could benefit from increased accuracy and consistency.
  • System features (clockwise from top left): 1) gaze reticle, 2) virtual markers, 3) virtual ray pointer, and 4) ray pointer emitting out of a webcam
    A Mixed Presence Collaborative Mixed Reality System
    Mitchell Norman; Gun Lee; Ross T. Smith; Mark Billinghurst

    Norman, M., Lee, G., Smith, R. T., & Billinghurst, M. (2019, March). A mixed presence collaborative mixed reality system. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1106-1107). IEEE.

    @INPROCEEDINGS{8797966,
    author={Norman, Mitchell and Lee, Gun and Smith, Ross T. and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    title={A Mixed Presence Collaborative Mixed Reality System},
    year={2019},
    volume={},
    number={},
    pages={1106-1107},
    doi={10.1109/VR.2019.8797966}}
    Research has shown that Mixed Presence Groupware (MPG) systems are a valuable collaboration tool. However, research into MPG systems has been limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Augmented Reality (AR) solutions. We present a new system with two local users and one remote user using HMD based AR interfaces. Our system provides tools allowing users to lay out a room with the help of a remote user. The remote user has access to marker and pointer tools to assist in directing the local users. Feedback collected from several groups of users showed that our system is easy to learn but could benefit from increased accuracy and consistency.
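The remote user's virtual ray pointer ultimately needs an anchor point in the shared room. As a hedged illustration (not the authors' implementation; names and coordinates are our own), the sketch below intersects a pointing ray with a planar surface such as the floor or tabletop and returns the point where a marker could be drawn.

```python
import numpy as np

def ray_plane_hit(ray_origin, ray_dir, plane_point, plane_normal, eps=1e-6):
    """Intersect a pointing ray with a plane (e.g. floor or tabletop).
    Returns the hit point, or None if the ray is parallel or points away."""
    o = np.asarray(ray_origin, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    n = np.asarray(plane_normal, float)
    denom = np.dot(n, d)
    if abs(denom) < eps:
        return None                              # ray parallel to the plane
    t = np.dot(n, np.asarray(plane_point, float) - o) / denom
    return None if t < 0.0 else o + t * d

# Hypothetical pointer: origin at 1.2 m height, aimed down toward a table at y = 0.75 m.
print(ray_plane_hit([0.0, 1.2, 0.0], [0.3, -0.5, 0.8], [0.0, 0.75, 0.0], [0.0, 1.0, 0.0]))
```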
  • LightSense-Long Distance
    Uwe Rieger, Yinan Liu, Tharindu Kaluarachchi, Amit Barde, Huidong Bai, Alaeddin Nassani, Suranga Nanayakkara, Mark Billinghurst.

    Rieger, U., Liu, Y., Kaluarachchi, T., Barde, A., Bai, H., Nassani, A., ... & Billinghurst, M. (2023). LightSense-Long Distance. In ACM SIGGRAPH Asia 2023 Art Gallery (pp. 1-2).

    @incollection{rieger2023lightsense,
    title={LightSense-Long Distance},
    author={Rieger, Uwe and Liu, Yinan and Kaluarachchi, Tharindu and Barde, Amit and Bai, Huidong and Nassani, Alaeddin and Nanayakkara, Suranga and Billinghurst, Mark},
    booktitle={ACM SIGGRAPH Asia 2023 Art Gallery},
    pages={1--2},
    year={2023}
    }
    'LightSense - Long Distance' explores remote interaction with architectural space. It is a virtual extension of the project 'LightSense,' which is currently presented at the exhibition 'Cyber Physical: Architecture in Real Time' at EPFL Pavilions in Switzerland. Using numerous VR headsets, the setup at the Art Gallery at SIGGRAPH Asia establishes a direct connection between both exhibition sites in Sydney and Lausanne.
    'LightSense' at EPFL Pavilions is an immersive installation that allows the audience to engage in intimate interaction with a living architectural body. It consists of a 12-meter-long construction that combines a lightweight structure with projected 3D holographic animations. At its core sits a neural network, which has been trained on sixty thousand poems. This allows the structure to engage, lead, and sustain conversations with the visitor. Its responses are truly associative, unpredictable, meaningful, magical, and deeply emotional. Analysing the emotional tenor of the conversation, 'LightSense' can transform into a series of hybrid architectural volumes, immersing the visitors in Pavilions of Love, Anger, Curiosity, and Joy.
    'LightSense's' physical construction is linked to a digital twin. Movement, holographic animations, sound, and text responses are controlled by the cloud-based AI system. This combination creates a location-independent cyber-physical system. As such, the 'Long Distance' version, which premiered at SIGGRAPH Asia, enables the visitors in Sydney to directly engage with the physical setup in Lausanne. Using VR headsets with a new 360-degree 4K live streaming system, the visitors find themselves teleported to face 'LightSense', able to engage in a direct conversation with the structure on-site.
    'LightSense - Long Distance' leaves behind the notion of architecture being a place-bound and static environment. Instead, it points toward the next generation of responsive buildings that transcend space, are capable of dynamic behaviour, and able to accompany their visitors as creative partners.
  • haptic HONGI: Reflections on Collaboration in the Transdisciplinary Creation of an AR Artwork
    Mairi Gunn, Angus Campbell, Mark Billinghurst, Wendy Lawn, Prasanth Sasikumar, and Sachith Muthukumarana.

    Gunn, M., Campbell, A., Billinghurst, M., Lawn, W., Sasikumar, P., & Muthukumarana, S. (2023). haptic HONGI: Reflections on Collaboration in the Transdisciplinary Creation of an AR Artwork. In Creating Digitally: Shifting Boundaries: Arts and Technologies—Contemporary Applications and Concepts (pp. 301-330). Cham: Springer International Publishing.

    @incollection{gunn2023haptic,
    title={haptic HONGI: Reflections on Collaboration in the Transdisciplinary Creation of an AR Artwork},
    author={Gunn, Mairi and Campbell, Angus and Billinghurst, Mark and Lawn, Wendy and Sasikumar, Prasanth and Muthukumarana, Sachith},
    booktitle={Creating Digitally: Shifting Boundaries: Arts and Technologies—Contemporary Applications and Concepts},
    pages={301--330},
    year={2023},
    publisher={Springer}
    }
  • A Motion-Simulation Platform to Generate Synthetic Motion Data for Computer Vision Tasks
    Andrew Chalmers, Junhong Zhao, Weng Khuan Hoh, James Drown, Simon Finnie, Richard Yao, James Lin, James Wilmott, Arindam Dey, Mark Billinghurst, Taehyun Rhee

    Chalmers, A., Zhao, J., Khuan Hoh, W., Drown, J., Finnie, S., Yao, R., ... & Rhee, T. (2023). A Motion-Simulation Platform to Generate Synthetic Motion Data for Computer Vision Tasks. In SIGGRAPH Asia 2023 Technical Communications (pp. 1-4).

    @inproceedings{10.1145/3610543.3628795,
    author = {Chalmers, Andrew and Zhao, Junhong and Khuan Hoh, Weng and Drown, James and Finnie, Simon and Yao, Richard and Lin, James and Wilmott, James and Dey, Arindam and Billinghurst, Mark and Rhee, Taehyun},
    title = {A Motion-Simulation Platform to Generate Synthetic Motion Data for Computer Vision Tasks},
    year = {2023},
    isbn = {9798400703140},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3610543.3628795},
    doi = {10.1145/3610543.3628795},
    abstract = {We developed the Motion-Simulation Platform, a platform running within a game engine that is able to extract both RGB imagery and the corresponding intrinsic motion data (i.e., motion field). This is useful for motion-related computer vision tasks where large amounts of intrinsic motion data are required to train a model. We describe the implementation and design details of the Motion-Simulation Platform. The platform is extendable, such that any scene developed within the game engine is able to take advantage of the motion data extraction tools. We also provide both user and AI-bot controlled navigation, enabling user-driven input and mass automation of motion data collection.},
    booktitle = {SIGGRAPH Asia 2023 Technical Communications},
    articleno = {21},
    numpages = {4},
    keywords = {user study, simulation, motion, machine learning, data generation},
    location = {Sydney, NSW, Australia},
    series = {SA '23}
    }
    We developed the Motion-Simulation Platform, a platform running within a game engine that is able to extract both RGB imagery and the corresponding intrinsic motion data (i.e., motion field). This is useful for motion-related computer vision tasks where large amounts of intrinsic motion data are required to train a model. We describe the implementation and design details of the Motion-Simulation Platform. The platform is extendable, such that any scene developed within the game engine is able to take advantage of the motion data extraction tools. We also provide both user and AI-bot controlled navigation, enabling user-driven input and mass automation of motion data collection.
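As a rough illustration of the kind of intrinsic motion data the platform exports, the sketch below (our own simplification in Python/NumPy, not the platform's API) computes a per-pixel screen-space motion field for a static scene from a depth map, camera intrinsics, and the rigid transform between two camera frames.

```python
import numpy as np

def motion_field_from_depth(depth, K, R, t):
    """Per-pixel 2D motion vectors (in pixels) for a static scene, given
    depth at frame t-1, intrinsics K, and the rigid transform mapping
    frame t-1 camera coordinates into frame t camera coordinates
    (p_t = R @ p + t)."""
    h, w = depth.shape
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)   # (h, w, 3)

    # Back-project each pixel to 3D camera space at frame t-1.
    rays = pix @ K_inv.T
    points = rays * depth[..., None]

    # Transform into the frame-t camera and re-project to pixels.
    points_t = points @ R.T + t
    proj = points_t @ K.T
    proj = proj[..., :2] / proj[..., 2:3]

    return proj - pix[..., :2]    # screen-space motion vectors

# Hypothetical 4x4 example: constant 2 m depth, small relative translation along x.
K = np.array([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
flow = motion_field_from_depth(np.full((4, 4), 2.0), K, np.eye(3), np.array([0.05, 0.0, 0.0]))
print(flow[0, 0])   # every pixel shifts by K[0,0] * 0.05 / 2 = 2.5 px horizontally
```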
  • Stylus and Gesture Asymmetric Interaction for Fast and Precise Sketching in Virtual Reality
    Qianyuan Zou, Huidong Bai, Lei Gao, Gun A. Lee, Allan Fowler & Mark Billinghurst

    Zou, Q., Bai, H., Gao, L., Lee, G. A., Fowler, A., & Billinghurst, M. (2024). Stylus and Gesture Asymmetric Interaction for Fast and Precise Sketching in Virtual Reality. International Journal of Human–Computer Interaction, 1-18.

    @article{zou2024stylus,
    title={Stylus and Gesture Asymmetric Interaction for Fast and Precise Sketching in Virtual Reality},
    author={Zou, Qianyuan and Bai, Huidong and Gao, Lei and Lee, Gun A and Fowler, Allan and Billinghurst, Mark},
    journal={International Journal of Human--Computer Interaction},
    pages={1--18},
    year={2024},
    publisher={Taylor \& Francis}
    }
    This research investigates fast and precise Virtual Reality (VR) sketching methods with different tool-based asymmetric interfaces. In traditional real-world drawing, artists commonly employ an asymmetric interaction system where each hand holds different tools, facilitating diverse and nuanced artistic expressions. However, in virtual reality (VR), users are typically limited to using identical tools in both hands for drawing. To bridge this gap, we aim to introduce specifically designed tools in VR that replicate the varied tool configurations found in the real world. Hence, we developed a VR sketching system supporting three hybrid input techniques using a standard VR controller, a VR stylus, or a data glove. We conducted a formal user study consisting of an internal comparative experiment with four conditions and three tasks to compare three asymmetric input methods with each other and with a traditional symmetric controller-based solution based on questionnaires and performance evaluations. The results showed that in contrast to symmetric dual VR controller interfaces, the asymmetric input with gestures significantly reduced task completion times while maintaining good usability and input accuracy with a low task workload. This shows the value of asymmetric input methods for VR sketching. We also found that the overall user experience could be further improved by optimizing the tracking stability of the data glove and the VR stylus.
  • ARCoA: Using the AR-assisted cooperative assembly system to visualize key information about the occluded partner
    Shuo Feng, Weiping He, Qianrui Zhang, Mark Billinghurst, Lingxiao Yang, Shaohua Zhang, and Xiaotian Zhang

    Feng, S., He, W., Zhang, Q., Billinghurst, M., Yang, L., Zhang, S., & Zhang, X. (2023). ARCoA: Using the AR-assisted cooperative assembly system to visualize key information about the occluded partner. International Journal of Human–Computer Interaction, 39(18), 3556-3566.

    @article{feng2023arcoa,
    title={ARCoA: Using the AR-assisted cooperative assembly system to visualize key information about the occluded partner},
    author={Feng, Shuo and He, Weiping and Zhang, Qianrui and Billinghurst, Mark and Yang, Lingxiao and Zhang, Shaohua and Zhang, Xiaotian},
    journal={International Journal of Human--Computer Interaction},
    volume={39},
    number={18},
    pages={3556--3566},
    year={2023},
    publisher={Taylor \& Francis}
    }
    During component assembly, some operations must be completed by two or more workers due to the size or assembly mode. For instance, in manual riveting, two workers are positioned on either side of a steel plate, which blocks the view. Traditional collaborative approaches limit assembly efficiency and make it difficult to ensure accurate and rapid interaction between workers. In this study, we developed an AR-Assisted Cooperative Assembly System (ARCoA) to address this issue. ARCoA allows users to view their partner’s key information that would otherwise be occluded, including tools, gestures, orientations, and shared markers. We also present a user experiment that compared this method with the traditional approach. The results indicated that the new system could significantly improve assembly efficiency, system availability, and sense of social presence. Moreover, most users we surveyed preferred ARCoA. In the future, we will incorporate more functions and improve the accuracy of the system to solve complex multi-person collaborative problems.
  • A Distributed Augmented Reality Training Architecture For Distributed Cognitive Intelligent Tutoring Paradigms
    Bradley Herbert, Nilufar Baghaei, Mark Billinghurst, and Grant Wigley.

    Herbert, B., Baghaei, N., Billinghurst, M., & Wigley, G. (2023). A Distributed Augmented Reality Training Architecture For Distributed Cognitive Intelligent Tutoring Paradigms. Authorea Preprints.

    @article{herbert2023distributed,
    title={A Distributed Augmented Reality Training Architecture For Distributed Cognitive Intelligent Tutoring Paradigms},
    author={Herbert, Bradley and Baghaei, Nilufar and Billinghurst, Mark and Wigley, Grant},
    journal={Authorea Preprints},
    year={2023},
    publisher={Authorea}
    }
    Modern training typically incorporates real-world training applications. Augmented Reality (AR) technologies support this by overlaying virtual objects in real-world 3-Dimensional (3D) space. However, integrating instruction into AR is challenging because of technological and educational considerations. One reason is the lack of an architecture for supporting Intelligent Tutoring Systems (ITSs) in AR training domains. We present a novel modular agent-based Distributed Augmented Reality Training (DART) architecture for ITSs to address two key AR challenges: (1) a decoupling of the display and tracking components and (2) support for modularity. Modular agents communicate with each other over a network, allowing them to be easily swapped out and replaced to support differing needs. Our motivation is driven by the fact that AR technologies vary considerably, so an ITS architecture needs to be flexible enough to support these differing requirements. Finally, we believe that our architecture will appeal both to practical designers of ITSs and to more theoretically minded educators who wish to use such systems to simulate and broaden research in distributed cognitive educational theories.
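As a hedged illustration of the kind of decoupling the DART architecture argues for (not the authors' actual interfaces), the sketch below defines a minimal serialisable message that a tracking agent could publish and a display agent could consume, so either side can be swapped out independently.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TrackingMessage:
    """Minimal message a tracking agent might publish over the network;
    field names are illustrative, not part of the DART specification."""
    agent_id: str
    timestamp: float
    target_id: str                  # tracked marker / object identifier
    pose: list                      # [x, y, z, qx, qy, qz, qw]

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(payload: str) -> "TrackingMessage":
        return TrackingMessage(**json.loads(payload))

# A display agent only depends on the message schema, not on which tracker produced it.
msg = TrackingMessage("tracker-01", time.time(), "engine-panel", [0.1, 1.4, 0.8, 0, 0, 0, 1])
wire = msg.to_json()                      # would be sent over a socket or message bus
print(TrackingMessage.from_json(wire))
```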
  • Virtual Reality for Social-Emotional Learning: A Review
    Irna Hamzah, Ely Salwana, Mark Billinghurst, Nilufar Baghaei, Mohammad Nazir Ahmad, Fadhilah Rosdi, and Azhar Arsad.

    Hamzah, I., Salwana, E., Billinghurst, M., Baghaei, N., Ahmad, M. N., Rosdi, F., & Arsad, A. (2023, October). Virtual Reality for Social-Emotional Learning: A Review. In International Visual Informatics Conference (pp. 119-130). Singapore: Springer Nature Singapore.

    @inproceedings{hamzah2023virtual,
    title={Virtual Reality for Social-Emotional Learning: A Review},
    author={Hamzah, Irna and Salwana, Ely and Billinghurst, Mark and Baghaei, Nilufar and Ahmad, Mohammad Nazir and Rosdi, Fadhilah and Arsad, Azhar},
    booktitle={International Visual Informatics Conference},
    pages={119--130},
    year={2023},
    organization={Springer}
    }
    Virtual reality (VR) is an immersive technology that can simulate different environments and experiences. Social-emotional learning (SEL) is a process through which individuals develop the skills, knowledge, and attitudes to understand and manage their emotions, establish positive relationships, and make responsible decisions. SEL promotes healthy emotional regulation in adolescents. However, VR interventions for adolescent emotion regulation have received less attention. The aim of this research is to identify, through a systematic literature review (SLR), the VR elements related to SEL reported since 2017. A broad review of the current literature was conducted in three databases, namely Scopus, IEEE, and WOS. Data were extracted using a search term, including age ranges, year published, and medical procedures. The results suggest a list of requirements for designing a virtual reality application for social-emotional learning that promotes a positive impact on emotion regulation for Malaysian adolescents.
  • The MagicBook Revisited
    Geert Lugtenberg, Kazuma Mori, Yuki Matoba, Theophilus Teo, and Mark Billinghurst.

    Lugtenberg, G., Mori, K., Matoba, Y., Teo, T., & Billinghurst, M. (2023, October). The MagicBook Revisited. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 801-806). IEEE.

    @inproceedings{lugtenberg2023magicbook,
    title={The MagicBook Revisited},
    author={Lugtenberg, Geert and Mori, Kazuma and Matoba, Yuki and Teo, Theophilus and Billinghurst, Mark},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={801--806},
    year={2023},
    organization={IEEE}
    }
    Twenty years ago the MagicBook demonstrated an early collaborative cross-reality system allowing users to transition along the reality-virtuality continuum. This current application reproduces the key elements of the MagicBook using mobile phones rather than high-end graphics computers and custom handheld displays. Just like the original, people can use a handheld viewer to see virtual content superimposed over real book pages, and then fly into the scenes and experience them immersively. Multi-scale collaboration is also supported, enabling people in the AR view to see the users in VR, or to experience the scenes together at the same scale. We also add new features, such as realistic avatars, tangible interaction, use of a handheld controller, and support for remote participants. Overall, we have created a cross reality platform that fits in a person’s pocket, but enables them to collaborate with dozens of people in AR and VR in a very natural way. This is demonstrated in a command and control use case, showing its application in a fire-fighting scenario. We also report on pilot study results from people who have tried the platform.
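One way to implement the "flying into the scene" transition that the MagicBook popularised is to smoothly interpolate the viewer's position and world scale between the AR (book-scale) and VR (life-size) viewpoints. The sketch below is our own minimal Python illustration of that idea under assumed values; it is not the application's code.

```python
import numpy as np

def transition_viewpoint(ar_pos, ar_scale, vr_pos, vr_scale, t):
    """Blend the viewer's position and world scale between an AR (miniature,
    book-scale) viewpoint and an immersive VR (life-size) viewpoint.
    t = 0 -> AR view, t = 1 -> fully 'flown into' the VR scene."""
    t = float(np.clip(t, 0.0, 1.0))
    # Interpolate scale in log space so the zoom feels uniform throughout.
    scale = float(np.exp((1.0 - t) * np.log(ar_scale) + t * np.log(vr_scale)))
    pos = (1.0 - t) * np.asarray(ar_pos, float) + t * np.asarray(vr_pos, float)
    return pos, scale

# Hypothetical transition: from viewing a 1:50 miniature above the book page
# to standing inside the scene at full scale.
for t in (0.0, 0.5, 1.0):
    print(t, transition_viewpoint([0.0, 0.4, 0.3], 1 / 50, [2.0, 1.7, 5.0], 1.0, t))
```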
  • Utilizing a Robot to Endow Virtual Objects with Stiffness
    Jiepeng Dong, Weiping He, Bokai Zheng, Yizhe Liu, and Mark Billinghurst.

    Dong, J., He, W., Zheng, B., Liu, Y., & Billinghurst, M. (2023, October). Utilizing a Robot to Endow Virtual Objects with Stiffness. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 496-500). IEEE.

    @inproceedings{dong2023utilizing,
    title={Utilizing a Robot to Endow Virtual Objects with Stiffness},
    author={Dong, Jiepeng and He, Weiping and Zheng, Bokai and Liu, Yizhe and Billinghurst, Mark},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={496--500},
    year={2023},
    organization={IEEE}
    }
    This paper proposes a novel approach to providing stiffness feedback in VR by utilizing the inherent characteristics of encounter-type haptic devices (ETHDs). The method aims to provide a sense of an object’s deformation and enhance users’ perception of stiffness when touching objects. We explored how to minimize the influence of hardness on stiffness perception and selected materials for soft, medium, and hard hardness mapping groups to adapt to users’ perceptions when touching objects of different hardness levels. Using this system, we compared the method with touch interaction without stiffness feedback and found better performance in terms of realism, possibility to act, attractiveness, and efficiency.
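Encounter-type stiffness rendering is commonly modelled as a virtual spring: the deeper the finger presses past the virtual surface, the further the robot-held prop yields for the same contact force. The sketch below is a simplified Python illustration of that spring model under our own assumptions, not the authors' controller.

```python
def ethd_surface_offset(penetration_depth, virtual_stiffness, prop_stiffness):
    """Encounter-type haptics with a virtual spring model (F = k * x).

    To make a prop of stiffness `prop_stiffness` feel like a virtual object of
    `virtual_stiffness`, the robot retreats by the extra compliance the virtual
    object should have for the same contact force.
    Stiffness values in N/m, depths in metres."""
    force = virtual_stiffness * penetration_depth        # force the virtual object should exert
    prop_compression = force / prop_stiffness            # how much the physical prop compresses
    robot_retreat = max(penetration_depth - prop_compression, 0.0)
    return force, robot_retreat

# Hypothetical soft virtual object (500 N/m) rendered with a stiffer prop (5000 N/m):
force, retreat = ethd_surface_offset(penetration_depth=0.01,
                                     virtual_stiffness=500.0,
                                     prop_stiffness=5000.0)
print(f"contact force ~ {force:.2f} N, robot retreats {retreat * 1000:.1f} mm")
```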
  • An Asynchronous Hybrid Cross Reality Collaborative System
    Hyunwoo Cho, Bowen Yuan, Jonathon Derek Hart, Eunhee Chang, Zhuang Chang, Jiashuo Cao, Gun A. Lee, Thammathip Piumsomboon, and Mark Billinghurst.

    Cho, H., Yuan, B., Hart, J. D., Chang, E., Chang, Z., Cao, J., ... & Billinghurst, M. (2023, October). An asynchronous hybrid cross reality collaborative system. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 70-73). IEEE.

    @inproceedings{cho2023asynchronous,
    title={An asynchronous hybrid cross reality collaborative system},
    author={Cho, Hyunwoo and Yuan, Bowen and Hart, Jonathon Derek and Chang, Eunhee and Chang, Zhuang and Cao, Jiashuo and Lee, Gun A and Piumsomboon, Thammathip and Billinghurst, Mark},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={70--73},
    year={2023},
    organization={IEEE}
    }
    This work presents a Mixed Reality (MR)-based asynchronous hybrid cross reality collaborative system which supports recording and playback of user actions in three-dimensional task space at different periods in time. Using this system, an expert user can record a task process such as virtual object placement or assembly, which can then be viewed by other users in either Augmented Reality (AR) or Virtual Reality (VR) views at later points in time to complete the task. In VR, the pre-scanned 3D workspace can be experienced to enhance the understanding of spatial information. Alternatively, AR can provide real-scale information to help the workers manipulate real world objects, and complete the task assignment. Users can also seamlessly move between AR and VR views as desired. In this way the system can contribute to improving task performance and co-presence during asynchronous collaboration.
  • Time Travellers: An Asynchronous Cross Reality Collaborative System
    Hyunwoo Cho, Bowen Yuan, Jonathon Derek Hart, Zhuang Chang, Jiashuo Cao, Eunhee Chang, and Mark Billinghurst.

    Cho, H., Yuan, B., Hart, J. D., Chang, Z., Cao, J., Chang, E., & Billinghurst, M. (2023, October). Time Travellers: An Asynchronous Cross Reality Collaborative System. In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 848-853). IEEE.

    @inproceedings{cho2023time,
    title={Time Travellers: An Asynchronous Cross Reality Collaborative System},
    author={Cho, Hyunwoo and Yuan, Bowen and Hart, Jonathon Derek and Chang, Zhuang and Cao, Jiashuo and Chang, Eunhee and Billinghurst, Mark},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={848--853},
    year={2023},
    organization={IEEE}
    }
    This work presents a Mixed Reality (MR)-based asynchronous hybrid cross reality collaborative system which supports recording and playback of user actions in a large task space at different periods in time. Using this system, an expert can record a task process such as virtual object placement or assembly, which can then be viewed by other users in either Augmented Reality (AR) or Virtual Reality (VR) at later points in time to complete the task. In VR, the pre-scanned 3D workspace can be experienced to enhance the understanding of spatial information. Alternatively, AR can provide real-scale information to help the workers manipulate real-world objects, and complete the assignment. Users can seamlessly switch between AR and VR views as desired. In this way, the system can contribute to improving task performance and co-presence during asynchronous collaboration. The system is demonstrated in a use-case scenario of object assembly using parts that must be retrieved from a storehouse location. A pilot user study found that the cross reality asynchronous collaboration system was helpful in providing information about the work environment, leading to faster task completion with a lower task load. We provide lessons learned and suggestions for future research.
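At its core, asynchronous playback of recorded actions needs little more than time-stamped transforms that can be re-sampled later. The sketch below is a minimal Python illustration under our own assumptions, not the Time Travellers implementation: it records object positions and interpolates them back at an arbitrary playback time.

```python
from bisect import bisect_right

class ActionRecorder:
    """Record time-stamped object positions and replay them later by
    linear interpolation (rotations would use slerp in a full system)."""

    def __init__(self):
        self.samples = []                      # list of (timestamp, (x, y, z))

    def record(self, timestamp, position):
        self.samples.append((timestamp, tuple(position)))

    def sample(self, playback_time):
        times = [t for t, _ in self.samples]
        i = bisect_right(times, playback_time)
        if i == 0:
            return self.samples[0][1]          # before the first sample
        if i == len(self.samples):
            return self.samples[-1][1]         # after the last sample
        (t0, p0), (t1, p1) = self.samples[i - 1], self.samples[i]
        a = (playback_time - t0) / (t1 - t0)
        return tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(p0, p1))

# Hypothetical recording of an expert placing a virtual part, replayed asynchronously.
rec = ActionRecorder()
rec.record(0.0, (0.0, 1.0, 0.0))
rec.record(2.0, (0.5, 1.0, 0.4))
print(rec.sample(1.0))     # -> (0.25, 1.0, 0.2)
```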
  • Deep Learning-based Simulator Sickness Estimation from 3D Motion
    Junhong Zhao, Kien TP Tran, Andrew Chalmers, Weng Khuan Hoh, Richard Yao, Arindam Dey, James Wilmott, James Lin, Mark Billinghurst, Robert W. Lindeman, Taehyun Rhee

    Zhao, J., Tran, K. T., Chalmers, A., Hoh, W. K., Yao, R., Dey, A., ... & Rhee, T. (2023, October). Deep learning-based simulator sickness estimation from 3D motion. In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 39-48). IEEE.

    @inproceedings{zhao2023deep,
    title={Deep learning-based simulator sickness estimation from 3D motion},
    author={Zhao, Junhong and Tran, Kien TP and Chalmers, Andrew and Hoh, Weng Khuan and Yao, Richard and Dey, Arindam and Wilmott, James and Lin, James and Billinghurst, Mark and Lindeman, Robert W and others},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={39--48},
    year={2023},
    organization={IEEE}
    }
    This paper presents a novel solution for estimating simulator sickness in HMDs using machine learning and 3D motion data, informed by user-labeled simulator sickness data and user analysis. We conducted a novel VR user study, which decomposed motion data and used an instant dial-based sickness scoring mechanism. We were able to emulate typical VR usage and collect user simulator sickness scores. Our user analysis shows that translation and rotation differently impact user simulator sickness in HMDs. In addition, users’ demographic information and self-assessed simulator sickness susceptibility data are collected and show some indication of potential simulator sickness. Guided by the findings from the user study, we developed a novel deep learning-based solution to better estimate simulator sickness with decomposed 3D motion features and user profile information. The model was trained and tested using the 3D motion dataset with user-labeled simulator sickness and profiles collected from the user study. The results show higher estimation accuracy when using the 3D motion data compared with methods based on optical flow extracted from the recorded video, as well as improved accuracy when decomposing the motion data and incorporating user profile information.
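A key preprocessing step described above is decomposing head or camera motion into separate translational and rotational components before feeding them to the model. Below is a hedged NumPy sketch of one way to derive such features from a 6-DoF pose trace (positions plus yaw/pitch/roll in radians); the feature names and statistics are our own choices, not the paper's.

```python
import numpy as np

def motion_features(timestamps, positions, euler_angles):
    """Decompose a 6-DoF pose trace into translational speed and angular
    speed per frame, plus simple summary statistics for a model input."""
    ts = np.asarray(timestamps, float)
    pos = np.asarray(positions, float)                          # (n, 3) metres
    ang = np.unwrap(np.asarray(euler_angles, float), axis=0)    # (n, 3) radians
    dt = np.diff(ts)

    trans_speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) / dt     # m/s
    ang_speed = np.linalg.norm(np.diff(ang, axis=0), axis=1) / dt       # rad/s

    return {
        "trans_speed_mean": trans_speed.mean(),
        "trans_speed_max": trans_speed.max(),
        "rot_speed_mean": ang_speed.mean(),
        "rot_speed_max": ang_speed.max(),
    }

# Hypothetical 3-sample trace at 0.1 s intervals.
print(motion_features(
    [0.0, 0.1, 0.2],
    [[0, 1.6, 0], [0.02, 1.6, 0], [0.05, 1.6, 0]],
    [[0, 0, 0], [0.05, 0, 0], [0.12, 0, 0]],
))
```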
  • Cognitive Load Measurement with Physiological Sensors in Virtual Reality during Physical Activity
    Mohammad Ahmadi, Samantha W. Michalka, Sabrina Lenzoni, Marzieh Ahmadi Najafabadi, Huidong Bai, Alexander Sumich, Burkhard Wuensche, and Mark Billinghurst.

    Ahmadi, M., Michalka, S. W., Lenzoni, S., Ahmadi Najafabadi, M., Bai, H., Sumich, A., ... & Billinghurst, M. (2023, October). Cognitive Load Measurement with Physiological Sensors in Virtual Reality during Physical Activity. In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology (pp. 1-11).

    @inproceedings{ahmadi2023cognitive,
    title={Cognitive Load Measurement with Physiological Sensors in Virtual Reality during Physical Activity},
    author={Ahmadi, Mohammad and Michalka, Samantha W and Lenzoni, Sabrina and Ahmadi Najafabadi, Marzieh and Bai, Huidong and Sumich, Alexander and Wuensche, Burkhard and Billinghurst, Mark},
    booktitle={Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
    pages={1--11},
    year={2023}
    }
    Many Virtual Reality (VR) experiences, such as learning tools, would benefit from utilising mental states such as cognitive load. Increases in cognitive load (CL) are often reflected in the alteration of physiological responses, such as pupil dilation (PD), electrodermal activity (EDA), heart rate (HR), and electroencephalography (EEG). However, the relationship between these physiological responses and cognitive load are usually measured while participants sit in front of a computer screen, whereas VR environments often require a high degree of physical movement. This physical activity can affect the measured signals, making it unclear how suitable these measures are for use in interactive Virtual Reality (VR). We investigate the suitability of four physiological measures as correlates of cognitive load in interactive VR. Suitable measures must be robust enough to allow the learner to move within VR and be temporally responsive enough to be a useful metric for adaptation. We recorded PD, EDA, HR, and EEG data from nineteen participants during a sequence memory task at varying levels of cognitive load using VR, while in the standing position and using their dominant arm to play a game. We observed significant linear relationships between cognitive load and PD, EDA, and EEG frequency band power, but not HR. PD showed the most reliable relationship but has a slower response rate than EEG. Our results suggest the potential for use of PD, EDA, and EEG in this type of interactive VR environment, but additional studies will be needed to assess feasibility under conditions of greater movement.
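For readers interested in the EEG frequency band power measure used here as a cognitive load correlate, the sketch below shows a standard way (not necessarily the study's exact pipeline) to compute band power from a single-channel EEG signal with SciPy's Welch estimator; the sampling rate and bands are illustrative.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Power of `signal` within a frequency band (Hz), integrated over
    Welch's power spectral density estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # rectangle-rule integration

# Hypothetical single-channel EEG segment sampled at 256 Hz:
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 2e-6 * np.random.randn(t.size)

theta = band_power(eeg, fs, (4, 8))      # theta band, often linked to workload
alpha = band_power(eeg, fs, (8, 13))     # alpha band
print(theta, alpha)
```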
  • Exploring Real-time Precision Feedback for AR-assisted Manual Adjustment in Mechanical Assembly
    Xingyue Tang, Zhuang Chang, Weiping He, Mark Billinghurst, and Xiaotian Zhang.

    Tang, X., Chang, Z., He, W., Billinghurst, M., & Zhang, X. (2023, October). Exploring Real-time Precision Feedback for AR-assisted Manual Adjustment in Mechanical Assembly. In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology (pp. 1-11).

    @inproceedings{tang2023exploring,
    title={Exploring Real-time Precision Feedback for AR-assisted Manual Adjustment in Mechanical Assembly},
    author={Tang, Xingyue and Chang, Zhuang and He, Weiping and Billinghurst, Mark and Zhang, Xiaotian},
    booktitle={Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
    pages={1--11},
    year={2023}
    }
    Augmented Reality (AR) based manual assembly guidance can now support physical tasks by providing intuitive instructions and detailed information in real time. However, very few studies have explored AR manual adjustment tasks with precision requirements. In this paper, we develop an AR-assisted guidance system for manual adjustments with relatively high-precision requirements. We first assessed the accuracy of our OptiTrack setup to determine the threshold of precision requirements for our user study. We then evaluated the performance of Number-based and Bar-based precision feedback by comparing orientation assembly errors and task completion time, as well as usability, in the user study. We found that the orientation assembly errors in the Number-based and Bar-based interfaces were significantly lower than in the baseline condition, while there was no significant difference between the Number-based and Bar-based interfaces. Furthermore, the Number-based interface showed faster task completion times, lower workload, and higher usability than the Bar-based condition.
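The orientation assembly error reported above is essentially the angular difference between the target and achieved orientations. Below is a hedged NumPy sketch of that standard computation (not the authors' code); quaternions are assumed to be in (x, y, z, w) order.

```python
import numpy as np

def orientation_error_deg(q_target, q_actual):
    """Angular difference in degrees between two unit quaternions
    (x, y, z, w); the absolute dot product handles the q / -q ambiguity."""
    qt = np.asarray(q_target, float)
    qa = np.asarray(q_actual, float)
    qt = qt / np.linalg.norm(qt)
    qa = qa / np.linalg.norm(qa)
    d = np.clip(abs(np.dot(qt, qa)), -1.0, 1.0)
    return np.degrees(2.0 * np.arccos(d))

# Hypothetical check: a 5-degree rotation about the z axis.
half = np.radians(5.0) / 2.0
q_rotated = [0.0, 0.0, np.sin(half), np.cos(half)]
print(orientation_error_deg([0, 0, 0, 1], q_rotated))   # ~5.0
```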
  • Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues
    Tram Thi Minh Tran, Shane Brown, Oliver Weidlich, Mark Billinghurst, and Callum Parker.

    Tran, T. T. M., Brown, S., Weidlich, O., Billinghurst, M., & Parker, C. (2023). Wearable augmented reality: research trends and future directions from three major venues. IEEE Transactions on Visualization and Computer Graphics.

    @article{tran2023wearable,
    title={Wearable augmented reality: research trends and future directions from three major venues},
    author={Tran, Tram Thi Minh and Brown, Shane and Weidlich, Oliver and Billinghurst, Mark and Parker, Callum},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    year={2023},
    publisher={IEEE}
    }
    Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998–2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
  • Evaluating visual encoding quality of a mixed reality user interface for human–machine co-assembly in complex operational terrain
    Zhuo Wang, Xiangyu Zhang, Liang Li, Yiliang Zhou, Zexin Lu, Yuwei Dai, Chaoqian Liu, Zekun Su, Xiaoliang Bai, and Mark Billinghurst.

    Wang, Z., Zhang, X., Li, L., Zhou, Y., Lu, Z., Dai, Y., ... & Billinghurst, M. (2023). Evaluating visual encoding quality of a mixed reality user interface for human–machine co-assembly in complex operational terrain. Advanced Engineering Informatics, 58, 102171.

    @article{wang2023evaluating,
    title={Evaluating visual encoding quality of a mixed reality user interface for human--machine co-assembly in complex operational terrain},
    author={Wang, Zhuo and Zhang, Xiangyu and Li, Liang and Zhou, Yiliang and Lu, Zexin and Dai, Yuwei and Liu, Chaoqian and Su, Zekun and Bai, Xiaoliang and Billinghurst, Mark},
    journal={Advanced Engineering Informatics},
    volume={58},
    pages={102171},
    year={2023},
    publisher={Elsevier}
    }
    During human–machine collaboration in manufacturing activities, it is important to provide real-time annotations in the three-dimensional workspace for local workers who may lack relevant experience and knowledge. For example, in MR assembly, workers need to be alerted to avoid entering hazardous areas when manually replacing components. Recently, many researchers have explored various visual cues for expressing physical task progress information in the MR interface of intelligent systems. However, the relationship between the implantation of visual cues and the balance of interface cognition has not been well revealed, especially in tasks that require annotating hazardous areas in complex operational terrains. In this study, we developed a novel MR interface for an intelligent assembly system that supports local scene sharing based on dynamic 3D reconstruction, remote expert behavior intention recognition based on deep learning, and local personnel operational behavior visual feedback based on external bounding box. We compared the encoding results of the proposed MR interface with 3D annotations combined with 3D sketch cues (3DS), which combines 3D spatial cues (3DSC) and 3DS combined with adaptive cues (AVC), through a case study. We found that for physical tasks that require specific area annotations, 3D annotations with context (3DAC) can better improve the quality of manual work and regulate the cognitive load distribution of the MR interface more reasonably.
  • Older adults’ experiences of social isolation and loneliness: Can virtual touring increase social connectedness? A pilot study
    Michelle Leanne Oppert, Melissa Ngo, Gun A. Lee, Mark Billinghurst, Siobhan Banks, and Laura Tolson.

    Oppert, M. L., Ngo, M., Lee, G. A., Billinghurst, M., Banks, S., & Tolson, L. (2023). Older adults’ experiences of social isolation and loneliness: Can virtual touring increase social connectedness? A pilot study. Geriatric Nursing, 53, 270-279.

    @article{oppert2023older,
    title={Older adults’ experiences of social isolation and loneliness: Can virtual touring increase social connectedness? A pilot study},
    author={Oppert, Michelle Leanne and Ngo, Melissa and Lee, Gun A and Billinghurst, Mark and Banks, Siobhan and Tolson, Laura},
    journal={Geriatric Nursing},
    volume={53},
    pages={270--279},
    year={2023},
    publisher={Elsevier}
    }
    The present pilot study explored the research aim of understanding how independent-living older adults experience social isolation and loneliness and whether virtual tour digital technology can increase social connectedness (N = 10). Through triangulation of interviews, experiences, and feedback, this study contributes to the knowledge base on the well-being of our ageing populations and how digital technologies, specifically virtual tourism, can aid in this process. The key findings reveal that the participants in our study were moderately lonely but were open to embracing more digital technology, sharing how it is instrumental in facilitating social connection and life administration. Participating in virtual tour experiences was well accepted as participants expressed enjoyment, nostalgia, and interest in future use. However, its contribution to increasing social connections needs to be clarified and requires further investigation. Several future research and education directions are provided.
  • A comprehensive survey on AR-enabled local collaboration
    Shuo Feng, Weiping He, Xiaotian Zhang, Mark Billinghurst, and Shuxia Wang.

    Feng, S., He, W., Zhang, X., Billinghurst, M., & Wang, S. (2023). A comprehensive survey on AR-enabled local collaboration. Virtual Reality, 27(4), 2941-2966.

    @article{feng2023comprehensive,
    title={A comprehensive survey on AR-enabled local collaboration},
    author={Feng, Shuo and He, Weiping and Zhang, Xiaotian and Billinghurst, Mark and Wang, Shuxia},
    journal={Virtual Reality},
    volume={27},
    number={4},
    pages={2941--2966},
    year={2023},
    publisher={Springer}
    }
    With the rapid development of augmented reality (AR) technology and devices, it is widely used in education, design, industry, game, medicine and other fields. It brings new development opportunities for computer-supported cooperative work. In recent years, there has been an increasing number of studies on AR collaboration. Many professional researchers have also summarized and commented on these local and remote applications. However, to the best of our knowledge, there is no comprehensive review specifically on AR-enabled local collaboration (AR-LoCol). Therefore, this paper presents a comprehensive survey of research between 2012 and 2022 in this domain. We surveyed 133 papers on AR-LoCol in Web of Science, 75% of which were published between 2018 and 2022. Next, we provide an in-depth review of papers in seven areas, including time (synchronous and asynchronous), device (hand-held display, desktop, spatial AR, head-mounted display), participants (double and multiple), place (standing, indoor and outdoor), content (virtual objects, annotations, awareness cues and multi-perspective views), and area (education, industry, medicine, architecture, exhibition, game, exterior design, visualization, interaction, basic tools). We discuss the characteristics and specific work in each category, especially the advantages and disadvantages of different devices and the necessity for shared contents. Following this, we summarize the current state of development of AR-LoCol and discuss possible future research directions. This work will be useful for current and future researchers interested in AR-LoCol systems.
  • Investigation of learners’ behavioral intentions to use metaverse learning environment in higher education: a virtual computer laboratory
    Emin İbili, Melek Ölmez, Abdullah Cihan, Fırat Bilal, Aysel Burcu İbili, Nurullah Okumus, and Mark Billinghurst.

    İbili, E., Ölmez, M., Cihan, A., Bilal, F., İbili, A. B., Okumus, N., & Billinghurst, M. (2023). Investigation of learners’ behavioral intentions to use metaverse learning environment in higher education: a virtual computer laboratory. Interactive Learning Environments, 1-26.

    @article{ibili2023investigation,
    title={Investigation of learners’ behavioral intentions to use metaverse learning environment in higher education: a virtual computer laboratory},
    author={{\.I}bili, Emin and {\"O}lmez, Melek and Cihan, Abdullah and Bilal, F{\i}rat and {\.I}bili, Aysel Burcu and Okumus, Nurullah and Billinghurst, Mark},
    journal={Interactive Learning Environments},
    pages={1--26},
    year={2023},
    publisher={Taylor \& Francis}
    }
    The aim of this study is to investigate the determinants that affect undergraduate students’ behavioral intentions to continue learning computer hardware concepts utilizing a Metaverse-based system. The current study examined the factors influencing students’ adoption of Metaverse technology at the tertiary level using a model based on the Technology Acceptance Model (TAM) and the General Extended Technology Acceptance Model for E-Learning (GETAMEL). The data was collected from 210 undergraduate students and Structural Equation Modeling (SEM) was adopted to analyze the responses. The findings show that Perceived Usefulness and Hedonic Motivation have significant positive effect on Behavioral Intention. Additionally, Natural Interaction and Perceived Usefulness significantly affect Hedonic Motivation, while Computer Anxiety negatively affects Hedonic Motivation. Furthermore, Natural Interaction was found to be the strongest predictor of Perceived Usefulness, whereas Experience was the strongest predictor of Perceived Ease of Use. The findings also indicate that Subjective Norms and Self-Efficacy have a significant effect on Experience, while Subjective Norms significantly influence Self-Efficacy. The research results also showed that neither gender nor the department had any effect. The results of this study provide major practical outcomes for higher education institutions and teachers in terms of designing Metaverse-based teaching environments.
  • Immersive medical virtual reality: still a novelty or already a necessity?
    Tobias Loetscher, A. M. Barrett, Mark Billinghurst, and Belinda Lange.

    Loetscher, T., Barrett, A. M., Billinghurst, M., & Lange, B. (2023). Immersive medical virtual reality: still a novelty or already a necessity?. Journal of Neurology, Neurosurgery & Psychiatry, 94(7), 499-501.

    @misc{loetscher2023immersive,
    title={Immersive medical virtual reality: still a novelty or already a necessity?},
    author={Loetscher, Tobias and Barrett, AM and Billinghurst, Mark and Lange, Belinda},
    journal={Journal of Neurology, Neurosurgery \& Psychiatry},
    volume={94},
    number={7},
    pages={499--501},
    year={2023},
    publisher={BMJ Publishing Group Ltd}
    }
    Virtual reality (VR) technologies have been explored for medical applications for over half a century. With major tech companies such as Meta (formerly Facebook), HTC and Microsoft investing heavily in the development of VR technologies, significant advancements have recently been made in hardware (eg, standalone headsets), ease of use (eg, gesture tracking) and equipment cost. These advancements helped spur research in the medical field, with over 2700 VR-related articles indexed in PubMed alone in 2022, and the number of VR articles more than tripling in the last 6 years. Recently, the US Food and Drug Administration (FDA) also approved the first VR-based therapy for chronic back pain. 1 Whether the technology has reached a tipping point for its use in medicine is debatable, but it seems timely to provide a brief overview of the current state of immersive VR in neurology and related fields. In this editorial, we will discuss the characteristics of VR that make it a potentially transformative tool in healthcare, review some of the most mature VR solutions for medical use and highlight barriers to implementation that must be addressed before the technology can be widely adopted in healthcare. This editorial will focus solely on immersive VR technology and will not delve into the applications and use cases of augmented or mixed reality.
  • Can you hear it? Stereo sound-assisted guidance in augmented reality assembly
    Shuo Feng, Xinjing He, Weiping He, and Mark Billinghurst.

    Feng, S., He, X., He, W., & Billinghurst, M. (2023). Can you hear it? Stereo sound-assisted guidance in augmented reality assembly. Virtual Reality, 27(2), 591-601.

    @article{feng2023can,
    title={Can you hear it? Stereo sound-assisted guidance in augmented reality assembly},
    author={Feng, Shuo and He, Xinjing and He, Weiping and Billinghurst, Mark},
    journal={Virtual Reality},
    volume={27},
    number={2},
    pages={591--601},
    year={2023},
    publisher={Springer}
    }
    Most augmented reality (AR) assembly guidance systems rely on visual information alone. Sound, however, is also a useful cue: the human binaural effect lets users quickly identify the general direction of a sound source, and pleasant sounds can create a sense of pleasure and relaxation. Yet the effect on workers of combining stereo sound with visual information for assembly guidance is still unknown. To assess this combination, we constructed a stereo sound-assisted guidance (SAG) system based on AR, using the tone of the Chinese lute, a soft-sounding instrument, as the sound source. To determine whether SAG affects assembly efficiency and user experience, we conducted a usability test comparing SAG with visual information alone. Results showed that the SAG system significantly improves the efficiency of assembly guidance. Moreover, processing visual and auditory information simultaneously does not increase user workload or learning difficulty. Additionally, in a noisy environment, pleasant sounds help to reduce mental strain.
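    As a rough illustration of how a directional audio cue can be rendered over stereo headphones (not the authors’ actual implementation), the sketch below uses constant-power panning to weight a mono tone toward the left or right channel according to the horizontal angle of a target part.

    # Constant-power stereo panning sketch (illustrative only).
    import numpy as np
    from scipy.io import wavfile

    def pan_tone(azimuth_deg, freq=440.0, dur=1.0, sr=44100):
        """Return a stereo tone panned toward azimuth_deg
        (-90 = hard left, 0 = centre, +90 = hard right)."""
        t = np.linspace(0, dur, int(sr * dur), endpoint=False)
        mono = 0.3 * np.sin(2 * np.pi * freq * t)
        # Map azimuth to [0, pi/2] and apply constant-power gains.
        pan = (np.clip(azimuth_deg, -90, 90) + 90) / 180 * (np.pi / 2)
        return np.stack([np.cos(pan) * mono, np.sin(pan) * mono], axis=1)

    stereo = pan_tone(azimuth_deg=45)                     # target part to the right
    wavfile.write("cue.wav", 44100, stereo.astype(np.float32))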
  • A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality
    Li Zhang, Weiping He, Huidong Bai, Qianyuan Zou, Shuxia Wang, and Mark Billinghurst.

    Zhang, L., He, W., Bai, H., Zou, Q., Wang, S., & Billinghurst, M. (2023). A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality. Virtual Reality, 27(2), 1273-1291.

    @article{zhang2023hybrid,
    title={A hybrid 2D--3D tangible interface combining a smartphone and controller for virtual reality},
    author={Zhang, Li and He, Weiping and Bai, Huidong and Zou, Qianyuan and Wang, Shuxia and Billinghurst, Mark},
    journal={Virtual Reality},
    volume={27},
    number={2},
    pages={1273--1291},
    year={2023},
    publisher={Springer}
    }
    Virtual reality (VR) controllers are widely used for 3D virtual object selection and manipulation in immersive virtual worlds, while touchscreen-based devices like smartphones or tablets provide precise 2D tangible input. However, VR controllers and touchscreens are used separately in most cases. This research physically integrates a VR controller and a smartphone to create a hybrid 2D–3D tangible interface for VR interactions, combining the strength of both devices. The hybrid interface inherits physical buttons, 3D tracking, and spatial input from the VR controller while having tangible feedback, 2D precise input, and content display from the smartphone’s touchscreen. We review the capabilities of VR controllers and smartphones to summarize design principles and then present a design space with nine typical interaction paradigms for the hybrid interface. We developed an interactive prototype and three application modes to demonstrate the combination of individual interaction paradigms in various VR scenarios. We conducted a formal user study through a guided walkthrough to evaluate the usability of the hybrid interface. The results were positive, with participants reporting above-average usability and rating the system as excellent on four out of six user experience questionnaire scales. We also described two use cases to demonstrate the potential of the hybrid interface.
  • Parallel or Cross? Effects of Two Collaborative Modes on Augmented Reality Co-located Operations
    Shuo Feng, Yizhe Liu, Qianrui Zhang, Weiping He, Xiaotian Zhang, Shuxia Wang, and Mark Billinghurst.

    Feng, S., Liu, Y., Zhang, Q., He, W., Zhang, X., Wang, S., & Billinghurst, M. (2023). Parallel or cross? Effects of two collaborative modes on augmented reality co-located operations. International Journal of Human–Computer Interaction, 1-12.

    @article{feng2023parallel,
    title={Parallel or cross? Effects of two collaborative modes on augmented reality co-located operations},
    author={Feng, Shuo and Liu, Yizhe and Zhang, Qianrui and He, Weiping and Zhang, Xiaotian and Wang, Shuxia and Billinghurst, Mark},
    journal={International Journal of Human--Computer Interaction},
    pages={1--12},
    year={2023},
    publisher={Taylor \& Francis}
    }
    Augmented reality (AR) can bring a new interactive experience to collaboration between users. When users are co-located, there are two modes of jointly operating on the same object: parallel-work (PW), in which the two users perform their own tasks independently, and cross-work (CW), in which they assist each other. To investigate how collaborating with PW and CW in an AR environment affects users, we developed a two-person local collaboration system, LoCol. We designed and conducted user experiments around two tasks: adjusting a virtual assembly model and adding missing boundaries to the model. The results showed that CW led to a higher sense of social coexistence while reducing workload, while in terms of task completion time and accuracy, CW and PW each had advantages. We found that users generally want to reduce unnecessary repetitive operations and frequent movement when working with others, which is likely an important criterion for determining who is better suited to a particular job in either approach.
  • Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture
    Daniel Paes, Javier Irizarry, Mark Billinghurst, and Diego Pujoni.

    Paes, D., Irizarry, J., Billinghurst, M., & Pujoni, D. (2023). Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture. Applied Ergonomics, 109, 103953.

    @article{paes2023investigating,
    title={Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture},
    author={Paes, Daniel and Irizarry, Javier and Billinghurst, Mark and Pujoni, Diego},
    journal={Applied Ergonomics},
    volume={109},
    pages={103953},
    year={2023},
    publisher={Elsevier}
    }
    Identifying and characterizing the factors that affect presence in virtual environments has been acknowledged as a critical step to improving Virtual Reality (VR) applications in the built environment domain. In the search to identify those factors, the research objective was to test whether three-dimensional perception affects presence in virtual environments. A controlled within-group experiment utilizing perception and presence questionnaires was conducted, followed by data analysis, to test the hypothesized unidirectional association between three-dimensional perception and presence in two different virtual environments (non-immersive and immersive). Results indicate no association in either of the systems studied, contrary to the assumption of many scholars in the field but in line with recent studies on the topic. Consequently, VR applications in architectural design may not necessarily need to incorporate advanced stereoscopic visualization techniques to deliver highly immersive experiences, which may be achieved by addressing factors other than depth realism. As findings suggest that the levels of presence experienced by users are not subject to the display mode of a 3D model (whether immersive or non-immersive display), it may still be possible for professionals involved in the review of 3D models (e.g., designers, contractors, clients) to experience high levels of presence through non-stereoscopic VR systems provided that other presence-promoting factors are included.
  • Towards an Inclusive and Accessible Metaverse
    Callum Parker, Soojeong Yoo, Youngho Lee, Joel Fredericks, Arindam Dey, Youngjun Cho, and Mark Billinghurst.

    Parker, C., Yoo, S., Lee, Y., Fredericks, J., Dey, A., Cho, Y., & Billinghurst, M. (2023, April). Towards an inclusive and accessible metaverse. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-5).

    @inproceedings{parker2023towards,
    title={Towards an inclusive and accessible metaverse},
    author={Parker, Callum and Yoo, Soojeong and Lee, Youngho and Fredericks, Joel and Dey, Arindam and Cho, Youngjun and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
    pages={1--5},
    year={2023}
    }
    The push towards a Metaverse is growing, with companies such as Meta developing their own interpretation of what it should look like. The Metaverse at its conceptual core promises to remove boundaries and borders, becoming a decentralised entity for everyone to use - forming a digital virtual layer over our own “real” world. However, creation of a Metaverse or “new world” presents the opportunity to create one which is inclusive and accessible to all. This challenge is explored and discussed in this workshop, with an aim of understanding how to create a Metaverse which is open and inclusive to people with physical and intellectual disabilities, and how interactions can be designed in a way to minimise disadvantage. The key outcomes of this workshop outline new opportunities for improving accessibility in the Metaverse, methodologies for designing and evaluating accessibility, and key considerations for designing accessible Metaverse environments and interactions.
  • Wish You Were Here: Mental and Physiological Effects of Remote Music Collaboration in Mixed Reality
    Ruben Schlagowski, Dariia Nazarenko, Yekta Can, Kunal Gupta, Silvan Mertes, Mark Billinghurst, and Elisabeth André.

    Schlagowski, R., Nazarenko, D., Can, Y., Gupta, K., Mertes, S., Billinghurst, M., & André, E. (2023, April). Wish you were here: Mental and physiological effects of remote music collaboration in mixed reality. In Proceedings of the 2023 CHI conference on human factors in computing systems (pp. 1-16).

    @inproceedings{schlagowski2023wish,
    title={Wish you were here: Mental and physiological effects of remote music collaboration in mixed reality},
    author={Schlagowski, Ruben and Nazarenko, Dariia and Can, Yekta and Gupta, Kunal and Mertes, Silvan and Billinghurst, Mark and Andr{\'e}, Elisabeth},
    booktitle={Proceedings of the 2023 CHI conference on human factors in computing systems},
    pages={1--16},
    year={2023}
    }
    With face-to-face music collaboration being severely limited during the recent pandemic, mixed reality technologies and their potential to provide musicians a feeling of "being there" with their musical partner can offer tremendous opportunities. In order to assess this potential, we conducted a laboratory study in which musicians made music together in real-time while simultaneously seeing their jamming partner’s mixed reality point cloud via a head-mounted display and compared mental effects such as flow, affect, and co-presence to an audio-only baseline. In addition, we tracked the musicians’ physiological signals and evaluated their features during times of self-reported flow. For users jamming in mixed reality, we observed a significant increase in co-presence. Regardless of the condition (mixed reality or audio-only), we observed an increase in positive affect after jamming remotely. Furthermore, we identified heart rate and HF/LF as promising features for classifying the flow state musicians experienced while making music together.
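    For context, heart-rate variability features such as LF and HF band power are usually derived from inter-beat (RR) intervals; the sketch below shows a standard way to do this (resampling the RR series and integrating the Welch spectrum over the conventional 0.04–0.15 Hz and 0.15–0.4 Hz bands). It is a generic illustration, not the analysis code used in the study.

    # HRV frequency-feature sketch using conventional LF/HF bands (illustrative only).
    import numpy as np
    from scipy.signal import welch

    def lf_hf_features(rr_ms, fs=4.0):
        """rr_ms: array of successive RR intervals in milliseconds."""
        beat_times = np.cumsum(rr_ms) / 1000.0                    # beat times in seconds
        t_even = np.arange(beat_times[0], beat_times[-1], 1 / fs)
        rr_even = np.interp(t_even, beat_times, rr_ms)            # evenly resampled series
        f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
        lf_band, hf_band = (f >= 0.04) & (f < 0.15), (f >= 0.15) & (f < 0.40)
        lf, hf = np.trapz(psd[lf_band], f[lf_band]), np.trapz(psd[hf_band], f[hf_band])
        return lf, hf, hf / lf                                    # HF/LF ratio as a flow feature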
  • Brain activity during cybersickness: a scoping review
    Eunhee Chang, Mark Billinghurst, and Byounghyun Yoo.

    Chang, E., Billinghurst, M., & Yoo, B. (2023). Brain activity during cybersickness: A scoping review. Virtual reality, 27(3), 2073-2097.

    @article{chang2023brain,
    title={Brain activity during cybersickness: A scoping review},
    author={Chang, Eunhee and Billinghurst, Mark and Yoo, Byounghyun},
    journal={Virtual reality},
    volume={27},
    number={3},
    pages={2073--2097},
    year={2023},
    publisher={Springer}
    }
    Virtual reality (VR) experiences can cause a range of negative symptoms such as nausea, disorientation, and oculomotor discomfort, which are collectively called cybersickness. Previous studies have attempted to develop a reliable measure for detecting cybersickness without questionnaires, and electroencephalography (EEG) has been regarded as one possible alternative. However, despite the increasing interest, little is known about which brain activities are consistently associated with cybersickness and what methods should be adopted for measuring discomfort through brain activity. We conducted a scoping review of 33 experimental studies on cybersickness and EEG found through database searches and screening. To understand these studies, we organized the EEG analysis pipeline into four steps (preprocessing, feature extraction, feature selection, classification) and surveyed the characteristics of each step. The results showed that most studies performed frequency or time-frequency analysis for EEG feature extraction. Some of the studies applied a classification model to predict cybersickness, reporting accuracies between 79% and 100%. These studies tended to use HMD-based VR with a portable EEG headset for measuring brain activity. Most VR content consisted of scenic views such as driving or navigating along a road, and participants were mostly limited to people in their 20s. This scoping review presents an overview of cybersickness-related EEG research and establishes directions for future work.
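    To make the surveyed pipeline concrete, the sketch below shows one common shape of the preprocessing-to-classification chain (broadband filtering, Welch band power per channel, and a simple classifier). It is a generic illustration of the kind of analysis the review describes, not code from any of the surveyed studies; the sampling rate, bands, and classifier are assumptions.

    # Generic EEG band-power feature extraction and classification sketch (illustrative only).
    import numpy as np
    from scipy.signal import butter, filtfilt, welch
    from sklearn.linear_model import LogisticRegression

    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(eeg, fs=256):
        """eeg: array of shape (channels, samples) for one epoch."""
        b, a = butter(4, [1, 40], btype="band", fs=fs)        # basic preprocessing filter
        eeg = filtfilt(b, a, eeg, axis=-1)
        f, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)   # frequency-domain features
        feats = [psd[:, (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
        return np.concatenate(feats)                          # one feature vector per epoch

    # X: epochs x features, y: cybersickness labels (e.g. questionnaire thresholds)
    # clf = LogisticRegression(max_iter=1000).fit(X_train, y_train); clf.score(X_test, y_test)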
  • The impact of virtual agents’ multimodal communication on brain activity and cognitive load in virtual reality
    Zhuang Chang, Huidong Bai, Li Zhang, Kunal Gupta, Weiping He, and Mark Billinghurst.

    Chang, Z., Bai, H., Zhang, L., Gupta, K., He, W., & Billinghurst, M. (2022). The impact of virtual agents’ multimodal communication on brain activity and cognitive load in virtual reality. Frontiers in Virtual Reality, 3, 995090.

    @article{chang2022impact,
    title={The impact of virtual agents’ multimodal communication on brain activity and cognitive load in virtual reality},
    author={Chang, Zhuang and Bai, Huidong and Zhang, Li and Gupta, Kunal and He, Weiping and Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={3},
    pages={995090},
    year={2022},
    publisher={Frontiers Media SA}
    }
    Previous research has shown that collaborating with Intelligent Virtual Agents (IVAs) embodied in Augmented Reality (AR) or Virtual Reality (VR) can improve task performance and reduce task load. Human cognition and behavior are driven by brain activity, which can be captured and reflected in Electroencephalogram (EEG) signals. However, little research has used EEG to understand users’ cognition and behavior while they interact with IVAs embodied in AR and VR environments. In this paper, we investigate the impact of a virtual agent’s multimodal communication in VR on users’ EEG signals as measured by alpha band power. We developed a desert survival game in which participants make decisions collaboratively with the virtual agent in VR, and evaluated three communication methods in a within-subjects pilot study: 1) a Voice-only Agent, 2) an Embodied Agent with speech and gaze, and 3) a Gestural Agent that points at an object while talking about it. No significant difference was found in overall EEG alpha band power. However, the alpha band ERD/ERS calculated around the moment the virtual agent started speaking indicated that giving the agent a virtual body could reduce the abrupt attentional demand of its sudden speech. Moreover, a sudden gesture coupled with the speech induced additional attentional demand, even when the speech was matched with the virtual body. This work is the first to explore the impact of IVAs’ interaction methods in VR on users’ brain activity, and our findings contribute to IVA interaction design.
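    Alpha-band ERD/ERS is conventionally computed as the percentage change in band power relative to a pre-event baseline; the sketch below illustrates that calculation with assumed window lengths, and is not the study’s exact analysis.

    # Alpha ERD/ERS sketch: percentage power change relative to baseline (illustrative only).
    import numpy as np
    from scipy.signal import butter, filtfilt

    def alpha_erd_ers(eeg, fs, event_idx, baseline_s=1.0, window_s=1.0):
        """eeg: 1-D signal for one channel; event_idx: sample where the agent starts speaking."""
        b, a = butter(4, [8, 13], btype="band", fs=fs)        # isolate the alpha band
        power = filtfilt(b, a, eeg) ** 2
        base = power[event_idx - int(baseline_s * fs):event_idx].mean()
        post = power[event_idx:event_idx + int(window_s * fs)].mean()
        return 100.0 * (post - base) / base                   # negative = ERD, positive = ERS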
  • Online platforms for remote immersive Virtual Reality testing: an emerging tool for experimental behavioral research
    Tobias Loetscher, Nadia Siena Jurkovic, Stefan Carlo Michalski, Mark Billinghurst, and Gun Lee.

    Loetscher, T., Jurkovic, N. S., Michalski, S. C., Billinghurst, M., & Lee, G. (2023). Online platforms for remote immersive Virtual Reality testing: an emerging tool for experimental behavioral research. Multimodal Technologies and Interaction, 7(3), 32.

    @article{loetscher2023online,
    title={Online platforms for remote immersive Virtual Reality testing: an emerging tool for experimental behavioral research},
    author={Loetscher, Tobias and Jurkovic, Nadia Siena and Michalski, Stefan Carlo and Billinghurst, Mark and Lee, Gun},
    journal={Multimodal Technologies and Interaction},
    volume={7},
    number={3},
    pages={32},
    year={2023},
    publisher={MDPI}
    }
    Virtual Reality (VR) technology is gaining popularity as a research tool for studying human behavior, but its use for remote testing is still an emerging field. This study aimed to evaluate the feasibility of conducting remote VR behavioral experiments that require millisecond timing. Participants were recruited via an online crowdsourcing platform and accessed a task on the classic cognitive phenomenon “Inhibition of Return” through a web browser using their own VR headset or desktop computer (68 participants in each group). The results confirm previous findings that remote participants using desktop computers can be tested effectively in time-critical cognitive experiments. However, inhibition of return was only partially replicated in the VR headset group. Exploratory analyses revealed that technical factors, such as headset type, were likely to significantly increase variability and must be mitigated to obtain accurate results. This study demonstrates the potential of remote VR testing to broaden research scope and reach a larger participant population. Crowdsourcing services appear to be an efficient and effective way to recruit participants for remote behavioral testing using high-end VR headsets.
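    Inhibition of Return is typically quantified as the difference in mean reaction time between targets appearing at previously cued versus uncued locations; the snippet below shows that simple computation on hypothetical trial data.

    # Inhibition-of-Return effect computed from hypothetical trial data (illustrative only).
    import pandas as pd

    trials = pd.DataFrame({
        "cued":  [True, False, True, False],        # did the target appear at the cued location?
        "rt_ms": [412.0, 378.0, 405.0, 382.0],      # reaction times (placeholder values)
    })
    ior = trials.loc[trials.cued, "rt_ms"].mean() - trials.loc[~trials.cued, "rt_ms"].mean()
    print(f"IOR effect: {ior:.1f} ms (positive = slower responses at cued locations)")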
  • Using Virtual Replicas to Improve Mixed Reality Remote Collaboration
    Huayuan Tian, Gun A. Lee, Huidong Bai, and Mark Billinghurst.

    Tian, H., Lee, G. A., Bai, H., & Billinghurst, M. (2023). Using virtual replicas to improve mixed reality remote collaboration. IEEE Transactions on Visualization and Computer Graphics, 29(5), 2785-2795.

    @article{tian2023using,
    title={Using virtual replicas to improve mixed reality remote collaboration},
    author={Tian, Huayuan and Lee, Gun A and Bai, Huidong and Billinghurst, Mark},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    volume={29},
    number={5},
    pages={2785--2795},
    year={2023},
    publisher={IEEE}
    }
    In this paper, we explore how virtual replicas can enhance Mixed Reality (MR) remote collaboration with a 3D reconstruction of the task space. People in different locations may need to work together remotely on complicated tasks. For example, a local user could follow a remote expert's instructions to complete a physical task. However, it could be challenging for the local user to fully understand the remote expert's intentions without effective spatial referencing and action demonstration. In this research, we investigate how virtual replicas can work as a spatial communication cue to improve MR remote collaboration. This approach segments the foreground manipulable objects in the local environment and creates corresponding virtual replicas of physical task objects. The remote user can then manipulate these virtual replicas to explain the task and guide their partner. This enables the local user to rapidly and accurately understand the remote expert's intentions and instructions. Our user study with an object assembly task found that using virtual replica manipulation was more efficient than using 3D annotation drawing in an MR remote collaboration scenario. We report and discuss the findings and limitations of our system and study, and present directions for future research.
  • Hapticproxy: Providing positional vibrotactile feedback on a physical proxy for virtual-real interaction in augmented reality
    Li Zhang, Weiping He, Zhiwei Cao, Shuxia Wang, Huidong Bai, and Mark Billinghurst.

    Zhang, L., He, W., Cao, Z., Wang, S., Bai, H., & Billinghurst, M. (2023). Hapticproxy: Providing positional vibrotactile feedback on a physical proxy for virtual-real interaction in augmented reality. International Journal of Human–Computer Interaction, 39(3), 449-463.

    @article{zhang2023hapticproxy,
    title={Hapticproxy: Providing positional vibrotactile feedback on a physical proxy for virtual-real interaction in augmented reality},
    author={Zhang, Li and He, Weiping and Cao, Zhiwei and Wang, Shuxia and Bai, Huidong and Billinghurst, Mark},
    journal={International Journal of Human--Computer Interaction},
    volume={39},
    number={3},
    pages={449--463},
    year={2023},
    publisher={Taylor \& Francis}
    }
    Consistent visual and haptic feedback is an important way to improve the user experience when interacting with virtual objects. However, the perception provided in Augmented Reality (AR) mainly comes from visual cues and amorphous tactile feedback. This work explores how to simulate positional vibrotactile feedback (PVF) with multiple vibration motors when colliding with virtual objects in AR. By attaching spatially distributed vibration motors on a physical haptic proxy, users can obtain an augmented collision experience with positional vibration sensations from the contact point with virtual objects. We first developed a prototype system and conducted a user study to optimize the design parameters. Then we investigated the effect of PVF on user performance and experience in a virtual and real object alignment task in the AR environment. We found that this approach could significantly reduce the alignment offset between virtual and physical objects with tolerable task completion time increments. With the PVF cue, participants obtained a more comprehensive perception of the offset direction, more useful information, and a more authentic AR experience.
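    One simple way to turn a virtual contact point into per-motor vibration intensities is inverse-distance weighting over the motor positions on the proxy. The sketch below illustrates that idea; it is an assumed scheme for exposition, not the mapping actually used in the paper.

    # Inverse-distance mapping from a contact point to motor intensities (illustrative only).
    import numpy as np

    def motor_intensities(contact, motors, falloff=2.0):
        """contact: (3,) contact point; motors: (n, 3) motor positions on the physical proxy."""
        d = np.linalg.norm(motors - contact, axis=1)
        w = 1.0 / np.maximum(d, 1e-3) ** falloff      # nearer motors get larger weights
        return w / w.max()                            # normalise so the nearest motor is strongest

    motors = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]])
    print(motor_intensities(np.array([0.02, 0.03, 0.0]), motors))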
  • Robot-enabled tangible virtual assembly with coordinated midair object placement
    Li Zhang, Yizhe Liu, Huidong Bai, Qianyuan Zou, Zhuang Chang, Weiping He, Shuxia Wang, and Mark Billinghurst.

    Zhang, L., Liu, Y., Bai, H., Zou, Q., Chang, Z., He, W., ... & Billinghurst, M. (2023). Robot-enabled tangible virtual assembly with coordinated midair object placement. Robotics and Computer-Integrated Manufacturing, 79, 102434.

    @article{zhang2023robot,
    title={Robot-enabled tangible virtual assembly with coordinated midair object placement},
    author={Zhang, Li and Liu, Yizhe and Bai, Huidong and Zou, Qianyuan and Chang, Zhuang and He, Weiping and Wang, Shuxia and Billinghurst, Mark},
    journal={Robotics and Computer-Integrated Manufacturing},
    volume={79},
    pages={102434},
    year={2023},
    publisher={Elsevier}
    }
  • BeHere: a VR/SAR remote collaboration system based on virtual replicas sharing gesture and avatar in a procedural task
    Peng Wang, Yue Wang, Mark Billinghurst, Huizhen Yang, Peng Xu, and Yanhong Li.

    Wang, P., Wang, Y., Billinghurst, M., Yang, H., Xu, P., & Li, Y. (2023). BeHere: a VR/SAR remote collaboration system based on virtual replicas sharing gesture and avatar in a procedural task. Virtual Reality, 27(2), 1409-1430.

    @article{wang2023behere,
    title={BeHere: a VR/SAR remote collaboration system based on virtual replicas sharing gesture and avatar in a procedural task},
    author={Wang, Peng and Wang, Yue and Billinghurst, Mark and Yang, Huizhen and Xu, Peng and Li, Yanhong},
    journal={Virtual Reality},
    volume={27},
    number={2},
    pages={1409--1430},
    year={2023},
    publisher={Springer}
    }
    In this paper, we focus on the value of remote collaboration using virtual replicas, avatars, and gestures for procedural tasks in industry, and present BeHere, a Virtual Reality (VR)/Spatial Augmented Reality (SAR) remote collaboration system based on 3D virtual replicas and shared gestures and avatars. BeHere enables a remote expert in VR to guide a local worker in real time to complete a procedural task in the real world. On the remote VR side, we construct a 3D virtual environment using virtual replicas, which the user can manipulate intuitively with gestures while seeing their partner’s 3D virtual avatar. On the local side, we use SAR to project instructions onto the real world based on the shared virtual replicas and gestures. We conducted a formal user study to evaluate the prototype system in terms of performance, social presence, workload, ranking, and user preference. We found that combining visual cues from gestures, avatars, and virtual replicas plays a positive role in improving user experience, especially for remote VR users. More significantly, our study provides useful information and important design implications for further research on gesture-, gaze- and avatar-based cues as well as virtual replicas in VR/AR remote collaboration on procedural tasks in industry.
  • RadarHand: a Wrist-Worn Radar for On-Skin Touch-based Proprioceptive Gestures
    Ryo Hajika, Tamil Selvan Gunasekaran, Chloe Dolma Si Ying Haigh, Yun Suen Pai, Eiji Hayashi, Jaime Lien, Danielle Lottridge, and Mark Billinghurst.

    Hajika, R., Gunasekaran, T. S., Haigh, C. D. S. Y., Pai, Y. S., Hayashi, E., Lien, J., ... & Billinghurst, M. (2024). RadarHand: A Wrist-Worn Radar for On-Skin Touch-Based Proprioceptive Gestures. ACM Transactions on Computer-Human Interaction, 31(2), 1-36.

    @article{hajika2024radarhand,
    title={RadarHand: A Wrist-Worn Radar for On-Skin Touch-Based Proprioceptive Gestures},
    author={Hajika, Ryo and Gunasekaran, Tamil Selvan and Haigh, Chloe Dolma Si Ying and Pai, Yun Suen and Hayashi, Eiji and Lien, Jaime and Lottridge, Danielle and Billinghurst, Mark},
    journal={ACM Transactions on Computer-Human Interaction},
    volume={31},
    number={2},
    pages={1--36},
    year={2024},
    publisher={ACM New York, NY, USA}
    }
    We introduce RadarHand, a wrist-worn wearable with millimetre-wave radar that detects on-skin, touch-based proprioceptive hand gestures. Radar sensing is robust, private, and small, penetrates materials, and has low computational cost. We first evaluated the proprioceptive and tactile perception of the back of the hand and found that, in eyes-free and high-cognitive-load situations, tapping on the thumb produces the lowest proprioceptive error of all the finger joints, followed by the index, middle, ring, and pinky fingers. Next, we trained deep-learning models for gesture classification. We introduce two types of gestures based on locations on the back of the hand: generic gestures, which can start and end anywhere on the back of the hand, and discrete gestures, which start and end at specific locations. Out of 27 possible gesture groups, we achieved 92% accuracy for a set of seven gestures and 93% accuracy for a set of eight discrete gestures. Finally, we evaluated RadarHand’s real-time performance under two interaction modes: active interaction, where the user initiates input to achieve a desired output, and reactive interaction, where the device initiates the interaction and the user reacts to it. We obtained accuracies of 87% and 74% for active generic and discrete gestures, respectively, and 91% and 81.7% for reactive generic and discrete gestures. We discuss the implications of RadarHand for gesture recognition and directions for future work.
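    As a rough sketch of how a sequence of radar feature frames can be classified into gestures (the paper’s own network architecture is not reproduced here), the PyTorch model below maps a short sequence of per-frame feature vectors to gesture classes. The input shape, layer sizes, and class count are assumptions for illustration.

    # Minimal gesture classifier sketch in PyTorch (assumed shapes, not the paper's model).
    import torch
    import torch.nn as nn

    class GestureNet(nn.Module):
        def __init__(self, n_features=64, n_classes=7):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),              # pool over time
            )
            self.fc = nn.Linear(32, n_classes)

        def forward(self, x):                         # x: (batch, n_features, time_steps)
            return self.fc(self.conv(x).squeeze(-1))

    model = GestureNet()
    logits = model(torch.randn(8, 64, 30))            # 8 radar sequences of 30 frames each
    print(logits.shape)                               # torch.Size([8, 7])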
  • View Types and Visual Communication Cues for Remote Collaboration
    Seungwon Kim, Weidong Huang, Chi-Min Oh, Gun Lee, Mark Billinghurst, and Sang-Joon Lee.

    Kim, S., Huang, W., Oh, C. M., Lee, G., Billinghurst, M., & Lee, S. J. (2023). View types and visual communication cues for remote collaboration. Computers, Materials and Continua.

    @article{kim2023view,
    title={View types and visual communication cues for remote collaboration},
    author={Kim, Seungwon and Huang, Weidong and Oh, Chi-Min and Lee, Gun and Billinghurst, Mark and Lee, Sang-Joon},
    journal={Computers, Materials and Continua},
    year={2023},
    publisher={Computers, Materials and Continua (Tech Science Press)}
    }
    Over the last several years, remote collaboration has been getting more attention in the research community because of the COVID-19 pandemic. In previous studies, researchers have investigated the effect of adding visual communication cues or shared views in collaboration, but there has not been any previous study exploring the influence between them. In this paper, we investigate the influence of view types on the use of visual communication cues. We compared the use of the three visual cues (hand gesture, a pointer with hand gesture, and sketches with hand gesture) across two view types (dependent and independent views), respectively. We conducted a user study, and the results showed that hand gesture and sketches with the hand gesture cues were well matched with the dependent view condition, and using a pointer with the hand gesture cue was suited to the independent view condition. With the dependent view, the hand gesture and sketch cues required less mental effort for collaborative communication, had better usability, provided better message understanding, and increased feeling of co-presence compared to the independent view. Since the dependent view supported the same viewpoint between the remote expert and a local worker, the local worker could easily understand the remote expert’s hand gestures. In contrast, in the independent view case, when they had different viewpoints, it was not easy for the local worker to understand the remote expert’s hand gestures. The sketch cue had a benefit of showing the final position and orientation of the manipulating objects with the dependent view, but this benefit was less obvious in the independent view case (which provided a further view compared to the dependent view) because precise drawing in the sketches was difficult from a distance. On the contrary, a pointer with the hand gesture cue required less mental effort to collaborate, had better usability, provided better message understanding, and an increased feeling of co-presence in the independent view condition than in the dependent view condition. The pointer cue could be used instead of a hand gesture in the independent view condition because the pointer could still show precise pointing information regardless of the view type.
  • How Immersive Virtual Reality Safety Training System Features Impact Learning Outcomes: An Experimental Study of Forklift Training
    Ali Abbas, JoonOh Seo, Seungjun Ahn, Yanfang Luo, Mitchell J. Wyllie, Gun Lee, and Mark Billinghurst.

    Abbas, A., Seo, J., Ahn, S., Luo, Y., Wyllie, M. J., Lee, G., & Billinghurst, M. (2023). How immersive virtual reality safety training system features impact learning outcomes: An experimental study of forklift training. Journal of Management in Engineering, 39(1), 04022068.

    @article{abbas2023immersive,
    title={How immersive virtual reality safety training system features impact learning outcomes: An experimental study of forklift training},
    author={Abbas, Ali and Seo, JoonOh and Ahn, Seungjun and Luo, Yanfang and Wyllie, Mitchell J and Lee, Gun and Billinghurst, Mark},
    journal={Journal of Management in Engineering},
    volume={39},
    number={1},
    pages={04022068},
    year={2023},
    publisher={American Society of Civil Engineers}
    }
    Immersive virtual reality (VR)–based training has been widely proposed in different firms to improve the hazard recognition skills of their workforce and change their unsafe behavior. However, little is known about the impact of VR-based training on the user’s behavior and learning. With the use of structural equation modeling (SEM), this study investigated the impact of VR-based training on 60 participants, and the results supported the mediating effect of VR system features on the users’ acquisition of knowledge, behavioral intention, and satisfaction. The results also indicated that the VR system features were a significant antecedent to psychological factors (presence, motivation, enjoyment, and self-efficacy). This suggests that there are two general paths: (1) usability and fidelity (UF)–enjoyment (EJ)–behavioral intention (BI); and (2) UF–EJ–satisfaction (ST), by which VR-based safety training can have a positive impact on the users’ behavior. This study also revealed that the higher level of presence in the VR training environment would not exert a strong influence on users’ behavior. The findings of this study could help to better design VR-based training programs in a cost-effective way and thus could maximize the benefits of VR technology for industry.
  • Point & Portal: A New Action at a Distance Technique For Virtual Reality
    Daniel Ablett, Andrew Cunningham, Gun A. Lee, and Bruce H. Thomas.

    Ablett, D., Cunningham, A., Lee, G. A., & Thomas, B. H. (2023, October). Point & Portal: A New Action at a Distance Technique For Virtual Reality. In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 119-128). IEEE.

    @inproceedings{ablett2023point,
    title={Point \& Portal: A New Action at a Distance Technique For Virtual Reality},
    author={Ablett, Daniel and Cunningham, Andrew and Lee, Gun A and Thomas, Bruce H},
    booktitle={2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={119--128},
    year={2023},
    organization={IEEE}
    }
    This paper introduces Point & Portal, a novel Virtual Reality (VR) interaction technique inspired by Point & Teleport. The technique enables users to configure portals using pointing actions, and supports seamless action at a distance and navigation without requiring line of sight. By supporting multiple portals, Point & Portal enables users to create dynamic portal configurations to manage multiple remote tasks. Additionally, this paper introduces Relative Portal Positioning for reliable portal interactions, and the concept of maintaining Level Portals. In a comparative user study, Point & Portal demonstrated significant advantages over the traditional Point & Teleport technique for bringing interaction devices within arm’s reach. In the presence of obstacles, Point & Portal was faster, imposed lower cognitive load, and was preferred by participants. Overall, participants required less physical movement and fewer pointing actions, and reported higher involvement and “good” usability.