Publications

  • 2023
  • Brain activity during cybersickness: a scoping review
    Eunhee Chang, Mark Billinghurst, Byounghyun Yoo

    Chang, E., Billinghurst, M., & Yoo, B. (2023). Brain activity during cybersickness: a scoping review. Virtual Reality, 1-25.

    @article{chang2023brain,
    title={Brain activity during cybersickness: a scoping review},
    author={Chang, Eunhee and Billinghurst, Mark and Yoo, Byounghyun},
    journal={Virtual Reality},
    pages={1--25},
    year={2023},
    publisher={Springer}
    }
    Virtual reality (VR) experiences can cause a range of negative symptoms such as nausea, disorientation, and oculomotor discomfort, which are collectively called cybersickness. Previous studies have attempted to develop a reliable measure for detecting cybersickness instead of using questionnaires, and electroencephalography (EEG) has been regarded as one of the possible alternatives. However, despite the increasing interest, little is known about which brain activities are consistently associated with cybersickness and what types of methods should be adopted for measuring discomfort through brain activity. We conducted a scoping review of 33 experimental studies on cybersickness and EEG found through database searches and screening. To understand these studies, we organized the pipeline of EEG analysis into four steps (preprocessing, feature extraction, feature selection, classification) and surveyed the characteristics of each step. The results showed that most studies performed frequency or time-frequency analysis for EEG feature extraction. Some of the studies applied a classification model to predict cybersickness, reporting accuracies between 79% and 100%. These studies tended to use HMD-based VR with a portable EEG headset for measuring brain activity. Most of the VR content shown consisted of scenic views such as driving or navigating a road, and the age of participants was limited to people in their 20s. This scoping review contributes an overview of cybersickness-related EEG research and establishes directions for future work.
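    As an illustrative aside, the four analysis steps surveyed above (preprocessing, feature extraction, feature selection, classification) can be sketched in a few lines of Python; the sampling rate, frequency bands, window length, classifier, and synthetic data below are assumptions for illustration and do not reproduce any reviewed study.

    import numpy as np
    from scipy.signal import welch
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    FS = 250  # sampling rate in Hz (assumed)
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    def band_power_features(epochs):
        # epochs: (n_epochs, n_channels, n_samples) -> (n_epochs, n_channels * n_bands).
        # Preprocessing (filtering, artifact removal) is assumed to have been done already.
        feats = []
        for epoch in epochs:
            freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)  # feature extraction
            row = []
            for lo, hi in BANDS.values():
                mask = (freqs >= lo) & (freqs < hi)
                row.extend(psd[:, mask].mean(axis=-1))  # mean band power per channel
            feats.append(row)
        return np.asarray(feats)

    # Synthetic stand-in data: 40 two-second epochs from 8 channels, with random labels
    # (0 = comfortable, 1 = cybersick); a real study would use recorded EEG and ratings.
    rng = np.random.default_rng(0)
    X = band_power_features(rng.standard_normal((40, 8, FS * 2)))
    y = rng.integers(0, 2, size=40)

    # Feature selection + classification, evaluated with cross-validation.
    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())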
  • 2022
  • NapWell: An EOG-based Sleep Assistant Exploring the Effects of Virtual Reality on Sleep Onset
    Yun Suen Pai, Marsel L. Bait, Juyoung Lee, Jingjing Xu, Roshan L Peiris, Woontack Woo, Mark Billinghurst & Kai Kunze

    Pai, Y. S., Bait, M. L., Lee, J., Xu, J., Peiris, R. L., Woo, W., ... & Kunze, K. (2022). NapWell: an EOG-based sleep assistant exploring the effects of virtual reality on sleep onset. Virtual Reality, 26(2), 437-451.

    @article{pai2022napwell,
    title={NapWell: an EOG-based sleep assistant exploring the effects of virtual reality on sleep onset},
    author={Pai, Yun Suen and Bait, Marsel L and Lee, Juyoung and Xu, Jingjing and Peiris, Roshan L and Woo, Woontack and Billinghurst, Mark and Kunze, Kai},
    journal={Virtual Reality},
    volume={26},
    number={2},
    pages={437--451},
    year={2022},
    publisher={Springer}
    }
    We present NapWell, a Sleep Assistant that uses virtual reality (VR) to decrease sleep onset latency by providing a realistic imagery distraction prior to sleep onset. Our prototype was built from commercial hardware at relatively low cost, making it replicable for future work and paving the way for more low-cost EOG-VR devices for sleep assistance. We conducted a user study (n=20) comparing four sleep conditions: no device, a sleeping mask, a VR environment of the study room, and a VR environment preferred by the participant. During this period, we recorded the electrooculography (EOG) signal and sleep onset time using a finger tapping task (FTT). We found that VR was able to significantly decrease sleep onset latency. We also developed a machine learning model based on EOG signals that can predict sleep onset with a cross-validated accuracy of 70.03%. The study demonstrates the feasibility of VR as a tool to decrease sleep onset latency, as well as the use of embedded EOG sensors with VR for automatic sleep detection.
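    The paper's 70.03% model details are not reproduced here, but the general recipe of a cross-validated sleep-onset classifier over windowed EOG features can be sketched as follows; the sampling rate, feature choices, and synthetic data are assumptions.

    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    FS = 100  # EOG sampling rate in Hz (assumed)

    def eog_window_features(windows):
        # windows: (n_windows, n_samples) -> a few simple per-window features
        freqs, psd = welch(windows, fs=FS, nperseg=256, axis=-1)
        slow = psd[:, (freqs >= 0.1) & (freqs < 1.0)].mean(axis=-1)    # slow eye movements
        blink = psd[:, (freqs >= 1.0) & (freqs < 10.0)].mean(axis=-1)  # blink-range activity
        return np.column_stack([windows.var(axis=-1), slow, blink])

    # Synthetic stand-in data: 200 thirty-second EOG windows with random awake/asleep labels.
    rng = np.random.default_rng(1)
    windows = rng.standard_normal((200, FS * 30))
    labels = rng.integers(0, 2, size=200)  # 0 = awake, 1 = sleep onset (labels assumed)

    X = eog_window_features(windows)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())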
  • RaITIn: Radar-Based Identification for Tangible Interactions
    Tamil Selvan Gunasekaran, Ryo Hajika, Yun Suen Pai, Eiji Hayashi, Mark Billinghurst

    Gunasekaran, T. S., Hajika, R., Pai, Y. S., Hayashi, E., & Billinghurst, M. (2022, April). RaITIn: Radar-Based Identification for Tangible Interactions. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-7).

    @inproceedings{gunasekaran2022raitin,
    title={RaITIn: Radar-Based Identification for Tangible Interactions},
    author={Gunasekaran, Tamil Selvan and Hajika, Ryo and Pai, Yun Suen and Hayashi, Eiji and Billinghurst, Mark},
    booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--7},
    year={2022}
    }
    Radar is primarily used for applications like tracking and large-scale ranging, and its use for object identification has been rarely explored. This paper introduces RaITIn, a radar-based identification (ID) method for tangible interactions. Unlike conventional radar solutions, RaITIn can track and identify objects on a tabletop scale. We use frequency modulated continuous wave (FMCW) radar sensors to classify different objects embedded with low-cost radar reflectors of varying sizes on a tabletop setup. We also introduce Stackable IDs, where different objects can be stacked and combined to produce unique IDs. The result allows RaITIn to accurately identify visually identical objects embedded with different low-cost reflector configurations. When combined with a radar’s ability for tracking, it creates novel tabletop interaction modalities. We discuss possible applications and areas for future work.
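    As a rough illustration of the underlying sensing principle (not the RaITIn implementation), an FMCW radar resolves reflectors by range through an FFT over the beat signal, and a reflector configuration can then be matched against known templates; the chirp parameters, synthetic beat signals, and nearest-template matching below are all assumptions.

    import numpy as np

    C = 3e8            # speed of light (m/s)
    BANDWIDTH = 4e9    # chirp bandwidth in Hz (assumed)
    CHIRP_TIME = 1e-3  # chirp duration in s (assumed)
    N_SAMPLES = 1024
    FS = N_SAMPLES / CHIRP_TIME
    SLOPE = BANDWIDTH / CHIRP_TIME

    def range_profile(beat_signal):
        # Magnitude of the range FFT of one chirp's beat signal.
        spectrum = np.abs(np.fft.rfft(beat_signal * np.hanning(len(beat_signal))))
        beat_freqs = np.fft.rfftfreq(len(beat_signal), d=1 / FS)
        ranges = beat_freqs * C / (2 * SLOPE)  # beat frequency -> target range
        return ranges, spectrum

    def synth_beat(target_ranges):
        # Synthetic beat signal for point reflectors at the given ranges (metres).
        t = np.arange(N_SAMPLES) / FS
        return sum(np.cos(2 * np.pi * (2 * SLOPE * r / C) * t) for r in target_ranges)

    # "Templates": range profiles of known reflector configurations (e.g. stack heights).
    templates = {cfg_id: range_profile(synth_beat(rs))[1] for cfg_id, rs in
                 {"A": [0.10], "B": [0.10, 0.15], "C": [0.10, 0.15, 0.20]}.items()}

    # Identify an observed profile by nearest template (cosine similarity).
    _, observed = range_profile(synth_beat([0.10, 0.15]))
    best = max(templates, key=lambda k: np.dot(observed, templates[k]) /
               (np.linalg.norm(observed) * np.linalg.norm(templates[k]) + 1e-9))
    print("identified configuration:", best)  # expected: B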
  • Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR
    Ihshan Gumilar, Amit Barde, Prasanth Sasikumar, Mark Billinghurst, Ashkan F. Hayati, Gun Lee, Yuda Munarko, Sanjit Singh, Abdul Momin

    Gumilar, I., Barde, A., Sasikumar, P., Billinghurst, M., Hayati, A. F., Lee, G., ... & Momin, A. (2022, April). Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-7).

    @inproceedings{gumilar2022inter,
    title={Inter-brain Synchrony and Eye Gaze Direction During Collaboration in VR},
    author={Gumilar, Ihshan and Barde, Amit and Sasikumar, Prasanth and Billinghurst, Mark and Hayati, Ashkan F and Lee, Gun and Munarko, Yuda and Singh, Sanjit and Momin, Abdul},
    booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--7},
    year={2022}
    }
    Brain activity sometimes synchronises when people collaborate on real-world tasks. Understanding this process could lead to improvements in face-to-face and remote collaboration. In this paper we report on an experiment exploring the relationship between eye gaze and inter-brain synchrony in Virtual Reality (VR). The experiment recruited pairs who were asked to perform finger-tracking exercises in VR with three different gaze conditions: averted, direct, and natural, while their brain activity was recorded. We found that gaze direction has a significant effect on inter-brain synchrony during collaboration for this task in VR. This shows that representing natural gaze could influence inter-brain synchrony in VR, which may have implications for avatar design for social VR. We discuss implications of our research and possible directions for future work.
  • VR [we are] Training – Workshop on Collaborative Virtual Training for Challenging Contexts
    Georg Regal, Helmut Schrom-Feiertag, Quynh Nguyen, Marco Aust, Markus Murtinger, Dorothé Smit, Manfred Tscheligi, Mark Billinghurst

    Regal, G., Schrom-Feiertag, H., Nguyen, Q., Aust, M., Murtinger, M., Smit, D., ... & Billinghurst, M. (2022, April). VR [we are] Training-Workshop on Collaborative Virtual Training for Challenging Contexts. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-6).

    @inproceedings{regal2022vr,
    title={VR [we are] Training-Workshop on Collaborative Virtual Training for Challenging Contexts},
    author={Regal, Georg and Schrom-Feiertag, Helmut and Nguyen, Quynh and Aust, Marco and Murtinger, Markus and Smit, Doroth{\'e} and Tscheligi, Manfred and Billinghurst, Mark},
    booktitle={CHI Conference on Human Factors in Computing Systems Extended Abstracts},
    pages={1--6},
    year={2022}
    }
    Virtual reality provides great opportunities to simulate various environments and situations as reproducible and controllable training environments. Training is an inherently collaborative effort, with trainees and trainers working together to achieve specific goals. Recently, we have seen considerable effort to use virtual training environments (VTEs) in many demanding training contexts, e.g. police training, medical first responder training, firefighter training etc. For such contexts, trainers and trainees must undertake various roles as supervisors, adaptors, role players, and observers in training, making collaboration complex, but essential for training success. These social and multi-user aspects for collaborative VTEs have received little investigation so far. Therefore, we propose this workshop to discuss the potential and perspectives of VTEs for challenging training settings...
  • A review on communication cues for augmented reality based remote guidance
    Weidong Huang, Mathew Wakefield, Troels Ammitsbøl Rasmussen, Seungwon Kim & Mark Billinghurst

    Huang, W., Wakefield, M., Rasmussen, T. A., Kim, S., & Billinghurst, M. (2022). A review on communication cues for augmented reality based remote guidance. Journal on Multimodal User Interfaces, 1-18.

    @article{huang2022review,
    title={A review on communication cues for augmented reality based remote guidance},
    author={Huang, Weidong and Wakefield, Mathew and Rasmussen, Troels Ammitsb{\o}l and Kim, Seungwon and Billinghurst, Mark},
    journal={Journal on Multimodal User Interfaces},
    pages={1--18},
    year={2022},
    publisher={Springer}
    }
    Remote guidance on physical tasks is a type of collaboration in which a local worker is guided by a remote helper to operate on a set of physical objects. It has many applications in industrial sectors such as remote maintenance, and how to support this type of remote collaboration has been researched for almost three decades. Although a range of different modern computing tools and systems have been proposed, developed and used to support remote guidance in different application scenarios, it is essential to provide communication cues in a shared visual space to achieve common ground for effective communication and collaboration. In this paper, we conduct a selective review to summarize communication cues, approaches that implement the cues, and their effects on augmented reality based remote guidance. We also discuss challenges and propose possible future research and development directions.
  • Seeing is believing: AR-assisted blind area assembly to support hand–eye coordination
    Shuo Feng, Weiping He, Shaohua Zhang & Mark Billinghurst

    Feng, S., He, W., Zhang, S., & Billinghurst, M. (2022). Seeing is believing: AR-assisted blind area assembly to support hand–eye coordination. The International Journal of Advanced Manufacturing Technology, 119(11), 8149-8158.

    @article{feng2022seeing,
    title={Seeing is believing: AR-assisted blind area assembly to support hand--eye coordination},
    author={Feng, Shuo and He, Weiping and Zhang, Shaohua and Billinghurst, Mark},
    journal={The International Journal of Advanced Manufacturing Technology},
    volume={119},
    number={11},
    pages={8149--8158},
    year={2022},
    publisher={Springer}
    }
    The assembly stage is a vital phase in the production process, and there are currently still many manual tasks in assembly operations. One of the challenges of manual assembly is blind area assembly, since visual obstruction by the hands or a part can lead to more errors and lower assembly efficiency. In this study, we developed an AR-assisted assembly system that addresses the occlusion problem. Assembly workers can use the system to achieve comprehensive and precise hand–eye coordination (HEC). Additionally, we designed and conducted a user evaluation experiment to measure the learnability, usability, and mental effort required for the system compared with other HEC modes. Results indicate that hand position is the first visual information that should be considered in blind areas. In addition, the Intact HEC mode can effectively reduce the difficulty of learning and the mental burden in operation, while at the same time improving efficiency.
  • Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals
    Arindam Dey, Amit Barde, Bowen Yuan, Ekansh Sareen, Chelsea Dobbins, Aaron Goh, Gaurav Gupta, Anubha Gupta, Mark Billinghurst

    Dey, A., Barde, A., Yuan, B., Sareen, E., Dobbins, C., Goh, A., ... & Billinghurst, M. (2022). Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals. International Journal of Human-Computer Studies, 160, 102762.

    @article{dey2022effects,
    title={Effects of interacting with facial expressions and controllers in different virtual environments on presence, usability, affect, and neurophysiological signals},
    author={Dey, Arindam and Barde, Amit and Yuan, Bowen and Sareen, Ekansh and Dobbins, Chelsea and Goh, Aaron and Gupta, Gaurav and Gupta, Anubha and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    volume={160},
    pages={102762},
    year={2022},
    publisher={Elsevier}
    }
    Virtual Reality (VR) interfaces provide an immersive medium to interact with the digital world. Most VR interfaces require physical interactions using handheld controllers, but there are other alternative interaction methods that can support different use cases and users. Interaction methods in VR are primarily evaluated based on their usability; however, their differences in neurological and physiological effects remain less investigated. In this paper—along with other traditional qualitative metrics such as presence, affect, and system usability—we explore the neurophysiological effects—brain signals and electrodermal activity—of using an alternative facial expression interaction method to interact with VR interfaces. This form of interaction was also compared with traditional handheld controllers. Three different environments, each with different experiences to interact with, were used—happy (butterfly catching), neutral (object picking), and scary (zombie shooting). Overall, we noticed an effect of interaction methods on gamma activity in the brain and on skin conductance. For some aspects of presence, facial expression outperformed controllers, but controllers were found to be better than facial expressions in terms of usability.
  • HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality
    Li Zhang, Weiping He, Zhiwei Cao, Shuxia Wang, Huidong Bai, Mark Billinghurst

    Zhang, L., He, W., Cao, Z., Wang, S., Bai, H., & Billinghurst, M. (2022). HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality. International Journal of Human–Computer Interaction, 1-15.

    @article{zhang2022hapticproxy,
    title={HapticProxy: Providing Positional Vibrotactile Feedback on a Physical Proxy for Virtual-Real Interaction in Augmented Reality},
    author={Zhang, Li and He, Weiping and Cao, Zhiwei and Wang, Shuxia and Bai, Huidong and Billinghurst, Mark},
    journal={International Journal of Human--Computer Interaction},
    pages={1--15},
    year={2022},
    publisher={Taylor \& Francis}
    }
    Consistent visual and haptic feedback is an important way to improve the user experience when interacting with virtual objects. However, the perception provided in Augmented Reality (AR) mainly comes from visual cues and amorphous tactile feedback. This work explores how to simulate positional vibrotactile feedback (PVF) with multiple vibration motors when colliding with virtual objects in AR. By attaching spatially distributed vibration motors on a physical haptic proxy, users can obtain an augmented collision experience with positional vibration sensations from the contact point with virtual objects. We first developed a prototype system and conducted a user study to optimize the design parameters. Then we investigated the effect of PVF on user performance and experience in a virtual and real object alignment task in the AR environment. We found that this approach could significantly reduce the alignment offset between virtual and physical objects with tolerable task completion time increments. With the PVF cue, participants obtained a more comprehensive perception of the offset direction, more useful information, and a more authentic AR experience.
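    The core idea of driving several motors so the felt vibration appears to come from the virtual contact point can be illustrated with simple inverse-distance weighting; the motor layout, falloff exponent, and normalisation below are illustrative assumptions rather than the paper's calibrated design.

    import numpy as np

    # Motor positions on the proxy surface (metres, proxy's local frame) — assumed layout.
    MOTORS = np.array([
        [0.00, 0.00, 0.00],
        [0.05, 0.00, 0.00],
        [0.00, 0.05, 0.00],
        [0.05, 0.05, 0.00],
    ])

    def motor_intensities(contact_point, falloff=2.0, eps=1e-4):
        # Return per-motor duty cycles in [0, 1] for a virtual contact point:
        # motors closer to the contact vibrate harder, the nearest one at full duty.
        d = np.linalg.norm(MOTORS - np.asarray(contact_point), axis=1)
        w = 1.0 / (d + eps) ** falloff
        return w / w.max()

    # Example: a virtual object touches the proxy near motor 1.
    print(np.round(motor_intensities([0.045, 0.005, 0.0]), 2))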
  • Octopus Sensing: A Python library for human behavior studies
    Nastaran Saffaryazdi, Aidin Gharibnavaz, Mark Billinghurst

    Saffaryazdi, N., Gharibnavaz, A., & Billinghurst, M. (2022). Octopus Sensing: A Python library for human behavior studies. Journal of Open Source Software, 7(71), 4045.

    @article{saffaryazdi2022octopus,
    title={Octopus Sensing: A Python library for human behavior studies},
    author={Saffaryazdi, Nastaran and Gharibnavaz, Aidin and Billinghurst, Mark},
    journal={Journal of Open Source Software},
    volume={7},
    number={71},
    pages={4045},
    year={2022}
    }
    Designing user studies and collecting data is critical to exploring and automatically recognizing human behavior. It is currently possible to use a range of sensors to capture heart rate, brain activity, skin conductance, and a variety of different physiological cues. These data can be combined to provide information about a user’s emotional state, cognitive load, or other factors. However, even when data are collected correctly, synchronizing data from multiple sensors is time-consuming and prone to errors. Failure to record and synchronize data is likely to result in errors in analysis and results, as well as the need to repeat the time-consuming experiments several times. To overcome these challenges, Octopus Sensing facilitates synchronous data acquisition from various sources and provides some utilities for designing user studies, real-time monitoring, and offline data visualization.
    The primary aim of Octopus Sensing is to provide a simple scripting interface so that people with basic or no software development skills can define sensor-based experiment scenarios with less effort.
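    The synchronisation problem the library addresses, broadcasting one timestamped event marker to several sensor streams so recordings can be aligned offline, can be illustrated generically as below. The class and method names here are hypothetical and are not the Octopus Sensing API; see the library's own documentation for the real interface.

    import threading, time

    class MockSensor:
        # Stands in for any device driver (EEG, GSR, camera, ...); hypothetical class.
        def __init__(self, name):
            self.name, self.events = name, []
        def mark(self, label, timestamp):
            # A real driver would write the marker into its data stream.
            self.events.append((timestamp, label))

    class SyncedSession:
        # Fan one trigger out to all registered sensors with a shared timestamp.
        def __init__(self, sensors):
            self.sensors = sensors
        def trigger(self, label):
            ts = time.monotonic()
            threads = [threading.Thread(target=s.mark, args=(label, ts)) for s in self.sensors]
            for t in threads: t.start()
            for t in threads: t.join()

    session = SyncedSession([MockSensor("eeg"), MockSensor("gsr"), MockSensor("webcam")])
    session.trigger("stimulus_onset")  # every stream now shares the same marker timestamp
    for s in session.sensors:
        print(s.name, s.events)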
  • Emotion Recognition in Conversations Using Brain and Physiological Signals
    Nastaran Saffaryazdi, Yenushka Goonesekera, Nafiseh Saffaryazdi, Nebiyou Daniel Hailemariam, Ebasa Girma Temesgen, Suranga Nanayakkara, Elizabeth Broadbent, Mark Billinghurst

    Saffaryazdi, N., Goonesekera, Y., Saffaryazdi, N., Hailemariam, N. D., Temesgen, E. G., Nanayakkara, S., ... & Billinghurst, M. (2022, March). Emotion Recognition in Conversations Using Brain and Physiological Signals. In 27th International Conference on Intelligent User Interfaces (pp. 229-242).

    @inproceedings{saffaryazdi2022emotion,
    title={Emotion recognition in conversations using brain and physiological signals},
    author={Saffaryazdi, Nastaran and Goonesekera, Yenushka and Saffaryazdi, Nafiseh and Hailemariam, Nebiyou Daniel and Temesgen, Ebasa Girma and Nanayakkara, Suranga and Broadbent, Elizabeth and Billinghurst, Mark},
    booktitle={27th International Conference on Intelligent User Interfaces},
    pages={229--242},
    year={2022}
    }
    Emotions are complicated psycho-physiological processes that are related to numerous external and internal changes in the body. They play an essential role in human-human interaction and can be important for human-machine interfaces. Automatically recognizing emotions in conversation could be applied in many application domains like health-care, education, social interactions, entertainment, and more. Facial expressions, speech, and body gestures are primary cues that have been widely used for recognizing emotions in conversation. However, these cues can be ineffective as they cannot reveal underlying emotions when people involuntarily or deliberately conceal their emotions. Researchers have shown that analyzing brain activity and physiological signals can lead to more reliable emotion recognition since they generally cannot be controlled. However, these body responses in emotional situations have been rarely explored in interactive tasks like conversations. This paper explores and discusses the performance and challenges of using brain activity and other physiological signals in recognizing emotions in a face-to-face conversation. We present an experimental setup for stimulating spontaneous emotions using a face-to-face conversation and creating a dataset of the brain and physiological activity. We then describe our analysis strategies for recognizing emotions using Electroencephalography (EEG), Photoplethysmography (PPG), and Galvanic Skin Response (GSR) signals in subject-dependent and subject-independent approaches. Finally, we describe new directions for future research in conversational emotion recognition and the limitations and challenges of our approach.
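    The subject-dependent versus subject-independent analysis strategies mentioned above come down to how cross-validation folds are split; a minimal sketch with scikit-learn is shown below, where the synthetic features stand in for the EEG, PPG, and GSR features used in the paper and the classifier choice is an assumption.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.standard_normal((300, 24))       # 300 trials x 24 fused features (synthetic)
    y = rng.integers(0, 3, size=300)         # e.g. negative / neutral / positive (assumed)
    subjects = np.repeat(np.arange(10), 30)  # 10 participants, 30 trials each

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

    # Subject-dependent: folds may mix trials from the same participant.
    dep = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

    # Subject-independent: every test fold holds out one whole participant.
    indep = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())

    print(f"subject-dependent accuracy:   {dep.mean():.2f}")
    print(f"subject-independent accuracy: {indep.mean():.2f}")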
  • Asymmetric interfaces with stylus and gesture for VR sketching
    Qianyuan Zou; Huidong Bai; Lei Gao; Allan Fowler; Mark Billinghurst

    Zou, Q., Bai, H., Gao, L., Fowler, A., & Billinghurst, M. (2022, March). Asymmetric interfaces with stylus and gesture for VR sketching. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 968-969). IEEE.

    @inproceedings{zou2022asymmetric,
    title={Asymmetric interfaces with stylus and gesture for VR sketching},
    author={Zou, Qianyuan and Bai, Huidong and Gao, Lei and Fowler, Allan and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={968--969},
    year={2022},
    organization={IEEE}
    }
    Virtual Reality (VR) can be used for design and artistic applications. However, traditional symmetrical input devices are not specifically designed as creative tools and may not fully meet artists' needs. In this demonstration, we present a variety of tool-based asymmetric VR interfaces to help artists create artwork with better performance and less effort. These interaction methods allow artists to hold different tools in their hands, such as wearing a data glove on the left hand and holding a stylus in the right hand. We demonstrate this by showing a stylus and glove based sketching interface. We conducted a pilot study showing that most users prefer to create art with different tools in both hands.
  • Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration
    Allison Jing; Gun Lee; Mark Billinghurst

    Jing, A., Lee, G., & Billinghurst, M. (2022, March). Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 250-259). IEEE.

    @inproceedings{jing2022using,
    title={Using Speech to Visualise Shared Gaze Cues in MR Remote Collaboration},
    author={Jing, Allison and Lee, Gun and Billinghurst, Mark},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={250--259},
    year={2022},
    organization={IEEE}
    }
    In this paper, we present a 360° panoramic Mixed Reality (MR) system that visualises shared gaze cues using contextual speech input to improve task coordination. We conducted two studies to evaluate the design of the MR gaze-speech interface, exploring the combinations of visualisation style and context control level. Findings from the first study suggest that an explicit visual form that directly connects the collaborators' shared gaze to the contextual conversation is preferred. The second study indicates that the gaze-speech modality shortens the coordination time to attend to the shared interest, making the communication more natural and the collaboration more effective. Qualitative feedback also suggests that having a constant joint gaze indicator provides a consistent bi-directional view while establishing a sense of co-presence during task collaboration. We discuss the implications for the design of collaborative MR systems and directions for future research.
  • Jamming in MR: Towards Real-Time Music Collaboration in Mixed Reality
    Ruben Schlagowski; Kunal Gupta; Silvan Mertes; Mark Billinghurst; Susanne Metzner; Elisabeth André

    Schlagowski, R., Gupta, K., Mertes, S., Billinghurst, M., Metzner, S., & André, E. (2022, March). Jamming in MR: Towards Real-Time Music Collaboration in Mixed Reality. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW) (pp. 854-855). IEEE.

    @inproceedings{schlagowski2022jamming,
    title={Jamming in MR: towards real-time music collaboration in mixed reality},
    author={Schlagowski, Ruben and Gupta, Kunal and Mertes, Silvan and Billinghurst, Mark and Metzner, Susanne and Andr{\'e}, Elisabeth},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={854--855},
    year={2022},
    organization={IEEE}
    }
    Recent pandemic-related contact restrictions have made it difficult for musicians to meet in person to make music. As a result, there has been an increased demand for applications that enable remote and real-time music collaboration. One desirable goal here is to give musicians a sense of social presence, to make them feel that they are “on site” with their musical partners. We conducted a focus group study to investigate the impact of remote jamming on users' affect. Further, we gathered user requirements for a Mixed Reality system that enables real-time jamming and developed a prototype based on these findings.
  • Supporting Jury Understanding of Expert Evidence in a Virtual Environment
    Carolin Reichherzer; Andrew Cunningham; Jason Barr; Tracey Coleman; Kurt McManus; Dion Sheppard; Scott Coussens; Mark Kohler; Mark Billinghurst; Bruce H. Thomas

    Reichherzer, C., Cunningham, A., Barr, J., Coleman, T., McManus, K., Sheppard, D., ... & Thomas, B. H. (2022, March). Supporting Jury Understanding of Expert Evidence in a Virtual Environment. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 615-624). IEEE.

    @inproceedings{reichherzer2022supporting,
    title={Supporting Jury Understanding of Expert Evidence in a Virtual Environment},
    author={Reichherzer, Carolin and Cunningham, Andrew and Barr, Jason and Coleman, Tracey and McManus, Kurt and Sheppard, Dion and Coussens, Scott and Kohler, Mark and Billinghurst, Mark and Thomas, Bruce H},
    booktitle={2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={615--624},
    year={2022},
    organization={IEEE}
    }
    This work investigates the use of Virtual Reality (VR) to present forensic evidence to the jury in a courtroom trial. The findings of a between-participant user study on comprehension of an expert statement are presented, examining the benefits and issues of using VR compared to traditional courtroom presentation (still images). Participants listened to a forensic scientist explain bloodstain spatter patterns while viewing a mock crime scene either in VR or as still images in video format. Under these conditions, we compared understanding of the expert domain, mental effort and content recall. We found that VR significantly improves the understanding of spatial information and knowledge acquisition. We also identify different patterns of user behaviour depending on the display method. We conclude with suggestions on how to best adapt evidence presentation to VR.
  • The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality
    Allison Jing, Kieran May, Brandon Matthews, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran May, Brandon Matthews, Gun Lee, and Mark Billinghurst. 2022. The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 463 (November 2022), 27 pages. https://doi.org/10.1145/3555564

    @article{10.1145/3555564,
    author = {Jing, Allison and May, Kieran and Matthews, Brandon and Lee, Gun and Billinghurst, Mark},
    title = {The Impact of Sharing Gaze Behaviours in Collaborative Mixed Reality},
    year = {2022},
    issue_date = {November 2022},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    volume = {6},
    number = {CSCW2},
    url = {https://doi.org/10.1145/3555564},
    doi = {10.1145/3555564},
    abstract = {In a remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no gaze condition. Gaze behaviour visualisations enable communication to be less verbally complex therefore lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, showing both collaborator's gaze state, was the preferred condition since it enabled easy identification of shared interests and task progress.},
    journal = {Proc. ACM Hum.-Comput. Interact.},
    month = {nov},
    articleno = {463},
    numpages = {27},
    keywords = {gaze visualization, mixed reality remote collaboration, human-computer interaction}
    }
    In a remote collaboration involving a physical task, visualising gaze behaviours may compensate for other unavailable communication channels. In this paper, we report on a 360° panoramic Mixed Reality (MR) remote collaboration system that shares gaze behaviour visualisations between a local user in Augmented Reality and a remote collaborator in Virtual Reality. We conducted two user studies to evaluate the design of MR gaze interfaces and the effect of gaze behaviour (on/off) and gaze style (bi-/uni-directional). The results indicate that gaze visualisations amplify meaningful joint attention and improve co-presence compared to a no gaze condition. Gaze behaviour visualisations enable communication to be less verbally complex therefore lowering collaborators' cognitive load while improving mutual understanding. Users felt that bi-directional behaviour visualisation, showing both collaborator's gaze state, was the preferred condition since it enabled easy identification of shared interests and task progress.
  • Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration
    Allison Jing; Kunal Gupta; Jeremy McDade; Gun A. Lee; Mark Billinghurst

    A. Jing, K. Gupta, J. McDade, G. A. Lee and M. Billinghurst, "Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration," 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Singapore, Singapore, 2022, pp. 837-846, doi: 10.1109/ISMAR55827.2022.00102.

    @INPROCEEDINGS{9995367,
    author={Jing, Allison and Gupta, Kunal and McDade, Jeremy and Lee, Gun A. and Billinghurst, Mark},
    booktitle={2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    title={Comparing Gaze-Supported Modalities with Empathic Mixed Reality Interfaces in Remote Collaboration},
    year={2022},
    volume={},
    number={},
    pages={837-846},
    doi={10.1109/ISMAR55827.2022.00102}}
    In this paper, we share real-time collaborative gaze behaviours, hand pointing, gesturing, and heart rate visualisations between remote collaborators using a live 360° panoramic-video based Mixed Reality (MR) system. We first ran a pilot study to explore visual designs to combine communication cues with biofeedback (heart rate), aiming to understand user perceptions of empathic collaboration. We then conducted a formal study to investigate the effect of modality (Gaze+Hand, Hand-only) and interface (Near-Gaze, Embodied). The results show that the Gaze+Hand modality in a Near-Gaze interface is significantly better at reducing task load, improving co-presence, enhancing understanding and tightening collaborative behaviours compared to the conventional Embodied hand-only experience. Ranked as the most preferred condition, the Gaze+Hand in Near-Gaze condition is perceived to reduce the need for dividing attention to the collaborator’s physical location, although it feels slightly less natural compared to the embodied visualisations. In addition, the Gaze+Hand conditions also led to more joint attention and less hand pointing to align mutual understanding. Lastly, we provide a design guideline to summarize what we have learned from the studies on the representation between modality, interface, and biofeedback.
  • Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration
    Allison Jing; Kunal Gupta; Jeremy McDade; Gun A. Lee; Mark Billinghurst

    Allison Jing, Kunal Gupta, Jeremy McDade, Gun Lee, and Mark Billinghurst. 2022. Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration. In ACM SIGGRAPH 2022 Posters (SIGGRAPH '22). Association for Computing Machinery, New York, NY, USA, Article 29, 1–2. https://doi.org/10.1145/3532719.3543213

    @inproceedings{10.1145/3532719.3543213,
    author = {Jing, Allison and Gupta, Kunal and McDade, Jeremy and Lee, Gun and Billinghurst, Mark},
    title = {Near-Gaze Visualisations of Empathic Communication Cues in Mixed Reality Collaboration},
    year = {2022},
    isbn = {9781450393614},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3532719.3543213},
    doi = {10.1145/3532719.3543213},
    abstract = {In this poster, we present a live 360° panoramic-video based empathic Mixed Reality (MR) collaboration system that shares various Near-Gaze non-verbal communication cues including gaze, hand pointing, gesturing, and heart rate visualisations in real-time. The preliminary results indicate that the interface with the partner’s communication cues visualised close to the gaze point allows users to focus without dividing attention to the collaborator’s physical body movements yet still effectively communicate. Shared gaze visualisations coupled with deictic languages are primarily used to affirm joint attention and mutual understanding, while hand pointing and gesturing are used as secondary. Our approach provides a new way to help enable effective remote collaboration through varied empathic communication visualisations and modalities which covers different task properties and spatial setups.},
    booktitle = {ACM SIGGRAPH 2022 Posters},
    articleno = {29},
    numpages = {2},
    location = {Vancouver, BC, Canada},
    series = {SIGGRAPH '22}
    }
    In this poster, we present a live 360° panoramic-video based empathic Mixed Reality (MR) collaboration system that shares various Near-Gaze non-verbal communication cues including gaze, hand pointing, gesturing, and heart rate visualisations in real-time. The preliminary results indicate that the interface with the partner’s communication cues visualised close to the gaze point allows users to focus without dividing attention to the collaborator’s physical body movements yet still effectively communicate. Shared gaze visualisations coupled with deictic languages are primarily used to affirm joint attention and mutual understanding, while hand pointing and gesturing are used as secondary. Our approach provides a new way to help enable effective remote collaboration through varied empathic communication visualisations and modalities which covers different task properties and spatial setups.
  • 2021
  • A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning
    Ihshan Gumilar, Ekansh Sareen, Reed Bell, Augustus Stone, Ashkan Hayati, Jingwen Mao, Amit Barde, Anubha Gupta, Arindam Dey, Gun Lee, Mark Billinghurst

    Gumilar, I., Sareen, E., Bell, R., Stone, A., Hayati, A., Mao, J., ... & Billinghurst, M. (2021). A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning. Computers & Graphics, 94, 62-75.

    @article{gumilar2021comparative,
    title={A comparative study on inter-brain synchrony in real and virtual environments using hyperscanning},
    author={Gumilar, Ihshan and Sareen, Ekansh and Bell, Reed and Stone, Augustus and Hayati, Ashkan and Mao, Jingwen and Barde, Amit and Gupta, Anubha and Dey, Arindam and Lee, Gun and others},
    journal={Computers \& Graphics},
    volume={94},
    pages={62--75},
    year={2021},
    publisher={Elsevier}
    }
    Researchers have employed hyperscanning, a technique used to simultaneously record neural activity from multiple participants, in real-world collaborations. However, to the best of our knowledge, there is no study that has used hyperscanning in Virtual Reality (VR). The aims of this study were: firstly, to replicate results of inter-brain synchrony reported in existing literature for a real-world task, and secondly, to explore whether inter-brain synchrony could be elicited in a Virtual Environment (VE). This paper reports on three pilot studies in two different settings (real-world and VR). Paired participants performed two sessions of a finger-pointing exercise separated by a finger-tracking exercise, during which their neural activity was simultaneously recorded by electroencephalography (EEG) hardware. Using Phase Locking Value (PLV) analysis, VR was found to induce similar inter-brain synchrony to that seen in the real world. Further, it was observed that the finger-pointing exercise shared the same neurally activated area in both the real world and VR. Based on these results, we infer that VR can be used to enhance inter-brain synchrony in collaborative tasks carried out in a VE. In particular, we have been able to demonstrate that changing visual perspective in VR is capable of eliciting inter-brain synchrony. This demonstrates that VR could be an exciting platform to explore the phenomenon of inter-brain synchrony further and provide a deeper understanding of the neuroscience of human communication.
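    The Phase Locking Value used above is the magnitude of the mean phase-difference phasor between two band-passed signals, PLV = |mean(exp(i(φ1 − φ2)))|; a minimal sketch using the Hilbert transform is shown below, with the sampling rate, band edges, and synthetic signals as assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 256  # sampling rate in Hz (assumed)

    def plv(x, y, band=(8, 13)):
        # PLV between two single-channel signals in the given frequency band.
        b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    # Two synthetic "participants": a shared 10 Hz component plus independent noise.
    rng = np.random.default_rng(3)
    t = np.arange(FS * 10) / FS
    shared = np.sin(2 * np.pi * 10 * t)
    x = shared + 0.5 * rng.standard_normal(t.size)
    y = shared + 0.5 * rng.standard_normal(t.size)
    print("alpha-band PLV:", round(plv(x, y), 3))  # close to 1 for strongly locked signals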
  • Grand Challenges for Augmented Reality
    Mark Billinghurst

    Billinghurst, M. (2021). Grand Challenges for Augmented Reality. Frontiers in Virtual Reality, 2, 12.

    @article{billinghurst2021grand,
    title={Grand Challenges for Augmented Reality},
    author={Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={12},
    year={2021},
    publisher={Frontiers}
    }
  • Bringing full-featured mobile phone interaction into virtual reality
    H Bai, L Zhang, J Yang, M Billinghurst

    Bai, H., Zhang, L., Yang, J., & Billinghurst, M. (2021). Bringing full-featured mobile phone interaction into virtual reality. Computers & Graphics, 97, 42-53.

    @article{bai2021bringing,
    title={Bringing full-featured mobile phone interaction into virtual reality},
    author={Bai, Huidong and Zhang, Li and Yang, Jing and Billinghurst, Mark},
    journal={Computers \& Graphics},
    volume={97},
    pages={42--53},
    year={2021},
    publisher={Elsevier}
    }
    Virtual Reality (VR) Head-Mounted Display (HMD) technology immerses a user in a computer generated virtual environment. However, a VR HMD also blocks the users’ view of their physical surroundings, and so prevents them from using their mobile phones in a natural manner. In this paper, we present a novel Augmented Virtuality (AV) interface that enables people to naturally interact with a mobile phone in real time in a virtual environment. The system allows the user to wear a VR HMD while seeing his/her 3D hands captured by a depth sensor and rendered in different styles, and enables the user to operate a virtual mobile phone aligned with their real phone. We conducted a formal user study to compare the AV interface with physical touch interaction on user experience in five mobile applications. Participants reported that our system brought the real mobile phone into the virtual world. Unfortunately, the experiment results indicated that using a phone with our AV interfaces in VR was more difficult than the regular smartphone touch interaction, with increased workload and lower system usability, especially for a typing task. We ran a follow-up study to compare different hand visualizations for text typing using the AV interface. Participants felt that a skin-colored hand visualization method provided better usability and immersiveness than other hand rendering styles.
  • SecondSight: A Framework for Cross-Device Augmented Reality Interfaces
    Carolin Reichherzer, Jack Fraser, Damien Constantine Rompapas, Mark Billinghurst.

    Reichherzer, C., Fraser, J., Rompapas, D. C., & Billinghurst, M. (2021, May). SecondSight: A Framework for Cross-Device Augmented Reality Interfaces. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-6).

    @inproceedings{reichherzer2021secondsight,
    title={SecondSight: A Framework for Cross-Device Augmented Reality Interfaces},
    author={Reichherzer, Carolin and Fraser, Jack and Rompapas, Damien Constantine and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--6},
    year={2021}
    }
    This paper describes a modular framework developed to facilitate the design space exploration of cross-device Augmented Reality (AR) interfaces that combine an AR head-mounted display (HMD) with a smartphone. Currently, there is a growing interest in how AR HMDs can be used with smartphones to improve the user’s AR experience. In this work, we describe a framework that enables rapid prototyping and evaluation of an interface. Our system enables different modes of interaction, content placement, and simulated AR HMD field of view to assess which combination is best suited to inform future researchers on design recommendations. We provide examples of how the framework could be used to create sample applications, the types of studies that could be supported, and example results from a simple pilot study.
  • Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration
    Allison Jing, Kieran May, Gun Lee, Mark Billinghurst.

    Jing, A., May, K., Lee, G., & Billinghurst, M. (2021). Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration. Frontiers in Virtual Reality, 2, 79.

    @article{jing2021eye,
    title={Eye See What You See: Exploring How Bi-Directional Augmented Reality Gaze Visualisation Influences Co-Located Symmetric Collaboration},
    author={Jing, Allison and May, Kieran and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Virtual Reality},
    volume={2},
    pages={79},
    year={2021},
    publisher={Frontiers}
    }
    Gaze is one of the predominant communication cues and can provide valuable implicit information such as intention or focus when performing collaborative tasks. However, little research has been done on how virtual gaze cues combining spatial and temporal characteristics impact real-life physical tasks during face to face collaboration. In this study, we explore the effect of showing joint gaze interaction in an Augmented Reality (AR) interface by evaluating three bi-directional collaborative (BDC) gaze visualisations with three levels of gaze behaviours. Using three independent tasks, we found that all bi-directional collaborative BDC visualisations are rated significantly better at representing joint attention and user intention compared to a non-collaborative (NC) condition, and hence are considered more engaging. The Laser Eye condition, spatially embodied with gaze direction, is perceived significantly more effective as it encourages mutual gaze awareness with a relatively low mental effort in a less constrained workspace. In addition, by offering additional virtual representation that compensates for verbal descriptions and hand pointing, BDC gaze visualisations can encourage more conscious use of gaze cues coupled with deictic references during co-located symmetric collaboration. We provide a summary of the lessons learned, limitations of the study, and directions for future research.
  • First Contact‐Take 2: Using XR technology as a bridge between Māori, Pākehā and people from other cultures in Aotearoa, New Zealand
    Mairi Gunn, Mark Billinghurst, Huidong Bai, Prasanth Sasikumar.

    Gunn, M., Billinghurst, M., Bai, H., & Sasikumar, P. (2021). First Contact‐Take 2: Using XR technology as a bridge between Māori, Pākehā and people from other cultures in Aotearoa, New Zealand. Virtual Creativity, 11(1), 67-90.

    @article{gunn2021first,
    title={First Contact-Take 2: Using XR technology as a bridge between M{\=a}ori, P{\=a}keh{\=a} and people from other cultures in Aotearoa, New Zealand},
    author={Gunn, Mairi and Billinghurst, Mark and Bai, Huidong and Sasikumar, Prasanth},
    journal={Virtual Creativity},
    volume={11},
    number={1},
    pages={67--90},
    year={2021},
    publisher={Intellect}
    }
    The art installation common/room explores human‐digital‐human encounter across cultural differences. It comprises a suite of extended reality (XR) experiences that use technology as a bridge to help support human connections with a view to overcoming intercultural discomfort (racism). The installations are exhibited as an informal dining room, where each table hosts a distinct experience designed to bring people together in a playful yet meaningful way. Each experience uses different technologies, including 360° 3D virtual reality (VR) in a headset (common/place), 180° 3D projection (Common Sense) and augmented reality (AR) (Come to the Table! and First Contact ‐ Take 2). This article focuses on the latter, First Contact ‐ Take 2, in which visitors are invited to sit at a dining table, wear an AR head-mounted display and encounter a recorded volumetric representation of an Indigenous Māori woman seated opposite them. She speaks directly to the visitor out of a culture that has refined collective endeavour and relational psychology over millennia. The contextual and methodological framework for this research is international commons scholarship and practice that sits within a set of relationships outlined by the Mātike Mai Report on constitutional transformation for Aotearoa, New Zealand. The goal is to practise and build new relationships between Māori and Tauiwi, including Pākehā.
  • ShowMeAround: Giving Virtual Tours Using Live 360 Video
    Alaeddin Nassani, Li Zhang, Huidong Bai, Mark Billinghurst.

    Nassani, A., Zhang, L., Bai, H., & Billinghurst, M. (2021, May). ShowMeAround: Giving Virtual Tours Using Live 360 Video. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-4).

    @inproceedings{nassani2021showmearound,
    title={ShowMeAround: Giving Virtual Tours Using Live 360 Video},
    author={Nassani, Alaeddin and Zhang, Li and Bai, Huidong and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--4},
    year={2021}
    }
    This demonstration presents ShowMeAround, a video conferencing system designed to allow people to give virtual tours over live 360-video. Using ShowMeAround a host presenter walks through a real space and can live stream a 360-video view to a small group of remote viewers. The ShowMeAround interface has features such as remote pointing and viewpoint awareness to support natural collaboration between the viewers and host presenter. The system also enables sharing of pre-recorded high resolution 360 video and still images to further enhance the virtual tour experience.
  • Manipulating Avatars for Enhanced Communication in Extended Reality
    Jonathon Hart, Thammathip Piumsomboon, Gun A. Lee, Ross T. Smith, Mark Billinghurst.

    Hart, J. D., Piumsomboon, T., Lee, G. A., Smith, R. T., & Billinghurst, M. (2021, May). Manipulating Avatars for Enhanced Communication in Extended Reality. In 2021 IEEE International Conference on Intelligent Reality (ICIR) (pp. 9-16). IEEE.

    @inproceedings{hart2021manipulating,
    title={Manipulating Avatars for Enhanced Communication in Extended Reality},
    author={Hart, Jonathon Derek and Piumsomboon, Thammathip and Lee, Gun A and Smith, Ross T and Billinghurst, Mark},
    booktitle={2021 IEEE International Conference on Intelligent Reality (ICIR)},
    pages={9--16},
    year={2021},
    organization={IEEE}
    }
    Avatars are common virtual representations used in Extended Reality (XR) to support interaction and communication between remote collaborators. Recent advancements in wearable displays provide features such as eye and face-tracking, to enable avatars to express non-verbal cues in XR. The research in this paper investigates the impact of avatar visualization on Social Presence and user’s preference by simulating face tracking in an asymmetric XR remote collaboration between a desktop user and a Virtual Reality (VR) user. Our study was conducted between pairs of participants, one on a laptop computer supporting face tracking and the other being immersed in VR, experiencing different visualization conditions. They worked together to complete an island survival task. We found that the users preferred 3D avatars with facial expressions placed in the scene, compared to 2D screen attached avatars without facial expressions. Participants felt that the presence of the collaborator’s avatar improved overall communication, yet Social Presence was not significantly different between conditions as they mainly relied on audio for communication.
  • Adapting Fitts’ Law and N-Back to Assess Hand Proprioception
    Tamil Gunasekaran, Ryo Hajika, Chloe Dolma Si Ying Haigh, Yun Suen Pai, Danielle Lottridge, Mark Billinghurst.

    Gunasekaran, T. S., Hajika, R., Haigh, C. D. S. Y., Pai, Y. S., Lottridge, D., & Billinghurst, M. (2021, May). Adapting Fitts’ Law and N-Back to Assess Hand Proprioception. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-7).

    @inproceedings{gunasekaran2021adapting,
    title={Adapting Fitts’ Law and N-Back to Assess Hand Proprioception},
    author={Gunasekaran, Tamil Selvan and Hajika, Ryo and Haigh, Chloe Dolma Si Ying and Pai, Yun Suen and Lottridge, Danielle and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--7},
    year={2021}
    }
    Proprioception is the body’s ability to sense the position and movement of each limb, as well as the amount of effort exerted onto or by them. Methods to assess proprioception have been introduced before, yet there is little to no study on assessing the degree of proprioception of body parts for use cases like gesture recognition in wearable computing. We propose the use of Fitts’ law coupled with the N-Back task to evaluate proprioception of the hand. We evaluate 15 distinct points on the back of the hand and assess them using an extended 3D Fitts’ law. Our results show that the index of difficulty of tapping points from thumb to pinky increases gradually with a linear regression factor of 0.1144. Additionally, participants perform the tap before performing the N-Back task. From these results, we discuss the fundamental limitations and suggest how Fitts’ law can be further extended to assess proprioception.
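    The per-point index of difficulty and its linear trend can be sketched as follows; the Shannon formulation ID = log2(D/W + 1), the target width, and the spacing of the 15 points are illustrative assumptions rather than the paper's actual task parameters.

    import numpy as np

    W = 0.010                                # target width: 10 mm (assumed)
    distances = np.linspace(0.02, 0.09, 15)  # start-to-target distances for 15 points, in metres (assumed)

    ids = np.log2(distances / W + 1)         # index of difficulty per target point
    slope, intercept = np.polyfit(np.arange(15), ids, 1)
    print(f"ID ranges from {ids[0]:.2f} to {ids[-1]:.2f} bits; per-point slope = {slope:.4f}")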
  • XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing
    Prasanth Sasikumar, Max Collins, Huidong Bai, Mark Billinghurst.

    Sasikumar, P., Collins, M., Bai, H., & Billinghurst, M. (2021, May). XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-4).

    @inproceedings{sasikumar2021xrtb,
    title={XRTB: A Cross Reality Teleconference Bridge to incorporate 3D interactivity to 2D Teleconferencing},
    author={Sasikumar, Prasanth and Collins, Max and Bai, Huidong and Billinghurst, Mark},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--4},
    year={2021}
    }
    We present XRTeleBridge (XRTB), an application that integrates a Mixed Reality (MR) interface into existing teleconferencing solutions like Zoom. Unlike a conventional webcam, XRTB provides a window into the virtual world to demonstrate and visualize content. Participants can join via webcam or via a head mounted display (HMD) in a Virtual Reality (VR) environment. It enables users to embody 3D avatars with natural gestures and eye gaze. A camera in the virtual environment operates as a video feed to the teleconferencing software. An interface resembling a tablet mirrors the teleconferencing window inside the virtual environment, enabling the participant in the VR environment to see the webcam participants in real time. This allows the presenter to view and interact with other participants seamlessly. To demonstrate the system’s functionalities, we created a virtual chemistry lab environment and presented an example lesson using the virtual space and virtual objects and effects.
  • Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR
    Ihshan Gumilar, Amit Barde, Ashkan Hayati, Mark Billinghurst, Gun Lee, Abdul Momin, Charles Averill, Arindam Dey.

    Gumilar, I., Barde, A., Hayati, A. F., Billinghurst, M., Lee, G., Momin, A., ... & Dey, A. (2021, May). Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-7).

    @inproceedings{gumilar2021connecting,
    title={Connecting the Brains via Virtual Eyes: Eye-Gaze Directions and Inter-brain Synchrony in VR},
    author={Gumilar, Ihshan and Barde, Amit and Hayati, Ashkan F and Billinghurst, Mark and Lee, Gun and Momin, Abdul and Averill, Charles and Dey, Arindam},
    booktitle={Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    pages={1--7},
    year={2021}
    }
    Hyperscanning is an emerging method for measuring two or more brains simultaneously. This method allows researchers to simultaneously record neural activity from two or more people. While this method has been extensively implemented over the last five years in the real world to study inter-brain synchrony, there is little work that has been undertaken on the use of hyperscanning in virtual environments. Preliminary research in the area demonstrates that inter-brain synchrony in virtual environments can be achieved in a manner similar to that seen in the real world. The study described in this paper proposes to further research in the area by studying how non-verbal communication cues in social interactions in virtual environments can affect inter-brain synchrony. In particular, we concentrate on the role eye gaze plays in inter-brain synchrony. The aim of this research is to explore how eye gaze affects inter-brain synchrony between users in a collaborative virtual environment.
  • Tool-based asymmetric interaction for selection in VR.
    Qianyuan Zou; Huidong Bai; Gun Lee; Allan Fowler; Mark Billinghurst

    Zou, Q., Bai, H., Zhang, Y., Lee, G., Fowler, A., & Billinghurst, M. (2021). Tool-based asymmetric interaction for selection in VR. In SIGGRAPH Asia 2021 Technical Communications (pp. 1-4).

    @incollection{zou2021tool,
    title={Tool-based asymmetric interaction for selection in VR},
    author={Zou, Qianyuan and Bai, Huidong and Zhang, Yuewei and Lee, Gun and Fowler, Allan and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2021 Technical Communications},
    pages={1--4},
    year={2021}
    }
    Mainstream Virtual Reality (VR) devices on the market nowadays mostly use symmetric interaction design for input, yet common practice by artists suggests asymmetric interaction using different input tools in each hand could be a better alternative for 3D modeling tasks in VR. In this paper, we explore the performance and usability of a tool-based asymmetric interaction method for a 3D object selection task in VR and compare it with a symmetric interface. The symmetric VR interface uses two identical handheld controllers to select points on a sphere, while the asymmetric interface uses a handheld controller and a stylus. We conducted a user study to compare these two interfaces and found that the asymmetric system was faster, required less workload, and was rated with better usability. We also discuss the opportunities for tool-based asymmetric input to optimize VR art workflows and future research directions.
  • eyemR-Talk system overview: Illustration and demonstration of the system setup, gaze states, and shared gaze indicator interface designs
    eyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues
    Allison Jing, Brandon Matthews, Kieran May, Thomas Clarke, Gun Lee, Mark Billinghurst

    Allison Jing, Brandon Matthews, Kieran May, Thomas Clarke, Gun Lee, and Mark Billinghurst. 2021. EyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues. In SIGGRAPH Asia 2021 Posters (SA '21 Posters). Association for Computing Machinery, New York, NY, USA, Article 16, 1–2. https://doi.org/10.1145/3476124.3488618

    @inproceedings{10.1145/3476124.3488618,
    author = {Jing, Allison and Matthews, Brandon and May, Kieran and Clarke, Thomas and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Talk: Using Speech to Visualise Shared MR Gaze Cues},
    year = {2021},
    isbn = {9781450386876},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3476124.3488618},
    doi = {10.1145/3476124.3488618},
    abstract = {In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.},
    booktitle = {SIGGRAPH Asia 2021 Posters},
    articleno = {16},
    numpages = {2},
    keywords = {Mixed Reality remote collaboration, gaze visualization, speech input},
    location = {Tokyo, Japan},
    series = {SA '21 Posters}
    }
    In this poster we present eyemR-Talk, a Mixed Reality (MR) collaboration system that uses speech input to trigger shared gaze visualisations between remote users. The system uses 360° panoramic video to support collaboration between a local user in the real world in an Augmented Reality (AR) view and a remote collaborator in Virtual Reality (VR). Using specific speech phrases to turn on virtual gaze visualisations, the system enables contextual speech-gaze interaction between collaborators. The overall benefit is to achieve more natural gaze awareness, leading to better communication and more effective collaboration.
  • The eyemR-Vis prototype system, showing an AR user (HoloLens2) sharing gaze cues with a VR user (HTC Vive Pro Eye)
    eyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration
    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, and Mark Billinghurst. 2021. EyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21). Association for Computing Machinery, New York, NY, USA, Article 283, 1–7. https://doi.org/10.1145/3411763.3451844

    @inproceedings{10.1145/3411763.3451844,
    author = {Jing, Allison and May, Kieran William and Naeem, Mahnoor and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Vis: Using Bi-Directional Gaze Behavioural Cues to Improve Mixed Reality Remote Collaboration},
    year = {2021},
    isbn = {9781450380959},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3411763.3451844},
    doi = {10.1145/3411763.3451844},
    abstract = {Gaze is one of the most important communication cues in face-to-face collaboration. However, in remote collaboration, sharing dynamic gaze information is more difficult. In this research, we investigate how sharing gaze behavioural cues can improve remote collaboration in a Mixed Reality (MR) environment. To do this, we developed eyemR-Vis, a 360 panoramic Mixed Reality remote collaboration system that shows gaze behavioural cues as bi-directional spatial virtual visualisations shared between a local host and a remote collaborator. Preliminary results from an exploratory study indicate that using virtual cues to visualise gaze behaviour has the potential to increase co-presence, improve gaze awareness, encourage collaboration, and is inclined to be less physically demanding or mentally distracting.},
    booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    articleno = {283},
    numpages = {7},
    keywords = {Human-Computer Interaction, Gaze Visualisation, Mixed Reality Remote Collaboration, CSCW},
    location = {Yokohama, Japan},
    series = {CHI EA '21}
    }
    Gaze is one of the most important communication cues in face-to-face collaboration. However, in remote collaboration, sharing dynamic gaze information is more difficult. In this research, we investigate how sharing gaze behavioural cues can improve remote collaboration in a Mixed Reality (MR) environment. To do this, we developed eyemR-Vis, a 360 panoramic Mixed Reality remote collaboration system that shows gaze behavioural cues as bi-directional spatial virtual visualisations shared between a local host and a remote collaborator. Preliminary results from an exploratory study indicate that using virtual cues to visualise gaze behaviour has the potential to increase co-presence, improve gaze awareness, encourage collaboration, and is inclined to be less physically demanding or mentally distracting.
  • The eyemR-Vis prototype system, showing an AR user (HoloLens2) sharing gaze cues with a VR user (HTC Vive Pro Eye)
    eyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators
    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, Mark Billinghurst

    Allison Jing, Kieran William May, Mahnoor Naeem, Gun Lee, and Mark Billinghurst. 2021. EyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA '21). Association for Computing Machinery, New York, NY, USA, Article 188, 1–4. https://doi.org/10.1145/3411763.3451545

    @inproceedings{10.1145/3411763.3451545,
    author = {Jing, Allison and May, Kieran William and Naeem, Mahnoor and Lee, Gun and Billinghurst, Mark},
    title = {EyemR-Vis: A Mixed Reality System to Visualise Bi-Directional Gaze Behavioural Cues Between Remote Collaborators},
    year = {2021},
    isbn = {9781450380959},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3411763.3451545},
    doi = {10.1145/3411763.3451545},
    abstract = {This demonstration shows eyemR-Vis, a 360 panoramic Mixed Reality collaboration system that translates gaze behavioural cues to bi-directional visualisations between a local host (AR) and a remote collaborator (VR). The system is designed to share dynamic gaze behavioural cues as bi-directional spatial virtual visualisations between a local host and a remote collaborator. This enables richer communication of gaze through four visualisation techniques: browse, focus, mutual-gaze, and fixated circle-map. Additionally, our system supports simple bi-directional avatar interaction as well as panoramic video zoom. This makes interaction in the normally constrained remote task space more flexible and relatively natural. By showing visual communication cues that are physically inaccessible in the remote task space through reallocating and visualising the existing ones, our system aims to provide a more engaging and effective remote collaboration experience.},
    booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
    articleno = {188},
    numpages = {4},
    keywords = {Gaze Visualisation, Human-Computer Interaction, Mixed Reality Remote Collaboration, CSCW},
    location = {Yokohama, Japan},
    series = {CHI EA '21}
    }
    This demonstration shows eyemR-Vis, a 360 panoramic Mixed Reality collaboration system that translates gaze behavioural cues to bi-directional visualisations between a local host (AR) and a remote collaborator (VR). The system is designed to share dynamic gaze behavioural cues as bi-directional spatial virtual visualisations between a local host and a remote collaborator. This enables richer communication of gaze through four visualisation techniques: browse, focus, mutual-gaze, and fixated circle-map. Additionally, our system supports simple bi-directional avatar interaction as well as panoramic video zoom. This makes interaction in the normally constrained remote task space more flexible and relatively natural. By showing visual communication cues that are physically inaccessible in the remote task space through reallocating and visualising the existing ones, our system aims to provide a more engaging and effective remote collaboration experience.
  • 2020
  • Time to Get Personal: Individualised Virtual Reality for Mental Health
    Nilufar Baghaei, Lehan Stemmet, Andrej Hlasnik, Konstantin Emanov, Sylvia Hach, John A. Naslund, Mark Billinghurst, Imran Khaliq, Hai-Ning Liang

    Nilufar Baghaei, Lehan Stemmet, Andrej Hlasnik, Konstantin Emanov, Sylvia Hach, John A. Naslund, Mark Billinghurst, Imran Khaliq, and Hai-Ning Liang. 2020. Time to Get Personal: Individualised Virtual Reality for Mental Health. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–9. DOI:https://doi.org/10.1145/3334480.3382932

    @inproceedings{baghaei2020time,
    title={Time to Get Personal: Individualised Virtual Reality for Mental Health},
    author={Baghaei, Nilufar and Stemmet, Lehan and Hlasnik, Andrej and Emanov, Konstantin and Hach, Sylvia and Naslund, John A and Billinghurst, Mark and Khaliq, Imran and Liang, Hai-Ning},
    booktitle={Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--9},
    year={2020}
    }
    Mental health conditions pose a major challenge to healthcare providers and society at large. Early intervention can have a significant positive impact on a person's prognosis, and is particularly important in improving mental health outcomes and functioning for young people. Virtual Reality (VR) in mental health is an emerging and innovative field. Recent studies support the use of VR technology in the treatment of anxiety, phobias, eating disorders, addiction, and pain management. However, there is little research on using VR for the support, treatment, and prevention of depression, a field that is very much emerging. There is also very little work on offering individualised VR experiences to users with mental health issues. This paper proposes iVR, a novel individualised VR system for improving users' self-compassion and, in the long run, their positive mental health. We describe the concept, design, architecture and implementation of iVR and outline future work. We believe this contribution will pave the way for large-scale efficacy testing, clinical use, and potentially cost-effective delivery of VR technology for mental health therapy in the future.
  • A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing
    Huidong Bai, Prasanth Sasikumar, Jing Yang, Mark Billinghurst

    Huidong Bai, Prasanth Sasikumar, Jing Yang, and Mark Billinghurst. 2020. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376550

    @inproceedings{bai2020user,
    title={A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing},
    author={Bai, Huidong and Sasikumar, Prasanth and Yang, Jing and Billinghurst, Mark},
    booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--13},
    year={2020}
    }
    Supporting natural communication cues is critical for people to work together remotely and face-to-face. In this paper we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of his/her surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert to the local worker could affect the overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone. The combined cues were also rated significantly higher than gaze alone in terms of ease of conveying spatial actions.
  • OmniGlobeVR: A Collaborative 360° Communication System for VR
    Zhengqing Li, Liwei Chan, Theophilus Teo, Hideki Koike

    Zhengqing Li, Liwei Chan, Theophilus Teo, and Hideki Koike. 2020. OmniGlobeVR: A Collaborative 360° Communication System for VR. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. DOI:https://doi.org/10.1145/3334480.3382869

    @inproceedings{li2020omniglobevr,
    title={OmniGlobeVR: A Collaborative 360 Communication System for VR},
    author={Li, Zhengqing and Chan, Liwei and Teo, Theophilus and Koike, Hideki},
    booktitle={Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--8},
    year={2020}
    }
    In this paper, we propose OmniGlobeVR, a novel collaboration tool based on an asymmetric cooperation system that supports communication and cooperation between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platforms. OmniGlobeVR allows designer(s) to access the content of a VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. Furthermore, a shared gaze awareness cue is designed to enhance communication between the occupant and the designer(s). The system also has a face window feature that allows designer(s) to share their facial expressions and upper body gestures with the occupant in order to exchange and express information in a nonverbal context. Combined, these features allow collaborators on the VR and non-VR platforms to cooperate, while allowing designer(s) to easily access physical assets while working synchronously with the occupant in the VR space.
  • MazeRunVR: An Open Benchmark for VR Locomotion Performance, Preference and Sickness in the Wild
    Kirill Ragozin, Kai Kunze, Karola Marky, Yun Suen Pai

    Kirill Ragozin, Kai Kunze, Karola Marky, and Yun Suen Pai. 2020. MazeRunVR: An Open Benchmark for VR Locomotion Performance, Preference and Sickness in the Wild. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. DOI:https://doi.org/10.1145/3334480.3383035

    @inproceedings{ragozin2020mazerunvr,
    title={MazeRunVR: An Open Benchmark for VR Locomotion Performance, Preference and Sickness in the Wild},
    author={Ragozin, Kirill and Kunze, Kai and Marky, Karola and Pai, Yun Suen},
    booktitle={Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems},
    pages={1--8},
    year={2020}
    }
    Locomotion in virtual reality (VR) is one of the biggest problems for large-scale adoption of VR applications. Yet, to our knowledge, there are few studies conducted in the wild to understand performance metrics and general user preference for different mechanics. In this paper, we present the first steps towards an open framework to create a VR locomotion benchmark. As a viability study, we investigate how well users move in VR when using three different locomotion mechanics. The benchmark was played in over 124 sessions across 10 countries over a period of three weeks. The included prototype locomotion mechanics are arm swing, walk-in-place, and trackpad movement. We found that overall, users performed significantly faster using arm swing and trackpad movement when compared to walk-in-place. For subjective preference, arm swing was significantly preferred over the other two methods. Finally, for induced sickness, walk-in-place was the most sickness-inducing locomotion method.
  • A Constrained Path Redirection for Passive Haptics
    Lili Wang; Zixiang Zhao; Xuefeng Yang; Huidong Bai; Amit Barde; Mark Billinghurst

    L. Wang, Z. Zhao, X. Yang, H. Bai, A. Barde and M. Billinghurst, "A Constrained Path Redirection for Passive Haptics," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 2020, pp. 651-652, doi: 10.1109/VRW50115.2020.00176.

    @inproceedings{wang2020constrained,
    title={A Constrained Path Redirection for Passive Haptics},
    author={Wang, Lili and Zhao, Zixiang and Yang, Xuefeng and Bai, Huidong and Barde, Amit and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={651--652},
    year={2020},
    organization={IEEE}
    }
    Navigation with passive haptic feedback can enhance users’ immersion in virtual environments. We propose a constrained path redirection method to provide users with corresponding haptic feedback at the right time and place. We quantified the method’s practicality for VR exploration in a user study; the results show advantages over the steer-to-center method in terms of presence, and over Steinicke’s method in terms of matching errors and presence.
  • Neurophysiological Effects of Presence in Calm Virtual Environments
    Arindam Dey; Jane Phoon; Shuvodeep Saha; Chelsea Dobbins; Mark Billinghurst

    A. Dey, J. Phoon, S. Saha, C. Dobbins and M. Billinghurst, "Neurophysiological Effects of Presence in Calm Virtual Environments," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 2020, pp. 745-746, doi: 10.1109/VRW50115.2020.00223.

    @inproceedings{dey2020neurophysiological,
    title={Neurophysiological Effects of Presence in Calm Virtual Environments},
    author={Dey, Arindam and Phoon, Jane and Saha, Shuvodeep and Dobbins, Chelsea and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
    pages={745--746},
    year={2020},
    organization={IEEE}
    }
    Presence, the feeling of being there, is an important factor that affects the overall experience of virtual reality. Presence is measured through post-experience subjective questionnaires. While questionnaires are a widely used method in human-based research, they suffer from participant biases, dishonest answers, and fatigue. In this paper, we measured the effects of different levels of presence (high and low) in virtual environments using physiological and neurological signals as an alternative method. Results indicated a significant effect of presence on both physiological and neurological signals.
  • Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality
    Kunal Gupta, Ryo Hajika, Yun Suen Pai, Andreas Duenser, Martin Lochner, Mark Billinghurst

    K. Gupta, R. Hajika, Y. S. Pai, A. Duenser, M. Lochner and M. Billinghurst, "Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 2020, pp. 756-765, doi: 10.1109/VR46266.2020.1581313729558.

    @inproceedings{gupta2020measuring,
    title={Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality},
    author={Gupta, Kunal and Hajika, Ryo and Pai, Yun Suen and Duenser, Andreas and Lochner, Martin and Billinghurst, Mark},
    booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={756--765},
    year={2020},
    organization={IEEE}
    }
    With the advancement of Artificial Intelligence technology for smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users’ trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV), as well as subjective data through questionnaires such as the System Trust Scale (STS), Subjective Mental Effort Questionnaire (SMEQ) and NASA-TLX. We also collected a behavioral measure of trust (congruency of users’ head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these measures, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
  • Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction
    Peng Wang, Xiaoliang Bai, Mark Billinghurst, Shusheng Zhang, Dechuan Han, Mengmeng Sun, Zhuo Wang, Hao Lv, Shu Han

    Wang, Peng, et al. "Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction." International Journal of Human–Computer Interaction (2020): 1-16.

    @article{wang2020haptic,
    title={Haptic Feedback Helps Me? A VR-SAR Remote Collaborative System with Tangible Interaction},
    author={Wang, Peng and Bai, Xiaoliang and Billinghurst, Mark and Zhang, Shusheng and Han, Dechuan and Sun, Mengmeng and Wang, Zhuo and Lv, Hao and Han, Shu},
    journal={International Journal of Human--Computer Interaction},
    pages={1--16},
    year={2020},
    publisher={Taylor \& Francis}
    }
    Research on Augmented Reality (AR)/Mixed Reality (MR) remote collaboration for physical tasks remains a compelling and dynamic area of study. AR systems have been developed which transmit virtual annotations between remote collaborators, but there has been little research on how haptic feedback can also be shared. In this paper, we present a Virtual Reality (VR)-Spatial Augmented Reality (SAR) remote collaborative system that provides haptic feedback with tangible interaction between a local worker and a remote expert helper. Using this system, we conducted a within-subject user study to compare two interfaces for remote collaboration between a local worker and expert helper, one with mid-air free drawing (MFD) and one with tangible physical drawing (TPD). The results showed that there were no significant differences with respect to performance time and operation errors. However, users felt that the TPD interface supporting passive haptic feedback could significantly improve the remote experts’ user experience in VR. Our research provides useful information that paves the way for gesture- and gaze-based multimodal interaction supporting haptic feedback in AR/MR remote collaboration on physical tasks.
  • Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision
    Rory M. S. Clifford, Hendrik Engelbrecht, Sungchul Jung, Hamish Oliver, Mark Billinghurst, Robert W. Lindeman & Simon Hoermann

    Clifford, Rory MS, et al. "Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision." The Visual Computer (2020): 1-14.

    @article{clifford2020aerial,
    title={Aerial firefighter radio communication performance in a virtual training system: radio communication disruptions simulated in VR for Air Attack Supervision},
    author={Clifford, Rory MS and Engelbrecht, Hendrik and Jung, Sungchul and Oliver, Hamish and Billinghurst, Mark and Lindeman, Robert W and Hoermann, Simon},
    journal={The Visual Computer},
    pages={1--14},
    year={2020},
    publisher={Springer}
    }
    Communication disruptions are frequent in aerial firefighting. Information is more easily lost over multiple radio channels, busy with simultaneous conversations. Such a high bandwidth of information throughput creates mental overload. Further problems with hardware or radio signals being disrupted over long distances or by mountainous terrain make it difficult to coordinate firefighting efforts. This creates stressful conditions and requires certain expertise to manage effectively. An experiment was conducted which tested the effects of disrupting users' communication equipment and measured their stress levels as well as communication performance. This research investigated how realistic communication disruptions affect behavioural changes in communication frequency, as well as physiological stress, by means of measuring heart rate variability (HRV). Broken radio transmissions created a greater degree of stress than background chatter alone. Experts maintained a more stable HRV during disruptions than novices, as calculated from the change in HRV during the experiment. From this, we deduce that experts have a better ability to manage stress. We also noted strategies employed by experts, such as relaying, to overcome the radio challenges, as opposed to novices, who often could not find a solution and effectively gave up.
  • An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills
    Emin İbili, Mevlüt Çat, Dmitry Resnyansky, Sami Şahin & Mark Billinghurst

    İbili, Emin, et al. "An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills." International Journal of Mathematical Education in Science and Technology 51.2 (2020): 224-246.

    @article{ibili2020assessment,
    title={An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills},
    author={{\.I}bili, Emin and {\c{C}}at, Mevl{\"u}t and Resnyansky, Dmitry and {\c{S}}ahin, Sami and Billinghurst, Mark},
    journal={International Journal of Mathematical Education in Science and Technology},
    volume={51},
    number={2},
    pages={224--246},
    year={2020},
    publisher={Taylor \& Francis}
    }
    The aim of this research was to examine the effect of Augmented Reality (AR) supported geometry teaching on students’ 3D thinking skills. This research consisted of three steps: (i) developing a 3D thinking ability scale, (ii) designing and developing an AR Geometry Tutorial System (ARGTS), and (iii) implementing and assessing geometry teaching supported with ARGTS. A 3D thinking ability scale was developed and tested with experimental and control groups as a pre- and post-test evaluation. An AR Geometry Tutorial System (ARGTS) and AR teaching materials and environments were developed to enhance 3D thinking skills. A user study with these materials found that geometry teaching supported by ARGTS significantly increased the students’ 3D thinking skills. The increase in average scores for the 'Structuring 3D arrays of cubes' and 'Calculation of the volume and the area of solids' thinking skills was not statistically significant (p > 0.05). For the other 3D geometric thinking subfactors of the scale, a statistically significant difference was found in favour of the experimental group in the pre-test and post-test scores (p < 0.05). The biggest difference was found in the ability to recognize and create 3D shapes (p < 0.01). The results of this research are particularly important for identifying individual differences in 3D thinking skills of secondary school students and creating personalized dynamic intelligent learning environments.
  • Using augmented reality with speech input for non-native children’s language learning
    Che Samihah Che Dalim, Mohd Shahrizal Sunar, Arindam Dey, Mark Billinghurst

    Dalim, Che Samihah Che, et al. "Using augmented reality with speech input for non-native children's language learning." International Journal of Human-Computer Studies 134 (2020): 44-64.

    @article{dalim2020using,
    title={Using augmented reality with speech input for non-native children's language learning},
    author={Dalim, Che Samihah Che and Sunar, Mohd Shahrizal and Dey, Arindam and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    volume={134},
    pages={44--64},
    year={2020},
    publisher={Elsevier}
    }
    Augmented Reality (AR) offers an enhanced learning environment which could potentially influence children's experience and knowledge gain during the language learning process. Teaching English or other foreign languages to children with a different native language can be difficult and requires an effective strategy to avoid boredom and detachment from the learning activities. With the growing number of AR education applications and the increasing pervasiveness of speech recognition, we are keen to understand how these technologies benefit non-native young children in learning English. In this paper, we explore children's experience in terms of knowledge gain and enjoyment when learning through a combination of AR and speech recognition technologies. We developed a prototype AR interface called TeachAR, and ran two experiments to investigate how effective the combination of AR and speech recognition was for learning 1) English terms for colors and shapes, and 2) English words for spatial relationships. We found encouraging results for a novel teaching strategy using these two technologies: compared with a traditional strategy, it not only increased knowledge gain and enjoyment, but also enabled young children to finish certain tasks faster and more easily.
  • A Review of Hyperscanning and Its Use in Virtual Environments
    Amit Barde, Ihshan Gumilar, Ashkan F. Hayati, Arindam Dey, Gun Lee, Mark Billinghurst

    Barde, A., Gumilar, I., Hayati, A. F., Dey, A., Lee, G., & Billinghurst, M. (2020, December). A Review of Hyperscanning and Its Use in Virtual Environments. In Informatics (Vol. 7, No. 4, p. 55). Multidisciplinary Digital Publishing Institute.

    @inproceedings{barde2020review,
    title={A Review of Hyperscanning and Its Use in Virtual Environments},
    author={Barde, Amit and Gumilar, Ihshan and Hayati, Ashkan F and Dey, Arindam and Lee, Gun and Billinghurst, Mark},
    booktitle={Informatics},
    volume={7},
    number={4},
    pages={55},
    year={2020},
    organization={Multidisciplinary Digital Publishing Institute}
    }
    Researchers have employed hyperscanning, a technique used to simultaneously record neural activity from multiple participants, in real-world collaborations. However, to the best of our knowledge, there is no study that has used hyperscanning in Virtual Reality (VR). The aims of this study were: firstly, to replicate results of inter-brain synchrony reported in existing literature for a real-world task; and secondly, to explore whether inter-brain synchrony could be elicited in a Virtual Environment (VE). This paper reports on three pilot studies in two different settings (real world and VR). Paired participants performed two sessions of a finger-pointing exercise separated by a finger-tracking exercise during which their neural activity was simultaneously recorded by electroencephalography (EEG) hardware. Using Phase Locking Value (PLV) analysis, VR was found to induce inter-brain synchrony similar to that seen in the real world. Further, it was observed that the finger-pointing exercise shared the same neurally activated area in both the real world and VR. Based on these results, we infer that VR can be used to enhance inter-brain synchrony in collaborative tasks carried out in a VE. In particular, we have been able to demonstrate that changing visual perspective in VR is capable of eliciting inter-brain synchrony. This demonstrates that VR could be an exciting platform to explore the phenomenon of inter-brain synchrony further and provide a deeper understanding of the neuroscience of human communication.
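    For readers unfamiliar with the measure: this review and several other entries in this list quantify inter-brain synchrony with the Phase Locking Value (PLV). As a quick reference (the notation below is ours, not taken from the paper), the PLV between two participants' band-limited EEG signals with instantaneous phases \varphi_A and \varphi_B over N samples is

        \mathrm{PLV} = \left| \frac{1}{N} \sum_{n=1}^{N} e^{\, i \left( \varphi_A(t_n) - \varphi_B(t_n) \right)} \right|

    which ranges from 0 (no consistent phase relationship) to 1 (perfect phase locking).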
  • NeuralDrum: Perceiving Brain Synchronicity in XR Drumming
    Yun Suen Pai, Ryo Hajika, Kunal Gupta, Prasanth Sasikumar, Mark Billinghurst.

    Pai, Y. S., Hajika, R., Gupta, K., Sasikumar, P., & Billinghurst, M. (2020). NeuralDrum: Perceiving Brain Synchronicity in XR Drumming. In SIGGRAPH Asia 2020 Technical Communications (pp. 1-4).

    @incollection{pai2020neuraldrum,
    title={NeuralDrum: Perceiving Brain Synchronicity in XR Drumming},
    author={Pai, Yun Suen and Hajika, Ryo and Gupta, Kunal and Sasikumar, Prasanth and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2020 Technical Communications},
    pages={1--4},
    year={2020}
    }
    Brain synchronicity is a neurological phenomenon where two or more individuals have their brain activation in phase when performing a shared activity. We present NeuralDrum, an extended reality (XR) drumming experience that allows two players to drum together while their brain signals are simultaneously measured. We calculate the Phase Locking Value (PLV) to determine their brain synchronicity and use this to directly affect their visual and auditory experience in the game, creating a closed feedback loop. In a pilot study, we logged and analysed the users’ brain signals and had them answer a subjective questionnaire regarding their perception of synchronicity with their partner and the overall experience. From the results, we discuss design implications to further improve NeuralDrum and propose methods to integrate brain synchronicity into interactive experiences.
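    For readers who want to experiment with the kind of synchrony measure NeuralDrum describes, the sketch below shows one common way to compute a PLV between two EEG channels (band-pass filtering, Hilbert-transform phase extraction, mean phase-difference vector). It is a minimal illustrative sketch in Python using NumPy/SciPy, not the authors' implementation; the band limits, filter order, and function names are our own choices.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_pass(x, fs, lo, hi, order=4):
            # Zero-phase band-pass filter (defaults below target the 8-13 Hz alpha band).
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def plv(x, y, fs, lo=8.0, hi=13.0):
            # Instantaneous phase of each band-limited signal via the Hilbert transform,
            # then the magnitude of the mean phase-difference vector: close to 1 means the
            # two channels are phase-locked, close to 0 means they are not.
            phase_x = np.angle(hilbert(band_pass(x, fs, lo, hi)))
            phase_y = np.angle(hilbert(band_pass(y, fs, lo, hi)))
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

        # Example: two synthetic 10-second channels sampled at 256 Hz. A game loop could
        # map the returned value directly to a 0-1 visual or audio intensity parameter.
        fs = 256
        t = np.arange(0, 10, 1 / fs)
        a = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
        b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * np.random.randn(t.size)
        print(plv(a, b, fs))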
  • A prototype system with AR-supported TUI being used in a code construction scenario. Registration patterns on tangible code blocks are tracked with an on-board camera from a handheld tablet device, and respond by displaying passive augmentation to indicate potential for interaction (respective bounding box on each block), as well as corresponding augmented annotation depending on proximity to the viewer. Additional text cues are output to the system's HUD (seen below the main workspace area) to guide learner's programming activity within the environment.
    Augmented reality-supported tangible gamification for debugging learning
    Dmitry Resnyansky

    Resnyansky, D. (2020, December). Augmented reality-supported tangible gamification for debugging learning. In 2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 377-383). IEEE.

    @INPROCEEDINGS{9368410,
    author={Resnyansky, Dmitry},
    booktitle={2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    title={Augmented Reality-Supported Tangible Gamification for Debugging Learning},
    year={2020},
    volume={},
    number={},
    pages={377-383},
    doi={10.1109/TALE48869.2020.9368410}}
    Innovative technologies such as Augmented Reality (AR), Virtual Reality, tangible user interfaces (TUIs), computer games, robotics and microprocessors have attracted interest within educational research due to their potential for fostering active learning in and outside the classroom. This paper aims to explore the affordances of AR and TUIs as mediums of instruction to address the problem of teaching and learning text-based computer languages, computer science concepts, and programming skills such as debugging. It presents parallels between technology-supported learning and the active, scaffolded narrative entertainment experience of videogames, and suggests a conceptual framework for the design of learning environments for programming and debugging that uses AR and tangible interaction to support scaffolding.
    Prototype system overview showing a remote expert worker immersed in the local worker’s environment to collaborate
    Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration
    Theophilus Teo, Mitchell Norman, Gun A. Lee, Mark Billinghurst & Matt Adcock

    T. Teo, M. Norman, G. A. Lee, M. Billinghurst and M. Adcock. “Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration.” In: J Multimodal User Interfaces. (JMUI), 2020.

    @article{teo2020exploring,
    title={Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration},
    author={Teo, Theophilus and Norman, Mitchell and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    journal={Journal on Multimodal User Interfaces},
    volume={14},
    pages={373--385},
    year={2020},
    publisher={Springer}
    }
    Remote collaboration using mixed reality (MR) enables two separated workers to collaborate by sharing visual cues. A local worker can share his/her environment with the remote worker for a better contextual understanding. However, prior techniques used either 360 video sharing or a complicated 3D reconstruction configuration, which limits the interactivity and practicality of the system. In this paper we show an interactive and easy-to-configure MR remote collaboration technique enabling a local worker to easily share his/her environment by integrating 360 panorama images into a low-cost 3D reconstructed scene as photo-bubbles and projective textures. This enables the remote worker to visit past scenes in either an immersive 360 panoramic view or an interactive 3D environment. We developed a prototype and conducted a user study comparing the two modes of how 360 panorama images could be used in a remote collaboration system. Results suggested that both photo-bubbles and projective textures can provide high social presence, co-presence and low cognitive load for solving tasks, while each has its own advantages and limitations. For example, photo-bubbles are good for quick navigation inside the 3D environment without depth perception, while projective textures are good for spatial understanding but require physical effort.
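    The projective-texture mode described above ultimately comes down to colouring each reconstructed surface point by looking it up in an equirectangular 360 panorama captured at a known position. As a rough illustration of that lookup only (our own simplified sketch assuming a +Y-up convention and an arbitrary longitude origin, not the authors' code or any particular engine's API):

        import numpy as np

        def panorama_uv(point, capture_pos):
            # Direction from the panorama's capture position to the surface point.
            d = np.asarray(point, dtype=float) - np.asarray(capture_pos, dtype=float)
            d /= np.linalg.norm(d)
            # Longitude and latitude of that direction, remapped to [0, 1] texture
            # coordinates of an equirectangular image (axis conventions vary by engine).
            u = 0.5 + np.arctan2(d[0], d[2]) / (2.0 * np.pi)
            v = 0.5 - np.arcsin(np.clip(d[1], -1.0, 1.0)) / np.pi
            return u, v

        # Example: a point one metre in front of and slightly above the capture position.
        print(panorama_uv([0.0, 0.2, 1.0], [0.0, 0.0, 0.0]))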
  • The OmniGlobeVR enables a VR occupant to communicate and cooperate with multiple designers in the physical world.
    OmniGlobeVR: A Collaborative 360-Degree Communication System for VR
    Zhengqing Li, Theophilus Teo, Liwei Chan, Gun Lee, Matt Adcock, Mark Billinghurst, Hideki Koike

    Z. Li, T. Teo, L. Chan, G. Lee, M. Adcock, M. Billinghurst, H. Koike. “OmniGlobeVR: A Collaborative 360-Degree Communication System for VR”. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (DIS ’20). ACM, 2020.

    @inproceedings{10.1145/3357236.3395429,
    author = {Li, Zhengqing and Teo, Theophilus and Chan, Liwei and Lee, Gun and Adcock, Matt and Billinghurst, Mark and Koike, Hideki},
    title = {OmniGlobeVR: A Collaborative 360-Degree Communication System for VR},
    year = {2020},
    isbn = {9781450369749},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3357236.3395429},
    doi = {10.1145/3357236.3395429},
    abstract = {In this paper, we present a novel collaboration tool, OmniGlobeVR, which is an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platform. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate the OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner's focus. We discuss the design implications of these results and directions for future research.},
    booktitle = {Proceedings of the 2020 ACM Designing Interactive Systems Conference},
    pages = {615–625},
    numpages = {11},
    keywords = {virtual reality, communication, collaboration, mixed reality, spherical display, 360-degree camera},
    location = {Eindhoven, Netherlands},
    series = {DIS '20}
    }
    In this paper, we present a novel collaboration tool, OmniGlobeVR, which is an asymmetric system that supports communication and collaboration between a VR user (occupant) and multiple non-VR users (designers) across the virtual and physical platform. OmniGlobeVR allows designer(s) to explore the VR space from any point of view using two view modes: a 360° first-person mode and a third-person mode. In addition, a shared gaze awareness cue is provided to further enhance communication between the occupant and the designer(s). Finally, the system has a face window feature that allows designer(s) to share their facial expressions and upper body view with the occupant for exchanging and expressing information using nonverbal cues. We conducted a user study to evaluate the OmniGlobeVR, comparing three conditions: (1) first-person mode with the face window, (2) first-person mode with a solid window, and (3) third-person mode with the face window. We found that the first-person mode with the face window required significantly less mental effort, and provided better spatial presence, usability, and understanding of the partner's focus. We discuss the design implications of these results and directions for future research.
  • 2019
  • Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects
    E Ibili, M Billinghurst

    Ibili, E., & Billinghurst, M. (2019). Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects. International Journal of Assessment Tools in Education, 6(3), 378-395.

    @article{ibili2019assessing,
    title={Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={International Journal of Assessment Tools in Education},
    volume={6},
    number={3},
    pages={378--395},
    year={2019}
    }
    In this study, the relationship between the usability of a mobile Augmented Reality (AR) tutorial system and cognitive load was examined. In this context, the relationship between perceived usefulness, the perceived ease of use, and the perceived natural interaction factors and intrinsic, extraneous, germane cognitive load were investigated. In addition, the effect of gender on this relationship was investigated. The research results show that there was a strong relationship between the perceived ease of use and the extraneous load in males, and there was a strong relationship between the perceived usefulness and the intrinsic load in females. Both the perceived usefulness and the perceived ease of use had a strong relationship with the germane cognitive load. Moreover, the perceived natural interaction had a strong relationship with the perceived usefulness in females and the perceived ease of use in males. This research will provide significant clues to AR software developers and researchers to help reduce or control cognitive load in the development of AR-based instructional software.
  • Sharing hand gesture and sketch cues in remote collaboration
    W. Huang, S. Kim, M. Billinghurst, L. Alem

    Huang, W., Kim, S., Billinghurst, M., & Alem, L. (2019). Sharing hand gesture and sketch cues in remote collaboration. Journal of Visual Communication and Image Representation, 58, 428-438.

    @article{huang2019sharing,
    title={Sharing hand gesture and sketch cues in remote collaboration},
    author={Huang, Weidong and Kim, Seungwon and Billinghurst, Mark and Alem, Leila},
    journal={Journal of Visual Communication and Image Representation},
    volume={58},
    pages={428--438},
    year={2019},
    publisher={Elsevier}
    }
    Many systems have been developed to support remote guidance, where a local worker manipulates objects under the guidance of a remote expert helper. These systems typically use speech and visual cues between the local worker and the remote helper, where the visual cues could be pointers, hand gestures, or sketches. However, the effects of combining visual cues in remote collaboration have not been fully explored. We conducted a user study comparing remote collaboration with an interface that combined hand gestures and sketching (the HandsInTouch interface) to one that only used hand gestures, when solving two tasks: Lego assembly and repairing a laptop. In the user study, we found that (1) adding sketch cues improved task completion time only in the repairing task, which involved complex object manipulation, but (2) using gestures and sketching together created a higher task load for the user.
  • 2.5 DHANDS: a gesture-based MR remote collaborative platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M., Chen, Y., Lv, H., & Ji, H.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M., ... & Ji, H. (2019). 2.5 DHANDS: a gesture-based MR remote collaborative platform. The International Journal of Advanced Manufacturing Technology, 102(5-8), 1339-1353.

    @article{wang20192,
    title={2.5 DHANDS: a gesture-based MR remote collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Sun, Mengmeng and Chen, Yongxing and Lv, Hao and Ji, Hongyu},
    journal={The International Journal of Advanced Manufacturing Technology},
    volume={102},
    number={5-8},
    pages={1339--1353},
    year={2019},
    publisher={Springer}
    }
    Current remote collaborative systems in manufacturing are mainly based on video-conferencing technology. Their primary aim is to transmit manufacturing process knowledge between remote experts and local workers. However, they do not provide the experts with the same hands-on experience as when synergistically working on site in person. Mixed reality (MR) and increasing network performance have the capacity to enhance the experience and communication between collaborators in geographically distributed locations. In this paper, therefore, we propose a new gesture-based remote collaborative platform using MR technology that enables a remote expert to collaborate with local workers on physical tasks. In addition, we concentrate on collaborative remote assembly as an illustrative use case. The key advantage compared to other remote collaborative MR interfaces is that it projects the remote expert’s gestures into the real worksite to improve performance, co-presence awareness, and the user collaboration experience. We aim to study the effects of sharing the remote expert’s gestures in remote collaboration using a projector-based MR system in manufacturing. Furthermore, we demonstrate the capabilities of our framework on a prototype consisting of a VR HMD, a Leap Motion, and a projector. The prototype system was evaluated in a pilot study comparing it with the POINTER interface (adding AR annotations to the task-space view with a mouse), which is the most popular method used to augment remote collaboration at present. The assessment covered performance, user satisfaction, and user-perceived collaboration quality in terms of interaction and cooperation. Our results demonstrate a clear difference between the POINTER and 2.5DHANDS interfaces in performance time. Additionally, the 2.5DHANDS interface was rated significantly higher than the POINTER interface in terms of awareness of the user’s attention, manipulation, self-confidence, and co-presence.
  • The effects of sharing awareness cues in collaborative mixed reality
    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2019). The effects of sharing awareness cues in collaborative mixed reality. Frontiers in Robotics and AI, 6, 5.

    @article{piumsomboon2019effects,
    title={The effects of sharing awareness cues in collaborative mixed reality},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Robotics and AI},
    volume={6},
    pages={5},
    year={2019}
    }
    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
  • Revisiting collaboration through mixed reality: The evolution of groupware
    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M. (2019). Revisiting collaboration through mixed reality: The evolution of groupware. International Journal of Human-Computer Studies.

    @article{ens2019revisiting,
    title={Revisiting collaboration through mixed reality: The evolution of groupware},
    author={Ens, Barrett and Lanir, Joel and Tang, Anthony and Bateman, Scott and Lee, Gun and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Collaborative Mixed Reality (MR) systems are at a critical point in time as they are soon to become more commonplace. However, MR technology has only recently matured to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus on creating the enabling technology. In parallel, but largely independently, the field of Computer Supported Cooperative Work (CSCW) has focused on the fundamental concerns that underlie human communication and collaboration over the past 30-plus years. Since MR research is now on the brink of moving into the real world, we reflect on three decades of collaborative MR research and try to reconcile it with existing theory from CSCW, to help position MR researchers to pursue fruitful directions for their work. To do this, we review the history of collaborative MR systems, investigating how the common taxonomies and frameworks in CSCW and MR research can be applied to existing work on collaborative MR systems, exploring where they have fallen behind, and look for new ways to describe current trends. Through identifying emergent trends, we suggest future directions for MR, and also find where CSCW researchers can explore new theory that more fully represents the future of working, playing and being with others.
  • WARPING DEIXIS: Distorting Gestures to Enhance Collaboration
    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J.

    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J. (2019, April). WARPING DEIXIS: Distorting Gestures to Enhance Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 608). ACM.

    @inproceedings{sousa2019warping,
    title={WARPING DEIXIS: Distorting Gestures to Enhance Collaboration},
    author={Sousa, Maur{\'\i}cio and dos Anjos, Rafael Kufner and Mendes, Daniel and Billinghurst, Mark and Jorge, Joaquim},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={608},
    year={2019},
    organization={ACM}
    }
    When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer’s perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
  • Getting your game on: Using virtual reality to improve real table tennis skills
    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T.

    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T. (2019). Getting your game on: Using virtual reality to improve real table tennis skills. PloS one, 14(9).

    @article{michalski2019getting,
    title={Getting your game on: Using virtual reality to improve real table tennis skills},
    author={Michalski, Stefan Carlo and Szpak, Ancret and Saredakis, Dimitrios and Ross, Tyler James and Billinghurst, Mark and Loetscher, Tobias},
    journal={PloS one},
    volume={14},
    number={9},
    year={2019},
    publisher={Public Library of Science}
    }
    Background: A key assumption of VR training is that the learned skills and experiences transfer to the real world. Yet, in certain application areas, such as VR sports training, the research testing this assumption is sparse. Design: Real-world table tennis performance was assessed using a mixed-model analysis of variance. The analysis comprised a between-subjects (VR training group vs control group) and a within-subjects (pre- and post-training) factor. Method: Fifty-seven participants (23 females) were assigned to either a VR training group (n = 29) or a no-training control group (n = 28). During VR training, participants were immersed in competitive table tennis matches against an artificial intelligence opponent. An expert table tennis coach evaluated participants on real-world table tennis playing before and after the training phase. Blinded to participants’ group assignment, the expert assessed participants’ backhand, forehand and serving on quantitative aspects (e.g. count of rallies without errors) and quality of skill aspects (e.g. technique and consistency). Results: VR training significantly improved participants’ real-world table tennis performance compared to a no-training control group in both quantitative (p < .001, Cohen’s d = 1.08) and quality of skill assessments (p < .001, Cohen’s d = 1.10). Conclusions: This study adds to a sparse yet expanding literature, demonstrating real-world skill transfer from Virtual Reality in an athletic task.
  • On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M. (2019, April). On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 228). ACM.

    @inproceedings{piumsomboon2019shoulder,
    title={On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction},
    author={Piumsomboon, Thammathip and Lee, Gun A and Irlitti, Andrew and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={228},
    year={2019},
    organization={ACM}
    }
    We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts, comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
  • Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M.

    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M. (2019, April). Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 173). ACM.

    @inproceedings{kim2019evaluating,
    title={Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration},
    author={Kim, Seungwon and Lee, Gun and Huang, Weidong and Kim, Hayun and Woo, Woontack and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={173},
    year={2019},
    organization={ACM}
    }
    Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
  • Using Augmented Reality with Speech Input for Non-Native Children’s Language Learning
    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M.

    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M. (2019). Using Augmented Reality with Speech Input for Non-Native Children's Language Learning. International Journal of Human-Computer Studies.

    @article{dalim2019using,
    title={Using Augmented Reality with Speech Input for Non-Native Children's Language Learning},
    author={Dalim, Che Samihah Che and Sunar, Mohd Shahrizal and Dey, Arindam and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Augmented Reality (AR) offers an enhanced learning environment which could potentially influence children's experience and knowledge gain during the language learning process. Teaching English or other foreign languages to children with a different native language can be difficult and requires an effective strategy to avoid boredom and detachment from the learning activities. With the growing number of AR education applications and the increasing pervasiveness of speech recognition, we are keen to understand how these technologies benefit non-native young children in learning English. In this paper, we explore children's experience in terms of knowledge gain and enjoyment when learning through a combination of AR and speech recognition technologies. We developed a prototype AR interface called TeachAR, and ran two experiments to investigate how effective the combination of AR and speech recognition was for learning 1) English terms for colors and shapes, and 2) English words for spatial relationships. We found encouraging results from creating a novel teaching strategy using these two technologies: not only did it increase knowledge gain and enjoyment compared with a traditional strategy, but it also enabled young children to finish certain tasks faster and more easily.
  • Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System
    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J.

    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J. (2019, July). Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System. In 2019 23rd International Conference in Information Visualization–Part II (pp. 86-91). IEEE.

    @inproceedings{kim2019sharing,
    title={Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Gun and Norman, Mitchell and Huang, Weidong and He, Jian},
    booktitle={2019 23rd International Conference in Information Visualization--Part II},
    pages={86--91},
    year={2019},
    organization={IEEE}
    }
    In this paper, we explore the effect of showing a remote partner close to the user's gaze point in a teleconferencing system. We implemented a gaze-following function in a teleconferencing system and investigated whether this improves the user's feeling of emotional interdependence. We developed a prototype system that shows a remote partner close to the user's current gaze point and conducted a user study comparing it to a condition displaying the partner fixed in the corner of a screen. Our results showed that showing a partner close to their gaze point helped users feel a higher level of emotional interdependence. In addition, we compared the effect of our method between small and big displays, but there was no significant difference in the users' feeling of emotional interdependence even though the big display was preferred.
  • Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration
    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M.

    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M. (2019, March). Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1187-1188). IEEE.

    @inproceedings{teo2019supporting,
    title={Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration},
    author={Teo, Theophilus and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1187--1188},
    year={2019},
    organization={IEEE}
    }
    We propose enhancing live 360 panorama-based Mixed Reality (MR) remote collaboration by supporting visual annotation cues. Prior work on live 360 panorama-based collaboration used MR visualization to overlay visual cues, such as view frames and virtual hands, yet these were not registered onto the shared physical workspace, and hence had limited accuracy for pointing at or marking objects. Our prototype system uses the spatial mapping and tracking features of an Augmented Reality head-mounted display to show visual annotation cues accurately registered onto the physical environment. We describe the design and implementation details of our prototype system, and discuss how such features could help improve MR remote collaboration.
  • Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality
    Dey, A., Chatburn, A., & Billinghurst, M.

    Dey, A., Chatburn, A., & Billinghurst, M. (2019, March). Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 220-226). IEEE.

    @inproceedings{dey2019exploration,
    title={Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality},
    author={Dey, Arindam and Chatburn, Alex and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={220--226},
    year={2019},
    organization={IEEE}
    }
    Virtual Reality (VR) is effective in various training scenarios across multiple domains, such as education, health and defense. However, most of those applications are not adaptive to the real-time cognitive or subjectively experienced load placed on the trainee. In this paper, we explore a cognitively adaptive training system based on real-time measurement of task related alpha activity in the brain. This measurement was made by a 32-channel mobile Electroencephalography (EEG) system, and was used to adapt the task difficulty to an ideal level which challenged our participants, and thus theoretically induces the best level of performance gains as a result of training. Our system required participants to select target objects in VR and the complexity of the task adapted to the alpha activity in the brain. A total of 14 participants undertook our training and completed 20 levels of increasing complexity. Our study identified significant differences in brain activity in response to increasing levels of task complexity, but response time did not alter as a function of task difficulty. Collectively, we interpret this to indicate the brain's ability to compensate for higher task load without affecting behaviourally measured visuomotor performance.
  • Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation
    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M.

    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M. (2019, August). Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation. In Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY. Audio Engineering Society.

    @inproceedings{barde2019binaural,
    title={Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation},
    author={Barde, Amit and Lindeman, Robert W and Lee, Gun and Billinghurst, Mark},
    booktitle={Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY},
    year={2019},
    organization={Audio Engineering Society}
    }
    Binaural spatialization over a bone conduction headset in the vertical plane was investigated using inexpensive and commercially available hardware and software components. The aim of the study was to assess the acuity of binaurally spatialized presentations in the vertical plane. The level of externalization achievable was also explored. Results demonstrate good correlation with established perceptual traits for headphone-based auditory localization using non-individualized HRTFs, though localization accuracy appears to be significantly worse. A distinct pattern of compressed localization judgments is observed, with participants tending to localize the presented stimulus within an approximately 20° range on either side of the inter-aural plane. Localization error was approximately 21° in the vertical plane. Participants reported a good level of externalization. We have demonstrated that an acceptable level of spatial resolution and externalization is achievable using an inexpensive bone conduction headset and software components.
  • Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., & Chen, Y.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., ... & Chen, Y. (2019, March). Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1219-1220). IEEE.

    @inproceedings{wang2019head,
    title={Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Wang, Shuxia and Zhang, Xiaokun and Du, Jiaxiang and Chen, Yongxing},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1219--1220},
    year={2019},
    organization={IEEE}
    }
    This paper investigates how two different gaze visualizations, the head pointer (HP) and eye gaze (EG), affect table-sized physical tasks in Mixed Reality (MR) remote collaboration. We developed a remote collaborative MR platform which supports sharing of the remote expert's HP and EG. The prototype was evaluated with a user study comparing the two conditions, sharing HP and sharing EG, with respect to their effectiveness in the performance and quality of cooperation. There was a statistically significant difference between the two conditions in performance time, and HP proved to be a good proxy for EG in remote collaboration.
  • The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.
    Ibili, E., & Billinghurst, M.

    Ibili, E., & Billinghurst, M. (2019). The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors. Malaysian Online Journal of Educational Technology, 7(3), 39-56.

    @article{ibili2019relationship,
    title={The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={Malaysian Online Journal of Educational Technology},
    volume={7},
    number={3},
    pages={39--56},
    year={2019},
    publisher={ERIC}
    }
    In this study, the relationship between self-esteem and loneliness in social networks was investigated among students in a guidance and psychological counselling teaching department. The study was conducted during the 2017-2018 academic year with 312 trainee school counsellors from Turkey. For data collection, the Social Network Loneliness Scale and the Self-esteem Scale were employed, and a statistical analysis of the data was conducted. We found a negative relationship between self-esteem and loneliness as experienced in social networks, although neither differs according to sex, age or class level. It was also found that those who use the Internet for communication purposes have high levels of loneliness and self-esteem in social networks. While self-esteem levels among users of the Internet are high, those who use it to read about or watch the news have high levels of loneliness. No relationship was found between self-esteem and social network loneliness levels among those who use the Internet for playing games. Regular sporting habits were found to have a positive effect on self-esteem, but no effect on the level of loneliness in social networks.
  • A comprehensive survey of AR/MR-based co-design in manufacturing
    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., Zhang, X.

    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., ... & Zhang, X. (2019). A comprehensive survey of AR/MR-based co-design in manufacturing. Engineering with Computers, 1-24.

    @article{wang2019comprehensive,
    title={A comprehensive survey of AR/MR-based co-design in manufacturing},
    author={Wang, Peng and Zhang, Shusheng and Billinghurst, Mark and Bai, Xiaoliang and He, Weiping and Wang, Shuxia and Sun, Mengmeng and Zhang, Xu},
    journal={Engineering with Computers},
    pages={1--24},
    year={2019},
    publisher={Springer}
    }
    For more than two decades, Augmented Reality (AR) and Mixed Reality (MR) have received increasing attention from researchers and practitioners in the manufacturing community, because they have applications in many fields, such as product design, training, maintenance, assembly, and other manufacturing operations. However, to the best of our knowledge, there has been no comprehensive review of AR-based co-design in manufacturing. This paper presents a comprehensive survey of existing research, projects, and technical characteristics between 1990 and 2017 in the domain of co-design based on AR technology. More than 90% of these papers were published between 2000 and 2017, and these recent relevant works are discussed at length. The paper provides a comprehensive academic roadmap and useful insight into the state of the art of AR-based co-design systems and developments in manufacturing for future researchers. This work will be useful to researchers who plan to utilize AR as a tool for design research.
  • Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system
    Ibili, E., Resnyansky, D., & Billinghurst, M.

    Ibili, E., Resnyansky, D., & Billinghurst, M. (2019). Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system. Education and Information Technologies, 1-23.

    @article{ibili2019applying,
    title={Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system},
    author={Ibili, Emin and Resnyansky, Dmitry and Billinghurst, Mark},
    journal={Education and Information Technologies},
    pages={1--23},
    year={2019},
    publisher={Springer}
    }
    This paper examines mathematics teachers’ level of acceptance and intention to use the Augmented Reality Geometry Tutorial System (ARGTS), a mobile Augmented Reality (AR) application developed to enhance students’ 3D geometric thinking skills. ARGTS was shared with mathematics teachers, who were then surveyed using the Technology Acceptance Model (TAM) to understand their acceptance of the technology. We also examined the external variables of Anxiety, Social Norms and Satisfaction, as well as the effects of the teachers’ gender, degree of graduate status and number of years of teaching experience on the subscales of the TAM. We found that Perceived Ease of Use (PEU) had a direct effect on Perceived Usefulness (PU), in accordance with the TAM. Both variables together affect Satisfaction (SF); however, PEU had no direct effect on Attitude (AT). In addition, while Social Norms (SN) had a direct effect on PU and PEU, there was no direct effect on Behavioural Intention (BI). Anxiety (ANX) had a direct effect on PEU, but no effect on PU and SF. While there was a direct effect of SF on PEU, no direct effect was found on BI. We explain how the results of this study could help improve the understanding of AR acceptance by teachers and provide important guidelines for AR researchers, developers and practitioners.
  • An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills
    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M.

    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M. (2019). An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills. International Journal of Mathematical Education in Science and Technology, 1-23.

    @article{ibili2019assessment,
    title={An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills},
    author={{\.I}bili, Emin and {\c{C}}at, Mevl{\"u}t and Resnyansky, Dmitry and {\c{S}}ahin, Sami and Billinghurst, Mark},
    journal={International Journal of Mathematical Education in Science and Technology},
    pages={1--23},
    year={2019},
    publisher={Taylor \& Francis}
    }
    The aim of this research was to examine the effect of Augmented Reality (AR) supported geometry teaching on students’ 3D thinking skills. This research consisted of three steps: (i) developing a 3D thinking ability scale, (ii) designing and developing an AR Geometry Tutorial System (ARGTS) and (iii) implementing and assessing geometry teaching supported with ARGTS. A 3D thinking ability scale was developed and tested with experimental and control groups as a pre- and post-test evaluation. An AR Geometry Tutorial System (ARGTS) and AR teaching materials and environments were developed to enhance 3D thinking skills. A user study with these materials found that geometry teaching supported by ARGTS significantly increased the students’ 3D thinking skills. The increase in average scores for the Structuring 3D arrays of cubes and Calculation of the volume and the area of solids thinking skills was not statistically significant (p > 0.05). For the other 3D geometric thinking subfactors of the scale, a statistically significant difference was found in favour of the experimental group between pre-test and post-test scores (p < 0.05). The biggest difference was found in the ability to recognize and create 3D shapes (p < 0.01). The results of this research are particularly important for identifying individual differences in the 3D thinking skills of secondary school students and creating personalized dynamic intelligent learning environments.
  • Sharing Manipulated Heart Rate Feedback in Collaborative Virtual Environments
    Arindam Dey, Hao Chen, Ashkan Hayati, Mark Billinghurst, Robert W. Lindeman

    Dey, A., Chen, H., Hayati, A., Billinghurst, M., & Lindeman, R. W. (2019, October). Sharing manipulated heart rate feedback in collaborative virtual environments. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 248-257). IEEE.

    @inproceedings{dey2019sharing,
    title={Sharing Manipulated Heart Rate Feedback in Collaborative Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Hayati, Ashkan and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={248--257},
    year={2019},
    organization={IEEE}
    }
    We explored the effects of sharing manipulated heart rate feedback in collaborative virtual environments. In our study, we created two types of virtual environments (active and passive) with different levels of interaction and provided three levels of manipulated heart rate feedback (decreased, unchanged, and increased). We measured the effects of the manipulated feedback on Social Presence, affect, physical heart rate, and overall experience. We found a significant effect of the manipulated heart rate feedback on scariness and nervousness. The perception of the collaborator's valence and arousal was also affected, with increased heart rate feedback perceived as higher valence and lower arousal. Increased heart rate feedback also decreased the real heart rate. The type of virtual environment had a significant effect on social presence, heart rate, and affect, with the active environment performing better across these measurements. We discuss the implications of this and directions for future research.
  • Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning
    Amit Barde, Nastaran Saffaryazdi, Pawan Withana, Nakul Patel, Prasanth Sasikumar, Mark Billinghurst

    Barde, A., Saffaryazdi, N., Withana, P., Patel, N., Sasikumar, P., & Billinghurst, M. (2019, October). Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning. In 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 338-339). IEEE.

    @inproceedings{barde2019inter,
    title={Inter-brain connectivity: Comparisons between real and virtual environments using hyperscanning},
    author={Barde, Amit and Saffaryazdi, Nastaran and Withana, Pawan and Patel, Nakul and Sasikumar, Prasanth and Billinghurst, Mark},
    booktitle={2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={338--339},
    year={2019},
    organization={IEEE}
    }
    Inter-brain connectivity between pairs of people was explored during a finger-tracking task in the real world and in Virtual Reality (VR). This was facilitated by the use of a dual EEG set-up that allowed us to use hyperscanning to simultaneously record the neural activity of both participants. We found that similar levels of inter-brain synchrony can be elicited in the real world and VR for the same task. This is the first time that hyperscanning has been used to compare brain activity for the same task performed in real and virtual environments.
  • Using an (a) explanation, (b) example, or (c) hint helper block, a brief summary of the code component, examples of its usage, and a guide as to the succeeding component in the statement may respectively be displayed next to the original syntax through 3D text annotation and graphical representation.
    An AR/TUI-supported Debugging Teaching Environment
    Dmitry Resnyansky, Mark Billinghurst, Arindam Dey

    Resnyansky, D., Billinghurst, M., & Dey, A. (2019, December). An AR/TUI-supported debugging teaching environment. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction (pp. 590-594).

    @inproceedings{10.1145/3369457.3369538,
    author = {Resnyansky, Dmitry and Billinghurst, Mark and Dey, Arindam},
    title = {An AR/TUI-Supported Debugging Teaching Environment},
    year = {2020},
    isbn = {9781450376969},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3369457.3369538},
    doi = {10.1145/3369457.3369538},
    abstract = {This paper presents research on the potential application of Tangible and Augmented Reality (AR) technology to computer science education and the teaching of programming in tertiary settings. An approach to an AR-supported debugging-teaching prototype is outlined, focusing on the design of an AR workspace that uses physical markers to interact with content (code). We describe a prototype which has been designed to actively scaffold the student's development of the two primary abilities necessary for effective debugging: (1) the ability to read not just the code syntax, but to understand the overall program structure behind the code; and (2) the ability to independently recall and apply the new knowledge to produce new, working code structures.},
    booktitle = {Proceedings of the 31st Australian Conference on Human-Computer-Interaction},
    pages = {590–594},
    numpages = {5},
    keywords = {tangible user interface, tertiary education, debugging, Human-computer interaction, augmented reality},
    location = {Fremantle, WA, Australia},
    series = {OzCHI '19}
    }
    This paper presents research on the potential application of Tangible and Augmented Reality (AR) technology to computer science education and the teaching of programming in tertiary settings. An approach to an AR-supported debugging-teaching prototype is outlined, focusing on the design of an AR workspace that uses physical markers to interact with content (code). We describe a prototype which has been designed to actively scaffold the student's development of the two primary abilities necessary for effective debugging: (1) the ability to read not just the code syntax, but to understand the overall program structure behind the code; and (2) the ability to independently recall and apply the new knowledge to produce new, working code structures.
  • 360Drops System Overview
    360Drops: Mixed Reality Remote Collaboration using 360 Panoramas within the 3D Scene
    Theophilus Teo, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, G. A. Lee, M. Billinghurst and M. Adcock. “360Drops: Mixed Reality Remote Collaboration using 360° Panoramas within the 3D Scene.” In: ACM SIGGRAPH Conference and Exhibition on Computer Graphics & Interactive Technologies in Asia. (SA 2019), Brisbane, Australia, 2019.

    @inproceedings{10.1145/3355049.3360517,
    author = {Teo, Theophilus and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {360Drops: Mixed Reality Remote Collaboration Using 360 Panoramas within the 3D Scene},
    year = {2019},
    isbn = {9781450369428},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3355049.3360517},
    doi = {10.1145/3355049.3360517},
    abstract = {Mixed Reality (MR) remote guidance has become a practical solution for collaboration that includes nonverbal communication. This research focuses on integrating different types of MR remote collaboration systems together allowing a new variety for remote collaboration to extend its features and user experience. In this demonstration, we present 360Drops, a MR remote collaboration system that uses 360 panorama images within 3D reconstructed scenes. We introduce a new technique to interact with multiple 360 Panorama Spheres in an immersive 3D reconstructed scene. This allows a remote user to switch between multiple 360 scenes “live/static, past/present,” placed in a 3D reconstructed scene to promote a better understanding of space and interactivity through verbal and nonverbal communication. We present the system features and user experience to the attendees of SIGGRAPH Asia 2019 through a live demonstration.},
    booktitle = {SIGGRAPH Asia 2019 Emerging Technologies},
    pages = {1–2},
    numpages = {2},
    keywords = {Remote Collaboration, Shared Experience, Mixed Reality},
    location = {Brisbane, QLD, Australia},
    series = {SA '19}
    }
    Mixed Reality (MR) remote guidance has become a practical solution for collaboration that includes nonverbal communication. This research focuses on integrating different types of MR remote collaboration systems together allowing a new variety for remote collaboration to extend its features and user experience. In this demonstration, we present 360Drops, a MR remote collaboration system that uses 360 panorama images within 3D reconstructed scenes. We introduce a new technique to interact with multiple 360 Panorama Spheres in an immersive 3D reconstructed scene. This allows a remote user to switch between multiple 360 scenes “live/static, past/present,” placed in a 3D reconstructed scene to promote a better understanding of space and interactivity through verbal and nonverbal communication. We present the system features and user experience to the attendees of SIGGRAPH Asia 2019 through a live demonstration.
  • Prototype system overview
    A Technique for Mixed Reality Remote Collaboration using 360° Panoramas in 3D Reconstructed Scenes
    Theophilus Teo, Ashkan F. Hayati, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, A. F. Hayati, G. A. Lee, M. Billinghurst and M. Adcock. “A Technique for Mixed Reality Remote Collaboration using 360° Panoramas in 3D Reconstructed Scenes.” In: ACM Symposium on Virtual Reality Software and Technology. (VRST), Sydney, Australia, 2019.

    @inproceedings{teo2019technique,
    title={A technique for mixed reality remote collaboration using 360 panoramas in 3d reconstructed scenes},
    author={Teo, Theophilus and Hayati, Ashkan F. and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    booktitle={Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology},
    pages={1--11},
    year={2019}
    }
    Mixed Reality (MR) remote collaboration provides an enhanced immersive experience where a remote user can provide verbal and nonverbal assistance to a local user to increase the efficiency and performance of the collaboration. This is usually achieved by sharing the local user's environment through live 360 video or a 3D scene, and using visual cues to gesture or point at real objects, allowing for better understanding and collaborative task performance. While most prior work used only one of these methods to capture the surrounding environment, there may be situations where users have to choose between 360 panoramas and 3D scene reconstruction to collaborate, as each has unique benefits and limitations. In this paper we designed a prototype system that combines 360 panoramas with a 3D scene to introduce a novel way for users to interact and collaborate with each other. We evaluated the prototype through a user study which compared the usability and performance of our proposed approach to a live 360 video collaborative system, and we found that participants enjoyed using different ways to access the local user's environment, although it took them a longer time to learn to use our system. We also collected subjective feedback for future improvements and provide directions for future research.
  • MR remote collaboration system overview
    Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
    Theophilus Teo, Louise Lawrence, Gun A. Lee, Mark Billinghurst, Matt Adcock

    T. Teo, L. Lawrence, G. A. Lee, M. Billinghurst, and M. Adcock. (2019). “Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction”. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, Paper 201, 14 pages.

    @inproceedings{10.1145/3290605.3300431,
    author = {Teo, Theophilus and Lawrence, Louise and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction},
    year = {2019},
    isbn = {9781450359702},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3290605.3300431},
    doi = {10.1145/3290605.3300431},
    abstract = {Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, by preserving benefits of both systems while reducing drawbacks of each. We developed a hybrid prototype and conducted user study to compare benefits and problems of using 360 or 3D alone to clarify the needs for mixing the two, and also to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement. Participant feedback collected after trying our hybrid system provided directions for improvement.},
    booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages = {1–14},
    numpages = {14},
    keywords = {interaction methods, remote collaboration, 3d scene reconstruction, mixed reality, virtual reality, 360 panorama},
    location = {Glasgow, Scotland Uk},
    series = {CHI '19}
    }
    Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, by preserving benefits of both systems while reducing drawbacks of each. We developed a hybrid prototype and conducted user study to compare benefits and problems of using 360 or 3D alone to clarify the needs for mixing the two, and also to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement. Participant feedback collected after trying our hybrid system provided directions for improvement.
  • Prototype mixed presence collaborative Mixed Reality System
    A Mixed Presence Collaborative Mixed Reality System
    Mitchell Norman, Gun Lee, Ross T. Smith, Mark Billinghurst

    M. Norman, G. Lee, R. T. Smith and M. Billinghurst, "A Mixed Presence Collaborative Mixed Reality System," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 1106-1107, doi: 10.1109/VR.2019.8797966.

    @INPROCEEDINGS{8797966,
    author={Norman, Mitchell and Lee, Gun and Smith, Ross T. and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    title={A Mixed Presence Collaborative Mixed Reality System},
    year={2019},
    volume={},
    number={},
    pages={1106-1107},
    doi={10.1109/VR.2019.8797966}}
    Research has shown that Mixed Presence Groupware (MPG) systems are a valuable collaboration tool. However, research into MPG systems has been limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Augmented Reality (AR) solutions. We present a new system with two local users and one remote user using HMD-based AR interfaces. Our system provides tools allowing users to lay out a room with the help of a remote user. The remote user has access to marker and pointer tools to assist in directing the local users. Feedback collected from several groups of users showed that our system is easy to learn but could benefit from increased accuracy and consistency.
  • 2018
  • Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Jonathon D Hart, Barrett Ens, Robert W Lindeman, Bruce H Thomas, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 46, 13 pages. DOI: https://doi.org/10.1145/3173574.3173620

    @inproceedings{Piumsomboon:2018:MAA:3173574.3173620,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Hart, Jonathon D. and Ens, Barrett and Lindeman, Robert W. and Thomas, Bruce H. and Billinghurst, Mark},
    title = {Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {46:1--46:13},
    articleno = {46},
    numpages = {13},
    url = {http://doi.acm.org/10.1145/3173574.3173620},
    doi = {10.1145/3173574.3173620},
    acmid = {3173620},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, awareness, gaze, gesture, mixed reality, redirected, remote collaboration, remote embodiment, virtual reality},
    }
    We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
  • Pinpointing: Precise Head-and Eye-Based Target Selection for Augmented Reality
    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 81, 14 pages. DOI: https://doi.org/10.1145/3173574.3173655

    @inproceedings{Kyto:2018:PPH:3173574.3173655,
    author = {Kyt\"{o}, Mikko and Ens, Barrett and Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {81:1--81:14},
    articleno = {81},
    numpages = {14},
    url = {http://doi.acm.org/10.1145/3173574.3173655},
    doi = {10.1145/3173574.3173655},
    acmid = {3173655},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, eye tracking, gaze interaction, head-worn display, refinement techniques, target selection},
    }
    Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
  • Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications
    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, Mark Billinghurst

    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, and Mark Billinghurst. 2018. Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW120, 6 pages. DOI: https://doi.org/10.1145/3170427.3188513

    @inproceedings{Ens:2018:CEM:3170427.3188513,
    author = {Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    title = {Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW120:1--LBW120:6},
    articleno = {LBW120},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188513},
    doi = {10.1145/3170427.3188513},
    acmid = {3188513},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, gesture interaction, wearable computing},
    }
    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a HoloLens Augmented Reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans for evaluation of mixed-scale gesture design.
  • Levity: A Virtual Reality System that Responds to Cognitive Load
    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, Mark Billinghurst

    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, and Mark Billinghurst. 2018. Levity: A Virtual Reality System that Responds to Cognitive Load. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW610, 6 pages. DOI: https://doi.org/10.1145/3170427.3188479

    @inproceedings{Gerry:2018:LVR:3170427.3188479,
    author = {Gerry, Lynda and Ens, Barrett and Drogemuller, Adam and Thomas, Bruce and Billinghurst, Mark},
    title = {Levity: A Virtual Reality System That Responds to Cognitive Load},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW610:1--LBW610:6},
    articleno = {LBW610},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188479},
    doi = {10.1145/3170427.3188479},
    acmid = {3188479},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {brain computer interface, cognitive load, virtual reality, visual search task},
    }
    This paper presents the ongoing development of a proof-of-concept, adaptive system that uses a neurocognitive signal to facilitate efficient performance in a Virtual Reality visual search task. The Levity system measures and interactively adjusts the display of a visual array during a visual search task based on the user's level of cognitive load, measured with a 16-channel EEG device. Future developments will validate the system and evaluate its ability to improve search efficiency by detecting and adapting to a user's cognitive demands.
  • Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper D115, 4 pages. DOI: https://doi.org/10.1145/3170427.3186495

    @inproceedings{Piumsomboon:2018:SDM:3170427.3186495,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {D115:1--D115:4},
    articleno = {D115},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3170427.3186495},
    doi = {10.1145/3170427.3186495},
    acmid = {3186495},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, mixed reality, multiple, remote collaboration, remote embodiment, scale, virtual reality},
    }
    We present Snow Dome, a Mixed Reality (MR) remote collaboration application that supports multi-scale interaction for a Virtual Reality (VR) user. We share a local Augmented Reality (AR) user's reconstructed space with a remote VR user, who has the ability to scale themselves up into a giant or down into a miniature for different perspectives and interaction at that scale within the shared space.
  • Filtering Shared Social Data in AR
    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, Robert W Lindeman

    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, and Robert W. Lindeman. 2018. Filtering Shared Social Data in AR. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW100, 6 pages. DOI: https://doi.org/10.1145/3170427.3188609

    @inproceedings{Nassani:2018:FSS:3170427.3188609,
    author = {Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W.},
    title = {Filtering Shared Social Data in AR},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW100:1--LBW100:6},
    articleno = {LBW100},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188609},
    doi = {10.1145/3170427.3188609},
    acmid = {3188609},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {360 panoramas, augmented reality, live video stream, sharing social experiences, virtual avatars},
    }
    We describe a method and a prototype implementation for filtering shared social data (e.g., 360 video) in a wearable Augmented Reality (e.g., HoloLens) application. The data filtering is based on user-viewer relationships. For example, when sharing a 360 video, if the user has an intimate relationship with the viewer, then full fidelity (i.e., the 360 video) of the user's environment is visible. But if the two are strangers, then only a snapshot image is shared. By varying the fidelity of the shared content, the viewer is able to focus more on the data shared by their close relations and differentiate this from other content. The approach also gives the sharing user more control over the fidelity of the content shared with their contacts, for privacy.
  • A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014
    Arindam Dey, Mark Billinghurst, Robert W Lindeman, J Edward Swan II

    Dey A, Billinghurst M, Lindeman RW and Swan JE II (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 5:37. doi: 10.3389/frobt.2018.00037

    @ARTICLE{10.3389/frobt.2018.00037,
    AUTHOR={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W. and Swan, J. Edward},
    TITLE={A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014},
    JOURNAL={Frontiers in Robotics and AI},
    VOLUME={5},
    PAGES={37},
    YEAR={2018},
    URL={https://www.frontiersin.org/article/10.3389/frobt.2018.00037},
    DOI={10.3389/frobt.2018.00037},
    ISSN={2296-9144},
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
  • He who hesitates is lost (… in thoughts over a robot)
    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, Victor Finomore

    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, and Victor Finomore. 2018. He who hesitates is lost (...in thoughts over a robot). In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). ACM, New York, NY, USA, Article 43, 6 pages. DOI: https://doi.org/10.1145/3183654.3183703

    @inproceedings{Wen:2018:HHL:3183654.3183703,
    author = {Wen, James and Stewart, Amanda and Billinghurst, Mark and Dey, Arindam and Tossell, Chad and Finomore, Victor},
    title = {He Who Hesitates is Lost (...In Thoughts over a Robot)},
    booktitle = {Proceedings of the Technology, Mind, and Society},
    series = {TechMindSociety '18},
    year = {2018},
    isbn = {978-1-4503-5420-2},
    location = {Washington, DC, USA},
    pages = {43:1--43:6},
    articleno = {43},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3183654.3183703},
    doi = {10.1145/3183654.3183703},
    acmid = {3183703},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {Anthropomorphism, Empathy, Human Machine Team, Robotics, User Study},
    }
    In a team, the strong bonds that can form between teammates are often seen as critical for reaching peak performance. This perspective may need to be reconsidered, however, if some team members are autonomous robots since establishing bonds with fundamentally inanimate and expendable objects may prove counterproductive. Previous work has measured empathic responses towards robots as singular events at the conclusion of experimental sessions. As relationships extend over long periods of time, sustained empathic behavior towards robots would be of interest. In order to measure user actions that may vary over time and are affected by empathy towards a robot teammate, we created the TEAMMATE simulation system. Our findings suggest that inducing empathy through a back story narrative can significantly change participant decisions in actions that may have consequences for a robot companion over time. The results of our study can have strong implications for the overall performance of human machine teams.
  • A hybrid 2D/3D user Interface for radiological diagnosis
    Veera Bhadra Harish Mandalika, Alexander I Chernoglazov, Mark Billinghurst, Christoph Bartneck, Michael A Hurrell, Niels de Ruiter, Anthony PH Butler, Philip H Butler

    Mandalika, V. B. H., Chernoglazov, A. I., Billinghurst, M., Bartneck, C., Hurrell, M. A., de Ruiter, N., Butler, A. P. H., & Butler, P. H. (2018). A hybrid 2D/3D user interface for radiological diagnosis. Journal of Digital Imaging, 31(1), 56-73.

    @Article{Mandalika2018,
    author="Mandalika, Veera Bhadra Harish
    and Chernoglazov, Alexander I.
    and Billinghurst, Mark
    and Bartneck, Christoph
    and Hurrell, Michael A.
    and Ruiter, Niels de
    and Butler, Anthony P. H.
    and Butler, Philip H.",
    title="A Hybrid 2D/3D User Interface for Radiological Diagnosis",
    journal="Journal of Digital Imaging",
    year="2018",
    month="Feb",
    day="01",
    volume="31",
    number="1",
    pages="56--73",
    abstract="This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.",
    issn="1618-727X",
    doi="10.1007/s10278-017-0002-6",
    url="https://doi.org/10.1007/s10278-017-0002-6"
    }
    This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.
  • The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration
    Seungwon Kim, Mark Billinghurst, Gun Lee

    Kim, S., Billinghurst, M., & Lee, G. (2018). The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration. Computer Supported Cooperative Work (CSCW), 1-39.

    @Article{Kim2018,
    author="Kim, Seungwon
    and Billinghurst, Mark
    and Lee, Gun",
    title="The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration",
    journal="Computer Supported Cooperative Work (CSCW)",
    year="2018",
    month="Jun",
    day="02",
    abstract="This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user's real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants' feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants' end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.",
    issn="1573-7551",
    doi="10.1007/s10606-018-9324-2",
    url="https://doi.org/10.1007/s10606-018-9324-2"
    }
    This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user’s real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants’ feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants’ end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.
  • Robust tracking through the design of high quality fiducial markers: An optimization tool for ARToolKit
    Dawar Khan, Sehat Ullah, Dong-Ming Yan, Ihsan Rabbi, Paul Richard, Thuong Hoang, Mark Billinghurst, Xiaopeng Zhang

    D. Khan et al., "Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit," in IEEE Access, vol. 6, pp. 22421-22433, 2018. doi: 10.1109/ACCESS.2018.2801028

    @ARTICLE{8287815,
    author={D. Khan and S. Ullah and D. M. Yan and I. Rabbi and P. Richard and T. Hoang and M. Billinghurst and X. Zhang},
    journal={IEEE Access},
    title={Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit},
    year={2018},
    volume={6},
    number={},
    pages={22421-22433},
    keywords={augmented reality;image recognition;object tracking;optical tracking;pose estimation;ARToolKit markers;B:W;augmented reality applications;camera tracking;edge sharpness;fiducial marker optimizer;high quality fiducial markers;optimization tool;pose estimation;robust tracking;specialized image processing algorithms;Cameras;Complexity theory;Fiducial markers;Libraries;Robustness;Tools;ARToolKit;Fiducial markers;augmented reality;marker tracking;robust recognition},
    doi={10.1109/ACCESS.2018.2801028},
    ISSN={},
    month={},}
    Fiducial markers are images or landmarks placed in the real environment, typically used for pose estimation and camera tracking. Reliable fiducials are strongly desired for many augmented reality (AR) applications, but currently there is no systematic method to design highly reliable fiducials. In this paper, we present the fiducial marker optimizer (FMO), a tool to optimize the design attributes of ARToolKit markers, including black-to-white (B:W) ratio, edge sharpness, and information complexity, and to reduce inter-marker confusion. For these operations, the FMO provides a user-friendly interface at the front end and specialized image processing algorithms at the back end. We tested manually designed markers and FMO-optimized markers in ARToolKit and found that the latter were more robust. The FMO can be used to design highly reliable fiducials in an easy-to-use fashion, improving the performance of the applications in which they are used.
  • Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration
    Theophilus Teo, Gun A. Lee, Mark Billinghurst, Matt Adcock

    Theophilus Teo, Gun A. Lee, Mark Billinghurst, and Matt Adcock. 2018. Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI '18). ACM, New York, NY, USA, 406-410. DOI: https://doi.org/10.1145/3292147.3292200

    @inproceedings{Teo:2018:HGV:3292147.3292200,
    author = {Teo, Theophilus and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {Hand Gestures and Visual Annotation in Live 360 Panorama-based Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    series = {OzCHI '18},
    year = {2018},
    isbn = {978-1-4503-6188-0},
    location = {Melbourne, Australia},
    pages = {406--410},
    numpages = {5},
    url = {http://doi.acm.org/10.1145/3292147.3292200},
    doi = {10.1145/3292147.3292200},
    acmid = {3292200},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {gesture communication, mixed reality, remote collaboration},
    }
    In this paper, we investigate hand gestures and visual annotation cues overlaid in a live 360 panorama-based Mixed Reality remote collaboration. The prototype system captures live 360 panorama video of the surroundings of a local user and shares it with another person in a remote location. The two users, wearing Augmented Reality or Virtual Reality head-mounted displays, can collaborate using augmented visual communication cues such as virtual hand gestures, ray pointing, and drawing annotations. Our preliminary user evaluation comparing these cues found that using visual annotation cues (ray pointing and drawing annotation) helps local users perform collaborative tasks faster and more easily, with fewer errors and better understanding, compared to using only virtual hand gestures.
  • The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training.
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1-2). IEEE.

    @inproceedings{clifford2018effect,
    title={The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1--2},
    year={2018},
    organization={IEEE}
    }
    Situation Awareness (SA) is an essential skill in Air Attack Supervision (AAS) for aerial based wildfire firefighting. The display types used for Virtual Reality Training Systems (VRTS) afford different visual SA depending on the Field of View (FoV) as well as the sense of presence users can obtain in the virtual environment. We conducted a study with 36 participants to evaluate SA acquisition in three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD) and a 270° cylindrical simulation projection display called the SimPit. We found a significant difference between the HMD and the HDTV, as well as with the SimPit and the HDTV for the three levels of SA.
  • Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017)
    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F.

    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F. (2018). Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017). IEEE transactions on visualization and computer graphics, 24(11), 2947-2962.

    @article{kim2018revisiting,
    title={Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008--2017)},
    author={Kim, Kangsoo and Billinghurst, Mark and Bruder, Gerd and Duh, Henry Been-Lirn and Welch, Gregory F},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2947--2962},
    year={2018},
    publisher={IEEE}
    }
    In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends associated with that time period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period of 2008-2017, in the context of the first ten years. The number of papers for each research topic and their impact by citations were analyzed during the review, revealing a sharp increase in AR evaluation and rendering research. Based on this review we offer some observations related to potential future research areas or trends, which could be helpful to AR researchers and industry members looking ahead.
  • Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment
    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H.

    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H. (2018). Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment. IEEE transactions on visualization and computer graphics, 24(11), 2917-2926.

    @article{reichherzer2018narrative,
    title={Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment},
    author={Reichherzer, Carolin and Cunningham, Andrew and Walsh, James and Kohler, Mark and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2917--2926},
    year={2018},
    publisher={IEEE}
    }
    This paper showcases one way in which virtual reconstruction can be used in a courtroom. The results of a pilot study on narrative and spatial memory are presented in the context of viewing real and virtual copies of a simulated crime scene. Based on current court procedures, three different viewing options were compared: photographs, a real-life visit, and a 3D virtual reconstruction of the scene viewed in a Virtual Reality headset. Participants were also given a written narrative that included the spatial locations of stolen goods and were measured on their ability to recall and understand the spatial relationships of those stolen items. The results suggest that Virtual Reality is more reliable for spatial memory than photographs and that Virtual Reality provides a compromise when physical viewing of a crime scene is not possible. We conclude that Virtual Reality is a promising medium for the court.
  • A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks
    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H.

    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H. (2018). A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks. IEEE transactions on visualization and computer graphics, 24(11), 2846-2856.

    @article{volmer2018comparison,
    title={A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks},
    author={Volmer, Benjamin and Baumeister, James and Von Itzstein, Stewart and Bornkessel-Schlesewsky, Ina and Schlesewsky, Matthias and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2846--2856},
    year={2018},
    publisher={IEEE}
    }
    Previous research has demonstrated that Augmented Reality can reduce a user's task response time and mental effort when completing a procedural task. This paper investigates techniques to improve user performance and reduce mental effort by providing projector-based Spatial Augmented Reality predictive cues for future responses. The objective of the two experiments conducted in this study was to isolate the performance and mental effort differences from several different annotation cueing techniques for simple (Experiment 1) and complex (Experiment 2) button-pressing tasks. Comporting with existing cognitive neuroscience literature on prediction, attentional orienting, and interference, we hypothesized that for both simple procedural tasks and complex search-based tasks, having a visual cue guiding to the next task's location would positively impact performance relative to a baseline, no-cue condition. Additionally, we predicted that direction-based cues would provide a more significant positive impact than target-based cues. The results indicated that providing a line to the next task was the most effective technique for improving the users' task time and mental effort in both the simple and complex tasks.
  • Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface
    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M. (2018). Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface. IEEE transactions on visualization and computer graphics, 24(11), 2974-2982.

    @article{piumsomboon2018superman,
    title={Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface},
    author={Piumsomboon, Thammathip and Lee, Gun A and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2974--2982},
    year={2018},
    publisher={IEEE}
    }
    The advancements in Mixed Reality (MR), Unmanned Aerial Vehicle, and multi-scale collaborative virtual environments have led to new interface opportunities for remote collaboration. This paper explores a novel concept of flying telepresence for multi-scale mixed reality remote collaboration. This work could enable remote collaboration at a larger scale such as building construction. We conducted a user study with three experiments. The first experiment compared two interfaces, static and dynamic IPD, on simulator sickness and body size perception. The second experiment tested the user perception of a virtual object size under three levels of IPD and movement gain manipulation with a fixed eye height in a virtual environment having reduced or rich visual cues. Our last experiment investigated the participant’s body size perception for two levels of manipulation of the IPDs and heights using stereo video footage to simulate a flying telepresence experience. The studies found that manipulating IPDs and eye height influenced the user’s size perception. We present our findings and share the recommendations for designing a multi-scale MR flying telepresence interface.
  • Design considerations for combining augmented reality with intelligent tutors
    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G.

    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G. (2018). Design considerations for combining augmented reality with intelligent tutors. Computers & Graphics, 77, 166-182.

    @article{herbert2018design,
    title={Design considerations for combining augmented reality with intelligent tutors},
    author={Herbert, Bradley and Ens, Barrett and Weerasinghe, Amali and Billinghurst, Mark and Wigley, Grant},
    journal={Computers \& Graphics},
    volume={77},
    pages={166--182},
    year={2018},
    publisher={Elsevier}
    }
    Augmented Reality (AR) overlays virtual objects on the real world in real time and has the potential to enhance education; however, few AR training systems provide personalised learning support. Combining AR with intelligent tutoring systems (ITSs) has the potential to improve training outcomes by providing personalised learner support, such as feedback on the AR environment. This paper reviews the current state of AR training systems combined with ITSs and proposes a series of requirements for combining the two paradigms. In addition, this paper identifies a growing need for further research on the design and implementation of adaptive augmented reality tutors (ARATs), including evaluating ARAT user interfaces and identifying the domains where an ARAT might be most effective.
  • Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression. In 2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good) (pp. 1-5). IEEE.

    @inproceedings{clifford2018development,
    title={Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good)},
    pages={1--5},
    year={2018},
    organization={IEEE}
    }
    Wildfire firefighting is difficult to train for in the real world for a variety of reasons, cost and environmental impact being the major barriers to effective training. Virtual Reality offers greater opportunities to practice crucial skills that are difficult to acquire without experiencing the actual environment. Situation Awareness (SA) is a critical aspect of Air Attack Supervision (AAS): timely decisions need to be made by the AAS based on the information gathered while airborne. The type of display used in virtual reality training systems affords different levels of SA due to factors such as field of view, as well as presence within the virtual environment and the system. We conducted a study with 36 participants to evaluate SA acquisition and immersion in three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD) and a 270° cylindrical projection system (SimPit). We found a significant difference in SA levels between the HMD and the HDTV, as well as between the SimPit and the HDTV. The HMD was preferred for immersion and portability, but the SimPit provided the best environment for the actual role.
  • Collaborative immersive analytics.
    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T.

    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T. (2018). Collaborative immersive analytics. In Immersive Analytics (pp. 221-257). Springer, Cham.

    @incollection{billinghurst2018collaborative,
    title={Collaborative immersive analytics},
    author={Billinghurst, Mark and Cordeil, Maxime and Bezerianos, Anastasia and Margolis, Todd},
    booktitle={Immersive Analytics},
    pages={221--257},
    year={2018},
    publisher={Springer}
    }
    Many of the problems addressed by Immersive Analytics require groups of people to solve. This chapter introduces the concept of Collaborative Immersive Analytics (CIA) and reviews how immersive technologies can be combined with Visual Analytics to facilitate co-located and remote collaboration. We provide a definition of Collaborative Immersive Analytics and then an overview of the different types of possible collaboration. The chapter also discusses the various roles in collaborative systems, and how to support shared interaction with the data being presented. Finally, we summarize the opportunities for future research in this domain. The aim of the chapter is to provide a sufficient introduction to CIA and key directions for future research so that practitioners can begin working in the field.
  • Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting
    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W. (2018, September). Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting. In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) (pp. 1-8). IEEE.

    @inproceedings{clifford2018evaluating,
    title={Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting},
    author={Clifford, Rory MS and Hoermann, Simon and Marcadet, Nicolas and Oliver, Hamish and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
    pages={1--8},
    year={2018},
    organization={IEEE}
    }
    Aerial firefighting takes place in stressful environments where decision making and communication are paramount, and skills need to be practiced and trained regularly. An experiment was performed to test the effects of disrupting the communications ability of the users on their stress levels in a noisy environment. The goal of this research is to investigate how realistic disruption of communication systems can be simulated in a virtual environment and to what extent they induce stress. We found that aerial firefighting experts maintained a better Heart Rate Variability (HRV) during disruptions than novices. Experts showed better ability to manage stress based on the change in HRV during the experiment. Our main finding is that communication disruptions in virtual reality (e.g., broken transmissions) significantly impacted the level of stress experienced by participants.
  • TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams
    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C. (2018, August). TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 991-996). IEEE.

    @inproceedings{wen2018teammate,
    title={TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossel, Chad},
    booktitle={2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)},
    pages={991--996},
    year={2018},
    organization={IEEE}
    }
    Strong empathic bonding between members of a team can elevate team performance tremendously, but it is not clear how such bonding within human-machine teams may impact upon mission success. Prior work using self-reporting surveys and end-of-task metrics does not capture how such bonding may evolve over time and impact upon task fulfillment. Furthermore, sensor-based measures do not scale easily to facilitate the need to collect substantial data for measuring potentially subtle effects. We introduce TEAMMATE, a system designed to provide insights into the emotional dynamics humans may form for machine teammates that could critically impact the design of human-machine teams.
  • Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques
    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S.

    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S. (2018, September). Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques. In Proceedings of SAI Intelligent Systems Conference (pp. 309-322). Springer, Cham.

    @inproceedings{ismail2018designing,
    title={Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques},
    author={Ismail, Ajune Wanis and Billinghurst, Mark and Sunar, Mohd Shahrizal and Yusof, Cik Suhaimi},
    booktitle={Proceedings of SAI Intelligent Systems Conference},
    pages={309--322},
    year={2018},
    organization={Springer}
    }
    Augmented Reality (AR) supports natural interaction in physical and virtual worlds, so it has recently given rise to a number of novel interaction modalities. This paper presents a method for using hand gestures with speech input for multimodal interaction in AR. It focuses on providing an intuitive AR environment which supports natural interaction with virtual objects while sustaining accessible real tasks and interaction mechanisms. The paper reviews previous multimodal interfaces and describes recent studies in AR that employ gesture and speech inputs for multimodal input. It describes an implementation of gesture interaction with speech input in AR for virtual object manipulation. Finally, the paper presents a user evaluation of the technique, showing that it can be used to improve the interaction between virtual and physical elements in an AR environment.
  • Emotion Sharing and Augmentation in Cooperative Virtual Reality Games
    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M. (2018, October). Emotion Sharing and Augmentation in Cooperative Virtual Reality Games. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts (pp. 453-460). ACM.

    @inproceedings{hart2018emotion,
    title={Emotion Sharing and Augmentation in Cooperative Virtual Reality Games},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lawrence, Louise and Lee, Gun A and Smith, Ross T and Billinghurst, Mark},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts},
    pages={453--460},
    year={2018},
    organization={ACM}
    }
    We present preliminary findings from sharing and augmenting facial expression in cooperative social Virtual Reality (VR) games. We implemented a prototype system for capturing and sharing facial expression between VR players through their avatar. We describe our current prototype system and how it could be assimilated into a system for enhancing social VR experience. Two social VR games were created for a preliminary user study. We discuss our findings from the user study, potential games for this system, and future directions for this research.
  • Effects of Manipulating Physiological Feedback in Immersive Virtual Environments
    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Manipulating Physiological Feedback in Immersive Virtual Environments. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (pp. 101-111). ACM.

    @inproceedings{dey2018effects,
    title={Effects of Manipulating Physiological Feedback in Immersive Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play},
    pages={101--111},
    year={2018},
    organization={ACM}
    }
    Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user. Being able to see one’s physiological feedback in a virtual environment has proven to make the application more enjoyable. In this paper, we have investigated the effects of manipulating heart rate feedback provided to the participants in a single user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants’ emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
  • Real-time visual representations for mobile mixed reality remote collaboration.
    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W.

    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W. (2018, December). Real-time visual representations for mobile mixed reality remote collaboration. In SIGGRAPH Asia 2018 Virtual & Augmented Reality (p. 15). ACM.

    @inproceedings{gao2018real,
    title={Real-time visual representations for mobile mixed reality remote collaboration},
    author={Gao, Lei and Bai, Huidong and He, Weiping and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2018 Virtual \& Augmented Reality},
    pages={15},
    year={2018},
    organization={ACM}
    }
    In this study we present a Mixed Reality-based mobile remote collaboration system that enables an expert to provide real-time assistance over a physical distance. Using Google ARCore position tracking, we integrate the keyframes captured by an external depth sensor attached to the mobile phone into a single 3D point-cloud data set that presents the local physical environment in the VR world. This captured local scene is then wirelessly streamed to the remote side for the expert to view while wearing a mobile VR headset (HTC VIVE Focus). The remote expert can thus immerse themselves in the VR scene and provide guidance as if sharing the same work environment with the local worker. In addition, the remote guidance is streamed back to the local side as an AR cue overlaid on top of the local video see-through display. Our proposed mobile remote collaboration system supports a remote expert guiding a local worker through physical tasks in a large-scale workspace in a more natural and efficient way, simulating the face-to-face co-working experience through Mixed Reality.
  • Band of Brothers and Bolts: Caring About Your Robot Teammate
    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C. (2018, October). Band of Brothers and Bolts: Caring About Your Robot Teammate. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1853-1858). IEEE.

    @inproceedings{wen2018band,
    title={Band of Brothers and Bolts: Caring About Your Robot Teammate},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossell, Chad},
    booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages={1853--1858},
    year={2018},
    organization={IEEE}
    }
    It has been observed that a robot shown as suffering is enough to cause an empathic response from a person. Whether the response is a fleeting reaction with no consequences or a meaningful perspective change with associated behavior modifications is not clear. Existing work has been limited to measurements made at the end of empathy-inducing experimental trials rather than measurements made over time to capture consequential behavioral patterns. We report on preliminary results collected from a study that attempts to measure how the actions of a participant may be altered by empathy for a robot companion. Our findings suggest that induced empathy can in fact have a significant impact on a person's behavior, to the extent that the ability to fulfill a mission may be affected.
  • The effect of video placement in AR conferencing applications
    Lawrence, L., Dey, A., & Billinghurst, M.

    Lawrence, L., Dey, A., & Billinghurst, M. (2018, December). The effect of video placement in AR conferencing applications. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 453-457). ACM.

    @inproceedings{lawrence2018effect,
    title={The effect of video placement in AR conferencing applications},
    author={Lawrence, Louise and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={453--457},
    year={2018},
    organization={ACM}
    }
    We ran a pilot study to investigate the impact of video placement in augmented reality conferencing on communication, social presence and user preference. In addition, we explored the influence of two different tasks: assembly and negotiation. We discovered a correlation between video placement and the type of task, with some significant results in social presence indicators.
  • HandsInTouch: sharing gestures in remote collaboration
    Huang, W., Billinghurst, M., Alem, L., & Kim, S.

    Huang, W., Billinghurst, M., Alem, L., & Kim, S. (2018, December). HandsInTouch: sharing gestures in remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 396-400). ACM.

    @inproceedings{huang2018handsintouch,
    title={HandsInTouch: sharing gestures in remote collaboration},
    author={Huang, Weidong and Billinghurst, Mark and Alem, Leila and Kim, Seungwon},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={396--400},
    year={2018},
    organization={ACM}
    }
    Many systems have been developed to support remote collaboration, where hand gestures or sketches can be shared. However, the effect of combining gesture and sketching together has not been fully explored and understood. In this paper we describe HandsInTouch, a system in which both hand gestures and sketches made by a remote helper are shown to a local user in real time. We conducted a user study to test the usability of the system and the usefulness of combining gesture and sketching for remote collaboration. We discuss the results and make recommendations for system design and future work.
  • A generalized, rapid authoring tool for intelligent tutoring systems
    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G.

    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G. (2018, December). A generalized, rapid authoring tool for intelligent tutoring systems. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 368-373). ACM.

    @inproceedings{herbert2018generalized,
    title={A generalized, rapid authoring tool for intelligent tutoring systems},
    author={Herbert, Bradley and Billinghurst, Mark and Weerasinghe, Amali and Ens, Barret and Wigley, Grant},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={368--373},
    year={2018},
    organization={ACM}
    }
    As computer-based training systems become increasingly integrated into real-world training, tools which rapidly author courses for such systems are emerging. However, inconsistent user interface design and limited support for a variety of domains makes them time consuming and difficult to use. We present a Generalized, Rapid Authoring Tool (GRAT), which simplifies creation of Intelligent Tutoring Systems (ITSs) using a unified web-based wizard-style graphical user interface and programming-by-demonstration approaches to reduce technical knowledge needed to author ITS logic. We implemented a prototype, which authors courses for two kinds of tasks: A network cabling task and a console device configuration task to demonstrate the tool's potential. We describe the limitations of our prototype and present opportunities for evaluating the tool's usability and perceived effectiveness.
  • User virtual costume visualisation in an augmented virtuality immersive cinematic environment
    Tang, W., Lee, G. A., Billinghurst, M., & Lindeman, R. W.

    Tang, W., Lee, G. A., Billinghurst, M., & Lindeman, R. W. (2018, December). User virtual costume visualisation in an augmented virtuality immersive cinematic environment. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 219-223). ACM.

    @inproceedings{tang2018user,
    title={User virtual costume visualisation in an augmented virtuality immersive cinematic environment},
    author={Tang, Wenjing and Lee, Gun A and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={219--223},
    year={2018},
    organization={ACM}
    }
    Recent development of affordable head-mounted displays (HMDs) has led to accessible Virtual Reality (VR) solutions for watching 360-degree panoramic movies. While conventionally users cannot see their body while watching 360 movies, our prior work seamlessly blended a user's physical body into a 360 virtual movie scene. This paper extends that work by overlaying context-matching virtual costumes onto the user's real body. A prototype was developed using a pair of depth cameras and an HMD to capture the user's real body and embed it into a virtual 360 movie scene. Virtual costumes related to the movie scene are then overlaid on the user's real body to enhance the user experience. Results from a user study showed that augmenting the user's real body with a context-matching virtual costume was most preferred by users, while having no significant effect on the sense of presence compared to showing only the user's body in a 360 movie scene. The results offer a future direction for generating enhanced 360 VR movie watching experiences.
  • Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.
    Kim, S., Billinghurst, M., Lee, C., & Lee, G

    Kim, S., Billinghurst, M., Lee, C., & Lee, G. (2018). Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration. KSII Transactions on Internet & Information Systems, 12(12).

    @article{kim2018using,
    title={Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Chilwoo and Lee, Gun},
    journal={KSII Transactions on Internet \& Information Systems},
    volume={12},
    number={12},
    year={2018}
    }
    This paper describes two user studies of remote collaboration between two users with a video conferencing system in which a remote user can draw annotations on the live video of the local user's workspace. In both studies, the local user had control of the view when sharing the first-person view, but our interfaces provided instant control of the shared view to the remote user. The first study investigates methods for assisting drawing annotations. The auto-freeze method, a novel solution for drawing annotations, is compared to a prior solution (the manual-freeze method) and a baseline (non-freeze) condition. Results show that both local and remote users preferred the auto-freeze method, which is easy to use and allows users to quickly draw annotations. The manual-freeze method supported precise drawing, but was less preferred because of the need for manual input. The second study explores visual notifications for better local user awareness. We propose two designs, the red-box and both-freeze notifications, and compare these to a baseline no-notification condition. Users preferred the less obtrusive red-box notification, which improved awareness of when annotations were made by remote users and had a significantly lower level of interruption compared to the both-freeze condition.
  • A user study on MR remote collaboration using live 360 video.
    Lee, G. A., Teo, T., Kim, S., & Billinghurst, M.

    Lee, G. A., Teo, T., Kim, S., & Billinghurst, M. (2018, October). A user study on MR remote collaboration using live 360 video. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 153-164). IEEE.

    @inproceedings{lee2018user,
    title={A user study on mr remote collaboration using live 360 video},
    author={Lee, Gun A and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={153--164},
    year={2018},
    organization={IEEE}
    }
    Sharing and watching live 360 panorama video is available on modern social networking platforms, yet the communication is often a passive, one-directional experience. This research investigates how to further improve live 360 panorama-based remote collaborative experiences by adding Mixed Reality (MR) cues. SharedSphere is a wearable MR remote collaboration system that enriches a live-captured immersive panorama-based collaboration through MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). We describe the design and implementation details of the prototype system, and report on a user study investigating how MR live panorama sharing affects the user's collaborative experience. The results showed that providing view independence through sharing a live panorama enhances co-presence in collaboration, and that the MR cues help users understand each other. Based on the study results we discuss design implications and future research directions.
  • The Potential of Augmented Reality for Computer Science Education
    Resnyansky, D., İbili, E., & Billinghurst, M.

    Resnyansky, D., İbili, E., & Billinghurst, M. (2018, December). The Potential of Augmented Reality for Computer Science Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 350-356). IEEE.

    @inproceedings{resnyansky2018potential,
    title={The Potential of Augmented Reality for Computer Science Education},
    author={Resnyansky, Dmitry and {\.I}bili, Emin and Billinghurst, Mark},
    booktitle={2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    pages={350--356},
    year={2018},
    organization={IEEE}
    }
    Innovative approaches in the teaching of computer science are required to address the needs of diverse target audiences, including groups with minimal mathematical background and insufficient abstract thinking ability. To tackle this problem, new pedagogical approaches are needed, such as the use of new technologies like Virtual and Augmented Reality, Tangible User Interfaces, and 3D graphics. This paper draws upon relevant pedagogical and technological literature to determine how Augmented Reality can be more fully applied to computer science education.
  • Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments
    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 165-173). IEEE.

    @inproceedings{dey2018effects,
    title={Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Zhuang, Chang and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={165--173},
    year={2018},
    organization={IEEE}
    }
    Collaboration is an important application area for virtual reality (VR). However, unlike in the real world, collaboration in VR misses important empathetic cues that can make collaborators aware of each other's emotional states. Providing physiological feedback, such as heart rate or respiration rate, to users in VR has been shown to create a positive impact in single-user environments. In this paper, through a rigorous mixed-factorial user experiment, we evaluated how providing heart rate feedback to collaborators influences their collaboration in three different environments requiring different kinds of collaboration. We found that when provided with real-time heart rate feedback, participants felt the presence of the collaborator more and felt that they understood their collaborator's emotional state better. Heart rate feedback also made participants feel more dominant when performing the task. We discuss the implications of this research for collaborative VR environments, provide design guidelines, and suggest directions for future research.
  • Sharing and Augmenting Emotion in Collaborative Mixed Reality
    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M. (2018, October). Sharing and Augmenting Emotion in Collaborative Mixed Reality. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 212-213). IEEE.

    @inproceedings{hart2018sharing,
    title={Sharing and Augmenting Emotion in Collaborative Mixed Reality},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lee, Gun and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={212--213},
    year={2018},
    organization={IEEE}
    }
    We present a concept of emotion sharing and augmentation for collaborative mixed reality. To depict the ideal use case of such a system, we give two example scenarios. We describe our prototype system for capturing and augmenting emotion through facial expression, eye gaze, voice, and physiological data, and sharing these through the users' virtual representations, and we discuss future research directions and potential applications.
  • Filtering 3D Shared Surrounding Environments by Social Proximity in AR
    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W.

    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W. (2018, October). Filtering 3D Shared Surrounding Environments by Social Proximity in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 123-124). IEEE.

    @inproceedings{nassani2018filtering,
    title={Filtering 3D Shared Surrounding Environments by Social Proximity in AR},
    author={Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Langlotz, Tobias and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={123--124},
    year={2018},
    organization={IEEE}
    }
    In this poster, we explore the social sharing of surrounding environments on wearable Augmented Reality (AR) devices. In particular, we propose filtering the level of detail of sharing the surrounding environment based on the social proximity between the viewer and the sharer. We tested the effect of having the filter (varying levels of detail) on the shared surrounding environment on the sense of privacy from both viewer and sharer perspectives, and conducted a pilot study using HoloLens. We report on semi-structured questionnaire results and suggest future directions in the social sharing of surrounding environments.
  • The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation
    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M.

    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M. (2018, October). The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 216-221). IEEE.

    @inproceedings{zhang2018effect,
    title={The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation},
    author={Zhang, Li and Ha, Weiping and Bai, Xiaoliang and Chen, Yongxing and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={216--221},
    year={2018},
    organization={IEEE}
    }
    In this paper, we explore how Augmented Reality (AR) and anthropomorphism can be used to assign emotions to common physical objects based on their needs. We developed a novel emotional interaction model among personified physical objects so that they could react to other objects by changing virtual facial expressions. To explore the effect of such an emotional interface, we conducted a user study comparing three types of virtual cues shown on the real objects: (1) information only, (2) emotion only and (3) both information and emotional cues. A significant difference was found in task completion time and the quality of work when adding emotional cues to an informational AR-based guiding system. This implies that adding emotion feedback to informational cues may produce better task results than using informational cues alone.
  • Do You Know What I Mean? An MR-Based Collaborative Platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., Wang, S.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., ... & Wang, S. (2018, October). Do you know what I mean? An MR-based collaborative platform. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 77-78). IEEE.

    @inproceedings{wang2018you,
    title={Do you know what I mean? An MR-based collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Zhang, Li and Du, Jiaxiang and Wang, Shuxia},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={77--78},
    year={2018},
    organization={IEEE}
    }
    Mixed Reality (MR) technology can be used to create unique collaborative experiences. In this paper, we propose a new remote collaboration platform using MR and eye-tracking that enables a remote helper to assist a local worker in an assembly task. We present results from research exploring the effect of sharing virtual gaze and annotation cues in an MR-based projector interface for remote collaboration. The key advantage compared to other remote collaborative MR interfaces is that it projects the remote expert's eye gaze into the real worksite to improve co-presence. The prototype system was evaluated with a pilot study comparing two conditions: POINTER and ET (eye-tracker cues). We observed that task completion performance was better in the ET condition, and that sharing gaze significantly improved the awareness of each other's focus and co-presence.
  • TUI/AR-based teaching programming tools
    The potential of augmented reality for computer science education
    Dmitry Resnyansky; Emin İbili; Mark Billinghurst

    Resnyansky, D., Ibili, E., & Billinghurst, M. (2018, December). The potential of augmented reality for computer science education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 350-356). IEEE.

    @INPROCEEDINGS{8615331,
    author={Resnyansky, Dmitry and İbili, Emin and Billinghurst, Mark},
    booktitle={2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    title={The Potential of Augmented Reality for Computer Science Education},
    year={2018},
    volume={},
    number={},
    pages={350-356},
    doi={10.1109/TALE.2018.8615331}}
    Innovative approaches in the teaching of computer science are required to address the needs of diverse target audiences, including groups with minimal mathematical background and insufficient abstract thinking ability. In order to tackle this problem, new pedagogical approaches that make use of technologies such as Virtual and Augmented Reality, Tangible User Interfaces, and 3D graphics are needed. This paper draws upon relevant pedagogical and technological literature to determine how Augmented Reality can be more fully applied to computer science education.
  • 2017
  • Mixed Reality Collaboration through Sharing a Live Panorama
    Gun A. Lee, Theophilus Teo, Seungwon Kim, Mark Billinghurst

    Gun A. Lee, Theophilus Teo, Seungwon Kim, and Mark Billinghurst. 2017. Mixed reality collaboration through sharing a live panorama. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (SA '17). ACM, New York, NY, USA, Article 14, 4 pages. http://doi.acm.org/10.1145/3132787.3139203

    @inproceedings{Lee:2017:MRC:3132787.3139203,
    author = {Lee, Gun A. and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    title = {Mixed Reality Collaboration Through Sharing a Live Panorama},
    booktitle = {SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    series = {SA '17},
    year = {2017},
    isbn = {978-1-4503-5410-3},
    location = {Bangkok, Thailand},
    pages = {14:1--14:4},
    articleno = {14},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3132787.3139203},
    doi = {10.1145/3132787.3139203},
    acmid = {3139203},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {panorama, remote collaboration, shared experience},
    }
    One of the popular features on modern social networking platforms is sharing live 360 panorama video. This research investigates how to further improve shared live panorama based collaborative experiences by applying Mixed Reality (MR) technology. SharedSphere is a wearable MR remote collaboration system. In addition to sharing a live captured immersive panorama, SharedSphere enriches the collaboration through overlaying MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). User feedback collected through a preliminary user study indicated that sharing of live 360 panorama video was beneficial by providing a more immersive experience and supporting view independence. Users also felt that the view awareness cues were helpful for understanding the remote collaborator’s focus.
  • User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors
    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst

    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst. User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 109-116. http://dx.doi.org/10.2312/egve.20171347

    @inproceedings {egve.20171347,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors}},
    author = {Lee, Gun A. and Rudhru, Omprakash and Park, Hye Sun and Kim, Ho Won and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171347}
    }
    This research investigates using user interface (UI) agents for guiding gesture based interaction with Augmented Virtual Mirrors. Compared to prior work in gesture interaction, where graphical symbols are used for guiding user interaction, we propose using UI agents. We explore two approaches for using UI agents: 1) using a UI agent as a delayed cursor and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results from the user studies show that UI agents are effective for guiding user interactions in a similar way to a traditional graphical user interface providing visual cues, while also being useful for emotionally engaging users.
  • Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze
    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst

    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst. 2017. Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 197-204. http://dx.doi.org/10.2312/egve.20171359

    @inproceedings {egve.20171359,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze}},
    author = {Lee, Gun A. and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171359}
    }
    To improve remote collaboration in video conferencing systems, researchers have been investigating augmenting visual cues onto a shared live video stream. In such systems, a person wearing a head-mounted display (HMD) and camera can share her view of the surrounding real-world with a remote collaborator to receive assistance on a real-world task. While this concept of augmented video conferencing (AVC) has been actively investigated, there has been little research on how sharing gaze cues might affect the collaboration in video conferencing. This paper investigates how sharing gaze in both directions between a local worker and remote helper in an AVC system affects the collaboration and communication. Using a prototype AVC system that shares the eye gaze of both users, we conducted a user study that compares four conditions with different combinations of eye gaze sharing between the two users. The results showed that sharing each other’s gaze significantly improved collaboration and communication.
  • Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst

    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst. 2017. Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 36-39. https://doi.org/10.1109/3DUI.2017.7893315

    @INPROCEEDINGS{7893315,
    author={T. Piumsomboon and G. Lee and R. W. Lindeman and M. Billinghurst},
    booktitle={2017 IEEE Symposium on 3D User Interfaces (3DUI)},
    title={Exploring natural eye-gaze-based interaction for immersive virtual reality},
    year={2017},
    volume={},
    number={},
    pages={36-39},
    keywords={gaze tracking;gesture recognition;helmet mounted displays;virtual reality;Duo-Reticles;Nod and Roll;Radial Pursuit;cluttered-object selection;eye tracking technology;eye-gaze selection;head-gesture-based interaction;head-mounted display;immersive virtual reality;inertial reticles;natural eye movements;natural eye-gaze-based interaction;smooth pursuit;vestibulo-ocular reflex;Electronic mail;Erbium;Gaze tracking;Painting;Portable computers;Resists;Two dimensional displays;H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction styles},
    doi={10.1109/3DUI.2017.7893315},
    ISSN={},
    month={March},}
    Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
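    To illustrate the general idea behind pursuit-based selection (the family of techniques that Radial Pursuit builds on), the sketch below correlates the gaze trajectory with each moving object's trajectory and selects the best match. This is a generic illustration, not the paper's implementation; the function name, threshold, and toy trajectories are assumptions.

    # Hypothetical sketch of smooth-pursuit selection (the general idea behind
    # techniques like Radial Pursuit, not the paper's implementation): correlate
    # the gaze trajectory with each moving object's trajectory and pick the best.
    import numpy as np

    def pursuit_select(gaze_xy, object_trajs, threshold=0.8):
        """gaze_xy: (T, 2); object_trajs: dict name -> (T, 2). Returns best match or None."""
        scores = {}
        for name, traj in object_trajs.items():
            rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
            ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
            scores[name] = (rx + ry) / 2.0
        best = max(scores, key=scores.get)
        return best if scores[best] >= threshold else None

    # Toy example: gaze follows object "a" (circular motion) with slight noise.
    t = np.linspace(0, 2 * np.pi, 100)
    obj_a = np.stack([np.cos(t), np.sin(t)], axis=1)
    obj_b = np.stack([np.cos(2 * t), np.sin(2 * t)], axis=1)
    gaze = obj_a + 0.05 * np.random.randn(100, 2)
    print(pursuit_select(gaze, {"a": obj_a, "b": obj_b}))   # -> "a"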
  • Enhancing player engagement through game balancing in digitally augmented physical games
    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C.

    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C. (2017). Enhancing player engagement through game balancing in digitally augmented physical games. International Journal of Human-Computer Studies, 103, 35-47.

    @article{altimira2017enhancing,
    title={Enhancing player engagement through game balancing in digitally augmented physical games},
    author={Altimira, David and Clarke, Jenny and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph and others},
    journal={International Journal of Human-Computer Studies},
    volume={103},
    pages={35--47},
    year={2017},
    publisher={Elsevier}
    }
    Game balancing can be used to compensate for differences in players' skills, in particular in games where players compete against each other. It can help provide the right level of challenge and hence enhance engagement. However, there is a lack of understanding of game balancing design and how different game adjustments affect player engagement. This understanding is important for the design of balanced physical games. In this paper we report on how altering the game equipment in a digitally augmented table tennis game, such as the table size and bat-head size, statically and dynamically, can affect game balancing and player engagement. We found these adjustments enhanced player engagement compared to the no-adjustment condition. The understanding of how the adjustments impacted player engagement helped us to derive a set of balancing strategies to facilitate engaging game experiences. We hope that this understanding can contribute to improving physical activity experiences and encourage people to get engaged in physical activity.
  • Effects of sharing physiological states of players in a collaborative virtual reality gameplay
    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M.

    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M. (2017, May). Effects of sharing physiological states of players in a collaborative virtual reality gameplay. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4045-4056). ACM.

    @inproceedings{dey2017effects,
    title={Effects of sharing physiological states of players in a collaborative virtual reality gameplay},
    author={Dey, Arindam and Piumsomboon, Thammathip and Lee, Youngho and Billinghurst, Mark},
    booktitle={Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems},
    pages={4045--4056},
    year={2017},
    organization={ACM}
    }
    Interfaces for collaborative tasks, such as multiplayer games, can enable more effective and enjoyable collaboration. However, in these systems, the emotional states of the users are often not communicated properly due to their remoteness from one another. In this paper, we investigate the effects of showing emotional states of one collaborator to the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart-rate of one player to the other. The two different games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. The games had significant main effects on the overall emotional experience.
  • User evaluation of hand gestures for designing an intelligent in-vehicle interface
    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M.

    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M. (2017, May). User evaluation of hand gestures for designing an intelligent in-vehicle interface. In International Conference on Design Science Research in Information System and Technology (pp. 104-121). Springer, Cham.

    @inproceedings{jahani2017user,
    title={User evaluation of hand gestures for designing an intelligent in-vehicle interface},
    author={Jahani, Hessam and Alyamani, Hasan J and Kavakli, Manolya and Dey, Arindam and Billinghurst, Mark},
    booktitle={International Conference on Design Science Research in Information System and Technology},
    pages={104--121},
    year={2017},
    organization={Springer}
    }
    Driving a car is a high cognitive-load task requiring full attention behind the wheel. Intelligent navigation, transportation, and in-vehicle interfaces have introduced a safer and less demanding driving experience. However, there is still a gap for the existing interaction systems to satisfy the requirements of actual user experience. Hand gesture, as an interaction medium, is natural and less visually demanding while driving. This paper reports a user study with 79 participants to validate mid-air gestures for 18 major in-vehicle secondary tasks. We present a detailed analysis of 900 mid-air gestures, investigating gesture preferences for in-vehicle tasks, their physical affordance, and driving errors. The outcomes demonstrate that employment of mid-air gestures reduces driving errors by up to 50% compared to traditional air-conditioning control. Results can be used for the development of vision-based in-vehicle gestural interfaces.
  • Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals
    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M.

    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M. (2017, June). Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals. In International Conference on Artificial Intelligence in Education (pp. 450-454). Springer, Cham.

    @inproceedings{almiyad2017intelligent,
    title={Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals},
    author={Almiyad, Mohammed A and Oakden-Rayner, Luke and Weerasinghe, Amali and Billinghurst, Mark},
    booktitle={International Conference on Artificial Intelligence in Education},
    pages={450--454},
    year={2017},
    organization={Springer}
    }
    Percutaneous radiology procedures often require the repeated use of medical radiation in the form of computed tomography (CT) scanning, to demonstrate the position of the needle in the underlying tissues. The angle of the insertion and the distance travelled by the needle inside the patient play a major role in successful procedures, and must be estimated by the practitioner and confirmed periodically by the use of the scanner. Junior radiology trainees, who are already highly trained professionals, currently learn this task “on-the-job” by performing the procedures on real patients with varying levels of guidance. Therefore, we present a novel Augmented Reality (AR)-based system that provides multiple layers of intuitive and adaptive feedback to assist junior radiologists in achieving competency in image-guided procedures.
  • Augmented reality entertainment: taking gaming out of the box
    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H.

    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H. (2017). Augmented reality entertainment: taking gaming out of the box. Encyclopedia of Computer Graphics and Games, 1-9.

    @article{von2017augmented,
    title={Augmented reality entertainment: taking gaming out of the box},
    author={Von Itzstein, G Stewart and Billinghurst, Mark and Smith, Ross T and Thomas, Bruce H},
    journal={Encyclopedia of Computer Graphics and Games},
    pages={1--9},
    year={2017},
    publisher={Springer}
    }
    In this chapter, an overview of using AR for gaming and entertainment, one of the most popular application areas, is provided. There are many possible AR entertainment applications. For example, the Pokémon Go mobile phone game has an AR element that allows people to see virtual Pokémon appear in the live camera view, seemingly inhabiting the real world. In this case, Pokémon Go satisfies Azuma’s three AR criteria: the virtual Pokémon appear in the real world, the user can interact with them, and they appear fixed in space.
  • Estimating Gaze Depth Using Multi-Layer Perceptron
    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., Lee, G., Kim, S., & Billinghurst, M.

    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., ... & Billinghurst, M. (2017, June). Estimating Gaze Depth Using Multi-Layer Perceptron. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 26-29). IEEE.

    @inproceedings{lee2017estimating,
    title={Estimating Gaze Depth Using Multi-Layer Perceptron},
    author={Lee, Youngho and Shin, Choonsung and Plopski, Alexander and Itoh, Yuta and Piumsomboon, Thammathip and Dey, Arindam and Lee, Gun and Kim, Seungwon and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={26--29},
    year={2017},
    organization={IEEE}
    }
    In this paper we describe a new method for determining gaze depth in a head-mounted eye tracker. Eye trackers are being incorporated into head-mounted displays (HMDs), and eye gaze is being used for interaction in Virtual and Augmented Reality. For some interaction methods, it is important to accurately measure the x- and y-direction of the eye gaze and, especially, the focal depth information. Generally, eye tracking technology has high accuracy in the x- and y-directions, but not in depth. We used a binocular gaze tracker with two eye cameras, and the gaze vector was input to an MLP neural network for training and estimation. For the performance evaluation, data was obtained from 13 people gazing at fixed points at distances from 1 m to 5 m. The gaze classification into fixed distances produced an average classification error of nearly 10%, and an average error distance of 0.42 m. This is sufficient for some Augmented Reality applications, but more research is needed to provide an estimate of a user’s gaze moving in continuous space.
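    As a rough illustration of the kind of model described above (not the authors' code), the sketch below trains a small multi-layer perceptron to map binocular gaze features to discrete fixation-distance classes; the feature layout, network size, and synthetic data are assumptions.

    # Hypothetical sketch of an MLP gaze-depth classifier (not the paper's code).
    # Assumed feature layout: left/right gaze direction vectors (3 values each).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: 500 samples of concatenated left/right gaze vectors,
    # labelled with fixation-distance classes 1 m .. 5 m (as in the study design).
    X = rng.normal(size=(500, 6))        # [lx, ly, lz, rx, ry, rz] per sample
    y = rng.integers(1, 6, size=500)     # distance class in metres

    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    print(cross_val_score(model, X, y, cv=5).mean())   # chance-level on random data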
  • Empathic mixed reality: Sharing what you feel and interacting with what you see
    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M.

    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M. (2017, June). Empathic mixed reality: Sharing what you feel and interacting with what you see. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 38-41). IEEE.

    @inproceedings{piumsomboon2017empathic,
    title={Empathic mixed reality: Sharing what you feel and interacting with what you see},
    author={Piumsomboon, Thammathip and Lee, Youngho and Lee, Gun A and Dey, Arindam and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={38--41},
    year={2017},
    organization={IEEE}
    }
    Empathic Computing is a research field that aims to use technology to create deeper shared understanding or empathy between people. At the same time, Mixed Reality (MR) technology provides an immersive experience that can make an ideal interface for collaboration. In this paper, we present some of our research into how MR technology can be applied to creating Empathic Computing experiences. This includes exploring how to share gaze in a remote collaboration between Augmented Reality (AR) and Virtual Reality (VR) environments, using physiological signals to enhance collaborative VR, and supporting interaction through eye-gaze in VR. Early outcomes indicate that as we design collaborative interfaces to enhance empathy between people, this could also benefit the personal experience of the individual interacting with the interface.
  • The Social AR Continuum: Concept and User Study
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W. (2017, October). [POSTER] The Social AR Continuum: Concept and User Study. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 7-8). IEEE.

    @inproceedings{nassani2017poster,
    title={[POSTER] The Social AR Continuum: Concept and User Study},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Hoermann, Simon and Lindeman, Robert W},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={7--8},
    year={2017},
    organization={IEEE}
    }
    In this poster, we describe The Social AR Continuum, a space that encompasses different dimensions of Augmented Reality (AR) for sharing social experiences. We explore various dimensions, discuss options for each dimension, and brainstorm possible scenarios where these options might be useful. We describe a prototype interface using the contact placement dimension, and report on feedback from potential users which supports its usefulness for visualising social contacts. Based on this concept work, we suggest user studies in the social AR space, and give insights into future directions.
  • Mutually Shared Gaze in Augmented Video Conference
    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M.

    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M. (2017, October). Mutually Shared Gaze in Augmented Video Conference. In Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017 (pp. 79-80). Institute of Electrical and Electronics Engineers Inc..

    @inproceedings{lee2017mutually,
    title={Mutually Shared Gaze in Augmented Video Conference},
    author={Lee, Gun and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammatip and Norman, Mitchell and Billinghurst, Mark},
    booktitle={Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017},
    pages={79--80},
    year={2017},
    organization={Institute of Electrical and Electronics Engineers Inc.}
    }
    Augmenting video conference with additional visual cues has been studied to improve remote collaboration. A common setup is a person wearing a head-mounted display (HMD) and camera sharing her view of the workspace with a remote collaborator and getting assistance on a real-world task. While this configuration has been extensively studied, there has been little research on how sharing gaze cues might affect the collaboration. This research investigates how sharing gaze in both directions between a local worker and remote helper affects the collaboration and communication. We developed a prototype system that shares the eye gaze of both users, and conducted a user study. Preliminary results showed that sharing gaze significantly improves the awareness of each other's focus, hence improving collaboration.
  • CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues
    Piumsomboon, T., Dey, A., Ens, B., Lee, G., and Billinghurst, M

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2017, October). [POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 218-219). IEEE.

    @inproceedings{piumsomboon2017poster,
    title={[POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={218--219},
    year={2017},
    organization={IEEE}
    }
    We present CoVAR, a novel Virtual Reality (VR) and Augmented Reality (AR) system for remote collaboration. It supports collaboration between AR and VR users by sharing a 3D reconstruction of the AR user's environment. To enhance this mixed-platform collaboration, it provides natural inputs such as eye gaze and hand gestures, remote embodiment through the avatar's head and hands, and awareness cues for field-of-view and gaze. In this paper, we describe the system architecture, setup and calibration procedures, input methods and interaction, and collaboration enhancement features.
  • Exhibition approach using an AR and VR pillar
    See, Z. S., Sunar, M. S., Billinghurst, M., Dey, A., Santano, D., Esmaeili, H., and Thwaites, H.

    See, Z. S., Sunar, M. S., Billinghurst, M., Dey, A., Santano, D., Esmaeili, H., & Thwaites, H. (2017, November). Exhibition approach using an AR and VR pillar. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 8). ACM.

    @inproceedings{see2017exhibition,
    title={Exhibition approach using an AR and VR pillar},
    author={See, Zi Siang and Sunar, Mohd Shahrizal and Billinghurst, Mark and Dey, Arindam and Santano, Delas and Esmaeili, Human and Thwaites, Harold},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={8},
    year={2017},
    organization={ACM}
    }
    This demonstration presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase. A cultural heritage theme, Boatbuilders of Pangkor, was featured in an experiment with the AR and VR Pillar. Multimedia tablets and mobile AR head-mounted displays (HMDs) were freely provided for the public visitors to experience multisensory content demonstrated on the pillar. The content included AR-based videos, maps, images and text, and VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360 virtual environment. A miniature version of the pillar will be used for the demonstration, where users can experience features of the prototype system.
  • Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie
    Khan, Humayun, Gun Lee, Simon Hoermann, Rory Clifford, Mark Billinghurst, and Robert W. Lindeman.

    Khan, Humayun, Gun Lee, Simon Hoermann, Rory Clifford, Mark Billinghurst, and Robert W. Lindeman. "Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie." (2017).

    @article{khan2017evaluating,
    title={Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie},
    author={Khan, Humayun and Lee, Gun and Hoermann, Simon and Clifford, Rory and Billinghurst, Mark and Lindeman, Robert W},
    year={2017}
    }
    Head-mounted displays are becoming increasingly popular as home entertainment devices for viewing 360° movies. This paper explores the effects of adding gesture interaction with virtual content and two different hand-visualisation modes to the 360° movie watching experience. The system in the study comprises a Leap Motion sensor to track the user’s hand and finger motions, in combination with a SoftKinetic RGB-D camera to capture the texture of the hands and arms. A 360° panoramic movie with embedded virtual objects was used as content. Four conditions, displaying either a point-cloud of the real hand or a rigged computer-generated hand, with and without interaction, were evaluated. Presence, agency, embodiment, and ownership, as well as the overall participant preference, were measured. Results showed that participants had a strong preference for the conditions with interactive virtual content, and they felt stronger embodiment and ownership. The comparison of the two hand visualisations showed that the display of the real hand elicited stronger ownership. There was no overall difference in presence between the four conditions. These findings suggest that adding interaction with virtual content could be beneficial to the overall user experience, and that interaction should be performed using the real hand visualisation instead of the virtual hand if higher ownership is desired.
  • The effect of user embodiment in AV cinematic experience
    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., and Bartneck, C.

    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., & Bartneck, C. (2017). The effect of user embodiment in AV cinematic experience.

    @article{chen2017effect,
    title={The effect of user embodiment in AV cinematic experience},
    author={Chen, Joshua and Lee, Gun and Billinghurst, Mark and Lindeman, Robert W and Bartneck, Christoph},
    year={2017}
    }
    Virtual Reality (VR) is becoming a popular medium for viewing immersive cinematic experiences using 360° panoramic movies and head-mounted displays. There is previous research on user embodiment in real-time rendered VR, but not in relation to cinematic VR based on 360° panoramic video. In this paper we explore the effects of introducing the user’s real body into cinematic VR experiences. We conducted a study evaluating how the type of movie and user embodiment affect the sense of presence and user engagement. We found that when participants were able to see their own body in the VR movie, there was a significant increase in the sense of presence, yet user engagement was not significantly affected. We discuss the implications of the results and how they can be expanded in the future.
  • A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs
    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M.

    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M. (2017, November). A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos (pp. 1-2). Eurographics Association.

    @inproceedings{lee2017gaze,
    title={A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs},
    author={Lee, Youngho and Piumsomboon, Thammathip and Ens, Barrett and Lee, Gun and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos},
    pages={1--2},
    year={2017},
    organization={Eurographics Association}
    }

    The rapid development of machine learning algorithms can be leveraged for potential software solutions in many domains, including techniques for depth estimation of human eye gaze. In this paper, we propose an implicit and continuous data acquisition method for 3D gaze depth estimation for an optical see-through head-mounted display (OST-HMD) equipped with an eye tracker. Our method constantly monitors and generates user gaze data for training our machine learning algorithm. The gaze data acquired through the eye tracker include the inter-pupillary distance (IPD) and the gaze distance to the real and virtual target for each eye.
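    For context on why IPD and per-eye gaze directions carry depth information, the standard geometric (vergence-based) relation for a symmetric fixation is d = IPD / (2 * tan(theta / 2)), where theta is the vergence angle between the two gaze rays. The sketch below computes this geometric baseline only; it is not the learned estimator proposed in the paper, and the example values are assumptions.

    # Hypothetical sketch: geometric (vergence-based) gaze depth as a baseline
    # for the learned estimator described above. Assumes symmetric fixation.
    import math

    def vergence_depth(ipd_m, vergence_angle_rad):
        """Fixation distance (m) from inter-pupillary distance and vergence angle."""
        return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

    print(round(vergence_depth(0.064, math.radians(1.8)), 2))  # ~2.04 m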

  • Exploring pupil dilation in emotional virtual reality environments.
    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W.

    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W. (2017, November). Exploring pupil dilation in emotional virtual reality environments. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 169-176). Eurographics Association.

    @inproceedings{chen2017exploring,
    title={Exploring pupil dilation in emotional virtual reality environments},
    author={Chen, Hao and Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={169--176},
    year={2017},
    organization={Eurographics Association}
    }
    Previous investigations have shown that pupil dilation can be affected by emotive pictures, audio clips, and videos. In this paper, we explore how emotive Virtual Reality (VR) content can also cause pupil dilation. VR has been shown to be able to evoke negative and positive arousal in users when they are immersed in different virtual scenes. In our research, VR scenes were used as emotional triggers. Five emotional VR scenes were designed in our study and each scene had five emotion segments: happiness, fear, anxiety, sadness, and disgust. When participants experienced the VR scenes, their pupil dilation and the brightness in the headset were captured. We found that both the negative and positive emotion segments produced pupil dilation in the VR environments. We also explored the effect of showing heart beat cues to the users, and whether this could cause differences in pupil dilation. In our study, three different heart beat cues were shown to users using a combination of three channels: haptic, audio, and visual. The results showed that the haptic-visual cue caused the most significant pupil dilation change from the baseline.
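    A minimal sketch of the baseline-relative measure discussed above, assuming uniformly sampled pupil-diameter data and known segment boundaries (the function name, sampling rate, and synthetic signal are illustrative, not the authors' analysis code):

    # Hypothetical sketch: pupil-dilation change relative to a resting baseline.
    # Assumes pupil diameter is sampled uniformly and segment boundaries are known.
    import numpy as np

    def dilation_change(pupil_mm, baseline_slice, segment_slices):
        """Return mean diameter change (mm) of each segment vs. the baseline mean."""
        baseline = np.nanmean(pupil_mm[baseline_slice])
        return {name: np.nanmean(pupil_mm[s]) - baseline
                for name, s in segment_slices.items()}

    # Example with synthetic data (120 Hz, 60 s recording).
    signal = 3.5 + 0.1 * np.random.randn(120 * 60)
    segments = {"happiness": slice(1200, 2400), "fear": slice(2400, 3600)}
    print(dilation_change(signal, slice(0, 1200), segments))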
  • Collaborative View Configurations for Multi-user Interaction with a Wall-size Display
    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C.

    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C. (2017, November). Collaborative view configurations for multi-user interaction with a wall-size display. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 189-196). Eurographics Association.

    @inproceedings{kim2017collaborative,
    title={Collaborative view configurations for multi-user interaction with a wall-size display},
    author={Kim, Hyungon and Kim, Yeongmi and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={189--196},
    year={2017},
    organization={Eurographics Association}
    }
    This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction. Three different view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experiment results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflict and can provide effective collaboration. The split screen view configuration can provide independent collaboration, although it can take up users’ attention. The navigation information can reduce the interaction required for the navigational task, although overall interaction performance may not increase.
  • Towards Optimization of Mid-air Gestures for In-vehicle Interactions
    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M.

    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M. (2017, November). Towards optimization of mid-air gestures for in-vehicle interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (pp. 126-134). ACM.

    @inproceedings{hessam2017towards,
    title={Towards optimization of mid-air gestures for in-vehicle interactions},
    author={Hessam, Jahani F and Zancanaro, Massimo and Kavakli, Manolya and Billinghurst, Mark},
    booktitle={Proceedings of the 29th Australian Conference on Computer-Human Interaction},
    pages={126--134},
    year={2017},
    organization={ACM}
    }
    A mid-air gesture-based interface could provide a less cumbersome in-vehicle interface for a safer driving experience. Despite recent developments in gesture-driven technologies facilitating multi-touch and mid-air gestures, interface safety requirements, as well as an evaluation of gesture characteristics and functions, need to be explored. This paper describes an optimization study on the previously developed GestDrive gesture vocabulary for in-vehicle secondary tasks. We investigate mid-air gestures and secondary tasks, their correlation, confusions, unintentional inputs, and consequential safety risks. Building upon a statistical analysis, the results provide an optimized taxonomy breakdown for a user-centered gestural interface design that considers user preferences, requirements, performance, and safety issues.
  • Exploring Mixed-Scale Gesture Interaction
    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M. (2017). Exploring mixed-scale gesture interaction. SA'17 SIGGRAPH Asia 2017 Posters.

    @article{ens2017exploring,
    title={Exploring mixed-scale gesture interaction},
    author={Ens, Barrett and Quigley, Aaron John and Yeo, Hui Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={SA'17 SIGGRAPH Asia 2017 Posters},
    year={2017},
    publisher={ACM}
    }
    This paper presents ongoing work toward a design exploration for combining microgestures with other types of gestures within the greater lexicon of gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors.
  • Multi-Scale Gestural Interaction for Augmented Reality
    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M.

    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M. (2017, November). Multi-scale gestural interaction for augmented reality. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 11). ACM.

    @inproceedings{ens2017multi,
    title={Multi-scale gestural interaction for augmented reality},
    author={Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={11},
    year={2017},
    organization={ACM}
    }

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward, and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro-scale) and finger gestures (micro-scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with microgestures for precise interaction, beyond the capabilities of direct manipulation alone.

  • Static local environment capturing and sharing for MR remote collaboration
    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M.

    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M. (2017, November). Static local environment capturing and sharing for MR remote collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 17). ACM.

    @inproceedings{gao2017static,
    title={Static local environment capturing and sharing for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lindeman, Rob and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={17},
    year={2017},
    organization={ACM}
    }
    We present a Mixed Reality (MR) system that supports capturing the entire local physical work environment for remote collaboration in a large-scale workspace. By integrating the key-frames captured with an external depth sensor into a single 3D point-cloud data set, our system can reconstruct the entire local physical workspace in the VR world. In this case, the remote helper can observe the local scene independently of the local user's current head and camera position, and provide gesture guidance even before the local user looks at the target object. We conducted a pilot study to evaluate the usability of the system by comparing it with our previous oriented-view system, which only shared the current camera view together with real-time head orientation data. Our results indicate that this entire scene capturing and sharing system can significantly increase the remote helper's spatial awareness of the local work environment, especially in a large-scale workspace, and it gained an overwhelming user preference (80%) over the previous system.
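    The core integration step described above amounts to transforming each depth key-frame into a common world frame and concatenating the points. A minimal sketch of that idea, assuming each key-frame provides an (N, 3) point array and a known 4x4 camera-to-world pose (names and shapes are illustrative, not the system's API):

    # Hypothetical sketch: merge depth key-frames into one world-space point cloud.
    # Assumes each key-frame provides an (N, 3) point array plus a 4x4 pose matrix.
    import numpy as np

    def merge_keyframes(keyframes):
        """keyframes: list of (points_Nx3, pose_4x4); returns merged (M, 3) array."""
        merged = []
        for points, pose in keyframes:
            homo = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
            merged.append((homo @ pose.T)[:, :3])                   # camera -> world
        return np.vstack(merged)

    # Two toy key-frames: identity pose and a 1 m translation along x.
    kf1 = (np.random.rand(100, 3), np.eye(4))
    pose2 = np.eye(4); pose2[0, 3] = 1.0
    kf2 = (np.random.rand(100, 3), pose2)
    print(merge_keyframes([kf1, kf2]).shape)   # (200, 3)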
  • 6DoF input for hololens using vive controller
    Bai, H., Gao, L., & Billinghurst, M.

    Bai, H., Gao, L., & Billinghurst, M. (2017, November). 6DoF input for hololens using vive controller. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 4). ACM.

    @inproceedings{bai20176dof,
    title={6DoF input for hololens using vive controller},
    author={Bai, Huidong and Gao, Lei and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={4},
    year={2017},
    organization={ACM}
    }
    In this research we present a calibration method that enables accurate 6 degree of freedom (DoF) interaction for the Microsoft HoloLens using HTC Vive controllers. We calibrate the HoloLens's front color camera with the Vive lighthouse sensors by automatically tracking a reference image and a Vive tracker at startup. The Vive controllers' position and pose data are then transmitted to the HoloLens in real time via Bluetooth, providing a more accurate and efficient input solution for manipulating augmented content than the default gesture or head-gaze interfaces.
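    The calibration described above ultimately yields a transform that re-expresses Vive poses in the HoloLens coordinate frame. A minimal sketch of that mapping, assuming both systems report a 4x4 pose of the same physical reference at the same instant (variable names and values are illustrative, not the system's API):

    # Hypothetical sketch: align the Vive tracking frame with the HoloLens frame
    # using a shared reference observed by both systems at the same moment.
    import numpy as np

    def vive_to_hololens(ref_pose_hololens, ref_pose_vive):
        """4x4 transform mapping Vive-frame poses into the HoloLens frame."""
        return ref_pose_hololens @ np.linalg.inv(ref_pose_vive)

    def convert_controller_pose(controller_pose_vive, T_vive_to_hl):
        """Re-express a Vive controller pose in HoloLens coordinates."""
        return T_vive_to_hl @ controller_pose_vive

    # Toy example: the reference sits 0.5 m along z in the Vive frame,
    # at the origin of the HoloLens frame.
    ref_vive = np.eye(4); ref_vive[2, 3] = 0.5
    T = vive_to_hololens(np.eye(4), ref_vive)
    print(convert_controller_pose(ref_vive, T))   # identity up to rounding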
  • Exploring enhancements for remote mixed reality collaboration
    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M. (2017, November). Exploring enhancements for remote mixed reality collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 16). ACM.

    @inproceedings{piumsomboon2017exploring,
    title={Exploring enhancements for remote mixed reality collaboration},
    author={Piumsomboon, Thammathip and Day, Arindam and Ens, Barrett and Lee, Youngho and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={16},
    year={2017},
    organization={ACM}
    }
    In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between Augmented Reality (AR) and Augmented Virtuality (AV) users. Awareness cues and an AV-Snap-to-AR interface were proposed for enhancing communication. Collaborative natural interaction and AV-User-Body-Scaling were implemented for enhancing interaction. We conducted an exploratory study examining the awareness cues and collaborative gaze, and the results showed the benefits of the proposed techniques for enhancing communication and interaction.
  • AR social continuum: representing social contacts
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2017, November). AR social continuum: representing social contacts. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 6). ACM.

    @inproceedings{nassani2017ar,
    title={AR social continuum: representing social contacts},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={6},
    year={2017},
    organization={ACM}
    }
    One of the key problems with representing social networks in Augmented Reality (AR) is how to differentiate between contacts. In this paper we explore how visual and spatial cues based on social relationships can be used to represent contacts in social AR applications, making it easier to distinguish between them. Previous implementations of social AR have mostly focused on location-based visualisation, with no focus on the social relationship to the user. In contrast, we explore how to visualise social relationships in mobile AR environments using proximity and visual fidelity filters. We ran a focus group to explore different options for representing social contacts in a mobile AR application. We also conducted a user study to test a head-worn AR prototype using proximity and visual fidelity filters. We found that filtering social contacts on wearable AR is preferred and useful. We discuss the results of the focus group and the user study, and provide insights into directions for future work.
  • 2016
  • Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration
    Kunal Gupta, Gun A. Lee and Mark Billinghurst

    Kunal Gupta, Gun A. Lee and Mark Billinghurst. 2016. Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration. IEEE Transactions on Visualization and Computer Graphics Vol.22, No.11, pp.2413-2422. https://doi.org/10.1109/TVCG.2016.2593778

    @ARTICLE{7523400,
    author={K. Gupta and G. A. Lee and M. Billinghurst},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    title={Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration},
    year={2016},
    volume={22},
    number={11},
    pages={2413-2422},
    keywords={cameras;gaze tracking;helmet mounted displays;eye-tracking camera;gaze tracking;head-mounted camera;head-mounted display;remote collaboration;task space remote collaboration;virtual gaze information;virtual pointer;wearable interface;Cameras;Collaboration;Computers;Gaze tracking;Head;Prototypes;Teleconferencing;Computer conferencing;Computer-supported collaborative work;teleconferencing;videoconferencing},
    doi={10.1109/TVCG.2016.2593778},
    ISSN={1077-2626},
    month={Nov},}
    We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated with a formal user study. Comparing four conditions, (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues), we observed that task completion performance was best in the BOTH condition, with a significant difference from the POINTER and EYE-TRACKER conditions individually. The use of eye-tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be improved in further work.
  • A Remote Collaboration System with Empathy Glasses

    Y. Lee, K. Masai, K. Kunze, M. Sugimoto and M. Billinghurst. 2016. A Remote Collaboration System with Empathy Glasses. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW), Merida, pp. 342-343. http://doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112

    @INPROCEEDINGS{7836533,
    author = {Y. Lee and K. Masai and K. Kunze and M. Sugimoto and M. Billinghurst},
    booktitle = {2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW)},
    title = {A Remote Collaboration System with Empathy Glasses},
    year = {2017},
    volume = {00},
    number = {},
    pages = {342-343},
    keywords={Collaboration;Glass;Heart rate;Biomedical monitoring;Cameras;Hardware;Computers},
    doi = {10.1109/ISMAR-Adjunct.2016.0112},
    url = {doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112},
    ISSN = {},
    month={Sept.}
    }
    In this paper, we describe a demonstration of a remote collaboration system using Empathy Glasses. Using our system, a local worker can share a view of their environment with a remote helper, as well as their gaze, facial expressions, and physiological signals. The remote helper can send back visual cues via a see-through head-mounted display to help the local worker perform better on a real-world task. The system also provides some indication of the remote user's facial expression using face-tracking technology.
  • Empathy Glasses
    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst

    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst. 2016. Empathy Glasses. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). ACM, New York, NY, USA, 1257-1263. https://doi.org/10.1145/2851581.2892370

    @inproceedings{Masai:2016:EG:2851581.2892370,
    author = {Masai, Katsutoshi and Kunze, Kai and Sugimoto, Maki and Billinghurst, Mark},
    title = {Empathy Glasses},
    booktitle = {Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
    series = {CHI EA '16},
    year = {2016},
    isbn = {978-1-4503-4082-3},
    location = {San Jose, California, USA},
    pages = {1257--1263},
    numpages = {7},
    url = {http://doi.acm.org/10.1145/2851581.2892370},
    doi = {10.1145/2851581.2892370},
    acmid = {2892370},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {emotional interface, facial expression, remote collaboration, wearables},
    }
    In this paper, we describe Empathy Glasses, a head worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies together: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a head worn camera, and (4) a see-through head mounted display, with a focus on remote collaboration. Using the system, a local user can send their information and a view of their environment to a remote helper who can send back visual cues on the local user's see-through display to help them perform a real world task. A pilot user study was conducted to explore how effective the Empathy Glasses were at supporting remote collaboration. We describe the implications that can be drawn from this user study.
  • A comparative study of simulated augmented reality displays for vehicle navigation
    Jose, R., Lee, G. A., & Billinghurst, M.

    Jose, R., Lee, G. A., & Billinghurst, M. (2016, November). A comparative study of simulated augmented reality displays for vehicle navigation. In Proceedings of the 28th Australian conference on computer-human interaction (pp. 40-48). ACM.

    @inproceedings{jose2016comparative,
    title={A comparative study of simulated augmented reality displays for vehicle navigation},
    author={Jose, Richie and Lee, Gun A and Billinghurst, Mark},
    booktitle={Proceedings of the 28th Australian conference on computer-human interaction},
    pages={40--48},
    year={2016},
    organization={ACM}
    }
    In this paper we report on a user study in a simulated environment that compares three types of Augmented Reality (AR) displays for assisting with car navigation: a Heads-Up Display (HUD), a Head-Mounted Display (HMD), and a Heads-Down Display (HDD). The virtual cues shown on each of the interfaces were the same, but there was a significant difference in driver behaviour and preference between interfaces. Overall, users performed better with and preferred the HUD over the HDD, and the HMD was ranked lowest. These results have implications for people wanting to use AR cues for car navigation.
  • A Systematic Review of Usability Studies in Augmented Reality between 2005 and 2014
    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E.

    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E. (2016, September). A systematic review of usability studies in augmented reality between 2005 and 2014. In 2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct) (pp. 49-50). IEEE.

    @inproceedings{dey2016systematic,
    title={A systematic review of usability studies in augmented reality between 2005 and 2014},
    author={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W and Swan II, J Edward},
    booktitle={2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct)},
    pages={49--50},
    year={2016},
    organization={IEEE}
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review most AR papers published between 2005 and 2014 that include user studies. A total of 291 papers have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We also identify areas where there have been few user studies, and opportunities for future research. This poster describes the methodology of the review and the classifications of AR research that have emerged.
  • Augmented Reality Annotation for Social Video Sharing

    Nassani, A., Kim, H., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2016, November). Augmented reality annotation for social video sharing. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 9). ACM.

    @inproceedings{nassani2016augmented,
    title={Augmented reality annotation for social video sharing},
    author={Nassani, Alaeddin and Kim, Hyungon and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={9},
    year={2016},
    organization={ACM}
    }
    This paper explores different visual interfaces for sharing comments on social live video streaming platforms. So far, comments are displayed separately from the video, making it hard to relate them to events in the video. In this work we investigate an Augmented Reality (AR) interface that displays comments directly on the streamed live video. The described prototype allows remote spectators to view the streamed live video with different interfaces for displaying the comments. We conducted a user study comparing different ways of visualising comments and found that users prefer having comments in the AR view rather than in a separate list. We discuss the implications of this research and directions for future work.
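    As a rough illustration of anchoring comments to the video rather than listing them separately, the sketch below (not from the paper; names and values are hypothetical) tags each comment with a normalized frame position and a timestamp, then selects the comments to overlay at the current playback time.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AnchoredComment:
        text: str
        t_posted: float  # seconds into the stream when the comment was made
        u: float         # normalized horizontal anchor in the frame (0..1)
        v: float         # normalized vertical anchor in the frame (0..1)

    def visible_comments(comments: List[AnchoredComment], t_now: float,
                         lifetime: float = 8.0) -> List[AnchoredComment]:
        # Comments are overlaid on the video only for `lifetime` seconds after posting.
        return [c for c in comments if 0.0 <= t_now - c.t_posted <= lifetime]

    # Hypothetical feed: only the recent comment is drawn in the AR view.
    feed = [AnchoredComment("Nice view!", 12.0, 0.3, 0.4),
            AnchoredComment("What is that building?", 95.0, 0.7, 0.5)]
    for c in visible_comments(feed, t_now=100.0):
        print(f"draw '{c.text}' at ({c.u:.2f}, {c.v:.2f})")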
  • Digitally Augmenting Sports: An Opportunity for Exploring and Understanding Novel Balancing Techniques
    Altimira, D., Mueller, F. F., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C.

    Altimira, D., Mueller, F. F., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C. (2016, May). Digitally augmenting sports: An opportunity for exploring and understanding novel balancing techniques. In Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 1681-1691). ACM.

    @inproceedings{altimira2016digitally,
    title={Digitally augmenting sports: An opportunity for exploring and understanding novel balancing techniques},
    author={Altimira, David and Mueller, Florian Floyd and Clarke, Jenny and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph},
    booktitle={Proceedings of the 2016 CHI conference on human factors in computing systems},
    pages={1681--1691},
    year={2016},
    organization={ACM}
    }
    Using game balancing techniques can provide the right level of challenge and hence enhance player engagement for sport players with different skill levels. Digital technology can support and enhance balancing techniques in sports, for example, by adjusting players’ level of intensity based on their heart rate. However, there is limited knowledge on how to design such balancing and its impact on the user experience. To address this we created two novel balancing techniques enabled by digitally augmenting a table tennis table. We adjusted the more skilled player’s performance by inducing two different styles of play and studied the effects on game balancing and player engagement. We showed that by altering the more skilled player’s performance we can balance the game through: (i) encouraging game mistakes, and (ii) changing the style of play to one that is easier for the opponent to counteract. We outline the advantages and disadvantages of each approach, extending our understanding of game balancing design. We also show that digitally augmenting sports offers opportunities for novel balancing techniques while facilitating engaging experiences, guiding those interested in HCI and sports.
  • An oriented point-cloud view for MR remote collaboration
    Gao, L., Bai, H., Lee, G., & Billinghurst, M.

    Gao, L., Bai, H., Lee, G., & Billinghurst, M. (2016, November). An oriented point-cloud view for MR remote collaboration. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 8). ACM.

    @inproceedings{gao2016oriented,
    title={An oriented point-cloud view for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={8},
    year={2016},
    organization={ACM}
    }
    We present a Mixed Reality (MR) system for remote collaboration using Virtual Reality (VR) headsets with external depth cameras attached. By wirelessly sharing 3D point-cloud data of a local worker's workspace with a remote helper, and sharing the remote helper's hand gestures back to the local worker, the remote helper is able to assist the worker in performing manual tasks. Displaying the point-cloud video in a conventional way, such as a static front view in VR headsets, does not give helpers a sufficient understanding of the spatial relationships between their hands and the remote surroundings. In contrast, we propose an MR system that shares with the remote helper not only the 3D captured environment data but also real-time orientation information about the worker's viewpoint. We conducted a pilot study to evaluate the usability of the system, and we found that the extra synchronized orientation data can make collaborators feel more connected spatially and mentally.
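    The abstract describes streaming the worker's real-time viewpoint orientation alongside the point cloud. A minimal sketch of how that orientation might be applied on the helper's side is given below; it is not the paper's implementation, and the quaternion convention and function names are assumptions.

    import numpy as np

    def quat_to_matrix(q):
        # Unit quaternion (w, x, y, z) -> 3x3 rotation matrix.
        w, x, y, z = q
        return np.array([
            [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
        ])

    def orient_point_cloud(points, head_quat):
        # Rotate the shared point cloud (N x 3, world coordinates) by the inverse of
        # the worker's head orientation, so the helper views the scene from the
        # worker's current viewpoint.
        R = quat_to_matrix(head_quat)
        return points @ R  # row-vector form of applying R transposed to each point

    # Hypothetical example: a 3-point cloud and a 90-degree head turn about the up axis.
    cloud = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    yaw_90 = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])  # (w, x, y, z)
    print(orient_point_cloud(cloud, yaw_90))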
  • 2015
  • If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking
    Wen, J., Helton, W. S., & Billinghurst, M.

    Wen, J., Helton, W. S., & Billinghurst, M. (2015, March). If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 3). ACM.

    @inproceedings{wen2015if,
    title={If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking},
    author={Wen, James and Helton, William S and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={3},
    year={2015},
    organization={ACM}
    }
    Augmented Reality (AR) on smartphones can be used to overlay virtual tags on the real world to show points of interest that people may want to visit. However, field tests have failed to validate the belief that AR-based tools would outperform map-based tools for such pedestrian navigation tasks. Assuming this is due to inaccuracies in the consumer GPS tracking used in handheld AR, we created a simulated environment that provided perfect tracking for AR and conducted experiments based on real-world navigation studies. We measured time-on-task performance for guided traversals on both desktop and head-mounted display systems and found that accurate tracking did validate the superior performance of AR-based navigation tools. We also measured performance for unguided recall traversals of previously traversed paths in order to investigate how navigation tools affect route memory.
  • Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization.
    Kim, H., Lee, G., & Billinghurst, M.

    Kim, H., Lee, G., & Billinghurst, M. (2015, March). Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 2). ACM.

    @inproceedings{kim2015adaptive,
    title={Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization},
    author={Kim, Hyungon and Lee, Gun and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={2},
    year={2015},
    organization={ACM}
    }
    Stereoscopic visualization creates the illusion of depth through disparity between the images shown to the left and right eyes of the viewer. While stereoscopic visualization is widely adopted in immersive visualization systems to improve user experience, it can also cause visual discomfort if the stereoscopic viewing parameters are not adjusted appropriately. These parameters are usually adjusted manually based on human factors and the empirical knowledge of the developer, or even the user. However, scenes with dynamic changes in scale and configuration can require continuous adjustment of these parameters during viewing. In this paper, we propose a method to adjust the interpupillary distance adaptively and automatically according to the configuration of the 3D scene, so that the visualized scene can maintain a sufficient stereo effect while reducing visual discomfort.
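    The abstract does not spell out the adjustment rule, so the following is only an illustrative sketch of the general idea under a simple off-axis (parallel asymmetric frustum) stereo model: shrink the virtual eye separation whenever the nearest scene point would otherwise produce too much crossed parallax. Parameter names and the 20 mm comfort limit are assumptions, not values from the paper.

    def adaptive_ipd(default_ipd_m, zero_parallax_m, nearest_depth_m,
                     max_crossed_parallax_m=0.02):
        # Off-axis stereo: a point at depth d has screen parallax ipd * (1 - z0 / d),
        # which is negative (crossed) for points closer than the zero-parallax plane z0.
        # Shrink the virtual eye separation so the nearest point stays within the limit.
        if nearest_depth_m >= zero_parallax_m:
            return default_ipd_m  # nothing pops out in front of the screen plane
        crossed_per_unit_ipd = zero_parallax_m / nearest_depth_m - 1.0
        return min(default_ipd_m, max_crossed_parallax_m / crossed_per_unit_ipd)

    # Hypothetical example: 64 mm default IPD, zero-parallax plane at 2 m, and an
    # object that moves in to 0.4 m; the virtual IPD is reduced to limit discomfort.
    print(adaptive_ipd(0.064, 2.0, 0.4))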
  • Intelligent Augmented Reality Training for Motherboard Assembly
    Westerfield, G., Mitrovic, A., & Billinghurst, M.

    Westerfield, G., Mitrovic, A., & Billinghurst, M. (2015). Intelligent augmented reality training for motherboard assembly. International Journal of Artificial Intelligence in Education, 25(1), 157-172.

    @article{westerfield2015intelligent,
    title={Intelligent augmented reality training for motherboard assembly},
    author={Westerfield, Giles and Mitrovic, Antonija and Billinghurst, Mark},
    journal={International Journal of Artificial Intelligence in Education},
    volume={25},
    number={1},
    pages={157--172},
    year={2015},
    publisher={Springer}
    }
    We investigate the combination of Augmented Reality (AR) with Intelligent Tutoring Systems (ITS) to assist with training for manual assembly tasks. Our approach combines AR graphics with adaptive guidance from the ITS to provide a more effective learning experience. We have developed a modular software framework for intelligent AR training systems, and a prototype based on this framework that teaches novice users how to assemble a computer motherboard. An evaluation found that our intelligent AR system improved test scores by 25 % and that task performance was 30 % faster compared to the same AR training system without intelligent support. We conclude that using an intelligent AR tutor can significantly improve learning compared to more traditional AR training.
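    The modular framework itself is not described in enough detail here to reproduce, but the spirit of combining AR tracking output with adaptive tutoring can be sketched as a small rule: compare the detected placement against the expected step and escalate the hint after repeated errors. Everything below (class and field names, hint wording) is hypothetical, not the authors' system.

    from dataclasses import dataclass

    @dataclass
    class AssemblyStep:
        component: str  # e.g. a RAM module
        socket: str     # where it belongs on the motherboard
        hint: str       # detailed guidance shown as an AR overlay

    def tutor_feedback(step: AssemblyStep, detected_socket: str, attempts: int) -> str:
        # Compare the tracked placement with the current step; escalate the hint
        # after repeated errors, in the spirit of adaptive ITS guidance.
        if detected_socket == step.socket:
            return f"Correct: {step.component} seated in {step.socket}."
        if attempts < 2:
            return f"Not quite. Check where the {step.component} should go."
        return f"Hint: {step.hint}"

    step = AssemblyStep("RAM module", "DIMM slot 1",
                        "Align the notch with the key in DIMM slot 1 and press down evenly.")
    print(tutor_feedback(step, detected_socket="PCIe slot", attempts=2))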
  • User Defined Gestures for Augmented Virtual Mirrors: A Guessability Study
    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M.

    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M. (2015, April). User defined gestures for augmented virtual mirrors: a guessability study. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 959-964). ACM.

    @inproceedings{lee2015user,
    title={User defined gestures for augmented virtual mirrors: a guessability study},
    author={Lee, Gun A and Wong, Jonathan and Park, Hye Sun and Choi, Jin Sung and Park, Chang Joon and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={959--964},
    year={2015},
    organization={ACM}
    }
    Public information displays are evolving from passive screens into more interactive and smarter ubiquitous computing platforms. In this research we investigate applying gesture interaction and Augmented Reality (AR) technologies to make public information displays more intuitive and easier to use. We focus especially on designing intuitive gesture-based interaction methods to use in combination with an augmented virtual mirror interface. As an initial step, we conducted a user study to identify the gestures that users feel are natural for performing common tasks when interacting with augmented virtual mirror displays. We report initial findings from the study, discuss design guidelines, and suggest future research directions.
  • Automatically Freezing Live Video for Annotation during Remote Collaboration
    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M.

    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M. (2015, April). Automatically freezing live video for annotation during remote collaboration. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1669-1674). ACM.

    @inproceedings{kim2015automatically,
    title={Automatically freezing live video for annotation during remote collaboration},
    author={Kim, Seungwon and Lee, Gun A and Ha, Sangtae and Sakata, Nobuchika and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={1669--1674},
    year={2015},
    organization={ACM}
    }
    Drawing annotations on shared live video has been investigated as a tool for remote collaboration. However, if a local user changes the viewpoint of a shared live video while a remote user is drawing an annotation, the annotation is projected and drawn at the wrong place. Prior work suggested manually freezing the video while annotating to solve this issue, but that requires additional user input. We introduce a solution that automatically freezes the video, and present the results of a user study comparing it with manual-freeze and no-freeze conditions. Auto-freeze was most preferred by both remote and local participants, who felt it best solved the issue of annotations appearing in the wrong place. With auto-freeze, remote users were able to draw annotations more quickly, while local users were able to understand the annotations more clearly.
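    The auto-freeze behaviour described above can be summarised as a small piece of state: lock the shared view to the frame on which the remote helper starts drawing, and release it when the annotation ends. The sketch below is an illustration of that idea, not the authors' implementation; frames are stood in for by strings.

    class AutoFreezeStream:
        # While the remote helper is drawing, the shared view stays locked to the
        # frame they started drawing on; otherwise the live frame passes through.

        def __init__(self):
            self.frozen_frame = None

        def on_annotation_start(self, current_frame):
            self.frozen_frame = current_frame  # freeze on the first stroke

        def on_annotation_end(self):
            self.frozen_frame = None           # return to the live view

        def frame_to_display(self, live_frame):
            return self.frozen_frame if self.frozen_frame is not None else live_frame

    stream = AutoFreezeStream()
    print(stream.frame_to_display("frame_001"))  # live
    stream.on_annotation_start("frame_002")
    print(stream.frame_to_display("frame_003"))  # still frame_002 while drawing
    stream.on_annotation_end()
    print(stream.frame_to_display("frame_004"))  # live again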
  • haptic HONGI: Reflections on collaboration in the transdisciplinary creation of an AR artwork in Creating Digitally
    Gunn, M., Campbell, A., Billinghurst, M., Sasikumar, P., Lawn, W., Muthukumarana, S.

  • Can the transdisciplinary co-creation of Extended Reality experiences, haptic HONGI and Common Sense help decolonise the GLAM sector?

  • First Contact-Take 2 Using XR to Overcome Intercultural Discomfort (racism)
    Gunn, M., Sasikumar, P., & Bai, H.

  • Come to the Table! Haere mai ki te tēpu!.
    Gunn, M., Bai, H., & Sasikumar, P.

  • Jitsi360: Using 360 images for live tours.
    Nassani, A., Bai, H., & Billinghurst, M.

  • Designing, Prototyping and Testing of 360-degree Spatial Audio Conferencing for Virtual Tours.
    Nassani, A., Barde, A., Bai, H., Nanayakkara, S., & Billinghurst, M.

  • Implementation of Attention-Based Spatial Audio for 360° Environments.
    Nassani, A., Barde, A., Bai, H., Nanayakkara, S., & Billinghurst, M.