Publications

  • 2019
  • Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects
    E. Ibili, M. Billinghurst

    Ibili, E., & Billinghurst, M. (2019). Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects. International Journal of Assessment Tools in Education, 6(3), 378-395.

    @article{ibili2019assessing,
    title={Assessing the Relationship between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={International Journal of Assessment Tools in Education},
    volume={6},
    number={3},
    pages={378--395},
    year={2019}
    }
    In this study, the relationship between the usability of a mobile Augmented Reality (AR) tutorial system and cognitive load was examined. In this context, the relationships between the perceived usefulness, perceived ease of use, and perceived natural interaction factors and the intrinsic, extraneous, and germane cognitive loads were investigated. In addition, the effect of gender on these relationships was investigated. The results show that there was a strong relationship between perceived ease of use and extraneous load in males, and a strong relationship between perceived usefulness and intrinsic load in females. Both perceived usefulness and perceived ease of use had a strong relationship with germane cognitive load. Moreover, perceived natural interaction had a strong relationship with perceived usefulness in females and with perceived ease of use in males. This research provides significant clues to help AR software developers and researchers reduce or control cognitive load in the development of AR-based instructional software.
  • Sharing hand gesture and sketch cues in remote collaboration
    W. Huang, S. Kim, M. Billinghurst, L. Alem

    Huang, W., Kim, S., Billinghurst, M., & Alem, L. (2019). Sharing hand gesture and sketch cues in remote collaboration. Journal of Visual Communication and Image Representation, 58, 428-438.

    @article{huang2019sharing,
    title={Sharing hand gesture and sketch cues in remote collaboration},
    author={Huang, Weidong and Kim, Seungwon and Billinghurst, Mark and Alem, Leila},
    journal={Journal of Visual Communication and Image Representation},
    volume={58},
    pages={428--438},
    year={2019},
    publisher={Elsevier}
    }
    Many systems have been developed to support remote guidance, where a local worker manipulates objects under the guidance of a remote expert helper. These systems typically use speech and visual cues between the local worker and the remote helper, where the visual cues can be pointers, hand gestures, or sketches. However, the effects of combining visual cues in remote collaboration have not been fully explored. We conducted a user study comparing remote collaboration with an interface that combined hand gestures and sketching (the HandsInTouch interface) to one that only used hand gestures, across two tasks: Lego assembly and repairing a laptop. In the user study, we found that (1) adding sketch cues improved task completion time, but only for the repair task, which involved complex object manipulation, and (2) using gestures and sketching together created a higher task load for the user.
  • 2.5DHANDS: a gesture-based MR remote collaborative platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M., Chen, Y., Lv, H., & Ji, H.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Sun, M., ... & Ji, H. (2019). 2.5DHANDS: a gesture-based MR remote collaborative platform. The International Journal of Advanced Manufacturing Technology, 102(5-8), 1339-1353.

    @article{wang20192,
    title={2.5DHANDS: a gesture-based MR remote collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Sun, Mengmeng and Chen, Yongxing and Lv, Hao and Ji, Hongyu},
    journal={The International Journal of Advanced Manufacturing Technology},
    volume={102},
    number={5-8},
    pages={1339--1353},
    year={2019},
    publisher={Springer}
    }
    Current remote collaborative systems in manufacturing are mainly based on video-conferencing technology. Their primary aim is to transmit manufacturing process knowledge between remote experts and local workers. However, they do not provide the experts with the same hands-on experience as working synergistically on site in person. Mixed reality (MR) and increasing network performance have the capacity to enhance the experience and communication between collaborators in geographically distributed locations. In this paper, therefore, we propose a new gesture-based remote collaborative platform using MR technology that enables a remote expert to collaborate with local workers on physical tasks, and we concentrate on collaborative remote assembly as an illustrative use case. The key advantage over other remote collaborative MR interfaces is that it projects the remote expert's gestures into the real worksite to improve performance, co-presence awareness, and the user collaboration experience. We aim to study the effects of sharing the remote expert's gestures in remote collaboration using a projector-based MR system in manufacturing. Furthermore, we show the capabilities of our framework on a prototype consisting of a VR HMD, a Leap Motion, and a projector. The prototype system was evaluated in a pilot study against the POINTER interface (adding AR annotations on the task-space view with a mouse), which is currently the most popular method used to augment remote collaboration. The assessment covered performance, user satisfaction, and user-perceived collaboration quality in terms of interaction and cooperation. Our results demonstrate a clear difference between the POINTER and 2.5DHANDS interfaces in performance time. Additionally, the 2.5DHANDS interface scored statistically significantly higher than the POINTER interface in terms of awareness of the user's attention, manipulation, self-confidence, and co-presence.
  • The effects of sharing awareness cues in collaborative mixed reality
    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2019). The effects of sharing awareness cues in collaborative mixed reality. Frontiers in Robotics and AI, 6, 5.

    @article{piumsomboon2019effects,
    title={The effects of sharing awareness cues in collaborative mixed reality},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    journal={Frontiers in Robotics and AI},
    volume={6},
    pages={5},
    year={2019},
    publisher={Frontiers}
    }
    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
  • Revisiting collaboration through mixed reality: The evolution of groupware
    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Lanir, J., Tang, A., Bateman, S., Lee, G., Piumsomboon, T., & Billinghurst, M. (2019). Revisiting collaboration through mixed reality: The evolution of groupware. International Journal of Human-Computer Studies.

    @article{ens2019revisiting,
    title={Revisiting collaboration through mixed reality: The evolution of groupware},
    author={Ens, Barrett and Lanir, Joel and Tang, Anthony and Bateman, Scott and Lee, Gun and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Collaborative Mixed Reality (MR) systems are at a critical point in time as they are soon to become more commonplace. However, MR technology has only recently matured to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus on creating the enabling technology. In parallel, but largely independently, the field of Computer Supported Cooperative Work (CSCW) has focused on the fundamental concerns that underlie human communication and collaboration over the past 30-plus years. Since MR research is now on the brink of moving into the real world, we reflect on three decades of collaborative MR research and try to reconcile it with existing theory from CSCW, to help position MR researchers to pursue fruitful directions for their work. To do this, we review the history of collaborative MR systems, investigating how the common taxonomies and frameworks in CSCW and MR research can be applied to existing work on collaborative MR systems, exploring where they have fallen behind, and looking for new ways to describe current trends. Through identifying emergent trends, we suggest future directions for MR, and also find where CSCW researchers can explore new theory that more fully represents the future of working, playing and being with others.
  • WARPING DEIXIS: Distorting Gestures to Enhance Collaboration
    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J.

    Sousa, M., dos Anjos, R. K., Mendes, D., Billinghurst, M., & Jorge, J. (2019, April). WARPING DEIXIS: Distorting Gestures to Enhance Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 608). ACM.

    @inproceedings{sousa2019warping,
    title={WARPING DEIXIS: Distorting Gestures to Enhance Collaboration},
    author={Sousa, Maur{\'\i}cio and dos Anjos, Rafael Kufner and Mendes, Daniel and Billinghurst, Mark and Jorge, Joaquim},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={608},
    year={2019},
    organization={ACM}
    }
    When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer's perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
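    The eye-to-referent versus arm-to-fingertip mismatch described above can be illustrated with a small geometric sketch. This is not the paper's implementation; the coordinates, the `ray_plane_hit` helper, and the shoulder-based observer model are hypothetical choices for illustration only:

```python
import numpy as np

def ray_plane_hit(origin, direction, plane_z=0.0):
    """Point where a ray from `origin` along `direction` crosses the plane z = plane_z."""
    t = (plane_z - origin[2]) / direction[2]
    return origin + t * direction

# Hypothetical geometry: the pointer places the fingertip on the eye-referent
# line, but an observer extrapolates the shoulder->fingertip vector instead.
eye = np.array([0.0, 1.7, 2.0])
shoulder = np.array([0.2, 1.4, 2.0])
referent = np.array([1.0, 1.0, 0.0])        # intended target on a wall at z = 0
finger = eye + 0.3 * (referent - eye)       # fingertip placed on the eye-target line
perceived = ray_plane_hit(shoulder, finger - shoulder)
error = np.linalg.norm(perceived - referent)  # nonzero: the observer misses the target
```

Warping the pointer's virtual arm so that the extrapolated vector passes through the referent would drive this `error` toward zero, which is the intuition behind the approach.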
  • Getting your game on: Using virtual reality to improve real table tennis skills
    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T.

    Michalski, S. C., Szpak, A., Saredakis, D., Ross, T. J., Billinghurst, M., & Loetscher, T. (2019). Getting your game on: Using virtual reality to improve real table tennis skills. PloS one, 14(9).

    @article{michalski2019getting,
    title={Getting your game on: Using virtual reality to improve real table tennis skills},
    author={Michalski, Stefan Carlo and Szpak, Ancret and Saredakis, Dimitrios and Ross, Tyler James and Billinghurst, Mark and Loetscher, Tobias},
    journal={PloS one},
    volume={14},
    number={9},
    year={2019},
    publisher={Public Library of Science}
    }
    Background: A key assumption of VR training is that the learned skills and experiences transfer to the real world. Yet, in certain application areas, such as VR sports training, the research testing this assumption is sparse. Design: Real-world table tennis performance was assessed using a mixed-model analysis of variance. The analysis comprised a between-subjects (VR training group vs. control group) and a within-subjects (pre- and post-training) factor. Method: Fifty-seven participants (23 females) were assigned to either a VR training group (n = 29) or a no-training control group (n = 28). During VR training, participants were immersed in competitive table tennis matches against an artificial intelligence opponent. An expert table tennis coach evaluated participants on real-world table tennis playing before and after the training phase. Blinded to participants' group assignment, the expert assessed participants' backhand, forehand, and serving on quantitative aspects (e.g. count of rallies without errors) and quality of skill aspects (e.g. technique and consistency). Results: VR training significantly improved participants' real-world table tennis performance compared to the no-training control group in both quantitative (p < .001, Cohen's d = 1.08) and quality of skill assessments (p < .001, Cohen's d = 1.10). Conclusions: This study adds to a sparse yet expanding literature, demonstrating real-world skill transfer from Virtual Reality in an athletic task.
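    The effect sizes reported above (Cohen's d ≈ 1.1, a large effect) follow the standard pooled-standard-deviation formula for two independent groups. The sketch below is illustrative only; the function name and sample data are hypothetical, not taken from the study:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance, group 1
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)  # sample variance, group 2
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Toy data: d = (4 - 3) / 2 = 0.5
d = cohens_d([2, 4, 6], [1, 3, 5])
```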
  • On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Irlitti, A., Ens, B., Thomas, B. H., & Billinghurst, M. (2019, April). On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 228). ACM.

    @inproceedings{piumsomboon2019shoulder,
    title={On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction},
    author={Piumsomboon, Thammathip and Lee, Gun A and Irlitti, Andrew and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={228},
    year={2019},
    organization={ACM}
    }
    We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n = 24) comprising four parts, comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature's virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
  • Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M.

    Kim, S., Lee, G., Huang, W., Kim, H., Woo, W., & Billinghurst, M. (2019, April). Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 173). ACM.

    @inproceedings{kim2019evaluating,
    title={Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration},
    author={Kim, Seungwon and Lee, Gun and Huang, Weidong and Kim, Hayun and Woo, Woontack and Billinghurst, Mark},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={173},
    year={2019},
    organization={ACM}
    }
    Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
  • Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
    Teo, T., Lawrence, L., Lee, G. A., Billinghurst, M., & Adcock, M.

    Teo, T., Lawrence, L., Lee, G. A., Billinghurst, M., & Adcock, M. (2019, April). Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (p. 201). ACM.

    @inproceedings{teo2019mixed,
    title={Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction},
    author={Teo, Theophilus and Lawrence, Louise and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    booktitle={Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
    pages={201},
    year={2019},
    organization={ACM}
    }
    Remote collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people in different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, preserving the benefits of both systems while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found that participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.
  • Using Augmented Reality with Speech Input for Non-Native Children’s Language Learning
    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M.

    Dalim, C. S. C., Sunar, M. S., Dey, A., & Billinghurst, M. (2019). Using Augmented Reality with Speech Input for Non-Native Children's Language Learning. International Journal of Human-Computer Studies.

    @article{dalim2019using,
    title={Using Augmented Reality with Speech Input for Non-Native Children's Language Learning},
    author={Dalim, Che Samihah Che and Sunar, Mohd Shahrizal and Dey, Arindam and Billinghurst, Mark},
    journal={International Journal of Human-Computer Studies},
    year={2019},
    publisher={Elsevier}
    }
    Augmented Reality (AR) offers an enhanced learning environment which could potentially influence children's experience and knowledge gain during the language learning process. Teaching English or other foreign languages to children with different native languages can be difficult and requires an effective strategy to avoid boredom and detachment from the learning activities. With the growing number of AR education applications and the increasing pervasiveness of speech recognition, we are keen to understand how these technologies benefit non-native young children in learning English. In this paper, we explore children's experience in terms of knowledge gain and enjoyment when learning through a combination of AR and speech recognition technologies. We developed a prototype AR interface called TeachAR, and ran two experiments to investigate how effective the combination of AR and speech recognition was for learning 1) English terms for colors and shapes, and 2) English words for spatial relationships. We found encouraging results for a novel teaching strategy using these two technologies: compared with a traditional strategy, it not only increased knowledge gain and enjoyment, but also enabled young children to finish certain tasks faster and more easily.
  • Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System
    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J.

    Kim, S., Billinghurst, M., Lee, G., Norman, M., Huang, W., & He, J. (2019, July). Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System. In 2019 23rd International Conference in Information Visualization–Part II (pp. 86-91). IEEE.

    @inproceedings{kim2019sharing,
    title={Sharing Emotion by Displaying a Partner Near the Gaze Point in a Telepresence System},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Gun and Norman, Mitchell and Huang, Weidong and He, Jian},
    booktitle={2019 23rd International Conference in Information Visualization--Part II},
    pages={86--91},
    year={2019},
    organization={IEEE}
    }
    In this paper, we explore the effect of showing a remote partner close to the user's gaze point in a teleconferencing system. We implemented a gaze-following function in a teleconferencing system and investigated whether this improves the user's feeling of emotional interdependence. We developed a prototype system that shows a remote partner close to the user's current gaze point and conducted a user study comparing it to a condition displaying the partner fixed in the corner of a screen. Our results showed that showing a partner close to the gaze point helped users feel a higher level of emotional interdependence. In addition, we compared the effect of our method between small and large displays, but there was no significant difference in the users' feeling of emotional interdependence, even though the large display was preferred.
  • Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration
    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M.

    Teo, T., Lee, G. A., Billinghurst, M., & Adcock, M. (2019, March). Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1187-1188). IEEE.

    @inproceedings{teo2019supporting,
    title={Supporting Visual Annotation Cues in a Live 360 Panorama-based Mixed Reality Remote Collaboration},
    author={Teo, Theophilus and Lee, Gun A and Billinghurst, Mark and Adcock, Matt},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1187--1188},
    year={2019},
    organization={IEEE}
    }
    We propose enhancing live 360 panorama-based Mixed Reality (MR) remote collaboration by supporting visual annotation cues. Prior work on live 360 panorama-based collaboration used MR visualization to overlay visual cues, such as view frames and virtual hands, yet these were not registered onto the shared physical workspace and hence had limited accuracy for pointing at or marking objects. Our prototype system uses the spatial mapping and tracking features of an Augmented Reality head-mounted display to show visual annotation cues accurately registered onto the physical environment. We describe the design and implementation details of our prototype system, and discuss how such features could help improve MR remote collaboration.
  • Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality
    Dey, A., Chatburn, A., & Billinghurst, M.

    Dey, A., Chatburn, A., & Billinghurst, M. (2019, March). Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 220-226). IEEE.

    @inproceedings{dey2019exploration,
    title={Exploration of an EEG-Based Cognitively Adaptive Training System in Virtual Reality},
    author={Dey, Arindam and Chatburn, Alex and Billinghurst, Mark},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={220--226},
    year={2019},
    organization={IEEE}
    }
    Virtual Reality (VR) is effective in various training scenarios across multiple domains, such as education, health, and defense. However, most of those applications are not adaptive to the real-time cognitive or subjectively experienced load placed on the trainee. In this paper, we explore a cognitively adaptive training system based on real-time measurement of task-related alpha activity in the brain. This measurement was made by a 32-channel mobile Electroencephalography (EEG) system, and was used to adapt the task difficulty to an ideal level which challenged our participants, and thus theoretically induced the best level of performance gains as a result of training. Our system required participants to select target objects in VR, and the complexity of the task adapted to the alpha activity in the brain. A total of 14 participants undertook our training and completed 20 levels of increasing complexity. Our study identified significant differences in brain activity in response to increasing levels of task complexity, but response time did not alter as a function of task difficulty. Collectively, we interpret this to indicate the brain's ability to compensate for higher task load without affecting behaviourally measured visuomotor performance.
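    A closed-loop adaptation of the kind described above is often implemented as a simple staircase on the physiological measure. The sketch below is a hypothetical minimal version, not the paper's algorithm; the function name, the target band thresholds, and the mapping from alpha power to difficulty are all illustrative assumptions:

```python
def adapt_difficulty(level, alpha_power, low=0.8, high=1.2, min_level=1, max_level=20):
    """Hypothetical one-step staircase: normalized task-related alpha power
    outside a target band [low, high] moves the difficulty level by one step."""
    if alpha_power > high:
        # High alpha suggests spare cognitive capacity -> make the task harder.
        level = min(level + 1, max_level)
    elif alpha_power < low:
        # Low alpha suggests overload -> make the task easier.
        level = max(level - 1, min_level)
    return level
```

In a real EEG pipeline the `alpha_power` input would come from band-power estimation over a sliding window of the recorded signal, smoothed to avoid oscillating difficulty.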
  • Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation
    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M.

    Barde, A., Lindeman, R. W., Lee, G., & Billinghurst, M. (2019, August). Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation. In Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY. Audio Engineering Society.

    @inproceedings{barde2019binaural,
    title={Binaural Spatialization over a Bone Conduction Headset: The Perception of Elevation},
    author={Barde, Amit and Lindeman, Robert W and Lee, Gun and Billinghurst, Mark},
    booktitle={Audio Engineering Society Conference: 2019 AES INTERNATIONAL CONFERENCE ON HEADPHONE TECHNOLOGY},
    year={2019},
    organization={Audio Engineering Society}
    }
    Binaural spatialization over a bone conduction headset in the vertical plane was investigated using inexpensive, commercially available hardware and software components. The aim of the study was to assess the acuity of binaurally spatialized presentations in the vertical plane; the level of externalization achievable was also explored. Results demonstrate good agreement with established perceptual traits for headphone-based auditory localization using non-individualized HRTFs, although localization accuracy appears to be significantly worse. A distinct pattern of compressed localization judgments was observed, with participants tending to localize the presented stimulus within an approximately 20° range on either side of the inter-aural plane. Localization error was approximately 21° in the vertical plane. Participants reported a good level of externalization. We were able to demonstrate that an acceptable level of spatial resolution and externalization is achievable using an inexpensive bone conduction headset and software components.
  • Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., Zhang, X., Du, J., & Chen, Y.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Wang, S., ... & Chen, Y. (2019, March). Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1219-1220). IEEE.

    @inproceedings{wang2019head,
    title={Head Pointer or Eye Gaze: Which Helps More in MR Remote Collaboration?},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Wang, Shuxia and Zhang, Xiaokun and Du, Jiaxiang and Chen, Yongxing},
    booktitle={2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1219--1220},
    year={2019},
    organization={IEEE}
    }
    This paper investigates how two different gaze visualizations, the head pointer (HP) and eye gaze (EG), affect table-size physical tasks in Mixed Reality (MR) remote collaboration. We developed a remote collaborative MR platform which supports sharing the remote expert's HP and EG. The prototype was evaluated in a user study comparing the two conditions, sharing HP versus sharing EG, with respect to their effectiveness in performance and quality of cooperation. There was a statistically significant difference between the two conditions in performance time, and HP proved a good proxy for EG in remote collaboration.
  • The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.
    Ibili, E., & Billinghurst, M.

    Ibili, E., & Billinghurst, M. (2019). The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors. Malaysian Online Journal of Educational Technology, 7(3), 39-56.

    @article{ibili2019relationship,
    title={The Relationship between Self-Esteem and Social Network Loneliness: A Study of Trainee School Counsellors.},
    author={Ibili, Emin and Billinghurst, Mark},
    journal={Malaysian Online Journal of Educational Technology},
    volume={7},
    number={3},
    pages={39--56},
    year={2019},
    publisher={ERIC}
    }
    In this study, the relationship was investigated between self-esteem and loneliness in social networks among students in a guidance and psychological counselling teaching department. The study was conducted during the 2017-2018 academic year with 312 trainee school counsellors from Turkey. For data collection, the Social Network Loneliness Scale and the Self-esteem Scale were employed, and a statistical analysis of the data was conducted. We found a negative relationship between self-esteem and loneliness as experienced in social networks, although neither differed according to sex, age, or class level. It was also found that those who use the Internet for communication purposes have high levels of both loneliness and self-esteem in social networks, while those who use it to read about or watch the news have high levels of loneliness. No relationship was found between self-esteem and social network loneliness levels among those who use the Internet for playing games. Regular sporting habits were found to have a positive effect on self-esteem, but no effect on the level of loneliness in social networks.
  • A comprehensive survey of AR/MR-based co-design in manufacturing
    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., Zhang, X.

    Wang, P., Zhang, S., Billinghurst, M., Bai, X., He, W., Wang, S., ... & Zhang, X. (2019). A comprehensive survey of AR/MR-based co-design in manufacturing. Engineering with Computers, 1-24.

    @article{wang2019comprehensive,
    title={A comprehensive survey of AR/MR-based co-design in manufacturing},
    author={Wang, Peng and Zhang, Shusheng and Billinghurst, Mark and Bai, Xiaoliang and He, Weiping and Wang, Shuxia and Sun, Mengmeng and Zhang, Xu},
    journal={Engineering with Computers},
    pages={1--24},
    year={2019},
    publisher={Springer}
    }
    For more than two decades, Augmented Reality (AR)/Mixed Reality (MR) has received increasing attention from researchers and practitioners in the manufacturing community, because it has applications in many fields, such as product design, training, maintenance, assembly, and other manufacturing operations. However, to the best of our knowledge, there has been no comprehensive review of AR-based co-design in manufacturing. This paper presents a comprehensive survey of existing research, projects, and technical characteristics between 1990 and 2017 in the domain of co-design based on AR technology. Among these papers, more than 90% were published between 2000 and 2017, and these recent relevant works are discussed at length. The paper provides a comprehensive academic roadmap and useful insight into the state of the art of AR-based co-design systems and developments in manufacturing for future researchers all over the world. This work will be useful to researchers who plan to utilize AR as a tool for design research.
  • Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system
    Ibili, E., Resnyansky, D., & Billinghurst, M.

    Ibili, E., Resnyansky, D., & Billinghurst, M. (2019). Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system. Education and Information Technologies, 1-23.

    @article{ibili2019applying,
    title={Applying the technology acceptance model to understand maths teachers’ perceptions towards an augmented reality tutoring system},
    author={Ibili, Emin and Resnyansky, Dmitry and Billinghurst, Mark},
    journal={Education and Information Technologies},
    pages={1--23},
    year={2019},
    publisher={Springer}
    }
    This paper examines mathematics teachers’ level of acceptance and intention to use the Augmented Reality Geometry Tutorial System (ARGTS), a mobile Augmented Reality (AR) application developed to enhance students’ 3D geometric thinking skills. ARGTS was shared with mathematics teachers, who were then surveyed using the Technology Acceptance Model (TAM) to understand their acceptance of the technology. We also examined the external variables of Anxiety, Social Norms and Satisfaction. The effect of the teacher’s gender, graduate status and number of years of teaching experience on the subscales of the TAM model was examined. We found that Perceived Ease of Use (PEU) had a direct effect on Perceived Usefulness (PU), in accordance with the TAM. Both variables together affect Satisfaction (SF); however, PEU had no direct effect on Attitude (AT). In addition, while Social Norms (SN) had a direct effect on PU and PEU, there was no direct effect on Behavioural Intention (BI). Anxiety (ANX) had a direct effect on PEU, but no effect on PU and SF. While there was a direct effect of SF on PEU, no direct effect was found on BI. We explain how the results of this study could help improve the understanding of AR acceptance by teachers and provide important guidelines for AR researchers, developers and practitioners.
  • An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills
    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M.

    İbili, E., Çat, M., Resnyansky, D., Şahin, S., & Billinghurst, M. (2019). An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills. International Journal of Mathematical Education in Science and Technology, 1-23.

    @article{ibili2019assessment,
    title={An assessment of geometry teaching supported with augmented reality teaching materials to enhance students’ 3D geometry thinking skills},
    author={{\.I}bili, Emin and {\c{C}}at, Mevl{\"u}t and Resnyansky, Dmitry and {\c{S}}ahin, Sami and Billinghurst, Mark},
    journal={International Journal of Mathematical Education in Science and Technology},
    pages={1--23},
    year={2019},
    publisher={Taylor \& Francis}
    }
    The aim of this research was to examine the effect of Augmented Reality (AR) supported geometry teaching on students’ 3D thinking skills. This research consisted of three steps: (i) developing a 3D thinking ability scale, (ii) design and development of an AR Geometry Tutorial System (ARGTS) and (iii) implementation and assessment of geometry teaching supported with ARGTS. The 3D thinking ability scale was developed and tested with experimental and control groups as a pre- and post-test evaluation, and ARGTS with accompanying AR teaching materials and environments was developed to enhance 3D thinking skills. A user study with these materials found that geometry teaching supported by ARGTS significantly increased the students’ 3D thinking skills. The increase in average scores for the Structuring 3D arrays of cubes and Calculation of the volume and the area of solids thinking skills was not statistically significant (p > 0.05). For the other 3D geometric thinking skill subfactors of the scale, a statistically significant difference was found in favour of the experimental group between pre-test and post-test scores (p < 0.05). The biggest difference was found in the ability to recognize and create 3D shapes (p < 0.01). The results of this research are particularly important for identifying individual differences in 3D thinking skills of secondary school students and creating personalized dynamic intelligent learning environments.
  • 2018
  • Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Jonathon D Hart, Barrett Ens, Robert W Lindeman, Bruce H Thomas, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 46, 13 pages. DOI: https://doi.org/10.1145/3173574.3173620

    @inproceedings{Piumsomboon:2018:MAA:3173574.3173620,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Hart, Jonathon D. and Ens, Barrett and Lindeman, Robert W. and Thomas, Bruce H. and Billinghurst, Mark},
    title = {Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {46:1--46:13},
    articleno = {46},
    numpages = {13},
    url = {http://doi.acm.org/10.1145/3173574.3173620},
    doi = {10.1145/3173574.3173620},
    acmid = {3173620},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, awareness, gaze, gesture, mixed reality, redirected, remote collaboration, remote embodiment, virtual reality},
    }
    We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
  • Pinpointing: Precise Head-and Eye-Based Target Selection for Augmented Reality
    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 81, 14 pages. DOI: https://doi.org/10.1145/3173574.3173655

    @inproceedings{Kyto:2018:PPH:3173574.3173655,
    author = {Kyt\"{o}, Mikko and Ens, Barrett and Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {81:1--81:14},
    articleno = {81},
    numpages = {14},
    url = {http://doi.acm.org/10.1145/3173574.3173655},
    doi = {10.1145/3173574.3173655},
    acmid = {3173655},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, eye tracking, gaze interaction, head-worn display, refinement techniques, target selection},
    }
    Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
  • Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications
    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, Mark Billinghurst

    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, and Mark Billinghurst. 2018. Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW120, 6 pages. DOI: https://doi.org/10.1145/3170427.3188513

    @inproceedings{Ens:2018:CEM:3170427.3188513,
    author = {Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    title = {Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW120:1--LBW120:6},
    articleno = {LBW120},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188513},
    doi = {10.1145/3170427.3188513},
    acmid = {3188513},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, gesture interaction, wearable computing},
    }
    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans toward evaluation of mixed-scale gesture design.
  • Levity: A Virtual Reality System that Responds to Cognitive Load
    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, Mark Billinghurst

    Lynda Gerry, Barrett Ens, Adam Drogemuller, Bruce Thomas, and Mark Billinghurst. 2018. Levity: A Virtual Reality System that Responds to Cognitive Load. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW610, 6 pages. DOI: https://doi.org/10.1145/3170427.3188479

    @inproceedings{Gerry:2018:LVR:3170427.3188479,
    author = {Gerry, Lynda and Ens, Barrett and Drogemuller, Adam and Thomas, Bruce and Billinghurst, Mark},
    title = {Levity: A Virtual Reality System That Responds to Cognitive Load},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW610:1--LBW610:6},
    articleno = {LBW610},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188479},
    doi = {10.1145/3170427.3188479},
    acmid = {3188479},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {brain computer interface, cognitive load, virtual reality, visual search task},
    }
    This paper presents the ongoing development of a proof-of-concept, adaptive system that uses a neurocognitive signal to facilitate efficient performance in a Virtual Reality visual search task. The Levity system measures and interactively adjusts the display of a visual array during a visual search task based on the user's level of cognitive load, measured with a 16-channel EEG device. Future developments will validate the system and evaluate its ability to improve search efficiency by detecting and adapting to a user's cognitive demands.
  • Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A Lee, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper D115, 4 pages. DOI: https://doi.org/10.1145/3170427.3186495

    @inproceedings{Piumsomboon:2018:SDM:3170427.3186495,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {D115:1--D115:4},
    articleno = {D115},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3170427.3186495},
    doi = {10.1145/3170427.3186495},
    acmid = {3186495},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, mixed reality, multiple, remote collaboration, remote embodiment, scale, virtual reality},
    }
    We present Snow Dome, a Mixed Reality (MR) remote collaboration application that supports multi-scale interaction for a Virtual Reality (VR) user. We share a local Augmented Reality (AR) user's reconstructed space with a remote VR user, who can scale themselves up into a giant or down into a miniature to gain different perspectives and interact at that scale within the shared space.
  • Filtering Shared Social Data in AR
    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, Robert W Lindeman

    Alaeddin Nassani, Huidong Bai, Gun Lee, Mark Billinghurst, Tobias Langlotz, and Robert W. Lindeman. 2018. Filtering Shared Social Data in AR. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW100, 6 pages. DOI: https://doi.org/10.1145/3170427.3188609

    @inproceedings{Nassani:2018:FSS:3170427.3188609,
    author = {Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W.},
    title = {Filtering Shared Social Data in AR},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW100:1--LBW100:6},
    articleno = {LBW100},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188609},
    doi = {10.1145/3170427.3188609},
    acmid = {3188609},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {360 panoramas, augmented reality, live video stream, sharing social experiences, virtual avatars},
    }
    We describe a method and a prototype implementation for filtering shared social data (e.g., 360 video) in a wearable Augmented Reality (e.g., HoloLens) application. The data filtering is based on user-viewer relationships. For example, when sharing a 360 video, if the user has an intimate relationship with the viewer, then full fidelity (i.e., the 360 video) of the user's environment is visible, but if the two are strangers then only a snapshot image is shared. By varying the fidelity of the shared content, the viewer is able to focus more on the data shared by their close relations and differentiate this from other content. The approach also gives the sharing user more control over the fidelity of the content shared with their contacts, for privacy.
  • A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014
    Arindam Dey, Mark Billinghurst, Robert W Lindeman, J Edward Swan II

    Dey A, Billinghurst M, Lindeman RW and Swan JE II (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 5:37. doi: 10.3389/frobt.2018.00037

    @ARTICLE{10.3389/frobt.2018.00037,
    AUTHOR={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W. and Swan, J. Edward},
    TITLE={A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014},
    JOURNAL={Frontiers in Robotics and AI},
    VOLUME={5},
    PAGES={37},
    YEAR={2018},
    URL={https://www.frontiersin.org/article/10.3389/frobt.2018.00037},
    DOI={10.3389/frobt.2018.00037},
    ISSN={2296-9144},
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
  • He who hesitates is lost (… in thoughts over a robot)
    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, Victor Finomore

    James Wen, Amanda Stewart, Mark Billinghurst, Arindam Dey, Chad Tossell, and Victor Finomore. 2018. He who hesitates is lost (...in thoughts over a robot). In Proceedings of the Technology, Mind, and Society (TechMindSociety '18). ACM, New York, NY, USA, Article 43, 6 pages. DOI: https://doi.org/10.1145/3183654.3183703

    @inproceedings{Wen:2018:HHL:3183654.3183703,
    author = {Wen, James and Stewart, Amanda and Billinghurst, Mark and Dey, Arindam and Tossell, Chad and Finomore, Victor},
    title = {He Who Hesitates is Lost (...In Thoughts over a Robot)},
    booktitle = {Proceedings of the Technology, Mind, and Society},
    series = {TechMindSociety '18},
    year = {2018},
    isbn = {978-1-4503-5420-2},
    location = {Washington, DC, USA},
    pages = {43:1--43:6},
    articleno = {43},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3183654.3183703},
    doi = {10.1145/3183654.3183703},
    acmid = {3183703},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {Anthropomorphism, Empathy, Human Machine Team, Robotics, User Study},
    }
    In a team, the strong bonds that can form between teammates are often seen as critical for reaching peak performance. This perspective may need to be reconsidered, however, if some team members are autonomous robots since establishing bonds with fundamentally inanimate and expendable objects may prove counterproductive. Previous work has measured empathic responses towards robots as singular events at the conclusion of experimental sessions. As relationships extend over long periods of time, sustained empathic behavior towards robots would be of interest. In order to measure user actions that may vary over time and are affected by empathy towards a robot teammate, we created the TEAMMATE simulation system. Our findings suggest that inducing empathy through a back story narrative can significantly change participant decisions in actions that may have consequences for a robot companion over time. The results of our study can have strong implications for the overall performance of human machine teams.
  • A hybrid 2D/3D user Interface for radiological diagnosis
    Veera Bhadra Harish Mandalika, Alexander I Chernoglazov, Mark Billinghurst, Christoph Bartneck, Michael A Hurrell, Niels de Ruiter, Anthony PH Butler, Philip H Butler

    Mandalika, V. B. H., Chernoglazov, A. I., Billinghurst, M., Bartneck, C., Hurrell, M. A., de Ruiter, N., Butler, A. P. H., & Butler, P. H. (2018). A hybrid 2D/3D user interface for radiological diagnosis. Journal of Digital Imaging, 31(1), 56-73.

    @Article{Mandalika2018,
    author="Mandalika, Veera Bhadra Harish
    and Chernoglazov, Alexander I.
    and Billinghurst, Mark
    and Bartneck, Christoph
    and Hurrell, Michael A.
    and Ruiter, Niels de
    and Butler, Anthony P. H.
    and Butler, Philip H.",
    title="A Hybrid 2D/3D User Interface for Radiological Diagnosis",
    journal="Journal of Digital Imaging",
    year="2018",
    month="Feb",
    day="01",
    volume="31",
    number="1",
    pages="56--73",
    abstract="This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.",
    issn="1618-727X",
    doi="10.1007/s10278-017-0002-6",
    url="https://doi.org/10.1007/s10278-017-0002-6"
    }
    This paper presents a novel 2D/3D desktop virtual reality hybrid user interface for radiology that focuses on improving 3D manipulation required in some diagnostic tasks. An evaluation of our system revealed that our hybrid interface is more efficient for novice users and more accurate for both novice and experienced users when compared to traditional 2D only interfaces. This is a significant finding because it indicates, as the techniques mature, that hybrid interfaces can provide significant benefit to image evaluation. Our hybrid system combines a zSpace stereoscopic display with 2D displays, and mouse and keyboard input. It allows the use of 2D and 3D components interchangeably, or simultaneously. The system was evaluated against a 2D only interface with a user study that involved performing a scoliosis diagnosis task. There were two user groups: medical students and radiology residents. We found improvements in completion time for medical students, and in accuracy for both groups. In particular, the accuracy of medical students improved to match that of the residents.
  • The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration
    Seungwon Kim, Mark Billinghurst, Gun Lee

    Kim, S., Billinghurst, M., & Lee, G. (2018). The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration. Computer Supported Cooperative Work (CSCW), 1-39.

    @Article{Kim2018,
    author="Kim, Seungwon
    and Billinghurst, Mark
    and Lee, Gun",
    title="The Effect of Collaboration Styles and View Independence on Video-Mediated Remote Collaboration",
    journal="Computer Supported Cooperative Work (CSCW)",
    year="2018",
    month="Jun",
    day="02",
    abstract="This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user's real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants' feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants' end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.",
    issn="1573-7551",
    doi="10.1007/s10606-018-9324-2",
    url="https://doi.org/10.1007/s10606-018-9324-2"
    }
    This paper investigates how different collaboration styles and view independence affect remote collaboration. Our remote collaboration system shares a live video of a local user’s real-world task space with a remote user. The remote user can have an independent view or a dependent view of a shared real-world object manipulation task and can draw virtual annotations onto the real-world objects as a visual communication cue. With the system, we investigated two different collaboration styles; (1) remote expert collaboration where a remote user has the solution and gives instructions to a local partner and (2) mutual collaboration where neither user has a solution but both remote and local users share ideas and discuss ways to solve the real-world task. In the user study, the remote expert collaboration showed a number of benefits over the mutual collaboration. With the remote expert collaboration, participants had better communication from the remote user to the local user, more aligned focus between participants, and the remote participants’ feeling of enjoyment and togetherness. However, the benefits were not always apparent at the local participants’ end, especially with measures of enjoyment and togetherness. The independent view also had several benefits over the dependent view, such as allowing remote participants to freely navigate around the workspace while having a wider fully zoomed-out view. The benefits of the independent view were more prominent in the mutual collaboration than in the remote expert collaboration, especially in enabling the remote participants to see the workspace.
  • Robust tracking through the design of high quality fiducial markers: An optimization tool for ARToolKit
    Dawar Khan, Sehat Ullah, Dong-Ming Yan, Ihsan Rabbi, Paul Richard, Thuong Hoang, Mark Billinghurst, Xiaopeng Zhang

    D. Khan et al., "Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit," in IEEE Access, vol. 6, pp. 22421-22433, 2018. doi: 10.1109/ACCESS.2018.2801028

    @ARTICLE{8287815,
    author={D. Khan and S. Ullah and D. M. Yan and I. Rabbi and P. Richard and T. Hoang and M. Billinghurst and X. Zhang},
    journal={IEEE Access},
    title={Robust Tracking Through the Design of High Quality Fiducial Markers: An Optimization Tool for ARToolKit},
    year={2018},
    volume={6},
    pages={22421-22433},
    keywords={augmented reality;image recognition;object tracking;optical tracking;pose estimation;ARToolKit markers;B:W;augmented reality applications;camera tracking;edge sharpness;fiducial marker optimizer;high quality fiducial markers;optimization tool;pose estimation;robust tracking;specialized image processing algorithms;Cameras;Complexity theory;Fiducial markers;Libraries;Robustness;Tools;ARToolKit;Fiducial markers;augmented reality;marker tracking;robust recognition},
    doi={10.1109/ACCESS.2018.2801028},
    }
    Fiducial markers are images or landmarks placed in the real environment, typically used for pose estimation and camera tracking. Reliable fiducials are strongly desired for many augmented reality (AR) applications, but currently there is no systematic method for designing highly reliable fiducials. In this paper, we present the fiducial marker optimizer (FMO), a tool to optimize the design attributes of ARToolKit markers, including the black-to-white (B:W) ratio, edge sharpness, and information complexity, and to reduce inter-marker confusion. For these operations, FMO provides a user-friendly interface at the front end and specialized image processing algorithms at the back end. We tested manually designed markers and FMO-optimized markers in ARToolKit and found that the latter were more robust. FMO can thus be used to design highly reliable fiducials in an easy-to-use fashion, improving the performance of the applications in which they are used.
  • Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration
    Theophilus Teo, Gun A. Lee, Mark Billinghurst, Matt Adcock

    Theophilus Teo, Gun A. Lee, Mark Billinghurst, and Matt Adcock. 2018. Hand gestures and visual annotation in live 360 panorama-based mixed reality remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI '18). ACM, New York, NY, USA, 406-410. DOI: https://doi.org/10.1145/3292147.3292200

    @inproceedings{Teo:2018:HGV:3292147.3292200,
    author = {Teo, Theophilus and Lee, Gun A. and Billinghurst, Mark and Adcock, Matt},
    title = {Hand Gestures and Visual Annotation in Live 360 Panorama-based Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    series = {OzCHI '18},
    year = {2018},
    isbn = {978-1-4503-6188-0},
    location = {Melbourne, Australia},
    pages = {406--410},
    numpages = {5},
    url = {http://doi.acm.org/10.1145/3292147.3292200},
    doi = {10.1145/3292147.3292200},
    acmid = {3292200},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {gesture communication, mixed reality, remote collaboration},
    }
    In this paper, we investigate hand gesture and visual annotation cues overlaid on a live 360 panorama-based Mixed Reality remote collaboration. The prototype system captures live 360 panorama video of a local user's surroundings and shares it with another person in a remote location. The two users, wearing Augmented Reality or Virtual Reality head-mounted displays, can collaborate using augmented visual communication cues such as virtual hand gestures, ray pointing, and drawing annotations. Our preliminary user evaluation comparing these cues found that visual annotation cues (ray pointing and drawing annotations) help local users perform collaborative tasks faster and more easily, with fewer errors and better understanding, compared to using only virtual hand gestures.
  • The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training.
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 1-2). IEEE.

    @inproceedings{clifford2018effect,
    title={The Effect of Immersive Displays on Situation Awareness in Virtual Environments for Aerial Firefighting Air Attack Supervisor Training},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
    pages={1--2},
    year={2018},
    organization={IEEE}
    }
    Situation Awareness (SA) is an essential skill in Air Attack Supervision (AAS) for aerial wildfire firefighting. The display types used for Virtual Reality Training Systems (VRTS) afford different levels of visual SA depending on the Field of View (FoV) as well as the sense of presence users can obtain in the virtual environment. We conducted a study with 36 participants to evaluate SA acquisition in three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD), and a 270° cylindrical simulation projection display called the SimPit. We found a significant difference between the HMD and the HDTV, as well as between the SimPit and the HDTV, for the three levels of SA.
  • Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017)
    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F.

    Kim, K., Billinghurst, M., Bruder, G., Duh, H. B. L., & Welch, G. F. (2018). Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008–2017). IEEE transactions on visualization and computer graphics, 24(11), 2947-2962.

    @article{kim2018revisiting,
    title={Revisiting trends in augmented reality research: A review of the 2nd decade of ISMAR (2008--2017)},
    author={Kim, Kangsoo and Billinghurst, Mark and Bruder, Gerd and Duh, Henry Been-Lirn and Welch, Gregory F},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2947--2962},
    year={2018},
    publisher={IEEE}
    }
    In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends of that period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period 2008-2017, in the context of the first ten years. The numbers of papers for different research topics and their impact by citation were analyzed during the review, revealing a sharp increase in AR evaluation and rendering research. Based on this review we offer some observations on potential future research areas and trends, which could be helpful to AR researchers and industry members looking ahead.
  • Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment
    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H.

    Reichherzer, C., Cunningham, A., Walsh, J., Kohler, M., Billinghurst, M., & Thomas, B. H. (2018). Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment. IEEE transactions on visualization and computer graphics, 24(11), 2917-2926.

    @article{reichherzer2018narrative,
    title={Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment},
    author={Reichherzer, Carolin and Cunningham, Andrew and Walsh, James and Kohler, Mark and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2917--2926},
    year={2018},
    publisher={IEEE}
    }
    This paper showcases how virtual reconstruction can be used in a courtroom. The results of a pilot study on narrative and spatial memory are presented in the context of viewing real and virtual copies of a simulated crime scene. Based on current court procedures, three viewing options were compared: photographs, a real-life visit, and a 3D virtual reconstruction of the scene viewed in a Virtual Reality headset. Participants were also given a written narrative that included the spatial locations of stolen goods and were measured on their ability to recall and understand the spatial relationships of those stolen items. The results suggest that Virtual Reality is more reliable for spatial memory than photographs and provides a compromise when physical viewing of a crime scene is not possible. We conclude that Virtual Reality is a promising medium for the court.
  • A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks
    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H.

    Volmer, B., Baumeister, J., Von Itzstein, S., Bornkessel-Schlesewsky, I., Schlesewsky, M., Billinghurst, M., & Thomas, B. H. (2018). A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks. IEEE transactions on visualization and computer graphics, 24(11), 2846-2856.

    @article{volmer2018comparison,
    title={A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks},
    author={Volmer, Benjamin and Baumeister, James and Von Itzstein, Stewart and Bornkessel-Schlesewsky, Ina and Schlesewsky, Matthias and Billinghurst, Mark and Thomas, Bruce H},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2846--2856},
    year={2018},
    publisher={IEEE}
    }
    Previous research has demonstrated that Augmented Reality can reduce a user's task response time and mental effort when completing a procedural task. This paper investigates techniques to improve user performance and reduce mental effort by providing projector-based Spatial Augmented Reality predictive cues for future responses. The objective of the two experiments conducted in this study was to isolate the performance and mental effort differences from several different annotation cueing techniques for simple (Experiment 1) and complex (Experiment 2) button-pressing tasks. Comporting with existing cognitive neuroscience literature on prediction, attentional orienting, and interference, we hypothesized that for both simple procedural tasks and complex search-based tasks, having a visual cue guiding to the next task's location would positively impact performance relative to a baseline, no-cue condition. Additionally, we predicted that direction-based cues would provide a more significant positive impact than target-based cues. The results indicated that providing a line to the next task was the most effective technique for improving the users' task time and mental effort in both the simple and complex tasks.
  • Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface
    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M.

    Piumsomboon, T., Lee, G. A., Ens, B., Thomas, B. H., & Billinghurst, M. (2018). Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface. IEEE transactions on visualization and computer graphics, 24(11), 2974-2982.

    @article{piumsomboon2018superman,
    title={Superman vs giant: a study on spatial perception for a multi-scale mixed reality flying telepresence interface},
    author={Piumsomboon, Thammathip and Lee, Gun A and Ens, Barrett and Thomas, Bruce H and Billinghurst, Mark},
    journal={IEEE transactions on visualization and computer graphics},
    volume={24},
    number={11},
    pages={2974--2982},
    year={2018},
    publisher={IEEE}
    }
    Advancements in Mixed Reality (MR), Unmanned Aerial Vehicles, and multi-scale collaborative virtual environments have led to new interface opportunities for remote collaboration. This paper explores a novel concept of flying telepresence for multi-scale mixed reality remote collaboration, which could enable remote collaboration at a larger scale, such as building construction. We conducted a user study with three experiments. The first experiment compared two interfaces, static and dynamic IPD, on simulator sickness and body size perception. The second experiment tested user perception of virtual object size under three levels of IPD and movement gain manipulation, with a fixed eye height, in virtual environments with reduced or rich visual cues. Our last experiment investigated participants' body size perception for two levels of IPD and height manipulation, using stereo video footage to simulate a flying telepresence experience. The studies found that manipulating IPD and eye height influenced the user's size perception. We present our findings and share recommendations for designing a multi-scale MR flying telepresence interface.
  • Design considerations for combining augmented reality with intelligent tutors
    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G.

    Herbert, B., Ens, B., Weerasinghe, A., Billinghurst, M., & Wigley, G. (2018). Design considerations for combining augmented reality with intelligent tutors. Computers & Graphics, 77, 166-182.

    @article{herbert2018design,
    title={Design considerations for combining augmented reality with intelligent tutors},
    author={Herbert, Bradley and Ens, Barrett and Weerasinghe, Amali and Billinghurst, Mark and Wigley, Grant},
    journal={Computers \& Graphics},
    volume={77},
    pages={166--182},
    year={2018},
    publisher={Elsevier}
    }
    Augmented Reality (AR) overlays virtual objects on the real world in real time and has the potential to enhance education; however, few AR training systems provide personalised learning support. Combining AR with intelligent tutoring systems (ITSs) has the potential to improve training outcomes by providing personalised learner support, such as feedback on the AR environment. This paper reviews the current state of AR training systems combined with ITSs and proposes a series of requirements for combining the two paradigms. In addition, the paper identifies a growing need for research on the design and implementation of adaptive augmented reality tutors (ARATs), including evaluating ARAT user interfaces and identifying domains where an ARAT might be effective.
  • Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression
    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Khan, H., Hoermann, S., Billinghurst, M., & Lindeman, R. W. (2018, March). Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression. In 2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good) (pp. 1-5). IEEE.

    @inproceedings{clifford2018development,
    title={Development of a Multi-Sensory Virtual Reality Training Simulator for Airborne Firefighters Supervising Aerial Wildfire Suppression},
    author={Clifford, Rory MS and Khan, Humayun and Hoermann, Simon and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE Workshop on Augmented and Virtual Realities for Good (VAR4Good)},
    pages={1--5},
    year={2018},
    organization={IEEE}
    }
    Wildfire firefighting is difficult to train for in the real world for a variety of reasons, with cost and environmental impact being the major barriers to effective training. Virtual Reality offers greater opportunities to practice crucial skills that are difficult to obtain without experiencing the actual environment. Situation Awareness (SA) is a critical aspect of Air Attack Supervision (AAS): timely decisions must be made by the AAS based on information gathered while airborne. The type of display used in virtual reality training systems affords different levels of SA due to factors such as field of view and sense of presence within the virtual environment. We conducted a study with 36 participants to evaluate SA acquisition and immersion in three display types: a high-definition TV (HDTV), an Oculus Rift Head-Mounted Display (HMD), and a 270° cylindrical projection system (SimPit). We found a significant difference between the HMD and the HDTV, as well as between the SimPit and the HDTV, for SA levels. Participants preferred the HMD for immersion and portability, but the SimPit provided the best environment for the actual role.
  • Collaborative immersive analytics.
    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T.

    Billinghurst, M., Cordeil, M., Bezerianos, A., & Margolis, T. (2018). Collaborative immersive analytics. In Immersive Analytics (pp. 221-257). Springer, Cham.

    @incollection{billinghurst2018collaborative,
    title={Collaborative immersive analytics},
    author={Billinghurst, Mark and Cordeil, Maxime and Bezerianos, Anastasia and Margolis, Todd},
    booktitle={Immersive Analytics},
    pages={221--257},
    year={2018},
    publisher={Springer}
    }
    Many of the problems addressed by Immersive Analytics are best solved by groups of people working together. This chapter introduces the concept of Collaborative Immersive Analytics (CIA) and reviews how immersive technologies can be combined with Visual Analytics to facilitate co-located and remote collaboration. We provide a definition of Collaborative Immersive Analytics and an overview of the different types of possible collaboration. The chapter also discusses the various roles in collaborative systems and how to support shared interaction with the data being presented. Finally, we summarize the opportunities for future research in this domain. The aim of the chapter is to provide enough of an introduction to CIA, and to key directions for future research, that practitioners will be able to begin working in the field.
  • Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting
    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W.

    Clifford, R. M., Hoermann, S., Marcadet, N., Oliver, H., Billinghurst, M., & Lindeman, R. W. (2018, September). Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting. In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) (pp. 1-8). IEEE.

    @inproceedings{clifford2018evaluating,
    title={Evaluating the effects of realistic communication disruptions in VR training for aerial firefighting},
    author={Clifford, Rory MS and Hoermann, Simon and Marcadet, Nicolas and Oliver, Hamish and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
    pages={1--8},
    year={2018},
    organization={IEEE}
    }
    Aerial firefighting takes place in stressful environments where decision making and communication are paramount, and skills need to be practiced and trained regularly. An experiment was performed to test the effects of disrupting users' communication ability on their stress levels in a noisy environment. The goal of this research is to investigate how realistic disruptions of communication systems can be simulated in a virtual environment and to what extent they induce stress. We found that aerial firefighting experts maintained better Heart Rate Variability (HRV) during disruptions than novices, and showed a better ability to manage stress based on the change in HRV during the experiment. Our main finding is that communication disruptions in virtual reality (e.g., broken transmissions) significantly impacted the level of stress experienced by participants.
  • TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams
    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossel, C. (2018, August). TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 991-996). IEEE.

    @inproceedings{wen2018teammate,
    title={TEAMMATE: A Scalable System for Measuring Affect in Human-Machine Teams},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossel, Chad},
    booktitle={2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)},
    pages={991--996},
    year={2018},
    organization={IEEE}
    }
    Strong empathic bonding between members of a team can elevate team performance tremendously, but it is not clear how such bonding within human-machine teams may impact mission success. Prior work using self-report surveys and end-of-task metrics does not capture how such bonding may evolve over time and impact task fulfillment. Furthermore, sensor-based measures do not scale easily to collect the substantial data needed to measure potentially subtle effects. We introduce TEAMMATE, a system designed to provide insights into the emotional dynamics humans may form with machine teammates, which could critically impact the design of human-machine teams.
  • Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques
    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S.

    Ismail, A. W., Billinghurst, M., Sunar, M. S., & Yusof, C. S. (2018, September). Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques. In Proceedings of SAI Intelligent Systems Conference (pp. 309-322). Springer, Cham.

    @inproceedings{ismail2018designing,
    title={Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques},
    author={Ismail, Ajune Wanis and Billinghurst, Mark and Sunar, Mohd Shahrizal and Yusof, Cik Suhaimi},
    booktitle={Proceedings of SAI Intelligent Systems Conference},
    pages={309--322},
    year={2018},
    organization={Springer}
    }
    Augmented Reality (AR) supports natural interaction in physical and virtual worlds, so it has recently given rise to a number of novel interaction modalities. This paper presents a method for using hand-gestures with speech input for multimodal interaction in AR. It focuses on providing an intuitive AR environment which supports natural interaction with virtual objects while sustaining accessible real tasks and interaction mechanisms. The paper reviews previous multimodal interfaces and describes recent studies in AR that employ gesture and speech inputs for multimodal input. It describes an implementation of gesture interaction with speech input in AR for virtual object manipulation. Finally, the paper presents a user evaluation of the technique, showing that it can be used to improve the interaction between virtual and physical elements in an AR environment.
  • Emotion Sharing and Augmentation in Cooperative Virtual Reality Games
    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lawrence, L., Lee, G. A., Smith, R. T., & Billinghurst, M. (2018, October). Emotion Sharing and Augmentation in Cooperative Virtual Reality Games. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts (pp. 453-460). ACM.

    @inproceedings{hart2018emotion,
    title={Emotion Sharing and Augmentation in Cooperative Virtual Reality Games},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lawrence, Louise and Lee, Gun A and Smith, Ross T and Billinghurst, Mark},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts},
    pages={453--460},
    year={2018},
    organization={ACM}
    }
    We present preliminary findings from sharing and augmenting facial expression in cooperative social Virtual Reality (VR) games. We implemented a prototype system for capturing and sharing facial expression between VR players through their avatar. We describe our current prototype system and how it could be assimilated into a system for enhancing social VR experience. Two social VR games were created for a preliminary user study. We discuss our findings from the user study, potential games for this system, and future directions for this research.
  • Effects of Manipulating Physiological Feedback in Immersive Virtual Environments
    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Manipulating Physiological Feedback in Immersive Virtual Environments. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (pp. 101-111). ACM.

    @inproceedings{dey2018effects,
    title={Effects of Manipulating Physiological Feedback in Immersive Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play},
    pages={101--111},
    year={2018},
    organization={ACM}
    }
    Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user. Being able to see one's physiological feedback in a virtual environment has been shown to make an application more enjoyable. In this paper, we investigate the effects of manipulating the heart rate feedback provided to participants in a single-user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants' emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
  • Real-time visual representations for mobile mixed reality remote collaboration.
    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W.

    Gao, L., Bai, H., He, W., Billinghurst, M., & Lindeman, R. W. (2018, December). Real-time visual representations for mobile mixed reality remote collaboration. In SIGGRAPH Asia 2018 Virtual & Augmented Reality (p. 15). ACM.

    @inproceedings{gao2018real,
    title={Real-time visual representations for mobile mixed reality remote collaboration},
    author={Gao, Lei and Bai, Huidong and He, Weiping and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2018 Virtual \& Augmented Reality},
    pages={15},
    year={2018},
    organization={ACM}
    }
    In this study we present a Mixed Reality-based mobile remote collaboration system that enables an expert to provide real-time assistance over a physical distance. Using Google ARCore position tracking, we integrate the keyframes captured with an external depth sensor attached to the mobile phone into a single 3D point-cloud data set that presents the local physical environment in the VR world. This captured local scene is wirelessly streamed to the remote side for the expert to view while wearing a mobile VR headset (HTC VIVE Focus). The remote expert can thus immerse himself or herself in the VR scene and provide guidance as if sharing the same work environment with the local worker. In addition, the remote guidance is streamed back to the local side as AR cues overlaid on the local video see-through display. Our proposed system supports a remote expert guiding a local worker on physical tasks in a large-scale workspace in a more natural and efficient way, simulating the face-to-face co-working experience using Mixed Reality techniques.
  • Band of Brothers and Bolts: Caring About Your Robot Teammate
    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C.

    Wen, J., Stewart, A., Billinghurst, M., & Tossell, C. (2018, October). Band of Brothers and Bolts: Caring About Your Robot Teammate. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1853-1858). IEEE.

    @inproceedings{wen2018band,
    title={Band of Brothers and Bolts: Caring About Your Robot Teammate},
    author={Wen, James and Stewart, Amanda and Billinghurst, Mark and Tossell, Chad},
    booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
    pages={1853--1858},
    year={2018},
    organization={IEEE}
    }
    It has been observed that a robot shown as suffering is enough to cause an empathic response from a person. Whether the response is a fleeting reaction with no consequences or a meaningful perspective change with associated behavior modifications is not clear. Existing work has been limited to measurements made at the end of empathy-inducing experimental trials rather than measurements made over time to capture consequential behavioral patterns. We report preliminary results from a study that attempts to measure how the actions of a participant may be altered by empathy for a robot companion. Our findings suggest that induced empathy can have a significant impact on a person's behavior, to the extent that the ability to fulfill a mission may be affected.
  • The effect of video placement in AR conferencing applications
    Lawrence, L., Dey, A., & Billinghurst, M.

    Lawrence, L., Dey, A., & Billinghurst, M. (2018, December). The effect of video placement in AR conferencing applications. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 453-457). ACM.

    @inproceedings{lawrence2018effect,
    title={The effect of video placement in AR conferencing applications},
    author={Lawrence, Louise and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={453--457},
    year={2018},
    organization={ACM}
    }
    We ran a pilot study to investigate the impact of video placement in augmented reality conferencing on communication, social presence, and user preference. In addition, we explored the influence of two different tasks, assembly and negotiation. We discovered a correlation between video placement and the type of task, with some significant results in social presence indicators.
  • HandsInTouch: sharing gestures in remote collaboration
    Huang, W., Billinghurst, M., Alem, L., & Kim, S.

    Huang, W., Billinghurst, M., Alem, L., & Kim, S. (2018, December). HandsInTouch: sharing gestures in remote collaboration. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 396-400). ACM.

    @inproceedings{huang2018handsintouch,
    title={HandsInTouch: sharing gestures in remote collaboration},
    author={Huang, Weidong and Billinghurst, Mark and Alem, Leila and Kim, Seungwon},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={396--400},
    year={2018},
    organization={ACM}
    }
    Many systems have been developed to support remote collaboration in which hand gestures or sketches can be shared. However, the effect of combining gestures and sketching has not been fully explored and understood. In this paper we describe HandsInTouch, a system in which both the hand gestures and the sketches made by a remote helper are shown to a local user in real time. We conducted a user study to test the usability of the system and the usefulness of combining gestures and sketching for remote collaboration. We discuss the results and make recommendations for system design and future work.
  • A generalized, rapid authoring tool for intelligent tutoring systems
    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G.

    Herbert, B., Billinghurst, M., Weerasinghe, A., Ens, B., & Wigley, G. (2018, December). A generalized, rapid authoring tool for intelligent tutoring systems. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 368-373). ACM.

    @inproceedings{herbert2018generalized,
    title={A generalized, rapid authoring tool for intelligent tutoring systems},
    author={Herbert, Bradley and Billinghurst, Mark and Weerasinghe, Amali and Ens, Barret and Wigley, Grant},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={368--373},
    year={2018},
    organization={ACM}
    }
    As computer-based training systems become increasingly integrated into real-world training, tools for rapidly authoring courses for such systems are emerging. However, inconsistent user interface design and limited support for a variety of domains make them time consuming and difficult to use. We present a Generalized, Rapid Authoring Tool (GRAT), which simplifies the creation of Intelligent Tutoring Systems (ITSs) using a unified web-based, wizard-style graphical user interface and programming-by-demonstration approaches to reduce the technical knowledge needed to author ITS logic. We implemented a prototype that authors courses for two kinds of tasks, a network cabling task and a console device configuration task, to demonstrate the tool's potential. We describe the limitations of our prototype and present opportunities for evaluating the tool's usability and perceived effectiveness.
  • User virtual costume visualisation in an augmented virtuality immersive cinematic environment
    Tang, W., Lee, G. A., Billinghurst, M., & Lindeman, R. W.

    Tang, W., Lee, G. A., Billinghurst, M., & Lindeman, R. W. (2018, December). User virtual costume visualisation in an augmented virtuality immersive cinematic environment. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (pp. 219-223). ACM.

    @inproceedings{tang2018user,
    title={User virtual costume visualisation in an augmented virtuality immersive cinematic environment},
    author={Tang, Wenjing and Lee, Gun A and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 30th Australian Conference on Computer-Human Interaction},
    pages={219--223},
    year={2018},
    organization={ACM}
    }
    Recent development of affordable head-mounted displays (HMDs) has led to accessible Virtual Reality (VR) solutions for watching 360-degree panoramic movies. While conventionally users cannot see their body while watching 360 movies, our prior work seamlessly blended a user's physical body into a 360 virtual movie scene. This paper extends this work by overlaying context-matching virtual costumes onto the user's real body. A prototype was developed using a pair of depth cameras and an HMD to capture the user's real body and embed it into a virtual 360 movie scene. Virtual costumes related to the movie scene are then overlaid on the user's real body to enhance the user experience. Results from a user study showed that augmenting the user's real body with a context-matching virtual costume was most preferred by users, while having no significant effect on sense of presence compared to showing only the user's body in a 360 movie scene. The results offer a future direction for generating enhanced 360 VR movie watching experiences.
  • Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.
    Kim, S., Billinghurst, M., Lee, C., & Lee, G

    Kim, S., Billinghurst, M., Lee, C., & Lee, G. (2018). Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration. KSII Transactions on Internet & Information Systems, 12(12).

    @article{kim2018using,
    title={Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration.},
    author={Kim, Seungwon and Billinghurst, Mark and Lee, Chilwoo and Lee, Gun},
    journal={KSII Transactions on Internet \& Information Systems},
    volume={12},
    number={12},
    year={2018}
    }

    This paper describes two user studies in remote collaboration between two users with a video conferencing system where a remote user can draw annotations on the live video of the local user’s workspace. In these two studies, the local user had control of the view when sharing the first-person view, but our interfaces provided instant control of the shared view to the remote users. The first study investigates methods for assisting drawing annotations. The auto-freeze method, a novel solution for drawing annotations, is compared to a prior solution (the manual freeze method) and a baseline (non-freeze) condition. Results show that both local and remote users preferred the auto-freeze method, which is easy to use and allows users to quickly draw annotations. The manual-freeze method supported precise drawing, but was less preferred because of the need for manual input. The second study explores visual notification for better local user awareness. We propose two designs, the red-box and both-freeze notifications, and compare these to the baseline, no-notification condition. Users preferred the less obtrusive red-box notification, which improved awareness of when annotations were made by remote users and had a significantly lower level of interruption compared to the both-freeze condition.

  • A User Study on MR Remote Collaboration Using Live 360 Video
    Lee, G. A., Teo, T., Kim, S., & Billinghurst, M.

    Lee, G. A., Teo, T., Kim, S., & Billinghurst, M. (2018, October). A user study on mr remote collaboration using live 360 video. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 153-164). IEEE.

    @inproceedings{lee2018user,
    title={A user study on mr remote collaboration using live 360 video},
    author={Lee, Gun A and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={153--164},
    year={2018},
    organization={IEEE}
    }
    Sharing and watching live 360 panorama video is available on modern social networking platforms, yet the communication is often a passive one-directional experience. This research investigates how to further improve live 360 panorama based remote collaborative experiences by adding Mixed Reality (MR) cues. SharedSphere is a wearable MR remote collaboration system that enriches a live captured immersive panorama based collaboration through MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). We describe the design and implementation details of the prototype system, and report on a user study investigating how MR live panorama sharing affects the user's collaborative experience. The results showed that providing view independence through sharing a live panorama enhances co-presence in collaboration, and the MR cues help users understand each other. Based on the study results we discuss design implications and future research directions.
  • The Potential of Augmented Reality for Computer Science Education
    Resnyansky, D., İbili, E., & Billinghurst, M.

    Resnyansky, D., İbili, E., & Billinghurst, M. (2018, December). The Potential of Augmented Reality for Computer Science Education. In 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE) (pp. 350-356). IEEE.

    @inproceedings{resnyansky2018potential,
    title={The Potential of Augmented Reality for Computer Science Education},
    author={Resnyansky, Dmitry and {\.I}bili, Emin and Billinghurst, Mark},
    booktitle={2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE)},
    pages={350--356},
    year={2018},
    organization={IEEE}
    }
    Innovative approaches in the teaching of computer science are required to address the needs of diverse target audiences, including groups with minimal mathematical background and insufficient abstract thinking ability. In order to tackle this problem, new pedagogical approaches are needed, such as the use of new technologies like Virtual and Augmented Reality, Tangible User Interfaces, and 3D graphics. This paper draws upon relevant pedagogical and technological literature to determine how Augmented Reality can be more fully applied to computer science education.
  • Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments
    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W.

    Dey, A., Chen, H., Zhuang, C., Billinghurst, M., & Lindeman, R. W. (2018, October). Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 165-173). IEEE.

    @inproceedings{dey2018effects,
    title={Effects of Sharing Real-Time Multi-Sensory Heart Rate Feedback in Different Immersive Collaborative Virtual Environments},
    author={Dey, Arindam and Chen, Hao and Zhuang, Chang and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages={165--173},
    year={2018},
    organization={IEEE}
    }
    Collaboration is an important application area for virtual reality (VR). However, unlike in the real world, collaboration in VR misses important empathetic cues that can make collaborators aware of each other's emotional states. Providing physiological feedback, such as heart rate or respiration rate, to users in VR has been shown to create a positive impact in single user environments. In this paper, through a rigorous mixed-factorial user experiment, we evaluated how providing heart rate feedback to collaborators influences their collaboration in three different environments requiring different kinds of collaboration. We found that, when provided with real-time heart rate feedback, participants felt the presence of the collaborator more and felt that they better understood their collaborator's emotional state. Heart rate feedback also made participants feel more dominant when performing the task. We discuss the implications of this research for collaborative VR environments, provide design guidelines, and suggest directions for future research.
  • Sharing and Augmenting Emotion in Collaborative Mixed Reality
    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M.

    Hart, J. D., Piumsomboon, T., Lee, G., & Billinghurst, M. (2018, October). Sharing and Augmenting Emotion in Collaborative Mixed Reality. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 212-213). IEEE.

    @inproceedings{hart2018sharing,
    title={Sharing and Augmenting Emotion in Collaborative Mixed Reality},
    author={Hart, Jonathon D and Piumsomboon, Thammathip and Lee, Gun and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={212--213},
    year={2018},
    organization={IEEE}
    }
    We present a concept of emotion sharing and augmentation for collaborative mixed reality. To illustrate the ideal use case of such a system, we give two example scenarios. We describe our prototype system for capturing and augmenting emotion through facial expression, eye gaze, voice, and physiological data, sharing them through the users' virtual representations, and discuss future research directions with potential applications.
  • Filtering 3D Shared Surrounding Environments by Social Proximity in AR
    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W.

    Nassani, A., Bai, H., Lee, G., Langlotz, T., Billinghurst, M., & Lindeman, R. W. (2018, October). Filtering 3D Shared Surrounding Environments by Social Proximity in AR. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 123-124). IEEE.

    @inproceedings{nassani2018filtering,
    title={Filtering 3D Shared Surrounding Environments by Social Proximity in AR},
    author={Nassani, Alaeddin and Bai, Huidong and Lee, Gun and Langlotz, Tobias and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={123--124},
    year={2018},
    organization={IEEE}
    }
    In this poster, we explore the social sharing of surrounding environments on wearable Augmented Reality (AR) devices. In particular, we propose filtering the level of detail when sharing the surrounding environment, based on the social proximity between the viewer and the sharer. We tested the effect of this filter (varying levels of detail of the shared surrounding environment) on the sense of privacy from both viewer and sharer perspectives, and conducted a pilot study using HoloLens. We report on semi-structured questionnaire results and suggest future directions in the social sharing of surrounding environments.
  • The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation
    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M.

    Zhang, L., Ha, W., Bai, X., Chen, Y., & Billinghurst, M. (2018, October). The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 216-221). IEEE.

    @inproceedings{zhang2018effect,
    title={The Effect of AR Based Emotional Interaction Among Personified Physical Objects in Manual Operation},
    author={Zhang, Li and Ha, Weiping and Bai, Xiaoliang and Chen, Yongxing and Billinghurst, Mark},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={216--221},
    year={2018},
    organization={IEEE}
    }
    In this paper, we explore how Augmented Reality (AR) and anthropomorphism can be used to assign emotions to common physical objects based on their needs. We developed a novel emotional interaction model among personified physical objects so that they could react to other objects by changing virtual facial expressions. To explore the effect of such an emotional interface, we conducted a user study comparing three types of virtual cues shown on the real objects: (1) information only, (2) emotion only and (3) both information and emotional cues. A significant difference was found in task completion time and the quality of work when adding emotional cues to an informational AR-based guiding system. This implies that adding emotion feedback to informational cues may produce better task results than using informational cues alone.
  • Do You Know What I Mean? An MR-Based Collaborative Platform
    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., Wang, S.

    Wang, P., Zhang, S., Bai, X., Billinghurst, M., He, W., Zhang, L., ... & Wang, S. (2018, October). Do you know what i mean? an mr-based collaborative platform. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (pp. 77-78). IEEE.

    @inproceedings{wang2018you,
    title={Do you know what i mean? an mr-based collaborative platform},
    author={Wang, Peng and Zhang, Shusheng and Bai, Xiaoliang and Billinghurst, Mark and He, Weiping and Zhang, Li and Du, Jiaxiang and Wang, Shuxia},
    booktitle={2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
    pages={77--78},
    year={2018},
    organization={IEEE}
    }
    Mixed Reality (MR) technology can be used to create unique collaborative experiences. In this paper, we propose a new remote collaboration platform using MR and eye-tracking that enables a remote helper to assist a local worker in an assembly task. We present results from research exploring the effect of sharing virtual gaze and annotation cues in an MR-based projector interface for remote collaboration. The key advantage compared to other remote collaborative MR interfaces is that it projects the remote expert's eye gaze into the real worksite to improve co-presence. The prototype system was evaluated with a pilot study comparing two conditions: POINTER and ET (eye-tracker cues). We observed that task completion performance was better in the ET condition, and that sharing gaze significantly improved awareness of each other's focus and co-presence.
  • 2017
  • Mixed Reality Collaboration through Sharing a Live Panorama
    Gun A. Lee, Theophilus Teo, Seungwon Kim, Mark Billinghurst

    Gun A. Lee, Theophilus Teo, Seungwon Kim, and Mark Billinghurst. 2017. Mixed reality collaboration through sharing a live panorama. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (SA '17). ACM, New York, NY, USA, Article 14, 4 pages. http://doi.acm.org/10.1145/3132787.3139203

    @inproceedings{Lee:2017:MRC:3132787.3139203,
    author = {Lee, Gun A. and Teo, Theophilus and Kim, Seungwon and Billinghurst, Mark},
    title = {Mixed Reality Collaboration Through Sharing a Live Panorama},
    booktitle = {SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    series = {SA '17},
    year = {2017},
    isbn = {978-1-4503-5410-3},
    location = {Bangkok, Thailand},
    pages = {14:1--14:4},
    articleno = {14},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3132787.3139203},
    doi = {10.1145/3132787.3139203},
    acmid = {3139203},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {panorama, remote collaboration, shared experience},
    }
    One of the popular features on modern social networking platforms is sharing live 360 panorama video. This research investigates how to further improve shared live panorama based collaborative experiences by applying Mixed Reality (MR) technology. SharedSphere is a wearable MR remote collaboration system. In addition to sharing a live captured immersive panorama, SharedSphere enriches the collaboration by overlaying MR visualisation of non-verbal communication cues (e.g., view awareness and gesture cues). User feedback collected through a preliminary user study indicated that sharing live 360 panorama video was beneficial, providing a more immersive experience and supporting view independence. Users also felt that the view awareness cues were helpful for understanding the remote collaborator’s focus.
  • User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors
    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst

    Gun Lee, Omprakash Rudhru, Hye Sun Park, Ho Won Kim, and Mark Billinghurst. User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 109-116. http://dx.doi.org/10.2312/egve.20171347

    @inproceedings {egve.20171347,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{User Interface Agents for Guiding Interaction with Augmented Virtual Mirrors}},
    author = {Lee, Gun A. and Rudhru, Omprakash and Park, Hye Sun and Kim, Ho Won and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171347}
    }
    This research investigates using user interface (UI) agents for guiding gesture based interaction with Augmented Virtual Mirrors. Compared to prior work in gesture interaction, where graphical symbols are used for guiding user interaction, we propose using UI agents. We explore two approaches for using UI agents: 1) using a UI agent as a delayed cursor and 2) using a UI agent as an interactive button. We conducted two user studies to evaluate the proposed designs. The results from the user studies show that UI agents are effective for guiding user interactions, providing visual cues in a similar way to a traditional graphical user interface, while also being useful for emotionally engaging users.
  • Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze
    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst

    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst. 2017. Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 197-204. http://dx.doi.org/10.2312/egve.20171359

    @inproceedings {egve.20171359,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze}},
    author = {Lee, Gun A. and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171359}
    }
    To improve remote collaboration in video conferencing systems, researchers have been investigating augmenting visual cues onto a shared live video stream. In such systems, a person wearing a head-mounted display (HMD) and camera can share her view of the surrounding real-world with a remote collaborator to receive assistance on a real-world task. While this concept of augmented video conferencing (AVC) has been actively investigated, there has been little research on how sharing gaze cues might affect the collaboration in video conferencing. This paper investigates how sharing gaze in both directions between a local worker and remote helper in an AVC system affects the collaboration and communication. Using a prototype AVC system that shares the eye gaze of both users, we conducted a user study that compares four conditions with different combinations of eye gaze sharing between the two users. The results showed that sharing each other’s gaze significantly improved collaboration and communication.
  • Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst

    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst. 2017. Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 36-39. https://doi.org/10.1109/3DUI.2017.7893315

    @INPROCEEDINGS{7893315,
    author={T. Piumsomboon and G. Lee and R. W. Lindeman and M. Billinghurst},
    booktitle={2017 IEEE Symposium on 3D User Interfaces (3DUI)},
    title={Exploring natural eye-gaze-based interaction for immersive virtual reality},
    year={2017},
    volume={},
    number={},
    pages={36-39},
    keywords={gaze tracking;gesture recognition;helmet mounted displays;virtual reality;Duo-Reticles;Nod and Roll;Radial Pursuit;cluttered-object selection;eye tracking technology;eye-gaze selection;head-gesture-based interaction;head-mounted display;immersive virtual reality;inertial reticles;natural eye movements;natural eye-gaze-based interaction;smooth pursuit;vestibulo-ocular reflex;Electronic mail;Erbium;Gaze tracking;Painting;Portable computers;Resists;Two dimensional displays;H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction styles},
    doi={10.1109/3DUI.2017.7893315},
    ISSN={},
    month={March},}
    Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
  • Enhancing player engagement through game balancing in digitally augmented physical games
    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C.

    Altimira, D., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C. (2017). Enhancing player engagement through game balancing in digitally augmented physical games. International Journal of Human-Computer Studies, 103, 35-47.

    @article{altimira2017enhancing,
    title={Enhancing player engagement through game balancing in digitally augmented physical games},
    author={Altimira, David and Clarke, Jenny and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph and others},
    journal={International Journal of Human-Computer Studies},
    volume={103},
    pages={35--47},
    year={2017},
    publisher={Elsevier}
    }
    Game balancing can be used to compensate for differences in players' skills, in particular in games where players compete against each other. It can help provide the right level of challenge and hence enhance engagement. However, there is a lack of understanding of game balancing design and how different game adjustments affect player engagement. This understanding is important for the design of balanced physical games. In this paper we report on how altering the game equipment in a digitally augmented table tennis game, such as changing the table size and bat-head size statically and dynamically, can affect game balancing and player engagement. We found these adjustments enhanced player engagement compared to the no-adjustment condition. Understanding how the adjustments impacted player engagement helped us derive a set of balancing strategies to facilitate engaging game experiences. We hope that this understanding can contribute to improving physical activity experiences and encourage people to engage in physical activity.
  • Effects of sharing physiological states of players in a collaborative virtual reality gameplay
    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M.

    Dey, A., Piumsomboon, T., Lee, Y., & Billinghurst, M. (2017, May). Effects of sharing physiological states of players in a collaborative virtual reality gameplay. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 4045-4056). ACM.

    @inproceedings{dey2017effects,
    title={Effects of sharing physiological states of players in a collaborative virtual reality gameplay},
    author={Dey, Arindam and Piumsomboon, Thammathip and Lee, Youngho and Billinghurst, Mark},
    booktitle={Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems},
    pages={4045--4056},
    year={2017},
    organization={ACM}
    }
    Interfaces for collaborative tasks, such as multiplayer games, can enable more effective and enjoyable collaboration. However, in these systems, the emotional states of the users are often not communicated properly due to their remoteness from one another. In this paper, we investigate the effects of showing the emotional state of one collaborator to the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such feedback was absent. The games had significant main effects on the overall emotional experience.
  • User evaluation of hand gestures for designing an intelligent in-vehicle interface
    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M.

    Jahani, H., Alyamani, H. J., Kavakli, M., Dey, A., & Billinghurst, M. (2017, May). User evaluation of hand gestures for designing an intelligent in-vehicle interface. In International Conference on Design Science Research in Information System and Technology (pp. 104-121). Springer, Cham.

    @inproceedings{jahani2017user,
    title={User evaluation of hand gestures for designing an intelligent in-vehicle interface},
    author={Jahani, Hessam and Alyamani, Hasan J and Kavakli, Manolya and Dey, Arindam and Billinghurst, Mark},
    booktitle={International Conference on Design Science Research in Information System and Technology},
    pages={104--121},
    year={2017},
    organization={Springer}
    }
    Driving a car is a high cognitive-load task requiring full attention behind the wheel. Intelligent navigation, transportation, and in-vehicle interfaces have introduced a safer and less demanding driving experience. However, existing interaction systems still fall short of the requirements of the actual user experience. Hand gesture, as an interaction medium, is natural and less visually demanding while driving. This paper presents a user study with 79 participants to validate mid-air gestures for 18 major in-vehicle secondary tasks. We present a detailed analysis of 900 mid-air gestures, investigating preferences of gestures for in-vehicle tasks, their physical affordance, and driving errors. The outcomes demonstrate that using mid-air gestures reduces driving errors by up to 50% compared to traditional air-conditioning control. The results can be used for the development of vision-based in-vehicle gestural interfaces.
  • Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals
    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M.

    Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M. (2017, June). Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals. In International Conference on Artificial Intelligence in Education (pp. 450-454). Springer, Cham.

    @inproceedings{almiyad2017intelligent,
    title={Intelligent Augmented Reality Tutoring for Physical Tasks with Medical Professionals},
    author={Almiyad, Mohammed A and Oakden-Rayner, Luke and Weerasinghe, Amali and Billinghurst, Mark},
    booktitle={International Conference on Artificial Intelligence in Education},
    pages={450--454},
    year={2017},
    organization={Springer}
    }
    Percutaneous radiology procedures often require the repeated use of medical radiation in the form of computed tomography (CT) scanning, to demonstrate the position of the needle in the underlying tissues. The angle of the insertion and the distance travelled by the needle inside the patient play a major role in successful procedures, and must be estimated by the practitioner and confirmed periodically by the use of the scanner. Junior radiology trainees, who are already highly trained professionals, currently learn this task “on-the-job” by performing the procedures on real patients with varying levels of guidance. Therefore, we present a novel Augmented Reality (AR)-based system that provides multiple layers of intuitive and adaptive feedback to assist junior radiologists in achieving competency in image-guided procedures.
  • Augmented reality entertainment: taking gaming out of the box
    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H.

    Von Itzstein, G. S., Billinghurst, M., Smith, R. T., & Thomas, B. H. (2017). Augmented reality entertainment: taking gaming out of the box. Encyclopedia of Computer Graphics and Games, 1-9.

    @article{von2017augmented,
    title={Augmented reality entertainment: taking gaming out of the box},
    author={Von Itzstein, G Stewart and Billinghurst, Mark and Smith, Ross T and Thomas, Bruce H},
    journal={Encyclopedia of Computer Graphics and Games},
    pages={1--9},
    year={2017},
    publisher={Springer}
    }
    In this chapter, we provide an overview of using AR for gaming and entertainment, one of the most popular AR application areas. There are many possible AR entertainment applications. For example, the Pokémon Go mobile phone game has an AR element that allows people to see virtual Pokémon in the live camera view, seemingly inhabiting the real world. In this case, Pokémon Go satisfies Azuma’s three AR criteria: the virtual Pokémon appear in the real world, the user can interact with them, and they appear fixed in space.
  • Estimating Gaze Depth Using Multi-Layer Perceptron
    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., Lee, G., Kim, S., & Billinghurst, M.

    Lee, Y., Shin, C., Plopski, A., Itoh, Y., Piumsomboon, T., Dey, A., ... & Billinghurst, M. (2017, June). Estimating Gaze Depth Using Multi-Layer Perceptron. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 26-29). IEEE.

    @inproceedings{lee2017estimating,
    title={Estimating Gaze Depth Using Multi-Layer Perceptron},
    author={Lee, Youngho and Shin, Choonsung and Plopski, Alexander and Itoh, Yuta and Piumsomboon, Thammathip and Dey, Arindam and Lee, Gun and Kim, Seungwon and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={26--29},
    year={2017},
    organization={IEEE}
    }
    In this paper we describe a new method for determining gaze depth in a head-mounted eye tracker. Eye trackers are being incorporated into head-mounted displays (HMDs), and eye gaze is being used for interaction in Virtual and Augmented Reality. For some interaction methods, it is important to accurately measure not only the x- and y-direction of the eye gaze but especially the focal depth. Eye-tracking technology generally has high accuracy in the x- and y-directions, but not in depth. We used a binocular gaze tracker with two eye cameras, and the gaze vectors were input to an MLP neural network for training and estimation. For the performance evaluation, data was obtained from 13 people gazing at fixed points at distances from 1m to 5m. Classifying gaze into fixed distances produced an average classification error of nearly 10%, and an average error distance of 0.42m. This is sufficient for some Augmented Reality applications, but more research is needed to estimate a user’s gaze moving in continuous space.
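    The pipeline the abstract describes (binocular gaze features fed to an MLP that classifies fixation distance) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the IPD value, noise level, network size, and training schedule are all assumptions, and the input is a synthetic vergence angle rather than real eye-tracker data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: the vergence angle between the
# two eyes' gaze rays shrinks as fixation distance grows.
IPD = 0.065                                      # assumed inter-pupillary distance (m)
DISTANCES = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # fixation depths (m), one class each

def make_samples(n_per_class, noise=0.001):
    X, y = [], []
    for label, d in enumerate(DISTANCES):
        angle = 2.0 * np.arctan(IPD / (2.0 * d))  # ideal vergence angle (rad)
        X.append(angle + rng.normal(0.0, noise, n_per_class))
        y += [label] * n_per_class
    return np.concatenate(X)[:, None], np.array(y)

X, y = make_samples(200)
Xs = (X - X.mean()) / X.std()          # standardise the input feature
Y = np.eye(len(DISTANCES))[y]          # one-hot targets

# One-hidden-layer MLP with softmax output, trained by batch gradient descent.
H, C = 16, len(DISTANCES)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, C)); b2 = np.zeros(C)
for _ in range(1500):
    h = np.tanh(Xs @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    g = (p - Y) / len(Xs)              # softmax cross-entropy gradient
    dW2, db2 = h.T @ g, g.sum(0)
    dh = (g @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = Xs.T @ dh, dh.sum(0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

pred = (np.tanh(Xs @ W1 + b1) @ W2 + b2).argmax(1)
accuracy = (pred == y).mean()
```

    On this idealised data the small network classifies nearly all samples correctly; the paper's roughly 10% error reflects the noise of real eye-tracking measurements.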
  • Empathic mixed reality: Sharing what you feel and interacting with what you see
    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M.

    Piumsomboon, T., Lee, Y., Lee, G. A., Dey, A., & Billinghurst, M. (2017, June). Empathic mixed reality: Sharing what you feel and interacting with what you see. In 2017 International Symposium on Ubiquitous Virtual Reality (ISUVR) (pp. 38-41). IEEE.

    @inproceedings{piumsomboon2017empathic,
    title={Empathic mixed reality: Sharing what you feel and interacting with what you see},
    author={Piumsomboon, Thammathip and Lee, Youngho and Lee, Gun A and Dey, Arindam and Billinghurst, Mark},
    booktitle={2017 International Symposium on Ubiquitous Virtual Reality (ISUVR)},
    pages={38--41},
    year={2017},
    organization={IEEE}
    }
    Empathic Computing is a research field that aims to use technology to create deeper shared understanding or empathy between people. At the same time, Mixed Reality (MR) technology provides an immersive experience that can make an ideal interface for collaboration. In this paper, we present some of our research into how MR technology can be applied to creating Empathic Computing experiences. This includes exploring how to share gaze in a remote collaboration between Augmented Reality (AR) and Virtual Reality (VR) environments, using physiological signals to enhance collaborative VR, and supporting interaction through eye-gaze in VR. Early outcomes indicate that as we design collaborative interfaces to enhance empathy between people, this could also benefit the personal experience of the individual interacting with the interface.
  • The Social AR Continuum: Concept and User Study
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., Hoermann, S., & Lindeman, R. W. (2017, October). [POSTER] The Social AR Continuum: Concept and User Study. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 7-8). IEEE.

    @inproceedings{nassani2017poster,
    title={[POSTER] The Social AR Continuum: Concept and User Study},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Hoermann, Simon and Lindeman, Robert W},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={7--8},
    year={2017},
    organization={IEEE}
    }
    In this poster, we describe The Social AR Continuum, a space that encompasses different dimensions of Augmented Reality (AR) for sharing social experiences. We explore various dimensions, discuss options for each dimension, and brainstorm possible scenarios where these options might be useful. We describe a prototype interface using the contact placement dimension, and report on feedback from potential users which supports its usefulness for visualising social contacts. Based on this concept work, we suggest user studies in the social AR space, and give insights into future directions.
  • Mutually Shared Gaze in Augmented Video Conference
    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M.

    Lee, G., Kim, S., Lee, Y., Dey, A., Piumsomboon, T., Norman, M., & Billinghurst, M. (2017, October). Mutually Shared Gaze in Augmented Video Conference. In Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017 (pp. 79-80). Institute of Electrical and Electronics Engineers Inc.

    @inproceedings{lee2017mutually,
    title={Mutually Shared Gaze in Augmented Video Conference},
    author={Lee, Gun and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    booktitle={Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017},
    pages={79--80},
    year={2017},
    organization={Institute of Electrical and Electronics Engineers Inc.}
    }
    Augmenting video conference with additional visual cues has been studied to improve remote collaboration. A common setup is a person wearing a head-mounted display (HMD) and camera sharing her view of the workspace with a remote collaborator and getting assistance on a real-world task. While this configuration has been extensively studied, there has been little research on how sharing gaze cues might affect the collaboration. This research investigates how sharing gaze in both directions between a local worker and remote helper affects the collaboration and communication. We developed a prototype system that shares the eye gaze of both users, and conducted a user study. Preliminary results showed that sharing gaze significantly improves the awareness of each other's focus, hence improving collaboration.
  • CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues
    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Dey, A., Ens, B., Lee, G., & Billinghurst, M. (2017, October). [POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues. In 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (pp. 218-219). IEEE.

    @inproceedings{piumsomboon2017poster,
    title={[POSTER] CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues},
    author={Piumsomboon, Thammathip and Dey, Arindam and Ens, Barrett and Lee, Gun and Billinghurst, Mark},
    booktitle={2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)},
    pages={218--219},
    year={2017},
    organization={IEEE}
    }
    We present CoVAR, a novel Virtual Reality (VR) and Augmented Reality (AR) system for remote collaboration. It supports collaboration between AR and VR users by sharing a 3D reconstruction of the AR user's environment. To enhance this mixed-platform collaboration, it provides natural inputs such as eye gaze and hand gestures, remote embodiment through an avatar's head and hands, and awareness cues for field-of-view and gaze. In this paper, we describe the system architecture, setup and calibration procedures, input methods and interaction, and collaboration enhancement features.
  • Exhibition approach using an AR and VR pillar
    See, Z. S., Sunar, M. S., Billinghurst, M., Dey, A., Santano, D., Esmaeili, H., & Thwaites, H.

    See, Z. S., Sunar, M. S., Billinghurst, M., Dey, A., Santano, D., Esmaeili, H., & Thwaites, H. (2017, November). Exhibition approach using an AR and VR pillar. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 8). ACM.

    @inproceedings{see2017exhibition,
    title={Exhibition approach using an AR and VR pillar},
    author={See, Zi Siang and Sunar, Mohd Shahrizal and Billinghurst, Mark and Dey, Arindam and Santano, Delas and Esmaeili, Human and Thwaites, Harold},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={8},
    year={2017},
    organization={ACM}
    }
    This demonstration presents the development of an Augmented Reality (AR) and Virtual Reality (VR) pillar, a novel approach for showing AR and VR content in a public setting. A pillar in a public exhibition venue was converted into a four-sided AR and VR showcase. A cultural heritage theme, Boatbuilders of Pangkor, was featured in an experiment with the AR and VR Pillar. Multimedia tablets and mobile AR head-mounted displays (HMDs) were freely provided for public visitors to experience the multisensory content demonstrated on the pillar. The content included AR-based videos, maps, images and text, and VR experiences that allowed visitors to view reconstructed 3D subjects and remote locations in a 360° virtual environment. A miniature version of the pillar will be used for the demonstration, where users can experience features of the prototype system.
  • Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie
    Khan, Humayun, Gun Lee, Simon Hoermann, Rory Clifford, Mark Billinghurst, and Robert W. Lindeman.

    Khan, Humayun, Gun Lee, Simon Hoermann, Rory Clifford, Mark Billinghurst, and Robert W. Lindeman. "Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie." (2017).

    @article{khan2017evaluating,
    title={Evaluating the Effects of Hand-gesture-based Interaction with Virtual Content in a 360 Movie},
    author={Khan, Humayun and Lee, Gun and Hoermann, Simon and Clifford, Rory and Billinghurst, Mark and Lindeman, Robert W},
    year={2017}
    }
    Head-mounted displays are becoming increasingly popular as home entertainment devices for viewing 360° movies. This paper explores the effects of adding gesture interaction with virtual content, and of two different hand-visualisation modes, on the 360° movie watching experience. The system in the study comprises a Leap Motion sensor to track the user’s hand and finger motions, in combination with a SoftKinetic RGB-D camera to capture the texture of the hands and arms. A 360° panoramic movie with embedded virtual objects was used as content. Four conditions, displaying either a point cloud of the real hand or a rigged computer-generated hand, with and without interaction, were evaluated. Presence, agency, embodiment, and ownership, as well as overall participant preference, were measured. Results showed that participants had a strong preference for the conditions with interactive virtual content, and they felt stronger embodiment and ownership. The comparison of the two hand visualisations showed that the display of the real hand elicited stronger ownership. There was no overall difference in presence between the four conditions. These findings suggest that adding interaction with virtual content could be beneficial to the overall user experience, and that interaction should be performed using the real-hand visualisation instead of the virtual hand if higher ownership is desired.
  • The effect of user embodiment in AV cinematic experience
    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., and Bartneck, C.

    Chen, J., Lee, G., Billinghurst, M., Lindeman, R. W., & Bartneck, C. (2017). The effect of user embodiment in AV cinematic experience.

    @article{chen2017effect,
    title={The effect of user embodiment in AV cinematic experience},
    author={Chen, Joshua and Lee, Gun and Billinghurst, Mark and Lindeman, Robert W and Bartneck, Christoph},
    year={2017}
    }
    Virtual Reality (VR) is becoming a popular medium for viewing immersive cinematic experiences using 360° panoramic movies and head-mounted displays. There is previous research on user embodiment in real-time rendered VR, but not in relation to cinematic VR based on 360° panoramic video. In this paper we explore the effects of introducing the user’s real body into cinematic VR experiences. We conducted a study evaluating how the type of movie and user embodiment affect the sense of presence and user engagement. We found that when participants were able to see their own body in the VR movie, there was a significant increase in the sense of Presence, yet user engagement was not significantly affected. We discuss the implications of the results and how this work can be expanded in the future.
  • A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs
    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M.

    Lee, Y., Piumsomboon, T., Ens, B., Lee, G., Dey, A., & Billinghurst, M. (2017, November). A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos (pp. 1-2). Eurographics Association.

    @inproceedings{lee2017gaze,
    title={A gaze-depth estimation technique with an implicit and continuous data acquisition for OST-HMDs},
    author={Lee, Youngho and Piumsomboon, Thammathip and Ens, Barrett and Lee, Gun and Dey, Arindam and Billinghurst, Mark},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments: Posters and Demos},
    pages={1--2},
    year={2017},
    organization={Eurographics Association}
    }

    The rapid development of machine learning algorithms can be leveraged for potential software solutions in many domains, including techniques for depth estimation of human eye gaze. In this paper, we propose an implicit and continuous data acquisition method for 3D gaze-depth estimation for an optical see-through head-mounted display (OST-HMD) equipped with an eye tracker. Our method constantly monitors and generates user gaze data for training our machine learning algorithm. The gaze data acquired through the eye tracker include the inter-pupillary distance (IPD) and the gaze distance to the real and virtual target for each eye.
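    For intuition about why the IPD signal carries depth information, the ideal geometric relationship between eye vergence and fixation depth can be written down directly; a learned model such as the one proposed refines this under real measurement noise. The sketch below is illustrative only (it assumes symmetric fixation straight ahead) and is not code from the paper.

```python
import math

def vergence_angle(ipd_m, depth_m):
    """Ideal vergence angle (rad) between the two eyes' gaze rays when
    fixating at a given depth straight ahead of the user."""
    return 2.0 * math.atan(ipd_m / (2.0 * depth_m))

def vergence_depth(ipd_m, vergence_rad):
    """Inverse: fixation depth (m) recovered from a measured vergence
    angle, assuming the same symmetric-fixation geometry."""
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)
```

    The two functions are exact inverses: for a 0.065 m IPD, `vergence_depth(0.065, vergence_angle(0.065, 2.0))` recovers a depth of 2.0 m.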

  • Exploring pupil dilation in emotional virtual reality environments.
    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W.

    Chen, H., Dey, A., Billinghurst, M., & Lindeman, R. W. (2017, November). Exploring pupil dilation in emotional virtual reality environments. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 169-176). Eurographics Association.

    @inproceedings{chen2017exploring,
    title={Exploring pupil dilation in emotional virtual reality environments},
    author={Chen, Hao and Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={169--176},
    year={2017},
    organization={Eurographics Association}
    }
    Previous investigations have shown that pupil dilation can be affected by emotive pictures, audio clips, and videos. In this paper, we explore how emotive Virtual Reality (VR) content can also cause pupil dilation. VR has been shown to be able to evoke negative and positive arousal in users when they are immersed in different virtual scenes. In our research, VR scenes were used as emotional triggers. Five emotional VR scenes were designed in our study, and each scene had five emotion segments: happiness, fear, anxiety, sadness, and disgust. While participants experienced the VR scenes, their pupil dilation and the brightness in the headset were captured. We found that both the negative and positive emotion segments produced pupil dilation in the VR environments. We also explored the effect of showing heart-beat cues to the users, and whether this could cause differences in pupil dilation. In our study, three different heart-beat cues were shown to users using a combination of three channels: haptic, audio, and visual. The results showed that the haptic-visual cue caused the most significant pupil dilation change from the baseline.
  • Collaborative View Configurations for Multi-user Interaction with a Wall-size Display
    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C.

    Kim, H., Kim, Y., Lee, G., Billinghurst, M., & Bartneck, C. (2017, November). Collaborative view configurations for multi-user interaction with a wall-size display. In Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments (pp. 189-196). Eurographics Association.

    @inproceedings{kim2017collaborative,
    title={Collaborative view configurations for multi-user interaction with a wall-size display},
    author={Kim, Hyungon and Kim, Yeongmi and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph},
    booktitle={Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments},
    pages={189--196},
    year={2017},
    organization={Eurographics Association}
    }
    This paper explores the effects of different collaborative view configurations on face-to-face collaboration using a wall-size display, and the relationship between view configuration and multi-user interaction. Three view configurations (shared view, split screen, and split screen with navigation information) for multi-user collaboration with a wall-size display were introduced and evaluated in a user study. From the experimental results, several insights for designing a virtual environment with a wall-size display are discussed. The shared view configuration does not disturb collaboration despite control conflicts, and can support effective collaboration. The split-screen view configuration supports independent work, but can divide users’ attention. Navigation information can reduce the interaction required for navigational tasks, although overall interaction performance may not increase.
  • Towards Optimization of Mid-air Gestures for In-vehicle Interactions
    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M.

    Hessam, J. F., Zancanaro, M., Kavakli, M., & Billinghurst, M. (2017, November). Towards optimization of mid-air gestures for in-vehicle interactions. In Proceedings of the 29th Australian Conference on Computer-Human Interaction (pp. 126-134). ACM.

    @inproceedings{hessam2017towards,
    title={Towards optimization of mid-air gestures for in-vehicle interactions},
    author={Hessam, Jahani F and Zancanaro, Massimo and Kavakli, Manolya and Billinghurst, Mark},
    booktitle={Proceedings of the 29th Australian Conference on Computer-Human Interaction},
    pages={126--134},
    year={2017},
    organization={ACM}
    }
    A mid-air gesture-based interface could provide a less cumbersome in-vehicle interface for a safer driving experience. Despite recent developments in gesture-driven technologies facilitating multi-touch and mid-air gestures, interface safety requirements, as well as an evaluation of gesture characteristics and functions, still need to be explored. This paper describes an optimization study of the previously developed GestDrive gesture vocabulary for in-vehicle secondary tasks. We investigate mid-air gestures and secondary tasks, their correlations, confusions, unintentional inputs, and consequential safety risks. Building on a statistical analysis, the results provide an optimized taxonomy breakdown for a user-centered gestural interface design that considers user preferences, requirements, performance, and safety issues.
  • Exploring Mixed-Scale Gesture Interaction
    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M.

    Ens, B., Quigley, A. J., Yeo, H. S., Irani, P., Piumsomboon, T., & Billinghurst, M. (2017). Exploring mixed-scale gesture interaction. SA'17 SIGGRAPH Asia 2017 Posters.

    @article{ens2017exploring,
    title={Exploring mixed-scale gesture interaction},
    author={Ens, Barrett and Quigley, Aaron John and Yeo, Hui Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    journal={SA'17 SIGGRAPH Asia 2017 Posters},
    year={2017},
    publisher={ACM}
    }
    This paper presents ongoing work toward a design exploration for combining microgestures with other types of gestures within the greater lexicon of gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a Hololens Augmented Reality display, using different combinations of wearable sensors.
  • Multi-Scale Gestural Interaction for Augmented Reality
    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M.

    Ens, B., Quigley, A., Yeo, H. S., Irani, P., & Billinghurst, M. (2017, November). Multi-scale gestural interaction for augmented reality. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 11). ACM.

    @inproceedings{ens2017multi,
    title={Multi-scale gestural interaction for augmented reality},
    author={Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={11},
    year={2017},
    organization={ACM}
    }

    We present a multi-scale gestural interface for augmented reality applications. With virtual objects, gestural interactions such as pointing and grasping can be convenient and intuitive; however, they are imprecise, socially awkward, and susceptible to fatigue. Our prototype application uses multiple sensors to detect gestures from both arm and hand motions (macro-scale) and finger gestures (micro-scale). Micro-gestures can provide precise input through a belt-worn sensor configuration, with the hand in a relaxed posture. We present an application that combines direct manipulation with microgestures for precise interaction, beyond the capabilities of direct manipulation alone.

  • Static local environment capturing and sharing for MR remote collaboration
    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M.

    Gao, L., Bai, H., Lindeman, R., & Billinghurst, M. (2017, November). Static local environment capturing and sharing for MR remote collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 17). ACM.

    @inproceedings{gao2017static,
    title={Static local environment capturing and sharing for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lindeman, Rob and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={17},
    year={2017},
    organization={ACM}
    }
    We present a Mixed Reality (MR) system that supports capturing the entire local physical work environment for remote collaboration in a large-scale workspace. By integrating the key-frames captured with an external depth sensor into a single 3D point-cloud data set, our system can reconstruct the entire local physical workspace in the VR world. The remote helper can then observe the local scene independently of the local user's current head and camera position, and provide gestural guidance even before the local user looks at the target object. We conducted a pilot study to evaluate the usability of the system by comparing it with our previous oriented-view system, which only shared the current camera view together with real-time head orientation data. Our results indicate that this entire-scene capturing and sharing system can significantly increase the remote helper's spatial awareness of the local work environment, especially in a large-scale workspace, and it gained an overwhelming user preference (80%) over the previous system.
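    The reconstruction step described above (merging depth key-frames into one point cloud) rests on back-projecting each depth image through a pinhole camera model and transforming the result by the key-frame's camera pose. A minimal sketch with illustrative function names, not the system's actual code:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth image (metres, shape (H, W)) into a 3D point cloud
    in the sensor frame using a pinhole camera model with focal lengths
    fx, fy and principal point (cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def to_world(points, R, t):
    """Transform sensor-frame points (N, 3) into a shared world frame
    using the key-frame's camera pose (rotation R, translation t)."""
    return points @ R.T + t
```

    Accumulating `to_world(backproject(...), R_k, t_k)` over all key-frames k yields the single point-cloud data set the abstract refers to.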
  • 6DoF input for hololens using vive controller
    Bai, H., Gao, L., & Billinghurst, M.

    Bai, H., Gao, L., & Billinghurst, M. (2017, November). 6DoF input for hololens using vive controller. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 4). ACM.

    @inproceedings{bai20176dof,
    title={6DoF input for hololens using vive controller},
    author={Bai, Huidong and Gao, Lei and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={4},
    year={2017},
    organization={ACM}
    }
    In this research we present a calibration method that enables high-accuracy six degree-of-freedom (6DoF) interaction for the Microsoft HoloLens using HTC Vive controllers. We calibrate the HoloLens's front color camera with the Vive lighthouse sensors by automatically tracking a reference image and a Vive tracker at startup. The Vive controllers' position and pose data are then transmitted to the HoloLens in real time via a Bluetooth connection, providing a more accurate and efficient input solution for manipulating augmented content than the default gesture or head-gaze interfaces.
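    A calibration like the one described ultimately needs a rigid transform between the Vive lighthouse frame and the HoloLens frame. One standard way to estimate such a transform from corresponding point observations is the Kabsch (SVD) method, sketched below; this is a plausible-implementation assumption, not the authors' actual procedure.

```python
import numpy as np

def align_frames(P_vive, P_holo):
    """Least-squares rigid transform (Kabsch method) mapping points
    measured in one tracking frame onto the same physical points
    measured in the other: R @ p + t.  Inputs are (N, 3) arrays of
    corresponding positions, e.g. tracked marker observations."""
    ca, cb = P_vive.mean(0), P_holo.mean(0)
    H = (P_vive - ca).T @ (P_holo - cb)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

    Given noiseless correspondences the recovered R and t are exact; with noisy tracking data the same formula gives the least-squares optimal alignment.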
  • Exploring enhancements for remote mixed reality collaboration
    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M.

    Piumsomboon, T., Day, A., Ens, B., Lee, Y., Lee, G., & Billinghurst, M. (2017, November). Exploring enhancements for remote mixed reality collaboration. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 16). ACM.

    @inproceedings{piumsomboon2017exploring,
    title={Exploring enhancements for remote mixed reality collaboration},
    author={Piumsomboon, Thammathip and Day, Arindam and Ens, Barrett and Lee, Youngho and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={16},
    year={2017},
    organization={ACM}
    }
    In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between Augmented Reality (AR) and Augmented Virtuality (AV) users. Awareness cues and an AV-Snap-to-AR interface were proposed for enhancing communication. Collaborative natural interaction and AV-User-Body-Scaling were implemented for enhancing interaction. We conducted an exploratory study examining the awareness cues and collaborative gaze, and the results showed the benefits of the proposed techniques for enhancing communication and interaction.
  • AR social continuum: representing social contacts
    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W.

    Nassani, A., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2017, November). AR social continuum: representing social contacts. In SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications (p. 6). ACM.

    @inproceedings{nassani2017ar,
    title={AR social continuum: representing social contacts},
    author={Nassani, Alaeddin and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH Asia 2017 Mobile Graphics \& Interactive Applications},
    pages={6},
    year={2017},
    organization={ACM}
    }
    One of the key problems with representing social networks in Augmented Reality (AR) is how to differentiate between contacts. In this paper we explore how visual and spatial cues based on social relationships can be used to represent contacts in social AR applications, making it easier to distinguish between them. Previous implementations of social AR have mostly focused on location-based visualization, with no attention to the social relationship to the user. In contrast, we explore how to visualise social relationships in mobile AR environments using proximity and visual fidelity filters. We ran a focus group to explore different options for representing social contacts in a mobile AR application. We also conducted a user study to test a head-worn AR prototype using the proximity and visual fidelity filters. We found that filtering social contacts on wearable AR is preferred and useful. We discuss the results of the focus group and the user study, and provide insights into directions for future work.
  • 2016
  • Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration
    Kunal Gupta, Gun A. Lee and Mark Billinghurst

    Kunal Gupta, Gun A. Lee and Mark Billinghurst. 2016. Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration. IEEE Transactions on Visualization and Computer Graphics Vol.22, No.11, pp.2413-2422. https://doi.org/10.1109/TVCG.2016.2593778

    @ARTICLE{7523400,
    author={K. Gupta and G. A. Lee and M. Billinghurst},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    title={Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration},
    year={2016},
    volume={22},
    number={11},
    pages={2413-2422},
    keywords={cameras;gaze tracking;helmet mounted displays;eye-tracking camera;gaze tracking;head-mounted camera;head-mounted display;remote collaboration;task space remote collaboration;virtual gaze information;virtual pointer;wearable interface;Cameras;Collaboration;Computers;Gaze tracking;Head;Prototypes;Teleconferencing;Computer conferencing;Computer-supported collaborative work;teleconferencing;videoconferencing},
    doi={10.1109/TVCG.2016.2593778},
    ISSN={1077-2626},
    month={Nov},}
    We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated with a formal user study. Comparing four conditions, (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues), we observed that task completion performance was best in the BOTH condition, significantly better than in the POINTER and EYE-TRACKER conditions individually. The use of eye tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be improved in further work.
  • A Remote Collaboration System with Empathy Glasses

    Y. Lee, K. Masai, K. Kunze, M. Sugimoto and M. Billinghurst. 2016. A Remote Collaboration System with Empathy Glasses. 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW), Merida, pp. 342-343. http://doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112

    @INPROCEEDINGS{7836533,
    author = {Y. Lee and K. Masai and K. Kunze and M. Sugimoto and M. Billinghurst},
    booktitle = {2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct)(ISMARW)},
    title = {A Remote Collaboration System with Empathy Glasses},
    year = {2016},
    volume = {00},
    number = {},
    pages = {342-343},
    keywords={Collaboration;Glass;Heart rate;Biomedical monitoring;Cameras;Hardware;Computers},
    doi = {10.1109/ISMAR-Adjunct.2016.0112},
    url = {doi.ieeecomputersociety.org/10.1109/ISMAR-Adjunct.2016.0112},
    ISSN = {},
    month={Sept.}
    }
    In this paper, we describe a demonstration of a remote collaboration system using Empathy Glasses. Using our system, a local worker can share a view of their environment with a remote helper, as well as their gaze, facial expressions, and physiological signals. The remote user can send back visual cues via a see-through head-mounted display to help the local worker perform better on a real-world task. The system also provides some indication of the remote user's facial expression using face-tracking technology.
  • Empathy Glasses
    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst

    Katsutoshi Masai, Kai Kunze, Maki Sugimoto, and Mark Billinghurst. 2016. Empathy Glasses. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). ACM, New York, NY, USA, 1257-1263. https://doi.org/10.1145/2851581.2892370

    @inproceedings{Masai:2016:EG:2851581.2892370,
    author = {Masai, Katsutoshi and Kunze, Kai and Sugimoto, Maki and Billinghurst, Mark},
    title = {Empathy Glasses},
    booktitle = {Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
    series = {CHI EA '16},
    year = {2016},
    isbn = {978-1-4503-4082-3},
    location = {San Jose, California, USA},
    pages = {1257--1263},
    numpages = {7},
    url = {http://doi.acm.org/10.1145/2851581.2892370},
    doi = {10.1145/2851581.2892370},
    acmid = {2892370},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {emotional interface, facial expression, remote collaboration, wearables},
    }
    In this paper, we describe Empathy Glasses, a head worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies together: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a head worn camera, and (4) a see-through head mounted display, with a focus on remote collaboration. Using the system, a local user can send their information and a view of their environment to a remote helper who can send back visual cues on the local user's see-through display to help them perform a real world task. A pilot user study was conducted to explore how effective the Empathy Glasses were at supporting remote collaboration. We describe the implications that can be drawn from this user study.
  • A comparative study of simulated augmented reality displays for vehicle navigation
    Jose, R., Lee, G. A., & Billinghurst, M.

    Jose, R., Lee, G. A., & Billinghurst, M. (2016, November). A comparative study of simulated augmented reality displays for vehicle navigation. In Proceedings of the 28th Australian conference on computer-human interaction (pp. 40-48). ACM.

    @inproceedings{jose2016comparative,
    title={A comparative study of simulated augmented reality displays for vehicle navigation},
    author={Jose, Richie and Lee, Gun A and Billinghurst, Mark},
    booktitle={Proceedings of the 28th Australian conference on computer-human interaction},
    pages={40--48},
    year={2016},
    organization={ACM}
    }
    In this paper we report on a user study in a simulated environment that compares three types of Augmented Reality (AR) displays for assisting with car navigation: Heads Up Display (HUD), Head Mounted Display (HMD) and Heads Down Display (HDD). The virtual cues shown on each interface were the same, but there was a significant difference in driver behaviour and preference between interfaces. Overall, users performed better and preferred the HUD over the HDD, and the HMD was ranked lowest. These results have implications for people wanting to use AR cues for car navigation.
  • A Systematic Review of Usability Studies in Augmented Reality between 2005 and 2014
    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E.

    Dey, A., Billinghurst, M., Lindeman, R. W., & Swan II, J. E. (2016, September). A systematic review of usability studies in augmented reality between 2005 and 2014. In 2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct) (pp. 49-50). IEEE.

    @inproceedings{dey2016systematic,
    title={A systematic review of usability studies in augmented reality between 2005 and 2014},
    author={Dey, Arindam and Billinghurst, Mark and Lindeman, Robert W and Swan II, J Edward},
    booktitle={2016 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct)},
    pages={49--50},
    year={2016},
    organization={IEEE}
    }
    Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review most AR papers published between 2005 and 2014 that include user studies. A total of 291 papers have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We also identify areas where there have been few user studies, and opportunities for future research. This poster describes the methodology of the review and the classifications of AR research that have emerged.
  • Augmented Reality Annotation for Social Video Sharing

    Nassani, A., Kim, H., Lee, G., Billinghurst, M., Langlotz, T., & Lindeman, R. W. (2016, November). Augmented reality annotation for social video sharing. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 9). ACM.

    @inproceedings{nassani2016augmented,
    title={Augmented reality annotation for social video sharing},
    author={Nassani, Alaeddin and Kim, Hyungon and Lee, Gun and Billinghurst, Mark and Langlotz, Tobias and Lindeman, Robert W},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={9},
    year={2016},
    organization={ACM}
    }
    This paper explores different visual interfaces for sharing comments on social live video streaming platforms. So far, comments are displayed separately from the video, making it hard to relate the comments to events in the video. In this work we investigate an Augmented Reality (AR) interface displaying comments directly on the streamed live video. Our prototype allows remote spectators to view the streamed live video with different interfaces for displaying the comments. We conducted a user study to compare different ways of visualising comments and found that users prefer having comments in the AR view rather than in a separate list. We discuss the implications of this research and directions for future work.
  • Digitally Augmenting Sports: An Opportunity for Exploring and Understanding Novel Balancing Techniques
    Altimira, D., Mueller, F. F., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C.

    Altimira, D., Mueller, F. F., Clarke, J., Lee, G., Billinghurst, M., & Bartneck, C. (2016, May). Digitally augmenting sports: An opportunity for exploring and understanding novel balancing techniques. In Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 1681-1691). ACM.

    @inproceedings{altimira2016digitally,
    title={Digitally augmenting sports: An opportunity for exploring and understanding novel balancing techniques},
    author={Altimira, David and Mueller, Florian Floyd and Clarke, Jenny and Lee, Gun and Billinghurst, Mark and Bartneck, Christoph},
    booktitle={Proceedings of the 2016 CHI conference on human factors in computing systems},
    pages={1681--1691},
    year={2016},
    organization={ACM}
    }
    Using game balancing techniques can provide the right level of challenge and hence enhance player engagement for sport players with different skill levels. Digital technology can support and enhance balancing techniques in sports, for example, by adjusting players’ level of intensity based on their heart rate. However, there is limited knowledge on how to design such balancing and its impact on the user experience. To address this we created two novel balancing techniques enabled by digitally augmenting a table tennis table. We adjusted the more skilled player’s performance by inducing two different styles of play and studied the effects on game balancing and player engagement. We showed that by altering the more skilled player’s performance we can balance the game through: (i) encouraging game mistakes, and (ii) changing the style of play to one that is easier for the opponent to counteract. We outline the advantages and disadvantages of each approach, extending our understanding of game balancing design. We also show that digitally augmenting sports offers opportunities for novel balancing techniques while facilitating engaging experiences, guiding those interested in HCI and sports.
  • An oriented point-cloud view for MR remote collaboration
    Gao, L., Bai, H., Lee, G., & Billinghurst, M.

    Gao, L., Bai, H., Lee, G., & Billinghurst, M. (2016, November). An oriented point-cloud view for MR remote collaboration. In SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications (p. 8). ACM.

    @inproceedings{gao2016oriented,
    title={An oriented point-cloud view for MR remote collaboration},
    author={Gao, Lei and Bai, Huidong and Lee, Gun and Billinghurst, Mark},
    booktitle={SIGGRAPH ASIA 2016 Mobile Graphics and Interactive Applications},
    pages={8},
    year={2016},
    organization={ACM}
    }
    We present a Mixed Reality system for remote collaboration using Virtual Reality (VR) headsets with external depth cameras attached. By wirelessly sharing 3D point-cloud data of a local worker's workspace with a remote helper, and sharing the remote helper's hand gestures back to the local worker, the remote helper is able to assist the worker in performing manual tasks. Displaying the point-cloud video in a conventional way, such as a static front view in VR headsets, does not provide helpers with sufficient understanding of the spatial relationships between their hands and the remote surroundings. In contrast, we propose a Mixed Reality (MR) system that shares with the remote helper not only 3D captured environment data but also real-time orientation information about the worker's viewpoint. We conducted a pilot study to evaluate the usability of the system, and we found that the extra synchronized orientation data can make collaborators feel more connected spatially and mentally.
  • 2015
  • If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking
    Wen, J., Helton, W. S., & Billinghurst, M.

    Wen, J., Helton, W. S., & Billinghurst, M. (2015, March). If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 3). ACM.

    @inproceedings{wen2015if,
    title={If Reality Bites, Bite Back Virtually: Simulating Perfection in Augmented Reality Tracking},
    author={Wen, James and Helton, William S and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={3},
    year={2015},
    organization={ACM}
    }
    Augmented Reality (AR) on smartphones can be used to overlay virtual tags on the real world to show points of interest that people may want to visit. However, field tests have failed to validate the belief that AR-based tools would outperform map-based tools for such pedestrian navigation tasks. Assuming this is due to inaccuracies in the consumer GPS tracking used in handheld AR, we created a simulated environment that provided perfect tracking for AR and conducted experiments based on real-world navigation studies. We measured time-on-task performance for guided traversals on both desktop and head-mounted display systems and found that accurate tracking did validate the superior performance of AR-based navigation tools. We also measured performance for unguided recall traversals of previously traversed paths in order to investigate how navigation tools impact route memory.
  • Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization.
    Kim, H., Lee, G., & Billinghurst, M.

    Kim, H., Lee, G., & Billinghurst, M. (2015, March). Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization. In Proceedings of the 14th Annual ACM SIGCHI_NZ conference on Computer-Human Interaction (p. 2). ACM.

    @inproceedings{kim2015adaptive,
    title={Adaptive Interpupillary Distance Adjustment for Stereoscopic 3D Visualization},
    author={Kim, Hyungon and Lee, Gun and Billinghurst, Mark},
    booktitle={Proceedings of the 14th Annual ACM SIGCHI\_NZ conference on Computer-Human Interaction},
    pages={2},
    year={2015},
    organization={ACM}
    }
    Stereoscopic visualization creates illusions of depth through disparity between the images shown to the left and right eyes of the viewer. While stereoscopic visualization is widely adopted in immersive visualization systems to improve user experience, it can also cause visual discomfort if the stereoscopic viewing parameters are not adjusted appropriately. These parameters are usually adjusted manually based on human factors and the empirical knowledge of the developer or even the user. However, scenes with dynamic changes in scale and configuration can require continuous adjustment of these parameters while viewing. In this paper, we propose a method to adjust the interpupillary distance adaptively and automatically according to the configuration of the 3D scene, so that the visualized scene can maintain a sufficient stereo effect while reducing visual discomfort.
  • Intelligent Augmented Reality Training for Motherboard Assembly
    Westerfield, G., Mitrovic, A., & Billinghurst, M.

    Westerfield, G., Mitrovic, A., & Billinghurst, M. (2015). Intelligent augmented reality training for motherboard assembly. International Journal of Artificial Intelligence in Education, 25(1), 157-172.

    @article{westerfield2015intelligent,
    title={Intelligent augmented reality training for motherboard assembly},
    author={Westerfield, Giles and Mitrovic, Antonija and Billinghurst, Mark},
    journal={International Journal of Artificial Intelligence in Education},
    volume={25},
    number={1},
    pages={157--172},
    year={2015},
    publisher={Springer}
    }
    We investigate the combination of Augmented Reality (AR) with Intelligent Tutoring Systems (ITS) to assist with training for manual assembly tasks. Our approach combines AR graphics with adaptive guidance from the ITS to provide a more effective learning experience. We have developed a modular software framework for intelligent AR training systems, and a prototype based on this framework that teaches novice users how to assemble a computer motherboard. An evaluation found that our intelligent AR system improved test scores by 25% and that task performance was 30% faster compared to the same AR training system without intelligent support. We conclude that using an intelligent AR tutor can significantly improve learning compared to more traditional AR training.
  • User Defined Gestures for Augmented Virtual Mirrors: A Guessability Study
    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M.

    Lee, G. A., Wong, J., Park, H. S., Choi, J. S., Park, C. J., & Billinghurst, M. (2015, April). User defined gestures for augmented virtual mirrors: a guessability study. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 959-964). ACM.

    @inproceedings{lee2015user,
    title={User defined gestures for augmented virtual mirrors: a guessability study},
    author={Lee, Gun A and Wong, Jonathan and Park, Hye Sun and Choi, Jin Sung and Park, Chang Joon and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={959--964},
    year={2015},
    organization={ACM}
    }
    Public information displays are evolving from passive screens into more interactive and smarter ubiquitous computing platforms. In this research we investigate applying gesture interaction and Augmented Reality (AR) technologies to make public information displays more intuitive and easy to use. We focus especially on designing intuitive gesture-based interaction methods to use in combination with an augmented virtual mirror interface. As an initial step, we conducted a user study to identify the gestures that users feel are natural for performing common tasks when interacting with augmented virtual mirror displays. We report initial findings from the study, discuss design guidelines, and suggest future research directions.
  • Automatically Freezing Live Video for Annotation during Remote Collaboration
    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M.

    Kim, S., Lee, G. A., Ha, S., Sakata, N., & Billinghurst, M. (2015, April). Automatically freezing live video for annotation during remote collaboration. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1669-1674). ACM.

    @inproceedings{kim2015automatically,
    title={Automatically freezing live video for annotation during remote collaboration},
    author={Kim, Seungwon and Lee, Gun A and Ha, Sangtae and Sakata, Nobuchika and Billinghurst, Mark},
    booktitle={Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
    pages={1669--1674},
    year={2015},
    organization={ACM}
    }
    Drawing annotations on shared live video has been investigated as a tool for remote collaboration. However, if a local user changes the viewpoint of a shared live video while a remote user is drawing an annotation, the annotation is projected and drawn at the wrong place. Prior work suggested manually freezing the video while annotating to solve this issue, but this requires additional user input. We introduce a solution that automatically freezes the video, and present the results of a user study comparing it with manual-freeze and no-freeze conditions. Auto-freeze was the most preferred by both remote and local participants, who felt it best solved the issue of annotations appearing in the wrong place. With auto-freeze, remote users were able to draw annotations more quickly, while local users were able to understand the annotations more clearly.