Empathic Computing is a research field that aims to use technology to create a deeper shared understanding, or empathy, between people. At the same time, Mixed Reality (MR) technology provides an immersive experience that makes it an ideal interface for collaboration. In this paper, we present some of our research into how MR technology can be applied to create Empathic Computing experiences. This includes sharing gaze in remote collaboration between Augmented Reality (AR) and Virtual Reality (VR) environments, using physiological signals to enhance collaborative VR, and supporting eye-gaze interaction in VR. Early outcomes indicate that designing collaborative interfaces to enhance empathy between people may also benefit the personal experience of the individual interacting with the interface.
A method, computer program product, and video communication device are provided for transmitting video to a remote user. The video communication device includes a communication interface for establishing a communication session with the remote user's device, a camera for capturing images, and a control unit. The control unit is configured to obtain a three-dimensional model of the location, control the camera to capture video images of the location, determine the current orientation of the video communication device, and transfer the three-dimensional model and the orientation to the remote user's device together with a video stream comprising the captured video images.
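The transfer step described above can be sketched as a simple wire format that bundles the device orientation with each encoded video frame and a reference to the previously transmitted 3D model. This is a minimal illustration under assumed conventions, not the patent's actual protocol; the `FramePacket` fields and the `encode_packet` helper are hypothetical names:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DeviceOrientation:
    # Hypothetical orientation representation for the video communication device
    yaw: float
    pitch: float
    roll: float

@dataclass
class FramePacket:
    """One unit of the stream sent to the remote user's device."""
    orientation: DeviceOrientation  # current device orientation
    video_frame: bytes              # one encoded camera image
    model_id: str                   # 3D model is sent once, then referenced by id

def encode_packet(packet: FramePacket) -> bytes:
    """Serialize metadata as a JSON header, then append the raw frame bytes
    (illustrative wire format: 4-byte header length, header, frame)."""
    header = json.dumps({
        "orientation": asdict(packet.orientation),
        "model_id": packet.model_id,
        "frame_len": len(packet.video_frame),
    }).encode()
    return len(header).to_bytes(4, "big") + header + packet.video_frame

# Example: a 16-byte dummy frame tagged with the device's current orientation
pkt = FramePacket(DeviceOrientation(10.0, -5.0, 0.0), b"\x00" * 16, "site-42")
data = encode_packet(pkt)
```

On the receiving side, the remote user's device would read the header length, decode the orientation and model reference, and render the frame against the shared 3D model.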
Augmented reality (AR) is a technology that seamlessly adds virtual imagery over a view of the real world, so that it can be seen and interacted with in real time. Azuma says that an AR system is one that has three key defining characteristics (Azuma 1997): (1) it combines real and virtual content, (2) it is interactive in real time, and (3) it is registered in 3D.
Percutaneous radiology procedures often require the repeated use of medical radiation in the form of computed tomography (CT) scanning to show the position of the needle in the underlying tissues. The angle of insertion and the distance travelled by the needle inside the patient play a major role in successful procedures, and must be estimated by the practitioner and confirmed periodically using the scanner. Junior radiology trainees, who are already highly trained professionals, currently learn this task "on the job" by performing the procedures on real patients with varying levels of guidance. We therefore present a novel Augmented Reality (AR)-based system that provides multiple layers of intuitive and adaptive feedback to assist junior radiologists in achieving competency in image-guided procedures.
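The insertion angle and travelled distance that such a system feeds back can be derived from tracked needle positions with basic vector geometry. A minimal sketch, assuming the AR tracking provides the skin entry point, the needle tip, and a skin-surface normal; the function name and coordinate conventions are illustrative, not the paper's implementation:

```python
import math

def needle_metrics(entry, tip, surface_normal=(0.0, 0.0, 1.0)):
    """Return (depth, angle) for a tracked needle.

    depth: distance travelled from the entry point to the tip.
    angle: insertion angle in degrees relative to the surface normal
           (0 means the needle is perpendicular to the skin).
    """
    v = tuple(t - e for e, t in zip(entry, tip))          # entry -> tip vector
    depth = math.sqrt(sum(c * c for c in v))
    if depth == 0.0:
        return 0.0, 0.0
    cos_a = sum(a * b for a, b in zip(v, surface_normal)) / depth
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return depth, angle

# Example: a 3-4-5 triangle gives a 5-unit depth at ~36.87 degrees off-normal
depth, angle = needle_metrics((0.0, 0.0, 0.0), (3.0, 0.0, 4.0))
```

These two numbers are exactly the quantities the practitioner otherwise estimates and re-confirms with CT scans, so continuously displaying them is one plausible form of the adaptive feedback described above.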
Driving a car is a high cognitive-load task requiring full attention behind the wheel. Intelligent navigation, transportation, and in-vehicle interfaces have made the driving experience safer and less demanding. However, existing interaction systems still fall short of the requirements of the actual user experience. Hand gesture, as an interaction medium, is natural and less visually demanding while driving. This paper presents a user study with 79 participants to validate mid-air gestures for 18 major in-vehicle secondary tasks. We provide a detailed analysis of 900 mid-air gestures, investigating gesture preferences for in-vehicle tasks, their physical affordances, and driving errors. The outcomes demonstrate that employing mid-air gestures reduces driving errors by up to 50% compared to traditional air-conditioning controls. The results can inform the development of vision-based in-vehicle gestural interfaces.
Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
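The smooth-pursuit idea behind a technique like Radial Pursuit can be sketched as trajectory correlation: each candidate object moves along its own path, and the object whose motion best matches the recent gaze path is selected. The sketch below is a generic pursuit-selection illustration, not the paper's exact algorithm; the window length, threshold, and min-of-axes scoring are assumptions:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length sample sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def pursuit_select(gaze, objects, threshold=0.8):
    """Pick the object whose on-screen trajectory best matches the gaze path.

    gaze:    list of (x, y) gaze samples over a short time window
    objects: dict name -> list of (x, y) object positions over the same window
    Returns the best-matching object name, or None if nothing exceeds threshold.
    """
    gx = [p[0] for p in gaze]
    gy = [p[1] for p in gaze]
    best, best_score = None, threshold
    for name, traj in objects.items():
        ox = [p[0] for p in traj]
        oy = [p[1] for p in traj]
        # Correlate x and y components separately; require both to match
        score = min(pearson(gx, ox), pearson(gy, oy))
        if score > best_score:
            best, best_score = name, score
    return best
```

Moving cluttered objects apart along distinct radial paths, as Radial Pursuit does, makes these per-object trajectories easy to tell apart, which is what lets correlation disambiguate targets that initially overlap.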
Interfaces for collaborative tasks, such as multiplayer games, can enable more effective and enjoyable collaboration. However, in these systems the emotional states of the users are often not communicated well because of the users' remoteness from one another. In this paper, we investigate the effects of showing the emotional state of one collaborator to the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart rate of one player to the other. The two games elicited different emotions: one joyous, the other scary. We tested the effects of visualizing heart-rate feedback against conditions where such feedback was absent. The games had significant main effects on the overall emotional experience.
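Real-time heart-rate feedback of this kind is typically rendered as a simple visual mapping from the sensor reading to a display property. A hypothetical sketch, not the games' actual rendering, mapping beats per minute onto a calm-blue-to-alert-red color:

```python
def heart_rate_color(bpm, rest=60, max_hr=180):
    """Map a heart-rate reading (bpm) to an RGB color.

    Readings at or below the resting rate render fully blue (calm);
    readings at or above max_hr render fully red (aroused).
    The rest/max_hr bounds are illustrative defaults.
    """
    t = min(max((bpm - rest) / (max_hr - rest), 0.0), 1.0)
    return (int(255 * t), 0, int(255 * (1 - t)))
```

A partner seeing this color shift in real time gets an ambient read on the other player's arousal, which is the kind of emotional cue the study above visualizes.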
Game balancing can be used to compensate for differences in players' skills, particularly in games where players compete against each other. It can help provide the right level of challenge and hence enhance engagement. However, there is a lack of understanding of game-balancing design and of how different game adjustments affect player engagement. This understanding is important for the design of balanced physical games. In this paper we report on how altering the game equipment in a digitally augmented table tennis game, such as statically and dynamically changing the table size and bat-head size, can affect game balancing and player engagement. We found these adjustments enhanced player engagement compared to a no-adjustment condition. Understanding how the adjustments impacted player engagement helped us derive a set of balancing strategies to facilitate engaging game experiences. We hope this understanding can contribute to improving physical-activity experiences and encourage people to engage in physical activity.
According to previous research, head-mounted displays (HMDs) and head-worn cameras (HWCs) are useful for remote collaboration. These systems can be especially helpful for remote assistance on physical tasks, where a remote expert can see the workspace of the local user and provide feedback. However, an HWC often has a wide field of view, so it may be difficult to know exactly where the local user is looking. In this chapter we explore how head-mounted eye tracking can be used to convey gaze cues to a remote collaborator. We describe two prototypes that integrate an eye tracker with an HWC and a see-through HMD, together with results from user studies conducted with the systems. Overall, we found that showing gaze cues on a shared video appears to be better than providing the video on its own, and that combining gaze and pointing cues was the most effective interface for remote collaboration among the conditions tested. We also discuss the limitations of this work and present directions for future research.
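A basic step in such prototypes is mapping the eye tracker's gaze estimate into the pixel space of the shared video so a gaze cursor can be drawn for the remote expert. A minimal sketch under the assumption that the tracker reports gaze in normalized video coordinates (0..1 per axis); the function name is illustrative:

```python
def gaze_to_pixel(norm_gaze, frame_size):
    """Map a normalized gaze point (0..1 in each axis) to pixel coordinates
    in the shared video frame, clamping out-of-range estimates to the edge."""
    nx, ny = norm_gaze
    w, h = frame_size
    x = min(max(nx, 0.0), 1.0) * (w - 1)
    y = min(max(ny, 0.0), 1.0) * (h - 1)
    return round(x), round(y)
```

Clamping matters in practice because gaze estimates near the edge of the HWC's wide field of view can fall slightly outside the video frame; the cursor then pins to the frame border instead of disappearing.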
Teaching English to children who do not come from an English-speaking background is an interesting challenge for educators. In this paper, we present an Augmented Reality (AR) tool, TeachAR, for teaching basic English words (colors, shapes, and prepositions) to children for whom English is not a native language. In a pilot study we compared our AR system to a traditional non-AR system. The results indicate a potentially better learning outcome with the TeachAR system than with the traditional system, and that children enjoyed using the AR-based method. However, the study also revealed a few usability issues with the TeachAR interface, which we will address in future work.