Operating forklifts in warehouses is becoming an increasingly difficult task due to higher shelves and narrower aisles. In this paper we explore how Augmented Reality (AR) can aid forklift operators in performing their pallet racking and pick-up tasks by superimposing virtual depth cues over the real-world camera view. We developed a prototype interface and evaluated it using a remote-controlled toy forklift and a motion tracking system. We measured participants' performance on representative pallet handling tasks, finding a significant difference in performance when participants used the AR depth cues. The results show that AR could offer a novel, simple, and efficient solution to the problems forklift operators face during pallet handling.
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes that uses real-time depth detection to cast virtual shadows onto both virtual and real environments. A Kinect camera produces a depth map of the physical scene, which is fused into a single real-time transparent implicit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual phantom objects in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is demonstrated, and the findings are assessed using qualitative and quantitative methods, with comparisons against previous AR phantom-generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
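As an illustration of the phantom idea described above (a minimal sketch, not the authors' implementation), the following code composites a rendered virtual layer with a camera frame using a Kinect-style depth map: real pixels that are nearer than the virtual geometry occlude it, and virtual shadows darken the real surfaces. The array names and the simple multiplicative shadow model are assumptions made for illustration.

```python
import numpy as np

def composite_ar_frame(camera_rgb, real_depth, virtual_rgba, virtual_depth, shadow_mask):
    """Minimal phantom-style compositing sketch (illustrative, not the paper's code).

    camera_rgb    : (H, W, 3) float camera image, values in [0, 1]
    real_depth    : (H, W) depth of the real scene in metres (e.g. from a Kinect)
    virtual_rgba  : (H, W, 4) rendered virtual objects with alpha
    virtual_depth : (H, W) depth of the virtual render (np.inf where empty)
    shadow_mask   : (H, W) in [0, 1], 1 where a virtual object shadows real geometry
    """
    out = camera_rgb.copy()

    # Darken real surfaces where virtual shadows fall (assumed simple model).
    out *= (1.0 - 0.5 * shadow_mask)[..., None]

    # The real depth map acts as a "phantom": virtual pixels are drawn only
    # where the virtual surface is nearer than the real one.
    visible = (virtual_depth < real_depth) & (virtual_rgba[..., 3] > 0)
    alpha = virtual_rgba[..., 3:4]
    out = np.where(visible[..., None],
                   alpha * virtual_rgba[..., :3] + (1 - alpha) * out,
                   out)
    return np.clip(out, 0.0, 1.0)
```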
We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated in a formal user study comparing four conditions: (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues). We observed that task completion performance was best in the BOTH condition, with a significant difference over the POINTER and EYE-TRACKER conditions individually. The use of eye tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be addressed in future work.
Current head-mounted displays (HMDs) cover only a small section of the user's visual field, preventing the use of peripheral onset cues. This study investigates whether a centrally positioned cue can exploit the pursuit motion reflex to reorient attention away from the HMD more quickly than arrow cues. Thirty participants recruited from the University of Canterbury campus were asked to find and mark targets that appeared within a 200° visual arc of a central fixation position after receiving no directional cue, an arrow cue, or a pursuit motion cue. Fewer than half of the participants failed to extract directional information from the pursuit motion cues, and the remaining participants responded more slowly to the pursuit cues than to the arrow cues. Responses to arrow cues were only 40 ms slower than responses to targets appearing within the participants' peripheral vision, indicating that the search for a reflexive orientation cue may be unnecessary.
Visual search performance was studied using auditory cues delivered over a bone conduction headset. Two types of auditory cues were employed to evaluate their effectiveness in an attention-redirection task. Participants were required to locate and shoot targets at one of four locations on a screen when one of the two audio cues was delivered. Reaction and target acquisition times were significantly reduced when binaurally spatialised cues were used compared with unlocalisable, monophonic cues. This suggests that an auditory cue carrying directional information is far superior to a centred, monophonic cue for aiding search tasks or alerting the user to redirect attention in real-world space. The results demonstrate the effectiveness of a binaurally spatialised, dynamic cue and point to its potential use in an information-rich environment to provide useful and actionable information.
Augmented Reality (AR) is a technology that can overlay virtual elements on the real world in real time. This research studies how different AR elements can help forklift operators locate pallets as quickly as possible in a warehouse environment. We developed a simulated AR environment to test Egocentric and Exocentric virtual navigation cues. The virtual elements were displayed either on a HUD (head-up display), fixed on the forklift windshield in front of the operator, or in an HMD (head-mounted display), where the virtual cues are attached to the user's head. A user study found that the Egocentric AR view was preferred over the Exocentric condition and yielded better performance, while the HUD and HMD viewing methods produced no difference in performance.
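To make the egocentric cue concrete (this is a hypothetical sketch, not the authors' system), the helper below converts a pallet's world position into a bearing relative to the forklift's heading, the kind of quantity an in-view navigation arrow would be driven by. The function name, coordinate conventions, and sign convention are assumptions for illustration.

```python
import math

def egocentric_bearing(forklift_xy, forklift_heading_rad, target_xy):
    """Bearing to the target relative to the forklift's heading, in degrees.

    0° means straight ahead; positive values are to the operator's left
    (counter-clockwise), negative to the right. Illustrative sketch only.
    """
    dx = target_xy[0] - forklift_xy[0]
    dy = target_xy[1] - forklift_xy[1]
    world_angle = math.atan2(dy, dx)
    # Wrap the relative angle into (-pi, pi] so the arrow takes the short way round.
    rel = (world_angle - forklift_heading_rad + math.pi) % (2 * math.pi) - math.pi
    return math.degrees(rel)

# Example: pallet 10 m ahead and 10 m to the left of a forklift facing +y.
print(egocentric_bearing((0.0, 0.0), math.pi / 2, (-10.0, 10.0)))  # ≈ 45.0
```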
Purpose: Surgical navigation is typically shown on a computer display that is distant from the patient, making it difficult for the surgeon to watch the patient while performing a guided task. We investigate whether a lightweight, untracked, wearable display (such as Google Glass, which has the same size and weight as corrective glasses) can improve attentiveness to the surgical field in a simulated surgical task. Methods: Three displays were tested: a computer monitor, a peripheral display above the eye, and a through-the-lens display in front of the eye. Twelve subjects performed a task that required positioning and orienting a tracked tool on a plastic femur. Both wearable displays were tested on the dominant and non-dominant eyes of each subject. Attentiveness during the task was measured by the time taken to respond to randomly illuminated LEDs on the femur. Results: Attentiveness was improved with the wearable displays at the cost of a decrease in accuracy. The through-the-lens display performed better than the peripheral display. The peripheral display performed better when on the dominant eye, while the through-the-lens display performed better when on the non-dominant eye. Conclusions: Attentiveness to the surgical field can be improved with the use of a lightweight, untracked, wearable display. A through-the-lens display performs better than a peripheral display, and both perform better than a computer monitor. Eye dominance should be considered when positioning the display.
Binaural spatialization in the horizontal plane over a bone conduction headset (BCH) was investigated using inexpensive, commercially available hardware and software components. The aim of this study was to determine the minimum discernible angular difference between two successively spatialized sound sources. Localization accuracy and externalization were also explored. Statistically significant results were observed for angular separations of 10° and above. Localization accuracy was found to be significantly poorer than that reported for previous loudspeaker- and headphone-based reproduction. Localization errors between 30° and 35° were observed for stimuli presented to the front, back, and sides, and 92% of the participants reported externalization. The study demonstrates that an acceptable level of spatial resolution and externalization is achievable using an inexpensive bone conduction headset and software components.
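The study relied on commercial spatialization components; as a simplified illustration of horizontal-plane binaural rendering (not the components the study used), the sketch below applies Woodworth's interaural time difference model, itd = (r/c)(sin θ + θ), plus a crude level difference to a mono signal. The head radius, the ILD model, and all names are assumptions for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def spatialize(mono, sample_rate, azimuth_deg):
    """Crude horizontal-plane spatialization of a mono signal (illustrative only).

    Positive azimuth is to the right; valid for azimuths in [-90, 90] degrees.
    """
    theta = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (np.sin(theta) + theta)  # seconds
    delay = int(round(abs(itd) * sample_rate))                    # samples

    # Simple interaural level difference: attenuate the far ear (assumed model).
    near_gain, far_gain = 1.0, 1.0 - 0.3 * abs(np.sin(theta))

    delayed = np.concatenate([np.zeros(delay), mono])
    padded = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:   # source on the right: left ear is late and quiet
        left, right = far_gain * delayed, near_gain * padded
    else:
        left, right = near_gain * padded, far_gain * delayed
    return np.stack([left, right], axis=1)

# Example: a 0.5 s, 1 kHz tone placed 30 degrees to the right at 44.1 kHz.
sr = 44100
t = np.arange(int(0.5 * sr)) / sr
stereo = spatialize(np.sin(2 * np.pi * 1000 * t), sr, 30.0)
```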
In this paper, we describe Empathy Glasses, a head-worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a head-worn camera, and (4) a see-through head-mounted display, with a focus on remote collaboration. Using the system, a local user can send their information and a view of their environment to a remote helper, who can send back visual cues on the local user's see-through display to help them perform a real-world task. A pilot user study was conducted to explore how effective the Empathy Glasses were at supporting remote collaboration. We describe the implications that can be drawn from this user study.
Game balancing techniques can provide the right level of challenge and hence enhance engagement for sport players with different skill levels. Digital technology can support and enhance balancing techniques in sports, for example, by adjusting players' level of intensity based on their heart rate. However, there is limited knowledge of how to design such balancing and of its impact on the user experience. To address this, we created two novel balancing techniques enabled by digitally augmenting a table tennis table. We adjusted the more skilled player's performance by inducing two different styles of play and studied the effects on game balancing and player engagement. We showed that by altering the more skilled player's performance we can balance the game through: (i) encouraging game mistakes, and (ii) changing the style of play to one that is easier for the opponent to counteract. We outline the advantages and disadvantages of each approach, extending the understanding of game balancing design. We also show that digitally augmenting sports offers opportunities for novel balancing techniques while facilitating engaging experiences, providing guidance for those interested in HCI and sports.
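As a concrete illustration of the heart-rate-driven balancing the abstract mentions as an example (a hypothetical sketch, not the table tennis system itself), the snippet below nudges a difficulty parameter to keep a player near a target heart-rate zone. The gain, bounds, and all names are assumptions.

```python
def adjust_difficulty(difficulty, heart_rate, target_hr, gain=0.01,
                      lo=0.0, hi=1.0):
    """Proportional controller for heart-rate-based game balancing (sketch).

    Raises difficulty when the player is under the target zone and lowers it
    when they are over, clamped to [lo, hi]. Illustrative assumption only.
    """
    difficulty += gain * (target_hr - heart_rate)
    return max(lo, min(hi, difficulty))

# Example: player running hot at 150 bpm with a 130 bpm target -> difficulty eases off.
d = 0.5
for hr in (150, 145, 138, 131):
    d = adjust_difficulty(d, hr, target_hr=130)
print(round(d, 3))
```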