Thammathip Piumsomboon

Research Fellow

Dr. Thammathip Piumsomboon is a Research Fellow at the Empathic Computing Laboratory (ECL), where his research explores Empathic Mixed Reality for remote collaboration. Prior to his return to academia, he was a Unity Director at QuiverVision, an Augmented Reality (AR) start-up, where he developed mobile AR applications. He received a PhD in Computer Science from the University of Canterbury in 2015, supervised by Prof. Mark Billinghurst, Prof. Andy Cockburn, and Dr. Adrian Clark. During his PhD, he was a research assistant at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ). His research focused on developing novel AR interfaces and exploring natural interaction for AR using advanced interface technology.

For more information, please visit his personal website.

Projects

  • Mini-Me

    Mini-Me is an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user’s gaze direction and body gestures while it transforms in size and orientation to stay within the AR user’s field of view. We tested Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.

  • Pinpointing

    Head and eye movement can be leveraged to improve the user’s interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.

  • Empathy Glasses

    We have been developing a remote collaboration system with Empathy Glasses, a head-worn display designed to create a stronger feeling of empathy between remote collaborators. To do this, we combined a head-mounted see-through display with a facial expression recognition system, a heart rate sensor, and an eye tracker. The goal is to enable a remote person to see and hear from another person's perspective and to understand how they are feeling. In this way, the system shares non-verbal cues that could help increase empathy between remote collaborators.
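    As a rough illustration of the kind of data involved, the Python sketch below bundles one sample of the shared non-verbal cues (facial expression, heart rate, and gaze) into a timestamped message that could be streamed to the remote collaborator's view. The field names and encoding are assumptions for illustration, not the actual Empathy Glasses protocol.

    import json
    import time
    from dataclasses import dataclass, asdict

    # Hypothetical cue sample; field names are assumptions, not the system's actual message format.
    @dataclass
    class NonVerbalCues:
        timestamp: float       # seconds since epoch
        expression: str        # e.g. "smile" or "frown" from the facial expression recogniser
        heart_rate_bpm: float  # from the heart rate sensor
        gaze_x: float          # normalised gaze point on the shared view (0..1)
        gaze_y: float

    def encode_cue_packet(cues: NonVerbalCues) -> bytes:
        """Serialise one cue sample for streaming to the remote collaborator."""
        return json.dumps(asdict(cues)).encode("utf-8")

    # One such packet per update would be overlaid on the remote collaborator's display.
    print(encode_cue_packet(NonVerbalCues(time.time(), "smile", 72.0, 0.41, 0.58)))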

Publications

  • Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, Jonathon D. Hart, Barrett Ens, Robert W. Lindeman, Bruce H. Thomas, and Mark Billinghurst. 2018. Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 46, 13 pages. DOI: https://doi.org/10.1145/3173574.3173620

    @inproceedings{Piumsomboon:2018:MAA:3173574.3173620,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Hart, Jonathon D. and Ens, Barrett and Lindeman, Robert W. and Thomas, Bruce H. and Billinghurst, Mark},
    title = {Mini-Me: An Adaptive Avatar for Mixed Reality Remote Collaboration},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {46:1--46:13},
    articleno = {46},
    numpages = {13},
    url = {http://doi.acm.org/10.1145/3173574.3173620},
    doi = {10.1145/3173574.3173620},
    acmid = {3173620},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, awareness, gaze, gesture, mixed reality, redirected, remote collaboration, remote embodiment, virtual reality},
    }
    We present Mini-Me, an adaptive avatar for enhancing Mixed Reality (MR) remote collaboration between a local Augmented Reality (AR) user and a remote Virtual Reality (VR) user. The Mini-Me avatar represents the VR user's gaze direction and body gestures while it transforms in size and orientation to stay within the AR user's field of view. A user study was conducted to evaluate Mini-Me in two collaborative scenarios: an asymmetric remote expert in VR assisting a local worker in AR, and a symmetric collaboration in urban planning. We found that the presence of the Mini-Me significantly improved Social Presence and the overall experience of MR collaboration.
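    The core behaviour, keeping the remote user's surrogate visible by adapting its placement and scale to the AR user's field of view, can be sketched in a few lines of Python. This is a simplified 2D illustration with assumed parameters (field of view, placement distance, miniature scale), not the published implementation.

    import math

    def mini_me_placement(ar_pos, ar_forward, vr_pos, fov_deg=60.0, place_dist=1.5):
        """Return a 2D position and scale for the Mini-Me avatar.

        If the remote VR user's true position already falls inside the AR user's
        field of view, the full-size embodiment is shown where it really is;
        otherwise a miniature avatar is placed near the edge of the view cone at
        a fixed distance. All parameters are assumed for illustration.
        """
        dx, dy = vr_pos[0] - ar_pos[0], vr_pos[1] - ar_pos[1]
        target_angle = math.atan2(dy, dx)
        forward_angle = math.atan2(ar_forward[1], ar_forward[0])
        # Signed angle between the AR user's forward direction and the VR user.
        offset = (target_angle - forward_angle + math.pi) % (2 * math.pi) - math.pi
        half_fov = math.radians(fov_deg) / 2

        if abs(offset) <= half_fov:
            return vr_pos, 1.0  # visible: no redirection needed

        # Clamp the bearing just inside the nearer FOV edge and shrink the avatar.
        clamped = forward_angle + math.copysign(half_fov * 0.9, offset)
        pos = (ar_pos[0] + place_dist * math.cos(clamped),
               ar_pos[1] + place_dist * math.sin(clamped))
        return pos, 0.25  # miniature scale

    print(mini_me_placement(ar_pos=(0, 0), ar_forward=(1, 0), vr_pos=(-2, 1)))

    In the full system the avatar also mirrors the VR user's gaze direction and body gestures; only the visibility redirection is sketched here.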
  • Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality
    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, Mark Billinghurst

    Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 81, 14 pages. DOI: https://doi.org/10.1145/3173574.3173655

    @inproceedings{Kyto:2018:PPH:3173574.3173655,
    author = {Kyt\"{o}, Mikko and Ens, Barrett and Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality},
    booktitle = {Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI '18},
    year = {2018},
    isbn = {978-1-4503-5620-6},
    location = {Montreal QC, Canada},
    pages = {81:1--81:14},
    articleno = {81},
    numpages = {14},
    url = {http://doi.acm.org/10.1145/3173574.3173655},
    doi = {10.1145/3173574.3173655},
    acmid = {3173655},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, eye tracking, gaze interaction, head-worn display, refinement techniques, target selection},
    }
    Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of wearable eye-tracking sensors. This work investigates precise, multimodal selection techniques using head motion and eye gaze. A comparison of speed and pointing accuracy reveals the relative merits of each method, including the achievable target size for robust selection. We demonstrate and discuss example applications for augmented reality, including compact menus with deep structure, and a proof-of-concept method for on-line correction of calibration drift.
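    The two-stage idea, fast but imprecise eye gaze proposing a coarse region and deliberate head motion providing the final precise adjustment, might look roughly like the following Python sketch. The gaze-accuracy radius, cursor model, and target layout are assumptions for illustration, not the techniques evaluated in the paper.

    def multimodal_select(gaze_point, head_offset, targets, gaze_radius=0.05):
        """Two-stage pointer: eye gaze proposes a coarse region, head motion refines it.

        gaze_point  -- (x, y) normalised gaze estimate (fast but noisy, may drift)
        head_offset -- (dx, dy) small correction accumulated from deliberate head movement
        targets     -- list of (name, (x, y)) selectable items
        gaze_radius -- assumed gaze accuracy; only targets inside it are candidates
        """
        candidates = [
            (name, pos) for name, pos in targets
            if (pos[0] - gaze_point[0]) ** 2 + (pos[1] - gaze_point[1]) ** 2 <= gaze_radius ** 2
        ]
        if not candidates:
            return None
        # The refined cursor is the coarse gaze point nudged by the head-motion offset.
        cursor = (gaze_point[0] + head_offset[0], gaze_point[1] + head_offset[1])
        return min(candidates,
                   key=lambda t: (t[1][0] - cursor[0]) ** 2 + (t[1][1] - cursor[1]) ** 2)[0]

    targets = [("OK", (0.50, 0.50)), ("Cancel", (0.53, 0.50))]
    print(multimodal_select(gaze_point=(0.51, 0.50), head_offset=(0.018, 0.0), targets=targets))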
  • Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications
    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, Mark Billinghurst

    Barrett Ens, Aaron Quigley, Hui-Shyong Yeo, Pourang Irani, Thammathip Piumsomboon, and Mark Billinghurst. 2018. Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper LBW120, 6 pages. DOI: https://doi.org/10.1145/3170427.3188513

    @inproceedings{Ens:2018:CEM:3170427.3188513,
    author = {Ens, Barrett and Quigley, Aaron and Yeo, Hui-Shyong and Irani, Pourang and Piumsomboon, Thammathip and Billinghurst, Mark},
    title = {Counterpoint: Exploring Mixed-Scale Gesture Interaction for AR Applications},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {LBW120:1--LBW120:6},
    articleno = {LBW120},
    numpages = {6},
    url = {http://doi.acm.org/10.1145/3170427.3188513},
    doi = {10.1145/3170427.3188513},
    acmid = {3188513},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, gesture interaction, wearable computing},
    }
    This paper presents ongoing work on a design exploration for mixed-scale gestures, which interleave microgestures with larger gestures for computer interaction. We describe three prototype applications that show various facets of this multi-dimensional design space. These applications portray various tasks on a HoloLens Augmented Reality display, using different combinations of wearable sensors. Future work toward expanding the design space and exploration is discussed, along with plans toward evaluation of mixed-scale gesture design.
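    As a toy illustration of interleaving gesture scales, the Python sketch below routes small finger-scale movements to fine adjustments and larger hand movements to coarse ones on a single parameter. The amplitude threshold and step sizes are invented for illustration and are not taken from the paper.

    # Toy dispatcher: finger-scale "microgestures" make fine adjustments, larger hand
    # movements make coarse ones. Threshold and step sizes are assumptions.
    MICRO_THRESHOLD_M = 0.02  # assumed boundary between micro and large gestures (metres)

    def handle_gesture(travel_m, direction, value, fine_step=1, coarse_step=10):
        """Interleave micro and large gestures on a single parameter (e.g. a slider)."""
        step = fine_step if travel_m < MICRO_THRESHOLD_M else coarse_step
        return value + step * (1 if direction == "up" else -1)

    value = 50
    for travel, direction in [(0.15, "up"), (0.01, "up"), (0.01, "down"), (0.20, "down")]:
        value = handle_gesture(travel, direction, value)
        print(f"{travel:.2f} m {direction:>4} -> {value}")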
  • Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration
    Thammathip Piumsomboon, Gun A. Lee, Mark Billinghurst

    Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper D115, 4 pages. DOI: https://doi.org/10.1145/3170427.3186495

    @inproceedings{Piumsomboon:2018:SDM:3170427.3186495,
    author = {Piumsomboon, Thammathip and Lee, Gun A. and Billinghurst, Mark},
    title = {Snow Dome: A Multi-Scale Interaction in Mixed Reality Remote Collaboration},
    booktitle = {Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems},
    series = {CHI EA '18},
    year = {2018},
    isbn = {978-1-4503-5621-3},
    location = {Montreal QC, Canada},
    pages = {D115:1--D115:4},
    articleno = {D115},
    numpages = {4},
    url = {http://doi.acm.org/10.1145/3170427.3186495},
    doi = {10.1145/3170427.3186495},
    acmid = {3186495},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {augmented reality, avatar, mixed reality, multiple, remote collaboration, remote embodiment, scale, virtual reality},
    }
    We present Snow Dome, a Mixed Reality (MR) remote collaboration application that supports a multi-scale interaction for a Virtual Reality (VR) user. We share a local Augmented Reality (AR) user's reconstructed space with a remote VR user who has the ability to scale themselves up into a giant or down into a miniature for different perspectives and interaction at that scale within the shared space.
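    At its core, the multi-scale interaction amounts to scaling the remote VR user's viewpoint and locomotion together so the shared reconstruction remains usable at either extreme. The Python sketch below shows that idea with assumed values; it is not the Snow Dome implementation.

    def rescale_vr_user(eye_height_m, move_speed_mps, scale):
        """Scale the remote VR user within the shared space.

        scale > 1 turns the user into a 'giant' (overview perspective), scale < 1
        into a 'miniature' (close-up perspective). Eye height and travel speed are
        scaled together so the reconstruction stays usable at either size. The
        simple linear model and the example values are assumptions for illustration.
        """
        return {"eye_height_m": eye_height_m * scale,
                "move_speed_mps": move_speed_mps * scale}

    print(rescale_vr_user(1.7, 1.4, scale=10.0))  # giant: overview of the shared room
    print(rescale_vr_user(1.7, 1.4, scale=0.1))   # miniature: table-top detail work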
  • Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze
    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst

    Gun Lee, Seungwon Kim, Youngho Lee, Arindam Dey, Thammathip Piumsomboon, Mitchell Norman and Mark Billinghurst. 2017. Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze. In Proceedings of ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, pp. 197-204. http://dx.doi.org/10.2312/egve.20171359

    @inproceedings {egve.20171359,
    booktitle = {ICAT-EGVE 2017 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments},
    editor = {Robert W. Lindeman and Gerd Bruder and Daisuke Iwai},
    title = {{Improving Collaboration in Augmented Video Conference using Mutually Shared Gaze}},
    author = {Lee, Gun A. and Kim, Seungwon and Lee, Youngho and Dey, Arindam and Piumsomboon, Thammathip and Norman, Mitchell and Billinghurst, Mark},
    year = {2017},
    publisher = {The Eurographics Association},
    ISSN = {1727-530X},
    ISBN = {978-3-03868-038-3},
    DOI = {10.2312/egve.20171359}
    }
    To improve remote collaboration in video conferencing systems, researchers have been investigating augmenting visual cues onto a shared live video stream. In such systems, a person wearing a head-mounted display (HMD) and camera can share her view of the surrounding real-world with a remote collaborator to receive assistance on a real-world task. While this concept of augmented video conferencing (AVC) has been actively investigated, there has been little research on how sharing gaze cues might affect the collaboration in video conferencing. This paper investigates how sharing gaze in both directions between a local worker and remote helper in an AVC system affects the collaboration and communication. Using a prototype AVC system that shares the eye gaze of both users, we conducted a user study that compares four conditions with different combinations of eye gaze sharing between the two users. The results showed that sharing each other’s gaze significantly improved collaboration and communication.
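    In such a system, each user's normalised gaze estimate ends up as a cursor drawn on the shared video frame. The Python sketch below computes where those two cursors would be placed; the frame size, field names, and rendering step are assumptions for illustration rather than details of the prototype.

    def gaze_overlays(frame_w, frame_h, worker_gaze, helper_gaze):
        """Place both collaborators' gaze cursors on the shared video frame.

        worker_gaze and helper_gaze are normalised (0..1) gaze estimates from each
        user's eye tracker; the result is where a renderer would draw the two
        cursors. Frame size and the rendering step itself are assumed here.
        """
        to_px = lambda g: (int(g[0] * frame_w), int(g[1] * frame_h))
        return {"worker_cursor_px": to_px(worker_gaze),
                "helper_cursor_px": to_px(helper_gaze)}

    print(gaze_overlays(1280, 720, worker_gaze=(0.62, 0.40), helper_gaze=(0.30, 0.55)))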
  • Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality
    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst

    Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman and Mark Billinghurst. 2017. Exploring Natural Eye-Gaze-Based Interaction for Immersive Virtual Reality. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pp. 36-39. https://doi.org/10.1109/3DUI.2017.7893315

    @INPROCEEDINGS{7893315,
    author={T. Piumsomboon and G. Lee and R. W. Lindeman and M. Billinghurst},
    booktitle={2017 IEEE Symposium on 3D User Interfaces (3DUI)},
    title={Exploring natural eye-gaze-based interaction for immersive virtual reality},
    year={2017},
    volume={},
    number={},
    pages={36-39},
    keywords={gaze tracking;gesture recognition;helmet mounted displays;virtual reality;Duo-Reticles;Nod and Roll;Radial Pursuit;cluttered-object selection;eye tracking technology;eye-gaze selection;head-gesture-based interaction;head-mounted display;immersive virtual reality;inertial reticles;natural eye movements;natural eye-gaze-based interaction;smooth pursuit;vestibulo-ocular reflex;Electronic mail;Erbium;Gaze tracking;Painting;Portable computers;Resists;Two dimensional displays;H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction styles},
    doi={10.1109/3DUI.2017.7893315},
    ISSN={},
    month={March},}
    Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
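    Of the three techniques, Radial Pursuit lends itself to a compact illustration: candidate objects are set in motion and the one whose trajectory correlates best with the user's smooth-pursuit eye movement is selected. The Python sketch below shows that matching step in one dimension with an assumed correlation threshold; the paper's technique is richer than this.

    import math

    def pearson(xs, ys):
        """Plain Pearson correlation of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    def radial_pursuit_select(gaze_x_trace, object_x_traces, threshold=0.8):
        """Pick the moving object whose horizontal trajectory best matches the user's
        smooth-pursuit gaze trace. The threshold and 1D simplification are assumptions.
        """
        scored = {name: pearson(gaze_x_trace, trace)
                  for name, trace in object_x_traces.items()}
        best = max(scored, key=scored.get)
        return best if scored[best] >= threshold else None

    gaze = [0.10, 0.15, 0.21, 0.27, 0.33]            # gaze following a rightward-moving object
    objects = {"cube":   [0.10, 0.16, 0.22, 0.28, 0.34],
               "sphere": [0.50, 0.48, 0.46, 0.44, 0.42]}
    print(radial_pursuit_select(gaze, objects))      # -> "cube"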