Communicating Embodied Experiences among People using Wearable Devices

    Speaker: Jun Nishida
    Date: 2021-07-28

    While today’s tools allow us to communicate effectively with others via video and text, they leave out other critical communication channels, such as non-verbal cues and body language. These cues are important not only for face-to-face communication but also when communicating forces (muscle tension, movement, etc.), feelings, and emotions, which are hard to share using existing symbolic and graphical communication tools. This raises my research question: how can we communicate our embodied experiences to one another? To tackle this challenge, we have been exploring the larger concept of changing our bodies into those of another person using wearable devices. In this talk, I introduce several concepts for sharing embodied experiences between users via wearable devices, such as between a physician and a patient, including people with neuromuscular diseases and even children, by means of virtual reality systems, exoskeletons, and electrical muscle stimulation. Lastly, I introduce how we can extend this concept to change our perception and cognition, such as preserving the sense of agency when voluntary and involuntary (machine) actions are combined, and discuss further possibilities and challenges for changing our perspectives toward different people.

    Watch on YouTube
    Jun Nishida is a postdoctoral fellow at the Human Computer Integration Lab at the University of Chicago. He received his PhD in Human Informatics from the University of Tsukuba, Japan in 2019. He is interested in exploring interaction techniques with which people can communicate their embodied experiences to support each other in the fields of rehabilitation, education, and design. To this end, he designs wearable interfaces that share one’s embodied and social experiences with other people by means of electrical muscle stimulation, exoskeletons, and virtual/augmented reality systems, along with psychological knowledge. He has worked as a PhD fellow at Microsoft Research Asia and as a research assistant at Sony Computer Science Laboratories. He has received an ACM UIST Best Paper Award, an ACM CHI Best Paper Honorable Mention Award, a Microsoft Research Asia Fellowship Award, and a Forbes 30 Under 30 Award.

    Reading in the 21st Century – How Emerging Technologies Empower and Challenge our Reading Behaviour

    Speaker: Tilman Dingler
    Date: 2021-07-14

    While writing and reading as cultural inventions date back to the 4th millennium BC, digital technologies have recently and fundamentally changed where, how, and what we read. The information age provides us with both opportunities and challenges, which affect our reading behaviour. Various devices are now available for reading, and their mobility provides us with unprecedented opportunities to engage with text anytime, anywhere. In his talk, Tilman Dingler—whose research focuses on technologies that augment human cognitive abilities—will present challenges, best practices, and future directions of ubiquitous technologies, including VR, to support reading activities. With examples from his research on reading interfaces, scheduling algorithms and cognition-aware systems, this talk will outline a research agenda for systems that provide better readability, prioritise information gain over attention capture, and instil better reading habits in their users.

    Watch on YouTube
    Tilman is a Lecturer at the School of Computing and Information Systems at the University of Melbourne. He studied Media Computer Science in Munich, Web Science in San Francisco, and received a PhD in Computer Science from the University of Stuttgart. There, he worked on technologies to augment cognitive abilities, including people's memory and information processing capabilities. Tilman has worked on five continents and, before coming to Melbourne, did his post-doc at Osaka Prefecture University in Japan and the MIT Media Lab. Before his academic career, Tilman worked as an engineer at Yahoo! Inc. and several technology startups. He is an Associate Editor for the PACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and serves as Associate Chair for CHI, among others. He is co-founder and acting chairman of the SIGCHI Melbourne Local Chapter. Currently, Tilman investigates systems that sense, model, and adapt to users' cognitive states and develops interfaces that support their users' information processing capabilities.

    A 20-year mission for personal fabrication and a live demo of year four

    Speaker: Patrick Baudisch
    Date: 2021-06-30

    In the first half of this talk, I will lay out a 20-year roadmap for how I think we should take personal fabrication from tech enthusiasts to regular consumers by following the path laid out by earlier media, such as desktop publishing (PDF). In the second half, I will give a live demo of how we apply this roadmap in the form of our laser cutting system "kyub" (project page). Kyub is designed to get users from idea to chair-size physical prototypes in one hour (video). This allows schools to design and manufacture furniture and musical instruments within a single class period (video).

    Watch on YouTube
    Patrick Baudisch is a professor in Computer Science at Hasso Plattner Institute at Potsdam University and chair of the Human Computer Interaction Lab. After working on mobile devices, touch input, and natural user interfaces for many years, his current research focuses on personal fabrication and in particular laser cutting. Previously, Patrick Baudisch worked as a research scientist in the Adaptive Systems and Interaction Research Group at Microsoft Research and at Xerox PARC. He holds a PhD in Computer Science from Darmstadt University of Technology, Germany. He was inducted into the CHI Academy in 2013 and has been an ACM distinguished scientist since 2014. Since 2019, he has been the chair of the SIGCHI Research and Practice Awards subcommittee. Patrick's past PhD students include Christian Holz, Pedro Lopes, Lung-Pan Cheng, Stefanie Mueller, and Alex Ion, now assistant professors at ETH Zurich, U of Chicago, NTU, MIT, and CMU.

    Engaging Human Augmentation

    Speaker: Stephan Lukosch
    Date: 2021-06-16

    The concept of engagement is far from unique to interactive systems. In colloquial use, it refers to a state of involvement or participation. When people are engaged, their thoughts are focused on an activity rather than on something else. Human augmentation is about augmenting human abilities through human-computer integration to enhance and create new experiences as well as foster engagement. In this talk, I review the concept of engagement in interactive systems. I then discuss the challenge of assessing engagement across different projects in the health, safety and security, and sports domains that have used human augmentation for different purposes. For that purpose, I present several different assessment methods and discuss their advantages and disadvantages. I will conclude with a summary and several directions for future work.

    Watch on YouTube
    Stephan Lukosch is a Professor at the HIT Lab NZ of the University of Canterbury in Christchurch, New Zealand. Until 2019, he was an Associate Professor at the Delft University of Technology in The Netherlands. He received his Dr. rer. nat. with distinction from the University of Hagen, Germany in 2003. His current research focuses on human augmentation to enhance human capabilities. In order to create a design framework for developers, designers and future users, he explores human augmentation in different domains such as sports, health, safety & security, and engineering. His work includes the evaluation of human factors on acceptance, engagement and experience of human augmentation. He is a member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). He further serves on the editorial board of the Springer Journal of Computer Supported Cooperative Work (CSCW), Augmented Reality Specialty Section of Frontiers in Virtual Reality, Journal of Universal Computer Science (J.UCS) and the International Journal of Cooperative Information Systems (IJCIS).

    Extended Reality for Everybody

    Speaker: Michael Nebeling
    Date: 2021-05-19

    Michael's research focuses on democratizing virtual and augmented reality technologies, and on empowering more users to become active design participants. In his current work, he focuses on how enabling more people to participate in design requires new methods and tools that better guide novice designers in addressing accessibility and equity, as well as privacy and security, directly through design. He will share current examples and looks forward to a discussion of mechanisms that could promote "good" XR design behavior.

    Watch on YouTube
    Michael is an Assistant Professor at the University of Michigan, where he leads the Information Interaction Lab, with a current focus on virtual, augmented, and mixed reality applications. He is active in CHI and UIST and is currently UIST 2021 Program Co-Chair. He received a Disney Research Faculty Award and a Mozilla Research Award. He started his role as the XR Faculty Innovator-in-Residence with the U-M-wide XR Initiative in 2019. He created the XR MOOC series, a three-course AR/VR specialization on Coursera. He joined U-M in 2016 after completing a postdoc in the Human-Computer Interaction Institute at Carnegie Mellon University and a PhD in the Department of Computer Science at ETH Zurich.

    Ubiquitous VR, Digital Twin, and Metaverse in the Age of AR3.0

    Speaker: Woontack Woo
    Date: 2021-05-05

    Recently, VR/AR (or XR) has been in the spotlight again. While there are deep concerns that XR is another bubble, there is also great hope that XR will spread as a part of everyday life in the near future. XR can be the key to success when used in conjunction with DNA (data, network, AI), the core of the ‘Digital New Deal Project’ led by the Korean government. In particular, it is worth paying attention to the possibility that the 'digital twin-based metaverse' linked with XR will become a new social platform. UVR Lab, launched in 2001, is preparing for a new leap forward in the era of AR3.0. In this talk, I would like to share my thoughts on the possibilities of AR3.0 and UVR3.0 on the way to the third challenge of making XR an everyday technology.

    Watch on YouTube
    Woontack Woo is currently Head of the KAIST G-School of Culture Technology; Director of the KAIST Culture Technology Research Institute; Director of the KI-ITC Augmented Reality Research Center; Adjunct Professor in the KAIST School of Computing; and Adjunct Professor in the KAIST G-School of Future Strategy. Previously, he was a Professor at the Gwangju Institute of Science and Technology (GIST); an Invited Researcher at Advanced Telecommunication Research (ATR), Japan; and a Research Associate at the University of Southern California, Los Angeles, CA, USA.

    Augmented Virtual Teleportation – Teleport into the Media

    Speaker: Taehyun Rhee
    Date: 2021-04-07

    New Zealand is well known for its beautiful natural scenery, which has featured in many movies and commercials, and a strong media ecosystem has been built to support the related industry. This talk will introduce the recent convergence of immersive, interactive, and intelligent media technologies in a novel Cinematic XR platform that gives users the illusion of teleporting into a video and interacting with the objects in it. An extension of the platform over high-speed networks, Augmented Virtual Teleportation (AVT), is then introduced; it enables remote interaction and immersive collaboration with people at a distance. Finally, case studies and potential applications of AVT will be introduced and discussed.

    Watch on YouTube
    A/Prof. Taehyun James (TJ) Rhee is the Director of the Computational Media Innovation Centre (CMIC) at Victoria University of Wellington, New Zealand. He is the founder of the Victoria Computer Graphics Programme, founder/director of the Victoria Computer Graphics Research Lab, and a founder of the mixed reality startup DreamFlux. His current research activities focus on developing future media technologies and platforms: cinematic XR, including real-time lighting, rendering, and composition in virtual, augmented, and mixed reality; virtual teleportation; immersive remote collaboration; immersive visualization and interaction; and human-digital content interaction. He is highly interested in prototyping research outcomes into potential commercial products and platforms, and won the 2018 Researcher Entrepreneur Award from KiwiNet. He serves the computer graphics community as conference chair of Pacific Graphics 2020 and 2021, as an executive committee member of the Asia Graphics Association, and as the SIGGRAPH Asia 2018 Virtual and Augmented Reality programme chair. Before joining Victoria in 2012, he was a principal researcher and senior manager in the Mixed Reality Group, Future IT Centre at Samsung (2008-2012). Earlier, he was a senior researcher/researcher at the Research Innovation Center at Samsung Electronics (1996-2003).

    Toward Extensions of the Self: Near-body Interactions for Augmented Humans

    Speaker: Enrico Rukzio
    Date: 2021-03-24

    Augmenting the human intellect is one of the central goals of human-computer interaction. This can only be achieved when the time between intention and action becomes very small and the user sees the interface as an extension of the self. Wearable and mobile interaction devices supporting near-body interactions can fulfil this vision by significantly reducing access time. The talk will cover a number of my research projects that show how I tackle the challenges and potentials in this field, such as limited input and output capabilities, the proximity of interaction devices to human sensors and actuators, and concerns regarding obtrusiveness and privacy.

    Watch on YouTube
    Enrico Rukzio is a professor in Human-Computer Interaction in the Institute of Media Informatics at Ulm University, Germany. He is interested in designing intelligent interactive systems that enable people to be more efficient, satisfied, and expressive in their daily lives. His research focuses on the design of novel interaction concepts, devices and applications in areas such as mobile and wearable interaction, computerized eyewear, human-technology interaction for elderly people, automotive user interfaces and interactive production planning. Together with his students, he has won awards for research published at CHI, EuroVR, IEEE VR, ISWC, ITS, MobileHCI, MUM and PERCOM. Prior to his current position, he was an Assistant Professor at the Ruhr Institute for Software and Technology (University of Duisburg-Essen, Germany) and an RCUK academic fellow and lecturer at the School of Computing and Communications at Lancaster University (UK). He holds a PhD in Computer Science from the University of Munich (Germany).

    Ubiquitous Mixed Reality: Designing Mixed Reality Technology to Fit into the Fabric of our Daily Lives

    Speaker: Jan Gugenheimer
    Date: 2021-03-10

    Technological advancements in the fields of optics, display technology and miniaturization have enabled high-quality mixed reality (AR and VR) head-mounted displays (HMDs) to be used beyond research labs. This enables a novel interaction scenario where the context of use changes drastically and HMDs aim to become a daily commodity. In this talk I will present two perspectives on AR/VR research in the field of HCI. One perspective aims to improve the technology (e.g., input techniques, haptic devices) and works toward the vision of everyday usable mixed reality. The other perspective acts as an auditing authority, challenging the constantly positive framing of mixed reality's progress. While both perspectives share the goal of designing MR technology to fit into the fabric of our daily lives, I will argue that HCI research should not focus only on the positive framing. HCI should position itself more strongly on the critical side and reflect upon the artifacts and concepts it creates, acting as an auditing authority on itself and on other research in the field of AR and VR.

    Watch on YouTube
    Jan Gugenheimer is an Assistant Professor (Maître de conférences) for Computer Science at Télécom Paris (Institut Polytechnique de Paris) inside the DIVA group, working on several topics around Mixed Reality (Augmented Reality and Virtual Reality). He received his Ph.D. from Ulm University, working on the topic of Nomadic Virtual Reality. During his studies, Jan worked within a variety of research labs at universities (ETH Zurich, MIT Media Lab) and research institutions (Daimler AG, IBM, Mercedes Benz Research and Development North America, Microsoft Research). His work is frequently published and awarded at leading HCI conferences such as UIST, CHI, and CSCW. In his most recent research, Jan is exploring the potential negative and harmful impact of mixed reality technology.

    Interactive Human Centered Artificial Intelligence – A Definition and Research Challenges

    Speaker: Albrecht Schmidt
    Date: 2021-02-24

    Artificial Intelligence (AI) has become the buzzword of the last decade. Advances so far have been largely technical, and only recently have we seen a shift towards focusing on the human aspects of artificial intelligence. In particular, the notions of making AI interactive and explainable are currently at the center, which is a very narrow view. In the talk, I will suggest a definition of “Interactive Human Centered Artificial Intelligence” and outline the required properties in order to start a discussion on the goals of AI research and the properties that we should expect of future systems. It is central to be able to state who will benefit from a system or service. Staying in control is essential for humans to feel safe and have self-determination. I will discuss the key challenge of control and understanding of AI-based systems and show that levels of abstraction and granularity of control are a potential solution. I further argue that AI and machine learning (ML) are very much comparable to raw materials (like stone, iron, or bronze). Historical periods are named after these materials, as they changed what humans could build and what tools humans could engineer. Hence, I argue that in the AI age we need to shift the focus from the material (e.g. the AI algorithms, as there will be plenty of material) towards the tools that are enabled and that are beneficial for humans. It is apparent that AI will allow the automation of mental routine tasks and that it will extend our ability to perceive things and foresee events. For me, the central question is how to create these tools for amplifying the human mind, without compromising human values.

    Watch on YouTube
    Albrecht Schmidt is professor for Human-Centered Ubiquitous Media in the computer science department of the Ludwig-Maximilians-Universität München in Germany. He studied computer science in Ulm and Manchester and received a PhD from Lancaster University, UK, in 2003. He held several prior academic positions at different universities, including Stuttgart, Cambridge, Duisburg-Essen, and Bonn, and also worked as a researcher at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and at Microsoft Research in Cambridge. In his research, he investigates the inherent complexity of human-computer interaction in ubiquitous computing environments, particularly in view of increasing computer intelligence and system autonomy. Albrecht has actively contributed to the scientific discourse in human-computer interaction through the development, deployment, and study of functional prototypes of interactive systems and interface technologies in different real-world domains. His early experimental work addressed the use of diverse sensors to recognize situations and interactions, influencing our understanding of context-awareness and situated computing. He proposed the concept of implicit human-computer interaction. Over the years, he worked on automotive user interfaces, tangible interaction, interactive public display systems, interaction with large high-resolution screens, and physiological interfaces. Most recently, he has focused on how information technology can provide cognitive and perceptual support to amplify the human mind. To investigate this further, he received an ERC grant in 2016. Albrecht has co-chaired several SIGCHI conferences; he is on the editorial board of ACM TOCHI, edits a forum in ACM Interactions and a column on human augmentation in IEEE Pervasive Computing, and formerly edited a column on interaction technologies in IEEE Computer. He co-founded the ACM conferences on Tangible and Embedded Interaction (2007) and Automotive User Interfaces (2010). In 2018, Albrecht was inducted into the ACM SIGCHI Academy.

    Integrating Interactive Devices with the Body

    Speaker: Pedro Lopes
    Date: 2021-02-10

    When we look back to the early days of computing, user and device were distant, often located in separate rooms. Then, in the ’70s, personal computers “moved in” with users. In the ’90s, mobile devices moved computing into users’ pockets. More recently, wearable devices brought computing into constant physical contact with the user’s skin. These transitions proved useful: moving closer to users allowed interactive devices to sense more of their user and to become more personal. The main question that drives my research is: what is the next interface paradigm that supersedes wearable devices? The primary way researchers have been investigating this is by asking where future interactive devices will be located with respect to the user’s body. Many posit that the next generation of interfaces will be implanted inside the user’s body. However, I argue that their location with respect to the user’s body is not the primary factor; in fact, implanted devices already exist, in the form of pacemakers, insulin pumps, and so on. Instead, I argue that the key factor is how devices will integrate with the user’s biological senses and actuators. This body-device integration allows us to engineer interactive devices that intentionally borrow parts of the body for input and output, rather than adding more technology to the body. For example, one type of body-integrated device, which I advanced during my PhD, is the interactive system based on electrical muscle stimulation; such systems move their user’s muscles using computer-controlled electrical impulses, achieving the functionality of exoskeletons without the bulky motors. Their smaller size, a consequence of this integration with the user’s body, enabled haptic feedback in scenarios previously not possible with existing devices. In my research group, we engineer interactive devices that integrate directly with the user’s body. We believe that these types of devices are the natural successors to wearable interfaces and allow us to investigate how interfaces will connect to our bodies in a more direct and personal way.

    Watch on YouTube
    Pedro Lopes is an Assistant Professor in Computer Science at the University of Chicago, where he leads the Human Computer Integration lab. Pedro focuses on integrating computer interfaces with the human body—exploring the interface paradigm that supersedes wearable computing. Some of these new integrated devices include a muscle-stimulation-based device that allows users to manipulate tools they have never seen before or that accelerates their reaction time, and a device that leverages the sense of smell to create an illusion of temperature. Pedro’s work is published at top-tier venues (ACM CHI, ACM UIST, Cerebral Cortex). Pedro has received three Best Paper awards, two Best Paper nominations, and several Best Talk/Demo/Video awards. Pedro’s work has also captured the interest of media outlets such as The New York Times, MIT Technology Review, NBC, Discovery Channel, New Scientist, and Wired, and has been shown at Ars Electronica and the World Economic Forum.

    Vision Augmentation: How see-through displays could overwrite our visual world via computation?

    Speaker: Yuta Itoh
    Date: 2021-01-26

    In this talk, I present our work on augmenting and enhancing our visual capability via augmented reality technology. Adding virtual information indistinguishable from reality has been a long-awaited goal in Augmented Reality (AR). While optical see-through displays were already demonstrated in the 1960s, they have only recently seen a reemergence. Our group explores augmented vision, a subarea of human augmentation that aims to assist our visual perception via AR displays. Keywords include augmented reality, human augmentation, see-through displays, and eye-tracking.

    Watch on YouTube
    Yuta Itoh is an Assistant Professor at the Tokyo Institute of Technology, Japan. His research interest is in vision augmentation, which supports and enhances human vision via augmented reality technology, including see-through near-eye displays. He received his Dr. rer. nat. from the Technical University of Munich in 2016. Before joining the Tokyo Institute of Technology, he worked as a project assistant professor at Keio University (2016-2017), and earlier was a research engineer at the Multimedia Lab at Toshiba Corp. (2011-2013).

    Millimeter-wave radar for touchless interaction

    Speaker: Jaime Lien
    Date: 2021-01-13

    Google’s Pixel 4 launched in fall 2019 with the first radar ever integrated into a smartphone, enabling gesture and presence sensing for new modes of user interaction. These sensing capabilities are powered by Soli, a miniaturized radar system designed and developed specifically for touchless interaction at Google Advanced Technology and Projects (ATAP). In this talk, we discuss Soli’s development and productization path, from concept to core R&D to integration into Pixel 4. We highlight the tightly intertwined co-design of core radar technology, algorithms, and interaction design in Soli’s development, driven by requirements for a ubiquitous and scalable human sensing technology. We also discuss some of Soli’s new approaches and innovations in radar systems, signal processing, machine learning, and interaction design in order to enable a new modality for sensory perception. In the second part of the talk, we cover some of the technical challenges faced in integrating Soli radar into the Pixel 4 smartphone, including chip size and placement, power consumption, and interference. We discuss how our cross-disciplinary approaches to these challenges ultimately enabled Soli to successfully ship in the phone. Finally, we close with future-looking directions for radar in consumer devices. By capturing the sensitivity and precision of human movements in everyday user interfaces, we believe scalable millimeter-wave radar has the potential to revolutionize the way we interact with technology.

    Watch on YouTube
    Dr. Jaime Lien is the Lead Radar Research Engineer of Project Soli at Google Advanced Technology and Projects (ATAP). She leads a technical team developing novel radar sensing techniques and systems for human perception and interaction. Soli radar technology has enabled new modes of touchless interaction in consumer wearables and devices, including Google’s Pixel 4 and the Nest Thermostat. Prior to Google, Dr. Lien worked as a communications engineer at NASA’s Jet Propulsion Laboratory. She holds a Ph.D. in electrical engineering from Stanford University, where her research focused on interferometric synthetic aperture radar theory and techniques. She obtained her bachelor's and master's degrees in electrical engineering from MIT. Her current research interests include radar signal processing and sensing algorithms; modeling and analysis of the underlying RF physics; and inference on radar data.

    Marvin or Terminator? The role of empathy in depression and aggression

    Speaker: Alexander Sumich
    Date: 2020-12-01

    Empathy, the capacity to share the emotions or viewpoints of others to attain interpersonal reciprocity, can be a double-edged sword in relation to psychological wellbeing. Affective empathy is implicated as a mechanism inhibiting maladaptive aggression, yet dysregulated sharing of another’s emotional pain can exacerbate depression. On the other hand, being able to understand someone else’s emotions (without necessarily feeling them) or to appreciate other perspectives (requiring cognitive flexibility) is the cornerstone of psychological resilience, protecting against both depressive and antisocial disorders. This talk presents some of the work from our lab on the role of empathy in depression and interpersonal aggression, and the underpinning biological mechanisms (blood, guts and brains). Closely linked with this is the personality construct of psychopathy, which is typically associated with poor empathy and increased risk of aggression. However, what happens when psychopathy meets empathy? We have discovered a novel psychological construct characterised by high empathy and dark traits, the Dark Empath, which will be described relative to personality, aggression, and wellbeing.

    Watch on YouTube
    Dr Alex Sumich completed his initial training in Psychology at the University of Auckland and his doctoral studies at King's College London, Institute of Psychiatry. Currently, he is an Associate Professor at Nottingham Trent University and an Adjunct Professor at Auckland University of Technology. He leads the Affect, Personality and Embodied Brain Research Group in the Centre for Behavioural Research Methods. His work investigates cognitive and affective traits that influence our behaviour, particularly regarding their biological underpinnings and their application to understanding and treating psychopathology.

    Novel Uses of Neurophysiological Signals in Extended Reality

    Speaker: Arindam Dey
    Date: 2020-10-16

    This talk will present a few novel ways in which neurological sensors (e.g. EEG) and physiological sensors (e.g. ECG and GSR) are used in extended reality, in both collaborative and single-user setups. I will present how our research has used these neurophysiological signals to measure performance, enable empathy, and perform interaction in virtual environments.

    Watch on YouTube
    Arindam is a Lecturer in the Human-Centred Computing Group of the University of Queensland's School of ITEE, focusing primarily on Mixed Reality, Empathic Computing, and Human-Computer Interaction. He co-directs the Empathic XR and Pervasive Computing Laboratory and is a proponent of "for good" research with these technologies, aiming to create positive societal impact.

    A New Vision for a Better Reality

    Speaker: Thomas Furness
    Date: 2020-11-13

    Virtual reality is emerging as a vital teaching tool of our age. Much of the seminal research in this field has been led by UW Professor Tom Furness over the 54 years of his professional career. As an original pioneer of virtual and augmented reality technology, Tom is widely known as the ‘grandfather’ of virtual reality. In his talk, Tom will share the lessons learned during the development of VR and its applications. He will focus on the need to reimagine education and on his efforts to create a new learning environment for the home through the Virtual World Society.

    Watch on YouTube
    Tom Furness is an amalgam of Professor, Inventor and Entrepreneur in a professional career that spans 54 years. In addition to his contributions in photonics, electro-optics, and human interface technology, he is an original pioneer of virtual and augmented reality technology and widely known as the ‘grandfather’ of virtual reality. Tom is currently a professor of Industrial and Systems Engineering with adjunct professorships in Electrical Engineering, Mechanical Engineering and Human Centered Design and Engineering at the University of Washington (UW), Seattle, Washington, USA. He is the founder of the family of Human Interface Technology Laboratories at the University of Washington, Christchurch, New Zealand and Tasmania, Australia. He is the founder and chairman of the Virtual World Society, a non-profit for extending virtual reality as a learning system for families and other humanitarian applications. His current research interests include exploring the functionality of peripheral vision at large eccentricities and investigations into photon emission from the retina. Tom and his students/colleagues have spun off 27 companies with an aggregate market capitalization of ~$10B. He is a Fellow of the IEEE.

    Building bonds with robots and digital companions

    Speaker: Elizabeth Broadbent
    Date: 2020-10-02

    Robots and digital humans are starting to be used for conversations in social, health, and business applications. It is important that rapport is built during these conversations, especially in healthcare contexts. This talk will look at some techniques that may build rapport as well as some barriers that need to be overcome.

    Watch on Youtube
    Elizabeth Broadbent is a Professor in Health Psychology in the Faculty of Medical and Health Sciences at the University of Auckland, New Zealand. She obtained an honours degree in electrical and electronic engineering from Canterbury University to pursue her interest in making personal robots. After becoming interested in psychoneuroimmunology, she obtained her MSc and PhD degrees in health psychology from the University of Auckland. She now combines her health psychology and robotics interests to study healthcare robotics. Elizabeth is a Vice Chair of the multidisciplinary CARES robotics group. In 2010, Elizabeth was a visiting academic at the School of Psychology at Harvard University and in the Program in Science, Technology, and Society at Massachusetts Institute of Technology in Boston, USA. In 2017, she obtained a Fulbright award to return to Boston to conduct further research on companion robots.

    Combining BCI with Virtual/Augmented Reality: toward hybrid technologies and novel immersive applications

    Speaker: Anatole Lécuyer
    Date: 2020-11-23

    In this talk we will present our research path on Brain-Computer Interfaces (BCI). We will first evoke the great success of OpenViBE, a software platform dedicated to BCI research that is used today all over the world, notably with VR systems. Then, we will illustrate how BCI and virtual reality technologies can be combined to design novel 3D interactions and effective applications, e.g. for health, sport, entertainment, or training.

    Watch on Youtube
    Anatole Lécuyer is a Senior Researcher and Head of the Hybrid research team at Inria, the French National Institute for Research in Computer Science and Control, in Rennes, France. His research interests include virtual reality, haptic interaction, 3D user interfaces, and brain-computer interfaces. He regularly serves as an expert in Virtual Reality and BCI for public bodies such as the European Commission (EC), the European Research Council (ERC), and the French National Research Agency (ANR). He is currently Associate Editor of the "IEEE Transactions on Visualization and Computer Graphics", “Frontiers in Virtual Reality”, and “Presence” journals. He was Program Chair of the IEEE Virtual Reality Conference (2015-2016) and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017) and the IEEE Symposium on 3D User Interfaces (2012-2013). He is author or co-author of more than 200 scientific publications. Anatole Lécuyer obtained the Inria-French Academy of Sciences Young Researcher Prize in 2013, and the IEEE VGTC Technical Achievement Award in Virtual/Augmented Reality in 2019.

    Superception – engineering the sense of self

    Speaker: Shunichi Kasahara
    Date: 2020-09-23

    Perception refers to recognizing meaning and organizing it into information via the inputs of sensory organs such as the eyes, ears, and somatosensory organs, as a basis for action and for constructing the self. How can we leverage our own perceptual abilities and emerging technologies to overcome the intrinsic limitations of our own bodies? I am leading Superception, a research framework that makes it possible to expand, transform, and engineer human perception and cognition by intervening in human sensory input and output using computer technology. In this talk, I will present my recent work on engineering the sense of self with technologies including virtual reality, electrical muscle stimulation (EMS), and projection mapping. I will also introduce current and future directions for augmenting the sense of self.

    Watch on Youtube
    Dr. Kasahara is a researcher at Sony Computer Science Laboratories, Inc., and a Project Assistant Professor at the Research Center for Advanced Science and Technology, The University of Tokyo. He joined Sony Corporation in 2008, worked as an affiliate researcher at the MIT Media Lab in 2012, and then joined Sony CSL in 2014. He received his Ph.D. in Interdisciplinary Information Studies from the University of Tokyo in 2017. He leads “Superception” research at Sony CSL: the computational control and extension of human perception.

    Beyond AR / VR / HCI – Augmenting Humans?

    Speaker: Kai Kunze
    Date: 2020-09-04

    This talk discusses potential ideas beyond the traditional AR/VR/HCI fields. It begins with an overview of wearable computing with a focus on smart eyewear, moving from interaction techniques, through interpersonal synchrony, to ideas on body schema extensions and embodied learning, and on to steering collective attention. Finally, I will introduce a couple of application cases extending our work toward Augmented Sports and Augmented Humans.

    Watch on Youtube
    With over 20 years of experience in Wearable Computing research, Kai Kunze works as a Professor at the Graduate School of Media Design, Keio University, Yokohama, Japan. Before that, he held an Assistant Professorship at Osaka Prefecture University, Osaka. He received his Ph.D. summa cum laude from Passau University. His work experience includes research visits and internships at the Palo Alto Research Center (PARC), the MIT Media Lab, Sunlabs Europe, and the German Stock Exchange.

    Human Factors: Automation, Trust and Cognitive Load

    Speaker: Andreas Duenser
    Date: 2020-08-21

    This talk presents some of our work on Human Factors for health, safety, and efficiency in critical task environments. Specifically, we are studying the relationship between systems (automation, decision support, AI/ML), human trust in these systems, and the cognitive load of system operators. Automation and autonomous systems, ranging from robotics to decision support and other AI/ML-based systems, play an increasingly important role in our lives. Our work aims at informing the design of such systems to improve human-system interaction and collaboration. Automation can assist human operators in performing their work, help reduce workload (in particular cognitive load), and allow the operator to attend to the important tasks at hand. However, in order to rely on these systems, operators have to trust that they perform accurately. Problems may arise when operators exhibit undue over-trust or under-trust. Developing a better understanding of how people build and calibrate trust, and being able to measure the amount of trust they put in a system, could allow us to better manage their expectations and improve their interaction with and reliance on a system.

    Watch on Youtube
    Andreas Duenser is a Senior Research Scientist at the CSIRO, Data61, in Hobart, Australia. He is interested in the convergence of psychology and emerging technology systems to develop a deeper understanding of human behaviour and cognition in a technology context and to drive technology innovation and adoption. Andreas’ work focuses on:
    • Human Factors research with new interactive technologies
    • Understanding and developing models of human trust and workload when interacting with (semi)automated systems, decision support and ML/AI systems
    • Novel evaluation and assessment methods of human behaviour and cognitive processes
    • Designing new technologies for training and healthcare.