ECL Speaker Series (2021)

Extended Reality for Everybody

Speaker: Michael Nebeling
Date: 19 May 2021
Michael's research focuses on democratizing virtual and augmented reality technologies and on empowering more users to become active design participants. His current work explores how enabling more people to participate in design requires new methods and tools that better guide novice designers in addressing accessibility, equity, privacy, and security directly through design. He will share current examples and looks forward to a discussion of mechanisms that could promote "good" XR design behavior.

Watch on YouTube
Biography:
Michael is an Assistant Professor at the University of Michigan, where he leads the Information Interaction Lab (https://mi2lab.com) with a current focus on virtual, augmented, and mixed reality applications. He is active in CHI and UIST and is currently UIST 2021 Program co-chair. He received a Disney Research Faculty Award and a Mozilla Research Award. He started his role as the XR Faculty Innovator-in-Residence with the U-M wide XR Initiative in 2019. He created the XR MOOC series (https://xrmooc.com), a three-course AR/VR specialization on Coursera. He joined U-M in 2016 after completing a postdoc in the Human-Computer Interaction Institute at Carnegie Mellon University and a PhD in the Department of Computer Science at ETH Zurich.
 

Ubiquitous VR, Digital Twin, and Metaverse in the Age of AR3.0

Speaker: Woontack Woo
Date: 5 May 2021
Recently, VR/AR (or XR) has been in the spotlight again. While there are deep concerns that XR is once again a bubble, there is also great hope that XR will become part of everyday life in the near future. XR can be the key to success when used in conjunction with DNA (data, network, AI), the core of the 'Digital New Deal Project' led by the Korean government. In particular, it is worth paying attention to the possibility that the 'digital twin-based metaverse' linked with XR will become a new social platform. UVR Lab, launched in 2001, is preparing for a new leap forward in the era of AR3.0. In this talk, I would like to share my thoughts on the possibilities of AR3.0 and UVR3.0 on the way to the third challenge of making XR an everyday technology.

Watch on YouTube (Pending)
Biography:
Currently:
Head, KAIST G-School of Culture Technology
Director, KAIST Culture Technology Research Institute
Director, KI-ITC Augmented Reality Research Center
Adjunct Professor, KAIST School of Computing
Adjunct Professor, KAIST G-School of Future Strategy

Previously:
Professor, Gwangju Institute of Science and Technology (GIST)
Invited Researcher, Advanced Telecommunication Research (ATR), Japan
Research Associate, University of Southern California, LA, CA, USA
 

Augmented Virtual Teleportation – Teleport into the Media

Speaker: Taehyun Rhee
Date: 7 Apr 2021
New Zealand is well known for its beautiful natural scenery, which has featured in many movies and commercials, and a strong media ecosystem has been built to support the related industry. This talk will introduce the recent convergence of immersive, interactive, and intelligent media technologies into a novel Cinematic XR platform that gives users the illusion of teleporting into a video and interacting with the objects in it. Augmented Virtual Teleportation (AVT), a further extension of the platform over high-speed networks, is then introduced; it enables remote interaction and immersive collaboration with people at a distance. Finally, case studies and potential applications of AVT will be introduced and discussed.

Watch on YouTube
Biography:
A/Prof. Taehyun James (TJ) Rhee is Director of the Computational Media Innovation Centre (CMIC) at Victoria University of Wellington, New Zealand. He is the founder of the Victoria Computer Graphics Programme, founder/director of the Victoria Computer Graphics Research Lab, and a founder of the mixed reality startup DreamFlux.
His current research focuses on developing future media technologies and platforms: cinematic XR, including real-time lighting, rendering, and composition in virtual, augmented, and mixed reality; virtual teleportation; immersive remote collaboration; immersive visualization and interaction; and human-digital content interaction.
He is highly interested in prototyping research outcomes into potential commercial products and platforms, and he won the 2018 Researcher Entrepreneur Award from KiwiNet.
He serves the computer graphics community as conference chair of Pacific Graphics 2020 and 2021, as an executive committee member of the Asia Graphics Association, and as SIGGRAPH Asia 2018 Virtual and Augmented Reality programme chair. Before joining Victoria in 2012, he was a principal researcher and senior manager in the Mixed Reality Group, Future IT Centre at Samsung (2008-2012), and earlier a researcher and senior researcher in the Research Innovation Center at Samsung Electronics (1996-2003).
 

Toward Extensions of the Self: Near-body Interactions for Augmented Humans

Speaker: Enrico Rukzio
Date: 24 Mar 2021
Augmenting the human intellect is one of the central goals of human-computer interaction. This can only be achieved when the time between intention and action becomes very small and the user sees the interface as an extension of the self. Wearable and mobile interaction devices supporting near-body interactions can fulfil this vision by significantly reducing access time. The talk will cover a number of my research projects that show how I tackle the challenges and potentials in this field, such as limited input and output capabilities, the proximity of interaction devices to human sensors and actuators, and concerns regarding obtrusiveness and privacy.

Watch on YouTube
Biography:
Enrico Rukzio is a professor of Human-Computer Interaction in the Institute of Media Informatics at Ulm University, Germany. He is interested in designing intelligent interactive systems that enable people to be more efficient, satisfied, and expressive in their daily lives. His research focuses on the design of novel interaction concepts, devices, and applications in areas such as mobile and wearable interaction, computerized eyewear, human-technology interaction for elderly people, automotive user interfaces, and interactive production planning. Together with his students, he has won awards for research published at CHI, EuroVR, IEEE VR, ISWC, ITS, MobileHCI, MUM, and PERCOM. Prior to his current position, he was an Assistant Professor at the Ruhr Institute for Software and Technology (University of Duisburg-Essen, Germany) and an RCUK academic fellow and lecturer at the School of Computing and Communications at Lancaster University (UK). He holds a PhD in Computer Science from the University of Munich (Germany).
 

Ubiquitous Mixed Reality: Designing Mixed Reality Technology to Fit into the Fabric of our Daily Lives

Speaker: Jan Gugenheimer
Date: 10 Mar 2021
Technological advancements in optics, display technology, and miniaturization have enabled high-quality mixed reality (AR and VR) head-mounted displays (HMDs) to be used beyond research labs. This enables a novel interaction scenario in which the context of use changes drastically and HMDs aim to become a daily commodity. In this talk I will present two perspectives on AR/VR research in the field of HCI. One perspective aims to improve the technology (e.g., input techniques, haptic devices) and work towards the vision of everyday usable mixed reality. The other starts to act like an auditing authority, challenging the constantly positive view of mixed reality's progress. While both perspectives share the goal of designing MR technology to fit into the fabric of our daily lives, I will argue that HCI research should not focus only on the positive framing. HCI should position itself more strongly on the critical side and reflect upon the artifacts and concepts it creates, acting as an auditing authority on itself and on other research in the field of AR and VR.

Watch on YouTube
Biography:
Jan Gugenheimer is an Assistant Professor (Maître de conférences) for Computer Science at Télécom Paris (Institut Polytechnique de Paris) inside the DIVA group, working on several topics around Mixed Reality (Augmented Reality and Virtual Reality). He received his Ph.D. from Ulm University, working on the topic of Nomadic Virtual Reality. During his studies, Jan worked within a variety of research labs at universities (ETH Zurich, MIT Media Lab) and research institutions (Daimler AG, IBM, Mercedes Benz Research and Development North America, Microsoft Research). His work is frequently published and awarded at leading HCI conferences such as UIST, CHI, and CSCW. In his most recent research, Jan is exploring the potential negative and harmful impact of mixed reality technology.
 

Interactive Human Centered Artificial Intelligence - A Definition and Research Challenges

Speaker: Albrecht Schmidt
Date: 24 Feb 2021
Artificial Intelligence (AI) has become the buzzword of the last decade. Advances so far have been largely technical, and only recently have we seen a shift towards focusing on the human aspects of artificial intelligence. In particular, the notions of making AI interactive and explainable are at the center, which is a very narrow view. In the talk, I will suggest a definition for “Interactive Human Centered Artificial Intelligence” and outline the required properties in order to start a discussion on the goals of AI research and the properties we should expect of future systems. It is central to be able to state who will benefit from a system or service. Staying in control is essential for humans to feel safe and to have self-determination. I will discuss the key challenge of control and understanding of AI-based systems and show that levels of abstraction and granularity of control are a potential solution. I further argue that AI and machine learning (ML) are very much comparable to raw materials (like stone, iron, or bronze). Historical periods are named after these materials because they changed what humans can build and what tools humans can engineer. Hence, I argue that in the AI age we need to shift the focus from the material (e.g., the AI algorithms, as there will be plenty of material) towards the tools it enables and that are beneficial for humans. It is apparent that AI will allow the automation of mental routine tasks and that it will extend our ability to perceive things and foresee events. For me, the central question is how to create these tools for amplifying the human mind without compromising human values.

Watch on YouTube
Biography:
Albrecht Schmidt is a professor of Human-Centered Ubiquitous Media in the computer science department of the Ludwig-Maximilians-Universität München in Germany. He studied computer science in Ulm and Manchester and received a PhD from Lancaster University, UK, in 2003. He held several prior academic positions at different universities, including Stuttgart, Cambridge, Duisburg-Essen, and Bonn, and also worked as a researcher at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and at Microsoft Research in Cambridge. In his research, he investigates the inherent complexity of human-computer interaction in ubiquitous computing environments, particularly in view of increasing computer intelligence and system autonomy. Albrecht has actively contributed to the scientific discourse in human-computer interaction through the development, deployment, and study of functional prototypes of interactive systems and interface technologies in different real-world domains. His early experimental work addressed the use of diverse sensors to recognize situations and interactions, influencing our understanding of context-awareness and situated computing. He proposed the concept of implicit human-computer interaction. Over the years, he has worked on automotive user interfaces, tangible interaction, interactive public display systems, interaction with large high-resolution screens, and physiological interfaces. Most recently, he has focused on how information technology can provide cognitive and perceptual support to amplify the human mind; to investigate this further, he received an ERC grant in 2016. Albrecht has co-chaired several SIGCHI conferences; he is on the editorial board of ACM TOCHI, edits a forum in ACM Interactions and a column on human augmentation in IEEE Pervasive Computing, and formerly edited a column on interaction technologies in IEEE Computer. He co-founded the ACM conferences on Tangible and Embedded Interaction (2007) and Automotive User Interfaces (2010). In 2018, Albrecht was inducted into the ACM SIGCHI Academy.
 

Integrating Interactive Devices with the Body

Speaker: Pedro Lopes
Date: 10 Feb 2021
When we look back to the early days of computing, user and device were distant, often located in separate rooms. Then, in the ’70s, personal computers “moved in” with users. In the ’90s, mobile devices moved computing into users’ pockets. More recently, wearable devices brought computing into constant physical contact with the user’s skin. These transitions proved useful: moving closer to users allowed interactive devices to sense more of their user and to become more personal. The main question that drives my research is: what is the next interface paradigm that supersedes wearable devices?
The primary way researchers have been investigating this is by asking where future interactive devices will be located with respect to the user’s body. Many posit that the next generation of interfaces will be implanted inside the user’s body. However, I argue that their location with respect to the user’s body is not the primary factor; in fact, implanted devices already exist in the form of pacemakers, insulin pumps, and so on. Instead, I argue that the key factor is how devices will integrate with the user’s biological senses and actuators.
This body-device integration allows us to engineer interactive devices that intentionally borrow parts of the body for input and output, rather than adding more technology to the body. For example, one type of body-integrated device that I advanced during my PhD is the interactive system based on electrical muscle stimulation; such systems move their user’s muscles using computer-controlled electrical impulses, achieving the functionality of exoskeletons without the bulky motors. Their smaller size, a consequence of this integration with the user’s body, enabled haptic feedback in scenarios previously not possible with existing devices.
In my research group, we engineer interactive devices that integrate directly with the user’s body. We believe that these types of devices are the natural succession to wearable interfaces and allow us to investigate how interfaces will connect to our bodies in a more direct and personal way.

Watch on YouTube
Biography:
Pedro Lopes is an Assistant Professor in Computer Science at the University of Chicago, where he leads the Human Computer Integration lab. Pedro focuses on integrating computer interfaces with the human body, exploring the interface paradigm that supersedes wearable computing. Some of these new integrated devices include a muscle-stimulation device that allows users to manipulate tools they have never seen before or that accelerates their reaction time, and a device that leverages the sense of smell to create an illusion of temperature. Pedro’s work is published at top-tier venues (ACM CHI, ACM UIST, Cerebral Cortex). Pedro has received three Best Paper awards, two Best Paper nominations, and several Best Talk/Demo/Video awards. Pedro’s work has also captured the interest of the media, such as the New York Times, MIT Technology Review, NBC, Discovery Channel, New Scientist, and Wired, and has been shown at Ars Electronica and the World Economic Forum (more: https://lab.plopes.org).
 

Vision Augmentation: How could see-through displays overwrite our visual world via computation?

Speaker: Yuta Itoh
Date: 26 Jan 2021
In this talk, I present our work on augmenting and enhancing our visual capabilities via augmented reality technology. Adding virtual information indistinguishable from reality has been a long-awaited goal in Augmented Reality (AR). Although already demonstrated in the 1960s, optical see-through displays have only recently seen a reemergence. Our group explores augmented vision, a subarea of human augmentation that aims to assist our visual perception via AR displays. Keywords include augmented reality, human augmentation, see-through displays, and eye-tracking.

Watch on YouTube
Biography:
Yuta Itoh is an Assistant Professor at the Tokyo Institute of Technology, Japan. His research interest is in vision augmentation, which supports and enhances human vision via augmented reality technology, including see-through near-eye displays. He received his Dr. rer. nat. from the Technical University of Munich in 2016. Before joining Tokyo Tech, he worked as a project assistant professor at Keio University (2016-2017). He was a research engineer at the Multimedia Lab at Toshiba Corp. (2011-2013).
 

Soli: Millimeter-wave radar for touchless interaction

Speaker: Jaime Lien
Date: 13 Jan 2021
Google’s Pixel 4 launched in fall 2019 with the first radar ever integrated into a smartphone, enabling gesture and presence sensing for new modes of user interaction. These sensing capabilities are powered by Soli, a miniaturized radar system designed and developed specifically for touchless interaction at Google Advanced Technology and Projects (ATAP).
In this talk, we discuss Soli’s development and productization path, from concept to core R&D to integration into Pixel 4. We highlight the tightly intertwined co-design of core radar technology, algorithms, and interaction design in Soli’s development, driven by requirements for a ubiquitous and scalable human sensing technology. We also discuss some of Soli’s new approaches and innovations in radar systems, signal processing, machine learning, and interaction design in order to enable a new modality for sensory perception.
In the second part of the talk, we cover some of the technical challenges faced in integrating Soli radar into the Pixel 4 smartphone, including chip size and placement, power consumption, and interference. We discuss how our cross-disciplinary approaches to these challenges ultimately enabled Soli to successfully ship in the phone.
Finally, we close with future-looking directions for radar in consumer devices. By capturing the sensitivity and precision of human movements in everyday user interfaces, we believe scalable millimeter-wave radar has the potential to revolutionize the way we interact with technology.

Watch on YouTube
Biography:
Dr. Jaime Lien is the Lead Radar Research Engineer of Project Soli at Google Advanced Technology and Projects (ATAP). She leads a technical team developing novel radar sensing techniques and systems for human perception and interaction. Soli radar technology has enabled new modes of touchless interaction in consumer wearables and devices, including Google’s Pixel 4 and the Nest Thermostat. Prior to Google, Dr. Lien worked as a communications engineer at NASA’s Jet Propulsion Laboratory. She holds a Ph.D. in electrical engineering from Stanford University, where her research focused on interferometric synthetic aperture radar theory and techniques. She obtained her bachelor’s and master’s degrees in electrical engineering from MIT. Her current research interests include radar signal processing and sensing algorithms, modeling and analysis of the underlying RF physics, and inference on radar data.