Talk Title

Aria Glasses: An open-science wearable device for egocentric multi-modal AI research

Talk Description

Augmented reality (AR) glasses will profoundly reshape how we design assistive AI technologies, including healthcare robotics, by harnessing human experience and training powerful AI models on egocentric data sources. We introduce Aria, a pair of all-day wearable glasses in a socially acceptable form factor that supports always-available, context-aware, and personalized AI applications. Its industry-leading hardware specifications, multi-modal data recording and streaming features, and open-source software, datasets, and community make Aria the go-to platform for egocentric AI research and are quickly accelerating this emerging field. In this talk, I will give an overview of the Aria ecosystem, recent advances in state-of-the-art egocentric machine perception, and the available open-source Aria software tools, and discuss the challenges and opportunities related to wearable and healthcare robotics research.

Speaker Bio

Dr. Zijian Wang is a Research Engineer in Meta’s Reality Labs Research division. His research interests include augmented reality (AR) glasses, 3D computer vision, contextual AI, hardware-software co-optimization, and robotics. Prior to joining Meta, he received his Ph.D. from Stanford University in 2019, advised by Prof. Mac Schwager, where he worked on planning and control for multi-robot systems.

Talk Title

Feeling the Future: Augmenting Medicine with Relocated Haptic Feedback in Extended Reality Environments

Talk Description

Extended reality, encompassing virtual reality, augmented reality, mixed reality, and everything in between, offers promising opportunities across various fields, including medicine and healthcare. Traditional medical training methods rely on expensive and often scarce physical models or on direct practice with patients, which poses ethical and safety concerns. Medical simulations in virtual environments provide a safer and more flexible alternative, allowing trainees to practice with virtual or “mixed reality” patients and receive feedback on their interactions. However, the absence of haptic feedback limits the immersive experience. Wearable haptic devices can enhance realism and training effectiveness. By relocating haptic feedback from the fingertips to the wrist, trainees can interact with physical tools while receiving supplemental haptic feedback, improving realism and enabling unencumbered interactions with tools and patients. This talk will explore the importance of relocated haptic feedback in medical training and its applications for enhancing realism and facilitating skill refinement in extended reality environments.

Speaker Bio

Jasmin holds an SB in Mechanical Engineering from MIT and an MS in Mechanical Engineering from Stanford University. She is currently working on her Ph.D. in Mechanical Engineering in the Collaborative Haptics and Robotics in Medicine (CHARM) Lab under Prof. Allison Okamura. Jasmin’s research focuses on relocated haptic feedback for wrist-worn tactile displays, aiming to optimize haptic relocation from the fingertips to the wrist for virtual interactions. Ultimately, she seeks to inform design decisions for wearable haptic devices and haptic rendering cues in extended reality environments.

Talk Title

Robotic Leg Control: From Artificial Intelligence to Brain-Machine Interfaces

Talk Description

One of the grand challenges in human locomotion with robotic prosthetic legs and exoskeletons is control: how should the robot walk? In this talk, Dr. Laschowski will present his latest research on robotic leg control, ranging from autonomous control using computer vision and/or reinforcement learning to neural control using brain-machine interfaces. One of the long-term goals of his research is to conduct the first experiments studying what level of control individual users prefer along this spectrum of autonomy, which remains one of the major unsolved research questions in the field.

Speaker Bio

Dr. Brok Laschowski is a Research Scientist and Principal Investigator with the Artificial Intelligence and Robotics in Rehabilitation Team at the Toronto Rehabilitation Institute, Canada’s largest rehabilitation hospital, and an Assistant Professor in the Department of Mechanical and Industrial Engineering at the University of Toronto. He is also a Core Faculty Member of the University of Toronto Robotics Institute, where he leads the Neural Robotics Lab. His fields of expertise include machine learning, computer vision, neural networks (biological and artificial), human-robot interaction, reinforcement learning, computational neuroscience, deep learning, and brain-machine interfaces. Overall, his research aims to improve health and performance by integrating humans with robotics and artificial intelligence.

Talk Title

NOIR: Neural Signal Operated Intelligent Robot for Everyday Activities

Talk Description

This talk presents Neural Signal Operated Intelligent Robot (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). The system demonstrates success on a diverse set of 20 challenging everyday household activities, including cooking, cleaning, personal care, and entertainment. Its effectiveness is further improved by the synergistic integration of robot learning algorithms, allowing NOIR to adapt to individual users and predict their intentions. This work enhances the way humans interact with robots, replacing traditional channels of interaction with direct neural communication.

Speaker Bio

Ruohan is a postdoctoral researcher at the Stanford Vision and Learning Lab (SVL) and a Wu Tsai Human Performance Alliance Fellow. He works on robotics, human-robot interaction, brain-machine interfaces, neuroscience, and art.

He is currently working with Prof. Fei-Fei Li, Prof. Jiajun Wu, and Prof. Silvio Savarese. He received his Ph.D. from The University of Texas at Austin, advised by Prof. Dana Ballard and Prof. Mary Hayhoe.