As AI decouples intelligence from consciousness, as XR technologies transform perception, and as matter becomes machine-manipulable, three key questions about the future arise: (1) what will new AI-infused interfaces look like when digital bits occupy space and algorithms have faces; (2) what can we learn about human behavior through human-AI interaction, and how can AI models learn from human behavior; and (3) how will a shift from data-centered to human-centered AI systems augment our capabilities and enrich our lives?
To answer these questions, our current work focuses on Human-Computer Interaction (HCI), eXtended Reality (XR), and Artificial Intelligence (AI). In particular, we investigate the design, engineering, and study of interfaces (software and hardware), interactions, tools, and systems for augmenting our physical capabilities. We are interested in skill acquisition and recovery, such as fitness training and rehabilitation therapy; physical task guidance, such as equipment repair or cooking; and creative collaboration in music and dance.
We are inspired by the writings of Douglas Engelbart and J. C. R. Licklider. In both Augmenting Human Intellect and Man-Computer Symbiosis, we see a vision of a world where human capabilities are greatly enhanced through close interaction with machines, a vision we call Human-AI Integration.
Memento Player: Shared Multi-Perspective Playback of Volumetrically-Captured Moments in Augmented Reality. CHI 2023.
CardsVR: A Two-Person VR Experience with Passive Haptic Feedback from a Deck of Playing Cards. ISMAR 2022.
Situated VR: Toward a Congruent Hybrid Reality Without Experiential Artifacts. IEEE CG&A 2022.
SRI-EEG: State-Based Recurrent Imputation for EEG Artifact Correction. Frontiers in Neuroscience 2022.
AI-Generated Virtual Instructors Based on Liked or Admired People Can Improve Motivation and Foster Positive Emotions for Learning. FIE 2022.
FaraPy: An Augmented Reality Feedback System for Facial Paralysis using Action Unit Intensity Estimation. UIST 2021.
SceneAR: Scene-based Micro Narratives for Sharing and Remixing in Augmented Reality. ISMAR 2021.
EntangleVR: A Visual Programming Interface for Virtual Reality Interactive Scene Generation. VRST 2021.
Exploratory Design of a Hands-free Video Game Controller for a Quadriplegic Individual. AHs 2021.
Motion Improvisation: 3D Human Motion Synthesis with a Transformer. UIST 2021.
PneuMod: A Modular Haptic Device with Localized Pressure and Thermal Feedback. VRST 2021.
Design and Evaluation of a Hands-free Video Game Controller for Individuals with Motor Impairments. Frontiers 2021.
Opportunities and Challenges for Integrating Living Microorganisms in Human-Computer Interaction. AHs 2020. Best Paper Award (top 4%).
Designing Interactions Beyond Conscious Control: A New Model for Wearable Interfaces. IMWUT 2020.
A wearable electrical stimulation device that provides haptic feedback and reduces motion sickness in VR. CHI 2019.
A flexible, portable mat that can sense and track objects, and support multi-touch and hover gestures. CHI 2018. Best Paper Award (top 1%).
An outdoor VR time travel experience that takes you to the MIT of 1916 as well as an envisioned future 100 years from now.
A collaborative interface that allows remote control of a person's walking trajectory through galvanic vestibular stimulation (GVS). VRST 2017.
Methods for managing user perception and attention based on the cognitive illusion of inattentional blindness. DIS 2018.
A mocap hack using Vive controllers before Vive trackers became available. IAP 2017.
A pipeline for the automatic generation of VR worlds from music. It uses a deep neural network for mood-based image generation. VRST 2017.
Creating immersive and interactive VR worlds using the real world as a template. New Context 2015.
A sensor-based bedtime alarm and a wallpaper display to promote awareness with sleep data visualization. MobiHealth 2015.
Room-scale social VR with object and full-body tracking, built using Kinects before the Vive became available. IEEE VR 2016.
Handwritten Tamil character recognition with a Convolutional Neural Network (CNN). NEML 2014.
A continuously morphing 4D geometric VR world, my first VR project (Oculus DK1), built back in 2013.
An outdoor, fast-paced, team-based Android game. This was my Master's thesis at the Media Lab, completed in August 2013. PUC 2015.
Three physical microgames for Google Glass that encourage short breaks from sitting. Preprint 2014.
Misha Sra Assistant Professor of Computer Science
Sherry Chen PhD student, CS
Yimeng Liu PhD student, CS
Atieh Taheri PhD candidate, ECE
Andrew Huard PhD student, ECE
Jungah Son PhD student, MAT
Arthur Caetano PhD student, CS
Avinash Nargund PhD student, ECE
Zichen Chen PhD student, CS
Kojiro Takeyama Visiting Researcher, Toyota Research
Purav Bhardwaj Visiting student, NID, India
Carlos Gilberto Visiting PhD student, MechE, UNAM, Mexico City
Giuliana Barrios Dell'Olio Visiting Master's Student, CS 2020-2021 (now at McKinsey & Company)
Jake Guida Master's Student, CS 2020 (now at Adobe)
Mengyu Chen PhD'23, MAT
We are always looking for exceptional students at the intersection of Human-Computer Interaction, Computer Science, Electrical and Computer Engineering, Cognitive Science, and Mechanical Engineering.
If you are interested in applying to the Human-AI Integration Lab:
© Human-AI Integration Lab