Note: Past Seminars Appear at Bottom


Title: Collaborative Active Learning for Robots
 
Date/Time/Location: Monday, February 2nd at 4:10 p.m. in Barnard 108
 
Speaker: Michelle Zhao
 
Abstract: Today's robot learning paradigms rely on human-provided data (e.g., demonstrations, preference labels) to adapt behavior and align with user intent. Yet in practice, teaching robots is a trial-and-error process that places the burden on humans to decipher what the robot misunderstands, diagnose failures, and supply the "right" corrective data. My research develops user-centric active learning methods that learn by supporting human teachers. In this talk, I will first introduce uncertainty quantification tooling that extends conformal prediction to the human-robot interaction setting, enabling robots to rigorously "know when they don't know" even when relying on black-box policies. I will then discuss how these uncertainty self-assessments enable robots to communicate insights to human teachers and proactively ask for targeted feedback within novel interactive learning paradigms. Coupling these ideas with cost-optimal planning algorithms, I will demonstrate how robots can interleave learning and collaboration with human partners over multitask sequences. I will end by taking a step back to examine the alignment process for robotics, discussing how rethinking interactive learning as collaborative and continual accounts not only for the task, but also for the nuanced interaction dynamics present during teaching.
 
Bio: Michelle Zhao is a Ph.D. candidate at Carnegie Mellon University in the Robotics Institute, working with Professors Henny Admoni and Reid Simmons. She studies human-robot interaction, with an emphasis on how robots can learn from and about people. Her research integrates methods from statistical uncertainty quantification, machine learning, and human-robot interaction to develop theoretical frameworks and practical algorithms for active learning from human feedback in domains like assistive robotic manipulation. Prior to her Ph.D., she earned her B.S. at the California Institute of Technology. She is a recipient of the Siebel Scholarship, the NDSEG Research Fellowship, a Rising Stars in Computational and Data Sciences selection, and an HRI Pioneers 2025 Honorable Mention, and has worked at Toyota Research Institute.

PAST 2026 SEMINARS

Title: Toward Actionable and Reliable Decision Making by Sim-to-Real Framework and Trustworthy Machine Learning
 
Date/Time/Location: Monday, January 26th at 4:10 p.m. in Barnard 108
 
Speaker: Longchao Da
 
Abstract: Complex decision-making problems can be framed as Markov decision processes and then solved with advanced policy-learning methods such as reinforcement learning. However, policies learned in simulation often struggle to generalize to real, safety-critical environments due to distribution shift, partial observability, and uncertainty. This talk presents a line of work that addresses these challenges by developing high-fidelity simulation, introducing sim-to-real training paradigms, performing offline policy evaluation, and conducting uncertainty quantification to support actionable and trustworthy decision-making in real-world domains, with applications in transportation, healthcare, and disaster monitoring and response.
 
Bio: Longchao Da is a Ph.D. candidate in Computer Science at Arizona State University. His research interests include sim-to-real policy learning, trustworthy AI, and data mining. He also leverages generative AI with uncertainty quantification to detect and mitigate hallucinations for more trustworthy responses. His work has appeared in top venues including AAAI, KDD, NeurIPS, ICML, IJCAI, CIKM, ECML, and CDC. He is a 2025 Google PhD Fellowship nominee, a two-time ASU Ph.D. Fellowship recipient, and the Best Poster Award winner at SDM 2025.

Seminars from 2025.