Title: Towards Collaborative Intelligence: Learning from Decentralized Data at Scale
Date/Time/Location: Monday, February 9th at 4:10 p.m. in Barnard 108
Speaker: Yujia Wang
Abstract: As modern data increasingly comes from decentralized sources, e.g., phones, smart
devices, and medical systems, learning must occur without centralizing sensitive data.
Federated learning (FL) enables learning from decentralized data sources but faces
significant challenges in real-world deployments, including data heterogeneity, system
variability, and communication bottlenecks. In this talk, I will present the algorithmic
and optimization foundations of collaborative intelligence, focusing on enabling efficient
and scalable learning from decentralized data. My work addresses FL’s challenges both
individually and in a more systematic, integrated way, depending on what the problem
demands. I will first diagnose how stale updates and data heterogeneity jointly destabilize
asynchronous FL and introduce a cached calibration mechanism that provably removes
the harmful delay-heterogeneity interaction. I will then introduce a modularized and
parallel block-coordinate framework for federated fine-tuning of large language models.
Together, these results establish optimization-driven principles that enable efficient
and scalable federated learning. The talk concludes with a vision for the next generation
of collaborative AI, where models learn efficiently while respecting privacy, system
constraints, and social trustworthiness.
Bio: Yujia Wang is a Ph.D. candidate in the College of Information Sciences and Technology
at The Pennsylvania State University, advised by Dr. Jinghui Chen. Her research spans
the theories and applications of collaborative intelligence and privacy-preserving
machine learning. Her work has been published in top venues such as ICML, NeurIPS,
ICLR, AISTATS, ACL, and TMLR. She has delivered technical talks at the SIAM-NNP Section
Conference and IBM Research, and presented her work at the SDM Doctoral Forum. She
actively serves as a reviewer for leading AI conferences and journals. Beyond academia,
she gained industry experience as a Research Intern at IBM Research.
PAST 2026 SEMINARS
Title: Toward Actionable and Reliable Decision Making by Sim-to-Real Framework and Trustworthy
Machine Learning
Date/Time/Location: Monday, January 26th at 4:10 p.m. in Barnard 108
Speaker: Longchao Da
Abstract: Complex decision-making can be framed as a Markov Decision Process and then solved
with advanced policy-learning methods such as reinforcement learning. However, policies learned
in simulation often struggle to generalize to real, safety-critical environments due
to distribution shift, partial observability, and uncertainty. This talk presents
a line of work that addresses these challenges by developing high-fidelity simulation,
introducing sim-to-real training paradigms, performing offline policy evaluation,
and conducting uncertainty quantification to support actionable and trustworthy decision-making
in real-world domains, with potential applications in transportation, healthcare, and
disaster monitoring and response.
Bio: Longchao Da is a Ph.D. Candidate in Computer Science at Arizona State University. His research interests include Sim-to-Real Policy Learning, Trustworthy AI, and
Data Mining. He also leverages Generative AI with uncertainty quantification to detect
and mitigate hallucinations for more trustworthy responses. His work has appeared
in top venues such as AAAI, KDD, NeurIPS, ICML, IJCAI, CIKM, ECML, and CDC. He is a
2025 Google PhD Fellowship nominee, a two-time ASU Ph.D. Fellowship recipient, and
the Best Poster Award winner at SDM 2025.
Title: Collaborative Active Learning for Robots
Date/Time/Location: Monday, February 2nd at 4:10 p.m. in Barnard 108
Speaker: Michelle Zhao
Abstract: Today, robot learning paradigms rely on human-provided data (e.g., demonstrations,
preference labels) to adapt their behavior and align with user intent. Yet in practice,
this process of teaching robots is one of trial-and-error that places the burden on
humans to decipher what the robot misunderstands, diagnose failures, and supply the
“right” corrective data. My research develops user-centric active learning methods
that learn by supporting human teachers. In this talk, I will first introduce uncertainty
quantification tooling that extends conformal prediction to the human-robot interaction
setting, enabling robots to rigorously “know when they don’t know” even when relying
on black-box policies. I will then discuss how these uncertainty self-assessments
enable robots to communicate insights to human teachers and proactively ask for
targeted feedback within novel interactive learning paradigms. Coupling these ideas
with cost-optimal planning algorithms, I will demonstrate how robots can interleave
learning and collaboration with human partners over multitask sequences. I will
end this talk by taking a step back to examine the alignment process for robotics
and discuss how rethinking interactive learning as collaborative and continual accounts
not only for the task, but also for the nuanced interaction dynamics present during
the teaching process.
Bio: Michelle Zhao is a Ph.D. candidate at Carnegie Mellon University in the Robotics
Institute, working with Professors Henny Admoni and Reid Simmons. She studies human-robot
interaction, with an emphasis on how robots can learn from and about people. Her research
integrates methods from statistical uncertainty quantification, machine learning,
and human-robot interaction to develop theoretical frameworks and practical algorithms
for active learning from human feedback in domains like assistive robotic manipulation.
Prior to her Ph.D., she earned her B.S. at the California Institute of Technology.
She is a recipient of the Siebel Scholarship, the NDSEG Research Fellowship, and an
HRI Pioneers 2025 Honorable Mention, was selected for Rising Stars in Computational
and Data Sciences, and has worked at Toyota Research Institute.
Seminars from 2025.