Note: Past Seminars Appear at Bottom


Title: TBD

Date/Time: Monday, April 28 at 4:10pm in Barnard 108

Speaker: Luis Garcia

Abstract: N/A

Bio: N/A


PAST 2025 SEMINARS

Title: Advancing Trustworthy AI in Open Environments: Robustness, Interpretability, and Fairness

Date/Time: Monday, February 24 at 4:10pm in Barnard 108

Speaker: Bojian Hou

Abstract: As AI systems become increasingly embedded in critical applications, ensuring trustworthiness in open environments presents significant challenges. Traditional AI models, designed for closed environments with static distributions and predefined inputs, often struggle when confronted with real-world scenarios where data distributions shift and new features emerge unpredictably. This presentation introduces fundamental frameworks for developing trustworthy AI systems capable of operating effectively in open environments while maintaining three crucial properties: robustness, interpretability, and fairness. In addressing robustness, I will present our Feature Evolvable Streaming Learning (FESL) framework, a pioneering approach that enables AI systems to adapt to evolving feature spaces in online learning scenarios. FESL introduces a novel mechanism for seamlessly incorporating new features as they emerge while allowing outdated features to gracefully fade away, maintaining model relevance and performance in dynamic environments. For interpretability, I will detail our Learning with Interpretable Structure from Gated RNN (LISOR) method, which extracts interpretable structures from gated recurrent neural networks, providing crucial insights into the decision-making processes of these powerful but often opaque models. Finally, I will discuss our Fairness-Aware Class Imbalanced Learning on Multiple Subgroups (FACIMS) framework, which tackles the dual challenges of class imbalance and fairness in open-environment datasets, offering a robust approach for maintaining equitable outcomes across multiple subgroups while preserving model effectiveness. This work lays the foundation for next-generation AI systems that can maintain trustworthiness while operating in complex, evolving environments, with applications spanning healthcare, finance, and other critical domains where reliable AI is essential.

Bio: Dr. Bojian Hou is a Postdoctoral Researcher at the University of Pennsylvania. His research focuses on Trustworthy Artificial Intelligence and Machine Learning, specializing in robustness, interpretability, and fairness and their applications in healthcare. He is also interested in large language models, multi-modal learning, semi-supervised learning, and federated/distributed learning. He received his Ph.D. in Computer Science from Nanjing University under the supervision of Prof. Zhi-Hua Zhou, where his dissertation earned multiple excellence awards including the JSAI Excellent Doctoral Dissertation Award. His research contributions have resulted in over 40 peer-reviewed publications in premier venues such as NeurIPS, ICLR, AAAI, and IJCAI, accumulating more than 450 citations. Dr. Hou's work has been recognized with several prestigious awards, including the 2024 AMIA Distinguished Paper Award, the 2023 Best Paper Award at ACM BCB, and the PennAITech Innovation Fellow Award. He serves as a program committee member for top-tier conferences including ICLR, NeurIPS, and ICML, and regularly reviews for leading journals such as IEEE TPAMI, TNNLS, Nature Methods, and Medical Image Analysis. Prior to his current position, he held research roles at Cornell University and 4Paradigm Co., bringing both academic excellence and industry perspective to his work.


Title: Towards Reliable AI: A Framework for Quantification of AI Uncertainty

Date/Time: Monday, March 3 at 4:10pm in Barnard 108

Speaker: Ali Siahkoohi 

Abstract: Recent advances in artificial intelligence (AI) have shifted computational science and engineering from first-principle methods to data-driven approaches. Such approaches, by leveraging insights from large datasets and machine learning, promise enhanced predictive accuracy and reduced computational costs. However, they often sacrifice resilience, lacking the error bounds and reliability thresholds inherent in first-principle methods. This efficiency-resilience tradeoff poses a critical barrier to deploying data-driven methods in high-stakes applications. In this talk, I will present a framework for integrating uncertainty quantification (UQ) into AI model design to address this tradeoff. The framework comprises three components: (1) probabilistic predictions via generative models, (2) uncertainty-aware training using variational inference, and (3) scalability for high-dimensional, real-world problems. Building on UQ's demonstrated success in domains such as healthcare, engineering, and climate science, this approach improves reliability by quantifying prediction confidence, helping users identify model limitations and anticipate potential errors. As part of this framework, I will present the first theoretically grounded method for learning conditional measures in function spaces, addressing the limitations of current generative models in learning from data with varying resolutions. By deriving the conditional denoising score matching objective and implementing it with neural operators, this method enables discretization-invariant generative modeling that seamlessly generalizes across resolutions. This innovation transforms applications in domains such as medical imaging, climate modeling, and Earth sciences, where data inherently spans multiple resolutions. 
Finally, I will outline my plans to expand this framework, bridging the gap between first-principle resilience and data-driven efficiency to develop reliable AI systems for critical, real-world challenges.

Bio: Ali Siahkoohi is a Simons Postdoctoral Fellow in the Department of Computational Applied Mathematics & Operations Research at Rice University, jointly hosted by Dr. Maarten V. de Hoop and Dr. Richard G. Baraniuk. He received his Ph.D. in Computational Science and Engineering from Georgia Institute of Technology in 2022. His research focuses on designing scalable methods for quantifying uncertainty in AI models, with a broader goal of enhancing AI reliability.


Title: Scalability, Expressiveness, and Explainability of Graph Machine Learning

Date/Time: Monday, March 10 at 4:10pm in Barnard 108

Speaker: Chunjiang Zhu

Abstract: In the past decade, graph deep learning models, including graph neural networks and graph transformers, have delivered impressive results in solving complex graph problems, driving advancements in fields such as neuroscience and chemoinformatics. However, these methods still face key challenges, including scalability to massive datasets, expressing high-order relationships, and explainability of the decision-making process. In this talk, I will start with a brief introduction to graph data reduction techniques for scalable graph learning and network analysis, followed by more detailed discussions of our recent developments in improving the expressiveness and explainability of graph deep learning models, including expressive hypergraph neural networks and mixed integer programming explainers for graph classification.

Bio: Chunjiang Zhu is an assistant professor and interim associate head in the Department of Computer Science at the University of North Carolina at Greensboro. Before that, he was a postdoc at the University of Connecticut and received his Ph.D. in Computer Science from City University of Hong Kong. His research focuses on the foundations of graph learning and network analysis through the interplay of graph learning and graph algorithms, and on applications in science (AI4Science) and education. His goals are to bridge the gap between the deep learning and algorithmic foundations communities and to realize concrete impacts of AI and graphs in various scientific disciplines. His research has led to many innovative and practical contributions, including distributed graph clustering and community detection for grouping functionally similar networked entities, and the first fault-tolerant spectral and cut sparsifiers for efficient dynamic graph learning. His research papers are published in competitive conferences with 15%-25% acceptance ratios in the above fields, including ICML, AAAI, EMNLP, SIGIR, ICLR, ALENEX, COCOON, SIGSPATIAL, and DATE, as well as in ACM and IEEE Transactions and Theoretical Computer Science.


Title: Harnessing Uncertainty in Indoor Localization: Accuracy Improvement and Ambiguity Quantification

Date/Time: Monday, March 24 at 4:10pm in Barnard 108

Speaker: Xiangyu Wang

Abstract: In recent years, more capabilities and applications have been added to existing wireless communication systems due to the rapid development of the Internet of Things (IoT). WiFi and RFID exhibit tremendous potential in this industry due to their prevalence and low cost. Among these applications, indoor localization has been a popular field of research over the years, since it plays a vital role in resolving position-related challenges such as gesture recognition and human pose estimation.

In this talk, we explore the advancements achieved by accounting for uncertainties in fingerprinting-based indoor localization. Specifically, the talk addresses two key sources of uncertainty: signal uncertainty, which arises from environmental factors, and estimation uncertainty, which reflects confidence in position estimates. Through two representative projects, the talk demonstrates that incorporating uncertainty not only enhances localization accuracy but also introduces a novel evaluation metric that enhances system reliability and adaptability.
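To make the two sources of uncertainty concrete, here is a minimal, hypothetical fingerprinting sketch (not the speaker's system): each reference point is weighted by the similarity of its stored signal fingerprint to the observed one, and the weighted spread of the reference points around the estimate serves as a simple ambiguity measure.

```python
import math

# Toy weighted fingerprinting localizer. Reference entries pair a known
# position with the signal fingerprint (e.g., RSSI vector) recorded there.
def localize(fingerprints, observed):
    weights = []
    for pos, rssi in fingerprints:
        d = math.dist(rssi, observed)        # distance in signal space
        weights.append((pos, 1.0 / (d + 1e-9)))
    total = sum(w for _, w in weights)
    x = sum(p[0] * w for p, w in weights) / total
    y = sum(p[1] * w for p, w in weights) / total
    # Weighted spread of reference points around the estimate: a large
    # spread signals an ambiguous (low-confidence) position estimate.
    spread = math.sqrt(sum(w * ((p[0] - x) ** 2 + (p[1] - y) ** 2)
                           for p, w in weights) / total)
    return (x, y), spread
```

When the observed fingerprint closely matches one reference point, the estimate collapses onto it and the spread is near zero; when several distant reference points match equally well, the spread grows, flagging the ambiguity the talk aims to quantify.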

Bio: Xiangyu Wang is a postdoctoral researcher at the Auburn University RFID Lab. Before joining the RFID Lab, he worked as an embedded software engineer for SML RFID. He received his Ph.D. in 2022 from the Department of Electrical and Computer Engineering at Auburn University. His research interests include wireless sensing, smart health, the Internet of Things (IoT), and IoT security. He is also interested in interdisciplinary topics involving deep learning, indoor localization, and Radio-Frequency Identification (RFID).


Title: Language-Agnostic Program Verification

Date/Time: Monday, March 31 at 4:10pm in Barnard 108

Speaker: Charlie Murphy

Abstract: Recently, there has been an explosion in the number of people who write code, and correspondingly a great variety in the languages they develop code in. However, as many know, writing software that executes as intended can be quite difficult, and getting it wrong can be quite costly. While there is a large variety of tools to support software development in popular languages like C, Java, and Python, many "low-resource" languages lack the same level of tooling support to aid in producing code that executes as intended (e.g., they lack program verifiers, synthesizers, and model checkers). This talk will discuss my work on language-agnostic program verification, which aims to bring such support to low-resource languages. Specifically, my talk will focus on my work developing the Semantics-Guided Synthesis (SemGuS) framework, which was recently developed as a solver-agnostic and domain-agnostic way of expressing synthesis problems. One of the key benefits of SemGuS is its ability to uniformly express synthesis problems; however, that generality comes at a cost: a user must provide both the syntax and the semantics of the language from which the desired program is to be synthesized, and any tools for reasoning about SemGuS problems must be parameterized by the input language. This talk will first briefly introduce the SemGuS framework and then describe my work to reduce the burden on both SemGuS users and tool developers by (1) developing a tool that automatically synthesizes a SemGuS-compliant semantics from an interpreter for a language and (2) developing verifiers for SemGuS solutions that soundly reduce verification queries to the satisfiability of a logical formula.

Bio: Charlie Murphy is a post-doctoral research associate at the University of Wisconsin-Madison and received his PhD from Princeton University in 2023. His research focuses on helping software developers write code that executes as intended. Specifically, his work focuses on the co-development of logic solvers and program verification and synthesis techniques.


Title: AI for Real-World Impact: From Medical Resource Optimization to System Reliability

Date/Time: Monday, April 7 at 4:10pm in Barnard 108

Speaker: Mona Esmaeili

Abstract: This seminar showcases three real-world applications of AI and data analytics aimed at improving efficiency, reliability, and user experience in healthcare and consumer technology. The first project focuses on optimizing medical asset management in hospitals by forecasting equipment demand. Through time series analysis and real-time tracking, the system helps reduce stockouts and overstocking, ultimately improving patient care and operational readiness. The second project involves analyzing MacBook hardware failures, particularly from liquid spills, to identify vulnerable components and visualize failure trends using interactive dashboards, enabling more informed design improvements. The final project presents the development of a smart solution recommender system that helps users resolve technical issues more efficiently, reducing support center loads and enhancing customer satisfaction. Together, these projects highlight the impact of AI-driven solutions in addressing practical challenges across diverse domains.
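The abstract does not specify the forecasting model used for equipment demand; as a minimal illustration of one common time series approach, a simple exponential-smoothing one-step forecast might look like:

```python
# Toy one-step demand forecast via simple exponential smoothing:
# the smoothed level blends each new observation with the running
# estimate, weighting recent demand by alpha.
def forecast(demand, alpha=0.5):
    level = demand[0]
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

# Flat demand stays at its level; a recent jump pulls the forecast up.
print(forecast([4.0, 4.0, 4.0, 4.0]))        # -> 4.0
print(forecast([0.0, 10.0], alpha=0.5))      # -> 5.0
```

In practice a hospital system would also need seasonality and real-time tracking signals, which this sketch omits.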

Bio: Mona Esmaeili is a Ph.D. candidate in Computer Science and Electrical Engineering at the University of New Mexico. Her research spans applied machine learning, systems reliability, and healthcare optimization. She recently completed a 7-month internship at Apple, where her work on failure visualization and intelligent support systems received executive-level recognition. Mona holds a master's degree from the University of Washington and a bachelor's from Iran University of Science and Technology. Her mission is to create scalable AI solutions that bridge research and industry for societal impact.


Title: Detection of Spatiotemporal Changepoints in Air Quality: A Generalised Additive Model Approach

Date/Time: Monday, April 14 at 4:10pm in Barnard 108

Speaker: Rebecca Killick

Abstract: Air quality is an important measure for both ongoing public health and as part of climate modelling. Changes in the spatio-temporal distribution of air quality are important in the short term, e.g. for managing biohazards, and in the longer term for informing climate scenarios or predicting response to climate forcings.

The detection of changepoints in spatio-temporal datasets has been receiving increased focus in recent years and is utilised in a wide range of fields. With temporal data observed at different spatial locations, the current approach is typically to use univariate changepoint methods in a marginal sense with the detected changepoint being representative of a single location only. We present a spatio-temporal changepoint method that utilises a generalised additive model (GAM) dependent on the 2D spatial location and the observation time to account for the underlying spatio-temporal process.
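For contrast with the spatio-temporal GAM approach, the marginal univariate strategy described above can be sketched in a few lines; the piecewise-mean cost here is an illustrative choice, not the method from the talk.

```python
# Toy single-changepoint detector for one location's series:
# pick the split whose two-segment piecewise-mean fit
# minimises the total squared error.
def detect_changepoint(series):
    best_tau, best_cost = None, float("inf")
    for tau in range(1, len(series)):            # candidate split points
        left, right = series[:tau], series[tau:]
        m1, m2 = sum(left) / len(left), sum(right) / len(right)
        cost = (sum((x - m1) ** 2 for x in left)
                + sum((x - m2) ** 2 for x in right))
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# A series with a mean shift at index 5 (2.0 -> 8.0):
print(detect_changepoint([2.0] * 5 + [8.0] * 5))  # -> 5
```

Applied independently at each spatial location, such a detector yields one changepoint per site; the GAM-based method instead models the shared spatio-temporal surface, so detected changes reflect the joint process rather than a single location.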

We demonstrate an application of the method to an air quality dataset, specifically Nitrogen Dioxide and PM2.5 covering the COVID-19 lockdown in the United Kingdom. The stark spatio-temporal changes to these measures of air quality demonstrate the importance of considering a wide range of air quality measures when constructing climate scenarios.

Bio: Rebecca Killick received their PhD in Statistics from Lancaster University, where they hold Professor and Director of Research positions. For 2024/25 Rebecca is also a visiting Professor at UC Santa Cruz. In 2019 they were the first UK recipient of the "Young Statistician of the Year" award from the European Network for Business and Industrial Statistics, which recognizes the work of young people in introducing innovative methods, promoting the use of statistics, and/or successfully using it in daily practice.

Rebecca sees their research as a feedback loop: being inspired by problems in real-world applications, creating novel methodology to solve those problems, and then feeding the solutions back into the problem domain. Their primary research interests lie in the development of novel methodology for the analysis of univariate and multivariate nonstationary time series models. This covers many topics including developing models, model selection, efficient estimation, diagnostics, clustering, and prediction. Rebecca is highly motivated by real-world problems and has worked with data in a range of fields including bioinformatics, energy, engineering, environment, finance, health, linguistics, and official statistics.

Rebecca is passionate about ensuring the availability and accessibility of research in the form of open-source software. As part of this, they advocate within the statistical community for the recognition of research software as an academic output, serve as co-Editor-in-Chief of the Journal of Statistical Software, and sit on the rOpenSci statistical software peer review board.


Seminars from 2024.