Note: Past Seminars Appear at Bottom


Harnessing Deep Neural Networks for Early Warning of Harmful Algal Blooms

Date/Time: Tuesday, February 20th at 4:10 p.m. in Roberts Hall 210

Speaker: Neda Nazemi

Abstract: The growing frequency, intensity, and complexity of climate-induced natural hazards call for innovative risk management methodologies. Advances in Artificial Intelligence (AI), especially in machine learning and deep learning, have revolutionized the analysis of big data, leading to the development of sophisticated predictive models. These models are essential for identifying the drivers of environmental challenges and delivering accurate forecasts, playing a key role in risk communication to policymakers. This facilitates the establishment of early warning systems and decision support tools, promoting proactive decision-making and the formulation of adaptive strategies to enhance community resilience against increasing threats. This presentation emphasizes the use of deep neural networks, specifically one-dimensional Convolutional Neural Networks (1D-CNNs), in addressing the challenge of harmful algal blooms (HABs), a critical global water-quality issue. Because HABs arise abruptly and are difficult to control, traditional mechanistic and statistical models fall short in providing timely forecasts. I will explore the application of deep learning for generating accurate, potentially real-time forecasts of chlorophyll-a levels, which serve as indicators of algal blooms in aquatic systems, thereby aiding the pursuit of sustainable environmental management.
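The core operation behind such a forecaster can be illustrated with a minimal sketch of a one-dimensional convolution, the building block of a 1D-CNN. This is a generic illustration, not Dr. Nazemi's model; the chlorophyll-a readings and filter weights below are invented for demonstration.

```python
import numpy as np

# Hypothetical daily chlorophyll-a readings (µg/L); values are illustrative only.
chl_a = np.array([2.1, 2.3, 2.2, 2.8, 3.5, 5.0, 8.2, 12.9, 11.4, 9.8])

def conv1d(x, kernel, bias=0.0):
    """Valid-mode 1D convolution: slide the kernel across the series."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) + bias
                     for i in range(len(x) - k + 1)])

# One filter of width 3 (weights made up, not trained): a difference-like
# kernel that responds strongly to rapid rises in chlorophyll-a.
kernel = np.array([-1.0, 0.0, 1.0])
feature_map = conv1d(chl_a, kernel)
activated = np.maximum(feature_map, 0.0)  # ReLU nonlinearity
print(activated)
```

A trained 1D-CNN stacks many such filters, learns their weights from historical data, and feeds the resulting feature maps into further layers that output the chlorophyll-a forecast.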

Brief Bio: Dr. Nazemi has been an Assistant Teaching and Research professor at the Gianforte School of Computing since 2022. She earned her Ph.D. in Systems and Information Engineering from the University of Virginia in 2023. Specializing in multidisciplinary research, Dr. Nazemi focuses on the application of machine learning, data analytics, and AI methods to tackle global environmental challenges. Her research includes developing AI-enhanced frameworks for modeling and managing natural disasters. These frameworks are designed to integrate multi-source environmental monitoring, AI/ML-driven early warning systems, and adaptive decision support systems. Her innovative approach leverages advanced sensing technologies, machine learning, and data analytics to improve the management and planning of environmental, infrastructure, and natural resources.


Date/Time: Monday, February 26th at 4:10 p.m. in Barnard Hall 108

Speaker: Iflaah Salman

Abstract: I begin my talk by presenting how my industrial experience shaped my research at the intersection of software testing, cognitive psychology, and organisational factors, along with its major highlights. I also present the role of machine learning in studying human emotions and personality, considering its benefits for software engineering. Furthermore, I discuss several important methodological aspects of experimentation in software engineering. I conclude by relating my research expertise to potential collaboration and joint growth with MSU.

Brief Bio: Iflaah Salman, PhD (University of Oulu), is a Post-doctoral Researcher at the School of Engineering Science, Lappeenranta-Lahti University of Technology (LUT), Finland. Dr. Salman started her professional career in the software industry working as a software developer and a quality assurance engineer. Her research focuses on empirical software engineering, software testing, human factors (cognitive biases, emotions, personality), artificial intelligence for software engineering and organisational factors. Dr. Salman has published her work in top-tier software engineering venues like IEEE Transactions on Software Engineering and Empirical Software Engineering. She is a supporter of open data and open science.


PAST SEMINARS

Title: Achieving Robust Neuro-Symbolic Reasoning in High-Impact Domains through Knowledge Graphs and Large Language Models

Date/Time: Monday, January 22nd at 4:10 p.m. in Barnard 108

Speaker: Mayank Kejriwal

Abstract: While large language models (LLMs) like ChatGPT have ushered in a new era of opportunities and challenges, and their performance has been impressive, there are concerns about their responsible use, especially in high-impact domains such as healthcare and crisis response. Recently, neuro-symbolic AI has emerged as a subfield of AI aiming to bridge the representational differences between neural and symbolic approaches to apply LLMs more responsibly. This talk will describe my group's research in defining and designing neuro-symbolic AI for solving complex problems. Drawing on real case studies and application areas, I will argue that a judicious combination of neural reasoning and symbolic techniques can help us design systems that are more explainable, robust, and consequently, trustworthy.

Brief Bio: Mayank Kejriwal is a research assistant professor in the Department of Industrial & Systems Engineering at the University of Southern California, and a research team leader in the USC Information Sciences Institute (ISI). Prior to joining USC, he received his PhD in computer science from the University of Texas at Austin. He is the director of the Artificial Intelligence and Complex Systems group, and is also affiliated with the Center on Knowledge Graphs and the AI4Health initiative, at ISI. His research has been funded through multi-million dollar grants by the US Defense Advanced Research Projects Agency (DARPA), corporations and philanthropic foundations. His research has been published across almost a hundred peer-reviewed venues and featured in multiple press outlets, including The Guardian, The World Economic Forum, Popular Science, BBC, CNN Indonesia, and many others. He is the author of four books, including an MIT Press textbook on knowledge graphs that has been re-published in several languages. 


Beyond Modeling: Contextualizing Data and Improving Patient Representations in the Context of Learning Health Systems

Date/Time: Monday, January 29th at 4:10 p.m. in Barnard 108

Speaker: Keith Feldman

Abstract: In line with the values of P4 (predictive, preventive, participatory, and personalized) medicine, healthcare today continues to provide increasingly individualized care for each patient. While this form of individualized care has been shown to improve outcomes, there exists a fundamental conflict between completely personalized medicine and the success of machine learning and statistical tools that excel at extracting meaningful patterns from large repositories of data. There is a need to reframe the expectations of computational tools beyond simply synthesizing increasingly large bodies of data, and to develop techniques that can draw insight from the increasingly diverse body of data we collect through routine care. In this talk, I illustrate how these techniques can be leveraged to improve representations of patient data and create a more complete view of a given individual’s clinical state over time, as well as how contextualization of such information can not only aid in current clinical processes, but advance them.

Brief Bio: Dr. Feldman is a computer scientist by training who has spent his graduate, postdoctoral, and early faculty career developing a portfolio of research in the area of computational health, applying machine learning and data science techniques to problems across the healthcare domain. Tied to the potential of these techniques to capture variability between patient conditions and outcomes, his work has identified patient subtypes, evidence-based risk measures, and treatment patterns tied to the quality and effectiveness of care. Working closely with multidisciplinary teams, this work is fundamentally motivated by the notion of augmentation, not automation: rather than utilizing computation to replicate healthcare decisions, he aims to augment the existing skillsets of those engaged in healthcare, broadly seeking to determine what information, if available, would improve decisions relevant to their role. His work is funded by the NIH, AHA, Frontiers CTSI, and generous philanthropic gifts.


Title: Knowledge-centric Machine Learning on Graphs

Date/Time: Friday, February 2nd at 4:10 p.m. in Roberts 209

Speaker: Yijun Tian

Abstract: Due to the surge of data and computational capacities in recent years, people in the field of artificial intelligence and machine learning (ML) focus on collecting high-quality data (i.e., data-centric) and developing complex model architectures (i.e., model-centric). However, these two paradigms come with inherent limitations, such as intensive labor demand for data annotation and specialized expertise for model refinements. Consequently, there emerges a need for a new paradigm: knowledge-centric. This paradigm seeks to leverage knowledge (important and useful information) to facilitate effective and efficient machine learning. By anchoring on the knowledge, there is a reduced reliance on massive labeled data and intricate model architectures. Graphs, one of the most common and effective data types to represent structured and relational systems, have attracted tremendous attention from academia and industry. My research focuses on developing a knowledge-centric learning framework to model graphs, with the ultimate goal of impacting various research areas and benefiting real-world applications. In this talk, I will describe how I design knowledge-centric ML algorithms to obtain and leverage valuable knowledge from multiple places, including 1) learning knowledge from data, 2) distilling knowledge from models, and 3) encoding knowledge from external sources.
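The second source of knowledge named above, distilling knowledge from models, is commonly formalized as training a small student network to match a large teacher's softened output distribution (the classic knowledge-distillation objective of Hinton et al.). A minimal sketch of that loss, with invented toy logits; this illustrates the general technique, not the speaker's specific framework:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * np.log(p / q))) * T * T

# Toy logits for a 3-class classification task (illustrative values only):
teacher = [4.0, 1.0, 0.5]
student = [2.5, 1.5, 0.8]
print(distillation_loss(teacher, student))
```

In practice this term is minimized alongside the ordinary supervised loss, letting the student inherit the teacher's learned knowledge without its size.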

Brief Bio: Yijun Tian is a Ph.D. candidate in Computer Science and Engineering at the University of Notre Dame. His research interests lie in machine learning, data science, and network science. His research aims to empower machines with knowledge to positively influence real-world applications, health, and sciences. His work appears at venues such as AAAI, ICLR, and IJCAI, and has been recognized with oral and spotlight paper honors.


The Language of Discovery: Listening to the Whispers of Scientific and Health Data with Data Analytics and NLP

Date/Time: Monday, February 5th at 4:10 p.m. in Barnard 108

Speaker: Prashanti Manda

Abstract: Artificial intelligence (AI), machine learning, and natural language processing (NLP) are revolutionizing healthcare research and clinical practice. In this talk, I will discuss how advanced AI algorithms, such as deep learning and unsupervised learning, can sift through massive EHR datasets, uncovering hidden patterns and correlations that traditional analysis methods miss. Beyond EHRs, the talk will showcase the power of NLP in mining insights from the treasure trove of scientific literature. We will investigate techniques like information retrieval to automatically glean knowledge from medical publications. By bridging the gap between EHR analysis and scientific knowledge extraction, this talk will demonstrate how AI and NLP can revolutionize healthcare research and clinical practice.
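As a toy illustration of the information-retrieval idea, documents and queries can be represented as bags of words and ranked by cosine similarity. This is a generic sketch, not the speaker's method; the example "abstracts" below are invented:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented document snippets standing in for publication abstracts:
docs = [
    "deep learning models predict patient outcomes from ehr data",
    "algal blooms threaten water quality in coastal regions",
    "nlp extracts findings from medical publications",
]
bags = [Counter(d.split()) for d in docs]

query = Counter("mining medical publications with nlp".split())
ranked = sorted(range(len(docs)), key=lambda i: cosine(query, bags[i]),
                reverse=True)
print(ranked[0])  # index of the best-matching document
```

Real literature-mining systems add weighting (e.g., TF-IDF) and learned embeddings on top of this same retrieve-and-rank skeleton.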

Brief Bio: In 2012, Dr. Manda earned a Ph.D. in Computer Science from Mississippi State University. An NSF CAREER Award winner, she is currently an Associate Professor in the Department of Informatics and Analytics at UNC Greensboro.


Geometric Modeling and Physics-Informed Machine Learning for Computer Vision Applications

Date/Time: Monday, February 12th at 4:10 p.m. in Barnard 108

Speaker: Diego Patiño

Abstract: Our world is inherently geometric because it is composed of three-dimensional objects that exist in space and have physical dimensions. We use geometry to represent these objects' properties and relationships, such as angles, distances, and shapes. Moreover, objects (and quantities) in our world follow physical laws that determine their interaction and allow us to estimate their present and future state. Geometric computer vision and physics-informed machine learning are two powerful tools that are attracting increasing attention because of their applications in various fields of research and industry, such as medical imaging, autonomous vehicles, and 3D reconstruction. This talk discusses research examples incorporating prior knowledge about the geometrical and physical constraints inherent to the 3D world into state-of-the-art computer vision and machine learning pipelines. We will show how geometric computer vision enables the analysis and understanding of complex 3D structures and environments, while physics-informed machine learning provides insight into the underlying physical phenomena to drive the machine learning models into a better representation of complex systems.
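The physics-informed idea can be sketched in a few lines: augment a data-fitting loss with a penalty for violating a known physical law, so the fitted model is pulled toward physically plausible solutions. The falling-object example and all numbers below are invented for illustration and are not taken from the talk:

```python
import numpy as np

# Known physics: free fall obeys y = y0 - 0.5*g*t^2.
g = 9.81
t = np.linspace(0.0, 1.0, 11)
y_true = 10.0 - 0.5 * g * t**2
rng = np.random.default_rng(0)
y_obs = y_true + rng.normal(0.0, 0.05, t.shape)  # noisy "sensor" data

def loss(params, lam=1.0):
    """Data misfit + physics residual for the model y = a - b*t^2."""
    a, b = params
    y_hat = a - b * t**2
    data_term = np.mean((y_hat - y_obs) ** 2)
    physics_term = (b - 0.5 * g) ** 2  # penalize violating b = g/2
    return data_term + lam * physics_term

# Crude grid search over (a, b); a real pipeline would use gradients.
candidates = [(a, b) for a in np.linspace(9.5, 10.5, 21)
              for b in np.linspace(4.0, 6.0, 21)]
best = min(candidates, key=loss)
print(best)
```

The physics term acts as a regularizer: even with noisy or sparse observations, the recovered coefficient stays close to the value the physical law dictates.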

Brief Bio: Diego Patiño is a Post-doctoral Fellow in the Department of Electrical and Computer Engineering at Drexel University, working with Professor David K. Han. Before joining Drexel, he was a Post-Doctoral Researcher within the GRASP Lab at the University of Pennsylvania, working under the supervision of Kostas Daniilidis. Diego Patiño received his B.S., M.S., and Ph.D. degrees in Computer Engineering from the National University of Colombia, in 2010, 2012, and 2020 respectively. He was a visiting researcher at the University of Wisconsin-Madison and later at the University of Pennsylvania. His research interests revolve around machine learning, physics-informed machine learning, and geometric approaches to computer vision with applications in areas such as robotics and medical imaging, among others. More specifically, his research focuses on 3D vision, symmetry detection, 3D reconstruction, graph neural networks, robot perception, and reinforcement learning applied to problems in science and engineering.


Data Imputation Framework for Time Series Data with Large Missing Data Gaps and Extreme Events

Date/Time: Friday, February 16th at 4:10 p.m. in Roberts 209

Speaker: Rui Wu

Abstract: This presentation is about how to estimate missing values within time series data. This can be very challenging if a dataset has large missing data gaps and includes extreme events, i.e., rare events that can have important impacts. Missing data is a common issue with time series data across domains including environmental monitoring, structural health monitoring, bioinformatics, and other Internet of Things (IoT) applications. Missing data gaps can occur for various reasons, such as damaged sensors, loss of power, and problems with data storage or transmission. Most existing machine learning models cannot be applied directly if historical data has missing values. To tackle the missing data issue, the data records are usually removed or estimated. However, when the missing data gap is very large (e.g., a continuous 30% gap for a parameter), removing data records with missing values can break temporal information, and data imputation for continuous missing gaps can be very challenging. Another challenge for data imputation problems is extreme events, such as hurricanes and stock market crashes. These events do not happen very often but can have huge impacts on data patterns and increase the difficulty of missing data estimation. To address these challenges, this presentation introduces a novel data imputation framework that includes reshape and extreme event classification preprocessing steps, as well as machine learning models to learn temporal connections between observed and missing values. The experimental results demonstrate that the proposed framework outperforms cutting-edge methods in terms of accuracy. Therefore, this framework can provide a more effective solution for imputing missing data in time series datasets with large missing data gaps and extreme events.
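As context for the problem setup, a common baseline fills gaps by interpolation and flags extreme points separately. The sketch below is a simplified stand-in for the framework described in the talk, which uses learned temporal models and an extreme-event classification step; the series and thresholds here are invented:

```python
import numpy as np

def impute_linear(series):
    """Baseline gap filling: linearly interpolate across NaN runs.
    (A simple stand-in for the learned temporal model in the talk.)"""
    s = np.array(series, dtype=float)  # copy so the input is untouched
    idx = np.arange(len(s))
    mask = np.isnan(s)
    s[mask] = np.interp(idx[mask], idx[~mask], s[~mask])
    return s

def flag_extremes(series, z=2.0):
    """Mark points more than z standard deviations from the mean,
    a crude proxy for the extreme-event classification step."""
    s = np.asarray(series, dtype=float)
    mu, sd = np.nanmean(s), np.nanstd(s)
    return np.abs(s - mu) > z * sd

# Sensor series with a continuous missing gap (NaNs) and one extreme spike:
x = np.array([1.0, 1.2, np.nan, np.nan, np.nan, 2.0, 9.0, 2.1])
filled = impute_linear(x)
print(filled)
print(flag_extremes(filled))
```

Linear interpolation breaks down exactly in the cases the talk targets: long continuous gaps and extreme events that do not vary smoothly, which is what motivates the learned imputation models.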

Brief Bio: Rui Wu received a Bachelor's degree in Computer Science and Technology from Jilin University, China, in 2013. He then pursued his Master's and Ph.D. degrees in Computer Science and Engineering at the University of Nevada, Reno, completing them in 2015 and 2018, respectively. Currently, Rui works as an assistant professor in the Department of Computer Science at East Carolina University, collaborating with geological and hydrological scientists to protect the ecological system. His primary research interests lie in machine learning and data visualization using AR/VR devices. Dr. Wu has actively contributed to several NSF- and NIH-funded projects, serving as both a Principal Investigator (PI) and Co-PI.


Seminars from 2023.