Killing one bird with two stones: high-throughput capture of both genotype and phenotype
Date/Time: Monday, December 6, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Jennifer Lachowiec
Abstract: Geneticists use incomplete information to understand how DNA produces the traits we observe in organisms, the translation of genotype to phenotype. Efforts in our lab leverage large datasets to fill in these gaps, tackling missing data from both the genotypic and phenotypic sides. From the genotypic angle, we are developing novel approaches to map and annotate DNA sequences with a focus on large, complex plant genomes with repetitive sequences. In contrast to standard approaches to assembling genomic sequences, we have evidence that low information-content, repetitive DNA sequences can be leveraged to fill gaps in genotypic data. From the phenotypic angle, we use remote sensing and networks of cameras to measure plant traits automatically over time, detecting variation in seemingly identical plants. By combining these datasets, we aim to understand the predictability, or robustness, of plants that include crops important to food and fuel security.
Bio: Jennifer Lachowiec is an Assistant Professor in Plant Sciences and Plant Pathology at MSU. A self-professed jane-of-all trades, Jennifer studied Genetics and Anthropology at the University of Wisconsin-Madison and later earned a PhD at the University of Washington-Seattle in Molecular Cellular Biology through the Department of Genome Sciences. Her lab studies robustness in plant development to varied environments and stochastic errors, with efforts spanning molecular and evolutionary genetics to bioinformatics and computational biology. Visit jllab.org to learn more.
Improving Research Design with Data Science
Date/Time: Wednesday, December 1, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Ann Marie Reinhold
Abstract: Across scientific disciplines, researchers have a common goal of developing parsimonious conceptual models that accurately represent hypotheses describing empirical data. From the cogent construction of data models, pipelines, and analyses to robust simulation frameworks and elegant visualizations, the application of data science tools and techniques offers researchers a means of maximizing internal and external validity of research design. Dr. Reinhold will discuss her past and ongoing research, including (1) applying data science techniques to improve hypothesis testing in empirical research; (2) reducing threats to internal validity in simulation modelling through automated code generation; and (3) using natural language processing to improve instrument fidelity in message testing research.
Bio: Dr. Reinhold is a data scientist who specializes in the development and application of computational methods to understand the mechanisms underpinning pressing environmental, societal, and cybersecurity problems. Dr. Reinhold completed her B.A. in Biology at the University of Colorado at Boulder, graduating summa cum laude in 2004. She earned her M.S. in Biology from Duke University in 2008 and Ph.D. in Ecology from Montana State University in 2014. Her postdoctoral training and ongoing research as an Assistant Research Professor at Montana State University employs a pan-disciplinary approach to data science.
Starting Two Separate Companies from the Ground Floor
Date/Time: Monday, November 29, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Jeff Trom
Abstract: This talk will discuss lessons learned through the experience of starting two separate companies from the ground floor and taking both companies public. The two software companies were very different and ultimately achieved very different results. Key successes and mistakes will be discussed, with an opportunity for questions at the end of the talk.
Bio: Jeff Trom is a co-founder of a $400M-plus SaaS software company called Workiva (founded 2008). He serves as an executive vice president and is the Chief Technology Officer for the company. Jeff currently leads the development of all aspects of Workiva's cloud platform. As the CTO, Jeff oversees all product development and management and shapes product priorities and direction. He oversees an R&D team of over 400 people to continuously innovate Workiva's cloud solutions. He is one of Workiva's leading technology experts and has helped to grow the technology team from its inception. Prior to Workiva, Jeff was also a co-founder of Engineering Animation, Inc. (EAI). As the company's chief technology officer, he spent 10 years leading software architecture, development, and deployment. EAI was acquired in 2000 by Unigraphics Solutions (now part of Siemens USA). Jeff also served as a consultant for several years. Jeff served on the Montana High Tech Business Alliance board of directors and is now an advisory board member of Montana State University's Gianforte School of Computing. He earned his Bachelor of Science and Master of Science in Mechanical Engineering from the University of Iowa and a Doctorate from Iowa State University.
Computational Methods for Analysis of Functional Brain Images
Date/Time: Wednesday, November 17, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Vikram Ravindra
Abstract: Rapid advances in neuroimaging have resulted in large repositories of images with high temporal and spatial resolutions. This has motivated large-scale connectomic analyses aimed at understanding representation and processing of stimuli. Such analyses require novel computational tools that significantly extend the state of the art in machine learning and data science. These studies have far-reaching implications for the fields of precision psychiatry, behavioural analysis, and neurodegeneration, in addition to applications in AR/VR and advanced human interfaces.
In this talk, I will discuss my work on understanding neuronal responses to naturalistic visual stimuli. I will present two methods based on archetypal analysis and deep learning to: (a) find interpretable representations of fMRI responses, (b) predict objects in visual frames, and (c) reconstruct visual inputs. I will also briefly describe computational methods to characterize individual-level uniqueness of functional connectomes, presenting three methods based on matrix sampling, graph alignment, and linear algebraic techniques to accurately determine the identity of individuals and the nature of the cognitive task being performed. I will conclude my talk by highlighting the immense significance of, and challenges posed by, problems in the field of neuroimage analysis.
Bio: Vikram is a PhD candidate in the Department of Computer Science at Purdue University. His areas of research are data science and machine learning. He is interested in developing methods and applying them to biomedical signals to: (a) uncover biomarkers that indicate or predict neurodegenerative diseases and disorders, (b) understand how humans process information, and (c) build novel human-machine interfaces. He is also interested in combining tools and data from computational biology with biomedical images to draw novel insights about diseases. Previously, Vikram obtained a Master's degree from TU Munich.
Intelligent Security and Resilience in Pervasive Computing
Date/Time: Wednesday, November 10, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Talal Halabi
Abstract: The last decade has shown that pervasive computing is the future of critical infrastructure
including transportation systems, energy production, and medical support. However,
cybersecurity evolution has not been keeping up with the fast-paced development in
computing and communication technologies. Distributed and pervasive systems have introduced
an uncharted territory of security vulnerabilities and a wider attack surface, mainly
due to network openness and the deeply integrated physical and cyber spaces. The risk
of an adversary turning off streetlights, shutting down water treatment plants, or
even taking over military infrastructure has never been more real. Continuing to adopt
conventional approaches to cybersecurity and hoping for the best is mere wishful thinking.
These approaches might be effective in stopping run-of-the-mill automated probes and
small-scale attacks, but remain useless against the growing number of targeted, persistent,
and often AI-enabled attacks on critical systems. How can these threats be prevented
without full knowledge of their root vulnerabilities? How can we trigger a desperately
needed revolutionary design of anomaly detection systems to cope with resource limitation
at the network edge? How can we build security-sensitive pervasive applications with
intrinsic tolerance to attacks so they can recover from them as seamlessly as possible?
This research seminar will guide you through some potential, optimistic answers to these
troubling questions and overview recent progress in the design of intelligent
security and resilience solutions for large-scale networked systems, with a special
focus on emerging Cyber-Physical Systems.
Bio: Talal Halabi is an Assistant Professor in Computer Science at the University of Winnipeg.
Prior to that, he was a Postdoctoral Fellow at Queen's University in Kingston, Canada.
He received his Ph.D. in Computer and Software Engineering from the University of
Montreal in 2018. He is currently leading several high-impact and interdisciplinary
research projects in Theoretical and Applied Cybersecurity. His research mainly focuses
on designing security and resilience frameworks for large-scale networked systems
and adopts a rigorous approach to strategic defense through the lens of Artificial
Intelligence. He has published peer-reviewed research papers extensively while fostering
intensive collaborations with several Canadian universities including McGill University,
University of Montreal, and Queen's University.
Open Problems in Cybersecurity
Date/Time: Monday, November 8, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: William Peteroy
Abstract: There have been significant active investments in cybersecurity products and innovation since the 1990s, ballooning to over $11.5 billion in 2021. We'll walk through a quick survey of open problems in cybersecurity from an enterprise (buyer) perspective as well as an entrepreneur's perspective, with a significant chunk of time allotted for Q&A (so bring some questions).
Bio: William Peteroy is one of the founding Partners at Blackthorne Consulting and an Advisor to a number of Pacific Northwest cybersecurity and technology startups. Prior to Blackthorne, he led security strategy and innovation efforts as Chief Technology Officer at Gigamon. William joined Gigamon through the acquisition of ICEBRG, where he was CEO and co-founder. Earlier in his career, William worked in several business and technical leadership positions in network and software security, including as a Technical Director and Subject Matter Expert at the National Security Agency (NSA) and as a Security Strategist at Microsoft's Security Response Center (MSRC). William has served as an instructor at the National Cryptologic School at Fort Meade, a researcher at Dartmouth College, and speaks regularly at global information security conferences. William holds a Master of Science in Engineering degree in Computer Engineering from Johns Hopkins University.
Meditations on the Performativity of Code: A Cultural-Historical Activity Theory and Human-Computer Interaction Approach to Bias in the Tools and Symbols of CS Education
Date/Time: Monday, October 25, 4:10 p.m. - 5:00 p.m., via Zoom:
https://us06web.zoom.us/j/87536054583?pwd=N3d2UkkvVFFFdkxZWWc1QUw4dDAvQT09
Meeting ID: 875 3605 4583
Speaker: Jake Chipps
Abstract: Underrepresentation in computer science currently exists both at the industry level and in K-12 educational spaces. Over the past decade, a growing number of K-12 literacy courses were developed to increase the numbers of minoritized youth in computer science, and new programming languages and computational tools were developed to support novice learners. Despite an increase in minoritized students taking literacy courses, student numbers in courses such as AP Computer Science A, where curriculum focuses on industry-standard languages, tools, and practices, have remained stagnant across the nation. As the computer science education community begins to move beyond increasing participation to consider the ethical and sociopolitical issues surrounding CS and its role in society, it is also important to analyze the tools and symbols that mediate students' understanding of computational thinking. This talk will explore bias in technology by combining cultural-historical activity theory and human-computer interaction to investigate the systems in which novice learners use programming languages. This talk will also provide a call to action for those developing future computational tools.
Bio: Jake Chipps (he/him/his) is a computer science educator at Granada Hills Charter High School and an adjunct lecturer in the Secondary Education Department at California State University, Northridge. Jake has the opportunity to work with both pre-service and in-service teachers while also educating youths in computer science. He is dedicated to redefining what it means to do computer science and creating learning experiences that fit within the practices of local communities. Jake earned his Ed.D. from Pepperdine University with a focus on Learning Technologies where he studied the systems in which non-computer science teachers use coding and computational thinking in their classrooms.
This Problem is (NP-) Hard! Modifying Sparse Coding for Terrain Identification from Satellite Images
Date/Time: Monday, October 4, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Bradley Whitaker
Abstract: Applying machine learning to remote sensing problems can help scientists, engineers, and policy makers accurately identify and solve various problems. However, many traditional machine learning methods have two drawbacks. First, they are often black-box approaches with uninterpretable results. Second, most algorithms were developed with the goal of maximizing classification accuracy; such algorithms often struggle to accurately identify rare events in imbalanced datasets. In this presentation, I will discuss how my research team has tackled these problems in the context of identifying terrain type from satellite images, using sparse coding to effectively extract features from an imbalanced dataset. I will discuss the NP-hardness of sparse coding, traditional algorithmic workarounds, and novel modifications. I will then show how the linear features are used in conjunction with a linear classifier to preserve some level of interpretability in the machine learning pipeline.
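For context, the exact sparse-coding problem (best reconstruction under an l0 sparsity constraint) is NP-hard; the standard workaround relaxes the l0 penalty to an l1 penalty and solves the resulting convex problem. The sketch below is purely illustrative, not the speaker's pipeline: it runs the iterative shrinkage-thresholding algorithm (ISTA) on a synthetic dictionary `D` and a 2-sparse signal.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.05, n_iter=500):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 via ISTA, a standard
    convex relaxation of the NP-hard l0 sparse-coding problem."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the smooth term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy dictionary with unit-norm atoms and a signal built from 2 of its 10 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(10)
a_true[[2, 7]] = [1.5, -2.0]
y = D @ a_true
a_hat = ista(D, y)                         # recovers a sparse code for y
```

The recovered code `a_hat` is dominated by the two true atoms; in a classification pipeline such sparse codes would then feed a linear classifier, keeping each prediction attributable to a few dictionary atoms.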
Bio: Bradley M. Whitaker received his BS in Electrical Engineering from Brigham Young University and his M.S. and Ph.D. in Electrical and Computer Engineering from the Georgia Institute of Technology. Bradley is now an Assistant Professor at Montana State University in Bozeman, MT. His research focuses on signal processing and machine learning on constrained datasets (imbalanced, incomplete, etc.) with applications in healthcare and remote sensing.
Date/Time: Monday, August 30, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Facilitator: John Paxton
Abstract: This seminar will provide new and continuing graduate students with (1) useful information, (2) an opportunity to meet other students, staff and faculty, and (3) an opportunity to ask questions.
Date/Time: Monday, April 19, 4:10 p.m. - 5:00 p.m.
Facilitator: John Paxton
Abstract: At the end of every academic year, we celebrate the accomplishments of members of the Gianforte School of Computing. Join us for this year's celebration where we will reflect on our accomplishments and present awards.
How (Not) to Run a Forecasting Competition: Incentives and Efficiency
Abstract: Forecasting competitions, wherein forecasters submit predictions about future events or unseen data points, are an increasingly common way to gather information and identify experts. Perhaps the most prominent example in computer science is Kaggle, the platform inspired by the Netflix Prize which has run machine learning competitions with prizes up to $3M. The most common approach to running such a competition is also the simplest: score each prediction given the outcome of each event (or data point), and pick the forecaster with the highest score as the winner. Perhaps surprisingly, this simple mechanism has poor incentives, especially when the number of events (data points) is small relative to the number of forecasters. Witkowski et al. (2018) identified this problem and proposed a clever solution, the Event Lotteries Forecasting (ELF) mechanism. Unfortunately, to choose the best forecaster as the winner, ELF still requires a large number of events. This talk will overview the problem and introduce a new mechanism which achieves robust incentives with far fewer events. Our approach borrows ideas from online machine learning; time permitting, we will see how the same mechanism solves an open question about learning from strategic experts.
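The "simple mechanism" the abstract critiques can be sketched in a few lines: score every forecaster with a scoring rule (the quadratic/Brier score is used here purely for illustration) and declare the best score the winner. With few events, an overconfident forecaster can win this contest by luck, which is the incentive problem the talk addresses.

```python
import numpy as np

def brier_scores(predictions, outcomes):
    """Average quadratic (Brier) score per forecaster; lower is better.
    predictions: (n_forecasters, n_events) probabilities that each event occurs.
    outcomes:    (n_events,) realized 0/1 outcomes."""
    return ((predictions - outcomes) ** 2).mean(axis=1)

def simple_winner(predictions, outcomes):
    """The 'simple mechanism': pick the forecaster with the best score."""
    return int(np.argmin(brier_scores(predictions, outcomes)))

outcomes = np.array([1, 0, 1, 1, 0])
predictions = np.array([
    [0.9, 0.2, 0.8, 0.7, 0.1],   # well-calibrated forecaster
    [1.0, 0.0, 0.0, 1.0, 0.0],   # overconfident forecaster
])
winner = simple_winner(predictions, outcomes)
```

Here the calibrated forecaster happens to win, but had the third event gone the other way, the overconfident forecaster would have; scoring rules that are "proper" for a single forecaster do not guarantee truthful reporting in a winner-take-all contest.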
Bio: Rafael (Raf) Frongillo is an Assistant Professor of Computer Science at the University of Colorado Boulder. His research lies at the interface between theoretical machine learning and economics, primarily focusing on information elicitation mechanisms, which incentivize humans or algorithms to predict accurately. Before Boulder, Raf was a postdoc at the Center for Research on Computation and Society at Harvard University and at Microsoft Research New York. He received his PhD in Computer Science at UC Berkeley, advised by Christos Papadimitriou and supported by the NDSEG Fellowship.
Towards High-Throughput Cryptocurrency Transactions in Payment Channel Networks
Abstract: Scalability is one of the key issues that hinder the widespread use of cryptocurrencies, like Bitcoin, due to the underlying consensus algorithms for ensuring the security in a decentralized system. Recently, payment channel network (PCN) has been proposed as a promising off-chain solution. It allows instant and inexpensive payments by not requiring expensive and slow blockchain operations. However, to enable high-throughput transactions in PCNs, there are still many barriers to overcome, including transaction fee, node reliability, always-online requirement, balance depletion, and cryptocurrency utilization. In this talk, I will present some of our recent work on addressing these challenges. More specifically, I will first introduce two distributed algorithms for minimizing the transaction fee and providing robustness despite unreliable nodes, respectively. Next, I will demonstrate how to design smart contracts to deter the potential collusion in PCNs based on a game theoretic approach. Finally, I will share our thoughts on addressing other challenges.
Bio: Dejun (DJ) Yang is an Associate Professor in the Computer Science Department at Colorado School of Mines. He received the Ph.D. degree in Computer Science from Arizona State University in 2013 and the B.S. degree in Computer Science from Peking University in 2007. His research interests include networking, blockchain, Internet of Things, and mobile sensing, with a focus on the application of game theory, optimization, algorithm design, and machine learning to resource allocation, security and privacy problems. He received the 2019 IEEE Communications Society William R. Bennett Prize (only one of all the papers published in the IEEE/ACM Transactions on Networking and the IEEE Transactions on Network and Service Management in the previous three years) and Best Paper Awards at GLOBECOM, ICC (twice), and MASS, as well as a Best Paper Runner-Up at ICNP. He is currently an associate editor for the IEEE Internet of Things Journal.
Ecological Statistics for Complex Data
Abstract: Ecological phenomena are inherently complex, with data measurements complicated by spatial, temporal, spatiotemporal, pooled, or measurement-error processes. This seminar will give an overview of statistical approaches for detecting invasive species, understanding drivers of viral shedding, and modeling animal movement.
Bio: Andy Hoegh is an assistant professor of Statistics at Montana State University. His research is focused on the interface between complex data structures, environmental and ecological processes, and novel statistical methods.
Abstract: Eye-tracking technology tracks where a user looks and is increasingly being integrated into mixed reality devices. Although critical applications are being enabled, there are significant possibilities for violating user privacy and security expectations. We show that there is an appreciable risk of unique user identification even under natural viewing conditions in virtual reality. This identification would allow an app to connect a user's personal ID with their work ID without needing their consent, for example. To mitigate such risks, we propose a framework that incorporates gatekeeping via the design of the application programming interface and via software-implemented privacy mechanisms. Our results indicate that these mechanisms can reduce the rate of identification from as much as 85% to as low as 30%. The impact of introducing these mechanisms is less than 1.5° error in gaze position for gaze prediction. Gaze data streams can thus be made private while still allowing for gaze prediction, for example, during foveated rendering. Our approach is the first to support privacy-by-design in the flow of eye-tracking data while serving mixed reality applications.
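As a purely illustrative sketch (not the framework's actual mechanism), one simple software-implemented privacy mechanism is additive noise on the gaze stream: the application API receives a perturbed trace that still supports coarse gaze prediction while degrading the fine-grained dynamics used for identification. The gaze trace below is synthetic.

```python
import numpy as np

def privatize_gaze(gaze_deg, sigma_deg=1.0, rng=None):
    """Add Gaussian noise (in degrees) to a stream of 2-D gaze angles.
    A crude stand-in for a gatekeeping mechanism between the eye tracker
    and the application API: the noisy stream remains usable for, e.g.,
    foveated rendering, at the cost of a bounded gaze-position error."""
    rng = rng or np.random.default_rng()
    return gaze_deg + rng.normal(0.0, sigma_deg, size=gaze_deg.shape)

rng = np.random.default_rng(0)
gaze = rng.uniform(-10, 10, size=(1000, 2))       # synthetic gaze trace, degrees
noisy = privatize_gaze(gaze, sigma_deg=1.0, rng=rng)
mean_err = np.mean(np.linalg.norm(noisy - gaze, axis=1))  # ~1.25 deg here
```

With `sigma_deg=1.0` the mean perturbation stays in the same regime as the abstract's reported sub-1.5° gaze-prediction impact, though the paper's mechanisms are more sophisticated than plain additive noise.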
Bio: Brendan David-John (he/him/his) has a BS/MS from the Rochester Institute of Technology and is currently a PhD student at the University of Florida studying Computer Science. Brendan is from Salamanca NY, which is located on the Allegany reservation of the Seneca Nation of Indians. Brendan's personal goals include increasing the representation of Native Americans in STEM and higher education, specifically in computing. Brendan is a proud member of the American Indian Science & Engineering Society and has been a Sequoyah Fellow since 2013. Brendan's research interests include eye tracking, virtual reality, human perception, and vision within the field of computer graphics.
Understanding Our Earth: Trends in Environmental and Ecological Informatics
Abstract/Bio: Dr. Sankey received her PhD from Montana State University in Land Resources and Environmental Sciences. She is currently an Associate Professor in the School of Informatics, Computing, and Cyber Systems and the School of Earth and Sustainability, Northern Arizona University (NAU) in Flagstaff, AZ. Her presentation will discuss computational challenges in environmental science, research, and training as big data availability becomes increasingly common. Specifically, Dr. Sankey will present her research in remote sensing and geoinformatics, which focuses on vegetation cover change and land surface processes via manned and unmanned airborne and satellite image analysis. Dr. Sankey will discuss NAU’s new PhD program in informatics and how the big data challenges can be addressed via student training in environmental and ecological informatics.
Nested-Solution Facility Location Models
Abstract: Selecting the location for a set of facilities to be opened in order to service a set of customers is one of the most widely studied and applied problems in all of operations research. Classical facility location models can generate solutions that do not maintain consistency in the set of utilized facilities as the number of utilized facilities is varied. This talk will introduce the concept of nested facility locations, in which the solution utilizing p facilities is a subset of the solution utilizing q facilities, for all p < q. This approach is demonstrated with applications to the p-median model, with computational tests showing these new models achieve reductions in both average regret and worst-case regret when the number of utilized facilities is different from p.
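One simple way to guarantee the nesting property (a greedy heuristic for the p-median objective, offered only as an illustration and not necessarily the models in the talk) is to open facilities incrementally: each step adds the site that most reduces total customer-to-nearest-facility distance, so the p-facility solution contains the (p-1)-facility solution by construction.

```python
import numpy as np

def greedy_nested_pmedian(dist, p_max):
    """Greedily open facilities one at a time for the p-median objective.
    Because facilities are only ever added, the p-facility solution is a
    subset of the q-facility solution for all p < q (nested solutions).
    dist: (n_sites, n_customers) distance matrix."""
    n_sites = dist.shape[0]
    opened = []
    best = np.full(dist.shape[1], np.inf)   # current nearest-facility distances
    for _ in range(p_max):
        costs = [np.minimum(best, dist[j]).sum() if j not in opened else np.inf
                 for j in range(n_sites)]
        j_star = int(np.argmin(costs))      # site giving the largest improvement
        opened.append(j_star)
        best = np.minimum(best, dist[j_star])
    return opened

# Three candidate sites, each co-located with one customer.
dist = np.array([[0., 5., 5.],
                 [5., 0., 5.],
                 [5., 5., 0.]])
solutions = {p: greedy_nested_pmedian(dist, p) for p in (1, 2, 3)}
```

The trade-off the abstract quantifies is that enforcing nesting can cost some optimality for any single p (regret), in exchange for consistency as the number of facilities changes.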
Bio: Dr. Andreas Thorsen is an associate professor in the Jake Jabs College of Business & Entrepreneurship at Montana State University. His research focuses on developing optimization models for planning and managing operations and supply chains for products and services. Recently, his research has focused on access problems related to perinatal health services in Montana.
Cognitive Science to and from NLP
Abstract: This survey talk will examine the relationship between the cognitive science of language and natural language processing. In one direction, tools and techniques from NLP have been used to explain the presence of semantic universals: shared properties of meaning across the languages of the world. In the other direction, I will survey a new subfield called emergent communication, in which artificial agents learn to communicate (and in the process, develop their own "languages") in order to achieve their goals. I will sketch methods in which this tool can be used to help with a core NLP task: unsupervised machine translation.
Bio: Dr. Steinert-Threlkeld is an assistant professor in the Linguistics department at the University of Washington, where he arrived in 2019 after a postdoc at the Institute for Logic, Language and Computation at the University of Amsterdam. His research attempts to explain the shared structure of the languages of the world and to use these insights to help us build more intelligent and discursive machines.
How to run elections in which voters can verify that their votes are correctly counted
Abstract: With traditional election technologies, voters have little choice but to trust that
others will handle and count their votes properly. They must trust their local election
officials; they must trust the equipment that they use and, by extension, the vendors
who built and programmed the equipment; and they must trust numerous other individuals
and processes of which they may not even be aware. We can do better.
This talk will show how elections can be run so that voters can confirm for themselves that their votes have been accurately counted - without having to trust any software, hardware, or personnel. This is not just an academic exercise. Systems have been built and piloted in actual elections, and there is reason to be optimistic about broader deployments in the near future.
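As background, one building block behind end-to-end verifiable voting is homomorphic encryption, which Dr. Benaloh's dissertation introduced to elections: anyone can combine encrypted ballots into an encryption of the tally, so individual votes never need to be decrypted. The toy Paillier example below uses deliberately tiny, insecure primes and is an illustration of the mathematical idea only, not the cryptography of any deployed system.

```python
from math import gcd
from random import randrange

# Toy Paillier keypair (tiny, insecure primes for illustration only).
p, q = 47, 59
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # precomputed decryption helper

def encrypt(m):
    """Encrypt a vote m (0 or 1) with fresh randomness r coprime to n."""
    while True:
        r = randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so the tally is computed without opening any individual ballot.
votes = [1, 0, 1, 1, 0, 1]
tally_ct = 1
for v in votes:
    tally_ct = (tally_ct * encrypt(v)) % n2
```

Decrypting `tally_ct` yields the vote total directly. Real end-to-end verifiable systems add much more, including zero-knowledge proofs that each ciphertext encrypts a valid vote and publicly checkable decryption proofs.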
Bio: Dr. Josh Benaloh is Senior Principal Cryptographer at Microsoft Research and an Affiliate
Faculty Member of the Paul G. Allen School of Computer Science and Engineering at
the University of Washington. He earned his S.B. degree from the Massachusetts Institute
of Technology and M.S., M.Phil., and Ph.D. degrees from Yale University where his
1987 doctoral dissertation, Verifiable Secret-Ballot Elections, introduced the use
of homomorphic encryption to enable end-to-end verifiable election technologies.
Dr. Benaloh's numerous research publications in cryptography and voting have pioneered new technologies including the "cast or spoil" paradigm that brings voters into the election verification process with minimal burden. He has served on the program committees of dozens of cryptography and election-related conferences and workshops and is an author of Securing the Vote: Protecting American Democracy, the 2018 report of the National Academies of Sciences, Engineering, and Medicine Committee on the Future of Voting.
Learning glacier physics with neural networks
Abstract: Predicting sea level rise is difficult because there is uncertainty in how fast the polar ice caps flow, a process driven by complex friction and hydrology at the (unobservable) interface between ice and bedrock. While we can make an educated guess about the physics down there, and thus write down some equations, these equations come equipped with a variety of knobs that control what the resulting ice cap speed looks like. The settings of these knobs commensurate with reality are not known a priori. Fortunately, we have some measurements of ice cap speed, so we developed an algorithm that determines all combinations of knob settings consistent with those observations (i.e. Bayesian inference). Unfortunately, this algorithm requires us to solve the equations millions of times, which is impossible because it takes around half an hour each time. To circumvent this bottleneck, we teach a neural network to (approximately) solve the equations 10,000 times faster. Replacing the original equations with the neural network, we (leisurely) run the algorithm and find that the bumps that make friction at the ice cap's bed are probably long and tall and that satellites can't tell us much about whether there's a giant river beneath the Greenland Ice Sheet.
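The surrogate idea can be sketched in a few lines of NumPy (a toy, not the actual ice-flow emulator): run the expensive model a modest number of times, train a small neural network on those runs, then evaluate the cheap network wherever the inference algorithm would otherwise call the solver. Here `slow_model` is a hypothetical stand-in for the half-hour physics solve.

```python
import numpy as np

rng = np.random.default_rng(1)

def slow_model(theta):
    """Stand-in for an expensive physics solver: maps a friction 'knob'
    setting to an observable (an arbitrary smooth response here)."""
    return np.sin(3 * theta) + 0.5 * theta

# A modest training set of "expensive" runs over the knob's range.
X = rng.uniform(-1, 1, size=(256, 1))
Y = slow_model(X)

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - Y
    losses.append(float((err ** 2).mean()))
    dH = (err @ W2.T) * (1 - H ** 2)            # backprop through tanh
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)

def surrogate(theta):
    """Cheap replacement for slow_model inside Bayesian inference loops."""
    return np.tanh(theta @ W1 + b1) @ W2 + b2
```

Once trained, `surrogate` can be called millions of times inside a sampler (e.g. MCMC) at negligible cost, which is what makes the Bayesian knob-fitting described above feasible.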
Bio: Dr. Doug Brinkerhoff is an Assistant Professor in the Computer Science Department at the University of Montana, where he teaches courses in Machine Learning and Computer Vision. His research focuses on applying those techniques to environmental problems, including ice sheet modelling and the detection of irrigation in Montana.
Capturing the sensitivity of wildfire spread to small perturbations in atmospheric conditions using a computational fluid dynamics model of wildfire behavior
Abstract: Atmospheric forcing and interactions between the fire and atmosphere are primary drivers of wildland fire behavior. The atmosphere is known to be a chaotic system which, although deterministic, is very sensitive to small perturbations to initial conditions. We assume that as a result of the tight coupling between fire and atmosphere, wildland fire behavior, in turn, should also be sensitive to small perturbations in atmospheric initial conditions. Observations from the RxCADRE experiment suggest that low intensity prescribed fire in particular is susceptible to small perturbations in the wind field, which can significantly alter fire spread. Here we employ a computational fluid dynamics model of coupled fire-atmosphere interactions to answer the question: How sensitive is fire behavior to small variations in atmospheric turbulence? We perform ensemble simulations of fires in homogeneous grass fuels. The only difference between ensemble members is the state of the turbulent atmosphere provided to the model throughout the simulation. We find a wide range of outcomes, with area burned ranging from 2212 m2 to 11236 m2 (>400% change), driven primarily by sensitivity to initial conditions, with non-negligible contributions from boundary condition variability during the initial 30 seconds of simulation. Our results highlight the need for ensemble simulations, especially when considering fire behavior in marginal burning conditions.
Bio: Dr. Alex Jonko is an atmospheric scientist interested in modeling wildland fire behavior and fire-climate interactions. Alex is a staff member in the Computational Earth Science Group at Los Alamos National Laboratory. She has a B.S./M.S. equivalent degree in Meteorology from the University of Bonn, Germany, and a Ph.D. in Atmospheric Science from Oregon State University. In her free time, she enjoys exploring the outdoors around Los Alamos on foot, bike, and skis, fermenting vegetables and baking sourdough bread.