Enhancing Human Cognition in Human-Robot Collaboration

Date/Time: Monday, November 28th, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Apostolos Kalatzis
Abstract: Collaborative robots have become highly relevant to the production and
manufacturing industry since the arrival of the Fourth Industrial Revolution. Collaborative robots are
designed to execute tasks alongside the human workforce while sharing the same working
space as partners, offering greater mobility and flexibility. However, human-robot collaboration
(HRC) can be cognitively demanding, contributing to high levels of cognitive workload.
Considering the impact this collaboration has on the human worker is
essential to improve overall manufacturing system performance, improve the
trustworthiness of HRC designs, and create a better experience for the operator, with
robots designed to provide intelligent support. In the first part of this presentation, I will present a
framework for predicting cognitive workload and adapting robot speed in real time.
This framework is a step toward workload-adaptive robotics that mitigate the negative effects
of operator workload and allow for workload recovery, enabling more effective HRC.
In the second part of this talk, I will present an augmented reality (AR) user interface designed to
assist users in performing collaborative tasks with a robot. I will present the effect of the AR
interface and human-robot collaboration on task performance using both subjective and
objective measurements. Additionally, I will discuss the role of augmented reality in cognitive
workload and the use of eye-tracking data to understand the effect of the AR UI on cognitive
workload.


Brief Bio: Apostolos earned a B.Sc. in Computer Science from the University of West Attica,
Greece, and an M.Sc. in Computer Science from California State University, Los Angeles
in 2019. He is currently a Ph.D. candidate at Montana State University, where he studies
Human-Robot Interaction. His research focuses on human-artificial intelligence
collaboration and mechanisms for monitoring the cognitive state of a human working
next to a robot. He is currently working on a project to develop a digital interface with a
layer of augmented reality to improve worker safety and satisfaction. His research has
resulted in eight conference proceedings and two journal publications that are under
review.

Improving the Confidence of Machine Learning Models through Improved Software Testing Approaches

Date/Time: Monday, November 7th, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Faqeer Rehman

Abstract: Machine learning is gaining popularity in transforming and improving a number of different domains, e.g., self-driving cars, natural language processing, healthcare, manufacturing, retail, banking, and cybersecurity. However, because machine learning algorithms are computationally complex, verifying their correctness becomes challenging when an oracle is either unavailable or too expensive to apply. Software Engineering for Machine Learning (SE4ML) is an emerging research area that focuses on applying SE best practices and methods to the development, testing, operation, and maintenance of ML models. The focus of this work is the testing of ML applications, adapting traditional software testing approaches to improve confidence in them.

First, a statistical metamorphic testing technique is proposed to test Neural Network (NN)-based classifiers in a non-deterministic environment. Furthermore, a Metamorphic Relation (MR) minimization algorithm is proposed for the program under test, saving computational costs and organizational testing resources.

Second, an MR is proposed to address a data generation/labeling problem; that is, enhancing test input effectiveness by extending the prioritized test set with new tests without incurring additional labeling costs. Further, the prioritized test inputs are leveraged to propose a statistical hypothesis testing approach (for detection) and a machine learning-based approach (for prediction) of faulty behavior in two other machine learning classifiers, i.e., NN-based Intrusion Detection Systems.
Finally, to test unsupervised ML models, the metamorphic testing approach is utilized to make several contributions: i) proposing a broader set of 22 MRs for assessing the behavior of clustering algorithms under test, ii) providing a detailed analysis showing how the proposed MRs can be used to target both the verification and validation aspects of testing the programs under investigation, and iii) showing that verifying an MR using multiple criteria is more beneficial than relying on a single criterion (i.e., assigned clusters). The work presented here thus makes a significant contribution to addressing gaps found in the field and enhances the body of knowledge in the emergent SE4ML area.
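To make the metamorphic idea concrete, here is a minimal sketch (written for this announcement, not code from the work itself) of one MR for a clustering algorithm: permuting the order of the input points should not change the recovered partition. The tiny deterministic 1-D k-means below is an illustrative stand-in for a real program under test.

```python
import random

def kmeans_1d(points, k=2, iters=25):
    # Tiny deterministic 1-D k-means: seed centers at the smallest and
    # largest values so initialization does not depend on input order.
    srt = sorted(points)
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def partition(points):
    # Represent a clustering as a set of frozensets so cluster labels
    # (and within-cluster order) are ignored when comparing runs.
    return {frozenset(c) for c in kmeans_1d(points) if c}

# MR: permuting the input points must not change the recovered partition.
data = [1.0, 1.2, 0.9, 9.8, 10.1, 10.4]
shuffled = data[:]
random.Random(7).shuffle(shuffled)
assert partition(data) == partition(shuffled), "metamorphic relation violated"
```

Because no oracle specifies the "correct" clusters, the relation between the two runs substitutes for one: a violation signals a fault without ever knowing the expected output.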

 
Bio: Faqeer ur Rehman received his MS degree in computer science from the National University of Sciences & Technology (NUST), Pakistan. He started his career as a full-stack software engineer in 2011 and has more than 10 years of experience in the design and development of large-scale software solutions. He is currently pursuing his Ph.D. in computer science at Montana State University, Bozeman, MT, USA, where he works under the supervision of Dr. Clemente Izurieta. His research interests include software design and development, software quality assurance, machine learning development, and testing machine learning (ML) applications. His research in the Software Engineering for Machine Learning (SE4ML) area has resulted in four research papers published in IEEE peer-reviewed conferences, with one more paper under review at the IEEE Transactions on Software Engineering journal.

Trust in Autonomy: A Multimodal Metrics Perspective

Date/Time: Monday, October 17th, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Ranjana Mehta, Texas A&M

Abstract: Investigations into physiological or neurological correlates of trust in AI systems have increased in popularity due to the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, measurements of trustworthiness or pain points of technology, or human-in-the-loop cyber intrusion detection. This presentation will highlight the limitations and generalizability of the dynamics associated with trust across different technology domains, such as collaborative robotics, automated vehicle technologies, and cyber aids. There is a lot left to unravel about which features in the brain signal trust levels and whether brain activity can capture changes in trust perceptions (i.e., during trust-building, breach, and/or repair) and downstream behaviors (i.e., gaze trajectories and performance outcomes). Optical brain imaging and graph-theoretical computational analyses of functional brain networks will shed light on brain-behavior relationships associated with trust in shared-space human-robot collaboration. These brain-behavior processes differ by gender or operator fatigue state; as such, their implications for modeling effective human-automation trust calibration will be discussed. Finally, miscalibrated levels of trust do not always influence the operator's behavior. As such, the neural correlates associated with an operator's identification of a trust influencer and the decision to act upon the trust perception will be presented.

Bio: Ranjana Mehta is an associate professor and Mike and Sugar Barnes Career Development Faculty Fellow II in the Wm Michael Barnes '64 Department of Industrial and Systems Engineering and a Presidential Impact Fellow at Texas A&M University. She received her PhD from Virginia Tech in 2011. She is a leading expert in neuroergonomics, the study of brain and behavior at work, and her current human factors research program focuses on human health and performance augmentation in high-risk environments. Her research has attracted more than $18.5M in extramural funding from NIH, NSF, NASEM, and DARPA; resulted in over 82 journal publications, 61 conference proceedings, and several plenaries and keynotes across academic and industry venues; and been recognized through numerous honors: she was named an Ideas* Faculty Fellow by NASA and an Early Career Research Fellow by the National Academies of Sciences, Engineering, and Medicine Gulf Research Program, and has received the IISE Award for Technical Innovation in Industrial Engineering and several awards from the Human Factors and Ergonomics Society. As a citizen of the Industrial & Systems Engineering community, she has maintained a high level of external service at the national and international levels via numerous elected and appointed positions within professional societies, conferences, and standards development committees, and as an associate editor of several human factors/ergonomics journals.

Google Seminar

Date/Time: Monday, September 12th, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Philip Brittan

Abstract: Philip will introduce his background and Google Workspace and give a high-level tour through a number of the key areas of technology challenges and innovations that Workspace is focused on, followed by an open Q&A session.

Bio: Philip Brittan leads Product Strategy for Google Workspace and is VP of Engineering for Google Workspace's platform elements, including identity, administrative functions, privacy, security, compliance, search & intelligence, e-commerce, and the Workspace API. During an earlier stint at Google, Philip was Director of Product Management for Google Finance and Google Local Search. He has worked for over 30 years as an entrepreneur, CEO, CTO, business leader, innovator, board member, and adviser, primarily in the enterprise software industry. Philip was the founder and CEO of four software start-ups, was GM for the Foreign Exchange and Economics business at Bloomberg LP, and served as CTO and Head of Platform at Thomson Reuters. Philip has an AB degree in Computer Science from Harvard, where he focused on AI/NLP. He is a certified Financial Risk Manager and a PADI SCUBA Rescue Diver. He grew up on a ranch in Montana and is passionate about outdoor sports and making music.

LinkedIn: https://www.linkedin.com/in/pbrittan/
Blog: https://medium.com/scaling-peaks

Welcome Seminar

Date/Time: Monday, August 29th, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Facilitator: Dr. John Paxton


Awards Seminar

Date/Time: Monday, May 2, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. John Paxton

Abstract: At the end of every academic year, we celebrate the accomplishments of members of the Gianforte School of Computing. Join us for this year's celebration where we will reflect on our accomplishments and present awards.


CISA’s Efforts with Partners to Build Secure and Resilient Critical Infrastructure

Date/Time: Monday, May 2, 2:00 p.m. - 3:00 p.m.
Speaker: Dr. Alethea Duhon
Location: Norm Asbjornson Inspiration Hall

Note: This seminar is optional for CS graduate students; attendance will not be taken.

Abstract: The Cybersecurity and Infrastructure Security Agency (CISA) leads the national effort to understand, manage, and reduce risk to our cyber and physical infrastructure. This mission is becoming increasingly important because, in today's globally interconnected world, our critical infrastructure and American way of life face a wide array of serious cyber risks. This seminar will highlight the work at CISA, how the agency is working with partners to defend against today's threats, and how it is collaborating with industry to build more secure and resilient infrastructure for the future. We will also discuss how students and faculty can engage with CISA through partnerships or careers to help protect the homeland from cyber and physical threats. Additionally, the speaker will talk about the butterfly effect, how small actions can have life-changing impacts, through the telling of her personal and professional story.

Bio: Dr. Alethea Duhon, a member of the Senior Executive Service, is the Associate Director for Analysis, National Risk Management Center (NRMC) within the Cybersecurity and Infrastructure Security Agency (CISA) at the Department of Homeland Security. Dr. Duhon's portfolio includes leading the NRMC's efforts to take the next step in realizing the vision of the Risk Architecture (backed by the Modeling Capability Transition Environment (MCTE)); building data analysis capabilities to support the architecture via government and commercial solutions; and applying data, models, and technology to develop risk analyses and support risk management decisions around topics such as supply chain security, foreign investment risk, and systemic risk to critical infrastructure from cyberattacks as well as other significant man-made and natural-hazard risks. Prior to this assignment, Dr. Duhon was dual-hatted as the Chief Technology Officer (CTO) to the Department of the Air Force's Chief Modeling and Simulation Officer (CMSO) and Technical Director of the Air Force Agency for Modeling and Simulation (AFAMS). As the CTO, she served as the Department of the Air Force's key scientific authority in the Modeling and Simulation (M&S) field. As the AFAMS TD, she was responsible for the planning, direction, management, coordination, reporting, and evaluation of all technical aspects of AFAMS' mission and programs. Preceding that assignment, Dr. Duhon was the Senior Technical Advisor in the Office of the Under Secretary of Defense (OUSD) Policy, Defense Technology Security Administration (DTSA). In this role, she provided technical insight, advice, and analysis on international transfers of defense-related items and other matters of national security interest. Previously she served as an Acquisition Program Manager at the Assistant Secretary of the Air Force (Acquisition), Space Programs, Budget, Congressional and Program Integration Division.
She also served as the Chief, Intelligence, Surveillance, and Reconnaissance (ISR) and Special Operations Forces (SOF) Programs, and Air Force Scientific Test and Analysis Techniques (STAT) Lead for the Headquarters United States Air Force, Directorate of Test and Evaluation and was instrumental in establishing the STAT Center of Excellence. She has held previous flight test and flight dynamics positions at Air Force Test Center, Edwards AFB, CA, Northrop Grumman Corporation, Palmdale, CA and Parker Hannifin (Aerospace), Irvine, CA. Dr. Duhon received her B.S. and M.S. degrees in Aeronautical & Astronautical Engineering from Purdue University. She received her Ph.D. in Systems Engineering from The George Washington University and was a Massachusetts Institute of Technology (MIT) Seminar XXI Fellow. 


AutoML for Classification of Human Movement and Biomechanics

Date/Time: Monday, April 25, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Corey Pew

Abstract: Classification and prediction of human movement is an important research topic in biomechanics and health. Classifying the motion of users in real time is necessary for controlling robotic prosthetics, orthotics, and exoskeletons. In addition, the ability to identify abnormal gait or an adverse event, such as a fall, would provide prompt, objective data that allows researchers and clinicians to better understand a patient's needs. To achieve this goal, researchers utilize body-worn sensors (inertial measurement units (IMUs), surface electromyography, load cells, etc.) combined with machine learning classifiers to identify various walking modes and user intent. Classifiers take input from sensors by collecting raw data on body movement, which is then translated into classifications of activity such as walking, sitting, standing, stair climbing, and more. Because researchers in the biomechanics field are often not deeply familiar with best practices in machine learning, there is a tendency to inappropriately use premade classifiers and boast success through reports of high classification accuracy. The goal of our project is to create an AutoML pipeline that addresses the specific challenges of biomechanics data, helps to mitigate classifier misuse, and educates users on best practices for applying machine learning classifiers to their data. In addition, we seek to utilize AutoML to facilitate the implementation of personalized classifiers at the clinical level, to help bridge the gap between theory and application in the field.
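To illustrate the core idea of such a pipeline, the sketch below shows the bare skeleton of automated model selection: fit every candidate classifier, score each on held-out data, and report the winner rather than trusting a single premade classifier. The data, the two toy classifiers, and the single stride-frequency feature are all invented for this sketch.

```python
# Hypothetical (feature, label) pairs, e.g. stride frequency -> activity.
train = [(0.9, "walk"), (1.0, "walk"), (1.1, "walk"), (1.2, "walk"),
         (2.6, "run"), (2.8, "run")]
test = [(1.05, "walk"), (2.7, "run"), (2.9, "run")]

def majority(train):
    # Baseline: always predict the most common training label.
    labels = [y for _, y in train]
    top = max(set(labels), key=labels.count)
    return lambda x: top

def one_nn(train):
    # 1-nearest-neighbour on the single feature.
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# A minimal "AutoML" loop: fit each candidate, score it on held-out data,
# and select the best-performing model.
candidates = {"majority": majority, "1-NN": one_nn}
scores = {name: accuracy(fit(train), test) for name, fit in candidates.items()}
best = max(scores, key=scores.get)
```

A real pipeline would also automate feature extraction, hyperparameter search, and proper cross-validation, which is exactly where the misuse-mitigation and education goals above come in.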

Bio: Dr. Pew is an Assistant Professor in the Mechanical and Industrial Engineering Department at Montana State University. His research interests are focused on biomechanics and human-machine interactions with applications to the advancement of lower limb amputee technologies. This includes the design of new amputee devices as well as sensing and control schemes to facilitate two-way communication between the user and the prosthetic device. Control employs the use of body-worn sensors such as inertial measurement units and electromyography as well as the development of machine learning techniques to translate that motion information into identifiable control signals. In addition, he has interests related to running and human performance evaluation and improvement.


Assessing Forecast Performance of Empirical Crop Response Models using Precision Agriculture and On-Farm Precision Experimentation

Date/Time: Monday, April 18, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Paul Hegedus

Abstract: Precision agroecology leverages the data derived from precision agriculture technology to characterize ecological relationships between crop responses, the environment, and agronomic inputs. Decision support systems utilize models that describe these relationships to generate management recommendations for agronomic inputs, such as nitrogen fertilizer. Yet there is uncertainty in the literature about the best model forms for characterizing crop responses to agricultural inputs, likely due to the variability in crop responses between fields and across years. Seven fields with at least three years of experimentally varied nitrogen fertilizer rates were used to compare the ability of five different model types to forecast crop responses and net returns. The five model types for each field were investigated using all permutations of the three years of data, where two years were used for training and a third was held out to represent a "future" year. The five models tested were a frequentist non-linear sigmoid function, a generalized additive model, a non-linear Bayesian regression model, a Bayesian multiple linear regression model, and a random forest regression model. The random forest regression typically resulted in the most accurate forecasts of crop responses and net returns across most fields. However, in some cases the model type that produced the most accurate forecast of grain yield was not the same as the model producing the most accurate forecast of grain protein concentration. Models performed best when the training data was collected in years with weather conditions similar to the forecasted year. The results are important to developers of decision support tools seeking to minimize assumptions when selecting models used for simulating management outcomes and deriving economically and ecologically optimized nitrogen fertilizer rates.
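The evaluation scheme described above amounts to leave-one-year-out validation: train on two years, forecast the held-out "future" year. The sketch below illustrates the scheme with made-up yield data and a deliberately simple stand-in model (a per-rate mean), not any of the five model types from the study.

```python
# Hypothetical (nitrogen rate, yield) observations per year, for illustration.
years = {
    2018: [(40, 3.1), (80, 4.0), (120, 4.5)],
    2019: [(40, 2.8), (80, 3.7), (120, 4.2)],
    2020: [(40, 3.3), (80, 4.1), (120, 4.6)],
}

def fit_mean_response(train_obs):
    # Toy model: predict the average yield observed at each nitrogen rate.
    by_rate = {}
    for rate, y in train_obs:
        by_rate.setdefault(rate, []).append(y)
    means = {r: sum(v) / len(v) for r, v in by_rate.items()}
    return lambda rate: means[rate]

def mae(model, obs):
    # Mean absolute error of the forecasts against the held-out year.
    return sum(abs(model(r) - y) for r, y in obs) / len(obs)

# Leave-one-year-out: train on the other years, forecast the held-out year.
errors = {}
for held_out in years:
    train = [ob for yr, obs in years.items() if yr != held_out for ob in obs]
    errors[held_out] = mae(fit_mean_response(train), years[held_out])
```

Comparing such held-out errors across model types, rather than in-sample fit, is what lets the study rank the five models by genuine forecast skill.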

Bio: Paul Hegedus is a Ph.D. candidate in Ecology and Environmental Sciences at Montana State University (MSU) in the Land Resources and Environmental Sciences Department, where he has also completed a Certificate of Applied Statistics. Planning to graduate in 2022, Paul began his Ph.D. in 2018 after receiving a B.S. in Land Rehabilitation with a minor in Soil Science from MSU in 2017. Paul works as the Research Associate for the Agroecology Lab at MSU and as Field Trial Supervisor for the Data-Intensive Farm Management project at the University of Illinois. His graduate research is focused on data-intensive agroecological approaches that harness precision agriculture technology and data science to optimize nitrogen fertilizer management. Paul was awarded an Undergraduate Research Award from the WSSA in 2016 for research on herbicide resistance. In 2019, he was awarded a graduate fellowship from the USDA WSRE program to fund his graduate work. He is a member of the ISPA and ESA.


Understanding and Addressing Public Distrust of AI & Autonomous Systems

Date/Time: Monday, April 11, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Kristen Intemann

Abstract: According to a 2017 Pew Research poll, twice as many U.S. adults say they are "more concerned than excited" about the increased use of intelligent and autonomous systems, and nearly half say they are "as concerned as they are excited" about AI. This is worrisome given that such systems are increasingly used to develop tools for virtually every aspect of human life, including healthcare, agriculture, marketing, social media, transportation, and policing. This talk will present a variety of reasons that segments of the public have for distrusting both particular applications and artificial and intelligent systems more generally. Understanding the sources of distrust is vital for identifying strategies that would increase trust and produce responsible and fair intelligent and autonomous systems.

Bio: Dr. Kristen Intemann is a Professor of Philosophy in the Department of History & Philosophy and the Director of the Center for Science, Technology, Ethics, and Society at MSU. Her research lies at the intersection of science and ethics, looking at questions such as the responsibilities of scientists, public trust in science and technology, and public engagement with science. She teaches courses on environmental ethics, biomedical ethics, technology ethics, and philosophy of science, and received the President's Excellence in Teaching Award in 2009. She has published over 30 peer-reviewed journal articles and book chapters as well as a book, The Fight Against Doubt: How to Bridge the Gap between Scientists and the Public, published by Oxford University Press in 2018.


Composing With Code—Because an 88-Key Keyboard Just Isn’t Enough

Date/Time: Monday, April 4, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Linda Antas

Abstract: A grand piano has 88 keys and is a helpful tool for composing music, but a computer has more keys and opens up even more possibilities. This talk will explore the merging of computer science and composition for acoustic instruments, electronically generated sounds, and combinations of the two. Examples will be drawn from the presenter's works involving code-based sound synthesis, algorithmic composition, and real-time signal processing. Creating audio via real-time data-mapping of brain waves, as well as compositions based on data-mapping GPS data from trips on Montana's wonderful trails and rivers, will also be discussed.
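As a flavor of what data-mapping means in practice (this sketch is illustrative, not the presenter's code), a common sonification pattern linearly maps each data sample onto a musical range; here, invented elevation samples from a hypothetical GPS track become MIDI note numbers.

```python
# Made-up elevation samples (meters) from a hypothetical GPS track.
elevations = [1480, 1495, 1510, 1540, 1525, 1560]

lo, hi = min(elevations), max(elevations)
low_note, high_note = 48, 72  # MIDI C3..C5, an arbitrary two-octave range

def to_midi(value):
    # Linear map from the data range onto the chosen pitch range.
    span = (value - lo) / (hi - lo)
    return round(low_note + span * (high_note - low_note))

melody = [to_midi(e) for e in elevations]
```

The same mapping idea applies to brain-wave amplitudes or any other time series; the compositional craft lies in choosing which data features drive which musical parameters.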

Bio: Dr. Linda Antas is an Associate Professor of Music Technology in the School of Music at Montana State University. Linda received her DMA in computer music composition from the University of Washington in 2002. Her research interests include code- and GUI-based sound synthesis, algorithmic composition, sonification, multimedia production, and music cognition.


What if Algorithms Weren’t the Ghost in the Machine? Using Explainable AI (XAI) Methods to Turn Algorithmic User Experiences into Research Data Objects

Date/Time: Monday, March 28, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Jason Clark

Abstract: Jason A. Clark, Professor and Lead for Research Optimization, Analytics, and Data Services (ROADS) at MSU Library, will discuss his research developing software and a curriculum to support the teaching of "Algorithmic Awareness": an understanding of the rules that govern our software and shape our digital experiences. Taking inspiration from investigative data journalists, like The Markup, Jason will introduce a research module for algorithm auditing practices using code, web scraping methods, and structured data formats to uncover proprietary algorithms and turn them into research data objects for analysis. (Code is available in our #AlgorithmicAwareness GitHub repository.) The case study for the module will be the YouTube Video Recommendation Algorithm, which has come under criticism for its tactics in drawing parents' and children's attention to its videos. The goal will be to show the generic patterns, data points, and scripts one can use to analyze algorithmic user experiences and to demonstrate how code can be used to turn algorithms into datasets for analysis. In the end, attendees will be able to identify actionable steps for seeing algorithms as data objects, gain a sense of the first steps one can take to programmatically audit these systems with code, and take away investigative data techniques for applying Explainable AI methods to their own work and teaching.

Bio: Jason is the lead for Research Informatics, where he builds and supports research and data services at the Montana State University (MSU) Library. In his work, he has focused on Semantic Web development, digital library development, metadata and data modeling, web services and APIs, search engine optimization, and interface design. Before coming to MSU, Jason became interested in the intersection between libraries and technology while working as a web developer for the Division of Information Technology at the University of Wisconsin. After two years, he moved on to work as the web services librarian at Williams College Libraries. Jason holds a BA in English and Philosophy from Marquette University, an MA in English from the University of Vermont, and an MLS from the University of Wisconsin-Madison, School of Library & Information Studies.


Statistical Inference in Topological Data Analysis

Date/Time: Monday, March 21, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Jordan Schupbach

Abstract: Topological data analysis (TDA) is a fairly new interdisciplinary field that seeks to represent the shape of data using tools from algebraic topology. It can distill complex structural information present in high-dimensional datasets. However, methods for analyzing these representations under non-trivial sampling designs are few to non-existent, and as a result they are rarely employed in practice. Persistence intensity functions are a common topological descriptor used in the field of TDA. In this talk, novel methods for conducting hypothesis testing with persistence intensity functions under hierarchical sampling designs will be presented.

Bio: Jordan Schupbach is a PhD student in statistics at Montana State University co-advised by John Borkowski and John Sheppard. His primary research involves conducting statistical and predictive inference for topological data analysis (TDA) using a point process methodology. At MSU, he has been involved in conducting research using a TDA methodology for predicting progression of prostate cancer and in using Bayesian networks for conducting diagnostics and prognostics in systems health management. His general research interests include machine learning, Bayesian statistics, nonparametric statistics, spatial statistics, and functional data analysis.


Solving Industrial Optimization Problems with Quantum Annealing

Date/Time: Monday, March 7, 4:10 p.m. - 5:00 p.m. via MS TEAMS
Speaker: Dr. Alexander Feldman, PARC

Abstract: Quantum annealing machines are becoming an important tool for solving problems of industrial significance. Experimentation in areas such as circuit diagnostics and Automated Test Pattern Generation (ATPG) indicates that quantum annealers will soon outperform classical methods. Despite this, due to hardware limitations there will always be problems that are too big for the underlying hardware. One way of approaching these large problems is to preprocess and split them.

In this talk we will discuss hybrid methods for solving circuit diagnosis, ATPG, satisfiability and other problems from combinatorial optimization. The emphasis is on preprocessing and on the tool-chain that converts the input problem to a representation suitable for quantum annealing. We will show several advanced techniques for reducing the number of ancillary variables. These techniques significantly improve the performance of the hybrid optimization process.
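The representation such a tool-chain targets is typically a QUBO (Quadratic Unconstrained Binary Optimization) energy function. As a small illustration (a well-known penalty gadget, not material from the talk), the sketch below encodes a single AND gate as a QUBO and finds its ground states by brute force; an annealer would minimize the same energy function in hardware.

```python
from itertools import product

def energy(x1, x2, y):
    # Standard QUBO penalty for the gate constraint y = x1 AND x2:
    # consistent assignments get energy 0, inconsistent ones are penalized.
    return x1 * x2 - 2 * (x1 + x2) * y + 3 * y

# Brute-force "annealer": enumerate all bit assignments, keep the minima.
states = list(product((0, 1), repeat=3))
lowest = min(energy(*s) for s in states)
ground_states = [s for s in states if energy(*s) == lowest]

# Every ground state satisfies the gate, and nothing else does.
assert all(y == (x1 & x2) for x1, x2, y in ground_states)
```

Chaining such gate penalties yields circuit-level QUBOs for diagnosis and ATPG; the ancillary-variable reductions mentioned above shrink exactly these encodings to fit real annealing hardware.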

We will also discuss a class of more difficult problems related to circuit synthesis. These are in the second level of the polynomial hierarchy. Solving them will answer important questions related to quantum computing and the scalability of this promising technology.

Bio: Alexander Feldman is a researcher at PARC (formerly Xerox PARC). Before that he was a postdoc at University College Cork and a visiting researcher at Ecole Polytechnique Fédérale de Lausanne (EPFL) and Delft University of Technology. He obtained his Ph.D. (cum laude) in computer science/artificial intelligence and M.Sc. (cum laude) in parallel and distributed systems from the Delft University of Technology. He has more than 50 publications in leading conference proceedings and international journals covering topics from artificial intelligence, model-based diagnosis, computer science, and engineering. In cooperation with NASA Ames Research Center and PARC, he has co-organized the International Diagnostic Competitions (DXC). His interests cover a wide spectrum, including model-based diagnosis, automated problem solving, software and hardware design, quantum computing, logic design, design of diagnostic space applications, digital signal processing, and localization.


Exploration of Multi-Objective Optimization in the Factored Evolutionary Framework

Date/Time: Monday, February 28, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Amy Peerlinck

Abstract: Multi-Objective Optimization (MOO) looks at problems with two or more competing objectives. Such problems occur naturally in the real world. For example, many engineering design problems have to deal with competing objectives, such as cost versus quality in product design. How do we handle these competing objectives? To answer this question, meta-heuristic algorithms that find a set of Pareto optimal solutions have become a popular approach. However, as problem complexity increases, a single-population approach may not be the most efficient way to solve large-scale multi-objective optimization problems. For this reason, co-operative co-evolutionary algorithms (CCEAs) are used, which split the population into subpopulations, each optimizing over a subset of variables, so that the subsets can be optimized simultaneously. Factored Evolutionary Algorithms (FEA) extend the CCEA idea by allowing overlap between subpopulations. So far, FEA has not been applied to the field of MOO, but we believe it could be an effective alternative approach to these types of problems. In this talk, we lay out our plan to research different ways of creating subpopulations and how they influence large-scale and multi-objective optimization. We intend to look at the influence of overlapping and distinct variable decompositions, as well as objective decomposition approaches, for MOO.
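The notion of Pareto optimality underlying these methods is easy to state in code. The sketch below, with invented two-objective data (minimizing both, e.g. cost and defect rate), extracts the non-dominated set that MOO algorithms approximate.

```python
# Hypothetical candidate designs scored on two objectives to minimize,
# e.g. (cost, defect rate). Values are invented for illustration.
designs = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]

def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# The Pareto front: designs no other design dominates.
pareto_front = [d for d in designs
                if not any(dominates(other, d) for other in designs)]
```

A meta-heuristic like FEA searches for an approximation of this front in spaces far too large to enumerate, which is where subpopulation decomposition matters.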

Bio: Amy Peerlinck received her MS in computer science from Montana State University, a BA in applied linguistics from the University of Antwerp, and a BS in information science from Karel De Grote College/University. She is currently working toward her PhD in computer science at Montana State University, where she is a research assistant on a precision agriculture grant, optimizing profit for farmers through machine learning techniques.


Authoring Social Interactions Between Humans and Robots

Date/Time: Monday, February 7, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: David Porfirio

Abstract: Robots serve as interaction partners to humans in the workplace, at home, and for leisure activities, but designing social human-robot interactions (HRIs) is non-trivial. Challenges arise from the need to create interaction experiences that are successful with respect to both task and social outcomes. In particular, HRI developers must manage the low-level details of a robot program, such as asynchronously sensing external input while producing concurrent behaviors like speech and locomotion, while manipulating the robot’s higher-level decision making to produce a natural interaction flow. A further challenge includes the differing success criteria for HRIs within separate interaction contexts, in that developers must consider the end-user constraints and preferences specific to each individual context within which the robot will be deployed. In this talk, I will present my past research and plans for future work on how HRI development approaches can help mitigate these challenges. Approaches of interest include software or hardware interfaces and assistive algorithms made specifically for programming robots. I seek to answer how these development tools and techniques can support HRI developers in creating robust interaction designs by (1) filling in gaps in developer knowledge and expertise and (2) eliciting knowledge already possessed by developers and assisting with the integration of this knowledge into robot programs. 

Bio: David Porfirio is a Ph.D. candidate at the University of Wisconsin–Madison. His interests lie in investigating and designing human-robot interaction development tools that make the process of programming social robots easy and approachable for experts and non-experts alike. David has received numerous fellowships and awards during his Ph.D., including the NSF Graduate Research Fellowship, the Microsoft Dissertation Grant, and a best paper award for his work on formally verifying social norms in human-robot interaction designs. Prior to his research at UW–Madison, David earned bachelor's degrees in computer science and human physiology from the University of Arizona.


Automated AI: Aspirations and Perspirations

Date/Time: Monday, January 31, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Lars Kotthoff

Abstract: AI and machine learning are ubiquitous, but AI and ML experts are not. Arguably, at least some of the tasks those scarce experts are tackling do not make the best use of their skills and expertise — manually tweaking heuristics and hyperparameter settings is tedious but relatively straightforward. Automating these tasks allows the human experts to focus on the interesting and creative work.  In this talk, I will outline the aspirational goal of automating large parts of AI that are currently painstakingly done by human experts, including engineering AI software. I will describe some of the progress that has been made to date, in particular in automated machine learning. The talk will conclude with a broader outlook on how the development of automated AI has positive impacts in other fields, using Materials Science as an example.

Bio: Lars Kotthoff is an assistant professor at the University of Wyoming and previously held post-doctoral appointments at the University of British Columbia, Canada; University College Cork, Ireland; and the University of St Andrews, Scotland. His work in meta-algorithmics, automated machine learning, and applying AI to Materials Science has resulted in more than 80 publications with more than 3,333 citations, supported by more than $3M in funding. He is one of the principal developers of the award-winning mlr machine learning software, widely used in academia and industry.


Seminars from 2021.