Automated AI: Aspirations and Perspirations

Date/Time: Monday, January 31, 4:10 p.m. - 5:00 p.m. in Barnard Hall 108
Speaker: Dr. Lars Kotthoff

Abstract: AI and machine learning are ubiquitous, but AI and ML experts are not. Arguably, at least some of the tasks those scarce experts are tackling do not make the best use of their skills and expertise — manually tweaking heuristics and hyperparameter settings is tedious but relatively straightforward. Automating these tasks allows the human experts to focus on the interesting and creative work. In this talk, I will outline the aspirational goal of automating large parts of AI that are currently painstakingly done by human experts, including engineering AI software. I will describe some of the progress that has been made to date, in particular in automated machine learning. The talk will conclude with a broader outlook on how the development of automated AI has positive impacts in other fields, using Materials Science as an example.

Bio: Lars Kotthoff is an assistant professor at the University of Wyoming and previously held post-doctoral appointments at the University of British Columbia, Canada; University College Cork, Ireland; and the University of St Andrews, Scotland. His work in meta-algorithmics, automated machine learning, and applying AI to Materials Science has resulted in more than 80 publications with more than 3333 citations, supported by more than $3M in funding. He is one of the principal developers of the award-winning mlr machine learning software, which is widely used in academia and industry.


AI Security: Exploring the Vulnerabilities of Modern Deep Learning Systems and Algorithms

Date/Time: Friday, February 4, 4:10 p.m. - 5:00 p.m. via WebEx
Speaker: Adnan Siraj Rakin

Abstract: In recent years, Artificial Intelligence (AI) has been deployed in real-world applications because of its superior performance in various cognitive tasks. Such widespread deployment of AI has raised several security issues in critical applications. A recently developed threat model, the adversarial attack, poses a potent threat: by manipulating inputs and network parameters, an adversary can hijack the functionality of a deployed AI inference model in sensitive applications such as autonomous vehicles, robotics, and health care. These attacks can cause detrimental social, physical, and economic impacts. As a result, the study and analysis of attack threats and corresponding defenses have become a challenging and timely mission for both industry and academia. This talk will shed light on the emerging security challenges in AI, particularly for deep learning algorithms and systems. It will cover state-of-the-art adversarial examples, weight perturbation attacks, Trojan attack algorithms, and potential defensive solutions. In addition, it will cover the hardware vulnerabilities of computing platforms (e.g., FPGAs) and the system-level implications of these novel attack frameworks.

Bio: Adnan Siraj Rakin is a Ph.D. candidate in the Computer Engineering Department at Arizona State University (ASU), advised by Dr. Deliang Fan. He completed his B.Sc. degree in Electrical and Electronic Engineering (EEE) at the Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2016, and his Master's degree in Computer Engineering at ASU in 2021. His research interests include deep learning, computer vision, and security. He has authored or co-authored over 25 publications in top-tier IEEE/ACM journals and conferences (e.g., CVPR, ICCV, T-PAMI, USENIX Security) in the broad area of AI security.


Authoring Social Interactions Between Humans and Robots

Date/Time: Monday, February 7, 4:10 p.m. - 5:00 p.m. in Barnard 108
Speaker: David Porfirio

Abstract: Robots serve as interaction partners to humans in the workplace, at home, and for leisure activities, but designing social human-robot interactions (HRIs) is non-trivial. Challenges arise from the need to create interaction experiences that are successful with respect to both task and social outcomes. In particular, HRI developers must manage the low-level details of a robot program, such as asynchronously sensing external input while producing concurrent behaviors like speech and locomotion, while manipulating the robot's higher-level decision making to produce a natural interaction flow. A further challenge is that success criteria for HRIs differ across interaction contexts: developers must consider the end-user constraints and preferences specific to each individual context within which the robot will be deployed. In this talk, I will present my past research and plans for future work on how HRI development approaches can help mitigate these challenges. Approaches of interest include software or hardware interfaces and assistive algorithms made specifically for programming robots. I seek to answer how these development tools and techniques can support HRI developers in creating robust interaction designs by (1) filling in gaps in developer knowledge and expertise and (2) eliciting knowledge already possessed by developers and assisting with the integration of this knowledge into robot programs.

Bio: David Porfirio is a Ph.D. candidate at the University of Wisconsin–Madison. His interests lie in investigating and designing human-robot interaction development tools that make the process of programming social robots easy and approachable for experts and non-experts alike. David has received numerous fellowships and awards during his Ph.D., including the NSF Graduate Research Fellowship, the Microsoft Dissertation Grant, and a best paper award for his work on formally verifying social norms in human-robot interaction designs. Prior to his research at UW–Madison, David earned bachelor's degrees in computer science and human physiology from the University of Arizona.

