For a planned list of upcoming speakers, see this Google Calendar.


The role of application layer mechanisms in content distribution for mobile Web and augmented reality

Date/Time: Monday, October 2, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Presenter: Mike Wittie

Abstract: In this talk I will present several methods for speeding up mobile Web content delivery in cellular networks. The common theme in these approaches is to give the application layer a choice in the use of network resources. Instead of using cross-layer approaches, we rely on lightweight application-layer measurement implemented and validated in the context of Akamai's CDN infrastructure. I will also discuss the implications of these results for the delivery of augmented reality content - traffic whose characteristics are both similar to and quite different from those of the mobile Web.


 

An Overview of the Numerical Intelligent Systems Laboratory

Date/Time: Monday, September 25, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Presenter: John W. Sheppard

Abstract: In this talk, we present an overview of recent and current research in the Numerical Intelligent Systems Laboratory (NISL). Specifically, we present work in probabilistic risk assessment and prognostics, factored evolutionary algorithms, and deep learning. We also touch on emerging projects in the lab and potential research problems of interest to graduate students.


Overview of the Applied Algorithms Lab

Date/Time: Monday, September 18, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Presenters: Brendan Mumey, Alan Cleary, Sam Micka, and Daniel Salinas

Abstract: Dr. Mumey and students will present an overview of several recent and ongoing projects in the Applied Algorithms Lab. Topics include job scheduling to make better use of green energy, monitoring traffic in networks, and a couple of problems from computational biology.


Overview of Computational Topology

Date/Time: Monday, September 11, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Reid 103
Presenters: Brittany Terese Fasy, Robin Belton, Sam Micka, Anna Schenfisch

Abstract: In this talk, we give a brief overview of topology and, in particular, of persistent homology.  We then give a flavor of some of the problems that we are working on, from applications to prostate cancer diagnosis and prognosis, to understanding distance measures, to analyzing road networks under realistic assumptions (such as being represented by a directed graph).
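For readers new to the area, 0-dimensional persistent homology (tracking when connected components of a point cloud merge as a distance scale grows) can be computed with a simple union-find sweep. The sketch below is illustrative background only, not material from the talk; the point set and cluster layout are invented for the example.

```python
import math
from itertools import combinations

def zero_dim_persistence(points):
    """0-dimensional persistent homology of a point cloud under the
    Vietoris-Rips filtration: every point is born at scale 0; when two
    connected components merge at scale d, one of them dies at d.
    Returns the finite death times, sorted (one component lives forever)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Process candidate edges in order of increasing length (Kruskal-style).
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # a component dies at this merge scale
    return sorted(deaths)

# Two well-separated clusters: within-cluster merges happen at small scales,
# while the single cluster-to-cluster merge happens near the gap distance.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
deaths = zero_dim_persistence(pts)
```

The three small death values record within-cluster merges, while the one large value records the scale at which the two clusters join; this separation of scales is exactly the kind of feature a persistence diagram makes visible.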


Welcome Seminar

Date/Time: Monday, August 28, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: John Paxton, Gianforte School of Computing, Montana State University

Abstract: This seminar will provide new and continuing graduate students with (1) useful information, (2) an opportunity to meet other students, staff and faculty, and (3) an opportunity to ask questions.


Automated Prediction and Curation of Bio-Ontology Terms

Date/Time: Monday, May 8, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Indika Kahanda

Abstract: A key component of Precision Medicine is taking into account individual variability in genes for disease treatment. The success of this process is highly dependent on the reliability of large-scale biological databases. Typically, these databases are manually curated by professional biocurators who extract information on biological entities from the biomedical literature in the form of standard vocabularies called bio-ontologies. However, this process is highly resource-consuming and thus leads to the incompleteness of these databases. Furthermore, the wet-lab experiments used to generate evidence on many different biological entities such as proteins are also highly resource-consuming. We identify these as two of the major bottlenecks of this pipeline and attempt to answer two questions: (1) can we develop accurate, high-throughput computational tools for predicting bio-ontology terms? and (2) can we automate the process of biocuration using natural language processing techniques? In this talk, I will describe two projects, involving automated prediction and curation of Human Phenotype Ontology (HPO) and Gene Ontology (GO) terms, that attempt to provide answers to these questions.

Bio: Dr. Indika Kahanda is an Assistant Teaching Professor in the Gianforte School of Computing at Montana State University. His research interests include bioinformatics and biomedical natural language processing. He works on the application of machine learning, data mining, and natural language processing techniques to problems involving large-scale biological data. His current work focuses on predicting mental illness categories for biomedical literature, protein function prediction, and protein-function relation extraction from biomedical literature. He received his Ph.D. in Computer Science from Colorado State University in 2016 in the area of bioinformatics, a Master of Science in Computer Engineering from Purdue University in 2010, and a Bachelor of Science in Computer Engineering from the University of Peradeniya, Sri Lanka, in 2007.


Transport Profiling for Big Data Transfer Over Dedicated Channels

Date/Time: Thursday, April 27, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Daqing Yun

Abstract: Extreme-scale scientific applications in domains such as earth science and high energy physics, spanning multiple U.S. national laboratories, are generating colossal amounts of data, now frequently termed "big data," which must be stored, managed, and moved to different geographical locations for distributed data processing and analysis. High-performance networks featuring high bandwidth and advance reservation are being developed and deployed to support such applications. However, even when a dedicated channel is provisioned, end-to-end data transfer performance still largely depends on the transport protocols used on the end-hosts, and maximizing their throughput remains very challenging, mainly because: i) their optimal operational zone is affected by the configurations and dynamics of the network, the end-hosts, and the protocol itself; ii) their default parameter settings do not always yield the best performance; and iii) application users, who are domain experts, typically do not have the knowledge necessary to choose which transport protocol to use and which parameter values to set.

We design and develop a network connection profiler named “Transport Profile Generator” (TPG) to characterize and enhance the end-to-end throughput performance of a specifically selected data transfer protocol for big data movement over high-speed dedicated network connections. TPG employs an exhaustive search-based profiling approach to sweep through the combinations of parameter settings and enables users to determine the “best” set of parameter values for the optimal data transfer performance. To improve the efficiency of transport profiling, we propose a stochastic approximation-based profiling method, referred to as FastProf, which employs the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm to accelerate the exploration of the parameter space. Furthermore, we extend the “fast” profiling approach to other transport protocols and propose a profiling optimization-based data transfer advisor to help end users determine the most effective data transfer method with the most appropriate control parameter values to achieve the best data transfer performance.
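As background for readers unfamiliar with SPSA, the core of the algorithm can be sketched in a few lines: it estimates a gradient from just two noisy objective evaluations per iteration, regardless of the number of parameters, which is what makes it attractive for fast transport profiling. The objective, gain constants, and parameter names below are hypothetical stand-ins, not the actual FastProf settings.

```python
import random

def spsa_minimize(loss, theta, a=0.5, c=0.1, iters=100, seed=0):
    """Minimize loss(theta) with Simultaneous Perturbation Stochastic
    Approximation: two loss evaluations per step, in any dimension."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k           # decaying step size
        ck = c / k ** 0.25   # decaying perturbation size
        # Random +/-1 (Bernoulli) perturbation of all coordinates at once.
        delta = [rng.choice([-1.0, 1.0]) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2 * ck)
        # Simultaneous gradient estimate: divide by each perturbation sign.
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Toy stand-in for a (negated) transfer-throughput objective over two
# hypothetical protocol parameters, with its optimum at (4, 2).
best = spsa_minimize(lambda p: (p[0] - 4) ** 2 + (p[1] - 2) ** 2, [0.0, 0.0])
```

In FastProf's setting the objective would be negated measured throughput as a function of protocol parameters such as block size and concurrency; here a smooth quadratic stands in so the sketch is self-contained.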

In this talk, I will introduce our profiling approach to explore the optimal operational zone of a data transfer protocol in a given network environment and then present extensive experimental results of both TPG and FastProf collected in various network environments including a 10 Gb/s back-to-back connection in our local testbed, 10 Gb/s emulated long-haul connections with various RTT delays at Oak Ridge National Laboratory, and 10 Gb/s physical connections with both short and long delays from Argonne National Laboratory to University of Chicago.

Bio: Daqing Yun received his Ph.D. degree in computer science from New Jersey Institute of Technology in August 2016. He is currently an assistant professor at Harrisburg University of Science and Technology. His research interests include high-performance networking, parallel and distributed computing, green networking, and big data.


High-Performance Computing and its Application in Power System Dynamic Simulation

Date/Time: Monday, April 17, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Shuangshuang Jin

Abstract: Dynamic simulation for transient stability assessment is one of the most important computational tasks affecting the secure operation of the bulk electric power system. However, modeling the system dynamics and network involves the computationally intensive time-domain solution of numerous differential and algebraic equations (DAEs), which limits the ability to operate a much-evolved power system with significant dynamic and stochastic behaviors introduced by the increasing penetration of renewable generation and the deployment of smart grid technologies.

Modern High Performance Computing (HPC) holds the promise of accelerating power system applications by parallelizing their kernel algorithms without compromising computational accuracy. The improved performance is expected to have a significant impact on online power grid dynamic security assessment, ultimately leading to better reliability and asset utilization for the power industry.

This talk will introduce the basic structure of the power system, the HPC concept, and its application to power system dynamic simulation; discuss how to utilize advanced computing techniques for real-time power grid modeling and simulation; and present research outcomes from some parallel power system dynamic simulation applications.

Bio: Dr. Shuangshuang Jin is a senior research scientist in the Electricity Infrastructure Group at Pacific Northwest National Laboratory. Her research interests include high-performance computing, parallel programming, advanced grid analytics, and computer modeling and visualization. She has authored or coauthored 30+ journal articles and conference papers in the areas of computer science, power engineering, and bioinformatics. She received her M.S. in Computer Science with a specialty in computer graphics and visualization, and her Ph.D. in Computer Science with a specialty in scientific computation, from Washington State University in 2003 and 2007, respectively.


Folds, Intersections, and Inflections: Seven Ways to Distinguish a Cylinder from a Möbius Band

Date/Time: Monday, April 10, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 103
Speaker: Tom Banchoff

Abstract: This talk develops seven different visual ways to distinguish whether a strip neighborhood of a curve on a surface is an oriented cylinder or a non-orientable Möbius band. Computer graphics illustrations will explore fold curves of projections of surfaces into planes, self-intersection curves of surfaces in three space, and a new criterion in terms of surface inflections.

Bio: Thomas Francis Banchoff is an American mathematician specializing in geometry. He is an emeritus professor at Brown University, where he began teaching in 1967. He is known for his research in differential geometry in three and four dimensions, for his efforts to develop methods of computer graphics in the early 1990s, and for his pioneering work in methods of undergraduate education utilizing online resources.

Banchoff attended the University of Notre Dame and received his Ph.D. from UC Berkeley in 1964, where he was a student of Shiing-Shen Chern. Before going to Brown he taught at Harvard University and the University of Amsterdam. In 2012 he became a fellow of the American Mathematical Society. He was president of the Mathematical Association of America from 1999 to 2000.


Anomaly Detection Through Spatio-Temporal Data Mining, with Application to Real-Time Outlying Sensor Identification

Date/Time: Monday, April 3, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Doug Galarus

Abstract: There is a need for robust solutions to the challenges of real-time spatio-temporal outlier and anomaly detection. In our dissertation, we define and demonstrate quality measures for evaluation and comparison of overlapping, real-time, spatio-temporal data providers and for assessment and optimization of data acquisition, system operation and data redistribution. Our measures are tested on real-world data and applications, and our results show the need and potential to develop our own mechanisms for outlier and anomaly detection. We then develop a representative, real-time solution for the identification of outlying sensors that far outperforms state-of-the-practice methods in terms of accuracy and is computationally efficient. When applied to a real-world, meteorological data set, we identify numerous problematic sites that otherwise have not been flagged as bad. We identify sites for which metadata is incorrect. We identify observations that have been mislabeled by provider quality control processes. And, we demonstrate that our method outperforms enhanced versions of state-of-the-practice methods for assessment of accuracy using comparable or less computation time. There are many quality-related problems with real data sets and, in the absence of an approach like ours, these problems may have largely gone unidentified. Our approach is novel for the simple but effective way that it accounts for spatial and temporal variation, and that it addresses more than just accuracy. Collectively these contributions form an overarching data-mining framework and example that can be used and extended for data-mining method development, model building and evaluation of spatio-temporal outlier and anomaly detection processes.
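The dissertation's specific method is not reproduced here, but the general idea of real-time outlying-sensor identification can be illustrated with a minimal sketch: compare each sensor's reading against a robust summary of its nearest spatial neighbors. The grid layout, readings, neighbor count, and threshold below are all invented for the example.

```python
import math
import statistics

def flag_outlying_sensors(sites, k=3, threshold=3.0):
    """Flag sensors whose reading deviates from the median of their k
    nearest spatial neighbors by more than `threshold` robust z-scores.
    `sites` is a list of (x, y, value) tuples."""
    flagged = []
    for i, (x, y, v) in enumerate(sites):
        # Sort the other sites by distance, then take the k nearest readings.
        others = sorted(
            (math.hypot(x - ox, y - oy), ov)
            for j, (ox, oy, ov) in enumerate(sites) if j != i)
        neighbors = [ov for _, ov in others[:k]]
        med = statistics.median(neighbors)
        # Median absolute deviation as a robust spread estimate.
        mad = statistics.median(abs(n - med) for n in neighbors) or 1e-9
        if abs(v - med) / (1.4826 * mad) > threshold:
            flagged.append(i)
    return flagged

# A 3x3 grid of sites reporting ~20 degrees; site 4 reports 45 degrees.
sites = [(i % 3, i // 3, 20.0 + 0.1 * i) for i in range(9)]
sites[4] = (1, 1, 45.0)
flagged = flag_outlying_sensors(sites)
```

Only site 4, whose reading disagrees sharply with its neighbors, is flagged; using the median and MAD rather than mean and standard deviation keeps the outlier from polluting the neighborhood statistics of the healthy sites around it.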

Bio: For the past 13 years, Doug Galarus has grown a nationally recognized, award-winning research program, has supervised numerous students and staff, and has overseen multiple labs at the Western Transportation Institute at Montana State University. In his “spare time”, Doug has worked towards a PhD in Computer Science. Doug is also an accomplished educator, having taught both mathematics and computer science courses at the college level. He has taught and led certification and continuing education programs and has worked on several nationally published curriculum projects, developing mathematics texts and technology for middle school and high school students. He holds an active teaching certificate for mathematics and computer science in grades 5-12. In total, Doug has 27 years of professional experience in systems engineering, information technology development, testing, implementation, management, and instruction. He has extensive experience as the project manager and technical lead for mobile data communications systems, database-driven web sites, web site design, desktop applications, kiosk development, smartphone and tablet-based development, and interactive multimedia.


Deep Neural Networks for Artificial Intelligence: Talking with Machines

Date/Time: Monday, March 27, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Larry Heck

Abstract: Neural networks have been a topic of research for many decades. However, they have only recently begun to achieve widespread adoption. In this talk, I will give my perspective on 'why now?' and highlight recent R&D on a special class of neural networks called 'deep neural networks'. In the second part of the talk, I will focus on a frontier for deep learning research: talking to machines. Natural conversational interfaces have long been viewed as a hallmark of intelligent systems, including in the often-cited Turing Test. I will give an overview of our work within Google Research on how we are leveraging deep learning to make rapid advances in this area.

Bio: Dr. Larry Heck is Director of Research of the Deep Dialogue team at Google, an advanced R&D effort behind the Google Assistant. From 2009 to 2014, he was the Chief Scientist of the Microsoft Speech products team and later a Distinguished Engineer in Microsoft Research. In 2009, he co-founded the initiative that led to Microsoft’s Cortana personal assistant. From 2005 to 2009, he was Vice President of Search & Advertising Sciences at Yahoo!, responsible for the creation, development, and deployment of the algorithms powering Yahoo! Search, Yahoo! Sponsored Search, Yahoo! Content Match, and Yahoo! display advertising. From 1998 to 2005, he was with Nuance Communications, where he served as Vice President of R&D, responsible for natural language processing, speech recognition, voice authentication, and text-to-speech synthesis technology. He began his career as a researcher at the Stanford Research Institute (1992-1998), initially in the field of acoustics and later in speech research with the Speech Technology and Research (STAR) Laboratory. Dr. Heck received his PhD in Electrical Engineering from the Georgia Institute of Technology in 1991. He is a Fellow of the IEEE and holds over 50 United States patents.


Biodiversity and Databases:  The Odd Couple?

Date/Time: Monday, March 20, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Dave Roberts

Abstract: Ecologists worldwide are concerned about the loss of biodiversity at local, regional, and global scales.  Determining the distribution and abundance of species by site is a primary activity of ecologists everywhere.  Computer scientists are concerned with the efficient access and storage of information and the design of user interfaces to facilitate that access.  Biodiversity databases offer the potential to capitalize on the work of computer scientists to address global biodiversity concerns.  This seminar is an effort to bridge the gap between ecologists and computer scientists, and possibly to recruit students and faculty with an interest in contributing to global sustainability.

Bio: Dave Roberts is a vegetation ecologist with extensive experience in the vegetation of the northern Rocky Mountains. He is a member of the Executive Committee of the Panel for the National Vegetation Classification (which seeks to document the entirety of the vegetation of the continental US) and a journeyman computer programmer with an interest in multivariate analysis, geographic information systems (GIS) and database design. Dave is currently the Head of the Ecology Department at MSU.


Flux Analysis of a Metabolic Network in Cells Stimulated by Compression

Date/Time: Monday, March 6, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Ron June and Daniel Salinas

Abstract: Cells are the fundamental units of life, and most cells use glucose to produce energy and various precursors for the machinery needed to operate. We have developed a network model of glucose metabolism and apply it to cartilage cells, which are compressed during everyday activities such as walking. Using experimental data from in vitro chondrocytes subjected to sinusoidal compression, we determine changes in metabolic flux and present future directions for this work.
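The talk's model is not reproduced here, but the standard framing for this kind of analysis, flux balance analysis, poses steady-state metabolism as a linear program: stoichiometry imposes S v = 0 while an objective (here, ATP yield) is maximized subject to flux bounds. The three-reaction toy network and all of its numbers below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Steady-state constraint S v = 0 for a single internal metabolite A:
# produced by uptake (v1), consumed by fermentation (v2) and oxidation (v3).
S = np.array([[1.0, -1.0, -1.0]])
atp_yield = np.array([0.0, 2.0, 30.0])  # ATP produced per unit flux

# Flux bounds: uptake capacity 10; oxidation capped at 3 ("oxygen limit").
bounds = [(0, 10), (0, None), (0, 3)]

# Maximize ATP production = minimize its negative.
res = linprog(-atp_yield, A_eq=S, b_eq=np.zeros(1), bounds=bounds,
              method="highs")
v = res.x  # optimal flux distribution [v1, v2, v3]
```

The solver routes as much flux as the oxygen cap allows through the high-yield oxidation reaction and sends the remaining uptake through fermentation; tightening or relaxing the bound on v3 shows how a changed constraint could shift the whole flux distribution, which is the kind of question the talk's model asks about compressed chondrocytes.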

June Bio: Ron June has longstanding research interests in osteoarthritis and biomechanics related to improving human health. At Dartmouth College, Dr. June studied Engineering Sciences with a focus on biomechanics: he developed a novel wrist protection strategy, contributed to the design and manufacture of a system for monitoring 3D head accelerations in helmeted sports, and helped to develop a finite element model to understand the biomechanics of spinal pain in rats. As a graduate student at the University of California, Davis, Dr. June studied cartilage biomechanics. Specifically, he investigated a novel mechanism of cartilage flow-independent material properties. During the course of this project, he discovered novel biomechanical phenomena and made several experimental observations that are consistent with polymer dynamics as a potential physiological mechanism of cartilage viscoelasticity. As a postdoctoral fellow, Dr. June implemented a surgical model of mouse osteoarthritis and studied protein transduction. He developed a pH-sensitive system for intracellular delivery of macromolecules and investigated protein transduction in cartilage and chondrocytes. Dr. June’s laboratory at Montana State University was completed in March 2012, and his research involves synovial joint drug delivery and mechanotransduction. Dr. June has been named a GAANN Fellow, an NIH Kirschstein Fellow, and the Montgomery Street Scholar by the ARCS Foundation. His long-term research interests lie in understanding cartilage and joint mechanobiology to develop novel therapeutic strategies for joint disease.

Salinas Bio: Daniel Salinas is a Ph.D. student at the Gianforte School of Computing, co-advised by Drs. Mumey and June. He has been in the Ph.D. program for three years, following the completion of his M.S., also from the Computer Science department, where his thesis examined minimal cuts in metabolic networks. His research interests are metabolic networks and metabolic flux analysis.


Use of a Modeling and Simulation Framework to Identify and Quantify Emergent Behavior in System of Systems Simulations

Date/Time: Monday, February 27, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Mary Ann Cummings

Abstract: This presentation describes a Modeling and Simulation (M&S) framework for building System of Systems (SoS) simulations, known as Orchestrated Simulation through Modeling (OSM).  This framework allows Discrete Event System Specification (DEVS) M&S components and output visualizations to be developed separately as plug-ins and combined to form a complete system.  Independently developed plug-ins can be added and removed as desired to dramatically change the system.  With the OSM framework, an evolutionary System of Systems can be intelligently created by a community, in which each member only needs to fully understand the pieces they develop.  With this framework, we can define a software architecture that collects and graphs SoS metrics in one location, such that these metrics can then be used to evaluate the emergent behavior of the SoS.  This is accomplished by architecting swappable and reusable Simulators and Experimental Frames so that these elements can be changed without any of the other elements, including the models, having to change.  This research involves determining whether the collected metrics enable the identification and analysis of emergent behavior among the interactions of the models (component systems).

Bio: Dr. Mary Ann Cummings earned her Ph.D. from the Naval Postgraduate School in 2015.  Her research interests include software frameworks, software reuse, modeling and simulation software, and formal methods.  Dr. Cummings works for the Naval Surface Warfare Center as a Principal Computer Scientist/GS15.


Useful Math for Data Analytics (that most students forget before their first jobs)

Date/Time: Monday, February 13, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Mark Pratt

Abstract: Machine learning tools applied to large and noisy data sets can be extremely powerful and require little preparation to use.  However, they are usually difficult to interpret.  Sometimes simpler is better.  In this talk, we will go over some general methods useful for exploring and getting first quantitative results from partially understood data sets.  The general theme will be transforming non-linear problems into linear ones that have meaningful outputs and predictable (and usually short) compute times.  The methods are simple and powerful and should be in everyone’s analysis toolbox, but in practice are not.  Many students entering an analytical profession have seen these fundamentals but have forgotten them or skip them in favor of high-powered techniques.
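As one concrete instance of the "transform non-linear to linear" theme (an assumed example, not one taken from the talk), a power law y = a*x^b becomes a straight line in log-log space and can then be fit by ordinary least squares:

```python
import numpy as np

# Synthetic data following y = 2 * x^1.5 with multiplicative noise.
rng = np.random.default_rng(0)
x = np.linspace(1, 100, 200)
y = 2.0 * x**1.5 * np.exp(rng.normal(0, 0.05, x.size))

# Taking logs turns the power law y = a * x^b into a linear model:
# log y = log a + b * log x, solvable by ordinary least squares.
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
```

The fitted exponent and prefactor come with all the machinery of linear regression (residuals, standard errors) and run in predictable time, which is exactly the kind of interpretable first result the abstract advocates.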

Bio: Dr. Mark Pratt is a physicist by training, data analyst by nature, and systems engineer by habit.  He has held a number of technical leadership positions in science and engineering spanning astronomy and astrophysics, telecommunications, lasers and photonics, instrumentation, and genomics.  Since 2006, Mark has focused on the development of low-cost DNA sequencing as Principal Engineer at Solexa and Illumina, and later on improving the accuracy of DNA sequencing applications at Personalis and 10X Genomics.  He is currently CTO of a startup still in stealth mode.  Mark received his PhD in Physics from UC Santa Barbara and has 19 issued patents and a number of publications.


Coordination and Data Analytics for Networked Systems

Date/Time: Friday, February 10, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Stacy Patterson

Abstract: Networked systems are systems composed of dynamic agents that interact over a network. Examples of networked systems range from sensor networks to autonomous robotic networks to the millions of networked components within a single robot. In the first part of this talk, I will present recent theoretical results on coordination in networked systems, i.e., how can a group of agents efficiently reach and maintain agreement.  The second part of this talk addresses the challenge of how to efficiently extract and summarize data generated by a networked system, specifically, robotic tactile skins. Finally, I will discuss how tools and results for network coordination and data analytics can be combined to develop solutions for scalable, distributed data analytics in the Internet of Things.

Bio: Stacy Patterson is the Clare Boothe Luce Assistant Professor in the Department of Computer Science at Rensselaer Polytechnic Institute. She received the MS and PhD in computer science from the University of California, Santa Barbara in 2003 and 2009, respectively.  From 2009-2011, she was a postdoctoral scholar at the Center for Control, Dynamical Systems and Computation at the University of California, Santa Barbara. From 2011-2013, she was a postdoctoral fellow in the Department of Electrical Engineering at Technion – Israel Institute of Technology. Dr. Patterson is the recipient of a Viterbi postdoctoral fellowship, the IEEE Control Systems Society Axelby Outstanding Paper Award, and an NSF CAREER award.  Her research interests include distributed systems, machine learning, sensor networks, and the Internet of Things.


Secure Geometric Search on Encrypted Spatial Data

Date/Time: Monday, February 6, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Boyang Wang

Abstract: Geometric range search is a fundamental primitive for spatial data analysis in SQL and NoSQL databases. It has extensive applications in Location-Based Services, computational geometry, and computer-aided design. Due to the dramatic increase in data size, it is necessary for companies and organizations to outsource their spatial datasets to third-party cloud services (e.g., Amazon) in order to reduce storage and query-processing costs, but with the promise of no privacy leakage to the third party. Searchable encryption is a technique for performing meaningful queries on encrypted data without revealing privacy. However, geometric range search on spatial data has not been fully investigated or supported by existing searchable encryption schemes. The main challenge is that the compute-then-compare operations required by geometric range search cannot be supported by any existing crypto primitives. In this talk, I will present my recent research in secure geometric range search over encrypted spatial data. The general approach is to adopt new representations of spatial data and transform geometric range search to avoid compute-then-compare operations, so that existing efficient crypto primitives can be integrated. I will present two designs: the first focuses on circular range search, and the second can handle arbitrary geometric range queries. The security of both schemes is formally proven under standard cryptographic assumptions. Finally, I will briefly mention some of my future research plans.

Bio: Boyang Wang is a Ph.D. Candidate in the Department of Electrical and Computer Engineering at the University of Arizona. He received his first Ph.D. degree in Cryptography in 2013 and his B.S. degree in Information Security in 2007, both from Xidian University, China. He worked for Bosch Research & Technology Center as a research intern in 2015. He was a visiting student at the University of Toronto and Utah State University. His research interests include applied cryptography, information security and privacy-preserving techniques with focuses on data security and privacy. He has published over 20 research papers in top journals and conferences, including TIFS, TDSC, TSC, TPDS, INFOCOM, CNS, ACM ASIACCS, and ICDCS.

Towards End-to-End Security and Privacy: Accountability and Data Privacy in the Life Cycle of Big Data

Date/Time: Monday, January 30, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Taeho Jung

Abstract: The advent of big data has given birth to numerous innovative life-enhancing applications, but big data is often called a double-edged sword due to the increased privacy and security threats it brings. Such threats, if unaddressed, will become deadly barriers to achieving the opportunities and success anticipated in the big data industry, because they may arise at any stage of the big data life cycle.

In this talk, I will describe my research which addressed various privacy and security issues in the big data life cycle: acquisition, storage, provisioning, and consumption. More specifically, I will present how to make large-scale data trading accountable against dishonest users for the provisioning phase of big data. Subsequently, I will briefly present how various types of data can be protected in their acquisition and consumption phases of the life cycle, and finally I will introduce the theoretic foundations of the presented research.

Bio: Taeho Jung is a Ph.D. candidate in Computer Science at Illinois Institute of Technology, advised by Professor Xiang-Yang Li. His research area, in general, includes privacy and security issues in data mining and provisioning in the big data life cycle. One of his papers won a best paper award (IEEE IPCCC 2014), and two others were selected as a best paper candidate (ACM MobiHoc 2014) and a best paper award runner-up (BigCom 2015), respectively. He has served on the TPCs of many international conferences, including IEEE DCOSS 2016, IEEE MSN 2016, IEEE IPCCC 2016, and BigCom 2016.


Intelligent tracking of moving objects by cracking the neural code for visual motion

Date/Time: Friday, January 27, 2017 from 4:10 p.m. - 5:00 p.m.
Location: Barnard Hall 108
Speaker: Neda Nategh

Abstract: A particularly difficult aspect of object tracking in artificial vision systems arises when the observer itself is moving, producing a confounded motion pattern that must be disentangled to reliably signal the object's motion. While machine vision systems have improved many-fold in their capabilities, they are still challenged by a trade-off among runtime efficiency, accuracy, robustness, and flexibility, especially in handling real-world complexities such as object occlusions, multiple moving objects, and varying scene statistics. Our biological visual system, on the other hand, is capable of performing similar motion detection and discrimination tasks reliably during every moment that we are awake, compensating for constant eye movements of different sorts. Employing a data-driven, statistical model-based approach, we are able to characterize the time-varying information conveyed by retinal and cortical spike responses during an eye movement task (encoding) and to understand a readout mechanism by which downstream neurons can extract relevant motion information in the scene (decoding), all in a statistically optimal computational framework. Moreover, by employing deep convolutional neural networks (CNNs) whose computational units and connectivity are set to mimic the biophysical properties of our statistically optimal model, we will be able to generalize to real-world motion stimuli. This model-based approach to understanding the neural code of visual motion may ultimately lead to intelligent motion computing schemes that advance state-of-the-art machine vision from a moving platform, including autonomous vehicles, mobile robotic systems, and assistive technology for visually impaired people.

Bio: Neda Nategh has been an Assistant Professor of Electrical and Computer Engineering at Montana State University since January 2014. She obtained her Ph.D. in electrical engineering, her M.Sc. in electrical engineering, and her M.Sc. in statistics, all from Stanford University, and her B.Sc. in electrical engineering from Sharif University of Technology. She also holds a certificate in Biophysics and Computation in Neurons and Networks from the Neuroscience Institute at Princeton University. She conducts research in the areas of signal, image, and information processing and statistical machine learning, with particular emphasis on computational neuroscience and biological and machine vision. She has been granted one US patent from her research internship in the Camera Algorithm group at Apple Inc., CA.

Seminars from 2016.