Computer Science Dept., 357 EPS Building
Montana State University
Bozeman, MT 59717
Tel: (406) 994-4780
Department Head: John Paxton
Presented by: Joe Faulhaber
The Microsoft Security Intelligence Report (SIR) analyzes the threat landscape of exploits, vulnerabilities, and malware using data from Internet services and over 600 million computers worldwide. Threat awareness can help you protect your organization, software, and people.
Joe graduated from MSU with a degree in CS in 1996 and started working at Microsoft in 1998, where he helped ship the first two versions of the SharePoint product. After that, he worked on security-related projects inside the company that eventually coalesced around Microsoft Antimalware Protection technologies and the creation of the Microsoft Malware Protection Center (MMPC) in 2008. Since then, he has led telemetry-gathering efforts in the MMPC and now works on the business intelligence team, sorting through a quarter billion rows of data collected daily.
Presented by: Utkarsh Goel
MITATE is a first-of-its-kind large-scale mobile application prototyping platform that allows experimentation with custom mobile application traffic between mobile devices and cloud infrastructure endpoints. MITATE will enable developers to evaluate protocol design choices, application deployment alternatives, and component mobility mechanisms, all in live mobile networks spanning geographic areas, carriers, and devices. In this talk, Utkarsh will showcase MITATE's functionality and design. He will also present some preliminary measurements made using a prototype of the system.
Presented by: Gabe F. Rudy
Next Generation Sequencing (NGS) technology has made it affordable to sequence genes, exomes, and genomes for individual diagnostic or research purposes. The cheap and plentiful data from sequencing quickly becomes an informatics problem: processing that data so it is of use to a clinician or researcher. With the completion of the human reference genome, we have a common coordinate system in which to compare an individual's genome and find differences that we call variants or mutations. Most of these are benign or of low functional consequence, but a single "letter" substitution in an important gene can be the cause of a severe disease. In this talk, I introduce the algorithmic and data challenges of making NGS genomic data accessible. We will dive deep into some of the algorithmic solutions from a computer science perspective and discuss the remaining challenges, both bioinformatic and basic-science oriented, that must be met to enable easy interpretation of genomes for personal, clinical, and research purposes.
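The core comparison the abstract describes, once reads are aligned to the reference coordinate system, can be sketched as follows. This is a deliberately minimal illustration (real pipelines align billions of short reads and model sequencing error); the function name and toy sequences are my own, not from the talk.

```python
def find_substitutions(reference, sample):
    """Report positions where an aligned sample sequence differs from the
    reference by a single-letter substitution (a candidate variant)."""
    assert len(reference) == len(sample), "sequences must be aligned"
    variants = []
    for pos, (ref_base, alt_base) in enumerate(zip(reference, sample)):
        if ref_base != alt_base:
            variants.append((pos, ref_base, alt_base))
    return variants

# A single substitution in an important gene can be disease-causing:
print(find_substitutions("GATTACA", "GATCACA"))  # [(3, 'T', 'C')]
```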
Gabe is a 10-year veteran at Golden Helix and spends his days collaborating with a diverse set of scientists and building solutions to enable their research. He earned his Master's in Computer Science from the University of Utah before setting his sights on the fast-changing field of genomics and bioinformatics. Gabe has been involved in developing various algorithms from copy number segmentation to runs of homozygosity and rare variant association testing. Gabe blogs about the genomics field from the perspective of someone building solutions and curating genomic annotations and public databases. His series "A Hitchhiker's Guide to Next Generation Sequencing" has become quite popular as a starter guide for those entering the field.
Presented by: Dr. Yung-Hsiang Lu
Since the first laptop and the first cellular phone in the early 1980s, mobile computing has made significant progress and fundamentally changed everyone's life. This seminar will examine the trends of mobile computing. Mobile computers have many limitations, such as weight, size, and energy. Many solutions have been developed to extend the operational time of mobile computers. Some solutions integrate the convenience of mobile computers with the nearly unlimited resources of cloud servers for heavy computation, such as image processing. This seminar will describe some of these solutions and explain why this integration will accelerate. The seminar will then describe the speaker's current projects that bring image processing capabilities to mobile users.
Yung-Hsiang Lu is an associate professor in the School of Electrical and Computer Engineering at Purdue University. His research topics include mobile computing, image processing, wireless sensor networks, and autonomous robots. He is a member of the ACM Distinguished Speakers Program (2013-2016). In 2011, he was a visiting associate professor in the Department of Computer Science at the National University of Singapore. In 2008, he was one of the three recipients of Purdue's Class of 1922 Helping Student Learn Award. In 2004, he obtained a career award from the National Science Foundation for studying energy conservation by operating systems. He is a senior member of the IEEE and the ACM. He is an associate editor of ACM Transactions on Embedded Computing Systems and ACM Transactions on Design Automation of Electronic Systems. He was a past chair of the Green Multimedia Interest Group in the IEEE Multimedia Communication Technical Committee and a past vice-chair of the Low Power Technical Committee in ACM SIGDA. He has served in the program committees of dozens of conferences, symposia, and workshops. He received the Ph.D. degree from the Department of Electrical Engineering at Stanford University and BSEE from National Taiwan University.
A Poisson-Lognormal Conditional-Autoregressive Model for Multivariate Spatial Analysis of Pedestrian Crash Counts across Neighborhoods
Presented by: Yiyi Wang
In this talk, I will discuss a spatial count model for analyzing 3-year pedestrian crash counts across neighborhoods in Austin, Texas, while controlling for various land use, network, and demographic attributes (e.g., land use balance, residents' access to commercial land uses, sidewalk density, lane-mile densities [by roadway class], and population and employment densities [by type]). The model specification allows for region-specific heterogeneity, correlation across response types, and spatial autocorrelation via a Poisson-based multivariate conditional auto-regressive (CAR) framework and is estimated using Bayesian Markov chain Monte Carlo methods. Least-squares regression estimates of walk-miles traveled per zone serve as the exposure measure. Here, the Poisson-lognormal multivariate CAR model outperforms an aspatial Poisson-lognormal multivariate model and a spatial model (without cross-severity correlation), both in terms of fit and inference.
Positive spatial autocorrelation emerges across neighborhoods, as expected (due to latent heterogeneity or missing variables that trend in space, resulting in spatial clustering of crash counts). In comparison, the positive aspatial, bivariate cross correlation of severe (fatal or incapacitating) and non-severe crash rates reflects latent covariates that have impacts across severity levels but are more local in nature (like lighting conditions and local sight obstructions), along with spatially-lagged cross correlation. Results also suggest greater mixing of residences and commercial land uses is associated with greater pedestrian crash rates across different severity levels, ceteris paribus, presumably since such access produces more potential conflicts between pedestrian and vehicle movements. Interestingly, network densities show variable effects, and sidewalk provision is associated with lower severe-crash rates.
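The mean structure underlying this kind of model can be sketched in a few lines: each zone's expected crash count is its exposure scaled by a log-linear function of covariates plus a spatially correlated random effect. This is only an illustration of the likelihood's form, not the paper's Bayesian MCMC estimation; all zone values and coefficients below are invented for the example.

```python
import math
import random

random.seed(7)

def sample_poisson(lam):
    """Knuth's Poisson sampler (adequate for small rates)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical zones: exposure = walk-miles traveled, x = land-use mix,
# phi = spatially correlated random effect (taken as given here; in the
# CAR framework phi is drawn conditionally on neighboring zones' effects).
zones = [
    # (exposure, land_use_mix, phi)
    (120.0, 0.2, -0.1),
    (300.0, 0.6,  0.3),
    (80.0,  0.4,  0.0),
]
beta0, beta1 = -4.0, 1.5  # illustrative coefficients

# Poisson-lognormal mean: mu_i = exposure_i * exp(b0 + b1*x_i + phi_i)
counts = [
    sample_poisson(e * math.exp(beta0 + beta1 * x + phi))
    for e, x, phi in zones
]
print(counts)
```

Note how a larger land-use-mix covariate raises the expected count multiplicatively, matching the finding above that greater mixing of residences and commercial uses is associated with higher crash rates.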
Yiyi Wang is an assistant professor in the Civil Engineering Department at Montana State University. Her research focuses on applying advanced spatial statistical methods to analyze transportation-related data (e.g., land use, travel behavior, vehicle ownership, and traffic crashes). She is also researching innovative methods to estimate complex models while maintaining computational efficiency. She has published peer-reviewed articles in Accident Analysis & Prevention, the Journal of Transport Geography, the Journal of Transportation and Land Use, and Transportation Research Record. Her honors and awards include UT Austin's Robert Herman Endowed Scholarship in 2012 and the Helene M. Overly Memorial Scholarship issued by the Women's Transportation Seminar, Heart of Texas, in 2011.
Presented by: Guangchi Liu
Assessing multi-hop interpersonal trust in online social networks (OSNs) is critical for many social network applications, such as online marketing, but it is challenging: existing models such as subjective logic have difficulty handling complex OSN topologies, and effective validation methods are lacking. To address these challenges, we properly define, for the first time, trust propagation and combination in arbitrary OSN topologies by proposing 3VSL (Three-Valued Subjective Logic). 3VSL distinguishes the posterior and prior uncertainties in trust, as well as the difference between distorting and original opinions, and is thus able to compute multi-hop trust in arbitrary graphs. We theoretically prove this capability based on the Dirichlet distribution. Furthermore, an online survey system was implemented to collect interpersonal trust data and validate the correctness and accuracy of 3VSL in the real world. Both experimental and numerical results show that 3VSL is accurate in computing interpersonal trust in OSNs.
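For readers unfamiliar with the starting point, here is a minimal sketch of classic two-valued subjective logic, the model that 3VSL extends: an opinion is formed from positive/negative evidence (the Beta/Dirichlet mapping), and trust is propagated along a path A→B→C with Jøsang's discounting operator. This illustrates the baseline only, not 3VSL's three-valued extension; all names are illustrative.

```python
W = 2.0  # non-informative prior weight in subjective logic

def opinion(r, s):
    """Form a (belief, disbelief, uncertainty) opinion from r positive
    and s negative evidence observations."""
    total = r + s + W
    return (r / total, s / total, W / total)

def discount(ab, bc):
    """Propagate trust A->B->C: A's trust in B discounts B's opinion of C.
    The result is again a well-formed opinion (components sum to 1)."""
    b1, d1, u1 = ab
    b2, d2, u2 = bc
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

a_trusts_b = opinion(8, 2)   # mostly positive experience with B
b_trusts_c = opinion(5, 1)   # B's mostly positive experience with C
print(discount(a_trusts_b, b_trusts_c))
```

Combining such operators over arbitrary graph topologies (rather than simple paths) is precisely where the classic model runs into trouble and where 3VSL's contribution lies.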
Presented by: Alexey Kaznin
The first part of the presentation will be devoted to the Northern (Arctic) Federal University (NArFU), located in the north-west of Russia. The second part will include general information about research in the field of software engineering (especially the design stage) conducted at NArFU. In the third part, Dr. Alexey Kaznin will talk about his own research: he developed a method of systems modelling based on the Polychromatic Sets and Polychromatic Graphs approach and applied this method to the design stage of software engineering.
Dr. Alexey Kaznin is an Associate Professor at the Institute of Mathematics, Information and Space Technologies of the Northern (Arctic) Federal University named after M.V. Lomonosov in Arkhangelsk, Russia. He received his Ph.D. in technical sciences from the Moscow State Technological University "STANKIN" in 2010. His research interests include the design of information systems within software engineering, as well as the development and implementation of new approaches to information systems design.
Presented by: Liessman Sturlaugson
The continuous time Bayesian network (CTBN) has been defined to enable reasoning about complex systems in continuous time by representing the system as a factored, finite-state, continuous-time Markov process. As the CTBN is a relatively new model, many extensions that have been defined and researched with respect to static Bayesian networks have not yet been extended to CTBNs. This proposal intends to address some of these gaps. First, we intend to formally prove several complexity results with respect to CTBNs. Specifically, it is known that exact inference in CTBNs is NP-hard due to the use of a Bayesian network to set the nodes' initial states. However, we propose to prove that exact inference in CTBNs is still NP-hard even when the initial states are fully observed. Furthermore, we suspect and intend to prove that approximate inference in CTBNs, as with static Bayesian networks, is also NP-hard. Second, we propose to formalize both uncertain and negative evidence in the context of CTBNs and extend existing inference algorithms to support these new types of evidence. Third, we show how methods for sensitivity analysis of Markov processes can be applied to the CTBN while exploiting the conditional independence structure of the network. This is done through what we call "node isolation," which approximates a node's unconditional intensity matrix, analogous to marginalization in a static Bayesian network. Lastly, we intend to research how and when the node isolation process might be used in approximate inference to increase efficiency without significantly decreasing accuracy. This presentation will review preliminary progress on these goals and outline the direction of future research for the completion of this doctoral research.
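The "node isolation" idea above can be sketched concretely: a node whose intensity matrix is conditioned on a parent's state is approximated by a single unconditional matrix, averaging the conditional matrices under an assumed parent distribution. This is my own illustrative reading of the abstract's one-sentence description, with invented rates; the actual algorithm in the dissertation may differ.

```python
# Conditional intensity matrices of a 2-state node X, one per parent state.
# Row i holds the transition rates out of state i; rows sum to zero.
Q_given_parent = {
    "p0": [[-1.0, 1.0], [2.0, -2.0]],
    "p1": [[-4.0, 4.0], [0.5, -0.5]],
}
parent_marginal = {"p0": 0.7, "p1": 0.3}  # assumed parent distribution

def isolate(q_cond, parent_dist):
    """Approximate an unconditional intensity matrix as the average of
    the conditional matrices, weighted by the parent's marginal."""
    n = len(next(iter(q_cond.values())))
    q = [[0.0] * n for _ in range(n)]
    for state, weight in parent_dist.items():
        for i in range(n):
            for j in range(n):
                q[i][j] += weight * q_cond[state][i][j]
    return q

print(isolate(Q_given_parent, parent_marginal))
```

The result is a valid intensity matrix (rows still sum to zero), so the isolated node can be analyzed as an ordinary Markov process, which is what makes the approximation useful for sensitivity analysis.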
Design Pattern Decay: An Extended Taxonomy and Empirical Study of Grime and its Impact on Design Pattern Evolution
Presented by: Isaac Griffith
Design patterns are well known solutions to common problems and are extensively utilized in software development. Yet, little empirical work has been conducted to evaluate or validate the consequences that poor design decisions have on pattern realizations. This paper describes a research program to further the understanding of design pattern evolution. Specifically, we focus on design pattern decay by studying how grime, a decidedly negative consequence of software evolution, occurs. The research proposed herein furthers the exploration of design pattern decay by providing empirical evidence of grime buildup, a new grime taxonomy, and the consequences exhibited through decreased adaptability and maintainability in actual realizations of patterns in code. These notions will be supported through the development of semi-automated grime detection and refactoring research tools that will also link to existing forms of design decay such as code smells, anti-patterns, and modularity violations. An extension of this research focuses on the exploration of these notions in coupled pattern realizations.
Presented by: Sean Yaw
Smart grid technology has the opportunity to revolutionize our control over power consumption. Currently, power-requesting jobs are scheduled in an on-demand fashion; power draw begins when the consumer requests power (turns on an appliance) and ends when the job is complete (the appliance is turned off). Often such jobs may have some flexibility in their starting times (e.g., a dishwasher or electric vehicle charger). We consider the problem of scheduling power jobs so as to minimize peak demand. We first consider a general version of the problem in which the job intervals can be staggered. While the problem is known to be NP-hard (we show it is even NP-hard to approximate), we provide an optimal algorithm based on dynamic programming that is fixed-parameter tractable (FPT). For some important special cases we provide new constant-factor approximation algorithms that improve on previous results. Extensive simulation results show that our algorithms improve on existing methods.
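The problem setup can be made concrete with a tiny brute-force sketch: each flexible job has a power draw, a duration, and a window of feasible start times, and we search for start times that minimize the peak total load. This is only an illustration of the objective (the talk's contributions are DP and approximation algorithms, not enumeration), and the job values below are invented.

```python
import itertools

# Each job: (power_draw, duration, earliest_start, latest_start).
jobs = [
    (2.0, 2, 0, 2),  # e.g., a dishwasher with a flexible start
    (3.0, 2, 0, 2),  # e.g., an EV charger with a flexible start
    (1.0, 3, 0, 0),  # inflexible base load
]

def peak(starts, horizon=6):
    """Peak total demand when each job begins at its chosen start time."""
    load = [0.0] * horizon
    for (draw, dur, _, _), s in zip(jobs, starts):
        for t in range(s, s + dur):
            load[t] += draw
    return max(load)

# Enumerate all feasible start-time combinations; pick the one with the
# lowest peak. Exponential in the number of jobs, hence only a sketch.
choices = [range(e, l + 1) for (_, _, e, l) in jobs]
best = min(itertools.product(*choices), key=peak)
print(best, peak(best))
```

Staggering the two flexible jobs so they never overlap already lowers the peak here; the hardness results above say that doing this optimally at scale is what requires the DP and approximation machinery.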
This talk is a part of Sean's qualification examination.