Introduction to the Cascading Impact of Lag

Date/Time: Monday, November 24, 2014 from 4:10 p.m. - 5:00 p.m.

Location: EPS 108

Presenter: Eben Howard

Abstract:

One of the primary causes of player frustration in online games is lag: a delay in the exchange of game state data. We show that current lag mitigation techniques are incomplete. In an online cooperative game, a change in the quality of experience (QoE) of one player due to that player's lag can have a cascading effect on the QoE of the other players. Our results are obtained through a novel experimental framework based on previous QoE and online game research.

Based on our observations, we recommend changes to the current approach to lag mitigation in cooperative games. We argue that the primary objective of lag mitigation should not be to reduce the lag of all players. Instead, the primary objective should be to reduce the lag of the most lagged player within each cooperative group.
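
As a toy illustration of that recommendation (hypothetical numbers, and a deliberately simplified model in which shared game state advances no faster than the slowest member's updates):

    # Hypothetical sketch: in a tightly coupled cooperative game, shared
    # state cannot advance faster than the most lagged player's updates,
    # so the group's effective lag is governed by the maximum, not the mean.
    lags_ms = {"alice": 40, "bob": 55, "carol": 310}  # made-up round-trip lags

    mean_lag = sum(lags_ms.values()) / len(lags_ms)   # 135 ms
    group_lag = max(lags_ms.values())                 # 310 ms

    # Shaving 20 ms off every player barely moves the group's experience,
    # while halving Carol's lag alone cuts the effective group lag by 155 ms.
    print(f"mean lag: {mean_lag:.0f} ms, effective group lag: {group_lag} ms")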

Montana High Tech Business Alliance

Date/Time: Monday, November 17, 2014 from 4:10 p.m. - 5:00 p.m.
Location: EPS 108
Presenter: Greg Gianforte

Abstract: Launched in April 2014, the Montana High Tech Business Alliance is a statewide membership organization focused on creating more high tech jobs in Montana. The Alliance currently has more than 100 member firms.

Short Bio: Greg Gianforte is an American engineer who has started and grown five successful computer software businesses. His most recent business, RightNow Technologies, located in Bozeman, Montana, was purchased by Oracle in 2011. He now dedicates his time to mentoring entrepreneurs, serving on boards, working to improve the Montana economy, and philanthropic work.

Risky Gadgets to the Rescue: Ubicomp Devices and Social Car Concepts to Foster Safer Driving Behaviours

Date/Time: Monday, September 22, 2014 from 4:10 p.m. - 5:00 p.m.
Location: EPS 108
Presenter: Dr. Ronald Schroeter, Queensland University of Technology (Australia)

Abstract: Digital information that is place- and time-specific is increasingly becoming available about all aspects of the urban landscape. People (cf. the Social Web), places (cf. the Geo Web), and physical objects (cf. ubiquitous computing, the Internet of Things) are increasingly infused with sensors and actuators, and tagged with a wealth of digital information. Urban informatics research explores these emerging digital layers of the city at the intersection of people, place, and technology. However, little is known about the challenges and new opportunities that these digital layers may offer to road users driving through today’s megacities. This talk argues that this aspect is worth exploring in the pursuit of making cars both safer and more enjoyable. As an example, the talk presents findings of a guided ideation (idea creation) workshop with Urban Informatics research experts within a simulated environment. The analysis of the workshop in close collaboration with road safety experts led to the currently ongoing project "Risky Gadgets to the Rescue". This project aims to explore innovative ways to reduce young males' over-representation in road crashes by specifically investigating their proneness to boredom, a hardwired personality factor that can lead to risky driving. The talk aims to invite researchers to join our efforts in the design of innovative technologies and applications that make safe driving more pleasurable and stimulating for young males, e.g., by applying novel gamification techniques.


Short Bio: Dr. Ronald Schroeter completed his PhD at the Urban Informatics Research Lab, QUT, in 2011. His research investigated forms of in-place digital augmentation, which refer to the ability to enhance the experiences of citizens in physical spaces through digital technologies that are directly accessible within that space. In particular, he developed mobile phone and public screen applications for public civic engagement of local citizens, especially young citizens. As part of his study, he developed “Discussions In Space,” a fun, fast-paced, short-text platform for collective expression and public discourse, for which he also received the 2012 National iAwards Merit in Research & Development. He is currently a Postdoctoral Research Fellow at the Centre for Accident Research and Road Safety – Queensland (CARRS-Q), QUT. He embraces multidisciplinary research, but his current main research interest is in “The Social Car”, where he seeks to translate Urban Informatics and Human Computer Interaction concepts into innovative information systems and interfaces for cars that promote road safety. In 2013, the Australian Research Council (ARC) awarded him a Discovery Early Career Researcher Award (DECRA) to work on his 3-year project “Risky Gadgets to the Rescue: Designing Personal Ubicomp Devices to Foster Safer Driving Behaviours in Young Males”.


Welcome Seminar

Date/Time: Monday, August 25, 2014 from 4:10 p.m. - 5:00 p.m.
Location: EPS 108
Facilitator: John Paxton

Abstract: An opportunity for CS graduate students and faculty to meet.  Useful information about the department will be presented.


Extensions to Modeling and Inference in Continuous Time Bayesian Networks

Presenter: Liessman Sturlaugson

Abstract: The continuous time Bayesian network (CTBN) enables reasoning about complex systems in continuous time by representing a system as a factored, finite-state, continuous-time Markov process. The dynamics of the CTBN are described by each node's conditional intensity matrices, determined by the states of the parents in the network. As the CTBN is a relatively new model, many extensions that have been defined with respect to Bayesian networks (BNs) have not yet been extended to CTBNs. This thesis presents five novel extensions to CTBN modeling and inference.

First, we prove several complexity results specific to CTBNs. It is known that exact inference in CTBNs is NP-hard due to the use of a BN as the initial distribution. We prove that exact inference in CTBNs is still NP-hard, even when the initial states are given, and prove that approximate inference in CTBNs, as with BNs, is also NP-hard. Second, we formalize performance functions for the CTBN and show how they can be factored in the same way as the network, even when the performance functions are defined with respect to interactions between multiple nodes. Performance functions extend the model, allowing it to represent complex, user-specified functions of the behaviors of the system. Third, we present a novel method for node marginalization called "node isolation" that approximates a set of conditional intensity matrices with a single unconditional intensity matrix. The method outperforms previous node marginalization techniques in all of our experiments by better describing the long-term behavior of the marginalized nodes. Fourth, using the node isolation method we developed, we show how methods for sensitivity analysis of Markov processes can be applied to the CTBN while exploiting the conditional independence structure of the network. This enables efficient sensitivity analysis to be performed on our CTBN performance functions. Fifth, we formalize both uncertain and negative types of evidence in the context of CTBNs and extend existing inference algorithms to be able to support all combinations of evidence types. We show that these extensions make the CTBN more powerful, versatile, and applicable to real-world domains.
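
As a rough illustration of the conditional intensity matrices described above (not code from the thesis; the states, rates, and parent structure below are invented for exposition):

    import random

    # Hypothetical two-state node X with one binary parent U. For each
    # parent state there is a conditional intensity matrix (CIM):
    # off-diagonal entries are transition rates, and each diagonal entry
    # is the negated sum of its row.
    cims = {
        0: [[-0.5,  0.5],   # parent U = 0: X leaves state 0 at rate 0.5
            [ 1.0, -1.0]],
        1: [[-2.0,  2.0],   # parent U = 1: X switches much faster
            [ 3.0, -3.0]],
    }

    def sample_sojourn(x_state, u_state):
        """Time X remains in x_state, exponentially distributed with
        rate -q_ii taken from the parent-conditioned CIM."""
        rate = -cims[u_state][x_state][x_state]
        return random.expovariate(rate)

    print(sample_sojourn(0, 1))  # expected sojourn is 1/2.0 = 0.5 time units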

Date: Tuesday, Apr 22 2014 - 9:00 am
Location: EPS 126


Inference and Learning in Bayesian Networks Using Overlapping Swarm Intelligence

Presenter: Nathan Fortier

Abstract: While Bayesian networks provide a useful tool for reasoning under uncertainty, learning the structure of these networks and performing inference over them are NP-hard. We propose several heuristic algorithms to address the problems of inference, structure learning, and parameter estimation in Bayesian networks. The proposed algorithms are based on Overlapping Swarm Intelligence, a modification of particle swarm optimization in which a problem is broken into overlapping subproblems and a swarm is assigned to each subproblem. We describe how the problems of inference, structure learning, and parameter estimation can be broken into subproblems, and provide communication and competition mechanisms that allow swarms to share information about learned solutions.

For the problems of full and partial abductive inference, a swarm is assigned to each relevant node in the network. Each swarm learns the relevant state assignments associated with the Markov blanket for its corresponding node. Swarms with overlapping Markov blankets compete for inclusion in the final solution.

For the problem of structure learning, a swarm is associated with each node in the network. Each swarm learns the parents and children of its associated node. Swarms that learn conflicting substructures compete for inclusion in the final network structure.

In our approach to parameter estimation, a swarm is associated with each node in the network that corresponds to either a latent variable or a child of a latent variable. Each node's corresponding swarm learns the parameters associated with that node's Markov blanket. Swarms with overlapping Markov blankets compete for inclusion in the final parameter set.
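
A minimal sketch of the decomposition idea for structure learning, assuming invented node names and parent sets; the actual communication and competition mechanisms described in the talk are richer:

    # One swarm per node; each swarm searches over its node's candidate
    # parent set. Neighborhoods overlap, so two swarms can propose
    # conflicting substructures (e.g. both A -> B and B -> A).
    swarm_best = {
        "A": set(),        # swarm for A currently proposes no parents
        "B": {"A"},        # swarm for B proposes A as a parent
        "C": {"A", "B"},   # swarm for C proposes A and B as parents
    }

    def conflicting(n1, n2):
        """Substructures conflict when each node claims the other as a
        parent, which would create a cycle in the final network; such
        swarms must compete for inclusion in the final structure."""
        return n1 in swarm_best[n2] and n2 in swarm_best[n1]

    pairs = [("A", "B"), ("A", "C"), ("B", "C")]
    print([p for p in pairs if conflicting(*p)])  # no conflicts here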

Date: Thursday, Apr 17 2014 - 3:00 am
Location: EPS 347


A Replication Case Study to Measure Architectural Quality Using Structure and History Analysis

Presenter: Derek Reimanis

Abstract: Replications play a pivotal role in Empirical Software Engineering, and although significant progress has been made in terms of vernacular and classification, the majority of this corpus refers to formal experiments. Herein, we present a replication case study where structural information about a system is used in conjunction with bug-related change frequencies to measure and predict architecture quality. We identified dependencies between components that change together even though they belong to different architectural modules, and, as a consequence, are more prone to bugs. We validated these dependencies by presenting our results back to the developers. The developers did not identify any of these dependencies as unexpected, but rather considered them architectural necessities. This replication study adds to the knowledge base of CLIO (a tool that detects architectural degradations) by incorporating a new programming language (C++) and by externally replicating a previous case study on a separate commercial code base.

Date: Monday, Apr 14 2014 - 4:10 pm
Location: EPS 108


Impacts of Modular Grime on Technical Debt

Presenter: Melissa Dale

Abstract: The purpose of this research is to study the effects of code changes that violate a design pattern’s intended role on the quality of a project. We use technical debt as an overarching surrogate measure of quality. Technical debt is a metaphor borrowed from the financial domain used to describe the potential cost necessary to refactor a software system to agreed-upon coding and design standards. Previous research by Izurieta and Bieman defined violations in the context of design patterns as grime. Because technical debt can ultimately lead to the downfall of a project, it is important to understand if and how grime may contribute to a system’s technical debt.

To investigate this problem, we have developed a grime injector to model grime growth on Java projects. We use SonarQube’s technical debt software to compare the technical debt scores of six different types of modular grime previously defined by Schanz and Izurieta. These six types can be classified along three major dimensions: strength, scope, and direction.
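
An illustrative enumeration of how six types can fall out of three dimensions, assuming (per our reading of the Schanz and Izurieta taxonomy) that direction distinguishes only external grime:

    from itertools import product

    # Strength is persistent or temporary; scope is internal or external;
    # direction (afferent/efferent) applies only to external grime,
    # giving 2 * (1 + 2) = 6 types rather than 8.
    strengths = ["persistent", "temporary"]

    grime_types = [(s, "internal", "-") for s in strengths]
    grime_types += [(s, "external", d)
                    for s, d in product(strengths, ["afferent", "efferent"])]

    for t in grime_types:
        print(t)  # six (strength, scope, direction) combinations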

We find that the strength dimension is the most important contributor to the quality of a design, and that temporary grime results in higher technical debt scores than persistent grime. This knowledge can inform design decisions that help manage a project’s technical debt.

Date: Tuesday, Apr 8 2014 - 11:10 am


Defending Data from Digital Disasters: Engineering Next Generation Systems for Emerging Problems in Data Science

Presenter: Eric Rozier

Abstract: Of the data that exists in the world, 90% was created in the last two years. Last year over 2,837 exabytes of data were produced, representing an increase of 230% from 2010. By next year this total is expected to increase to 8,591 exabytes, reaching 40,026 exabytes by 2020. Our ability to create data has already exceeded our ability to store it, with data production exceeding storage capacity for the first time in 2007. Our ability to analyze data has also lagged behind the deluge of digital information, with estimates putting the percent of data analyzed at less than 1%, while an estimated 23% of data created would be useful if analyzed. Reliability, security, privacy, and confidentiality needs are outpacing our abilities as well, with only 19% of data protected. For these reasons, we need systems that are capable not only of storing the raw data, but of doing so in a trustworthy manner while enabling state-of-the-art analytics.

In this talk we will explore problems in data science applications to medicine, climate science, natural history, and geography, and outline the reliability, availability, security, and analytics challenges to data in these domains. We will present novel, intelligent systems designed to combat these issues by using machine learning to apply a unique software-defined approach to data center provisioning, with dynamic architectures and on-the-fly reconfigurable middleware layers that address emergent problems in complex systems. Specifically, we will address issues of data dependence relationships and the threat they pose to long-term archival stores and curation, as well as techniques to protect them using novel theoretical constructs of second-class data and shadow syndromes. We will discuss the growing problem presented by the exponential explosion of both system and scientific metadata, and illustrate a novel approach to metadata prediction, sorting, and storage that allows systems to better scale to meet growing data needs. We will explore problems in cloud-based access to private records, illustrating the pitfalls of trusting provider claims with real-world audits conducted by our lab that successfully extracted synthetic patient data through inadvertent side-channels, and demonstrate novel search techniques that allow regular-expression-based search over encrypted data while placing no trust in the cloud provider, ensuring zero information leakage through side-channels. Finally, we will conclude by discussing future work in systems engineering for Big Data, outlining current challenges, and noting pitfalls of next generation systems for data science.

Bio: Dr. Eric Rozier is an Assistant Professor of Electrical and Computer Engineering, head of the Trustworthy Systems Engineering Laboratory, and director of the Fortinet Security Laboratory at the University of Miami in Coral Gables, Florida. His research focuses on the intersection of problems in systems engineering with Big Data, Cloud Computing, and issues of reliability, performability, availability, security, and privacy. Prior to joining Miami, Dr. Rozier served as a research scientist at NASA Langley Research Center and the National Center for Supercomputing Applications, and as a Fellow at IBM Almaden Research Center. His work in Big Data and systems engineering has been the subject of numerous awards, including being named a Frontiers of Engineering Education Faculty Member by the National Academy of Engineering in 2013 and an Eric & Wendy Schmidt Data Science for Social Good Faculty Fellow at the University of Chicago for Summer 2014.

Dr. Rozier completed his PhD in Computer Science at the University of Illinois at Urbana-Champaign, where he served as an IBM Doctoral Fellow and worked on issues of reliability and fault-tolerance of the Blue Waters supercomputer with the Information Trust Institute. Dr. Rozier is a long-time member of the IEEE and ACM, and a member of the AIAA Intelligent Systems Technical Committee, where he serves with the Publications and the Professional Development, Education, and Outreach subcommittees.

Date: Monday, Apr 7 2014 - 4:10 pm
Location: EPS 108


Assessing the Fitness of Technical Debt as an Indicator of Model Quality

Presenter: Shamim Hafiz

Abstract: Contemporary Software Engineering is significantly aided by the creation of models. Therefore, early evaluation of the quality of a software system can be done by assessing corresponding models. Often, design and development teams compromise the quality of the implementation to release a product early, or simply as a result of “bad” practice. The undermining of software by such compromises is referred to as Technical Debt, which needs to be addressed through refactoring. This paper presents a survey of qualitative and quantitative analyses of Technical Debt by exploring proposed frameworks and case studies. The paper also summarizes model-driven software refactoring and quality assessment of object-oriented models. Further, the author proposes scope for future work in developing frameworks to assess the quality of models in terms of Technical Debt.

Date: Monday, Mar 31 2014 - 4:10 pm
Location: EPS 108


Advances in Linear Temporal Logic Translation: Ensuring the Safety of Safety-Critical Aeronautics Systems

Presenter: Kristin Y. Rozier

Abstract: Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware. Techniques such as requirements-based design and model checking have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, logic designs, medical devices, and other functions that ensure human safety. Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. These specifications can then be used to ensure system safety, from design-time to run-time.

In 2007, we established Linear Temporal Logic (LTL) satisfiability checking as a sanity check: each system requirement, its negation, and the set of all requirements should be checked for satisfiability before being utilized for other tasks, such as property-based system design or system verification via model checking. Our extensive experimental evaluation proved that the symbolic approach for LTL satisfiability checking is superior. However, the performance of the symbolic approach critically depends on the encoding of the formula. Since 1994, there had been essentially no new progress in encoding LTL formulas for this type of analysis. We introduced a set of 30 encodings, demonstrating that a portfolio approach utilizing these encodings translates to significant, sometimes exponential, improvement over the standard encoding for symbolic LTL satisfiability checking. We highlight major impacts of this work in aeronautics. We use these formal verification techniques to ensure there are no potentially catastrophic design flaws remaining in the design of the next Air Traffic Control system before the next stage of production. Also, our run-time monitoring of LTL safety specifications can enable a fire-fighting Unmanned Aerial System to fly!
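
The sanity check described above is easy to sketch; here `is_satisfiable` is a hypothetical oracle standing in for a symbolic model checker, and the `!`/`&` formula syntax is purely illustrative:

    def sanity_check(requirements, is_satisfiable):
        """Sketch of the LTL satisfiability sanity check: vet each
        requirement, its negation, and the whole requirement set."""
        problems = []
        for phi in requirements:
            if not is_satisfiable(phi):
                # Unsatisfiable requirement: no system can ever meet it.
                problems.append(f"unsatisfiable: {phi}")
            elif not is_satisfiable(f"!({phi})"):
                # Negation unsatisfiable: phi holds of every system, so it
                # constrains nothing and likely masks a specification error.
                problems.append(f"vacuously valid: {phi}")
        conjunction = " & ".join(f"({p})" for p in requirements)
        if requirements and not is_satisfiable(conjunction):
            # The requirements conflict: no system can satisfy all of them.
            problems.append("requirement set is jointly unsatisfiable")
        return problems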

Bio: Dr. Kristin Y. Rozier holds a position as a Research Computer Scientist in the Intelligent Systems Division of NASA Ames Research Center and a courtesy appointment at Rice University. She earned a Ph.D. from Rice University in 2012 and B.S. and M.S. degrees from The College of William and Mary in 2000 and 2001, all in theoretical computer science. Dr. Rozier's research focuses on automated techniques for the formal specification, validation, and verification of safety critical systems. Her primary research interests include: design-time checking of system logic and system requirements; specification debugging techniques and theory; and safety and security analysis. Her applications of computer science theory in the aeronautics domain earned her the American Helicopter Society's Howard Hughes Award, the American Institute of Aeronautics and Astronautics Intelligent Systems Distinguished Service Award, and the Women in Aerospace Inaugural Initiative-Inspiration-Impact Award. She has also earned the Lockheed Martin Space Operations Lightning Award, the NASA Group Achievement Award, and Senior Membership to IEEE, AIAA, and SWE. Dr. Rozier serves on the AIAA Intelligent Systems Technical Committee, where she chairs both the Publications and the Professional Development, Education, and Outreach (PDEO) subcommittees. She has served on the NASA Formal Methods Symposium Steering Committee since working to found that conference in 2008 and is serving as PC chair for the second time this year.

Date: Friday, Mar 28 2014 - 4:00 pm
Location: Roberts Hall 102


Formal Foundations of Clustering

Presenter: Margareta Ackerman

Abstract: Clustering is a central unsupervised learning task with a wide variety of applications. However, in spite of its popularity, it lacks a unified theoretical foundation. Recently, there has been work aimed at developing such a theory. We discuss recent advances in clustering theory, starting with results on clustering axioms. We will then discuss a new framework for addressing one of the most prominent problems in the field, the selection of a clustering algorithm for a specific task. The framework rests on the identification of central properties capturing the input-output behaviour of clustering paradigms. We present several results in this direction, including a characterization of linkage-based clustering methods.
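
As one concrete member of the linkage-based family that the talk characterizes, a compact (purely illustrative) single-linkage sketch on one-dimensional points:

    # Illustrative single-linkage agglomerative clustering: repeatedly
    # merge the two clusters whose closest members are nearest.
    def single_linkage(points, k):
        clusters = [[p] for p in points]
        while len(clusters) > k:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    # Single linkage: distance between closest members.
                    d = min(abs(a - b)
                            for a in clusters[i] for b in clusters[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
            _, i, j = best
            clusters[i] += clusters.pop(j)
        return clusters

    print(single_linkage([1.0, 1.2, 5.0, 5.1, 9.0], 2))
    # -> [[1.0, 1.2, 5.0, 5.1], [9.0]]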

Bio: Dr. Margareta Ackerman is currently a Postdoctoral Fellow at UC San Diego; she received her PhD in Computer Science from the University of Waterloo under the supervision of Professor Shai Ben-David. Her research interests span Machine Learning, Information Retrieval, Game Theory, Automata Theory, and Bioinformatics. The focus of her work is developing theoretical foundations of clustering that are independent of any specific algorithm or objective function. Her work provides a consistent set of axioms for clustering and a theoretical study of clusterability (NIPS '08 and AISTATS '09). Her recent work focuses on providing guidelines for selecting clustering algorithms based on their input-output behaviour (AISTATS '13, AAAI '12, IJCAI '11, NIPS '10, COLT '10).

Date: Monday, Mar 17 2014 - 4:10 pm
Location: EPS 108


Ontologies in a data-driven world

Presenter: Pascal Hitzler

Abstract: Despite numerous applications in specific scenarios, the use of ontologies for data organization, management, and integration is severely limited when faced with high volumes of heterogeneous data. Traditional ontology-based approaches using large, monolithic ontologies suffer from the drawbacks of strong ontological commitments, which force perspectives on the data that may be at odds with the underlying intentions and perspectives of the data providers. In this presentation, we discuss ways forward in ontology modeling and use for high-volume heterogeneous data. In particular, we discuss the importance of combining data analytics with knowledge representation, and the use of ontology design patterns for flexible data organization and integration, including a current use case in oceanography.

Bio: Pascal Hitzler is Associate Professor at the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio, U.S.A. From 2004 to 2009, he was Akademischer Rat at the Institute for Applied Informatics and Formal Description Methods (AIFB) at the University of Karlsruhe in Germany, and from 2001 to 2004 he was a postdoctoral researcher at the Artificial Intelligence institute at TU Dresden in Germany. In 2001 he obtained a PhD in Mathematics from the National University of Ireland, University College Cork, and in 1998 a Diplom (Master equivalent) in Mathematics from the University of Tübingen in Germany. His research record lists over 250 publications in such diverse areas as semantic web, neural-symbolic integration, knowledge representation and reasoning, machine learning, denotational semantics, and set-theoretic topology. He is Editor-in-chief of the Semantic Web journal by IOS Press, and of the IOS Press book series Studies on the Semantic Web. He is co-author of the W3C Recommendation OWL 2 Primer, and of the book Foundations of Semantic Web Technologies (CRC Press, 2010), which was named one of seven Outstanding Academic Titles 2010 in Information and Computer Science by the American Library Association's Choice Magazine, and has translations into German and Chinese. He is on the editorial board of several journals and book series and on the steering committee of the RR conference series, and he frequently acts as conference chair in various functions, including e.g. General Chair (RR2012), Program Chair (AIMSA2014, ODBASE2011, RR2010), Track Chair (ESWC2013, ESWC2011, ISWC2010), Workshop Chair (K-Cap2013), Sponsor Chair (ISWC2013, RR2009, ESWC2009). For more information, see http://www.pascal-hitzler.de.

Date: Monday, Mar 3 2014 - 4:10 pm
Location: EPS 108


Intelligent User Interfaces: Where HCI Meets AI

Presenter: Bowen Hui

Abstract: Software development has historically adopted a "one-size-fits-all" approach in which applications are designed with a single target user group in mind, rather than tailoring the software features to the needs of specific users. The ability to customize software has become increasingly important as users are faced with larger, more complex software. To tackle this problem, my work adopts an intelligent agent's perspective, in which the system treats software customization as a decision-theoretic planning problem under uncertainty about the user. In my dissertation, I proposed a methodological framework for developing intelligent software interaction and assistance. Using this framework, I will highlight the interdisciplinary nature of the problem and present details of a case study to illustrate the different aspects of AI and HCI involved. Current projects leveraging these ideas in the areas of education and digital youth will also be presented.
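
A toy expected-utility sketch of that decision-theoretic framing; the user types, belief, and utilities below are invented, and real systems maintain far richer user models:

    # Choose whether to offer help given a belief over the hidden user type.
    belief = {"novice": 0.7, "expert": 0.3}   # P(user type | observations)

    # utility[action][user_type]: help aids novices but annoys experts.
    utility = {
        "offer_help": {"novice": 8.0, "expert": -3.0},
        "stay_quiet": {"novice": 0.0, "expert": 1.0},
    }

    def expected_utility(action):
        return sum(belief[t] * utility[action][t] for t in belief)

    best = max(utility, key=expected_utility)
    print(best, expected_utility(best))  # offer_help, 0.7*8 - 0.3*3 = 4.7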

Bio: Bowen is an instructor in Computer Science at the University of British Columbia and runs her own software company, Beyond the Cube Ltd. Her main research interest is intelligent user interfaces, with an emphasis on probabilistic user modeling, computational linguistics, and online educational tools. She completed her PhD at the University of Toronto in 2011.

Date: Friday, Feb 28 2014 - 2:10 pm
Location: Roberts Hall 102


You Killed My Buffer. Prepare To Die. A Network Quality of Service Fairytale.

Presenter: Andrew Shewmaker

Abstract: Computer networks are madhouses, whether they're in the relative safety of a data center or out in the wilderness of the Internet. Competing flows bully each other for their fair share, and flows that try to minimize congestion suffer at the hands of those that only care when packets are dropped. Our networks function surprisingly well for being so cutthroat, but everyone suffers from the current state of affairs. This presentation outlines some of what's wrong with today's computer networks: software architecture issues, buffers, clocks, congestion signals, and priorities. It also includes an overview of ideas that deserve investigation: refactoring TCP and queueing disciplines, smarter buffers, accurate time stamps, multiple congestion signals, and deadlines.
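
As a back-of-the-envelope illustration of why oversized buffers hurt (all numbers hypothetical):

    # A full drop-tail buffer adds queueing delay equal to its size
    # divided by the link rate.
    buffer_bytes = 1_000_000          # 1 MB buffer (hypothetical)
    link_bits_per_sec = 10_000_000    # 10 Mb/s uplink (hypothetical)

    delay_s = (buffer_bytes * 8) / link_bits_per_sec
    print(f"{delay_s * 1000:.0f} ms of added latency when full")  # 800 ms

    # Loss-based TCP keeps such buffers full before backing off, which is
    # one reason delay-sensitive flows suffer until the queue drains.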

Bio: Andrew graduated from the University of Idaho in CS in 2001 and worked in the Department of Energy laboratory system for over 10 years integrating, administering, and troubleshooting high-performance Linux clusters. In addition to working on Roadrunner, the fastest supercomputer in the world (2008-2009), he also integrated visualization clusters driving dozens of 3D projectors for theater and CAVE environments. Andrew's graduate research has focused on real-time systems, storage, and networks. He received his MS in CS from UC Santa Cruz in 2009 and is currently pursuing his PhD.

Date: Monday, Feb 10 2014 - 4:10 pm
Location: EPS 108


2013 Seminars