Seminar: John Iacono, NYU, “The geometry of data structures”

Thursday, Sept 3, 4:30pm, POST 126

The geometry of data structures
John Iacono
Professor, Computer Science and Engineering
NYU Polytechnic School of Engineering

Abstract: A technique called the geometric view has been developed to study and analyze data structures. In this view, the execution of a data structure is plotted in two dimensions. The vertical dimension is the time axis, where the j-th row corresponds to the execution of the j-th operation during the lifetime of the data structure. The horizontal dimension is the space axis, where every column corresponds to an element in the structure, and the ordering of the columns follows some natural ordering among the items in the underlying data structure.
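The plotting convention described in the abstract can be illustrated with a minimal sketch (an illustration only, not Iacono's construction): the element accessed by the j-th operation becomes a point whose column is the element and whose row is j.

```python
def geometric_view(access_sequence):
    """Map an access sequence to its point set in the geometric view:
    the element accessed by the j-th operation becomes the point
    (element, j), i.e. (space, time)."""
    return {(x, j) for j, x in enumerate(access_sequence, start=1)}

# The accesses 3, 1, 4 plot as three points, one per row.
points = geometric_view([3, 1, 4])
```

Analyses in this style reason about geometric properties of the resulting point set rather than about pointer manipulations inside the structure.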

Three examples of the use of this view will be presented: rotation-based binary search trees, cache-oblivious persistence, and disjoint sets.

In all of these examples the main theme is that, by moving to the geometric view, details of the data structure that seem cumbersome and hard to approach are abstracted away in a way that allows a simple, yet equivalent, view.

Semi-seminar on monoidal computer

Dusko Pavlovic will run an informal weekly seminar on monoidal computer on Fridays from 11am to noon in ASECOLab (POST 311).  He describes it as follows:

If anyone is interested in a

  • diagrammatic model of computation
  • that lets you prove things about computability, complexity,
  • randomized computation, one-way functions
  • central dogma of (program) evolution
  • and more
  • all entirely in pictures

then please join us!

I will begin with a very quick introduction into semantics of computation, spell out the graphical language of monoidal computer, and then proceed towards one of the applications developed so far, depending on participants’ interests and my focus.

Monoidal computer has evolved as my answer to the following questions:

Why is it that the practice of computation is mostly driven by high-level programming languages, while the theory of computation is still done in the low-level machine languages of Turing machines, Boolean circuits, etc.?

Why is there no high-level view of cryptography? (This has a practical impact: people describe their algorithms in English, which leads not only to misunderstandings and errors in implementation, but also to substantially erroneous proofs.)

I use the tools developed in the semantics of programming languages to try to provide such a view. I will try to reach the point where we can define one-way and trapdoor functions. This requires capturing feasible randomized computation modulo computational indistinguishability.

Another direction is to define a von Neumann-style model where genes are programs and organisms are computational processes (but with resources, not just with data), and to try to explain algorithmically the central dogma of evolutionary genetics: that the dynamic adaptations acquired by an organism cannot be inherited. Wouldn’t evolution be more efficient if they could? Why was Lamarck wrong?


Creator of Final Fantasy lectures on Video Game Development

April 29, 2015

Hironobu Sakaguchi, legendary creator of the internationally acclaimed Final Fantasy video game series, gave a guest lecture in Professor Jason Leigh’s Video Game Development class today to discuss the development of his latest project, Terra Battle, and to announce that it had reached two million downloads. When students asked him where he gets his creative ideas, the shy game designer said: “Surfing and taking long showers.”

Professor Leigh’s class is being taught in the CyberCANOE, which enables students at the University of Hawaiʻi West Oʻahu to work on game projects with their counterparts in the Information and Computer Sciences department and the Academy for Creative Media (ACM) at the University of Hawaiʻi at Mānoa.

Designed by Professor Leigh, the CyberCANOE was funded by Chris Lee, founding director and creator of the ACM System, who previously produced movies such as Superman Returns and Final Fantasy: The Spirits Within. The class was also joined today by Hawaiʻi Council Member and former Senator Carol Fukunaga, who watched with enthusiasm as the students responded to their legendary speaker.

Photos: Hironobu Sakaguchi; UHM students listening to Sakaguchi; Sakaguchi in the CyberCANOE; Sakaguchi at West Oʻahu.

Seminar: Janos Sztipanovits, Vanderbilt University, “OpenMETA: Model-integrated design tool suite for cyber-physical systems”

This Thursday, April 9th, 4:30pm-5:30pm in POST 126, Janos Sztipanovits, of the Institute for Software Integrated Systems at Vanderbilt University (Nashville, TN) will present:

OpenMETA: Model-integrated Design Tool Suite for Cyber-Physical Systems

Abstract: CPS design flows span physical and computational domains and incorporate manufacturability constraints for physical components. Heterogeneity is the norm as well as the main challenge: components and systems are modeled using multiple physical, logical, functional and non-functional aspects. Traditional design flows use the principle of separation of concerns and decompose the overall design task into domains of manageable problem sizes. However, the fundamental goal of model-based design – correct-by-construction technology for eliminating the costly design-build-test-redesign cycles – requires modeling and analyzing cross-domain interactions among physical and cyber domains. The talk will provide an overview of the OpenMETA tool suite developed at the Institute for Software Integrated Systems at Vanderbilt University under the DARPA Adaptive Vehicle Make (AVM) program. It will present some of the key innovations, such as the introduction of model, tool and analysis integration layers and a lightweight model integration language, CyPhyML, and will summarize experience with the model- and component-based design flow. OpenMETA has been stress-tested in the AVM challenge problem – FANG – and is now transitioning to several application domains.

Bio: Dr. Janos Sztipanovits is currently the E. Bronson Ingram Distinguished Professor of Engineering at Vanderbilt University and founding director of the Institute for Software Integrated Systems. His current research interests include the foundations and applications of model- and component-based design of cyber-physical systems, design space exploration, and systems-security co-design technology. He leads the CPS Virtual Organization in the US and is co-chair of the CPS Reference Architecture and Definition public working group established by NIST in 2014. In 2014 he was elected a member of the Steering Committee of the Industrial Internet Consortium. He was founding chair of the ACM Special Interest Group on Embedded Software (SIGBED). He is a member of the Academic Executive Board of the Cyber-Physical Systems Virtual Organization and a member of the national steering group. Dr. Sztipanovits was elected Fellow of the IEEE in 2000 and an external member of the Hungarian Academy of Sciences in 2010. He won the National Prize in Hungary in 1985 and the Golden Ring of the Republic in 1982. He graduated summa cum laude from the Technical University of Budapest in 1970 and received his doctorate from the Hungarian Academy of Sciences in 1980.


M.S. Defense: Zachary Tomaszewski, “Skald: An Affordance-based User Interface for Interactive Fiction”

Skald: An Affordance-based User Interface for Interactive Fiction
Zachary Tomaszewski
Friday, February 27, 2015,
POST 302, 9:30am

Abstract: Interactive fiction (IF) is a long-lived text-based computer game genre. A small community of game authors is still producing independent IF games, and IF can be used for a number of serious applications, including education and AI research. This work explores the defining features of IF as a form, as well as its shortcomings in terms of user interface (UI) affordances. It then presents Skald, an alternative menu-driven UI for IF. An empirical evaluation shows that Skald offers a number of advantages over the traditional IF interface. It eliminates user input errors, is rated as easier to use, and encourages players to explore the virtual game world to a greater degree. However, the study also reveals that Skald does not significantly increase users’ feelings of world-level user agency, and that a sizable minority of players still prefer the traditional UI overall.

Scott Robertson (ICS, Chair)
Kim Binsted (ICS)
John Zuern (English)

Jason Leigh to deliver colloquium presentation on Advanced Visualization Instrumentation at the National Center for Supercomputing Applications

Jason Leigh, professor of Information and Computer Sciences and director of the University of Hawaiʻi’s Laboratory for Advanced Visualization and Applications, will deliver a presentation on Advanced Visualization Instrumentation on March 20, 2015 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.



Ph.D. Defense: Michael Gowanlock, “In-Memory Distance Threshold Searches on Moving Object Trajectories”

Wed 2/11/2015, 3PM

POST 302


Abstract: Processing moving object trajectories arises in many applications. To gain insight into target domains, historical continuous trajectory similarity searches find those trajectories that share similar attributes. In this work, we focus on one trajectory similarity search, the distance threshold search, which finds all trajectories within a given distance of a query trajectory over a time interval. We investigate novel approaches for the efficient processing of these searches over in-memory databases on the CPU and on the GPU.
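As an illustration of the search being defined, here is a brute-force sketch under the simplifying assumption that trajectories are sampled at common timestamps (not one of the thesis algorithms):

```python
import math

def distance_threshold_search(database, query, d, t_start, t_end):
    """Brute-force distance threshold search (illustrative only).

    database: dict mapping trajectory id -> list of (t, x, y) samples
    query:    list of (t, x, y) samples for the query trajectory
    Returns the ids of trajectories that come within distance d of
    the query at some shared sample time in [t_start, t_end].
    """
    # Index the query's samples by timestamp for O(1) lookup.
    q = {t: (x, y) for t, x, y in query if t_start <= t <= t_end}
    hits = set()
    for tid, samples in database.items():
        for t, x, y in samples:
            if t in q and math.dist((x, y), q[t]) <= d:
                hits.add(tid)
                break  # one close sample suffices for this trajectory
    return hits

# Toy database: trajectory "a" passes near the query; "b" stays far away.
db = {"a": [(0, 0.0, 0.0), (1, 5.0, 0.0)],
      "b": [(0, 10.0, 10.0), (1, 10.0, 10.0)]}
query = [(0, 0.0, 1.0), (1, 0.0, 1.0)]
hits = distance_threshold_search(db, query, 2.0, 0, 1)
```

Practical engines instead index the data and compute distances along the polyline segments between samples; this sketch only compares shared sample times, which is the scenario it is meant to make concrete.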

On the CPU, we use an in-memory R-tree index to store trajectory data and evaluate its performance on a range of trajectory datasets. The R-tree is a well-studied out-of-core indexing scheme. However, in the context of in-memory searches, we find that the traditional notion of considering good trajectory splits by minimizing the volume of minimum bounding boxes (MBBs) so as to reduce index overlap is not well-suited to improve the performance of in-memory distance threshold searches. Another finding is that computing good trajectory splits to achieve a trade-off between the time to search the index-tree and the time to process the candidate set of trajectory segments may not be beneficial when considering large datasets.

The GPU is an attractive technology for distance threshold searches because of the inherent data parallelism involved in calculating moving distances between pairs of polylines; however, challenges arise from the SIMD programming model and limited GPU memory. We study the processing of distance threshold searches using GPU implementations that avoid the use of index-trees. We focus on two scenarios for which we propose GPU-friendly indexing schemes for trajectory data. In the first scenario the database fits in GPU memory but the entire query set does not, thereby requiring back-and-forth communication between the host and the GPU. We find that our indexing scheme yields significant speedup over a multithreaded CPU implementation. We gain insight into our GPU algorithms via a performance model that proves accurate across a range of experimental scenarios. In the second scenario, we study the case where both the query set and the database fit in GPU memory. With this relaxed memory constraint, the design space for trajectory indexing schemes is larger. We investigate indexing schemes with temporal, spatial, and spatiotemporal selectivity. Through performance analyses, we determine the salient characteristics of the trajectories that are conducive to the proposed indexes. In particular, we find that there are two complementary niches for GPU and CPU distance threshold search algorithms. Namely, using the GPU is preferable when the trajectory dataset is large and/or when the threshold distance is large. The GPU is thus the better choice for large-scale scientific trajectory dataset processing, as for instance in the motivating application for this work in the domain of Astrobiology.


Seminar: Kevin Fu, University of Michigan, “Medical Device Security: The First 165 Years”

Kevin Fu
Department of Electrical Engineering and Computer Science
University of Michigan
Thursday, January 29, 2015
POST 126, 4:30pm

Abstract: Today, it would be difficult to find medical device technology that does not critically depend on computer software. Network connectivity and wireless communication have transformed the delivery of patient care. The technology often enables patients to lead more normal and healthy lives. However, medical devices that rely on software (e.g., drug infusion pumps, linear accelerators, pacemakers) also inherit the pesky cybersecurity risks endemic to computing. What’s special about medical devices and cybersecurity? What’s hype and what’s real? What can history teach us? How are international standards bodies and regulatory cybersecurity requirements changing the global manufacture of medical devices? This talk will provide a glimpse into the risks, benefits, and regulatory issues for medical device cybersecurity and innovation of trustworthy medical device software.

Bio: Kevin Fu is Associate Professor in EECS at the University of Michigan where he directs the Security and Privacy Research Group (SPQR) and the Archimedes Center for Medical Device Security. He was program chair of USENIX Security, and is a member of the ACM Committee on Computers and Public Policy and the NIST Information Security and Privacy Advisory Board. Prof. Fu received a Sloan Research Fellowship, NSF CAREER award, and best paper awards from USENIX Security, IEEE S&P, and ACM SIGCOMM. He was named MIT Technology Review TR35 Innovator of the Year. Fu has testified in U.S. Congress on health matters and has written commissioned work for the U.S. Institute of Medicine. He served as a visiting scientist at the Food & Drug Administration, the Beth Israel Deaconess Medical Center of Harvard Medical School, Microsoft Research, and MIT CSAIL. He serves on the advisory board of Samsung’s Strategy and Innovation Center. Fu received his B.S., M.Eng., and Ph.D. from MIT. He also earned a certificate of artisanal bread making from the French Culinary Institute.

Seminar: Vladimir Vovk, Royal Holloway, University of London, “Game-theoretic probability: A brief review”

Game-theoretic probability: A brief review
Vladimir Vovk (Royal Holloway, University of London)
January 9, 2015, 3:30pm
Keller 401

Abstract: The standard approach to probabilistic modelling is to assume a probability measure generating the observed outcomes. Game-theoretic probability weakens this assumption but still allows one to obtain many familiar results, such as the laws of large numbers and of the iterated logarithm, central limit theorems, large deviation inequalities, and zero-one laws. It also leads to completely new results.

Background: The speaker is visiting ASECOLab to work on a joint project. He was one of the last students of the great Andrei Kolmogorov, but since the mid-1990s he has focused on algorithmic learning and the foundations of statistics. With Glenn Shafer (of Dempster-Shafer theory) he wrote an influential book reconstructing probability theory from gaming. Last term they ran a reading seminar at the Math Dept devoted to this book; hence the talk.

Ph.D. Defense: Pavel Senin, “Software Trajectory Analysis: An empirically based method for automated software process discovery”

Software Trajectory Analysis: An empirically based method for automated software process discovery
Pavel Senin
Friday, January 16, 2015, 12:00pm
POST 302

Abstract: Recurrent behaviors are considered to be the basic building blocks of any human-driven, goal-oriented process, reflecting the development of efficient ways of dealing with common tasks based on past performance. Thus, the ability to discover recurrent behaviors is critically important for the bottom-up systematic study, modeling, and improvement of human-driven processes. In the context of software development, whose ultimate goal is the delivery of software, the ability to recognize recurrent behaviors enables the understanding, formal description, and effective guidance of evolving software processes. While a number of approaches for recurrent behavior discovery and for software process modeling and improvement have previously been proposed, they are typically built upon intrusive on-line techniques, such as observation and interviewing, and are therefore expensive, prone to bias, and unwelcome among software developers.

In this exploratory study, I have developed and tested the idea of software process discovery via off-line analysis of software process artifacts. For this, I have prototyped and evaluated the Software Trajectory Analysis framework, which is built upon the definition of the “software trajectory” data type (a temporally ordered sequence of software artifact measurements) and a novel technique for temporal data classification that enables the discovery and ranking of characteristic software trajectory patterns. By analogy with a trajectory in physics, which describes a projectile’s path in metric space, a software trajectory describes the progression of the software process and product in a space of chosen software metrics, and its recurrent structural patterns correspond to recurrent behaviors.
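The “software trajectory” data type can be illustrated with a minimal sketch (the names are illustrative, not the framework’s actual API): measurements of a chosen metric are ordered by time, yielding the sequence whose recurrent structural patterns the framework then mines.

```python
def software_trajectory(measurements):
    """Order (timestamp, metric_value) pairs by time and return the
    metric values as a temporally ordered sequence."""
    return [value for _, value in sorted(measurements)]

# E.g. lines-of-code measurements recorded out of order at three commits:
traj = software_trajectory([(3, 120), (1, 100), (2, 90)])
```

Any artifact metric (lines of code, churn, test count) measured over a project’s history can be turned into such a sequence before pattern discovery is applied.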

The claim of this dissertation is that (1) it is possible to discover recurrent behaviors off-line via systematic study of software artifacts, and (2) the Software Trajectory Analysis framework provides an effective off-line approach to the discovery of process-characteristic recurrent behaviors. In addition to an extensive experimental evaluation of the proposed algorithm for time series characteristic pattern discovery, three empirical case studies were carried out to evaluate the claim: two using software artifacts from public software repositories and one using a public dump of a Q&A website. The results suggest that Software Trajectory Analysis is capable of discovering software process-characteristic recurrent behaviors off-line, though their sensible interpretation is sometimes difficult.

Bio: Pavel Senin is a PhD candidate at the ICS department of UH Mānoa, working at the Collaborative Software Development Laboratory (CSDL) under the supervision of Prof. Philip M. Johnson.  He is originally from Krasnyi Luch, a city in Eastern Ukraine.  Before studying at UH, he received an MS in Applied Mathematics from SFedU, Russia.  During his time at UH Mānoa he worked at ASGPB, where he assembled the transgenic papaya genome and annotated a representative of the phylum Verrucomicrobia; he also received practical training from JGI at LANL, where he participated in a pioneering single-cell genome sequencing research project. At CSDL, Pavel was involved in the Hackystat project, where he worked on the problem of software process discovery.  He developed a novel technique for time series classification and proposed a framework for the discovery and ranking of software process-characteristic recurrent behaviors. When not mining software repositories, Pavel tinkers with Arduino sensors, a project which led to the development of a novel approach for the discovery of spatio-temporal anomalies.