Dr. Kim Binsted Talks Mars at The President’s Series on Hawai‘i Island

What Will It Be Like to Live on Mars?
Planning for Human Exploration of Space

On April 5, ICS Professor Kim Binsted will discuss the HI-SEAS (Hawai‘i Space Exploration Analog and Simulation) project as part of The President’s Series on Hawai‘i Island.


HI-SEAS is an analog habitat for human spaceflight to Mars, located at an isolated, Mars-like site on the Mauna Loa side of the saddle area, 8,200 feet above sea level. The fourth phase of the project began in August 2015 and lasted for one year. Learn what was discovered about what humans will need to stay happy and healthy during an extended mission to Mars.

Wednesday, April 5, 5 p.m.
RSVP by March 30
808-956-9340 or events@uhfoundation.org

Location
Hawaiʻi Community College – Pālamanui Outdoor Theatre (Campus Piko)
73-4225 Ane Keohokālole Hwy, Kailua-Kona

More information: The President’s Series – Hawai‘i Island


ICS Lunch and Seminar Series: John Iacono on 3/16

Thursday, March 16, 12:00pm, POST318B (ICSpace)

Subquadratic Algorithms for Algebraic Generalizations of 3SUM

John Iacono
Professor, Computer Science and Engineering
NYU Polytechnic School of Engineering

Abstract:

The 3SUM problem asks if an input n-set of real numbers contains a triple whose sum is zero. We consider the 3POL problem, a natural generalization of 3SUM where we replace the sum function by a constant-degree polynomial in three variables. The motivations are threefold. Raz, Sharir, and de Zeeuw gave an O(n^{11/6}) upper bound on the number of solutions of trivariate polynomial equations when the solutions are taken from the Cartesian product of three n-sets of real numbers. We give algorithms for the corresponding problem of counting such solutions. Grønlund and Pettie recently designed subquadratic algorithms for 3SUM. We generalize their results to 3POL. Finally, we shed light on the General Position Testing (GPT) problem: “Given n points in the plane, do three of them lie on a line?”, a key problem in computational geometry.
We prove that there exist bounded-degree algebraic decision trees of depth O(n^{12/7+ε}) that solve 3POL, and that 3POL can be solved in O(n^2 (log log n)^{3/2} / (log n)^{1/2}) time in the real-RAM model. Among the possible applications of those results, we show how to solve GPT in subquadratic time when the input points lie on o((log n)^{1/6} / (log log n)^{1/2}) constant-degree polynomial curves. This constitutes a first step towards closing the major open question of whether GPT can be solved in subquadratic time.
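For reference, the quadratic 3SUM baseline that these subquadratic results improve on can be sketched as follows. This is the textbook sort-and-two-pointers algorithm, not the paper's decision-tree or real-RAM algorithm:

```python
def three_sum(nums):
    """Return a triple (a, b, c) from nums with a + b + c == 0, or None.

    Classic O(n^2) baseline: sort once, then for each fixed element scan
    the rest of the array with two converging pointers.
    """
    s = sorted(nums)
    n = len(s)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            total = s[i] + s[lo] + s[hi]
            if total == 0:
                return (s[i], s[lo], s[hi])
            if total < 0:
                lo += 1   # sum too small: advance the low pointer
            else:
                hi -= 1   # sum too large: retreat the high pointer
    return None
```

Replacing the sum `s[i] + s[lo] + s[hi]` by a constant-degree trivariate polynomial gives the 3POL problem the talk addresses; the two-pointer trick no longer applies directly, which is where the algebraic machinery comes in.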

To obtain these results, we generalize important tools — such as batch range searching and dominance reporting — to a polynomial setting. We expect these new tools to be useful in other applications.

Full article: Luis Barba, Jean Cardinal, John Iacono, Stefan Langerman, Aurélien Ooms, Noam Solomon: Subquadratic Algorithms for Algebraic Generalizations of 3SUM. CoRR abs/1612.02384 (2016)


ACM Student Chapter Sponsors Teresa Nededog for WetWare Wednesday

The UH Manoa Student Chapter of the Association for Computing Machinery (ACM) is sponsoring February’s WetWare Wednesday! ACM is a group all about problem solving with any kind of tech under the sun. Come early to hear Teresa Nededog speak about her experience as a software developer.

ACM will also be demonstrating awesome projects (Arduino, Vive, 3D Printing)!

When: Wednesday, February 22, 2017, 5pm
Where: UH Manoa iLab Building 37
Cost: FREE

Free Appetizers and refreshments
Parking available on campus

WetWare Wednesday is a networking event targeting local software developers, system engineers, students, and faculty.

Grad. Seminar (2/2): Henri Casanova, “Simulating HPC Systems and Applications”

Thursday February 2nd, 4:30-5:30pm in POST 126

Speaker: Henri Casanova

Title: “Simulating HPC Systems and Applications”

Abstract: There is a well-identified problem of (lack of) reproducibility in experimental Computer Science research, and in particular in High Performance Computing (HPC) research. Running experiments in simulation is one way to lower barriers to reproducibility, and it is used effectively in several areas of Computer Science. Its use in HPC research has gained some traction, but comparatively it is still in its infancy. Part of the reason is that developing simulation models that are sufficiently accurate to produce meaningful results, yet sufficiently scalable for HPC-scale simulations, is challenging. In this presentation we will discuss several advances in the development of such simulation models. These models are implemented as part of the open-source SimGrid simulation framework. We will give an overview of SimGrid and describe its most salient features and capabilities. We will conclude by discussing ways in which SimGrid can be used not only as a research tool, but also as a tool for debugging, for teaching, and for enabling online decision making.
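The core idea of simulating rather than executing can be illustrated with a toy platform model. This is emphatically not SimGrid's API; it is a minimal sketch, with hypothetical names, of predicting task completion times from a declared platform model (host speeds) instead of running on real hardware:

```python
def simulate(tasks, hosts):
    """Toy platform simulation (NOT SimGrid): predict finish times.

    tasks: list of (name, flops, host) triples, in submission order.
    hosts: dict mapping host name -> speed in flop/s.
    Tasks on the same host run serially; duration = work / speed.
    Returns a dict mapping task name -> simulated finish time.
    """
    host_free = {h: 0.0 for h in hosts}   # time each host becomes idle
    finish = {}
    for name, flops, host in tasks:
        start = host_free[host]
        end = start + flops / hosts[host]
        host_free[host] = end
        finish[name] = end
    return finish
```

Because the whole "experiment" is a deterministic function of the declared inputs, anyone can rerun it bit-for-bit, which is exactly the reproducibility advantage the abstract describes; real frameworks like SimGrid add validated network, contention, and I/O models on top of this idea.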

Grad Seminar Guest Talk: Raghava Rao Mukkamala on 1/12

Thursday, Jan. 12, 4:30-5:30, POST 126

Multi-Dimensional Text Analytics: Concepts, Methods, Tools and Findings

Dr. Raghava Rao Mukkamala
Centre for Business Data Analytics
Copenhagen Business School, Denmark

Abstract:

In this talk, I first present a methodology for multi-dimensional text analysis using text mining, topic modeling, and domain-specific classification. Second, I present MUTATO, a research tool developed at the Centre for Business Data Analytics (http://bda.cbs.dk) for performing text mining, topic modeling, and text classification using supervised and unsupervised machine learning approaches. MUTATO is developed as a web application using the Python and C# programming languages with open-source libraries such as the Natural Language Toolkit (NLTK) and Gensim. As part of its unsupervised approaches, MUTATO supports text mining by providing keyword analysis, collocation analysis, and word-frequency analysis. MUTATO supports topic modeling to identify and discover topics and hidden information patterns in a given text corpus. As part of its supervised machine-learning approaches, MUTATO supports 1) manual coding of text documents to prepare training sets, using a systematic approach to manual content analysis, and 2) classification of a text corpus according to given domain-specific models using those training sets. MUTATO also provides key performance measures for text classification in terms of precision, recall, accuracy, and F-measure. Third and last, I will present applications of MUTATO in different research domains, highlight key empirical findings, discuss limitations, and outline future work.
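The performance measures the abstract names are standard and easy to compute from a classifier's predictions. The sketch below is illustrative only; the function and label names are hypothetical and do not reflect MUTATO's actual interface:

```python
def classification_metrics(y_true, y_pred, positive):
    """Precision, recall, accuracy, and F-measure for one target class.

    y_true, y_pred: parallel lists of labels; `positive` is the class
    being evaluated. Precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall.
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}
```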

Bio:

Raghava Rao Mukkamala is an Assistant Professor of Computational Social Science at the Department of IT Management, Copenhagen Business School; external lecturer in applied computing at the Westerdals Oslo School of Arts, Communication and Technology; and co-director of the Computational Social Science Laboratory (cssl.cbs.dk). Raghava’s current research focus is on interdisciplinary approaches to big data analytics. Combining formal/mathematical modeling approaches with data/text mining techniques and machine learning methodologies, his current research program seeks to develop new algorithms and techniques for big data analytics, such as Social Set Analytics. Raghava holds a PhD degree in Computer Science and an M.Sc. degree in Information Technology, both from the IT University of Copenhagen, Denmark, and a Bachelor of Technology degree from Jawaharlal Nehru Technological University, India. Before moving into research, Raghava gained many years of programming and IT development experience in the Danish IT industry.

ICS Lunch and Seminar Series: Sean Goggins on 1/12

Thursday, Jan. 12, 12:00pm, POST318B (ICSpace) 

Computational Intelligence Pipelines: Imagination and Reality

Sean Goggins
Computer Science Department
University of Missouri

Abstract:

The promise of making sense of human behavior through technology for profit is immense. If one looks at the success of online advertising, Amazon’s suggestions, and Facebook advertising, monetizing human behavior related to purchasing “things” is indeed beyond the realm of mere promise. What, however, about the more complex enterprises that humans undertake?

How do I seek support for a health condition, and know to trust the information? Where can I deploy my opinions and efforts so that my aim of effecting social change through social media is more than “slacktivism”? Anywhere the influence of human beings stretches beyond transactions involving the exchange of currency for goods or services, we realize how little we know about how social computing technologies influence, help, or support our humanity.

Dr. Goggins’ work focuses on the systematic collection, reshaping, and alignment of methodology and theory with data in social computing research. Three specific threads of work combine in Dr. Goggins’ endeavors, and these will be the focus of his talk. First, Group Informatics is a systematic methodology and ontology for making sense of electronic traces of human behavior. Second, Dr. Goggins’ empirical work examines social influence and information quality across social media platforms and subgroups. Finally, Dr. Goggins’ work as one of the organizers of the “open collaboration data exchange” (ocdx.io) is illuminating the challenges of collecting, analyzing, writing about, and storing data across social computing research labs. From these three perspectives, Dr. Goggins outlines what he views as the most interesting social computing questions facing us in the coming decade.

Bio:

Sean Goggins is an Associate Professor in the Computer Science department at the University of Missouri. He teaches, publishes, and conducts research on the uptake and use of information and communication technologies by small groups in medium- to large-scale sociotechnical systems, from Facebook to online course systems. Sean conceptualizes “group informatics” as a methodological approach and ontology (Goggins et al., 2013) for making sense of the interactions between people in medium- to large-scale social computing environments. Sean spent 12 years as a software engineer and architect in industries ranging from medical devices to online publishing before pursuing his Ph.D. After four years at Drexel University in Philadelphia, he moved to Missouri in the fall of 2013, where he has continued to win funding for his work and launched a new master’s degree program in data science in the fall of 2016.

ICS/CIS Joint Research Seminar: Ravi Vatrapu on Jan. 9

Monday, Jan. 9th, 4:30-5:30, Hamilton Library 2K

Social Set Analysis: A Set Theoretical Approach to Big Data Analysis
Prof. Ravi Vatrapu
Centre for Business Data Analytics
Copenhagen Business School, Denmark

Abstract: Current analytical approaches in computational social science can be characterized by four dominant paradigms: text analysis (information extraction and classification), social network analysis (graph theory), social complexity analysis (complex systems science), and social simulations (cellular automata and agent-based modeling). However, when it comes to organizational and societal units of analysis, there exists no approach to conceptualize, model, analyze, explain, and predict social media interactions as individuals’ associations with ideas, values, identities, and so on. To address this limitation, based on the sociology of associations and the mathematics of set theory, this paper presents a new approach to big data analytics called social set analysis. Social set analysis consists of a generative framework for the philosophies of computational social science, theory of social data, conceptual and formal models of social data, and an analytical framework for combining big social data sets with organizational and societal data sets. Three empirical studies of big social data are presented to illustrate and demonstrate social set analysis in terms of fuzzy set-theoretical sentiment analysis, crisp set-theoretical interaction analysis, and event-studies-oriented set-theoretical visualizations. Implications for big data analytics, current limitations of the set-theoretical approach, and future directions are outlined. (IEEE Access Paper:
http://ieeexplore.ieee.org/document/7462188/)
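As a minimal, hedged illustration of the set-theoretical idea (a toy sketch with made-up names, not the paper's formal model), social data can be represented as sets of actors associated with ideas, after which ordinary set algebra answers crisp-set questions:

```python
def social_sets(events):
    """Crisp-set view of social data.

    events: iterable of (actor, idea) pairs, each recording that an actor
    associated with an idea (e.g. posted about it). Returns a dict mapping
    each idea to the set of actors associated with it; set operations on
    these sets (intersection, union, difference) then express analytical
    questions directly.
    """
    sets = {}
    for actor, idea in events:
        sets.setdefault(idea, set()).add(actor)
    return sets

events = [("ann", "privacy"), ("bob", "privacy"), ("ann", "security")]
s = social_sets(events)
both = s["privacy"] & s["security"]   # actors associated with BOTH ideas
```

Fuzzy-set variants, as in the sentiment study the abstract mentions, would replace membership (in/out) with a degree of membership in [0, 1]; the crisp case above is the simplest instance.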

Bio: Ravi Vatrapu is a professor of human computer interaction at the Department of IT Management, Copenhagen Business School; professor of applied computing at the Westerdals Oslo School of Arts, Communication and Technology; and director of the Centre for Business Data Analytics (http://bda.cbs.dk). Prof. Vatrapu’s current research focus is on big social data analytics. Based on the enactive approach to the philosophy of mind, the phenomenological approach to sociology, and the mathematics of classical, fuzzy, and rough set theories, his current research program seeks to design, develop, and evaluate a new holistic approach to computational social science, Social Set Analytics (SSA). SSA models social media interactions as associations to ideas, values, and social actors and consists of novel formal models, predictive methods, and visual analytics tools for big social data. Prof. Vatrapu holds a Doctor of Philosophy (PhD) degree in Communication and Information Sciences from the University of Hawaii at Manoa, a Master of Science (M.Sc.) in Computer Science and Applications from Virginia Tech, and a Bachelor of Technology in Computer Science and Systems Engineering from Andhra University.

ICS Lunch and Seminar Series: Pawel Sobocinski on 11/3

Pawel Sobocinski

Where and When: POST318B (ICSpace) Thursday November 3, 12:00pm

Title: Compositional model checking of concurrent systems with Petri nets

Abstract:

Compositionality and process equivalence are both standard concepts of process algebra. Compositionality means that the behaviour of a compound system relies only on the behaviour of its components, i.e. there is no emergent behaviour. Process equivalence means that the explicit state space of a system takes a back seat to its interaction patterns: the information that an environment can obtain through interaction. Petri nets are a widely used and well-understood model of concurrency. Nevertheless, they have often been described as a non-compositional model, and tools tend to deal with monolithic, globally specified models. In this talk I will introduce Petri Nets with Boundaries (PNB), a compositional, graphical algebra of elementary net systems, an important class of Petri nets. I will show that compositionality and process equivalence are a powerful combination that can be harnessed to improve the performance of checking reachability and coverability in several common examples where Petri nets model realistic concurrent systems.

Bio:

Pawel Sobocinski studied computer science and mathematics at the University of Sydney, Australia, and completed his PhD at the University of Aarhus, Denmark, in 2004. He has held postdoctoral positions at the Universities of Pisa, Paris VII, and Cambridge and, since 2007, an academic position at the University of Southampton, UK. His research interests lie in concurrency theory, verification, programming languages, and applications of category theory in computer science.

ICS Lunch and Seminar Series: Bas Westerbaan on 10/13

Bas Westerbaan

Where and When: POST318B (ICSpace) Thursday October 13, 12:00pm

Title: In search for the smallest Kochen-Specker system

Abstract:

At the heart of the famous Conway-Kochen Free Will Theorem and Kochen and Specker’s argument against non-contextual hidden variable theories is the existence of a Kochen-Specker (KS) system: a finite set of points on the sphere that is not “010-colorable”. The first KS system, found in 1975, consisted of 117 points. In 1991, Penrose and Peres independently found systems of 33 points. Around 1995, John Conway took the lead with his 31-point system. To this day, Conway’s system is the smallest known Kochen-Specker system. In public lectures, Conway has encouraged the search for a smaller system.

In 2009, Arends, Ouaknine and Wampler of Oxford showed that a Kochen-Specker system must have at least 18 points. They did this by reducing the problem to the existence of graphs with a topological embeddability and non-colorability property. The bottleneck in their search proved to be enumerating the sheer number of graphs on more than 17 vertices and deciding embeddability.

Continuing their effort, we reduced the class of graphs to consider and found a more practical algorithm to decide embeddability. This allowed us to show that a Kochen-Specker system must have at least 22 points.

There is a lot of room for improvement; e.g. if the enumeration of graphs can be effectively restricted to those graphs in which every point is part of a triangle, the upper bound may be pushed to 24 and the smallest KS-system might be found. So, if you have a knack for algorithms, please join and give this problem a shot!
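The combinatorial core, 010-colorability, can be phrased as a small graph check: assign 0 or 1 to each vertex so that no two adjacent vertices are both 1, while every triangle contains a 1 (with the edge condition, "at least one" forces "exactly one"). A hedged brute-force sketch, with a hypothetical function name; it is exponential in the vertex count, so it is only viable for toy graphs, nowhere near the 17+ vertex scale the abstract describes:

```python
from itertools import combinations, product

def is_010_colorable(vertices, edges):
    """Return True iff some {0,1}-coloring has no edge with both
    endpoints 1 and at least one 1 in every triangle of the graph.
    Brute force over all 2^|V| colorings."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    triangles = [t for t in combinations(vertices, 3)
                 if t[1] in adj[t[0]] and t[2] in adj[t[0]]
                 and t[2] in adj[t[1]]]
    for colors in product((0, 1), repeat=len(vertices)):
        c = dict(zip(vertices, colors))
        if any(c[u] and c[v] for u, v in edges):
            continue                      # two adjacent 1s: invalid
        if all(any(c[v] for v in t) for t in triangles):
            return True                   # every triangle has its 1
    return False
```

A KS system corresponds to an orthogonality graph that is NOT 010-colorable; a single triangle is trivially colorable, while K4 already fails, which hints at why enumeration plus embeddability testing is the hard part.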

https://westerbaan.name/~bas/math/ks-extended-abstract.pdf

Sander Uijlen and Bas Westerbaan. “A Kochen-Specker system has at least 22 vectors.” New Generation Computing 34.1-2 (2016): 3-23.

Bio:

Bas Westerbaan is a Ph.D. student at Radboud University Nijmegen (the Netherlands).

Grad Seminar Guest: Niki Vazou

Graduate seminar, Thursday, Oct. 13, 4:30-5:30, POST 126

LiquidHaskell: Liquid Types for Haskell

Niki Vazou, University of California, San Diego

Abstract:

Code deficiencies and bugs constitute an unavoidable part of software systems. In safety-critical systems, such as aircraft or medical equipment, even a single bug can lead to catastrophic impacts such as injury or death. Formal verification can be used to statically track code deficiencies by proving or disproving correctness properties of a system. However, in its current state, formal verification is a cumbersome process that is rarely used by mainstream developers.

This talk presents LiquidHaskell, a usable formal verifier for Haskell programs. LiquidHaskell naturally integrates the specification of correctness properties in the development process. Moreover, verification is automatic, requiring no explicit proofs or complicated annotations.  At the same time, the specification language is expressive and modular, allowing the user to specify correctness properties ranging from totality and termination to memory safety and safe resource (e.g., file) manipulation.  Finally, LiquidHaskell has been used to verify more than 10,000 lines of real-world Haskell programs.

LiquidHaskell serves as a prototype verifier in a future where formal techniques will be used to facilitate, instead of hinder, software development. For instance, by automatically providing instant feedback, a verifier will allow a web security developer to immediately identify potential code vulnerabilities.

Bio:

Niki Vazou is a Ph.D. candidate at the University of California, San Diego, supervised by Ranjit Jhala. She works in the area of programming languages, with the goal of building usable program verifiers that will naturally integrate formal verification techniques into the mainstream software development chain. Niki Vazou received the Microsoft Research Ph.D. Fellowship in 2014 and her B.S. from the National Technical University of Athens, Greece, in 2011.