(This page is a port of the ICS technical report library page from the old website. Many links may no longer work.)

ICS2012-01-01: Assessing Researcher Interdisciplinarity: A Case Study of the University of Hawaii NASA Astrobiology Institute – M. Gowanlock, R. Gazan — by Henri Casanova — last modified Jan 18, 2012 04:30 PM
Abstract: In this study, we combine bibliometric techniques with a machine learning algorithm, the sequential Information Bottleneck, to assess the interdisciplinarity of research produced by the University of Hawaii NASA Astrobiology Institute (UHNAI). In particular, we cluster abstract data to evaluate ISI Web of Knowledge subject categories as descriptive labels for astrobiology documents, and to assess individual researcher interdisciplinarity to determine where collaboration opportunities might occur. We find that the majority of the UHNAI team is engaged in interdisciplinary research, and suggest that our method could be applied to additional NASA Astrobiology Institute teams to identify and facilitate collaboration opportunities.
ICS2009-06-03: Evaluating Terminal Heads Of Length K – D. Pager — by David Pager — last modified Jun 24, 2009 08:03 PM
ABSTRACT. This paper presents an alternative algorithm for finding the terminal heads of length k of a given string in a given context-free grammar, for a given k.
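For orientation, the terminal heads of length k of a string are what compiler texts call its FIRST_k set. The paper's alternative algorithm is not reproduced here; the following is a minimal sketch of the classical fixed-point computation, with a toy grammar (all names and the data layout are illustrative):

```python
def terminal_heads(grammar, symbols, k):
    """Terminal heads of length up to k of the string `symbols`,
    via the textbook fixed-point computation of FIRST_k sets.
    grammar maps each nonterminal to a list of productions (tuples
    of symbols); any symbol that is not a key of the dict is
    treated as a terminal."""
    first = {nt: set() for nt in grammar}

    def heads(form):
        # all k-prefixes of terminal strings derivable from `form`
        out = {()}
        for s in form:
            expand = {(s,)} if s not in grammar else first[s]
            out = {(a + b)[:k] for a in out for b in expand}
        return out

    changed = True
    while changed:  # iterate to the least fixed point
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                new = heads(prod) - first[nt]
                if new:
                    first[nt] |= new
                    changed = True
    return heads(symbols)


# Toy grammar: S -> a S | b
grammar = {'S': [('a', 'S'), ('b',)]}
```

Termination follows because every head is a tuple of at most k terminals, so the FIRST_k sets can only grow finitely often.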
ICS2009-06-02: The Lane Table Method Of Constructing LR(1) Parsers – D. Pager — by David Pager — last modified Jul 03, 2009 12:46 PM
ABSTRACT. The first practical application of the LR algorithm was by [1] for the LALR(1) subset of LR(1) grammars. In [2] an efficient method of producing an LR(1) parser for all LR(1) grammars was described which involves resolving conflicts at states of the LR(0) parsing machine, employing two phases. In Phase 1 the contexts of the productions involved in conflicts are evaluated by a process described there called “lane tracing”. If conflicts cannot be resolved by these means, then in Phase 2 the parts of the machine involved in lane tracing are regenerated, avoiding the combination of states that potentially lead to conflicts. Other works along the same lines include [4, 5]. The criterion employed in [2] for determining whether or not states may be combined was that of weak compatibility, as defined in [3]. In this paper we describe an alternative method for determining whether states can be combined. According to testing by [6] this method requires less computation. It is also more efficient when extending the method from LR(1) to LR(k) parsing, as described in [7], where very large grammars may be used for the purposes of natural language translation. Taken together with Phase 1, this new method of Phase 2 will, as before, produce a conflict-free LR(1) parser for all LR(1) grammars.
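For context, the weak-compatibility criterion of [3] that this paper's method replaces can be sketched as follows. This is an illustrative rendering of Pager's condition for two LR(1) states sharing the same core, not code from the paper; names and the data layout are assumptions:

```python
def weakly_compatible(c, d):
    """Pager's weak-compatibility test for two LR(1) states with the
    same core. c[i] and d[i] are the lookahead (context) sets of the
    i-th core item in each state. The states may be merged if, for
    every pair of distinct items i, j, at least one of these holds:
      (a) c[i] & d[j] and c[j] & d[i] are both empty, or
      (b) c[i] & c[j] is nonempty, or
      (c) d[i] & d[j] is nonempty."""
    n = len(c)
    for i in range(n):
        for j in range(i + 1, n):
            cross = (c[i] & d[j]) or (c[j] & d[i])
            if cross and not (c[i] & c[j]) and not (d[i] & d[j]):
                return False  # merging could introduce a conflict
    return True
```

For example, lookaheads `[{'a'}, {'b'}]` and `[{'a'}, {'c'}]` pass the test, while `[{'a'}, {'b'}]` and `[{'b'}, {'a'}]` fail it, since merging the latter pair would give both items the lookahead set {a, b}.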
ICS2009-06-01: Resolving LR Type Conflicts at Translation or Compile Time – D. Pager — by David Pager — last modified Jul 03, 2009 12:49 PM
ABSTRACT. The paper considers circumstances in which it is advantageous to resolve reduce-reduce conflicts at compile time, rather than at compiler-construction time. The application considered is that of translating English to one of the Romance languages, such as Italian, where adjectives and nouns have distinctive forms depending on their gender.
ICS2008-12-01: Note Taking and Note Sharing While Browsing Campaign Information – S. Robertson, R. Vatrapu, G. Abraham — by Scott Robertson — last modified Jun 21, 2009 10:10 PM
Abstract: Participants were observed while searching and browsing the internet for campaign information in a mock-voting situation in three online note-taking conditions: No Notes, Private Notes, and Shared Notes. Note taking significantly influenced the manner in which participants browsed for information about candidates. Note taking competed for time and cognitive resources and resulted in less thorough browsing. Effects were strongest when participants thought that their notes would be seen by others. Think-aloud comments indicated that participants were more evaluative when taking notes, especially shared notes. Our results suggest that there could be design trade-offs between e-Democracy and e-Participation technologies.
ICS2008-09-01: Resource Allocation using Virtual Clusters – M. Stillwell, D. Schanzenbach, F. Vivien, H. Casanova — by Henri Casanova — last modified Sep 29, 2008 03:27 PM
In this report we demonstrate the utility of resource allocations that use virtual machine technology for sharing parallel computing resources among competing users. We formalize the resource allocation problem with a number of underlying assumptions, determine its complexity, propose several heuristic algorithms to find near-optimal solutions, and evaluate these algorithms in simulation. We find that among our algorithms one is very efficient and also leads to the best resource allocations. We then describe how our approach can be made more general by removing several of the underlying assumptions.
ICS2008-05-02: Traffic and Navigation Support through an Automobile Heads Up Display (A-HUD) – K. Chu, R. Brewer, S. Joseph — by Kar-Hai Chu — last modified May 22, 2008 06:05 AM
Abstract: The automobile industry has produced many cars with new features over the past decade. Taking advantage of advances in technology, cars today have fuel-efficient hybrid engines, proximity sensors, windshield wipers that can detect rain, built-in multimedia entertainment, and all-wheel drive systems that adjust power in real time. However, the interaction between the driver and the car has not changed significantly. The information being delivered from the car to the driver – both in quantity and method – has not seen the same improvements as those made “under the hood.” This is a position paper that proposes immersing the driver inside an additional layer of traffic and navigation data, and presenting that data to the driver by embedding display systems into the automobile windows and mirrors. We have developed the initial concepts and ideas for this type of virtual display. Through gaze tracking, the digital information is superimposed and registered with real-world entities such as street signs and traffic intersections.
ICS2008-05-01: Accuracy and Responsiveness of CPU Sharing Using Xen’s Cap Values – D. Schanzenbach, H. Casanova — by Henri Casanova — last modified May 22, 2008 06:05 AM
Abstract: The accuracy and responsiveness of the Xen CPU Scheduler is evaluated using the “cap value” mechanism provided by Xen. The goal of the evaluation is to determine whether state-of-the-art virtualization technology, and in particular Xen, enables CPU sharing that is sufficiently accurate and responsive for the purpose of enabling “flexible resource allocations” in virtualized cluster environments.
ICS2008-1-01: Multiple-Genome Annotation of Genome Fragments Using Hidden Markov Model Profiles – M. Menor, K. Baek, G. Poisson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Abstract: To learn more about microbes and overcome the limitations of standard culture-based methods, microbial communities are being studied in an uncultured state. In such metagenomic studies, genetic material is sampled from the environment and sequenced using the whole-genome shotgun sequencing technique. This results in thousands of DNA fragments that need to be identified, so that the composition and inner workings of the microbial community can begin to be understood. Those fragments are then assembled into longer portions of sequences. However, the high diversity present in an environment and the often low level of genome coverage achieved by the sequencing technology result in a low number of assembled fragments (contigs) and many unassembled fragments (singletons). The identification of contigs and singletons is usually done using BLAST, which finds sequences similar to the contigs and singletons in a database. An expert may then manually read these results and determine if the function and taxonomic origins of each fragment can be determined. In this report, an automated system called Anacle is developed to annotate the unassembled fragments, following a taxonomy, before the assembly process. Knowledge of what proteins can be found in each taxon is built into Anacle by clustering all known proteins of that taxon. The annotation performance obtained using Markov clustering (MCL) and Self-Organizing Maps (SOM) is investigated and compared. The resulting protein clusters can each be represented by a Hidden Markov Model (HMM) profile. Thus a “skeleton” of the taxon is generated, with the profile HMMs providing a summary of the taxon’s genetic content. The experiments show that (1) MCL is superior to SOMs in annotation and in running-time performance, (2) Anacle achieves good performance in taxonomic annotation, and (3) Anacle has the ability to generalize, since it can correctly annotate fragments from genomes not present in the training dataset.
These results indicate that Anacle can be very useful to metagenomics projects.
ICS2007-12-01: Health Management Information Systems for Resource Allocation and Purchasing in Developing Countries – D. Streveler, S. Sherlock — by Henri Casanova — last modified May 22, 2008 06:05 AM
World Bank, Health Nutrition and Population, Discussion Paper: The paper begins with the premise that it is not possible to implement an efficient, modern RAP strategy today without the effective use of information technology. The paper then leads the architect through the functionality of the systems components and environment needed to support RAP, pausing to justify them at each step. The paper can be used as a long-term guide through the systems development process as it is not necessary (and likely not possible) to implement all functions at once. The paper’s intended audience is those members of a planning and strategy body, working in conjunction with technical experts, who are charged with designing and implementing a RAP strategy in a developing country.
ICS2007-11-02: On the NP-Hardness of the RedundantTaskAlloc Problem – J. Wingstrom, H. Casanova — by Henri Casanova — last modified May 22, 2008 06:05 AM
Consider an application that consists of n independent identical tasks to be executed on m computers, with m > n. Assume that each computer can execute a task with a given probability of success (typically <1). One can use redundancy to execute replicas of some of the tasks on the remaining m-n computers. The problem is to determine how many replicas should be created for each task, or, more precisely, the number of task instances that should be created for each task and the computers to which these instances should be allocated, in order to maximize the probability of successful application completion. We formally define this problem, which we call RedundantTaskAlloc, and prove that it is NP-hard.
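Assuming failures are independent and each computer runs a single task instance, the objective being maximized can be written directly: a task fails only if all of its replicas fail. A minimal sketch (the function name and data layout are illustrative; the NP-hard part, choosing the allocation itself, is not shown):

```python
from math import prod

def success_probability(allocation, p):
    """Probability that every task completes, for a given allocation
    of task instances to computers. allocation[t] is the set of
    computers running an instance of task t; p[j] is computer j's
    probability of success. Assumes independent failures and one
    instance per computer:
        P(success) = prod over t of (1 - prod over j in allocation[t] of (1 - p[j]))
    """
    return prod(1 - prod(1 - p[j] for j in machines)
                for machines in allocation)


# Two tasks on three computers: task 0 replicated on computers 0 and 1,
# task 1 on computer 2 alone.
p = {0: 0.5, 1: 0.5, 2: 0.8}
# task 0 fails only if both replicas fail: 1 - 0.25 = 0.75
# overall: 0.75 * 0.8 = 0.6
```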
ICS2007-11-01: Statistical Modeling of Resource Availability in Desktop Grids – J. Wingstrom, H. Casanova — by Henri Casanova — last modified May 22, 2008 06:05 AM
Desktop grids are compute platforms that aggregate and harvest the idle CPU cycles of individually owned personal computers and workstations. A challenge for using these platforms is that the compute resources are volatile. Due to this volatility the vast majority of desktop grid applications are embarrassingly parallel and high-throughput. Deeper understanding of the nature of resource availability is needed to enable the use of desktop grids for a broader class of applications. In this document we further this understanding thanks to statistical analysis of availability traces collected on real-world desktop grid platforms.
ICS2006-12-03: Improving Software Development Process and Product Management with Software Project Telemetry – Q. Zhang — by Henri Casanova — last modified May 22, 2008 06:05 AM
Software development is slow, expensive and error prone, often resulting in products with a large number of defects which cause serious problems in usability, reliability, and performance. To combat this problem, software measurement provides a systematic and empirically-guided approach to control and improve software development processes and final products. However, due to the high cost associated with “metrics collection” and difficulties in “metrics decision-making,” measurement is not widely adopted by software organizations. This dissertation proposes a novel metrics-based program called “software project telemetry” to address the problems. It uses software sensors to collect metrics automatically and unobtrusively. It employs a domain-specific language to represent telemetry trends in software product and process metrics. Project management and process improvement decisions are made by detecting changes in telemetry trends and comparing trends between different periods of the same project. Software project telemetry avoids many problems inherent in traditional metrics models, such as the need to accumulate a historical project database and ensure that the historical data remain comparable to current and future projects. The claim of this dissertation is that software project telemetry provides an effective approach to (1) automated metrics collection and analysis, and (2) in-process, empirically-guided software development process problem detection and diagnosis. Two empirical studies were carried out to evaluate the claim: one in software engineering classes, and the other in the Collaborative Software Development Lab. The results suggested that software project telemetry had acceptably-low metrics collection and analysis overhead, and that it provided decision-making value at least in the exploratory context of the two studies.
ICS2006-12-02: Evaluation of Jupiter: A Lightweight Code Review Framework – T. Yamashita — by Henri Casanova — last modified May 22, 2008 06:05 AM
Software engineers generally agree that code reviews reduce development costs and improve software quality by finding defects in the early stages of software development. In addition, code review software tools help the code review process by providing a more efficient means of collecting and analyzing code review data. On the other hand, software organizations that conduct code reviews often do not utilize these review tools. Instead, most organizations simply use paper or text editors to support their code review processes. Using paper or a text editor is potentially less useful than using a review tool for collecting and analyzing code review data. In this research, I attempted to address the problems of previous code review tools by creating a lightweight and flexible review tool. This review tool that I have developed, called “Jupiter”, is an Eclipse IDE Plug-In. I believe the Jupiter Code Review Tool is more efficient at collecting and analyzing code review data than the text-based approaches. To investigate this hypothesis, I have constructed a methodology to compare the Jupiter Review Tool to the text-based review approaches. I carried out a case study using both approaches in a software engineering course with 19 students. The results provide some supporting evidence that Jupiter is more useful and more usable than the text-based code review, requires less overhead than the text-based review, and appears to support long-term adoption. The major contributions of this research are the Jupiter design philosophy, the Jupiter Code Review Tool, and the insights from the case study comparing the text-based review to the Jupiter-based review.
ICS2006-12-01: Results from the 2006 Classroom Evaluation of Hackystat-UH – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents the results from a classroom evaluation of Hackystat by ICS 413 and ICS 613 students at the end of Fall, 2006. The students had used Hackystat-UH for approximately six weeks at the time of the evaluation. The survey requests their feedback regarding the installation, configuration, overhead of use, usability, utility, and future use of the Hackystat-UH configuration. This classroom evaluation is a semi-replication of an evaluation performed on Hackystat by ICS 413 and 613 students at the end of Fall, 2003, which is reported in “Results from the 2003 Classroom Evaluation of Hackystat-UH”. As the Hackystat system has changed significantly since 2003, some of the evaluation questions were changed. The data from this evaluation, in combination with the data from the 2003 evaluation, provide an interesting perspective on the past, present, and possible future of Hackystat. Hackystat has increased significantly in functionality since 2003, which has enabled the 2006 usage to more closely reflect industrial application, and which has resulted in significantly less overhead with respect to client-side installation. On the other hand, results appear to indicate that this increase in functionality has resulted in a decrease in the usability and utility of the system, due to inadequacies in the server-side user interface. Based upon the data, the report proposes a set of user interface enhancements to address the problems raised by the students, including Ajax-based menus and parameters, workflow-based organization of the user interface, real-time display for ongoing project monitoring, annotations, and simplified data exploration facilities.
ICS2005-08-02: Priority Ranked Inspection: Supporting Effective Inspection in Resource-limited Organizations – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
Imagine that your project manager has budgeted 200 person-hours for the next month to inspect newly created source code. Unfortunately, in order to inspect all of the documents adequately, you estimate that it will take 400 person-hours. However, your manager refuses to increase the budgeted resources for the inspections. How do you decide which documents to inspect and which documents to skip? Unfortunately, the classic definition of inspection does not provide any advice on how to handle this situation. For example, the notion of entry criteria used in Software Inspection determines when documents are ready for inspection rather than whether inspection is needed at all. My research has investigated how to prioritize inspection resources and apply them to areas of the system that need them more. It is commonly assumed that defects are not uniformly distributed across all documents in a system: a relatively small subset of a system accounts for a relatively large proportion of defects. If inspection resources are limited, then it will be more effective to identify and inspect the defect-prone areas. To accomplish this research, I have created an inspection process called Priority Ranked Inspection (PRI). PRI uses software product and development process measures to distinguish documents that are “more in need of inspection” (MINI) from those “less in need of inspection” (LINI). Some of the product and process measures include: user-reported defects, unit test coverage, active time, and number of changes. I hypothesize that the inspection of MINI documents will generate more defects with a higher severity than inspecting LINI documents. My research employed a very simple exploratory study, which included inspecting MINI and LINI software code and checking to see if MINI code inspections generate more defects than LINI code inspections. The results of the study provide supporting evidence that MINI documents do contain more high-severity defects than LINI documents.
In addition, there is some evidence that PRI can provide developers with more information to help determine what documents they should select for inspection.
ICS2005-08-01: Continuous GQM: An automated framework for the Goal-Question-Metric paradigm – C. Lofi — by Henri Casanova — last modified May 22, 2008 06:05 AM
Measurement is an important aspect of Software Engineering as it is the foundation of predictable and controllable software project execution. Measurement is essential for assessing actual project progress, establishing baselines and validating the effects of improvement or controlling actions. The work performed in this thesis is based on Hackystat, a fully automated measurement framework for software engineering processes and products. Hackystat is designed to unobtrusively measure a wide range of metrics relevant to software development and collect them in a centralized data repository. Unfortunately, it is not easy to interpret, analyze and visualize the vast data collected by Hackystat in such a way that it can effectively be used for software project control. A potential solution to that problem is to integrate Hackystat with the GQM (Goal / Question / Metric) Paradigm, a popular approach for goal-oriented, systematic definition of measurement programs for software-engineering processes and products. This integration should allow the goal-oriented use of the metric data collected by Hackystat and increase its usefulness for project control. During the course of this work, this extension to Hackystat, later called hackyCGQM, is implemented. As a result, hackyCGQM enables Hackystat to be used as a Software Project Control Center (SPCC) by providing purposeful high-level representations of the measurement data. Another interesting side-effect of the combination of Hackystat and hackyCGQM is that this system is able to perform fully automated measurement and analysis cycles. This leads to the development of cGQM, a specialized method for fully automated, GQM-based measurement programs. In summary, hackyCGQM seeks to implement a completely automated GQM-based measurement framework.
This high degree of automation is made possible by limiting the implemented measurement programs to metrics which can be measured automatically, thus sacrificing the ability to use arbitrary metrics.
ICS2005-07-01: Studying Micro-Processes in Software Development Stream – H. Kou — by Henri Casanova — last modified May 22, 2008 06:05 AM
In this paper we propose a new streaming technique to study software development. As we observed, software development consists of a series of activities such as editing, compilation, testing, debugging, and deployment. All of these activities contribute to the development stream, which is a collection of software development activities in time order. The development stream can help us replay and reveal the software development process at a later time without too much hassle. We developed a system called Zorro to generate and analyze development streams at the Collaborative Software Development Laboratory at the University of Hawaii. It is built on top of Hackystat, an in-process automatic metric collection system developed in the CSDL. Hackystat sensors continuously collect development activities and send them to a centralized data store for processing. Zorro reads in all data of a project and constructs a stream from them. Tokenizers are chained together to divide the development stream into episodes (micro-iterations) for classification with a rule engine. In this paper we demonstrate the analysis of Test-Driven Development (TDD) with this framework.
ICS2005-07-01: Hackystat-SQI: First Progress Report – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents the initial analyses that are available for Hackystat-SQI and future directions.
ICS2005-06-01: A continuous, evidence-based approach to discovery and assessment of software engineering best practices – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document presents the project description for a proposal to the National Science Foundation. It discusses an approach that integrates Hackystat, Software Project Telemetry, Software Development Stream Analysis, Pattern Discovery, and Evidence-based software engineering to support evaluation of best practices. Both classroom and industrial case studies are proposed to support evaluation of the techniques.
ICS2005-03-01: Modularity in Evolution: Some Low-Level Questions – L. Altenberg — by Henri Casanova — last modified May 22, 2008 06:05 AM
I have endeavored in this essay to delve into some of the low-level conceptual issues associated with the idea of modularity in the genotype-phenotype map. My main proposal is that the evolutionary advantages that have been attributed to modularity do not derive from modularity per se. Rather, they require that there be an “alignment” between the spaces of phenotypic variation, and the selection gradients that are available to the organism. Modularity in the genotype-phenotype map may make such an alignment more readily attained, but it is not sufficient; the appropriate phenotype-fitness map in conjunction with the genotype-phenotype map is also necessary for evolvability.
ICS2005-01-01: Evolvability Suppression to Stabilize Far-Sighted Adaptations – L. Altenberg — by Henri Casanova — last modified May 22, 2008 06:05 AM
The opportunistic character of adaptation through natural selection can lead to ‘evolutionary pathologies’: situations in which traits evolve that promote the extinction of the population. Such pathologies include imprudent predation and other forms of habitat over-exploitation or the ‘tragedy of the commons’, adaptation to temporally unreliable resources, cheating and other antisocial behavior, infectious pathogen carrier states, parthenogenesis, and cancer, an intra-organismal evolutionary pathology. It is known that hierarchical population dynamics can protect a population from invasion by pathological genes. Can it also alter the genotype so as to prevent the generation of such genes in the first place, i.e. suppress the evolvability of evolutionary pathologies? A model is constructed in which one locus controls the expression of the pathological trait, and a series of modifier loci exist which can prevent the expression of this trait. It is found that multiple ‘evolvability checkpoint’ genes can evolve to prevent the generation of variants that cause evolutionary pathologies. The consequences of this finding are discussed.
ICS2004-12-02: Using AOP to Improve Hackystat Telemetry Analysis Performance – H. Kou, Q. Zhang, Collaborative Software Development Laboratory — by Henri Casanova — last modified May 22, 2008 06:05 AM
Abstract: Hackystat is a system that provides automated support for collecting and analyzing software development product and process metrics. The analysis service is provided through the web interface of the Hackystat server, which can be quite complex. This paper discusses how we use Aspect-Oriented Programming (AOP) techniques to implement a high-level cache for Hackystat telemetry analysis, and reports empirical evaluation results on system throughput.
ICS2004-12-01: Aspect Oriented Programming and Game Development – Ernest Criss — by Henri Casanova — last modified May 22, 2008 06:05 AM
Abstract: Aspect Oriented Programming could possibly be the next phase in the evolution of game development. This case study will attempt to explore this new and exciting programming paradigm, which promises to be as significant to the software development world as Object Oriented Programming. We will explore utilizing AOP facilities by extending an existing game called Invaders – a game that bears resemblance to the classic Space Invaders.
ICS2004-08-02: Practical automated process and product metric collection and analysis in a classroom setting: Lessons learned from Hackystat-UH – P. Johnson, H. Kou, J. Agustin, Q. Zhang, A. Kagawa, T. Yamashita — by Henri Casanova — last modified May 22, 2008 06:05 AM
Measurement definition, collection, and analysis is an essential component of high quality software engineering practice, and is thus an essential component of the software engineering curriculum. However, providing students with practical experience with measurement in a classroom setting can be so time-consuming and intrusive that it’s counter-productive—teaching students that software measurement is “impractical” for many software development contexts. In this research, we designed and evaluated a very low-overhead approach to measurement collection and analysis using the Hackystat system with special features for classroom use. We deployed this system in two software engineering classes at the University of Hawaii during Fall, 2003, and collected quantitative and qualitative data to evaluate the effectiveness of the approach. Results indicate that the approach represents substantial progress toward practical, automated metrics collection and analysis, though issues relating to the complexity of installation and privacy of user data remain.
ICS2004-08-01: Keeping the coverage green: Investigating the cost and quality of testing in agile development – P. Johnson, J. Agustin — by Henri Casanova — last modified May 22, 2008 06:05 AM
An essential component of agile methods such as Extreme Programming is a suite of test cases that is incrementally built and maintained throughout development. This paper presents research exploring two questions regarding testing in these agile contexts. First, is there a way to validate the quality of test case suites in a manner compatible with agile development methods? Second, is there a way to assess and monitor the costs of agile test case development and maintenance? In this paper, we present the results of our recent research on these issues. Our results include a measure called XC (for Extreme Coverage) which is implemented in a system called JBlanket. XC is designed to support validation of the test-driven design methodology used in agile development. We describe how XC and JBlanket differ from other coverage measures and tools, assess their feasibility through a case study in a classroom setting, assess its external validity on a set of open source systems, and illustrate how to incorporate XC into a more global measure of testing cost and quality called Unit Test Dynamics (UTD). We conclude with suggested research directions building upon these findings to improve agile methods and tools.
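JBlanket itself instruments Java bytecode to measure XC. As a rough, language-agnostic illustration of the underlying idea (the fraction of a set of methods invoked at least once while the test suite runs), here is a minimal Python sketch; all names are invented for the example, and real method-coverage tools are considerably more careful:

```python
import sys

def method_coverage(run_tests, methods):
    """Fraction of `methods` (function objects) invoked at least once
    while `run_tests` (a zero-argument callable executing the suite)
    runs. Uses a global trace function to record every 'call' event."""
    called = set()

    def tracer(frame, event, arg):
        if event == "call":
            called.add(frame.f_code)  # record the code object invoked
        return None  # no per-line tracing needed

    sys.settrace(tracer)
    try:
        run_tests()
    finally:
        sys.settrace(None)  # always remove the trace hook

    covered = {m for m in methods if m.__code__ in called}
    return len(covered) / len(methods)
```

A real XC-style measure would additionally exclude trivial one-line methods (getters and setters), which is part of what distinguishes XC from plain method coverage.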
ICS2004-07-03: Hackystat MDS supporting MSL MMR: Round 2 Results – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents selected additional results from Hackystat Analyses on Mission Data System’s Release 9. The goal is to identify reports of use to the Monthly Management Report for Mars Science Laboratory.
ICS2004-07-02: Hackystat-SQI: Modeling Different Development Processes – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents the design of a Hackystat module called SQI, whose purpose is to support quality analysis for multiple projects at Jet Propulsion Laboratory.
ICS2004-06-02: Hackystat MDS supporting MSL MMR – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents selected results from Hackystat Analyses on Mission Data System’s Release 9. The goal is to identify reports of use to the Monthly Management Report for Mars Science Laboratory.
ICS2004-06-01: Simon-Ando Decomposability and Fitness Landscapes – M. Shpak, P. Stadler, G. Wagner, L. Altenberg — by Henri Casanova — last modified May 22, 2008 06:05 AM
In this paper, we investigate fitness landscapes (under point mutation and recombination) from the standpoint of whether the induced evolutionary dynamics have a “fast-slow” time scale associated with the differences in relaxation time between local quasi-equilibria and the global equilibrium. This dynamical behavior has been formally described in the econometrics literature in terms of the spectral properties of the appropriate operator matrices by Simon and Ando (1961), and we use the relations they derive to ask which fitness functions and mutation/recombination operators satisfy these properties. It turns out that quite a wide range of landscapes satisfy the condition (at least trivially) under point mutation given a sufficiently low mutation rate, while the property appears to be difficult to satisfy under genetic recombination. In spite of the fact that Simon-Ando decomposability can be realized over a fairly wide range of parameters, it imposes a number of restrictions on which landscape partitionings are possible. For these reasons, the Simon-Ando formalism does not appear to be applicable to other forms of decomposition and aggregation of variables that are important in evolutionary systems.
FileICS2004-05-02: The Hackystat-JPL Configuration: Round 2 Result, A. Kagawa, P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents selected round two results from Hackystat-based descriptive analyses of Harvest workflow data gathered from the Mission Data System software development project from January, 2003 to December, 2003. It describes improvements and differences made since the previous technical report (The Hackystat-JPL Configuration: Overview and Initial Results).
FileICS2004-05-01: ALU: An artifact-centered asynchronous online discussion facility – B. Harris — by Henri Casanova — last modified May 22, 2008 06:05 AM
It is often the case with online discussions that the artifact upon which a discussion is based is not easily accessible through the discussion; or that references are made to multiple artifacts during the discussion and are therefore difficult to organize. Alu has attempted to address these issues by combining a discussion and artifact in the same browser window, allowing the user of the system to easily consult the appropriate artifact while reading message postings. This allows the messages to be kept in context as well as making reference to the artifact simpler. Additional features such as uploading files, email subscriptions and various layouts have been added over time to enhance its overall functionality.
FileICS2004-03-01: Open Problems in the Spectral Analysis of Evolutionary Dynamics, L. Altenberg — by Henri Casanova — last modified May 22, 2008 06:05 AM
For broad classes of selection and genetic operators, the dynamics of evolution can be completely characterized by the spectra of the operators that define the dynamics, in both infinite and finite populations. These classes include generalized mutation, frequency-independent selection, and uniparental inheritance. Several open questions exist regarding these spectra: 1. For a given fitness function, what genetic operators and operator intensities are optimal for finding the fittest genotype? The concept of rapid first hitting time, an analog of Sinclair’s “rapidly mixing” Markov chains, is examined. 2. What is the relationship between the spectra of deterministic infinite population models, and the spectra of the Markov processes derived from them in the case of finite populations? 3. Karlin proved a fundamental relationship between selection, rates of transformation under genetic operators, and the consequent asymptotic mean fitness of the population. Developed to analyze the stability of polymorphisms in subdivided populations, the theorem has been applied to unify the reduction principle for self-adaptation, and has other applications as well. Many other problems could be solved if it were generalized to account for the interaction of different genetic operators. Can Karlin’s theorem on operator intensity be extended to account for mixed genetic operators?
FileICS2003-12-02: Results from Qualitative Evaluation of Hackystat-UH – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents the results from a qualitative evaluation of ICS 413 and ICS 613 students at the end of Fall, 2003. The students had used Hackystat-UH for approximately six weeks at the time of the evaluation. The survey requests their feedback regarding the installation, configuration, overhead of use, usability, utility, and future use of the Hackystat-UH configuration. Results provide evidence that: (1) significant problems occur during installation and configuration of the system; (2) the Hackystat-UH configuration incurs very low overhead after completing installation and configuration; (3) analyses were generally found to be somewhat useful and usable; and (4) feasibility in a professional development context requires addressing privacy and platform issues.
FileICS2003-12-01: Jupiter Users Guide – T. Yamashita — by Henri Casanova — last modified May 22, 2008 06:05 AM
Users guide for the Jupiter code review plug-in for Eclipse.
FileICS2003-11-01: The Review Game: Teaching Asynchronous Distributed Software Review Using Eclipse – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Presents an approach to teaching software review involving an Eclipse plug-in called Jupiter and automated metrics collection and analysis using Hackystat.
FileICS2003-10-01: The Hackystat-JPL Configuration: Overview and Initial Results – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This report presents selected initial results from Hackystat-based descriptive analyses of Harvest workflow data gathered from the Mission Data System software development project from January, 2003 to August, 2003. We present the motivation for this work, the methods used, examples of the analyses, and questions raised by the results. Our major findings include: (a) workflow transitions not documented in the “official” process; (b) significant numbers of packages with unexpected transition sequences; (c) cyclical levels of development “intensity” as represented by levels of promotion/demotion; (d) a possible approach to calculating the proportion of “new” scheduled work versus rework/unscheduled work, along with baseline values; and (e) a possible approach to calculating the distribution of package “ages” and days spent in the various workflow states, along with potential issues with the representation of “package age” based upon the current approach to package promotion. The report illustrates how our current approach to analysis can yield descriptive perspectives on the MDS development process. It provides a first step toward more prescriptive, analytic models of the MDS software development process by providing insights into the potential uses and limitations of MDS product and process data.
FileICS2003-08-01: Hackystat Metric Collection and Analysis for the MDS Harvest CM System: A Design Specification – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This proposal describes the requirements and top-level design for a Hackystat-based system that automatically monitors and analyzes the MDS development process using data collected from the Harvest CM system.
FileICS2003-05-03: The design, implementation, and evaluation of CLEW: An improved Collegiate Department Website – A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
The purpose of a collegiate department website is to provide prospective students, current students, faculty, staff, and other academic and industry professionals with information concerning the department. The information presented on the website should give the user an accurate model of the department, even as it changes over time. Some of these changes include adding new faculty members, new students, new courses, etc. The more accurately the website models the department, the more aware the website's users will be of the department. Traditional collegiate department websites have two primary problems in creating an accurate model of their department. First, only a few people, usually the department webmasters, can add information to the website. Second, it is difficult to enable website users to be informed of changes to the website that might be of interest to them. These two problems decrease the accuracy of the model and hamper its effectiveness in alerting users of changes to the website. As a result, user awareness of the department is also decreased. The Collaborative Educational Website (CLEW) is a Java web application intended to support accurate modeling of a collegiate department. CLEW is designed to solve the traditional collegiate department website's two main problems. First, it provides interactive services which allow users to add various kinds of information to the website. Second, CLEW addresses the notification problem by providing tailored email notifications of changes to the website. CLEW was developed by a Software Engineering class in the Information and Computer Science Department at the University of Hawaii at Manoa. My role in this development as project leader was to design and implement the framework for the system. CLEW currently contains approximately 28,000 lines of Java code and upwards of 500 web pages. In the Spring 2003 semester, CLEW replaced the existing Information and Computer Science Department website.
I evaluated CLEW to measure its effectiveness as a model of the department using pre- and post-release questionnaires. I also evaluated usage data of the CLEW system to assess the functionality provided by CLEW. If CLEW provides a more accurate model of a collegiate department, then the next step is to provide the CLEW framework to other collegiate departments worldwide. It is my hope that the users of CLEW will get a clue about their department!
FileICS2003-05-02: Improving software quality through extreme coverage with JBlanket — by Henri Casanova — last modified May 22, 2008 06:05 AM
Unit testing is an important part of software testing that aids in the discovery of bugs sooner in the software development process. Extreme Programming (XP), and its Test First Design technique, relies so heavily upon unit tests that the first code implemented is made up entirely of test cases. Furthermore, XP considers a feature to be completely coded only when all of its test cases pass. However, passing all test cases does not necessarily mean the test cases are good. Extreme Coverage (XC) is a new approach that helps to assess and improve the quality of software by enhancing unit testing. It extends the XP requirement that all test cases must pass with the requirement that all defect-prone testable methods must be invoked by the tests. Furthermore, a set of flexible rules is applied to XC to make it as attractive and light-weight as unit testing is in XP. One example rule is to exclude all methods containing one line of code from analysis. I designed and implemented a new tool, called JBlanket, that automates the XC measurement process similar to the way that JUnit automates unit testing. JBlanket produces HTML reports similar to JUnit reports which inform the user about which methods need to be tested next. In this research, I explore the feasibility of JBlanket, the amount of effort needed to reach and maintain XC, and the impact that knowledge of XC has on system implementation through deployment and evaluation in an academic environment. Results show that most students find JBlanket to be a useful tool in developing their test cases, and that knowledge of XC did influence the manner in which students implemented their systems. However, more studies are needed to conclude precisely how much effort is needed to reach and maintain XC. This research lays the foundation for future research directions. One direction involves increasing its flexibility and value by expanding and refining the rules of XC.
Another direction involves tracking XC behavior to find out when it is and is not applicable.
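The Extreme Coverage criterion described above — all test cases pass, and all non-trivial testable methods are invoked by the tests — can be sketched in a few lines. This is an illustrative sketch, not JBlanket's actual implementation; the function name and data shapes are hypothetical, and only the one-line-method exclusion rule mentioned in the abstract is applied.

```python
# Illustrative sketch (hypothetical names, not JBlanket's actual API):
# the Extreme Coverage (XC) criterion -- every test must pass, and every
# non-trivial method must be invoked by the test suite. The example rule
# from the abstract excludes one-line methods from the requirement.

def extreme_coverage_ok(test_results, methods, invoked):
    """
    test_results: list of booleans, one per test case (True = passed)
    methods:      dict mapping method name -> lines of code in the method
    invoked:      set of method names exercised by the test suite
    """
    if not all(test_results):                 # XP requirement: all tests pass
        return False
    # XC rule: only methods with more than one line of code must be covered
    non_trivial = {name for name, loc in methods.items() if loc > 1}
    return non_trivial <= invoked             # all non-trivial methods invoked

methods = {"getX": 1, "computeTotal": 12, "parseInput": 8}
ok = extreme_coverage_ok([True, True], methods, {"computeTotal", "parseInput"})
# getX is one line, so it is excluded from the coverage requirement
```

A failing test, or any non-trivial method left uninvoked, makes the check fail, mirroring the "100 percent pass, 100 percent of non-trivial methods exercised" definition in the abstract.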
FileICS2003-05-01: Beyond the Personal Software Process: Metrics collection and analysis for the differently disciplined – P. Johnson, H. Kou, J. Agustin, C. Chan, C. Moore, J. Miglani, S. Zhen, W. Doane — by Henri Casanova — last modified May 22, 2008 06:05 AM
Pedagogies such as the Personal Software Process (PSP) shift metrics definition, collection, and analysis from the organizational level to the individual level. While case study research indicates that the PSP can provide software engineering students with empirical support for improving estimation and quality assurance, there is little evidence that many students continue to use the PSP when no longer required to do so. Our research suggests that this “PSP adoption problem” may be due to two problems: the high overhead of PSP-style metrics collection and analysis, and the requirement that PSP users “context switch” between product development and process recording. This paper overviews our initial PSP experiences, our first attempt to solve the PSP adoption problem with the LEAP system, and our current approach called Hackystat. This approach fully automates both data collection and analysis, which eliminates overhead and context switching. However, Hackystat changes the kind of metrics data that is collected, and introduces new privacy-related adoption issues of its own.
FileICS2002-12-04: Configuration Management and Hackystat: Initial steps to relating organizational and individual development – C. Tomosada, B. Leung — by Henri Casanova — last modified May 22, 2008 06:05 AM
Hackystat is a software development metrics collection tool that focuses on individual developers. Hackystat is able to provide a developer with a personal analysis of his or her unique processes. Source code configuration management (SCM) systems, on the other hand, are a means of storage for source code in a development community and serve as a controller for what each individual may contribute to the community. We created a Hackystat sensor for CVS (an SCM system) in the hopes of bridging the gap between these two very different, yet related, software applications. It was our hope to use the data we collected to address the issue of development conflicts that often arise in organizational development environments. We found, however, that neither application, Hackystat nor CVS, could be easily reconfigured to our needs.
FileICS2002-12-03: Comparing personal project metrics to support process and product improvement – C. Aschwanden, A. Kagawa — by Henri Casanova — last modified May 22, 2008 06:05 AM
Writing high quality software with a minimum of effort is an important thing to learn. Various personal metric collection processes exist, such as PSP and Hackystat. However, using the personal metric collection processes gives only a partial indication of how a programmer stands amongst his peers. Personal metrics vary greatly amongst programmers, and it is not always clear what is the “correct” way to develop software. This paper compares personal programming characteristics of students in a class environment. Metrics, such as CK metrics, have been analyzed and compared against a set of similar students in an attempt to find the correct or accepted value for these metrics. It is our belief that programmers can gain as much, if not more, information from comparing their personal metrics against other programmers. Preliminary results show that people with more experience in programming produce different metrics than those with less.
FileICS2002-12-02: You can’t even ask them to push a button: Toward ubiquitous, developer-centric, empirical software engineering – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Collection and analysis of empirical software project data is central to modern techniques for improving software quality, programmer productivity, and the economics of software project development. Unfortunately, barriers surrounding the cost, quality, and utility of empirical project data hamper effective collection and application in many software development organizations. This paper describes Hackystat, an approach to enabling ubiquitous collection and analysis of empirical software project data. The approach rests on three design criteria: data collection and analysis must be developer-centric rather than management-centric; it must be in-process rather than between-process; and it must be non-disruptive—it must not require developers to interrupt their activities to collect and/or analyze data. Hackystat is being implemented via an open source, sensor and web service based architecture. After a developer instruments their commercial development environment tools (such as their compiler, editor, version control system, and so forth) with Hackystat sensors, data is silently and unobtrusively collected and sent to a centralized web service. The web service runs analysis mechanisms over the data and sends email notifications back to a developer when “interesting” changes in their process or product occur. Our research so far has yielded an initial operational release in daily use with a small set of sensors and analysis mechanisms, and a research agenda for expansion in the tools, the sensor data types, and the analyses. Our research has also identified several critical technical and social barriers, including: the fidelity of the sensors; the coverage of the sensors; the APIs exposed by commercial tools for instrumentation; and the security and privacy considerations required to avoid adoption problems due to the spectre of “Big Brother”.
FileICS2002-12-02: Most active file measurement in Hackystat – H. Kou, X. Xu — by Henri Casanova — last modified May 22, 2008 06:05 AM
Hackystat, an automated metric collection and analysis tool, adopts the “Most Active File” measurement in five-minute time chunks to represent the developers’ effort. This measurement is validated internally in this report. The results show that effort measured with larger time chunk sizes has a highly linear relationship to effort measured with the standard time chunk size (1 minute), that the percentage of missed effort to total effort is very low with the five-minute chunk size, and that the relative ranking with respect to the effort of the active files changes only slightly.
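A minimal sketch of the “Most Active File” measurement described above (hypothetical names and data, not Hackystat's actual code): activity events are grouped into fixed-size time chunks, and each chunk's effort is credited to the file with the most events in that chunk.

```python
# Illustrative sketch (not Hackystat's actual implementation): the "Most
# Active File" measurement. Editor activity events are grouped into
# fixed-size time chunks, and the file with the most events in each chunk
# is credited with that chunk of effort.

from collections import Counter

CHUNK_SECONDS = 5 * 60  # the five-minute chunk size used in the report

def most_active_files(events, chunk_seconds=CHUNK_SECONDS):
    """events: list of (timestamp_seconds, filename) activity records."""
    chunks = {}
    for ts, filename in events:
        chunks.setdefault(ts // chunk_seconds, Counter())[filename] += 1
    # Credit each chunk to its single most active file
    return {chunk: counts.most_common(1)[0][0]
            for chunk, counts in chunks.items()}

events = [
    (10, "Foo.java"), (40, "Foo.java"), (90, "Bar.java"),   # chunk 0
    (310, "Bar.java"), (320, "Bar.java"),                    # chunk 1
]
active = most_active_files(events)   # {0: "Foo.java", 1: "Bar.java"}
```

Varying chunk_seconds is the knob the report's internal validation turns: effort totals and file rankings computed with larger chunks are compared against those from the 1-minute standard.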
FileICS2002-12-01: JBlanket: Support for Extreme Coverage in Java Unit Testing – J. Agustin — by Henri Casanova — last modified May 22, 2008 06:05 AM
Unit testing is a tool commonly used to ensure reliability in software development and can be applied to the software development process as soon as core functionality of a program is implemented. In conventional unit testing, to properly design unit tests programmers need to have access to specifications and source code. However, this is not possible in Extreme Programming (XP), where tests are created before any feature of a system is ever implemented. XP’s approach does not lead to improper testing, but instead to a different approach to testing. JBlanket is a tool developed in the Collaborative Software Development Laboratory (CSDL) at the University of Hawai’i (UH) to assist these kinds of “unconventional” testing. It calculates method-level coverage in Java programs, a granularity coarse enough that programmers can not only ensure that all of their unit tests pass, but can also test all of their currently implemented methods. Unit testing where 100 percent of all unit tests must pass and which also exercises 100 percent of all non-trivial implemented methods is called Extreme Coverage. This research attempts to show that Extreme Coverage is useful in developing quality software.
FileICS2002-07-01: Supporting development of highly dependable software through continuous, automated, in-process, and individualized software measurement validation, P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Highly dependable software is, by nature, predictable. For example, one can predict with confidence the circumstances under which the software will work and the circumstances under which it will fail. Empirically-based approaches to creating predictable software are based on two assumptions: (1) historical data can be used to develop and calibrate models that generate empirical predictions, and (2) there exist relationships between internal attributes of the software (i.e. immediately measurable process and product attributes such as size, effort, defects, complexity, and so forth) and external attributes of the software (i.e. abstract and/or non-immediately measurable attributes, such as “quality”, the time and circumstances of a specific component’s failure in the field, and so forth). Software measurement validation is the process of determining a predictive relationship between available internal attributes and correspondingly useful external attributes, and the conditions under which this relationship holds. This report proposes research whose general objective is to design, implement, and validate software measures within a development infrastructure that supports the development of highly dependable software systems. The measures and infrastructure are designed to support dependable software development in two ways: (1) they will support identification of modules at risk of being fault-prone, enabling more efficient and effective allocation of quality assurance resources, and (2) they will support incremental software development through continuous monitoring, notifications, and analyses. Empirical assessment of these methods and measures during use on the Mission Data System project at Jet Propulsion Laboratory will advance the theory and practice of dependable computing and software measurement validation, and provide new insight into the technological and methodological problems associated with the current state of the art.
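As a toy illustration of measurement validation — determining whether an internal attribute predicts an external one — the following sketch checks the correlation between module size and field defects on hypothetical data. The numbers and the use of Pearson correlation here are illustrative assumptions, not taken from the report.

```python
# Illustrative sketch (hypothetical data, not from the report): software
# measurement validation as testing whether an internal attribute
# (module size) predicts an external attribute (field defects).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-module measurements: size in KLOC, defects found in field
sizes   = [1.2, 3.4, 2.1, 5.6, 4.3, 0.8]
defects = [2,   7,   4,   11,  9,   1]

r = pearson_r(sizes, defects)
# A strong correlation (|r| near 1) would support using size to flag
# fault-prone modules; the validation question is whether the relationship
# holds in-process, for this project, under these conditions.
```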
FileICS2002-06-01: Lessons learned from VCommerce: A virtual environment for interdisciplinary learning about software entrepreneurship – P. Johnson, M. Moffett, B. Pentland — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Virtual Commerce (VCommerce) simulation environment provides a framework within which students can develop internet-based businesses. Unlike most entrepreneurship simulation games, the objective of VCommerce is not to maximize profits. The environment, which is designed for use in interdisciplinary classroom settings, provides an opportunity for students with different backgrounds to build (virtual) businesses together. Elements of VCommerce, such as its open-ended business model and product; significant technical depth; external players; and severe time constraints combine to create a surprisingly realistic and effective learning experience for students in both computer science and management. This article overviews the VCommerce environment and our lessons learned from using it at both the University of Hawaii and Michigan State University.
FileICS2002-05-02: Improving the dependability and predictability of JPL/MDS software through low-overhead validation of software process and product metrics – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This white paper presents information regarding a proposed collaboration between the Collaborative Software Development Laboratory, the Mission Data Systems group at Jet Propulsion Laboratory, and the Center for Software Engineering at University of Southern California. The proposed collaboration would be funded through grants from the NSF/NASA Highly Dependable Computing and Communication Systems Research (HDCCSR) program.
FileICS2002-05-01: The design, implementation, and evaluation of INCA: an automated system for approval code allocation – J. Miglani — by Henri Casanova — last modified May 22, 2008 06:05 AM
The ICS department of the University of Hawaii has faced problems surrounding approval code distribution as its enrollment has increased. The manual system for approval code allocation was time-consuming, ineffective, and inefficient. INCA is designed to automate the task of approval code allocation, improve the quality of course approval decisions, and decrease the administrative overhead involved in those decisions. Based upon informal feedback from department administrators, it appears that INCA reduces their overhead and makes their life easier. What are the old problems that are solved by INCA? Does INCA introduce new kinds of problems for the administrator? What about the students? Are they completely satisfied with the system? In what ways does the system benefit the department as a whole? This thesis discusses the design, implementation, and evaluation of INCA. It evaluates INCA from the viewpoint of the administrator, the students, and the department. An analysis of emails received at the uhmics@hawaii.edu account indicates that INCA reduces administrative overhead. The results of the user survey show that three quarters of students believe INCA improved the predictability of course approvals and their understanding of course requirements. They prefer INCA to the old method of requesting approval codes by email. Analysis of the INCA database provided course demand information and student statistics useful to the department. This evaluation of INCA from three different perspectives provides useful insights for future improvement of INCA and for improving the student experience with academic systems in general.
FileICS2002-03-01: Bringing the Faulkes Telescope to Classrooms in Hawaii – B. Giebink — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Faulkes Telescope (FT), currently under construction on the summit of Haleakala, Maui, Hawaii, will provide data from celestial observations to schools in the United Kingdom and Hawaii. This project, with its unique goal of building a telescope to be used exclusively for educational purposes, is a joint venture between groups in the United Kingdom and Hawaii. Teachers and students will be able to download data that has been collected by the telescope on a previous occasion or sign up to have the telescope collect data at a specific time for them. Current plans call for data from the telescope to be delivered to classrooms in the form of raw data files and images from processed raw data files. In addition to sharing use of the telescope, part of the agreement between the UK and Hawaii groups provides for the UK group to share all software developed for the project with the Hawaii group. However, though a system for transporting images to schools is being developed for the UK side, at present there is no corresponding system for Hawaii. Also, at this point neither the British nor Hawaii sides have a definite system for storing and transporting raw data files. A first step, therefore, toward making the FT useful for students and teachers in Hawaii is to develop a plan for a complete system to archive and transport telescope data. It is anticipated that a plan for this system will include: 1) a specification of the required hardware components, 2) a description of how data will move in and out of the system, 3) a definition of the data pathway within the system, and 4) a description of the data storage requirements (i.e. database). The development of each of the components of the system will consist of a discussion of available options followed by a suggestion of the best choice of action. Development of this system is anticipated to be the topic for a directed reading/research project to be undertaken during spring 2002.
After the system has been clearly defined there are some additional questions to be answered. Among the more interesting aspects is the question of how to present data from the telescope in the most useful and effective manner to teachers and students.
FileICS2001-12-01: Vendor Relationship Management: Re-engineering the business process through B2B infrastructure to accelerate the growth of small businesses in geographically isolated areas – J. Agustin, W. Albritton — by Henri Casanova — last modified May 22, 2008 06:05 AM
Instead of limiting a business to the local populace, the World Wide Web gives global access to all companies that have made the transition online. Ideally, the Internet seems to offer vast, untapped markets, lowers the costs of reaching these markets, and frees businesses from geographical constraints. Applying this to Hawai’i, small companies can now sell their products in the expanding global marketplace, instead of restricting themselves to an island economy. The goal of research on the Vendor Relationship Management (VRM) System is to explore the requirements for new business process models and associated technological infrastructure for small local businesses in Hawaii that wish to exploit the global reach of the Internet. In order to understand the requirements and potential of this approach, we met with different groups of people, including the host of a virtual mall, a financial service provider, two courier services, and several local companies. The interface of the VRM system includes both a vendor and a host side. The host side is used by the virtual mall company to send customer orders to the various vendors. It can also be used to create and edit vendor company information, create and edit vendor product information, and enter a contact email address. The vendor side is used by the numerous vendors to receive orders, confirm that orders have been sent, view customer information, create and edit product information, and create and edit contact information. After creating the first prototype, several experts gave their critiques of the system. Based on their critiques, we came up with several possible directions for future research.
FileICS2001-11-01: Project Hackystat: Accelerating adoption of empirically guided software development through non-disruptive, developer-centric, in-process data collection and analysis – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Collection and analysis of empirical software project data is central to modern techniques for improving software quality, programmer productivity, and the economics of software project development. Unfortunately, effective collection and analysis of software project data is rare in mainstream software development. Prior research suggests that three primary barriers are: (1) cost: gathering empirical software engineering project data is frequently expensive in resources and time; (2) quality: it is often difficult to validate the accuracy of the data; and (3) utility: many metrics programs succeed in collecting data but fail to make that data useful to developers. This report describes Hackystat, a technology initiative and research project that explores the strengths and weaknesses of a developer-centric, in-process, and non-disruptive approach to validation of empirical software project data collection and analysis.
FileICS2001-07-01: Lightweight Disaster Management Training and Control, M. Staver — by Henri Casanova — last modified May 22, 2008 06:05 AM
Disaster management is increasingly a global enterprise for international organizations, governmental institutions, and arguably individuals. The tempo at which information is collected and disseminated during natural and man-made disasters paces the rate and effectiveness of relief efforts. As the Internet becomes a ubiquitous platform for sharing information, a browser-based application can provide disaster managers a lightweight solution for training and control. A heavyweight solution might include dedicated communications, real-time command and control software and hardware configurations, and dedicated personnel. In contrast, a lightweight solution requires trained personnel with Internet access to a server via computers or hand-held devices. Tsunami Sim provides asynchronous situational awareness with an interactive Geographic Information System (GIS). Tsunami Sim is not capable of providing real-time situational awareness, nor is it intended to replace or compete with heavyweight solutions developed for that purpose. Rather, Tsunami Sim will enhance the disaster managers’ abilities to train for and control disasters in regions where heavyweight solutions are impractical. For distributed training, Tsunami Sim will provide deterministic and stochastic scenarios of historical and fictional disasters. Tsunami Sim will be an open-source, Java application implemented for maintainability and extensibility. United States Pacific Command (PACOM), located at Camp Smith, Hawai’i, will enable Tsunami Sim validation and assessment.
FileICS2001-06-02: Hackystat Developer Release Installation Guide – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document provides an overview of the Hackystat developer distribution. This includes the structure of the source code, the Java-based component technologies Hackystat is built on (including Tomcat, Ant, Soap, Xerces, Cocoon, Java Mail, JUnit, Http Unit, JDOM, and Jato), configuration instructions, testing, and frequently asked questions. An updated version of this document is provided in the actual developer release package; this technical report is intended to provide easy access to near-current instructions for those who are evaluating the system and would like to learn more before downloading the entire package.
FileICS2001-06-01: Hackystat Design Notes – P. Johnson, C. Moore, J. Miglani — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document collects together a series of design notes concerning Hackystat, a system for automated collection and analysis of software engineering data. Some of the design notes include: Insights from the Presto Development Project: Requirements from the IDE for automated data collection; A roundtable discussion of Hackystat; Change management in Hackystat; Validated idle time detection; and Defect collection and analysis in Hackystat.
FileICS2001-05-02: An artificial neural network for recognition of simulated dolphin whistles – T. Burgess — by Henri Casanova — last modified May 22, 2008 06:05 AM
It is known that dolphins are capable of understanding 200 “word” vocabularies with sentence complexity of three or more “words”, where words consist of audio tones or hand gestures. An automated recognition method of words, where a word is a defined whistle within a predetermined acceptable degree of variance, could allow words to be both easily reproducible by dolphins and identifiable by humans. We investigate a neural network to attempt to distinguish four artificially generated whistles from one another and from common underwater environmental noises, where a whistle consists of four variations of a fundamental whistle style. We play these whistle variations into the dolphins’ normal tank environment and then record from a separate tank hydrophone. This results in slight differences for each whistle variation’s spectrogram, the complete collection of which we use to form the neural network training set. For a single whistle variation, the neural network demonstrates strong output node values, greater than 0.9 on a scale of 0 to 1. However, for combinations of “words”, the network exhibits poor training performance and an inability to distinguish between words. To validate this, we used a test set of 41 examples, of which only 22 were correctly classified. This result suggests that an appropriately trained back-propagation neural network using spectrographic analysis as inputs is a viable means for very specific whistle recognition; however, a large degree of whistle variation will dramatically lower the performance of the network, past that required for acceptable recognition.
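The recognition approach described above can be sketched as a small back-propagation network over spectrogram-style feature vectors. Everything here (the 16-bin features, the network size, the training schedule, and the synthetic whistle classes) is a hypothetical stand-in for the paper's recorded data and actual architecture:

```python
import math
import random

random.seed(0)

# Synthetic 16-bin "spectrogram" vectors: each whistle class has a
# dominant frequency bin plus low-level noise (hypothetical data).
def sample(cls):
    x = [random.gauss(0.0, 0.1) for _ in range(16)]
    x[2 if cls == 0 else 12] += 1.0  # class-specific dominant bin
    return x

data = [(sample(i % 2), float(i % 2)) for i in range(100)]

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units
W1 = [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(16)]
b1 = [0.0] * H
W2 = [random.gauss(0, 0.5) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sig(sum(x[i] * W1[i][j] for i in range(16)) + b1[j]) for j in range(H)]
    return h, sig(sum(h[j] * W2[j] for j in range(H)) + b2)

for _ in range(300):  # plain stochastic back-propagation, MSE loss
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)          # output-layer delta
        d_h = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            W2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(16):
                W1[i][j] -= lr * d_h[j] * x[i]
        b2 -= lr * d_out

acc = sum((forward(x)[1] > 0.5) == (y > 0.5) for x, y in data) / len(data)
```

With clearly separated classes the sigmoid output nodes saturate toward 0 or 1, mirroring the strong single-whistle results reported above; heavily overlapping variations would degrade accuracy just as the abstract describes.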
FileICS2001-05-01: The Hardware Subroutine Approach to Developing Custom Co-Processors – M. Waterson — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Hardware Subroutine Approach to developing a reconfigurable, custom co-processor is an architecture and a process for implementing a hardware subsystem as a direct replacement for a subroutine in a larger program. The approach provides a framework that helps the developer analyze the trade-offs of using hardware acceleration, and a design procedure to guide the implementation process. To illustrate the design process a HWS implementation of a derivative estimation subroutine is described. In this context I show how key performance parameters of the HWS can be estimated in advance of complete implementation and decisions made regarding the potential benefit of implementation alternatives to program performance improvement. Performance of the actual hardware coprocessor is compared to the software-only implementation and to estimates developed during the design process.
FileICS2001-04-02: Inca Software Requirement Specification – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Inca is a system designed to improve the efficiency and effectiveness of course approval request processing. This software requirements specification details: (a) the traditional manual process used by the ICS department for course approval request processing, (b) the 12 basic requirements Inca must satisfy and the fine-grained rules for prioritization of requests, (c) several usage scenarios, (d) n-tier architectural issues for an Enterprise JavaBeans implementation, and (e) miscellaneous requirements including authentication, data file formats, special topics, and so forth.
FileICS2001-04-01: Inca Business Plan – J. Miglani — by Henri Casanova — last modified May 22, 2008 06:05 AM
Inca is an Enterprise JavaBeans-based technology that provides Internet-based allocation of course approval codes. This business plan explores the commercial potential of this technology. The Inca business plan was selected as a finalist in the 2001 Business Plan Competition of the University of Hawaii College of Business Administration.
FileICS2001-01-01: VCommerce Administrator Guide – M. Moffett, B. Pentland, P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Provides administrative support for installing, configuring, and running the VCommerce simulation.
FileICS2000-12-02: The Design, Development, and Evaluation of VCommerce: A Virtual Environment to Support Entrepreneurial Learning – M. Moffett — by Henri Casanova — last modified May 22, 2008 06:05 AM
This thesis describes VCommerce, a virtual environment whose goal is to significantly increase students’ knowledge of the process involved with starting a high tech company, and through hands-on experience enhance their confidence in their ability to start such a company. The thesis presents the design and implementation of the environment, and a case study of its use in a graduate course comprised of 50 students from the computer science, business school, engineering, and other departments. A course survey and fourteen post-semester interviews show that students felt the class was extremely effective in teaching entrepreneurship concepts, and that they learned valuable lessons about managing an Internet startup.
FileICS2000-12-01: Empirically Guided Software Effort Guesstimation – P. Johnson, C. Moore, J. Dane, R. Brewer — by Henri Casanova — last modified May 22, 2008 06:05 AM
For several years, we have pursued an initiative called Project LEAP, whose goal is the improvement of individual developers through lightweight, empirical, anti-measurement dysfunction, and portable software engineering tools and methods. During the Fall of 1999, we performed a case study using the LEAP toolkit in a graduate software engineering class. One of the goals of the study was to evaluate the various analytical estimation methods made available by the toolkit. We were curious as to whether a single method would prove most accurate, or whether the most accurate method would depend upon the type of project or the specific developer. To our surprise, we found that, in most cases, the developer-generated “guesstimates” were more accurate than the analytical estimates. We also found that the PROBE method of the Personal Software Process, perhaps the most widely publicized analytical approach to personal effort estimation, was the sixth most accurate method. Finally, we found that access to a range of analytical estimation methods appeared to be useful to developers in generating their guesstimates and improving them over time.
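The method comparison described above can be sketched by scoring each estimation source with a standard accuracy measure, such as the mean magnitude of relative error (MMRE), and ranking the sources. The method names and the estimate/actual pairs below are invented for illustration, not data from the study:

```python
# Hypothetical (estimate, actual) effort pairs in hours for three
# estimation sources; names and numbers are invented for illustration.
history = {
    "guesstimate":       [(10, 11), (6, 5), (8, 9)],
    "linear_regression": [(12, 11), (4, 5), (7, 9)],
    "probe_like":        [(15, 11), (3, 5), (12, 9)],
}

def mmre(pairs):
    # Mean magnitude of relative error: mean of |estimate - actual| / actual.
    return sum(abs(est - act) / act for est, act in pairs) / len(pairs)

# Rank sources from most to least accurate (lowest MMRE first).
ranked = sorted(history, key=lambda name: mmre(history[name]))
```

Ranking per project type or per developer, as the study did, would simply partition the pairs before scoring.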
FileICS2000-11-01: A Comparative Review of LOCC and CodeCount – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This paper provides one review of the comparative strengths and weaknesses of LOCC and CodeCount, two tools for calculating the size of software source code. The next two sections provide quick overviews of CodeCount and LOCC. The final section presents the perceived strengths and weaknesses of the two tools. A caveat: although I am attempting to be objective in this review, I have in-depth knowledge of LOCC and only very superficial knowledge of CodeCount. Comments and corrections solicited and welcomed.
FileICS2000-09-01: Aligning the financial services, fulfillment distribution infrastructure, and small business sectors in Hawaii through B2B technology innovation – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document is a proposal to the University of Hawaii New Economy Research Grant Program. It describes a study intended to discover business-to-business technologies that have the potential to improve the efficiency and reduce the cost for small Hawaiian businesses that produce physical products and desire to expand into national and international markets.
FileICS2000-08-02: JavaJam: Supporting Collaborative Review and Improvement of Open Source Software – M. Hodges — by Henri Casanova — last modified May 22, 2008 06:05 AM
Development of Open Source Software is in many cases a collaborative effort, often by geographically dispersed team members. The problem for members is to efficiently review documentation and source code and to collect and share comments and annotations that will lead to improvements in performance, functionality, and quality. javaJAM is a collaborative tool for assisting with the development of Open Source Software. It generates integrated documentation and source code presentations to be viewed over the web. More importantly, javaJAM provides an interactive environment for navigating documentation and source code and for posting annotations. javaJAM creates relationships between sections of documentation, source, and related comments and annotations to provide the necessary cross-referencing to support quick and efficient reviews. javaJAM was evaluated in a classroom setting. Student teams posted projects for team review using javaJAM and found it to be an easy way to review their projects and post their comments.
FileICS2000-08-01: Investigating Individual Software Development: An Evaluation of the Leap Toolkit – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
Software developers work too hard and yet do not get enough done. Developing high quality software efficiently and consistently is a very difficult problem. Developers and managers have tried many different solutions to address this problem. Recently their focus has shifted from the software organization to the individual software developer. For example, the Personal Software Process incorporates many of the previous solutions while focusing on the individual software developer. This thesis presents the Leap toolkit, which combines ideas from prior research on the Personal Software Process, Formal Technical Review, and my experiences building automated support for software engineering activities. The Leap toolkit is intended to help individuals in their efforts to improve their development capabilities. Since it is a light-weight, flexible, powerful, and private tool, it provides a novel way for developers to gain valuable insight into their own development process. The Leap toolkit also addresses many measurement and data issues involved with recording any software development process. The main thesis of this work is that the Leap toolkit provides a novel tool that allows developers and researchers to collect and analyze software engineering data. To investigate some of the issues of data collection and analysis, I conducted a case study of 16 graduate students in an advanced software engineering course at the University of Hawaii, Manoa. The case study investigated: (1) the relationship between the Leap toolkit’s time collection tools and “collection stage” errors; and (2) different time estimation techniques supported by the Leap toolkit. The major contributions of this research include (1) the LEAP design philosophy; (2) the Leap toolkit, which is a novel tool for individual developer improvement and software engineering research; and (3) the insights from the case study about collection overhead, collection error, and project estimation.
FileICS2000-06-01: Improving Problem-Oriented Mailing List Archives with MCS – R. Brewer — by Henri Casanova — last modified May 22, 2008 06:05 AM
Developers often use electronic mailing lists when seeking assistance with a particular software application. The archives of these mailing lists provide a rich repository of problem-solving knowledge. Developers seeking a quick answer to a problem find these archives inconvenient, because they lack efficient searching mechanisms and retain the structure of the original conversational threads, which are rarely relevant to the knowledge seeker. We present a system called MCS which improves mailing list archives through a process called condensation. Condensation involves several tasks: extracting only messages of longer-term relevance, adding metadata to those messages to improve searching, and potentially editing the content of the messages when appropriate to clarify. The condensation process is performed by a human editor (assisted by a tool), rather than by an artificial intelligence (AI) system. We describe the design and implementation of MCS, and compare it to related systems. We also present our experiences condensing a 1428-message mailing list archive to an archive containing only 177 messages (an 88% reduction). The condensation required only 1.5 minutes of editor effort per message. The condensed archive was adopted by the users of the mailing list.
FileICS2000-05-01: Lessons Learned from Teaching Reflective Software Engineering using the Leap Toolkit – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
This paper presents our experiences using the Leap toolkit, an automated tool to support personal developer improvement. The Leap toolkit incorporates ideas from the PSP and group review. It relaxes some of the constraints in the PSP and reduces process overhead. Our lessons learned include: (1) Collecting data about software development is useful; (2) Leap enables users to accurately estimate size and time in a known domain; (3) Many users feel their programming skills improve primarily due to practice, not their method; (4) To reduce measurement dysfunction, make the results less visible; (5) Partial defect collection and analysis is still useful; (6) Tool support should require few machine resources; and (7) Experience may lead to overconfidence.
FileICS2000-03-01: Improving Mailing List Archives Through Condensation – R. Brewer — by Henri Casanova — last modified May 22, 2008 06:05 AM
Searching the archives of electronic product support mailing lists often provides unsatisfactory results for users looking for quick solutions to their problems. Archives are inconvenient because they are too voluminous, lack efficient searching mechanisms, and retain the original thread structure which is not relevant to knowledge seekers. I present MCS, a system which improves mailing list archives through condensation. Condensation involves omitting redundant or useless messages, and adding meta-level information to messages to improve searching. The condensation process is performed by a human assisted by an editing tool. I describe the design and implementation of MCS, and compare it to related systems. I also present my experiences condensing a 1428-message mailing list archive to an archive containing only 177 messages (an 88% reduction). The condensation required only 1.5 minutes of editor effort per message. The condensed archive was adopted by the users of the mailing list.
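The condensation process can be sketched as a filter over the raw archive that keeps only editor-selected messages and attaches search metadata. The messages, the `keep` flag standing in for the editor's judgment, and the trivial keyword tagger below are all hypothetical:

```python
# Hypothetical mailing-list messages; the "keep" flag stands in for the
# human editor's relevance judgment described in the abstract.
archive = [
    {"subject": "Re: install fails", "body": "Try setting JAVA_HOME.", "keep": True},
    {"subject": "Re: install fails", "body": "Thanks, that worked!", "keep": False},
    {"subject": "meeting tomorrow?", "body": "See you at 10.", "keep": False},
    {"subject": "Re: NPE on startup", "body": "Known bug, patch attached.", "keep": True},
]

def condense(messages, tagger):
    """Keep only editor-selected messages and attach search metadata."""
    return [{**m, "tags": tagger(m)} for m in messages if m["keep"]]

# A trivial keyword tagger; MCS's metadata is editor-supplied, not automatic.
def tagger(m):
    text = (m["subject"] + " " + m["body"]).lower()
    return sorted(w for w in ("bug", "install", "patch") if w in text)

condensed = condense(archive, tagger)
reduction = 1 - len(condensed) / len(archive)  # fraction of messages removed
```

The added tags give the searchable handle that raw thread structure lacks; the human-in-the-loop design keeps the filtering judgment where an AI system of the time could not reliably make it.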
FileICS2000-02-01: A Proposal for VCommerce: An Internet Entrepreneurship Environment – M. Moffett — by Henri Casanova — last modified May 22, 2008 06:05 AM
The document proposes the development of an Internet entrepreneurship simulation environment called VCommerce for the University of Hawaii Aspect Technology Grant program.
FileICS2000-01-02: VCommerce Example Business Plan: Pizza Portal – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document provides an example business plan for the VCommerce simulation. It details the design and implementation of a hypothetical business called “Pizza Portal”.
FileICS2000-01-01: VCommerce Entrepreneur User Guide – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
VCommerce is intended to provide you with an educational and stimulating introduction to the initial, “startup” phases of entrepreneurial activity in the online, Internet-enabled economy. VCommerce is designed to reward those who can innovate, explore market niches, design viable businesses within the context of the VCommerce world, exploit the information resources of the Internet for business planning, react appropriately to VCommerce market data, and develop effective partnerships with other people with complementary skills. This user guide provides an overview of the VCommerce process.
FileICS1999-12-03: Project LEAP: Lightweight, Empirical, Anti-measurement dysfunction, and Portable Software Developer Improvement – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Project LEAP investigates the use of lightweight, empirical, anti-measurement dysfunction, and portable approaches to software developer improvement. This document provides a one-page progress report on Project Leap for inclusion in the “Millennium” issue of Software Engineering Notes.
FileICS1999-12-02: Modular Program Size Counting – J. Dane — by Henri Casanova — last modified May 22, 2008 06:05 AM
Effective program size measurement is difficult to accomplish. Factors such as program implementation language, programmer experience and application domain influence the effectiveness of particular size metrics to such a degree that it is unlikely that any single size metric will be appropriate for all applications. This thesis introduces a tool, LOCC, which provides a generic architecture and interface to the production and use of different size metrics. Developers can use the size metrics distributed with LOCC or can design their own metrics, which can be easily incorporated into LOCC. LOCC pays particular attention to the problem of supporting incremental development, where a work product is not created all at once but rather through a sequence of small changes applied to previously developed programs. LOCC requires that developers of new size metrics support this approach by providing a means of comparing two versions of a program. LOCC’s effectiveness was evaluated by using it to count over 50,000 lines of Java code, by soliciting responses to a questionnaire sent to users, and by personal reflection on the process of using and extending it. The evaluation revealed that users of LOCC found that it assisted them in their development process, although there were some improvements which could be made.
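LOCC's pluggable-metric idea can be sketched as an interface where each size metric defines a total measure and, for incremental development, a difference between two versions. The class and method names below are illustrative, not LOCC's actual Java API:

```python
# Illustrative pluggable size-metric interface (names are hypothetical,
# not LOCC's actual Java API).
class SizeMetric:
    def measure(self, text: str) -> int:
        raise NotImplementedError

    def diff(self, old: str, new: str) -> int:
        # Incremental development support: size change between versions.
        return self.measure(new) - self.measure(old)

class NonBlankLines(SizeMetric):
    """One concrete metric: count lines containing non-whitespace."""
    def measure(self, text):
        return sum(1 for line in text.splitlines() if line.strip())

v1 = "int x;\n\nint y;\n"
v2 = "int x;\nint y;\nint z;\n"
metric = NonBlankLines()
total = metric.measure(v2)   # total size of the new version
delta = metric.diff(v1, v2)  # growth since the previous version
```

A developer-supplied metric only has to implement `measure` (and may override `diff` with a smarter version comparison), which is the extension point the thesis requires of new metrics.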
FileICS1999-12-01: A Critical Analysis of PSP Data Quality: Results from a Case Study – P. Johnson, A. Disney — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work. Published studies typically use data collected using the PSP to draw quantitative conclusions about its impact upon programmer behavior and product quality. However, our experience using PSP led us to question the quality of data both during collection and its later analysis. We hypothesized that data quality problems can make a significant impact upon the value of PSP measures—significant enough to lead to incorrect conclusions regarding process improvement. To test this hypothesis, we built a tool to automate the PSP and then examined 89 projects completed by ten subjects using the PSP manually in an educational setting. We discovered 1539 primary errors and categorized them by type, subtype, severity, and age. To examine the collection problem we looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem we developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. We found significant differences for measures such as yield and the cost-performance ratio, confirming our hypothesis. Our results raise questions about the accuracy of manually collected and analyzed PSP data, indicate that integrated tool support may be required for high quality PSP data analysis, and suggest that external measures should be used when attempting to evaluate the impact of the PSP upon programmer behavior and product quality.
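The “impossible combinations of data” category of collection errors can be sketched as simple consistency checks over Time Recording Log entries, for example an entry that ends before it starts, or an interruption longer than the elapsed interval. The log format and rules below are hypothetical and far simpler than the paper's full error taxonomy:

```python
from datetime import datetime

# Hypothetical Time Recording Log entries; the last two contain
# "impossible combination" errors of the kind the study counted.
entries = [
    {"start": "1999-01-04 09:00", "end": "1999-01-04 10:00", "interrupt_min": 15},
    {"start": "1999-01-04 11:00", "end": "1999-01-04 10:30", "interrupt_min": 0},
    {"start": "1999-01-04 13:00", "end": "1999-01-04 13:20", "interrupt_min": 30},
]

FMT = "%Y-%m-%d %H:%M"

def check(entry):
    """Return a list of consistency errors for one log entry."""
    start = datetime.strptime(entry["start"], FMT)
    end = datetime.strptime(entry["end"], FMT)
    errors = []
    if end <= start:
        errors.append("end before start")
    elif entry["interrupt_min"] > (end - start).total_seconds() / 60:
        errors.append("interruption exceeds elapsed time")
    return errors

flagged = [(i, errs) for i, e in enumerate(entries) if (errs := check(e))]
```

Checks of this sort are exactly what integrated tool support can enforce at entry time, which is why the paper argues manual collection invites errors that tools prevent.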
FileICS1999-11-03: Teaching Software Engineering skills with the Leap Toolkit – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Personal Software Process (PSP) teaches software developers many valuable software engineering techniques. Developers learn how to develop high quality software efficiently and how to accurately estimate the amount of effort it will take. To accomplish this, the PSP forces the developer to follow a very strict development model, to manually record time, defect, and size data, and to analyze their data. The PSP appears successful at improving developer performance during the training, yet there are questions concerning long-term adoption rates and the accuracy of PSP data. This paper presents our experiences using the Leap toolkit, an automated tool to support personal developer improvement. The Leap toolkit incorporates ideas from the PSP and group review. It relaxes some of the constraints in the PSP and reduces process overhead. We are using the Leap toolkit in an advanced software engineering course at the University of Hawaii, Manoa.
FileICS1999-11-02: A Case Study Of Defect Detection And Analysis With JWiz – J. Geis — by Henri Casanova — last modified May 22, 2008 06:05 AM
This paper presents a study designed to investigate the occurrence of certain kinds of errors in Java programs using JavaWizard (JWiz), a static analysis mechanism for Java source code. JWiz is a tool that supports detection of certain commonly occurring semantic errors in Java programs. JWiz was used within a research framework designed to reveal (1) knowledge about the kinds of errors made by Java programmers, (2) differences among Java programmers in the kinds of errors made, and (3) potential avenues for improvement in the design and/or implementation of the Java language or environment. We found that all programmers inject a few of the same mistakes into their code, but these are only minor, non-defect-causing errors. We also found that the types of defects injected vary drastically with no correlation to program size or developer experience. Finally, we found that for those developers who make some of the mistakes that JWiz is designed for, JWiz can be a great help, saving significant amounts of time ordinarily spent tracking down defects in testing.
FileICS1999-11-01: Automated Support for Technical Skill Acquisition and Improvement: An Evaluation of the Leap Toolkit – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
Software developers work too hard and yet do not get enough done. Developing high quality software efficiently and consistently is a very difficult problem. Developers and managers have tried many different solutions to address this problem. Recently their focus has shifted from the software organization to the individual software developer. The Personal Software Process incorporates many of the previous solutions while focusing on the individual software developer. I combined ideas from prior research on the Personal Software Process, Formal Technical Review, and my experiences building automated support for software engineering activities to produce the Leap toolkit. The Leap toolkit is intended to help individuals in their efforts to improve their development capabilities. Since it is a light-weight, flexible, powerful, and private tool, it allows individual developers to gain valuable insight into their own development process. The Leap toolkit also addresses many measurement and data issues involved with recording any software development process. The main thesis of this work is that the Leap toolkit provides a more accurate and effective way for developers to collect and analyze their software engineering data than manual methods. To evaluate this thesis I will investigate three claims: (1) the Leap toolkit prevents many important errors in data collection and analysis; (2) the Leap toolkit supports data collection and analyses that are not amenable to manual enactment; and (3) the Leap toolkit reduces the level of “collection stage” errors. To evaluate the first claim, I will show how the design of the Leap toolkit effectively prevents important classes of errors shown to occur in prior related research. To evaluate the second claim, I will conduct an experiment investigating 14 different quantitative time estimation techniques based upon historical size data to show that the Leap toolkit is capable of complex analyses not possible with manual methods. To evaluate the third claim, I will analyze software developers’ data and conduct surveys to investigate the level of data collection errors.
FileICS1999-08-01: Project LEAP: Addressing Measurement Dysfunction in Review – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
The software industry and academia believe that software review, specifically Formal Technical Review (FTR), is a powerful method for improving the quality of software. Computer support for FTR reduces the overhead of conducting reviews for reviewers and managers. Computer support of FTR also allows for the easy collection of empirical measurements of the process and products of software review. These measurements allow researchers or reviewers to gain valuable insights into the review process. After looking closely at review metrics, we became aware of the possibility of measurement dysfunction in formal technical review. Measurement dysfunction is a situation in which the act of measurement affects the organization in a counter-productive fashion, leading to results directly counter to those intended by the organization for the measurement. How can we reduce the threat of measurement dysfunction in software review without losing the benefits of metrics collection? Project LEAP is our attempt to answer this question. This paper presents Project Leap’s approach to the design, implementation, and evaluation of tools and methods for empirically-based, individualized software developer improvement.
FileICS1999-06-01: A controlled experimental study of the Personal Waterslide Process: Results and Interpretations – P. Johnson, A. Mockus, L. Votta — by Henri Casanova — last modified May 22, 2008 06:05 AM
The paper reports on the Personal Waterslide Process, an innovative software engineering technique pioneered during the 1999 annual meeting of the International Software Engineering Research Network in Oulu, Finland.
FileICS1999-05-02: Leap: A “Personal Information Environment” for Software Engineers – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Leap toolkit is designed to provide Lightweight, Empirical, Anti-measurement dysfunction, and Portable approaches to software developer improvement. Using Leap, software engineers gather and analyze personal data concerning time, size, defects, patterns, and checklists. They create and maintain definitions describing their software development procedures, work products, and project attributes, including document types, defect types, severities, phases, and size definitions. Leap also supports asynchronous software review and facilitates integration of this group-based data with individually collected data. The Leap toolkit provides a “reference model” for a personal information environment to support skill acquisition and improvement for software engineers.
FileICS1999-05-01: Project LEAP: Personal Process Improvement for the Differently Disciplined – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
This paper overviews the research motivations for Project Leap.
FileICS1999-01-02: LOCC User Guide – J. Dane — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document describes the installation and use of LOCC. LOCC is a general mechanism for producing one or more measurements of the size of work products. LOCC can produce both the “total” size of a work product, as well as the “difference” in size between successive versions of the same work product.
FileICS1999-01-01: Reflective Software Engineering with the Leap Toolkit – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document describes an empirical, experience-based approach to software engineering at the individual level using the Leap toolkit.
FileICS1998-12-01: JavaWizard User Guide – J. Geis — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document describes the use of JavaWizard, an automated code checker for the Java programming language. The user guide includes directions for installation, command line invocation, and graphical user interface invocation.
FileICS1998-11-01: Investigating Data Quality Problems in the PSP – A. Disney, P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work. Published studies typically use data collected using the PSP to draw quantitative conclusions about its impact upon programmer behavior and product quality. However, our experience using PSP in both industrial and academic settings revealed problems both in collection of data and its later analysis. We hypothesized that these two kinds of data quality problems could make a significant impact upon the value of PSP measures. To test this hypothesis, we built a tool to automate the PSP and then examined 89 projects completed by ten subjects using the PSP manually in an educational setting. We discovered 1539 primary errors and categorized them by type, subtype, severity, and age. To examine the collection problem we looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem we developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. This resulted in significant differences for measures such as yield and the cost-performance ratio, confirming our hypothesis. Our results raise questions about the accuracy of manually collected and analyzed PSP data, indicate that integrated tool support may be required for high quality PSP data analysis, and suggest that external measures should be used when attempting to evaluate the impact of the PSP upon programmer behavior and product quality.
FileICS1998-09-01: Improving Mailing List Archives through Condensation – R. Brewer — by Henri Casanova — last modified May 22, 2008 06:05 AM
Electronic mailing lists are popular Internet information sources. Many mailing lists maintain an archive of all messages sent to the list which is often searchable using keywords. While useful, these archives suffer from the fact that they include all messages sent to the list. Because they include all messages, the ability of users to rapidly find the information they want in the archive is hampered. To solve the problems inherent in current mailing list archives, I propose a process called condensation whereby one can strip out all the extraneous, conversational aspects of the data stream leaving only the pearls of interconnected wisdom. To explore this idea of mailing list condensation and to test whether or not a condensed archive of a mailing list is actually better than traditional archives, I propose the construction and evaluation of a new software system. I name this system the Mailing list Condensation System or MCS. MCS will have two main parts: one which is dedicated to taking the raw material from the mailing list and condensing it, and another which stores the condensed messages and allows users to retrieve them. The condensation process is performed by a human editor (assisted by a tool), not an AI system. While this adds a certain amount of overhead to the maintenance of the MCS-generated archive when compared to a traditional archive, it makes the system implementation feasible. I believe that an MCS-generated mailing list archive maintained by an external researcher will be adopted as an information resource by the subscribers of that mailing list. Furthermore, I believe that subscribers will prefer the MCS-generated archive over existing traditional archives of the mailing list. This thesis will be tested by a series of quantitative and qualitative measures.
FileICS1998-08-01: Data Quality Problems in the Personal Software Process – A. Disney — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Personal Software Process (PSP) is used by software engineers to gather and analyze data about their work and to produce empirically based evidence for the improvement of planning and quality in future projects. Published studies have suggested that adopting the PSP results in improved size and time estimation and in reduced numbers of defects found in the compile and test phases of development. However, personal experience using PSP in both industrial and academic settings caused me to wonder about the quality of two areas of PSP practice: collection and analysis. To investigate this I built a tool to automate the PSP and then examined 89 projects completed by nine subjects using the PSP in an educational setting. I discovered 1539 primary errors and analyzed them by type, subtype, severity, and age. To examine the collection problem I looked at the 90 errors that represented impossible combinations of data and at other less concrete anomalies in Time Recording Logs and Defect Recording Logs. To examine the analysis problem I developed a rule set, corrected the errors as far as possible, and compared the original and corrected data. This resulted in substantial differences for numbers such as yield and the cost-performance ratio. The results raise questions about the accuracy of published data on the PSP and directions for future research.
FileICS1998-05-01: Investigating the Design and Evaluation of Research Web Sites – A. Disney, J. Lee, T. Huynh, J. Saito — by Henri Casanova — last modified May 22, 2008 06:05 AM
The Aziza design group (formerly the 691 Web Development Team) was commissioned by CSDL to implement a new web site. The group was assigned not only to update the entire site, but also to research and investigate the process and life cycle of World Wide Web site development. This research document records the process and products that occurred while updating the CSDL web site. It discusses issues such as the balance between providing information and providing an image of the group, and ways to share research information over the World Wide Web. To support the research data, evaluations by the various users of the site were conducted and are discussed here. This document records our web site design processes, what insights we had about those processes, our findings, and finally, our conclusions.
FileICS1998-04-01: CSDL Web Site Requirements Specification Document – A. Disney, J. Lee, T. Huynh, J. Saito — by Henri Casanova — last modified May 22, 2008 06:05 AM
The purpose of this document is to summarize the results of our background research for the CSDL web site, and describe the resulting requirements for evaluation and review.
FileICS1997-11-01: An Annotated Overview of CSDL Software Engineering – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Current software engineering activities in CSDL can be viewed as consisting of two basic components: product engineering and process engineering. Product engineering refers to the various work products created during development. Process engineering refers to the various measurements and analyses performed on the development process. This document describes activities within CSDL over the past five years to better understand and improve our process and product engineering within our academic research development environment.
FileICS1997-10-02: LEAP Initial Toolset: Software Requirements Specification – J. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This SRS for the LEAP Toolset is based heavily upon the ideas specified in the PSP/Baseline SRS. Conceptually, the LEAP toolset is a variant of the PSP/Baseline toolset in two major ways. First, the LEAP toolset is substantially simpler to implement and use. It will serve as a prototype for proof-of-concept evaluation of the ideas in the PSP/Baseline toolkit. Second, the LEAP toolset emphasizes group review and minimization of measurement dysfunction to a greater extent than the PSP/Baseline toolset.
FileICS1997-10-01: Project LEAP: Lightweight, Empirical, Anti-measurement dysfunction, and Portable Software Developer Improvement – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
Project LEAP investigates the use of lightweight, empirical, anti-measurement dysfunction, and portable approaches to software developer improvement. A lightweight method involves a minimum of process constraints, is relatively easy to learn, is amenable to integration with existing methods and tools, and requires only minimal management investment and commitment. An empirical method supports measurements that can lead to improvements in the software developer's skill. Measurement dysfunction refers to the possibility of measurements being used against the programmer, so the method must take care to collect and manipulate measurements in a “safe” manner. A portable method is one that can be applied by the developer across projects, organizations, and companies during her career. Project LEAP will investigate the strengths and weaknesses of this approach to software developer improvement in a number of ways. First, it will enhance and evaluate a LEAP-compliant toolset and method for defect entry and analysis. Second, it will use LEAP-compliant tools to explore the quality of empirical data collected by the Personal Software Process. Third, it will collect data from industrial evaluation of the toolkit and method. Fourth, it will create component-based versions of LEAP-compliant tools for defect and time collection and analysis that can be integrated with other software development environment software. Finally, Project LEAP will sponsor a web site providing distance learning materials to support education of software developers in empirically guided software process improvement. The web site will also support distribution and feedback of Project LEAP itself.
FileICS1997-07-01: A Proposal for CSDL2: A Center for Software Development Leadership through Learning – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document describes the design of CSDL2: a social, physical, and virtual environment to support the development of world class software engineering professionals. In CSDL2, a “multi-generational learning community” of faculty, graduate students, and undergraduates all collaborate within a structured work environment for practicing product, process, and organizational engineering.
FileICS1997-01-01: PSP/Baseline: Software Requirements Specification – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
PSP/Baseline is a system design that predated Project LEAP by about a year. The PSP/Baseline system was intended to provide an approach to empirical software process improvement inspired by, but different from, the Personal Software Process.
FileICS1996-11-01: Measurement Dysfunction in Formal Technical Review – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This paper explores some of the issues that arise in effective use of measures to monitor and improve formal technical review practice in industrial settings. It focuses on measurement dysfunction: a situation in which the act of measurement affects the organization in a counter-productive fashion, which leads to results directly counter to those intended by the organization for the measurement.
FileICS1996-09-01: BRIE: A Benchmark Inspection Experiment – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
The BenchmaRk Inspection Experiment (BRIE) is an attempt to design and package a simple experimental design that satisfies the goals of a benchmark experiment. The BRIE acronym has a second expansion: Basic RevIew Education. BRIE is designed to have a second, complementary goal: a high quality training package for a simple formal technical review method. Thus, BRIE is a curriculum module intended to be useful in either an industry or academic setting to introduce students to both software review and empirical software engineering research practice.
FileICS1994-12-02: Redefining the Web: Creating a Computer Network Community – R. Andrada — by Henri Casanova — last modified May 22, 2008 06:05 AM
Organizations are formed to accomplish a goal or mission, where individual members do their part and make a combined effort leading toward this goal. As the organization grows in size, the level of community inevitably deteriorates. This research will investigate the strengths and weaknesses of a computer-based approach to improving the sense of community within one organization, the Department of Computer Science at the University of Hawaii. We will assess the current level of community by administering a questionnaire to members of the department. Next, we will introduce a World Wide Web information system for and about the department in an effort to impact the level of community that exists. We will then administer another questionnaire to assess the level of community within the department after a period of use with the information system. We will analyze the results of both questionnaires and usage statistics logged by the system.
FileICS1994-12-01: Supporting authoring and learning in a collaborative hypertext system: The Annotated Egret Navigator – C. Moore — by Henri Casanova — last modified May 22, 2008 06:05 AM
This research is concerned with how people collaboratively author and learn. More specifically, it is concerned with how to design and implement a hypertext system to support collaborative authoring and learning. We are investigating these issues through the design, implementation, and evaluation of AEN, a hypertext collaborative authoring and learning tool.


FileICS1993-09-02: CLARE User’s Guide – D. Wan — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document provides an illustrated user’s guide to the CLARE system.
FileICS1993-09-01: CLARE 1.4.7 Design Document – D. Wan — by Henri Casanova — last modified May 22, 2008 06:05 AM
This document provides an overview of the design of CLARE Version 1.4.7.
FileICS1993-05-01: DSB: The Next Generation Tool for Software Engineers – K. K. Ram — by Henri Casanova — last modified May 22, 2008 06:05 AM
During the development of software projects, there always exists the problem of design specification maintenance. As the project team surges ahead with the development process, there is a strong need to maintain up-to-date documentation of the current system. This requires an additional effort from each team member to maintain a consistent record of the modifications and additions they make to the system. This Design Base project attempts to reduce the overhead involved in maintaining ever-changing design specifications by automatically generating design documentation from the source code and the overview files that are maintained along with the system.
FileICS1993-01-01: CLARE: A Computer-Supported Collaborative Learning Environment Based on the Thematic Structure of Research and Learning Artifacts – D. Wan — by Henri Casanova — last modified May 22, 2008 06:05 AM
This research concerns the representation issue in collaborative learning environments. Our basic claim is that knowledge representation is not only fundamental to machine learning, as shown by AI researchers, but also essential to human learning, in particular, human metalearning. Few existing learning support systems, however, provide representations which help the learner make sense of and organize the subject content of learning, integrate a wide range of classroom activities (e.g., reading, reviewing, writing, discussion), and compare and contrast various viewpoints from individual learners. Our primary purpose is to construct an example instance of such a representation, to show that useful computational manipulations can be performed on it, and to show that the combination of the representation and related computational services can actually lead to improved learner performance on selected collaborative learning tasks.
FileICS1992-12-01: Reverse Engineering Collaboration Structures in Usenet – P. Johnson — by Henri Casanova — last modified May 22, 2008 06:05 AM
This plain-text file, which was posted to USENET, contains an “alpha-level” proposal concerning a “reverse-engineering” approach to improving the collaborative nature of USENET.