Technical Reports

The ICICS/CS Reading Room


1996 UBC CS Technical Report Abstracts

TR-96-01 Diamonds are not a Minimum Weight Triangulation's Best Friend, January 1996 Prosenjit Bose, Luc Devroye and William Evans, 10 pages

Two recent methods have increased hopes of finding a polynomial time solution to the problem of computing the minimum weight triangulation of a set $S$ of $n$ points in the plane. Both involve computing what was believed to be a connected or nearly connected subgraph of the minimum weight triangulation, and then completing the triangulation optimally. The first method uses the light graph of $S$ as its initial subgraph. The second method uses the \lmt-skeleton of $S$. Both methods rely, for their polynomial time bound, on the initial subgraphs having only a constant number of components. Experiments performed by the authors of these methods seemed to confirm that randomly chosen point sets displayed this desired property. We show that there exist point sets where the number of components is linear in $n$. In fact, the expected number of components in either graph on a randomly chosen point set is linear in $n$, and the probability of the number of components exceeding some constant times $n$ tends to one.
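The point-set problem discussed above is the hard one; for contrast, the special case of a convex polygon admits a classic O(n^3) dynamic program. A minimal sketch of that special case (not the light-graph or \lmt-skeleton methods), minimizing the total length of diagonals added:

```python
from functools import lru_cache
from math import dist

def convex_mwt(pts):
    # Minimum total diagonal length needed to triangulate a convex
    # polygon whose vertices pts are listed in convex position, in order.
    n = len(pts)

    @lru_cache(maxsize=None)
    def cost(i, j):
        # Cheapest triangulation of the sub-polygon pts[i..j],
        # with the chord (i, j) as its base.
        if j - i < 2:
            return 0.0
        best = float("inf")
        for k in range(i + 1, j):
            c = cost(i, k) + cost(k, j)
            if k - i > 1:      # (i, k) is a new diagonal, not a polygon side
                c += dist(pts[i], pts[k])
            if j - k > 1:      # (k, j) is a new diagonal, not a polygon side
                c += dist(pts[k], pts[j])
            best = min(best, c)
        return best

    return cost(0, n - 1)
```

For the unit square this returns sqrt(2), the length of the single diagonal either triangulation must use.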

TR-96-02 Compositional Model Checking of Partially Ordered State Spaces, January 1996 Scott Hazelhurst, 268 pages

Symbolic trajectory evaluation (STE) --- a model checking technique based on partial order representations of state spaces --- has been shown to be effective for large circuit models. However, the temporal logic that it supports is restricted and, as with all verification techniques, STE has significant performance limitations. The demand for verifying larger circuits and the need for greater expressiveness require that both of these problems be examined.

The thesis develops a suitable logical framework for model checking partially ordered state spaces: the temporal logic \TL\ and its associated satisfaction relations, based on the quaternary logic $\Q$. \TL\ is appropriate for expressing the truth of propositions about partially ordered state spaces, and has suitable technical properties that allow STE to support a richer temporal logic. Using this framework, verification conditions called \emph{assertions} are defined, a generalised version of STE is developed, and three STE-based algorithms are proposed for investigation. Advantages of this style of proof include: models of time are incorporated; circuits can be described at a low level; and correctness properties are expressed at a relatively high level.

A primary contribution of the thesis is the development of a compositional theory for \TL\ assertions. This compositional theory is supported by the partial order representation of state space. To show the practical use of the compositional theory, two prototype verification systems were constructed, integrating theorem proving and STE. Data is manipulated efficiently by using binary decision diagrams as well as symbolic data representation methods. Simple heuristics and a flexible interface reduce the human cost of verification.

Experiments were undertaken using these prototypes, including verifying two circuits from the IFIP WG 10.5 Benchmark suite. These experiments showed that the generalised STE algorithms were effective, and that through the use of the compositional theory it is possible to verify very large circuits completely, including detailed timing properties.

TR-96-03 The Sounds of Physical Shapes, February 1996 Kees van den Doel and Dinesh K. Pai, 17 pages

We propose a general framework for the simulation of sounds produced by colliding physical objects in a real time graphics environment. The framework is based on the vibration dynamics of bodies. The computed sounds depend on the material of the body, its shape, and the location of the impact.

Specifically, we show how to compute (1) the spectral signature of each body (its natural frequencies), which depends on the material and the shape, (2) the ``timbre'' of the vibration (the relative amplitudes of the spectral components) generated by an impulsive force applied to the object at a grid of locations, (3) the decay rates of the various frequency components, which correlate with the type of material based on its internal friction parameter, and finally (4) the mapping of sounds onto the object's geometry for real-time rendering of the resulting sound.

The framework has been implemented in a Sonic Explorer program, which simulates a room with several objects such as a chair, tables, and rods. After a preprocessing stage, the user can hit the objects at different points to interactively produce realistic sounds.
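As an illustration of the underlying synthesis model, an impact sound can be rendered as a sum of exponentially damped sinusoids, one per natural frequency. A minimal sketch with hand-picked placeholder modes (the paper derives the frequencies, amplitudes, and decay rates from the body's material, shape, and hit location):

```python
import math

def impact_sound(freqs, amps, decays, duration=0.5, sr=44100):
    # One damped sinusoid per vibration mode: a * exp(-d*t) * sin(2*pi*f*t).
    # freqs, amps, decays are parallel lists of placeholder mode parameters.
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in zip(freqs, amps, decays))
        samples.append(s)
    return samples
```

Larger decay constants give the dull thud of high internal friction (e.g. wood); small ones give a sustained metallic ring.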

TR-96-04 Heterogeneous Process Migration: The Tui System, February 1996 Peter Smith and Norman C. Hutchinson, 32 pages

Heterogeneous Process Migration is a technique whereby an active process is moved from one machine to another. It must then continue normal execution and communication. The source and destination processors can have a different architecture, that is, different instruction sets and data formats. Because of this heterogeneity, the entire process memory image must be translated during the migration.

"Tui" is a prototype migration system that is able to translate the memory image of a program (written in ANSI-C) between four common architectures (m68000, SPARC, i486 and PowerPC). This requires detailed knowledge of all data types and variables used within the program. This is not always possible in non-type-safe (but popular) languages such as C, Pascal and Fortran.

The important features of the Tui algorithm are discussed in great detail. This includes the method by which a program's entire set of data values can be located, and eventually reconstructed on the target processor. Initial performance figures demonstrating the viability of using Tui for real migration applications are given.
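The core per-value operation in such a migrator is re-encoding each datum from the source architecture's data format to the destination's. A minimal sketch for a single 32-bit integer using Python's struct module (an illustration of the idea, not Tui's implementation):

```python
import struct

def translate_int32(raw, src_big_endian, dst_big_endian):
    # Decode a 32-bit signed integer from the source byte order and
    # re-encode it in the destination byte order -- the kind of per-value
    # translation applied to every datum in the process image.
    src = ">i" if src_big_endian else "<i"
    dst = ">i" if dst_big_endian else "<i"
    (value,) = struct.unpack(src, raw)
    return struct.pack(dst, value)
```

A real migrator must do this for every type in the image (floats, pointers, structs with architecture-specific padding), which is why full type information is required.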

TR-96-05 Simplifying Terrain Models and Measuring Terrain Model Accuracy, May 1996 David Scott Andrews, 48 pages

We describe a set of TIN simplification methods that enable the use of the triangulation hierarchy introduced by Kirkpatrick and modified by de Berg and Dobrindt. This triangulation hierarchy can be used to form a terrain model combining areas with varying levels of detail. One variant of the delete simplification method produced simplifications with accuracy close to that of the greedy method.

We also investigated different variables that can be used to measure the accuracy of our simplified terrain models. Although the use of derivative statistics did not significantly alter our evaluation of the performance of our simplification methods, we recommend that any future comparisons should be aware of these alternative variables of surface characterization.

TR-96-06 Importance Ordering for Real-Time Depth of Field, February 1996 Paul Fearing

Depth of field (DOF) is an important component of real photography. As such, it is a valuable addition to the library of techniques used in photorealistic rendering. Several methods have been proposed for implementing DOF effects. Unfortunately, all existing methods require a great deal of computation. This prohibitive cost has precluded DOF effects from being used with any great regularity.

This paper introduces a new way of computing DOF that is particularly effective for sequences of related frames (animations). It computes the most noticeable DOF effects first, and works on areas of lesser importance only if there is enough time. Areas that do not change between frames are not computed.

All pixels in the image are assigned an importance value. This importance gives priority to pixels that have recently changed in color, depth, or degree of focus. Changes originate from object and light animation, or from variation in the camera's position or focus.

Image pixels are then recomputed in order of importance. At any point, the computation can be interrupted and the results displayed. Varying the interruption point allows a smooth tradeoff between image accuracy and result speed. If enough time is provided, the algorithm generates the exact solution.

Practically, this algorithm avoids the continual recomputing of large numbers of unchanging pixels. This can provide order-of-magnitude speedups in many common animation situations. This increase in speed brings DOF effects into the realm of real-time graphics.
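The ordering itself can be sketched as a priority queue keyed on importance, popped until the frame budget runs out. A toy illustration (pixel IDs and a pixel-count budget stand in for the paper's importance values and time budget):

```python
import heapq

def recompute_by_importance(importance, budget):
    # Process pixels in decreasing importance until the budget (here a
    # simple pixel count) is exhausted; pixels with zero importance --
    # unchanged between frames -- are never touched. The loop can be
    # interrupted at any point and the partial result displayed.
    heap = [(-imp, px) for px, imp in importance.items() if imp > 0]
    heapq.heapify(heap)
    done = []
    while heap and len(done) < budget:
        _, px = heapq.heappop(heap)
        done.append(px)
    return done
```

Shrinking the budget trades accuracy for speed; an unlimited budget recomputes every changed pixel and yields the exact solution.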

TR-96-07 Wavelet Radiative Transfer and Surface Interaction, May 1996 Robert R. Lewis

In this work-in-progress paper, we present a solution to the illumination problem that is intermediate between the conventional local and global approaches to illumination. It involves the representation of radiance on a surface as a finite element expansion in terms of wavelets.

By expanding in terms of ``Nusselt coordinates'', we show how irradiance, transport, and surface interaction can be evaluated simply and directly in terms of wavelet coefficients. We present an example of transport.
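As a small illustration of representing a function by wavelet coefficients, the unnormalized Haar decomposition repeatedly splits a signal into pairwise averages and differences (a one-dimensional toy, not the paper's Nusselt-coordinate expansion):

```python
def haar_coeffs(signal):
    # Full Haar decomposition of a length-2^k signal: returns the overall
    # average followed by detail coefficients, coarse to fine.
    # Unnormalized averaging/differencing variant.
    details = []
    s = list(signal)
    while len(s) > 1:
        avgs = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        difs = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        details = difs + details  # finer details go to the right
        s = avgs
    return s + details
```

Smooth radiance distributions yield many near-zero detail coefficients, which is what makes a truncated wavelet expansion a compact finite element basis.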

TR-96-09 A Perceptual Colour Segmentation Algorithm, June 1996 Christopher G. Healey and James T. Enns, 170 pages

This paper presents a simple method for segmenting colour regions into categories like red, green, blue, and yellow. We are interested in studying how colour categories influence colour selection during scientific visualization. The ability to name individual colours is also important in other problem domains like real-time displays, user-interface design, and medical imaging systems. Our algorithm uses the Munsell and CIE LUV colour models to automatically segment a colour space like RGB or CIE XYZ into ten colour categories. Users are then asked to name a small number of representative colours from each category. This provides three important results: a measure of the perceptual overlap between neighbouring categories, a measure of a category's strength, and a user-chosen name for each strong category.

We evaluated our technique by segmenting known colour regions from the RGB, HSV, and CIE LUV colour models. The names we obtained were accurate, and the boundaries between different colour categories were well defined. We concluded our investigation by conducting an experiment to obtain user-chosen names and perceptual overlap for ten colour categories along the circumference of a colour wheel in CIE LUV.
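The classification step can be sketched as nearest-centre assignment: a colour belongs to the category whose representative is closest. The centres below are made-up RGB placeholders; the paper derives its categories from the Munsell and CIE LUV models, not from these values:

```python
from math import dist

# Hypothetical category centres (placeholder RGB values).
CENTRES = {"red": (255, 0, 0), "green": (0, 255, 0),
           "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def nearest_category(rgb):
    # Assign a colour to the category whose centre is closest
    # (Euclidean distance in the chosen colour space).
    return min(CENTRES, key=lambda name: dist(rgb, CENTRES[name]))
```

A perceptually uniform space such as CIE LUV makes this distance meaningful; plain RGB distance, used here only for illustration, does not match perceived colour difference well.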

TR-96-10 Choosing Effective Colours for Data Visualization, June 1996 Christopher G. Healey, 10 pages

In this paper we describe a technique for choosing multiple colours for use during data visualization. Our goal is a systematic method for maximizing the total number of colours available for use, while still allowing an observer to rapidly and accurately search a display for any one of the given colours. Previous research suggests that we need to consider three separate effects during colour selection: colour distance, linear separation, and colour category. We describe a simple method for measuring and controlling all of these effects. Our method was tested by performing a set of target identification studies; we analysed the ability of thirty-eight observers to find a colour target in displays that contained differently coloured background elements. Results showed that our method can be used to select a group of colours that will provide good differentiation between data elements during visualization.
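One simple way to keep selected colours mutually distinguishable is greedy farthest-point selection: repeatedly add the candidate whose minimum distance to the colours already chosen is largest. A sketch of that heuristic (it captures only the colour-distance effect, not the linear-separation or colour-category effects the paper also controls):

```python
from math import dist

def pick_colours(candidates, k):
    # Greedy farthest-point selection over colour tuples. The seed is
    # arbitrary (first candidate); each step maximizes the minimum
    # distance to the colours chosen so far.
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: min(dist(c, s) for s in chosen))
        chosen.append(best)
    return chosen
```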

TR-96-11 Experimental Design: Input Device Protocols and Collaborative Learning, June 1996 Joanna McGrenere, Kori Inkpen, Kellogg Booth and Maria Klawe, 49 pages

This document outlines an experimental design for a study that investigates peer collaboration in a computer-supported learning environment. Sections of this document have been adapted from a CPSC 533b term report by McGrenere et al. (1995) which outlined a similar experimental design. In the proposed study we examine different ways of supporting peer collaboration which, for the purposes of this study, refers to two students working on a single computer playing an electronic game. The standard computer is configured with only one mouse, and therefore when two students share a computer they need to share the mouse as well. We want to investigate the impact of adding a second mouse to the configuration such that each child would have their own mouse.

TR-96-12 Design: Educational Multi-Player Games A Literature Review, June 1996 Joanna McGrenere, 92 pages

Over the past two decades electronic games have become ingrained in our culture. Children's fixation with these games initially alarmed parents and educators, but educational researchers soon questioned whether the motivation to play could be tapped and harnessed for educational purposes. A number of educational electronic games have been developed, and their success has been mixed. The great majority of these games are designed for single players; if there is more than one player, the players are usually required to take turns playing. Although learning within a cooperative group setting has been found to be extremely effective, designing educational games to support multiple players working together has received little attention. Using a multi-player game format could provide the motivation that children need to learn and at the same time enhance both the achievement and the social interactions of the children. In order to design multi-player educational games we must understand what motivates children to play electronic games, how to incorporate educational content into electronic games, and how to develop appropriate multi-person educational tasks. An understanding of design issues for multi-user software is also required.

This essay is a literature review that addresses the issues involved in the design of educational electronic multi-player games. The relevant bodies of literature include human-computer interaction, electronic games, educational electronic games, and electronic multi-player games. Two of the most relevant areas of the human-computer interaction literature are Computer-Supported Cooperative Work (CSCW) and Computer-Supported Collaborative Learning (CSCL). All of these bodies of literature are discussed with respect to educational electronic multi-player games, areas where further research is required are noted, and general design guidelines for educational electronic multi-player games are offered.

TR-96-13 Shared 3D Workspaces, July 31, 1996 Joanna McGrenere and Kellogg S. Booth, 24 pages

The literature on Computer-Supported Cooperative Work (CSCW) includes considerable research on 2D shared spaces and metaphors. Much less exists concerning 3D shared spaces and metaphors. This report defines terminology relevant to shared 3D workspaces, summarizes a number of areas within the literature on 2D and 3D interaction and collaboration, and identifies pertinent issues for further research. Among the issues identified are:

- Can a 2D shared space metaphor be extended to 3D?
- How is interaction different in 3D than in 2D?
- Can metaphors for single-user 3D interaction be extended to shared 3D interaction?

TR-96-15 Algorithmic Aspects of Constrained Unit Disk Graphs, September 1996 Heinz Breu, 369 pages

Computational problems on graphs often arise in two- or three-dimensional geometric contexts. Such problems include assigning channels to radio transmitters (graph colouring), physically routing traces on a printed circuit board (graph drawing), and modelling molecules. It is reasonable to expect that natural graph problems have more efficient solutions when restricted to such geometric graphs. Unfortunately, many familiar NP-complete problems remain NP-complete on geometric graphs.

Indifference graphs arise in a one-dimensional geometric context; they are the intersection graphs of unit intervals on the line. Many NP-complete problems on arbitrary graphs do have efficient solutions on indifference graphs. Yet these same problems remain NP-complete for the intersection graphs of unit disks in the plane (unit disk graphs), a natural two-dimensional generalization of indifference graphs. What accounts for this situation, and how can algorithms be designed to deal with it?

To study these issues, this thesis identifies a range of subclasses of unit disk graphs in which the second spatial dimension is gradually introduced. More specifically, tau-strip graphs ``interpolate'' between unit disk graphs and indifference graphs; they are the intersection graphs of unit-diameter disks whose centres are constrained to lie in a strip of thickness tau. This thesis studies algorithmic and structural aspects of varying the value tau for tau-strip graphs.

The thesis takes significant steps towards characterizing, recognizing, and laying out strip graphs. We will also see how to develop algorithms for several problems on strip graphs, and how to exploit their geometric representation. In particular, we will see that problems become especially tractable when the strips are ``thin'' (tau is small) or ``discrete'' (the number of possible y-coordinates for the disks is small). Note again that indifference graphs are the thinnest (tau=0) and most discrete (one y-coordinate) of the nontrivial tau-strip graphs.
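For concreteness, a unit disk graph can be built directly from disk centres: two unit-diameter disks intersect exactly when their centres are at distance at most one (a tau-strip graph additionally constrains the centres' y-coordinates to a strip of thickness tau). A minimal sketch:

```python
from math import dist

def unit_disk_graph(points):
    # Edge set of the intersection graph of unit-diameter disks:
    # centres i and j are adjacent iff their distance is at most 1.
    n = len(points)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist(points[i], points[j]) <= 1.0}
```

Restricting all y-coordinates to 0 recovers an indifference graph (intervals on a line); letting tau grow reintroduces the second spatial dimension that the thesis studies.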

The immediate results of this research concern algorithms for a specific class of graphs. The real contribution of this research is the elucidation of when and where geometry can be exploited in the development of efficient graph theoretic algorithms.

TR-96-16 Civil Law and the Development of Software Engineering, September 1996 Martina Shapiro, 40 pages

This paper provides software engineers with an understanding of the basic tenets of civil law and how legal liability principles may be applied by the courts so as to affect the future direction of software engineering. The issues discussed are based on a review of selected literature in software engineering, civil law reported case decisions and texts as well as on interviews with legal consultants. Examples of some court rulings in cases involving software malfunction are included, along with some projections relating to the possible implications for software development arising from the anticipated reaction of the courts to the novel issues presented by software engineering.

TR-96-18 Temporally Coherent Stereo: Improving Performance Through Knowledge of Motion, September 1996 Vladimir Tucakov and David G. Lowe, 8 pages

This paper introduces the idea of temporally extending results of a stereo algorithm in order to improve the algorithm's performance. This approach anticipates the changes between two consecutive depth maps resulting from the motion of the cameras. Uncertainties in motion are accounted for by computation of an ambiguity area and a resulting disparity range for each pixel. The computation is used to verify and refine the anticipated values, rather than calculate them without prior knowledge. The paper compares the performance of the algorithm under different constraints on motion. Speedups of up to 400\% are achieved without significant errors.

TR-96-19 Lower Bounds for Noisy Boolean Decision Trees, September 1996 William Evans and Nicholas Pippenger, 18 pages

We present a new method for deriving lower bounds to the expected number of queries made by noisy decision trees computing Boolean functions. The new method has the feature that expectations are taken with respect to a uniformly distributed random input, as well as with respect to the random noise, thus yielding stronger lower bounds. It also applies to many more functions than do previous results. The method yields a simple proof of the result (previously established by Reischuk and Schmeltz) that almost all Boolean functions of n arguments require Omega(n log n) queries, and strengthens this bound from the worst-case over inputs to the average over inputs. The method also yields bounds for specific Boolean functions in terms of their spectra (their Fourier transforms). The simplest instance of this spectral bound yields the result (previously established by Feige, Peleg, Raghavan and Upfal) that the parity function of n arguments requires Omega(n log n) queries, and again strengthens this bound from the worst-case over inputs to the average over inputs. In its full generality, the spectral bound applies to the "highly resilient" functions introduced by Chor, Friedman, Goldreich, Hastad, Rudich and Smolensky, and it yields non-linear lower bounds whenever the resiliency is asymptotic to the number of arguments.

TR-96-20 Drag-and-Drop vs. Point-and-Click Mouse Interaction for Children, November 1996 Kori Inkpen, Kellogg S. Booth and Maria Klawe, 7 pages

This paper presents the results of a study on girls' and boys' usage of two common mouse interaction techniques. The two techniques, drag-and-drop and point-and-click, were compared to determine whether one method was superior to the other in terms of speed, error rate, and preference. For girls, significant differences between the two methods were found for speed, error rate and preference. Point-and-click was faster, fewer errors were committed, and it was preferred over drag-and-drop. For boys, a significant difference was found for speed but not for error rate or preference. Point-and-click was faster than drag-and-drop, the error rates were comparable and, although more boys preferred point-and-click, the difference was not significant.


If you have any questions or comments regarding this page please send mail to help@cs.ubc.ca.