CS Theses & Dissertations 1996

For 1996 graduation dates (in alphabetical order by last name):

Simplifying Terrain Models and Measuring Terrain Model Accuracy
Andrews, David Scott
URI : http://hdl.handle.net/2429/4341
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Snoeyink

Visual Echo Analysis and Multi-Evidential Correlation: Non-Linear Matching & Registration of Signals and Images
Bandari, Esfandiar
URI : http://hdl.handle.net/2429/6230
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisor : Dr. Little

Many low-level vision tasks - such as measurement of visual motion, stereo disparity estimation, or texture segmentation - can be solved by similar computational or biological mechanisms. The primary aim of this dissertation is to introduce and describe a broadly applicable approach to a variety of low-level computational vision problems. This unified framework, named visual echo analysis, is based on the simple observation that many computer vision problems can be viewed as detection and estimation of echo arrival periods in time and space. To this end, the framework uses cepstral techniques, a common and effective family of non-linear signal processing methods for detecting the presence of echoes and estimating their spatial or temporal arrival period. The thesis introduces computational and performance improvements to the traditional power and differential cepstrum, with direct extensions to the complex and phase cepstrum. Visual echo analysis (and the multi-dimensional cepstrum) is then applied to a number of low-level vision tasks, such as motion estimation, binocular and trinocular stereo disparity, motion-stereo analysis, multi-frame disparity estimation (multi-frame motion, multiple-baseline stereo), stationary texture segmentation, boundary symmetry analysis, and detection and estimation of multiple disparities (i.e., motion transparency, reflection, and occluding boundaries). The relationship between echo analysis and matching is briefly examined, and a new technique for signal registration - called multi-evidential correlation (MEC) - is introduced. MEC provides multiple, and thus verifiable, measurements for individual point disparities.
The technique utilizes specific matching kernels - such as cepstrum, phase correlation, or Hadamard-based disparity measurement methods - to furnish multiple estimates of individual disparities; estimates that can be used to verify one another and/or be combined to establish a robust and accurate measure of signal displacements.
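
The central observation behind visual echo analysis, that an echo appears as a peak in the cepstrum at its arrival period, is easy to illustrate in one dimension. The sketch below is not from the thesis itself; it uses numpy and arbitrary test values to compute the power cepstrum of a signal containing an attenuated, delayed copy of itself, and then reads off the echo delay.

```python
import numpy as np

def power_cepstrum(x):
    """Power cepstrum: inverse FFT of the log power spectrum."""
    spectrum = np.abs(np.fft.fft(x)) ** 2
    return np.fft.ifft(np.log(spectrum)).real

# Build a 1-D signal containing an echo of itself at a known delay.
rng = np.random.default_rng(0)
n, delay, attenuation = 1024, 40, 0.5
s = rng.standard_normal(n)
x = s.copy()
x[delay:] += attenuation * s[:-delay]   # echo arrives `delay` samples later

cep = power_cepstrum(x)
# The echo contributes a peak at quefrency == delay; skip the lowest
# quefrencies, which carry the signal's own spectral envelope.
estimated_delay = 5 + np.argmax(cep[5 : n // 2])
print(estimated_delay)
```

In two dimensions the same computation over image patches yields spatial arrival periods, which is how disparity and motion estimates arise in the framework.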

Algorithmic Aspects of Constrained Unit Disk Graphs
Breu, Heinz
URI : http://hdl.handle.net/2429/6282
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisor : Dr. Kirkpatrick

Computational problems on graphs often arise in two- or three-dimensional geometric contexts. Such problems include assigning channels to radio transmitters (graph colouring), physically routing traces on a printed circuit board (graph drawing), and modelling molecules. It is reasonable to expect that natural graph problems have more efficient solutions when restricted to such geometric graphs. Unfortunately, many familiar NP-complete problems remain NP-complete on geometric graphs. Indifference graphs arise in a one-dimensional geometric context; they are the intersection graphs of unit intervals on the line. Many NP-complete problems on arbitrary graphs do have efficient solutions on indifference graphs. Yet these same problems remain NP-complete for the intersection graphs of unit disks in the plane (unit disk graphs), a natural two-dimensional generalization of indifference graphs. What accounts for this situation, and how can algorithms be designed to deal with it? To study these issues, this thesis identifies a range of subclasses of unit disk graphs in which the second spatial dimension is gradually introduced. More specifically, τ-strip graphs "interpolate" between unit disk graphs and indifference graphs; they are the intersection graphs of unit-diameter disks whose centres are constrained to lie in a strip of thickness τ. This thesis studies algorithmic and structural aspects of varying the value τ for τ-strip graphs. The thesis takes significant steps towards characterizing, recognizing, and laying out strip graphs. We will also see how to develop algorithms for several problems on strip graphs, and how to exploit their geometric representation. In particular, we will see that problems become especially tractable when the strips are "thin" (τ is small) or "discrete" (the number of possible y-coordinates for the disks is small).
Note again that indifference graphs are the thinnest (τ = 0) and most discrete (one y-coordinate) of the nontrivial τ-strip graphs. The immediate results of this research concern algorithms for a specific class of graphs. The real contribution of this research is the elucidation of when and where geometry can be exploited in the development of efficient graph-theoretic algorithms.
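
The objects studied here are simple to construct. The sketch below (an illustration with made-up coordinates, not code from the thesis) builds the intersection graph of unit-diameter disks: there is an edge wherever two centres lie within distance 1. Confining the y-coordinates to [0, τ] gives a τ-strip graph, and τ = 0 recovers an indifference graph.

```python
from itertools import combinations
from math import dist

def unit_disk_graph(centres):
    """Intersection graph of unit-diameter disks centred at the given
    points: edge (i, j) iff the centres are within distance 1."""
    return {(i, j)
            for (i, p), (j, q) in combinations(enumerate(centres), 2)
            if dist(p, q) <= 1.0}

# Centres confined to a strip of thickness tau = 0.25: a tau-strip graph.
centres = [(0.0, 0.0), (0.8, 0.2), (1.6, 0.1), (3.0, 0.25)]
print(sorted(unit_disk_graph(centres)))   # edges {(0, 1), (1, 2)}
# With tau = 0 (all y-coordinates 0), the same construction yields an
# indifference graph: the intersection graph of unit intervals on a line.
```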

Stochastic Simulation in Dynamic Probabilistic Networks using Compact Representation
Cheuk, Adrian Yiu Wing
URI : http://hdl.handle.net/2429/6018
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Boutilier

User Models for Intent-Based Authoring
Csinger, Andrew
URI : http://hdl.handle.net/2429/6302
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisors : Dr. Poole, Dr. Booth

Authoring is the collection, selection, preparation and presentation of information to one or more readers by an author. The thesis takes a new, critical look at traditional approaches to authoring, by asking what knowledge is required and at which stages of the process. From this perspective, traditional authoring is seen to entrench an early commitment to both form and content. Although the late binding of form is now commonplace in structured document preparation systems, a similar delay in the binding of content is necessary to achieve user-tailored interaction. The authoring paradigm we have developed to service this goal is called intent-based authoring, because the author supplies at compile-time a communicative goal, or intent. Just as SGML editors and HTML browsers defer rendering decisions until run-time by referring to a local stylesheet, intent-based authoring systems defer content-selection decisions until run-time, when they refer to models of both author and reader(s). This thesis shows that techniques from artificial intelligence can be developed and used to acquire, represent and exploit such models. Probabilistic abduction is used to recognize user models, and cost-based abduction to design tailored presentations. These techniques are combined in a single framework for best-first recognition and design. These reasoning techniques are further allied with an interaction paradigm we call scrutability, whereby users critique the model in pursuit of better presentations; users see a critical subset of the model determined by sensitivity analysis and can change values through a graphical user interface. The interactivity is modelled to ensure that representations of the user model to the user are made in the most perceptually salient manner. A prototype for intent-based video authoring is described.
Video is used as a test medium because it is a "worst case" temporally linear medium; a viable solution to video authoring problems should apply easily to more tractable traditional media. The primary contribution of this dissertation is to the field of applied artificial intelligence, specifically to the emerging field of user modelling. The central contribution is the intent-based authoring framework for separating intent from content.

Properties of Tabulated Bidirectional Reflectance Distribution Functions
Deyoung, Joel
URI : http://hdl.handle.net/2429/5996
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Fournier

Applying Formal Methods in the Implementation of the Information Retrieval Protocol Z39.50
Do, Binh Hoa
URI : http://hdl.handle.net/2429/4157
Degree : Master of Science – MSc
Graduation Date : 1995-05
Supervisor : Dr. Vuong

Design and Implementation of an Expressive Query Interface/Language for Image Database
Faulus, Dwifiandika S.
URI : http://hdl.handle.net/2429/4631
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Ng

Object-Orientation in Distributed Systems
Fazel, Neil B.
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=111794
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Hutchinson

Predictive Rendering
Fearing, Paul
URI : http://hdl.handle.net/2429/4143
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Fournier

Compositional Model Checking of Partially Ordered State Spaces
Hazelhurst, Scott
URI : http://hdl.handle.net/2429/4837
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisor : Dr. Seger

Symbolic trajectory evaluation (STE) — a model checking technique based on partial order representations of state spaces — has been shown to be effective for large circuit models. However, the temporal logic that it supports is restricted, and, as with all verification techniques, STE has significant performance limitations. The demand for verifying larger circuits and the need for greater expressiveness require that both these problems be examined. The thesis develops a suitable logical framework for model checking partially ordered state spaces: the temporal logic TL and its associated satisfaction relations, based on the quaternary logic Q. TL is appropriate for expressing the truth of propositions about partially ordered state spaces, and has suitable technical properties that allow STE to support a richer temporal logic. Using this framework, verification conditions called assertions are defined, a generalised version of STE is developed, and three STE-based algorithms are proposed for investigation. Advantages of this style of proof include: models of time are incorporated; circuits can be described at a low level; and correctness properties are expressed at a relatively high level. A primary contribution of the thesis is the development of a compositional theory for TL assertions. This compositional theory is supported by the partial order representation of the state space. To show the practical use of the compositional theory, two prototype verification systems were constructed, integrating theorem proving and STE. Data is manipulated efficiently by using binary decision diagrams as well as symbolic data representation methods. Simple heuristics and a flexible interface reduce the human cost of verification. Experiments were undertaken using these prototypes, including verifying two circuits from the IFIP WG 10.5 Benchmark suite.
These experiments showed that the generalised STE algorithms were effective, and that through the use of the compositional theory it is possible to verify very large circuits completely, including detailed timing properties.
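
A quaternary logic of the kind mentioned above can be modelled concretely under one standard encoding (a sketch, not the thesis's own definitions): identify each value with the set of Boolean values it might stand for, so that X = {0, 1} is "unknown", 0 and 1 are the definite values, and the empty set is the overconstrained value ⊤. The information ordering is then reverse set inclusion, the lattice join is set intersection, and Boolean operators lift pointwise.

```python
# Quaternary values encoded as frozensets of possible Boolean values.
X    = frozenset({0, 1})  # unknown: could be either 0 or 1
ZERO = frozenset({0})
ONE  = frozenset({1})
TOP  = frozenset()        # overconstrained: no consistent value

def leq(a, b):
    """Information ordering: b is at least as defined as a."""
    return b <= a          # reverse set inclusion

def join(a, b):
    """Least upper bound: combine two (possibly partial) pieces of information."""
    return a & b

def AND(a, b):
    """Boolean AND lifted pointwise to sets of possibilities.
    TOP simply propagates here (an empty comprehension); conventions for
    Boolean operators on the overconstrained value vary between treatments."""
    return frozenset(x & y for x in a for y in b)

print(join(X, ONE) == ONE)      # learning '1' refines 'unknown'
print(join(ZERO, ONE) == TOP)   # conflicting information is inconsistent
print(AND(X, ZERO) == ZERO)     # 0 AND anything Boolean is 0
print(AND(X, ONE) == X)         # still unknown
```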

Effective Visualization of Large Multidimensional Datasets
Healey, Christopher Graham
URI : http://hdl.handle.net/2429/6106
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-11
Supervisor : Dr. Booth

A new method for assisting with the visualization of large multidimensional datasets is proposed. We classify datasets with more than one million elements as large. Multidimensional data elements are elements with two or more dimensions, each of which is at least binary. Multidimensional data visualization involves representation of multidimensional data elements in a low dimensional environment, such as a computer screen or printed media. Traditional visualization techniques are not well suited to solving this problem. Our data visualization techniques are based in large part on a field of cognitive psychology called preattentive processing. Preattentive processing is the study of visual features that are detected rapidly and with little effort by the human visual system. Examples include hue, orientation, form, intensity, and motion. We studied ways of extending and applying research results from preattentive processing to address our visualization requirements. We used our investigations to build visualization tools that allow a user to very rapidly and accurately perform exploratory analysis tasks. These tasks include searching for target elements, identifying boundaries between groups of common elements, and estimating the number of elements that have a specific visual feature. Our experimental results were positive, suggesting that dynamic sequences of frames can be used to explore large amounts of data in a relatively short period of time. Recent work in both scientific visualization and database systems has started to address the problems inherent in managing large scientific datasets. One promising technique is knowledge discovery, "the nontrivial extraction of implicit, previously unknown, and potentially useful information from data". We hypothesise that knowledge discovery can be used as a filter to reduce the amount of data sent to the visualization tool. 
Data elements that do not belong to a user-chosen group of interest can be discarded, the dimensionality of individual data elements can be compressed, and previously unknown trends and relationships can be discovered and explored. We illustrate how our techniques can be used by applying them to real-world data and tasks. This includes the visualization of simulated salmon migration results, computerized tomography medical slices, and environmental datasets that track ocean and atmospheric conditions.

Visually guided mobile robots - tasks, vision resources and sensor integration
Ivanov, Ivailo
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=112053
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Little

Using Inverse Kinematics to Position Articulated Figures
Kuder, Karen Cynthia
URI : http://hdl.handle.net/2429/4199
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Forsey

Art Gallery Theorems and Algorithms
Li, Zaiqing
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=112059
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Kirkpatrick

Design for Testability of Communication Protocols
Loureiro, Antonio Alfredo Ferreira
URI : http://hdl.handle.net/2429/4776
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisors : Dr. Vuong, Dr. Chanson

There is growing consensus that some design principles are needed to overcome the ever-increasing complexity of verifying and testing software in order to build more reliable systems. Design for testability (DFT) is the process of applying techniques and methods during the design phase in order to reduce the effort and cost of testing the resulting implementations. In this thesis, the problem of design for testability of communication protocols is studied. A framework that provides a general treatment of the problem of designing communication protocols with testability in mind, together with some basic design principles, is presented. Following the protocol engineering life cycle, we have identified and discussed in detail issues related to design for testability in the analysis, design, implementation, and testing phases. We discuss two important aspects that affect the testing of communication protocols: testing that takes the environment into consideration, and distributed testing. We present a novel algorithm and the corresponding design principles for tackling an important class of faults caused by an unreliable environment, namely coordination loss, which are very difficult to catch in the testing process. These design principles can be applied systematically in the design of self-stabilizing protocols. We show that conformance relations that are environment independent are not adequate to deal with errors caused by the environment, such as coordination loss. A more realistic conformance relation based on external behavior, as well as a "more testable" relation for environments which exhibit coordination loss, are introduced. We also present a novel algorithm and the corresponding design principles for checking dynamic unstable properties during the testing process. The method proposed can be used in distributed testing of communication protocols and distributed programs in general.
This technique can also be used in normal execution of the protocol implementation to tackle the problems of state build-up and exception handling when a fault is detected. A specific type of communication protocol, namely 3-way handshake protocols, is used to show that it is possible to check general properties using this algorithm. A comprehensive survey of testability and design for testability in the software domain is also included in the thesis.

Design and Implementation of a Simple Multimedia Specification Language for the Internet
Lu, Yunfei
URI : http://hdl.handle.net/2429/4731
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Neufeld

Overview of Multimedia Application Development
Marple, Kirk Jonathan
URI : http://hdl.handle.net/2429/6006
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Booth

Design: Educational Electronic Multi-Player Games, a Literature Review
McGrenere, Joanna Lynn
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=111977
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisors : Dr. Klawe, Dr. Booth

Proving Newtonian Arbiters Correct, Almost Surely
Mitchell, Ian
URI : http://hdl.handle.net/2429/6067
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Greenstreet

Empirical Evaluation of Information for Robotic Manipulation Tasks
Mulligan, Jane I.
URI : http://hdl.handle.net/2429/6053
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-11
Supervisor : Dr. Mackworth

Rigorous analysis and evaluation of real implemented robotic systems for intelligent tasks are rarely performed. Such systems are often extremely complicated, depending not only on 'interesting' theoretical parameters and models, but on many assumptions and constants which may be set almost arbitrarily. The information required to achieve good task performance includes all of these acknowledged and unacknowledged parameters. In order to evaluate and compare systems which employ diverse components and methods, we need a framework and criteria by which to measure them. We demonstrate techniques for evaluating a system's performance as a function of the parameters and information it uses and compare different task implementations on this basis. We view all task implementations as particular parameterizations of the task goals they represent. Some parameters belong to fixed, pre-measured models; others are data collected or measured online by the system. Comparing systems then becomes the comparison of these parameterizations. There are three key questions we need to ask when determining task parameterizations: What do we need to measure (observability)? How well must we discriminate between measured values (precision)? And how accurately must our measurements reflect the true world state (accuracy)? We present a performance-based framework for empirical evaluation and comparison of task information based on these three basic notions of observability, precision (discrimination), and accuracy. Factorial experiments give us a starting point for determining which measurements, and their interactions, define the task subspace. Sensitivity analysis determines optimal parameter values and the effect of uncertainty and variations in measurements on task performance. Finally, a cost/performance metric offers a quantification of the task complexity with respect to the parameters, and performance-based precision and accuracy requirements determined in the previous steps.
The experiments presented to demonstrate this methodology are based on a part orienting task implemented in two very different ways. One system uses a 'sensorless' model-based push-orienting method, the other uses a real-time stereo vision system to localize objects in the workspace for sensor-driven part orienting. The parameters used to represent manipulated parts for sensing versus model-based manipulation are similar, though not identical sets, and encode the information critical to the task. Through detailed experiments we establish the statistically significant parameters and parameter interactions for each system, and apply sensitivity analysis to set optimal parameter values and explore the nature of interactions. Finally, the cost/performance metric gives us a means of counting the computation and sensing costs to achieve the observed system error rates. This type of analysis is a necessary step to understanding integrated intelligent systems. It reveals aspects of system implementations which cannot easily be predicted in advance, and gives a clear picture of the information required and the strengths and weaknesses of the system.

Individual Tree Detection and Localization in Aerial Imagery
Murgu, Dan
URI : http://hdl.handle.net/2429/4346
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Woodham

The Automatic Recognition of Individual Trees in Aerial Images of Forests based on a Synthetic Tree Crown Image Model
Pollock, Richard James
URI : http://hdl.handle.net/2429/6135
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-11
Supervisor : Dr. Woodham

The thesis of this work is that individual tree crowns can be automatically recognized in monocular high spatial resolution optical images of scenes containing boreal or cool temperate forests in a leaved state. The thesis was advanced by developing and testing an automatic tree crown recognition procedure that is based on a model of the image formation process at the scale of an individual tree. This model provides a means of applying specific scene and image formation knowledge to the recognition task. The procedure associates instances of a three-dimensional shape description with locations in a scene image such that the descriptions estimate the visible scene extent of tree crowns that existed at the corresponding scene locations when the image was acquired. This provides an estimate of the average horizontal diameter of the vertical projection of individual recognized tree crowns, and a basis for species classification. This work makes a contribution to the overall effort to increase the level of automation in forest type mapping. This work also introduces and demonstrates the use of a pre-defined image model to support the manual acquisition of a sample of unmodelled tree crown image properties, and the use of constraints related to the spatial relationships among multiple neighbouring candidate recognition instances to resolve image interpretation conflicts. The procedure was tested with a scene of mixed uneven-aged forests in which the trees represent a wide variety of species, size, and growing conditions. The results were assessed on the basis of ground reference data and compared to those produced by human interpreters. The scene represented a greater level of difficulty than that which has been addressed by previous attempts at automating the tree crown recognition task. 
The test results show that the procedure was able to largely accommodate the variation represented by the test scene, but that human interpreters were better able to accommodate irregularities in tree crown form and irradiance that were caused by tight vertical and horizontal spacing of the crowns.

Learning to Recognize Objects in Images: Acquiring and Using Probabilistic Models of Appearance
Pope, Arthur R.
URI : http://hdl.handle.net/2429/4774
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-05
Supervisor : Dr. Lowe

We present a method of recognizing three-dimensional objects in intensity images of cluttered scenes, using models learned from training images. By modeling each object with a series of views, representing each view with a large variety of features, and describing each feature probabilistically, our method can learn to recognize objects of unusual complexity. An object is modeled by a probability distribution describing the range of possible variation in the object's appearance. This distribution is organized on two levels. Large variations are handled by partitioning training images into clusters corresponding to distinctly different views of the object. Within each cluster, smaller variations are represented by distributions characterizing uncertainty in the presence, position, and measurements of various discrete features of appearance. Many types of features may be used, ranging in abstraction from segments of intensity edges to perceptual groupings and regions. Our learning method combines two activities: (a) an incremental conceptual clustering algorithm identifies groups of training images corresponding to distinct views of the object; (b) a generalization algorithm consolidates each cluster to produce a summary description characterizing its most prominent features. The method produces models that are both economical and representative, balancing these competing goals through an application of the minimum description length principle. Recognition requires matching features of a model with those of an image. Our matching method is a form of alignment: from hypothesized feature pairings it estimates a viewpoint transformation; from that it finds additional pairings, and so on. The method uses constraints based on features' positions, numeric measurements, and relations, assigning to each an importance commensurate with its uncertainty as recorded by the model. 
Decisions are ordered so that more certain features are paired sooner, while ambiguous choices are postponed for later resolution, when additional constraints may be known. We also describe a system implemented to test our recognition learning method. Experiments demonstrate the system's performance at tasks including learning to recognize complex objects in cluttered scenes, and learning to discriminate two quite similar objects.

A union of spheres representation for 3D objects
Ranjan, Vishwa
URI : http://hdl.handle.net/2429/6140
Degree : Doctor of Philosophy – PhD
Graduation Date : 1996-11
Supervisor : Dr. Fournier

Reconstruction of an object from a set of points sampled from its boundary is an important problem in graphics and vision. Several methods exist to compute and display surface (e.g., polygonal) and volumetric (e.g., polyhedral) models of objects from the boundary points. In order to display, transform, and compare objects, it is often convenient or necessary to use different representations of objects. Basic desired properties of representations of objects are efficiency of computation, storage, and display. Other important properties include stability (small changes in the data, such as noise or small distortions, cause small changes in the model), the ability to determine the similarities between two data sets, and the computation of simplified models. A survey of common representations of objects (e.g., surface, octrees, etc.) shows that some important properties are lacking in these representations. In this thesis we study a union of spheres representation (UoS) for the volume enclosed by an object's boundary. We present algorithms to generate stable union of spheres models of objects from various sources of data, such as volumetric data (e.g., data from CT or MRI scanners), range data, and other existing models. The spheres can be simplified to obtain multi-scale models. We present a method to establish correspondence between two objects using their union of spheres models and use this to calculate distances between objects, to register objects, and to interpolate between objects. This establishes a measure to study and compare such models. Examples with simple and complex objects show how this measure corresponds closely to the intuitive human understanding of shape.
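
The representation itself is simple to state, which is part of its appeal. The sketch below (an illustration with made-up data, not the thesis's algorithms) models an object as a finite set of spheres: membership is "inside any sphere", and a crude multi-scale simplification drops small spheres, here losing the thin neck of a dumbbell shape.

```python
from math import dist

# A union-of-spheres model: each sphere is (centre, radius).
dumbbell = [((0.0, 0.0, 0.0), 1.0),
            ((1.5, 0.0, 0.0), 0.4),   # thin neck between the two lobes
            ((3.0, 0.0, 0.0), 1.0)]

def contains(model, p):
    """A point is inside the object iff it lies inside any sphere."""
    return any(dist(c, p) <= r for c, r in model)

def simplify(model, min_radius):
    """A crude multi-scale step: keep only spheres above a radius threshold.
    (The thesis's simplification and correspondence methods are more refined.)"""
    return [(c, r) for c, r in model if r >= min_radius]

print(contains(dumbbell, (1.5, 0.0, 0.2)))                  # inside the neck: True
print(contains(simplify(dumbbell, 0.5), (1.5, 0.0, 0.2)))   # neck dropped: False
```

Because the model is just a point set with radii, small perturbations of the data move sphere centres and radii only slightly, which is the stability property the abstract emphasizes.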

A Comparative Analysis of Multi-Dimensional Indexing Structures for "Eigenimages"
Sedighian, Andishe Samandari
URI : http://hdl.handle.net/2429/4399
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Ng

Civil Law and the Development of Software Engineering
Shapiro, Martina
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=111997
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Greenstreet

Issues of Mobility Support in Mobile Computing
Shi, Hao
Master’s essay available in print : https://bibrrs.library.ubc.ca/vwebv/holdingsInfo?bibId=111664
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Vuong

Near-Real-Time Implementation of Multiple Light Source Optical Flow
Siegerist, Cristina Elena
URI : http://hdl.handle.net/2429/4404
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Woodham

An Analysis of Multi-Level Filtering for High Dimensional Image Data
Tam, Dominic Pok Man
URI : http://hdl.handle.net/2429/4435
Degree : Master of Science – MSc
Graduation Date : 1996-05

Integrating Simulation and Animation Software Systems through a Generic Computational Engine
Walker, Robert James
URI : http://hdl.handle.net/2429/6064
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Forsey

Formal Verification of Asynchronous Data-Path Circuits
Weih, David Turner
URI : http://hdl.handle.net/2429/4469
Degree : Master of Science – MSc
Graduation Date : 1996-05
Supervisor : Dr. Greenstreet

Terrain Drainage Features and Queries
Yu, Sidi
URI : http://hdl.handle.net/2429/4673
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Snoeyink

Finding Strong, Common and Discriminating Characteristics of Clusters from Thematic Maps
Yu, YiQing
URI : http://hdl.handle.net/2429/4566
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Ng

An Estelle.Z Parser for a Protocol Test Generation Environment Testgen+
Zhang, Rui
URI : http://hdl.handle.net/2429/4686
Degree : Master of Science – MSc
Graduation Date : 1996-11
Supervisor : Dr. Vuong