1990 UBC CS Technical Report Abstracts

TR-90-01 A Theory of Multi-Scale, Curvature- and Torsion-Based Shape Representation for Space Curves, January 1990 Farzin Mokhtarian

This paper introduces a novel multi-scale shape representation technique for space curves which satisfies several criteria considered necessary for any shape representation method. In particular, the representation is suitable for tasks which call for recognition of a noisy curve at any scale or orientation.

The method rests on the concept of describing a curve at varying levels of detail using features that are invariant with respect to transformations which do not change the shape of the curve. Three different ways of computing the representation are described in this paper. These three methods result in the following representations: the curvature and torsion scale space images, the renormalized curvature and torsion scale space images, and the resampled curvature and torsion scale space images.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution of that curve. Several evolution properties of space curves are described in this paper. Some of these properties show that evolution is a physically plausible operation and characterize possible behaviours of space curves during evolution. Some show that the representations proposed in this paper in fact satisfy the required criteria. Others impose constraints on the location of a space curve as it evolves. Together, these evolution properties provide a theoretical foundation for the representation methods introduced in this paper.

TR-90-02 Logical Foundations for Category Theory, May 1990 Paul C. Gilmore and George K. Tsiknis

Category theory provides an abstract and uniform treatment for many mathematical structures, and increasingly has found applications in computer science. Nevertheless, no suitable logic within which the theory can be developed has been provided. The classical set theories of Zermelo-Fraenkel and Godel-Bernays, for example, are not suitable because of the use category theory makes of self-referencing abstractions, such as in the theorem that the set of categories forms a category. That a logic for the theory must be developed, Feferman has argued, follows from the use in the theory of concepts fundamental to logic, namely, propositional logic, quantification and the abstractions of set theory.

In this paper a demonstration that the logic and set theory NaDSet is suitable for category theory is provided. Specifically, a proof of the cited theorem of category theory is provided within NaDSet.

NaDSet succeeds as a logic for category theory because the resolution of the paradoxes provided for it is based on a reductionist semantics similar to the classical semantics of Tarski for first and second order logic. Self-membership and self-reference are not explicitly excluded. The reductionist semantics is most simply presented as a natural deduction logic. A sketch of the elementary and logical syntax, or proof theory, of the logic is provided in this paper.

Formalizations for most of the fundamental concepts and constructs in category theory are presented. NaDSet definitions for natural transformations and functor categories are given and an equivalence relation on categories is defined. Additional definitions and discussions on products, comma categories, universals, limits and adjoints are presented. They provide enough evidence to support the claim that any construct, not only in categories, but also in toposes, sheaves, triples and similar theories, can be formalized within NaDSet.

TR-90-03 Optimal Algorithms for Probabilistic Solitude Detection on Anonymous Rings, January 1990 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Probabilistic algorithms that err with probability at most $\epsilon \geq 0$ are developed for the Solitude Detection problem on anonymous asynchronous unidirectional rings. Solitude Detection requires that a nonempty set of distinguished processors determine whether or not there is only one distinguished processor. The algorithms transmit an optimal expected number of bits, to within a constant factor. Las Vegas and Monte Carlo algorithms that terminate both distributively and nondistributively are developed. Their bit complexities display a surprisingly rich dependence on the kind of algorithm and on the processors' knowledge of the size of the ring.

TR-90-04 Tight Lower Bounds for Probabilistic Solitude Verification on Anonymous Rings, January 1990 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Tight lower bounds on the expected bit complexity of the Solitude Verification problem on anonymous asynchronous unidirectional rings are established that match the upper bounds demonstrated in a companion paper [5]. In the algorithms of [5], a variety of techniques are applied; in contrast, we find that a single technique, applied carefully, suffices for all of the lower bounds. The bounds demonstrate that, for this problem, the expected bit complexity depends subtly on the processors' knowledge of the size of the ring, and on the type of algorithm (Las Vegas or Monte Carlo / distributive or nondistributive termination).

TR-90-05 Direct Evidence of Occlusion in Stereo and in Motion, January 1990 James Joseph Little and Walter E. Gillett

Discontinuities of surface properties are the most important locations in a scene; they are crucial for segmentation because they often coincide with object boundaries [TMB85]. Standard approaches to discontinuity detection decouple detection of disparity discontinuities from disparity computation. We have developed techniques for locating disparity discontinuities using information internal to the stereo algorithm of [DP86], rather than by post-processing the stereo data. The algorithm determines displacements by maximizing the sum, at overlapping small regions, of local comparisons. The detection methods are motivated by analysis of the geometry of matching and occlusion and the fact that detection is not just a pointwise decision. Our methods can be used in combination to produce robust performance. This research is part of a project to build a ``Vision Machine'' [PLG+88] at MIT that integrates outputs from early vision modules. Our techniques have been extensively tested on real images.

TR-90-06 A Measure of Semantic Relatedness for Resolving Ambiguities in Natural Language Database Requests, January 1990 Julia A. Johnson and Richard S. Rosenberg

A measure of semantic relatedness based on distance between objects in the database schema has previously been used as a basis for solving a variety of natural language understanding problems including word sense disambiguation, resolution of semantic ambiguities, and attachment of post noun modifiers. The use of min/max values which are usually recorded as part of the process of designing the database schema is proposed as a basis for solving the given problems as they arise in natural language database requests. The min/max values provide a new source of knowledge for resolving ambiguities and a semantics for understanding what knowledge has previously been used by distance measures in database schemas.

TR-90-07 Automatic Generation of Interactive Applications, February 1990 Emanuel G. Noik

As user interfaces become more powerful and easier to use, they are often harder to design and implement. This has created a great demand for tools which help programmers create interactive applications. While existing interface tools simplify interface creation, they typically focus only on the interface, do not provide facilities for simplifying application generation, and are too low-level. We have developed a tool which automatically generates complete interactive applications from a high-level description of the application's semantics. We argue that our system provides a very simple yet powerful environment for application development. Key advantages include: ease of use, separation of interface and application, interface and machine independence, more comprehensive programming aids, and greater potential for software reusability. While we tend to focus on the practical motivations for using such a tool, we conclude that this approach should form the basis of an important category of interface tools and deserves further study.

TR-90-08 Characterizing Diagnoses and Systems, June 1990 Johan de Kleer, Alan K. Mackworth and Raymond Reiter

Most approaches to model-based diagnosis describe a diagnosis for a system as a set of failing components that explains the symptoms. In order to characterize the typically very large number of diagnoses, usually only the minimal such sets of failing components are represented. This method of characterizing all diagnoses is inadequate in general, in part because not every superset of the faulty components of a diagnosis necessarily provides a diagnosis. In this paper we analyze the notion of diagnosis in depth, exploiting the notions of implicate/implicant and prime implicate/implicant. We use these notions to propose two alternative approaches for addressing the inadequacy of the concept of minimal diagnosis. First, we propose a new concept, that of kernel diagnosis, which is free of the problems of minimal diagnosis. Second, we propose to restrict the axioms used to describe the system to ensure that the concept of minimal diagnosis is adequate.

TR-90-09 Assumption Based Reasoning and Clause Management Systems, May 1990 Alex Kean and George K. Tsiknis

A {\em truth maintenance system} is a subsystem that manages the utilization of assumptions in the reasoning process of a problem solver. Doyle's original motivation for creating a truth maintenance system was to augment a reasoning system with a control strategy for activities concerning its non-monotonic state of beliefs. Hitherto, much effort has been invested in designing and implementing the concept of truth maintenance and little effort has been dedicated to the formalization that is essential to understanding it. This paper provides a complete formalization of the principle of truth maintenance. Motivated by Reiter and de Kleer's preliminary report on the same subject, this paper extends their study and gives a formal account of the concept of truth maintenance under the general title of {\em assumption based reasoning}. The concept of an assumption based theory is defined and the notions of explanation and direct consequence are presented as forms of plausible conclusion with respect to this theory. Additionally, the concepts of extension and irrefutable sentences are discussed together with other variations of explanation and direct consequence. A set of algorithms for computing these conclusions for a given theory is presented using the notion of prime implicates. Finally, an extended example on Boolean circuit diagnosis is given to exemplify these ideas.
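The prime implicate machinery underlying these algorithms can be illustrated with a small, brute-force sketch (a generic saturation procedure, not the algorithms of this report); the integer encoding of literals is an arbitrary choice.

    # Python: brute-force prime implicates by saturation under resolution,
    # keeping only non-tautological, subsumption-minimal clauses.
    # Literals are nonzero integers; -v denotes the negation of v.
    # Input clauses are assumed to be non-tautological.

    def resolve(c1, c2):
        """Return all non-tautological resolvents of clauses c1 and c2."""
        out = []
        for lit in c1:
            if -lit in c2:
                r = (c1 - {lit}) | (c2 - {-lit})
                if not any(-x in r for x in r):            # discard tautologies
                    out.append(frozenset(r))
        return out

    def prime_implicates(clauses):
        """Saturate under resolution; keep only subsumption-minimal clauses."""
        pool = {frozenset(c) for c in clauses}
        changed = True
        while changed:
            changed = False
            for c1 in list(pool):
                for c2 in list(pool):
                    for r in resolve(c1, c2):
                        if not any(p <= r for p in pool):            # r is not subsumed
                            pool = {p for p in pool if not r <= p}   # r subsumes others
                            pool.add(r)
                            changed = True
        return pool

    # Tiny example: from (a -> b) and (b -> c), the resolvent (a -> c),
    # i.e. the clause {-1, 3}, appears among the prime implicates.
    print(prime_implicates([[-1, 2], [-2, 3]]))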

TR-90-10 Parallel Algorithms for Routing in Non-Blocking Networks, January 1990 Lin and Nicholas Pippenger

(Abstract not available on-line)

TR-90-11 Multiple Light Source Optical Flow, January 1990 Robert J. Woodham

(Abstract not available on-line)

TR-90-12 Selection Networks, May 1990 Nicholas Pippenger

We establish an upper bound asymptotic to $2n \log_{2}n$ for the number of comparators required in a network that classifies $n$ values into two classes each containing $n/2$ values, with each value in one class less than or equal to each value in the other. (The best lower bound known for this problem is asymptotic to $(n/2) \log_{2}n$.)

TR-90-13 On a Lower Bound for the Redundancy of Reliable Networks with Noisy Gates, May 1990 Nicholas Pippenger, George D. Stamoulis and John N. Tsitsiklis

We prove that a logarithmic redundancy factor is necessary for the reliable computation of the parity function by means of a network with noisy gates. This result is the same as one claimed by Dobrushin and Ortyukov in 1977, but the proof they gave appears to be incorrect.

TR-90-14 Convergence Properties of Curvature and Torsion, May 1990 Farzin Mokhtarian

Multi-scale, curvature-based shape representation techniques for planar curves and multi-scale, torsion-based shape representation techniques for space curves have been proposed to the computer vision community by Mokhtarian \& Mackworth [1986], Mackworth \& Mokhtarian [1988] and Mokhtarian [1988]. These representations are referred to as the regular, renormalized and resampled curvature and torsion scale space images and are computed by combining information about the curvature or torsion of the input curve at a continuum of detail levels.

Arc length parametric representations of planar or space curves are convolved with Gaussian functions of varying standard deviation to compute evolved versions of those curves. The process of generating evolved versions of a curve as the standard deviation of the Gaussian function goes from 0 to $\infty$ is referred to as the evolution of that curve. When evolved versions of the curve are computed through an iterative process in which the curve is reparametrized by arc length in each iteration, the process is referred to as arc length evolution.

This paper contains a number of important results on the convergence properties of curvature and torsion scale space representations. It has been shown that every closed planar curve will eventually become simple and convex during evolution and arc length evolution and will remain in that state. This result is very important and shows that curvature scale space images are well-behaved in the sense that we can always expect to find a scale level at which the number of curvature zero-crossing points goes to zero and know that new curvature zero-crossing points will not be created beyond that scale level, which can be considered to be the high end of the curvature scale space image.

It has also been shown that every closed space curve will eventually tend to a closed planar curve during evolution and arc length evolution and that every closed space curve will eventually enter a state in which new torsion zero-crossing points will not be created during evolution and arc length evolution and will remain in that state.

Furthermore, the proofs are not difficult to comprehend. They can be understood by readers without an extensive knowledge of mathematics.
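The convolution step described above can be conveyed with a short sketch (a generic implementation, not the author's code): a closed planar curve is smoothed with Gaussians of increasing standard deviation and the curvature zero-crossings are counted at each scale. The sampling density and the schedule of standard deviations below are arbitrary choices.

    # Python/NumPy: curvature zero-crossings of a smoothed closed planar curve.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def curvature_zero_crossings(x, y, sigmas):
        """For each sigma, smooth the closed curve (x(u), y(u)) with a Gaussian
        and return the indices where the curvature changes sign."""
        rows = []
        for s in sigmas:
            # Derivatives of the smoothed coordinates via Gaussian derivative
            # filters; mode='wrap' treats the curve as closed.
            xu  = gaussian_filter1d(x, s, order=1, mode='wrap')
            yu  = gaussian_filter1d(y, s, order=1, mode='wrap')
            xuu = gaussian_filter1d(x, s, order=2, mode='wrap')
            yuu = gaussian_filter1d(y, s, order=2, mode='wrap')
            # Parametric curvature; its sign does not depend on the parametrization.
            kappa = (xu * yuu - yu * xuu) / (xu**2 + yu**2) ** 1.5
            zc = np.where(np.sign(kappa[:-1]) * np.sign(kappa[1:]) < 0)[0]
            rows.append((s, zc))
        return rows

    # A noisy ellipse: the spurious zero-crossings disappear as sigma grows.
    u = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    x = 2 * np.cos(u) + 0.05 * np.random.randn(u.size)
    y = np.sin(u) + 0.05 * np.random.randn(u.size)
    for sigma, zc in curvature_zero_crossings(x, y, sigmas=[2, 4, 8, 16]):
        print(f"sigma={sigma:5.1f}  zero-crossings={len(zc)}")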

TR-90-15 Mathematical Foundation for Orientation Based Representations of Shape, May 1990 Ying Li

Mathematical foundations for orientation based shape representation are reviewed. Basic tools, including the support function, mixed volume, vector addition, Blaschke addition, and the corresponding decompositions, as well as some basic facts about convex bodies, are presented. Results on several types of curvature measures, such as spherical images and m-th order area functions, are summarized. As a case study, the EGI approach is examined to see how the classical results on Minkowski's problem are utilized in computational vision. Finally, results on Christoffel's problem are surveyed, including constructive proofs.

TR-90-16 An Analysis of Exact and Approximation Algorithms for Dempster Shafer Theory, January 1990 Gregory M. Provan

TR-90-17 Polygon Triangulation in $O(N \log \log N)$ Time with Simple Data Structures, June 1990 David G. Kirkpatrick, Maria M. Klawe and Robert E. Tarjan

We give a new $O (n \log \log n )$-time deterministic algorithm for triangulating simple $n$-vertex polygons, which avoids the use of complicated data structures. In addition, for polygons whose vertices have integer coordinates of polynomially bounded size, the algorithm can be modified to run in $O (n \log^{*} n )$ time. The major new techniques employed are the efficient location of horizontal visibility edges that partition the interior of the polygon into regions of approximately equal size, and a linear-time algorithm for obtaining the horizontal visibility partition of a subchain of a polygonal chain, from the horizontal visibility partition of the entire chain. The latter technique has other interesting applications, including a linear-time algorithm to convert a Steiner triangulation of a polygon into a true triangulation.

TR-90-18 The Blocking Probability of Spider-Web Networks, June 1990 Nicholas Pippenger

We determine the limiting behaviour of the blocking probability for spider-web networks, a class of crossbar switching networks proposed by Ikeno. We use a probabilistic model proposed by the author, in which the busy links always form disjoint routes through the network. We show that if the occupancy probability is below the threshold $0.5857\ldots$, then the blocking probability tends to zero, whereas above this threshold it tends to one. This provides a theoretical explanation for results observed empirically in simulations by Bassalygo, Neiman and Vvedenskaya.

TR-90-19 The Effect of Knowledge on Belief: Conditioning, Specificity and the Lottery Paradox in Default Reasoning, June 1990 David Poole

How should what one knows about an individual affect default conclusions about that individual? This paper contrasts two views of ``knowledge'' in default reasoning systems. The first is the traditional view that one knows just what is in one's knowledge base. It is shown how, under this interpretation, having to know an exception is too strong for default reasoning. It is argued that we need to distinguish ``background'' and ``contingent'' knowledge in order to be able to handle specificity, and that this is a natural distinction. The second view of knowledge is what is contingently known about the world under consideration. Using this view of knowledge, a notion of conditioning that seems like a minimal property of a default is defined. Finally, a qualitative version of the lottery paradox is given; if we want to be able to say that individuals that are typical in every respect do not exist, we should not expect to conclude the conjunction of our default conclusions.

This paper expands on work in the proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning [35].

TR-90-20 Projected Implicit Runge-Kutta Methods for Differential-Algebraic Equations, August 1990 Uri Ascher and Linda R. Petzold

In this paper we introduce a new class of numerical methods, Projected Implicit Runge-Kutta methods, for the solution of index-two Hessenberg systems of initial and boundary value differential-algebraic equations (DAEs). These types of systems arise in a variety of applications, including the modelling of singular optimal control problems and parameter estimation for differential-algebraic equations such as multibody systems. The new methods appear to be particularly promising for the solution of DAE boundary value problems, where the need to maintain stability in the differential part of the system often necessitates the use of methods based on symmetric discretizations. Previously defined symmetric methods have severe limitations when applied to these problems, including instability, oscillation and loss of accuracy; the new methods overcome these difficulties. For linear problems we define an essential underlying boundary value ODE and prove well-conditioning of the differential (or state-space) solution components. This is then used to prove stability and superconvergence for the corresponding numerical approximations for linear and nonlinear problems.
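The idea of a projected method can be conveyed schematically for an index-two Hessenberg system $x' = f(x,y)$, $0 = g(x)$. The following is a generic constraint projection, shown only as an illustration; it is not necessarily the precise projection defined in this paper. An implicit Runge-Kutta step is first applied to the system to obtain a tentative value $\tilde{x}_{n+1}$, which is then projected back onto the constraint manifold by solving

    $x_{n+1} = \tilde{x}_{n+1} + g_{x}(\tilde{x}_{n+1})^{T} \lambda, \qquad g(x_{n+1}) = 0$

for $x_{n+1}$ and the multiplier $\lambda$ (e.g., by Newton's method), so that the accepted value satisfies the constraint exactly at the end of each step.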

TR-90-21 Automating the Generation of Interactive Applications, January 1990 Emanuel G. Noik

As user interfaces become more powerful and easier to use, they are often harder to design and implement. This has caused a great demand for interface tools. While existing tools ease interface creation, they typically do not provide mechanisms to simplify application development and are too low-level. Furthermore, existing tools do not provide effective mechanisms to port interactive applications across user interfaces. While some tools provide limited mechanisms to port applications across user interfaces which belong to the same class (e.g., the class of all standard graphical direct-manipulation user interfaces), very few can provide the ability to port applications across different interface classes (e.g., command-line, hypermedia, speech recognition and voice synthesis, virtual reality, etc.).

With my approach, the programmer uses an abstract model to describe the structure of the application, including the information that the application must exchange with the user, rather than describing a user interface which realizes these characteristics. By specifying application semantics at a very high level of abstraction it is possible to obtain a much greater separation between the application and the user interface. Consequently, the resulting applications can be ported not only across user interfaces which belong to a common interface class, but across interfaces which belong to distinct classes. This can be realized through simple recompilation --- source code does not have to be modified.

NAAG (Not Another Application Generator), a tool which embodies these ideas, enables programmers to create interactive applications with minimal effort. An application is modelled as a set of operations which manipulate objects belonging to user-defined object classes. The input to NAAG is a source program which describes classes, operations and their inputs and outputs, and the organization of operations within the application. Classes and operations are implemented as data structures and functions in a conventional programming language such as C. This model simplifies not only the specification and generation of the user interface, but the design and implementation of the underlying application.

NAAG utilizes existing technology such as macro-preprocessors, compilers, make programs, and low-level interface tools, to reduce the programming task. An application that is modified by adding, removing, or reorganizing artifacts (classes, operations, and menus) can be regenerated with a single command. Traditionally, software maintenance has been a very difficult task as well; due to the use of a simple abstract model, NAAG applications are also easier to maintain. Furthermore, this approach encourages software reuse: applications consisting of arbitrary collections of original and pre-existing artifacts can be composed easily; functions which implement abstract operations are independent of both user interface aspects and the context in which they are employed.

Application development is further simplified in the following ways: the programmer describes the semantics of the user interface --- a conventional explicit specification is not required; output primitives are defined in an interface-independent manner; many programming tasks such as resource management, event processing, and communication are either handled directly by the tool or else simplified greatly for the programmer.

NAAG is currently used by the members of the Laboratory for Computational Vision at the University of British Columbia to maintain a sophisticated image processing system.

TR-90-22 Logical Foundations for Programming Semantics, August 1990 Paul C. Gilmore and George K. Tsiknis

This paper was presented to the Sixth Workshop on Mathematical Foundations of Programming Semantics held at Queen's University, May 15-19, 1990.

The paper provides an introduction to a natural deduction based set theory, NaDSet, and illustrates its use in programming semantics. The need for such a set theory for the development of programming semantics is motivated by contrasting the presentation of recursive definitions within first order logic with their presentation within NaDSet. Within first order logic such definitions are always incomplete in a very simple sense: induction axioms must be added to the given definitions and extended with every new recursive definition. Within a set theory such as NaDSet, recursive definitions of sets are represented as terms in the theory and are complete in the sense that all properties of the set can be derived from its definition. Such definitions not only have this advantage of completeness, but they also permit recursively defined sets to be members of the universe of discourse of the logic and thereby be shown to be members of other defined sets.

The resolution of the paradoxes provided by NaDSet is dependent upon replacing the naive comprehension axiom scheme of an inconsistent first order logic with natural deduction rules for the introduction of abstraction terms into arguments. The abstraction terms admitted are a generalization of the abstraction terms usually admitted into set theory. In order to avoid a confusion of use and mention, the nominalist interpretation of the atomic formulas of the logic forces NaDSet to be second order, although only a single kind of quantifier and variable is required.

The use of NaDSet for programming semantics is illustrated for a simple flow diagram language that has been used to illustrate the principles of denotational semantics. The presentation of the semantics within NaDSet is not only fully formal, in contrast to the simply mathematical presentation of denotational semantics, but, because NaDSet is formalized as a natural deduction logic, its derivations can be simply checked by machine.

TR-90-23 A Formalization of Category Theory in NaDSet, August 1990 Paul C. Gilmore and George K. Tsiknis

This paper was presented to the Sixth Workshop on Mathematical Foundations of Programming Semantics held at Queen's University, May 15-19, 1990.

Because of the increasing use of category theory in programming semantics, the formalization of the theory, that is, the provision of an effective definition of what constitutes a derivation for category theory, takes on an increasing importance. Nevertheless, no suitable logic within which the theory can be formalized has been provided. The classical set theories of Zermelo-Fraenkel and Godel-Bernays, for example, are not suitable because of the use category theory makes of self-referencing abstractions, such as in the theorem that the set of categories forms a category. In this paper, a formalization of category theory and a proof of the cited theorem is provided within the logic and set theory NaDSet. NaDSet definitions for natural transformations and functor categories are given and an equivalence relation on categories is defined. Additional definitions and discussions on products, comma categories, universals, limits and adjoints are presented. They provide evidence that any construct, not only in categories, but also in toposes, sheaves, triples and similar theories, can be formalized within NaDSet.

TR-90-24 Errors and Perturbations in Vandermonde Systems, July 1990 James M. Varah

The Bjorck-Pereyra algorithm for Vandermonde systems is known to produce extremely accurate results in some cases, even when the matrix is very ill-conditioned. Recently, Higham has produced an error analysis of the algorithm which identifies when this behaviour will take place. In this paper, we observe that this analysis also predicts the error behaviour very well in general, and illustrate this with a series of extensive numerical tests. Moreover, we relate the computational error to that caused by perturbations in the matrix elements, and show that they are not always commensurate. We also discuss the relationship between these error and perturbation estimates and the ``effective well-condition'' of Chan and Foulser.
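For context, the Bjorck-Pereyra algorithm for the polynomial-interpolation form of the Vandermonde system (find coefficients $a_{j}$ with $\sum_{j} a_{j} x_{i}^{j} = f_{i}$) can be sketched in a few lines. The following Python version is a textbook $O(n^{2})$ formulation and does not reproduce the paper's analysis or experiments.

    # Python/NumPy: textbook dual Bjorck-Pereyra solver (polynomial interpolation).
    import numpy as np

    def bjorck_pereyra_dual(x, f):
        """Return monomial coefficients a of the polynomial interpolating (x, f),
        i.e. the solution of the dual Vandermonde system sum_j a_j * x_i**j = f_i."""
        x, a = np.asarray(x, float), np.array(f, float)
        n = len(x) - 1
        # Stage 1: Newton divided differences, computed in place.
        for k in range(1, n + 1):
            for j in range(n, k - 1, -1):
                a[j] = (a[j] - a[j - 1]) / (x[j] - x[j - k])
        # Stage 2: convert the Newton form to monomial coefficients.
        for k in range(n - 1, -1, -1):
            for j in range(k, n):
                a[j] -= x[k] * a[j + 1]
        return a

    # Example: interpolate f(t) = 1 + t^2 at t = 0, 1, 2.
    print(bjorck_pereyra_dual([0.0, 1.0, 2.0], [1.0, 2.0, 5.0]))  # ~ [1, 0, 1]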

TR-90-25 A Tight Lower Bound on the Size of Planar Permutation Networks, July 1990 Maria M. Klawe and Tom Leighton

(Abstract not available on-line)

TR-90-26 Superlinear Bounds for Matrix Searching Problems, July 1990 Maria M. Klawe

Matrix searching in classes of totally monotone partial matrices has many applications in computer science, operations research, and other areas. This paper gives the first superlinear bound for matrix searching in classes of totally monotone partial matrices, and also contains some new upper bounds for a class with applications in computational geometry and dynamic programming.

The precise results of this paper are as follows. We show that any algorithm for finding row maxima or minima in totally monotone partial $2n \times n$ matrices with the property that the non-blank entries in each column form a contiguous segment, can be forced to evaluate $\Omega (n \alpha (n))$ entries of the matrix in order to find the row maxima or minima, where $\alpha (n)$ denotes the very slowly growing inverse of Ackermann's function. A similar result is obtained for $n \times 2n$ matrices with contiguous non-blank segments in each row. The lower bounds are proved by introducing the concept of an independence set in a partial matrix and showing that any matrix searching algorithm for these types of partial matrices can be forced to evaluate every element in the independence set. A result involving lower bounds for Davenport-Schinzel sequences is then used to construct an independence set of size $\Omega (n \alpha (n))$ in the matrices of size $2n \times n$ and $n \times 2n$.

We also give two algorithms to find row maxima and minima in totally monotone partial $n \times m$ matrices with the property that the non-blank entries in each column form a contiguous segment ending at the bottom row. The first algorithm evaluates at most $O (m \alpha (n) + n)$ entries of the skyline matrix and performs at most that many comparisons, but may have $O (m \alpha (n) \log \log n+ n)$ total running time. The second algorithm is simpler and has $O (m \log \log n+ n)$ total running time.

A preliminary version of this paper appeared in the Proceedings of the First ACM/SIAM Symposium on Discrete Algorithms, 1990. The research in this paper was partially supported by an NSERC Operating Grant.
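As background, the following sketch solves the simpler, fully specified version of the row-maxima problem: a standard divide-and-conquer routine for a complete monotone matrix (one whose row-maxima column indices are nondecreasing down the rows). It is not one of the paper's partial-matrix algorithms; it only illustrates how monotonicity prunes the search.

    # Python: row maxima of a full monotone matrix by divide and conquer.
    def monotone_row_maxima(M):
        """Column index of the maximum in each row of a monotone matrix M."""
        n, m = len(M), len(M[0])
        ans = [0] * n

        def solve(r_lo, r_hi, c_lo, c_hi):
            if r_lo > r_hi:
                return
            mid = (r_lo + r_hi) // 2
            # Scan only the admissible column range for the middle row.
            best = max(range(c_lo, c_hi + 1), key=lambda c: M[mid][c])
            ans[mid] = best
            # Monotonicity confines the maxima of the remaining rows.
            solve(r_lo, mid - 1, c_lo, best)
            solve(mid + 1, r_hi, best, c_hi)

        solve(0, n - 1, 0, m - 1)
        return ans

    # Example: a small monotone matrix.
    M = [[3, 1, 0],
         [2, 4, 1],
         [1, 5, 6]]
    print(monotone_row_maxima(M))   # [0, 1, 2]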

TR-90-27 Generic Specification of Digital Hardware, September 1990 Jeffrey J. Joyce

This paper argues that generic description is a powerful concept in the context of formal verification, in particular, the formal verification of digital hardware. The paper also describes a technique for creating generic specifications in any language with (at least) the expressive power of higher-order logic. This technique is based on the use of higher-order predicates parameterized by function variables and type variables. We believe that this technique is a very direct (if not the most direct) way to specify hardware generically. Two examples of generic specification are given in the paper: a resettable counter and the programming level model of a very simple microprocessor.

TR-90-28 Parallel Techniques for Construction of Trees and Related Problems, October 1990 Teresa Maria Przytycka

The concept of a tree has been used in various areas of mathematics for over a century. In particular, trees appear to be one of the most fundamental notions in computer science. Sequential algorithms for trees are generally well studied. Unfortunately many of these sequential algorithms use methods which seem to be inherently sequential. One of the contributions of this thesis is the introduction of several parallel techniques for the construction of various types of trees and the presentation of new parallel tree construction algorithms using these methods. Along with the parallel tree construction techniques presented here, we develop techniques which have broader applications.

We use the Parallel Random Access Machine as our model of computation. We consider two basic methods of constructing trees: {\em tree expansion} and {\em tree synthesis}.

In the {\em tree expansion method}, we start with a single vertex and construct a tree by adding nodes of degree one and/or by subdividing edges. We use the parallel tree expansion technique to construct the tree representation for graphs in the family of graphs known as cographs.

In the {\em tree synthesis method}, we start with a forest of single node subtrees and construct a tree by adding edges or (for rooted trees) by creating parent nodes for some roots of the trees in the forest. We present a family of parallel and sequential algorithms to construct various approximations to the Huffman tree. All these algorithms apply the tree synthesis method by constructing a tree in a level-by-level fashion. To support one of the algorithms in the family we develop a technique which we call the {\em cascading sampling technique}.

One might suspect that the parallel tree synthesis method can be applied only to trees of polylogarithmic height, but this is not the case. We present a technique which we call the {\em valley filling technique} and develop its accelerated version, called the {\em accelerated valley filling technique}. We present an application of this technique to an optimal parallel algorithm for construction of minimax trees.
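For reference, the classical sequential greedy construction of the Huffman tree, whose approximations the thesis studies, can be sketched as follows; the level-by-level parallel constructions of the thesis are quite different and are not reproduced here.

    # Python: classical O(n log n) greedy Huffman construction.
    import heapq
    from itertools import count

    def huffman_tree(weights):
        """Repeatedly merge the two lightest subtrees.
        Returns a nested (left, right) tuple; leaves are the input weights."""
        tiebreak = count()                     # avoids comparing tuples of trees
        heap = [(w, next(tiebreak), w) for w in weights]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, next(tiebreak), (t1, t2)))
        return heap[0][2]

    print(huffman_tree([1, 1, 2, 3, 5]))   # (5, (3, (2, (1, 1)))) with this tie-breaking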

TR-90-29 Surface Curvature from Photometric Stereo, October 1990 R. J. Woodham

A method is described to compute the curvature at each point on a visible surface. The idea is to use the intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. This is the idea of photometric stereo. Previously, photometric stereo has been used to obtain local estimates of surface orientation. Here, an extension to photometric stereo is described in which the spatial derivatives of the intensity values are used to determine the principal curvatures, and associated directions, at each point on a visible surface. The result shows that it is possible to obtain reliable local estimates of both surface orientation and surface curvature without making global smoothness assumptions or requiring prior image segmentation.

The method is demonstrated using images of several pottery vases. No prior assumption is made about the reflectance characteristics of the objects to be analyzed. Instead, one object of known shape, a solid of revolution, is used for calibration purposes.
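The orientation-recovery step that this work extends can be illustrated with the standard Lambertian least-squares formulation of photometric stereo: with known unit light directions stacked in a matrix L and measured intensities I at a pixel, the albedo-scaled normal solves the linear system I = L(albedo * n). Note that this is a textbook setup assuming Lambertian reflectance and known lights, not the calibration-object approach used in the paper.

    # Python/NumPy: Lambertian photometric stereo with k >= 3 known lights.
    import numpy as np

    def lambertian_photometric_stereo(I, L):
        """Estimate albedo and unit normals from an image stack.
        I: (k, h, w) intensities, L: (k, 3) unit light directions."""
        k, h, w = I.shape
        g, *_ = np.linalg.lstsq(L, I.reshape(k, h * w), rcond=None)  # g = albedo * normal
        g = g.reshape(3, h, w)
        albedo = np.linalg.norm(g, axis=0)
        normals = g / np.maximum(albedo, 1e-12)
        return albedo, normals

    # Synthetic check: a flat patch tilted toward +x, three lights, albedo 1.
    L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    n_true = np.array([0.3, 0.0, 1.0]); n_true /= np.linalg.norm(n_true)
    I = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))   # no shadows
    albedo, normals = lambertian_photometric_stereo(I, L)
    print(albedo[0, 0], normals[:, 0, 0])                    # ~1.0 and ~n_true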

TR-90-30 A Theory of Multi-Scale, Curvature and Torsion Based Shape Representation for Planar and Space Curves, October 1990 Farzin Mokhtarian

This thesis presents a theory of multi-scale, curvature and torsion based shape representation for planar and space curves. The theory presented has been developed to satisfy various criteria considered useful for evaluating shape representation methods in computer vision. The criteria are: invariance, uniqueness, stability, efficiency, ease of implementation and computation of shape properties. The regular representation for planar curves is referred to as the curvature scale space image and the regular representation for space curves is referred to as the torsion scale space image. Two variants of the regular representations, referred to as the renormalized and resampled curvature and torsion scale space images, have also been proposed. A number of experiments have been carried out on the representations which show that they are very stable under severe noise conditions and very useful for tasks which call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation.

Planar or space curves are described at varying levels of detail by convolving their parametric representations with Gaussian functions of varying standard deviations. The curvature or torsion of each such curve is then computed using mathematical equations which express curvature and torsion in terms of the convolutions of derivatives of Gaussian functions and parametric representations of the input curves. Curvature or torsion zero-crossing points of those curves are then located and combined to form one of the representations mentioned above.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution or arc length evolution of that curve. This thesis contains a number of theorems about evolution and arc length evolution of planar and space curves along with their proofs. Some of these theorems demonstrate that evolution and arc length evolution do not change the physical interpretation of curves as object boundaries and others are in fact statements on the global properties of planar and space curves during evolution and arc length evolution and their representations. Other theoretical results shed light on the local behavior of planar and space curves just before and just after the formation of a cusp point during evolution and arc length evolution. Together these results provide a sound theoretical foundation for the representation methods proposed in this thesis.

TR-90-31 The Approximation of Implicates and Explanations, January 1990 Alex Kean

This paper studies the continuum between implicates and minimal implicates, and the continuum between explanations and minimally consistent explanations. The study is based on the approximation of the set of these objects. A general definition for approximated minimal implicates, called selective implicates, is presented. Three specific instances of selective implicates are studied: query-based, ATMS and length-based. Using the set of query-based minimal implicates, explanations are generated and the properties of these explanations are studied. The goal of these studies is to extract computationally feasible properties in order to achieve tractable abduction. The setting is the compiled approach using minimal implicates in Clause Management Systems.

TR-90-32 Performance Monitoring in Multi-transputer Networks, October 1990 Jie Cheng Jiang

Parallel architectures, like the transputer-based multicomputer network, offer potentially enormous computational power at modest cost. However, writing programs on a multicomputer to exploit parallelism is very difficult due to the lack of tools to help users understand the run-time behavior of the parallel system and detect performance bottlenecks in their programs. This thesis examines the performance characteristics of parallel programs in a multicomputer network, and describes the design and implementation of a real-time performance monitoring tool on transputers.

We started with a simple graph theoretical model in which a parallel computation is represented as a weighted directed acyclic graph, called the {\em execution graph}. This model allows us to easily derive a variety of performance metrics for parallel programs, such as program execution time, speedup, efficiency, etc. From this model, we also developed a new analysis method called {\em weighted critical path analysis} (WCPA), which incorporates the notion of parallelism into critical path analysis and helps users identify the program activities which have the most impact on performance. Based on these ideas, the design of a real-time performance monitoring tool was proposed and implemented on a 74-node transputer-based multicomputer. Major problems in parallel and distributed monitoring addressed in this thesis are: global state and global clock, minimization of monitoring overhead, and the presentation of meaningful data. New techniques and novel approaches to these problems have been investigated and implemented in our tool. Lastly, benchmarks are used to measure the accuracy and the overhead of our monitoring tool. We also demonstrate how this tool was used to improve the performance of an actual parallel application by more than 50\%.
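The critical-path ingredient of WCPA can be illustrated with the standard longest-path computation on a weighted directed acyclic graph; the weighting scheme and parallelism-aware refinements of WCPA itself are the thesis's contribution and are not reproduced here, and the event names in the example below are hypothetical.

    # Python: heaviest path in a weighted DAG given as (u, v, weight) triples.
    from collections import defaultdict

    def critical_path(edges):
        """Return (total weight, list of nodes) of the heaviest path in the DAG."""
        graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
        for u, v, w in edges:
            graph[u].append((v, w))
            indeg[v] += 1
            nodes |= {u, v}
        # Kahn topological order.
        order, stack = [], [n for n in nodes if indeg[n] == 0]
        while stack:
            u = stack.pop()
            order.append(u)
            for v, _ in graph[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    stack.append(v)
        # Longest-path relaxation in topological order.
        dist, pred = defaultdict(float), {}
        for u in order:
            for v, w in graph[u]:
                if dist[u] + w > dist[v]:
                    dist[v], pred[v] = dist[u] + w, u
        end = max(nodes, key=lambda n: dist[n])
        path, n = [end], end
        while n in pred:
            n = pred[n]
            path.append(n)
        return dist[end], path[::-1]

    # Hypothetical execution graph: node = event, weight = elapsed time between events.
    edges = [("start", "send", 2.0), ("start", "compute", 5.0),
             ("send", "recv", 1.0), ("compute", "recv", 0.5), ("recv", "end", 1.0)]
    print(critical_path(edges))   # (6.5, ['start', 'compute', 'recv', 'end'])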

TR-90-33 On the Power of a Posteriori Error Estimation for Numerical Integration and Function Approximation, November 1990 Feng Gao

We show that using a type of divided-difference test as an a posteriori error criterion, the solutions of a class of simple adaptive algorithms for numerical integration and function approximation, such as a piecewise Newton-Cotes rule or a piecewise Lagrange interpolation, are guaranteed to have an approximation-theoretic property of near-optimality. Namely, upon successful termination of the algorithm the solution is guaranteed to be close to the solution given by the spline interpolation method on the same mesh to within any prescribed tolerance.
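The flavour of such adaptive schemes can be conveyed by the familiar adaptive Simpson rule, which uses the discrepancy between a coarse and a refined rule as its a posteriori error test; this is a generic illustration, not the divided-difference criterion analysed in the paper.

    # Python: adaptive Simpson quadrature with an a posteriori error test.
    def adaptive_simpson(f, a, b, tol=1e-8):
        """Recursively split [a, b] until |S(left)+S(right) - S(whole)| is small."""
        def simpson(lo, hi):
            mid = 0.5 * (lo + hi)
            return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

        def recurse(lo, hi, whole, tol):
            mid = 0.5 * (lo + hi)
            left, right = simpson(lo, mid), simpson(mid, hi)
            err = left + right - whole
            if abs(err) <= 15.0 * tol:            # standard Richardson-style factor
                return left + right + err / 15.0
            return recurse(lo, mid, left, tol / 2.0) + recurse(mid, hi, right, tol / 2.0)

        return recurse(a, b, simpson(a, b), tol)

    import math
    print(adaptive_simpson(math.sin, 0.0, math.pi))   # ~2.0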

TR-90-34 Embedding All Binary Trees in the Hypercube, January 1990 Alan S. Wagner

An ${\cal O} (N^{2})$ heuristic algorithm is presented that embeds all binary trees, with dilation 2 and small average dilation, into the optimal sized hypercube. The heuristic relies on a conjecture about all binary trees with a perfect matching. It provides a practical and robust technique for mapping binary trees into the hypercube and ensures that the communication load is evenly distributed across the network assuming any shortest path routing strategy. One contribution of this work is the identification of a rich collection of binary trees that can be easily mapped into the hypercube.

TR-90-35 More Reasons Why Higher-Order Logic is a Good Formalism for Specifying and Verifying Hardware, January 1990 Jeffrey J. Joyce

(Abstract not available on-line)

TR-90-36 From Formal Verification to Silicon Compilation, January 1990 Jeffrey J. Joyce, Liu, Rushby, Shankar, Suaya and von Henke

Formal verification is emerging as a viable method for increasing design assurance for VLSI circuits. Potential benefits include reduction in the time and costs associated with testing and redesign, improved documentation and ease of modification, and greater confidence in the quality of the final product. This paper reports on an experiment whose main purpose was to identify the difficulties of integrating formal verification with conventional VLSI CAD methodology. Our main conclusion is that the most effective use of formal hardware verification will be at the higher levels of VLSI system design, with lower levels best handled by conventional VLSI CAD tools.

TR-90-37 The UBC OSI Distributed Application Programming Environment, January 1991 G. Neufeld, M. Goldberg and B. Brachman

(Abstract not available on-line)

TR-90-38 The Generation of Phrase-Structure Representation from Principles, January 1990 David C. LeBlanc

Implementations of grammatical theory have traditionally been based upon Context-Free Grammar (CFG) formalisms which all but ignore questions of learnability. Even implementations which are based upon theories of Generative Grammar (GG), a paradigm which is supposedly motivated by learnability, rarely address such questions. In this thesis we will examine a GG theory which has been formulated primarily to address questions of learnability and present an implementation based upon this theory. The theory argues from Chomsky's definition of epistemological priority that principles which match elements and structures from prelinguistic systems with elements and structures in linguistic systems are preferable to those which are defined purely linguistically or non-linguistically. A procedure for constructing phrase-structure representations from prelinguistic relations using principles of node percolation (rather than the traditional $\overline{X}$-theory of GG theories or phrase-structure rules of CFG theories) is presented, and this procedure is integrated into a left-to-right, primarily bottom-up parsing mechanism. Specifically, we present a parsing mechanism which derives phrase-structure representations of sentences from Case- and $\Theta$-relations using a small number of Percolation Principles. These Percolation Principles simply determine the categorial features of the dominant node of any two adjacent nodes in a representational tree, doing away with explicit phrase structure rules altogether. The parsing mechanism also instantiates appropriate empty categories using a filler-driven paradigm for leftward argument and non-argument movement. Procedures modelling learnability are not implemented in this work, but the applicability of the presented model to a computational model of language is discussed.

TR-90-39 Finding Extrema With Unary Predicates, January 1990 Feng Gao, Leonidas J. Guibas, David G. Kirkpatrick, William T. Laaser and James Saxe

We consider the problem of determining the maximum and minimum elements of $\{x_{1}, \ldots, x_{n}\}$, drawn from some finite universe $\cal U$ of real numbers, using only unary predicates of the inputs. It is shown that $\Theta(n + \log |{\cal U}|)$ unary predicate evaluations are necessary and sufficient, in the worst case. Results are applied to i) the problem of determining approximate extrema of a set of real numbers, in the same model, and ii) the multiparty broadcast communication complexity of determining the extrema of an arbitrary set of numbers held by distinct processors.

TR-90-40 Markov Random Fields in Visual Reconstruction, January 1990 Ola Siksik

Markov Random Fields (MRFs) are used in computer vision as an effective method for reconstructing a function starting from a set of noisy or sparse data, or in the integration of early vision processes to label physical discontinuities. The MRF formalism is attractive because it enables the assumptions used to be explicitly stated in the energy function. The drawbacks of such models have been the computational complexity of the implementation, and the difficulty in estimating the parameters of the model.

In this thesis, the deterministic approximation to the MRF models derived by Girosi and Geiger [10] is investigated, and following that approach, an MIMD-based algorithm is developed and implemented on a network of T800 transputers under the Trollius operating system. A serial version of the algorithm has also been implemented on a SUN 4 under Unix.

The network of transputers is configured as a 2-dimensional mesh of processors (currently 16, configured as a $4 \times 4$ mesh), and the input partitioning method is used to distribute the original image across the network.

The implementation of the algorithm is described, and the suitability of the transputer for image processing tasks is discussed.

The algorithm was applied to a number of images for edge detection, and produced good results in a small number of iterations.
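The input partitioning step can be illustrated generically: the image is split into a mesh of blocks, each padded with a halo of neighbouring pixels so that local updates near block boundaries have the data they need. The mesh size and one-pixel halo below are illustrative choices, not necessarily the thesis's exact configuration.

    # Python/NumPy: split an image into a mesh of blocks with replicated halos.
    import numpy as np

    def partition_with_halo(image, mesh=(4, 4), halo=1):
        """Return a dict mapping (i, j) mesh coordinates to overlapping blocks,
        each padded with `halo` pixels of neighbouring data (edge-replicated
        at the image border)."""
        padded = np.pad(image, halo, mode='edge')
        rows = np.array_split(np.arange(image.shape[0]), mesh[0])
        cols = np.array_split(np.arange(image.shape[1]), mesh[1])
        blocks = {}
        for i, r in enumerate(rows):
            for j, c in enumerate(cols):
                blocks[(i, j)] = padded[r[0]:r[-1] + 1 + 2 * halo,
                                        c[0]:c[-1] + 1 + 2 * halo]
        return blocks

    image = np.arange(64 * 64, dtype=float).reshape(64, 64)
    blocks = partition_with_halo(image)
    print(len(blocks), blocks[(0, 0)].shape)   # 16 blocks, each 18 x 18 here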

