Technical Reports

The ICICS/CS Reading Room

1991 UBC CS Technical Report Abstracts

TR-91-01 Revision in ACMS, April 1991 Alex Kean, 19 pages

The motivation for creating truth maintenance systems is twofold: first, to support the abductive process of generating explanations; and second, to perform the necessary {\em bookkeeping} for revision of the knowledge base. The process of revision is defined as the addition and deletion of knowledge from the knowledge base. A logical scheme for tracking conclusions in an assumption-based clause management system (ACMS), for the purpose of abduction and revision, is proposed. As a consequence, an incremental deletion scheme is derived. A protocol for assumption revision is demonstrated by a backtrack search example. The proposed ACMS is the first truth maintenance system that employs incremental deletion as part of its capability.

TR-91-02 A Multigrid Method for Shape from Shading, April 1991 Uri M. Ascher and Paul M. Carter, 17 pages

The shape-from-shading problem has received much attention in the Computer Vision literature in recent years. The basic problem is to recover the {\em shape z(x,y)} of a surface from a given map of its {\em shading}, i.e. its variation of brightness over a given domain. Mathematically, one has to solve approximately the {\em image irradiance equation}
\begin{center}
$E(x,y) = R(p,q)$
\end{center}
relating a given image irradiance {\em E(x,y)} to the radiance of the surface at each point {\em (x,y)}, with {\em R(p,q)} a given {\em reflectance map}, where $p = z_{x}$ and $q = z_{y}$.

The possible presence of noise and the lack of adequate boundary conditions add to the difficulty of this problem. A number of different approaches towards its solution have been proposed in the Vision literature, including various regularization models. However, a reliable, efficient solution method for practical instances has remained elusive so far.

In this paper we analyze the various solution models proposed with the aim of applying an efficient multigrid solver. A combination of an FMG-continuation technique with an appropriate discretization of one such solution model, proposed by B. Horn, yields an efficient solver. Our results are demonstrated by examples.
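As a concrete instance of a reflectance map (standard in the shape-from-shading literature generally, not specific to this report), a Lambertian surface illuminated from the gradient-space direction $(p_s, q_s)$ has

```latex
R(p,q) = \frac{1 + p\,p_s + q\,q_s}
              {\sqrt{1 + p^2 + q^2}\;\sqrt{1 + p_s^2 + q_s^2}},
\qquad p = z_x,\quad q = z_y .
```

Substituting such an $R$ into the image irradiance equation gives a first-order PDE in the unknown surface $z(x,y)$.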

TR-91-03 Stability of computational methods for constrained dynamics systems, May 1991 Uri M. Ascher and Linda R. Petzold, 28 pages

Many methods have been proposed for numerically integrating the differential-algebraic systems arising from the Euler-Lagrange equations for constrained motion. These are based on various problem formulations and discretizations. We offer a critical evaluation of these methods from the standpoint of stability.

Considering a linear model, we first give conditions under which the differential-algebraic problem is well-conditioned. This involves the concept of an essential underlying ODE. We review a variety of reformulations which have been proposed in the literature and show that most of them preserve the well-conditioning of the original problem. Then we consider stiff and nonstiff discretizations of such reformulated models. In some cases, the same implicit discretization may behave in a very different way when applied to different problem formulations, acting as a stiff integrator on some formulations and as a nonstiff integrator on others. We present the approach of projected invariants as a method for yielding problem reformulations which are desirable in this sense.

TR-91-04 Starshaped Sets, Their Distance Functions and Star Hulls, May 1991 Ying Li, 23 pages

TR-91-05 FDT Tools For Protocol Development, May 1991 A. A. F. Loureiro, Samuel T. Chanson and Song T. Vuong, 45 pages

FDT tools support protocol development by making certain activities feasible, easier to perform, more reliable, and faster. This paper discusses the desirable properties of FDT tools and classifies them according to the different stages of the protocol development cycle. An assessment of the tools available so far, and projections (or suggestions) of the tools to come, are given. A list of the tools that have appeared since the mid-1980s is also included.

TR-91-06 Parallel and Distributed Algorithms for Constraint Networks, May 1991 Ying Zhang and Alan K. Mackworth, 21 pages

This paper develops two new algorithms for solving a finite constraint satisfaction problem (FCSP) in parallel. In particular, we give a parallel algorithm for the EREW PRAM model and a distributed algorithm for networks of interconnected processors. Both of these algorithms are derived from arc consistency algorithms, which are preprocessing algorithms in general but can be used to solve an FCSP when it is represented by an acyclic constraint network. If an FCSP can be represented by an acyclic constraint network of size $n$ with width bounded by a constant, then (1) the parallel algorithm takes $O(\log n)$ time using $O(n)$ processors and (2) there is a mapping of this problem to a distributed computing network of $poly(n)$ processors which stabilizes in $O(\log n)$ time.
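The paper's EREW PRAM and distributed algorithms are not reproduced in this listing; as background, a minimal sequential sketch of arc consistency (the classical AC-3 scheme, which the abstract cites as the preprocessing step the parallel algorithms are derived from) may help. The representation below, with `domains` as sets and `constraints` as directed-arc predicates, is an illustrative assumption and not the paper's formulation.

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency on a binary FCSP.

    domains: dict var -> set of values (pruned in place)
    constraints: dict (x, y) -> predicate(vx, vy); arcs are directed,
    so include both (x, y) and (y, x) entries for each constraint.
    Returns False on a domain wipe-out, else True.
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False
            # Revisit arcs that prune other variables using x.
            for (u, v) in constraints:
                if v == x and u != y:
                    queue.append((u, v))
    return True
```

For example, with variables a, b, c each over {1, 2, 3} and the chain a &lt; b &lt; c, AC-3 alone narrows every domain to a single value, which is the sense in which arc consistency solves suitably structured (e.g. acyclic) networks outright.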

TR-91-15 Formulations of an Extended NaDSet, August 1991 Paul C. Gilmore and George K. Tsiknis, 28 pages

The NaDSet of this paper, a {\underline N}atural {\underline D}eduction based {\underline Set} theory and logic, is an extension of an earlier logic of the same name. It and some of its applications have been described in earlier papers. A proof of the consistency and completeness of NaDSet is provided elsewhere. In all these earlier papers NaDSet has been formulated as a Gentzen sequent calculus similar to the formulation LK by Gentzen of classical first order logic, although it was claimed that any natural deduction formalization of first order logic, such as Gentzen's natural deduction formulation NK, could be simply extended to be a formalization of NaDSet. This is indeed the case for the method of semantic tableaux of Beth or for Smullyan's version of the tableaux, but the extensions needed for other formalizations, including NK and the intuitionistic version NJ, require some care. The consistency of NaDSet is dependent upon restricting its axioms to those of the form ${\bf A} \rightarrow {\bf A}$, where {\bf A} is an atomic formula; an equivalent restriction for the natural deduction formulation is not obvious. The main purpose of this paper is to describe the needed restriction and to prove the equivalence of the resulting natural deduction logic with the Gentzen sequent calculus formulation for both the intuitionistic and the classical versions of NaDSet. Additionally the paper provides a brief sketch of the motivation for NaDSet and some of its proven and potential applications.

The authors gratefully acknowledge support from the Natural Science and Engineering Research Council of Canada.

TR-91-16 A Simple Primal Algorithm for Intersecting 3-Polyhedra in Linear Time, July 1991 Andrew K. Martin, 44 pages

This thesis presents, in full, a simple linear time algorithm for intersecting two convex 3-polyhedra P and Q. This differs from the first such algorithm -- due to Chazelle -- in that it operates entirely in primal space, whereas Chazelle's algorithm relies heavily on duality transforms. We use the hierarchical representations of polyhedra due to Dobkin and Kirkpatrick to induce cell complexes between coarse approximations P^k and P_k of P satisfying P_k subset-of P subset-of P^k, and similar approximations Q^k and Q_k of Q satisfying Q_k subset-of Q subset-of Q^k. We show that the structure of such complexes allows intersection queries to be answered efficiently. In particular, the sequence of cells intersected by a ray can be identified in time proportional to the length of the sequence. The algorithm operates by recursively computing the intersections: P^k intersect Q_k and P_k intersect Q^k. Then edges of the union of approximations P intersect Q^k and Q intersect P^k are traversed by tracing their intersection with the two cell complexes. We show that each such edge can be traversed in constant time. In the process, most of the edges of P intersect Q which lie simultaneously on the boundary of P and Q will be traced. We show that the total time needed to construct those which remain is linear in the size of P and Q. Though based on the same general principles, the algorithm presented here is somewhat simpler than that described by Chazelle, which uses only the cell complexes induced by the inner hierarchical representations of P and Q. By extending Chazelle's search structure to the space exterior to the given polyhedra, we avoid having to operate simultaneously in primal and dual spaces. This permits us to conceptualise the algorithm as traversing the edges of the boundary of (P intersect Q^k) union (Q intersect P^k). As a side effect, we avoid one half of Chazelle's recursive calls, which leads to a modest improvement in the asymptotic constants.

TR-91-17 On the CONSISTENCY and COMPLETENESS of an EXTENDED NaDSet, August 1991 Paul C. Gilmore, 54 pages

NaDSet in its extended form has been defined in several previous papers describing its applications. It is a {\underline N}atural {\underline D}eduction based {\underline Set} theory and logic. In this paper the logic is shown to enjoy a form of $\omega$-consistency from which simple consistency follows. The proof uses transfinite induction over the ordinals up to $\varepsilon_0$, in the style of Gentzen's consistency proof for arithmetic. A completeness proof in the style of Henkin is also given. Finally the cut rule of deduction is shown to be redundant.

TR-91-18 Photometric Stereo: Lambertian Reflectance and Light Sources with Unknown Direction and Strength, August 1991 R. J. Woodham, Y. Iwahori and Rob A. Barman, 11 pages

This paper reconsiders the familiar case of photometric stereo under the assumption of Lambertian surface reflectance and three distant point sources of illumination. Here, it is assumed that the directions to and the relative strengths of the three light sources are not known {\it a priori}. Rather, estimation of these parameters becomes part of the problem formulation.

Each light source is represented by a 3-D vector that points in the direction of the light source and has magnitude proportional to the strength of the light source. Thus, nine parameters are required to characterize the three light sources. It is shown that, regardless of object shape, triples of measured intensity values are constrained to lie on a quadratic surface having six degrees of freedom. Estimation of the six parameters of the quadratic surface allows the determination of the nine parameters of the light sources up to an unknown rotation.

This is sufficient to determine object shape, although attitude with respect to the world-based or the camera-based coordinate system cannot be simultaneously recovered without additional information.
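For background, the "familiar case" the abstract reconsiders -- Lambertian photometric stereo with three {\it known} light sources -- reduces, per pixel, to a 3x3 linear solve: stacking the light vectors as the rows of a matrix L, the intensity triple satisfies I = Ln, where n is the albedo-scaled surface normal. A minimal sketch under that known-lights assumption (this is not the paper's unknown-lights estimation procedure, and the function names are illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def recover_normal(L, I):
    """Known-lights Lambertian photometric stereo at one pixel.

    L: three light-source vectors (direction scaled by strength),
       taken as the rows of a 3x3 matrix.
    I: the three measured intensities.
    Solves L n = I via the adjugate (rows' cross products) and
    returns the albedo-scaled surface normal n.
    """
    l1, l2, l3 = L
    det = dot(l1, cross(l2, l3))          # must be nonzero: independent lights
    c23, c31, c12 = cross(l2, l3), cross(l3, l1), cross(l1, l2)
    return tuple((I[0]*c23[k] + I[1]*c31[k] + I[2]*c12[k]) / det
                 for k in range(3))
```

The requirement that the three light vectors be linearly independent (det nonzero) is what makes three sources the minimal familiar configuration.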

TR-91-20 Backward Error Estimates for Toeplitz and Vandermonde Systems, September 1991 Jim M. Varah, 14 pages

Given a computed solution $\overline{x}$ of a structured linear system $Ax = b$, it is of interest to find nearby systems with $\overline{x}$ as exact solution, and which have the same structure as A. In this paper, we show that the distance to these nearby structured systems can be much larger than for the corresponding general perturbation for Toeplitz and Vandermonde systems. In fact, even the correctly rounded solution $\hat{x}$ may require a structured perturbation of $O(\eta\parallel\hat{x}\parallel)$, not $O(\eta)$ as might be expected.
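For orientation, the usual {\it unstructured} normwise backward error -- the baseline quantity against which the paper's structured perturbations are compared -- can be computed directly from a residual. A small pure-Python sketch on a Vandermonde system; the node choice and the particular norms are illustrative assumptions, and the paper's structured backward-error analysis is not reproduced here:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting on a dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def backward_error(A, b, x):
    """Normwise (unstructured) backward error, infinity norm:
    max|b - Ax| / (||A||_inf * ||x||_inf)."""
    r = [bi - sum(aij * xj for aij, xj in zip(row, x))
         for row, bi in zip(A, b)]
    normA = max(sum(abs(v) for v in row) for row in A)
    normx = max(abs(v) for v in x)
    return max(abs(v) for v in r) / (normA * normx)

# Vandermonde system A[i][j] = t[i]**j; b chosen so the exact solution is all ones.
t = [1.0, 2.0, 3.0, 4.0]
A = [[ti ** j for j in range(len(t))] for ti in t]
b = [sum(row) for row in A]
x = solve(A, b)
eta = backward_error(A, b, x)
```

The computed `eta` is on the order of machine precision; the paper's point is that the smallest perturbation {\it constrained to stay Vandermonde (or Toeplitz)} can be far larger than this unstructured figure.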

TR-91-21 Leaders Election Without a Conflict Resolution Rule - Fast and Efficient Randomized Simulations among CRCW PRAMs, October 1991 Joseph Gil and Yossi Matias, 26 pages

We study the question of fast leaders election on TOLERANT, a CRCW PRAM model which tolerates concurrent write but does not support symmetry breaking. We give a randomized simulation of MAXIMUM (a very strong CRCW PRAM) on TOLERANT. The simulation is optimal, reliable, and runs in nearly doubly logarithmic time and linear space. This is the first simulation which is fast, optimal {\it and} space-efficient, and therefore grants true comparison of algorithms running on different CRCW PRAMs. Moreover, it implies that the memory to which concurrent read or concurrent write are assumed should {\it never} be more than linear; the rest of the memory can always be addressed under the EREW convention. The techniques presented in this paper tackle fundamental difficulties in the design of fast parallel algorithms.

TR-91-22 Model-Guided Grouping for 3-D Motion Tracking, October 1991 Xun Li and David G. Lowe, 14 pages

The objective of this paper is to develop a robust solution to the correspondence problem in model-based motion tracking, even when the frame-to-frame motion is relatively fast. A new approach called Model-Guided Grouping, which is used to derive intermediate-level structures as our matching tokens, is introduced. The groupings are guided and derived locally around the predicted model during object tracking, with the concurrent use of model structures. We choose junctions and parallel pairs as our matching tokens, since the information coded in these structures is relatively invariant across consecutive frames. The matching strategy is coarse-to-fine, and partial matching is also allowed when occlusions are present. A method for evaluating the probability of an accidental match based on junction groupings is also discussed. Systematic testing shows that matches based on these new methods improve correspondence reliability by about an order of magnitude over previous methods based on matching individual line segments.

TR-91-23 The Tree Model for Hashing: Lower and Upper Bounds, December 1991 Joseph Gil, Friedhelm Meyer auf der Heide and Avi Wigderson, 29 pages

TR-91-24 Surface Reconstruction by Coupled Depth/Slope Model with Natural Boundary, November 1991 Ying Li, 27 pages

This report describes an IRIS project on reconstructing surface height from gradient. The coupled depth/slope model developed by J.G. Harris has been used and augmented with natural boundary conditions. Experiments have been conducted with emphasis on how to deal with uncertainties about boundary values. Experiments have shown that the reconstructed surfaces conform to the original shapes if accurate boundary values are given. The algorithm fails to produce correct shapes when inaccurate boundary values are used. Natural boundary conditions are necessary conditions for the variational problem to be solved. Experiments have shown that natural boundary conditions can be relied upon when no estimates of boundary values can be made, except on occluding boundaries. When relative boundary values of occluding boundaries can be assumed, good reconstruction results can be obtained.
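Harris's coupled depth/slope model itself is not reproduced here; a minimal sketch of the underlying task -- recovering height from a gradient field by relaxing the associated discrete Poisson equation, with the boundary values of z held fixed (Dirichlet data) -- illustrates the boundary-value sensitivity the abstract discusses. The function name and grid layout are assumptions for illustration:

```python
def reconstruct(p, q, z, iters=500):
    """Recover height z from a gradient field (p, q) on a regular grid.

    Minimizing sum (z_x - p)^2 + (z_y - q)^2 leads to a discrete Poisson
    equation Laplacian(z) = div(p, q).  Interior values are relaxed by
    Gauss-Seidel; the boundary entries of z are held fixed, so the result
    depends directly on the supplied boundary values.
    """
    n, m = len(z), len(z[0])
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # backward-difference divergence of the given gradient field
                div = (p[i][j] - p[i - 1][j]) + (q[i][j] - q[i][j - 1])
                z[i][j] = (z[i - 1][j] + z[i + 1][j]
                           + z[i][j - 1] + z[i][j + 1] - div) / 4.0
    return z
```

With accurate boundary values (e.g. sampled from the true surface) the interior converges to the true shape; seeding the boundary with wrong values distorts the whole reconstruction, which is the failure mode the natural boundary conditions are introduced to avoid.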

TR-91-25 Computational Architectures for Responsive Vision: the Vision Engine, November 1991 James J. Little, Rod Barman, Stewart Kingdon and Jiping Lu, 10 pages

To respond actively to a dynamic environment, a vision system must process perceptual data in real time, and in multiple modalities. The structure of the computational load varies across the levels of vision, requiring multiple architectures. We describe the Vision Engine, a system with a pipelined early vision architecture, Datacube image processors, connected to a MIMD intermediate vision system, a set of Transputers. The system uses a controllable eye/head for tasks involving motion, stereo and tracking.

A simple pipeline model describes image transformation through multiple functional stages in early vision. Later processing (e.g., segmentation, edge linking, perceptual organization) cannot easily proceed on a pipeline architecture. A MIMD architecture is more appropriate for the irregular data and functional parallelism of later visual processing.

The Vision Engine is designed for general vision tasks. Early vision processing, both optical flow and stereo, is implemented in near real-time using the Datacube, producing dense vector fields with confidence measures, transferred at near video rates to the Transputer subsystem. We describe a simple implementation combining, in the Transputer system, stereo and motion information from the Datacube.

TR-91-26 The Logic of Constraint Satisfaction, November 1991 Alan K. Mackworth, 20 pages

The Constraint Satisfaction Problem (CSP) formalization has been a productive tool within Artificial Intelligence and related areas. The Finite CSP (FCSP) framework is presented here as a restricted logical calculus within a space of logical representation and reasoning systems. FCSP is formulated in a variety of logical settings: theorem proving in first order predicate calculus, propositional theorem proving (and hence SAT), the Prolog and Datalog approaches, constraint network algorithms, a logical interpreter for networks of constraints, the Constraint Logic Programming (CLP) paradigm and propositional model finding (and hence SAT, again). Several standard, and some not-so-standard, logical methods can therefore be used to solve these problems. By doing this we obtain a specification of the semantics of the common approaches. This synthetic treatment also allows algorithms and results from these disparate areas to be imported, and specialized, to FCSP; the special properties of FCSP are exploited to achieve, for example, completeness and to improve efficiency. It also allows export to the related areas. By casting CSP both as a generalization of FCSP and as a specialization of CLP it is observed that some, but not all, FCSP techniques lift to CSP and, perhaps, thereby to CLP. Various new connections are uncovered, in particular between the proof-finding approaches and the alternative model-finding approaches that have arisen in depiction and diagnosis applications.
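Of the formulations the abstract lists, the propositional model-finding (SAT) encoding is easy to sketch: one atom per variable/value pair, with "at least one value", "at most one value", and conflict clauses. The encoding below, with a brute-force model search standing in for a SAT solver, is an illustrative assumption rather than the paper's exact presentation:

```python
from itertools import product

def fcsp_to_clauses(domains, constraints):
    """Encode a binary FCSP as propositional clauses over atoms (var, val).

    A clause is a list of literals (atom, wanted_truth_value).
    """
    clauses = []
    for v, dom in domains.items():
        dom = list(dom)
        clauses.append([((v, d), True) for d in dom])          # at least one
        for i in range(len(dom)):
            for j in range(i + 1, len(dom)):                   # at most one
                clauses.append([((v, dom[i]), False), ((v, dom[j]), False)])
    for (x, y), pred in constraints.items():                   # conflicts
        for dx in domains[x]:
            for dy in domains[y]:
                if not pred(dx, dy):
                    clauses.append([((x, dx), False), ((y, dy), False)])
    return clauses

def brute_force_model(domains, clauses):
    """Stand-in for a SAT solver: try every value assignment and return the
    first one whose induced truth assignment satisfies all clauses."""
    vs = list(domains)
    for vals in product(*(list(domains[v]) for v in vs)):
        truth = {(v, d): d == val
                 for v, val in zip(vs, vals) for d in domains[v]}
        if all(any(truth[a] == want for a, want in cl) for cl in clauses):
            return dict(zip(vs, vals))
    return None
```

Models of the clause set correspond exactly to solutions of the FCSP, which is the sense in which propositional model finding is one of the equivalent formulations surveyed.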

TR-91-28 On Detecting Regularity of Functions: A Probabilistic Analysis, November 1991 F. Gao and G.W. Wasilkowski, 11 pages

TR-91-29 The EAN X.500 Directory Service, November 1991 Barry Brachman, Murray Goldberg, Gerald Neufeld and Duncan Stickings, 38 pages

The OSI directory system manages a distributed directory information database of named objects, defining a hierarchical relationship between the objects. An object consists of a set of attributes as determined by a particular class. Attributes are tuples that include a type and one or more values. The directory database is partitioned among a set of directory system agents. The directory service is provided by a collection of agents and incorporates distributed algorithms for name resolution and search, resulting in a network-transparent service. The objects can represent many real-world entities. The service is intended to serve a very large and diverse user community. This paper describes experiences gained in implementing the directory service. It also points out a number of areas in the current OSI directory design that require further work and then describes how the EAN directory system has addressed these difficulties.
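A toy sketch of the data model described above -- entries with typed, multi-valued attributes arranged hierarchically, with name resolution walking a distinguished name -- may make the terminology concrete. The class and method names are hypothetical and do not come from the EAN implementation:

```python
class Entry:
    """A directory object: typed, multi-valued attributes plus children."""

    def __init__(self, rdn, **attrs):
        self.rdn = rdn                                # relative distinguished name
        self.attrs = {t: set(vs) for t, vs in attrs.items()}
        self.children = {}                            # rdn -> child Entry

    def add(self, child):
        self.children[child.rdn] = child
        return child

    def resolve(self, dn):
        """Name resolution: follow a distinguished name (a list of RDNs)
        from this entry down the tree."""
        entry = self
        for rdn in dn:
            entry = entry.children[rdn]
        return entry

# A tiny directory information tree.
root = Entry("")
org = root.add(Entry("o=ExampleOrg", objectClass={"organization"}))
unit = org.add(Entry("ou=CS", objectClass={"organizationalUnit"}))
unit.add(Entry("cn=Directory User", objectClass={"person"}))
```

In the real system the tree is partitioned among directory system agents, and resolution may chain across agents; the single-process walk above only illustrates the naming model.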

TR-91-30 Implementing a Normative Theory of Communication in a Framework for Default Reasoning, November 1991 Andrew Csinger, 58 pages

This thesis presents a framework for inter-agent communication, represented and partially implemented with default reasoning. I focus on the limited goal of determining the meaning for a Hearer-agent of an utterance $\omega$ by a Speaker-agent, in terms of the beliefs of the interlocutors. This meaning is generally more than just the explicit propositional contents of $\omega$, and more than just the Speaker's goal to convey her belief that $\omega$.

One way of determining this meaning is to let the Hearer take stock of the implicit components of the Speaker's utterances. Among the implicit components of the meaning of $\omega$, I show in particular how to derive certain of its presuppositions with a set of default schemata using a framework for default reasoning.

More information can be extracted from the communications channel between interlocutors by adopting a normative model of inter-agent communication, and using this model to explain or 'make sense' of the Speaker's utterances. I construct such a model expressed in terms of a set of default principles of communication using the same framework for default reasoning.

The task of deriving the meaning of an utterance is similar to the job required of a user-interface, where the user is the Speaker-agent, and the interface itself is the Hearer-agent. The goal of a user-interface as Hearer is to make maximal use of the data moving along the communications channel between user and application.

The result is an integrated theory of normative, inter-agent communications expressed within an ontologically and logically minimal framework. This work demonstrates the development and application of a methodology for the use of default reasoning. The implementation of the theory is also presented, along with a discussion of its applicability to practical user-interfacing. A view emerges of user-modelling as a component of a user-interface.

TR-91-31 Solving Domain Equations in NaDSet, December 1991 Paul C. Gilmore and George K. Tsiknis, 23 pages

Solutions of systems of domain equations are the basis for what is called the Strachey-Scott approach to providing a denotational semantics for programming languages. The solutions offered by the mathematical methods developed by Scott, however, do not provide a computational basis for the semantics in the form of a proof theory. The purpose of this paper is to provide such a theory using the logic and set theory NaDSet.

The development of NaDSet was motivated by the following three principles:

1. Abstraction, along with truth functions and quantification, is one of the three fundamental concepts of logic and should be formalized in the same manner as the other two.
2. Natural deduction presentations of logic provide a transparent formalization of Tarski's reductionist semantics.
3. Atomic formulas receive their truth values from a nominalist interpretation.

That these three principles lead to a successful resolution of the set theoretic paradoxes and to a sound formulation of NaDSet has been demonstrated elsewhere with proofs of consistency and completeness. Applications of NaDSet to programming language semantics, category theory, non-well-founded sets, and to foundational questions of mathematics have also been demonstrated.

The financial support of the Natural Science and Engineering Research Council of Canada is gratefully acknowledged.

TR-91-32 Character Animation using Hierarchical B-Splines, September 1, 1991 David R. Forsey, 15 pages

The challenge of building and animating computer generated human and animal figures is complicated by the need to smoothly and realistically control the deformation of the surface around the articulations of the underlying skeleton. This paper reviews approaches to surface modelling for character animation and describes a geometric (as opposed to physically-based) approach to character modelling using an extension to the hierarchical B-spline. This technique provides differential attachment of the surface to the skeleton and allows multi-resolution control of surface deformation during animation. The attachment mechanism is simple, easy to use, inexpensive, extensible and can drastically reduce the effort required to animate a surface. The techniques introduced are illustrated using examples of human and animal forms.
