
1992 UBC CS Technical Report Abstracts

TR-92-01 Conditional Logics for Default Reasoning and Belief Revision, January 1992 Craig Boutilier, 223 pages

Much of what passes for knowledge about the world is defeasible, or can be mistaken. Our perceptions and premises can never be certain; we are forced to jump to conclusions in the presence of incomplete information; and we have to cut our deliberations short when our environment closes in. For this reason, any theory of artificial intelligence requires at its heart a theory of default reasoning, the process of reaching plausible but uncertain conclusions, and a theory of belief revision, the process of retracting and adding beliefs as new information becomes available.

In this thesis, we will address both of these problems from a logical point of view. We will provide a semantic account of these processes and develop conditional logics to represent and reason with default or normative statements about normal or typical states of affairs, as well as with statements of belief revision. The conditional logics will be based on standard modal systems, and the possible worlds approach will provide a uniform framework for the development of a number of such systems.

Within this framework, we will compare the two types of reasoning, determining that they are remarkably similar processes at a formal level of analysis. We will also show how a number of disparate types of reasoning may be analyzed within these modal systems, and to a large extent unified. These include normative default reasoning, probabilistic default reasoning, autoepistemic reasoning, belief revision, subjunctive, hypothetical or counterfactual reasoning, and abduction.

TR-92-02 Probabilistic Horn abduction and Bayesian networks, January 1992 David Poole, 57 pages

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates some assumptions about the rule base and some independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language, and shows a relationship between probability and argument-based systems.
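
As a rough illustration of the flavour of such a framework, here is a minimal sketch in Python with hypothetical rule and hypothesis names (it is not Poole's syntax, and it ignores several of the framework's assumptions): Horn rules whose bodies may mention assumable hypotheses, each hypothesis carrying a prior; an explanation of a goal is a set of hypotheses that, together with the rules, implies it, and under the independence assumptions its probability is the product of the priors of its members.

    # Sketch of probabilistic Horn abduction on a toy rule base (hypothetical example).
    # Rules map a head to alternative bodies; body atoms are either derived atoms or
    # assumable hypotheses, and hypotheses are taken to be mutually independent.
    from itertools import product

    RULES = {
        "wet_grass": [["rained"], ["sprinkler_on"]],
    }
    HYPOTHESES = {"rained": 0.3, "sprinkler_on": 0.2}

    def explanations(goal):
        """Sets of hypotheses that, together with the rules, entail `goal`."""
        if goal in HYPOTHESES:
            return [frozenset([goal])]
        results = []
        for body in RULES.get(goal, []):
            # an explanation of a body is a union of explanations of its atoms
            for combo in product(*(explanations(atom) for atom in body)):
                results.append(frozenset().union(*combo))
        return results

    def prob(expl):
        """Probability of jointly assuming `expl` under the independence assumption."""
        p = 1.0
        for h in expl:
            p *= HYPOTHESES[h]
        return p

    for e in explanations("wet_grass"):
        print(sorted(e), prob(e))   # ['rained'] 0.3 and ['sprinkler_on'] 0.2

Summing explanation probabilities to obtain the probability of the goal itself relies on further assumptions (for example, that distinct explanations are mutually exclusive), which is part of what the paper makes precise.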

TR-92-03 Shallow Grates, January 1992 Maria M. Klawe, 7 pages

This note proves the existence of acyclic directed graphs of logarithmic depth, such that a superlinear number of input-output pairs remain connected after the removal of any sufficiently small linearly sized subset of the vertices. The technique can be used to prove the analogous, and asymptotically optimal, result for graphs of arbitrary depth, generalizing Schnitger's grate construction for graphs of large depth. Interest in this question relates to efforts to use graph-theoretic methods to prove circuit complexity lower bounds for algebraic problems such as matrix multiplication. In particular, it establishes the optimality of Valiant's depth reduction technique as a method of reducing the number of connected input-output pairs. The proof uses Schnitger's grate construction, but also involves a lemma on expanding graphs which may be of independent interest.

TR-92-04 Two Algorithms for Decision Tree Search, February 1992 Runping Qi and David Poole, 26 pages

In this paper, two algorithms for decision tree search are presented. The basic idea behind these algorithms is to make use of domain-dependent information, in the form of an evaluation function such as that used by AO*, along with a search mechanism similar to the alpha-beta technique for minimax trees. One of the advantages of our algorithms over AO* is that they need only linear space. The solution computed by the first algorithm can be either optimal or sub-optimal, depending on the admissibility of the evaluation function. The second algorithm is an approximate algorithm which allows a tradeoff between computational efficiency and solution quality. Some results are presented on the correctness of the algorithms and on the quality of the solutions they compute.
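
The abstract does not reproduce the algorithms themselves; the following depth-first sketch (hypothetical node encoding, trivially optimistic heuristic, nonnegative costs assumed) only illustrates the general idea of pruning a decision tree with an AO*-style evaluation function while using space linear in the depth of the tree.

    # Depth-first evaluation of a decision tree with alpha-beta-like pruning (a sketch,
    # not the paper's algorithms).  Decision nodes take the cheapest child; chance nodes
    # take an expectation.  h() must be optimistic (a lower bound on expected cost) for
    # the returned value to be exact, mirroring admissibility of the evaluation function.

    def h(node):
        return 0.0   # placeholder heuristic; trivially optimistic for nonnegative costs

    def search(node, bound=float("inf")):
        """Exact expected cost if it is below `bound`; otherwise some value >= bound."""
        if node["type"] == "leaf":
            return node["cost"]
        if node["type"] == "decision":
            best = bound
            for child in node["children"]:
                best = min(best, search(child, best))      # cutoff against best so far
            return best
        # chance node: children are (probability, subtree) pairs
        acc = 0.0
        rest = sum(p * h(c) for p, c in node["children"])  # optimistic remainder
        for p, child in node["children"]:
            rest -= p * h(child)
            if acc + p * h(child) + rest >= bound:         # cannot beat `bound`: prune
                return acc + p * h(child) + rest
            acc += p * search(child, (bound - acc - rest) / p)
        return acc

    tree = {"type": "decision", "children": [
        {"type": "chance", "children": [(0.7, {"type": "leaf", "cost": 2.0}),
                                        (0.3, {"type": "leaf", "cost": 10.0})]},
        {"type": "leaf", "cost": 4.0}]}
    print(search(tree))   # 4.0: the sure option beats the 0.7*2 + 0.3*10 = 4.4 gamble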

TR-92-05 Approximating Polygons and Subdivisions with Minimum-Link Paths, March 1992 Leonidas J. Guibas, John E. Hershberger, Joseph S. B. Mitchell and Jack Scott Snoeyink, 27 pages

We study several variations on one basic approach to the task of simplifying a plane polygon or subdivision: fatten the given object and construct an approximation inside the fattened region. We investigate fattening by convolving the segments or vertices with disks, and attempt to approximate objects with the minimum number of line segments, or with a number near the minimum, by using efficient greedy algorithms. We give some variants that have linear or $O(n \log n)$ algorithms for approximating polygonal chains of {\em n} segments, and show that for subdivisions or chains with no self-intersections it is {\em NP}-hard to compute the best approximation.

TR-92-06 Cepstral Analysis of Optical Flow, November 1992 Esfandiar Bandari and James J. Little, 23 pages

Visual flow analysis from image sequences can be viewed as detection and retrieval of echoes or repeated patterns in two dimensional signals. In this paper we introduce a new methodology for optical flow analysis based on the cepstral filtering method. Cepstral filtering is a non-linear adaptive correlation technique used extensively in phoneme chunking and echo removal in speech understanding and signal processing. Different cepstral methodologies, in particular power cepstrum, are reviewed and more efficient variations for real image analysis are discussed.
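
For orientation, the power cepstrum referred to above is commonly defined (conventions vary slightly between authors) as $$ C_f = \left| \mathcal{F}^{-1}\!\left\{ \log \left| \mathcal{F}\{f\} \right|^{2} \right\} \right|^{2}, $$ where $\mathcal{F}$ is the (two-dimensional) Fourier transform of the signal or image patch $f$. An echo, i.e. a copy of the signal displaced by $d$, contributes a periodic ripple to the log spectrum and hence a peak in $C_f$ at $d$, which is why cepstral peaks can be read as displacement (and hence flow) estimates.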

Power cepstrum is extended to multiframe analysis. A correlative cepstral technique, cepsCorr, is developed; cepsCorr significantly increases the signal to noise ratio, virtually eliminates errors, and provides a predictive or multi-evidence approach to visual motion analysis.

TR-92-07 Speeding Up the Douglas-Peucker Line-Simplification Algorithm, April 1992 John Hershberger and Jack Snoeyink, 16 pages

We analyze the line simplification algorithm reported by Douglas and Peucker and show that its worst case is quadratic in n, the number of input points. We then give an algorithm, based on path hulls, that uses the geometric structure of the problem to attain a worst-case running time proportional to $n \log_2 n$, which is the best case of the Douglas-Peucker algorithm. We give complete C code and compare the two algorithms theoretically, by operation counts, and practically, by machine timings.
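
For reference, here is a standard textbook version of the procedure being analyzed (not the path-hull variant contributed by the paper); the linear scan for the farthest point at each level of the recursion is what produces the quadratic worst case.

    # Basic recursive Douglas-Peucker line simplification (standard version, not the
    # paper's path-hull speedup): split at the point farthest from the chord and recurse.
    import math

    def dist_to_chord(p, a, b):
        """Distance from point p to the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

    def douglas_peucker(points, eps):
        """Simplify a polyline to within tolerance eps, keeping the endpoints."""
        if len(points) < 3:
            return list(points)
        a, b = points[0], points[-1]
        idx, dmax = 0, 0.0
        for i in range(1, len(points) - 1):          # scan for the farthest point
            d = dist_to_chord(points[i], a, b)
            if d > dmax:
                idx, dmax = i, d
        if dmax <= eps:                              # the chord is good enough
            return [a, b]
        left = douglas_peucker(points[:idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return left[:-1] + right                     # do not duplicate the split point

    print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))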

TR-92-08 Symmetry in Self-Correcting Cellular Automata, April 1992 Nicholas Pippenger, 11 pages

We study a class of cellular automata that are capable of correcting finite configurations of errors within a finite amount of time. Subject to certain natural conditions, we determine the geometric symmetries such automata may possess. In three dimensions the answer is particularly simple: such an automaton may be invariant under all proper rotations that leave the underlying lattice invariant, but cannot be invariant under the inversion that takes each configuration into its mirror image.

TR-92-09 An Elementary Approach to Some Analytic Asymptotics, April 1992 Nicholas Pippenger, 20 pages

(Abstract not available on-line)

TR-92-10 Constraint Nets: A Semantic Model for Real-Time Embedded Systems, May 1992 Ying Zhang and Alan K. Mackworth, 16 pages

We take a real-time embedded system to be the control system of a plant in an open environment, where the control is realized by computation in digital or analog form. The key characteristic of a real-time embedded system is that computation is interleaved or in parallel with actions of the plant and events in the environment. Various models for real-time embedded systems have been proposed in recent years, most of which are extensions of existing concurrency models with delays or time bounds on transitions. In this paper, we present a different approach to modeling real-time systems. We take the overall system as a dynamic system, in which time or event structures are considered as an intrinsic dimension. Our model, called the Constraint Net model (CN), is capable of expressing dynamic behaviors in real-time embedded systems. It captures the most general structure of dynamic systems so that systems with discrete as well as dense time and asynchronous as well as synchronous event structures can be modeled in a unified framework. It models the dynamics of the environment as well as the dynamics of the plant and the dynamics of the computation and control. It provides multiple levels of abstraction so that a system can be modeled and developed hierarchically. By explicitly representing locality, CN can be used to explore true concurrency in distributed systems. With its rigorous formalization, CN provides a programming semantics for the design of real-time embedded systems. It also serves as a foundation for specification, verification, analysis and simulation of the complete dynamic system.

TR-92-11 Robust Model-based Motion Tracking Through the Integration of Search and Estimation, May 1992 David G. Lowe, 15 pages

A computer vision system has been developed for real-time motion tracking of 3-D objects, including those with variable internal parameters. This system provides for the integrated treatment of matching and measurement errors that arise during motion tracking. These two sources of error have very different distributions and are best handled by separate computational mechanisms. These errors can be treated in an integrated way by using the computation of variance in predicted feature measurements to determine the probability of correctness for each potential matching feature. In return, a best-first search procedure uses these probabilities to find consistent sets of matches, which eliminates the need to treat outliers during the analysis of measurement errors. The most reliable initial matches are used to reduce the parameter variance on further iterations, minimizing the amount of search required for matching more ambiguous features. These methods allow for much larger frame-to-frame motions than most previous approaches. The resulting system can robustly track models with many degrees of freedom while running on relatively inexpensive hardware. These same techniques can be used to speed verification during model-based recognition.

TR-92-12 A Correct Optimized Algorithm for Incrementally Generating Prime Implicates, November 1992 Alex Kean and George Tsiknis, 15 pages

In response to the demands of some applications, we developed an algorithm, called IPIA, to incrementally generate the prime implicates/implicants of a set of clauses. In an attempt to improve IPIA, some optimizations were also presented. It was pointed out to us that two of these optimizations, namely {\em subsumption} and {\em history restriction}, conflict with each other. Subsumption is a necessary operation to guarantee the primeness of implicates/implicants, while {\em history restriction} is a scheme that exploits the history of the consensus operation to avoid generating non-prime implicates/implicants. The original IPIA, in which {\em history restriction} was not considered, was proven correct. However, when {\em history restriction} was introduced later in the optimized version, it interacted with the subsumption operation to produce an incomplete set of prime implicates/implicants. This paper explains the problem in more detail, proposes a solution and provides a proof of its correctness.
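
For readers unfamiliar with the setting, the following brute-force sketch (it is not IPIA, is not incremental, and is far less efficient) shows the underlying consensus-plus-subsumption loop: resolvents are added only if no existing clause subsumes them, and any clauses they subsume are discarded, which is exactly the role subsumption plays in guaranteeing primeness.

    # Brute-force prime implicate generation by saturation under resolution (consensus)
    # with subsumption removal -- an illustration only, not the IPIA algorithm.
    # Literals are nonzero integers; -x denotes the negation of x; clauses are frozensets.

    def resolvents(c1, c2):
        """All non-tautological resolvents of two clauses."""
        out = []
        for lit in c1:
            if -lit in c2:
                r = (c1 - {lit}) | (c2 - {-lit})
                if not any(-l in r for l in r):      # drop tautologies
                    out.append(frozenset(r))
        return out

    def prime_implicates(clauses):
        clauses = {frozenset(c) for c in clauses}
        changed = True
        while changed:
            changed = False
            for c1 in list(clauses):
                for c2 in list(clauses):
                    for r in resolvents(c1, c2):
                        if not any(c <= r for c in clauses):          # r is not subsumed
                            clauses = {c for c in clauses if not r <= c} | {r}
                            changed = True
        return clauses

    # (a or b) and (not a or c)  ==>  prime implicates {a,b}, {-a,c}, {b,c}
    print(prime_implicates([{1, 2}, {-1, 3}]))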

TR-92-13 An Introduction to Formal Hardware Verification, June 1992 Carl-Johan Seger, 27 pages

Formal hardware verification has recently attracted considerable interest. The need for ``correct'' designs in safety-critical applications and the major cost associated with products delivered late are two of the main factors behind this. In addition, as the complexity of the designs increases, an ever smaller percentage of the possible behaviors of the designs will be simulated. Hence, the confidence in the designs obtained by simulation is rapidly diminishing. This paper provides an introduction to the topic by describing three of the main approaches to formal hardware verification: theorem-proving, model checking and symbolic simulation. We outline the underlying theory behind each approach, illustrate the approaches by applying them to simple examples, and discuss their strengths and weaknesses. We conclude the paper by describing current work on combining the approaches to achieve multi-level verification.

TR-92-14 Rearrangeable Circuit-Switching Networks, June 1992 Nicholas Pippenger, 11 pages

We present simple proofs of the basic results concerning the complexity of rearrangeable connectors and superconcentrators. We also define several types of networks whose connectivity properties interpolate between these extremes, and show that their complexities also interpolate.

TR-92-15 The Raven System, August 1992 Donald Acton, Terry Coatta and Gerald Neufeld, 43 pages

This report describes the distributed object-oriented system Raven. Raven is both a distributed operating system and a programming language. The language will be familiar to C programmers in that it has many constructs similar to those of the C programming language. Raven uses a simple, uniform object model in which all entities, at least conceptually, are objects. The object model supports classes for implementation inheritance and type checking. Both static and dynamic typing, as well as static and dynamic method binding, are supported. Object behavior is defined by the class of the object. The language is compiled (rather than interpreted) and supports several features that improve performance. Support also exists for parallel and distributed computing as well as persistent data. Raven is designed specifically for high-performance parallel and distributed applications.

Note:

The Raven System has undergone many changes since this report was published. The report accurately reflects the basic operating principles of Raven, but the syntactic details of Raven are no longer the same.

TR-92-16 The Parallel Protocol Framework, August 1992 Murray W. Goldberg, Gerald W. Neufeld and Mabo R. Ito, 48 pages

(Abstract not available on-line)

TR-92-17 Stabilization of DAEs and invariant manifolds, August 1992 Uri M. Ascher, Hongsheng Qin and Sebastian Reich, 28 pages

Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice.
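
For orientation, in its standard form (notation ours, not necessarily the report's): given a system with a position constraint $g(q) = 0$, Baumgarte's technique replaces the twice-differentiated constraint $\ddot{g} = 0$ by the damped equation $$ \ddot{g} + 2\alpha \dot{g} + \beta^{2} g = 0, \qquad \alpha, \beta > 0, $$ so that drift off the constraint manifold decays rather than accumulates. The difficulty alluded to above is that there is no general recipe for choosing $\alpha$ and $\beta$ so that this decay remains robust once the equations are discretized.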

Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible.

We discuss the best methods thus obtained and make concrete algorithmic suggestions.

TR-92-18 Collocation Software for Boundary Value Differential - Algebraic Equations, December 1992 Uri M. Ascher and Raymond J. Spiteri, 20 pages

We describe the methods and implementation of a general-purpose code, COLDAE. This code can solve boundary value problems for nonlinear systems of semi-explicit differential-algebraic equations (DAEs) of index at most 2. Fully implicit index-1 boundary value DAE problems can be handled as well.
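
In standard notation (ours, not necessarily the paper's), the semi-explicit DAEs in question have the form $$ x' = f(t, x, z), \qquad 0 = g(t, x, z), $$ which is of index 1 when $g_{z}$ is nonsingular, and of (Hessenberg) index 2 when $g = g(t, x)$ and $g_{x} f_{z}$ is nonsingular along the solution.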

The code COLDAE is an extension of the package COLNEW (COLSYS) for solving boundary value ODEs. The implemented method is piecewise polynomial collocation at Gaussian points, extended as needed by the projection method of Ascher-Petzold. For general semi-explicit index-2 problems, as well as for fully implicit index-1 problems, we define a {\em selective projected collocation} method, and demonstrate its use. The mesh selection procedure of COLSYS is modified for the case of index-2 constraints. We also discuss shooting for initial guesses.

The power and generality of the code are demonstrated by examples.

TR-92-19 The Numerical Solution of Delay-Differential-Algebraic Equations of Retarded and Neutral Type, December 1992 Uri M. Ascher and Linda R. Petzold, 29 pages

In this paper we consider the numerical solution of initial value delay-differential-algebraic equations (DDAEs) of retarded and neutral types, with a structure corresponding to that of Hessenberg DAEs. We give conditions under which the DDAE is well-conditioned, and show how the DDAE is related to an underlying retarded or neutral delay-ODE (DODE). We present convergence results for linear multistep and Runge-Kutta methods applied to DDAEs of index 1 and 2, and show how higher-index Hessenberg DDAEs can be formulated in a stable way as index-2 Hessenberg DDAEs.

TR-92-20 Probabilistic Horn abduction and Bayesian networks, August 1992 David Poole, 61 pages

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.

TR-92-23 Sequences of Revisions: On the Semantics of Nested Conditionals, September 1992 Craig Boutilier, 63 pages

The truth conditions for conditional sentences have been well-studied, but few compelling attempts have been made to define means of evaluating iterated or nested conditionals. We start with a semantic account of subjunctive conditionals based on the AGM model of revision, and extend this model in a natural fashion to account for right-nesting of conditionals, describing a process called ``natural revision''. These sentences capture sequences of propositional revisions of a knowledge base. We examine the properties of this model, demonstrating that the maximum amount of conditional information in a belief set is preserved after revision. Furthermore, we show how any sequence of revisions can be reduced to natural revision by a single sentence. This demonstrates that any arbitrarily nested sentence is equivalent to a sentence without nesting of the conditional connective. We show cases where revision models, even after the processing of an arbitrary sequence of revisions, can be described purely propositionally, and often in a manner that permits tractable inference. We also examine a form of revision known as ``paranoid revision'' which appears to be the simplest form of belief revision that fits within the AGM framework, and captures semantically the notion of full meet revision.

TR-92-24 Search for computing posterior probabilities in Bayesian networks, September 1992 David Poole, 35 pages

This paper provides a general-purpose search-based technique for computing posterior probabilities in arbitrary discrete Bayesian networks. This is an ``anytime'' algorithm that, at any stage, can estimate prior and posterior probabilities with a known error bound. It is shown how well it works for systems that have normality conditions that dominate the probabilities, as is the case in many diagnostic situations where we are diagnosing systems that work most of the time, and for commonsense reasoning tasks where normality assumptions (allegedly) dominate. We give a characterisation of those cases where it works well, and discuss how well it can be expected to work on average. Finally, we discuss a range of implementations, and explain why some promising approaches do not work as well as might be expected.
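
A hedged sketch of the general ``anytime'' idea (with a hypothetical network encoding; this is not the paper's algorithm or its error bounds): enumerate possible worlds best-first by prior probability, and use the prior mass still left in the queue to bracket everything not yet seen, which yields an interval around the posterior that narrows as more worlds are generated.

    # Anytime posterior bounds by best-first enumeration of possible worlds
    # (illustrative sketch on a toy network A -> B; not the paper's algorithm).
    import heapq, itertools

    parents = {"A": [], "B": ["A"]}                              # hypothetical toy network
    cpt = {"A": {(): 0.2}, "B": {(True,): 0.9, (False,): 0.1}}   # P(var=True | parents)
    order = ["A", "B"]                                           # a topological ordering

    def posterior_bounds(query, evidence, max_worlds=1000):
        """Bounds on P(query=True | evidence) after generating at most max_worlds worlds."""
        tie = itertools.count()                   # tie-breaker so the heap never compares dicts
        heap = [(-1.0, next(tie), {})]            # (-prefix probability, tie, partial world)
        mass_e = mass_qe = 0.0                    # seen mass agreeing with e, and with q and e
        done = 0
        while heap and done < max_worlds:
            neg_p, _, world = heapq.heappop(heap)
            p = -neg_p
            if len(world) == len(order):          # a complete possible world
                done += 1
                if all(world[v] == val for v, val in evidence.items()):
                    mass_e += p
                    if world[query]:
                        mass_qe += p
                continue
            var = order[len(world)]               # extend by the next unassigned variable
            pt = cpt[var][tuple(world[pa] for pa in parents[var])]
            for val, pv in ((True, pt), (False, 1.0 - pt)):
                heapq.heappush(heap, (-(p * pv), next(tie), {**world, var: val}))
        unseen = sum(-np for np, _, _ in heap)    # prior mass of all worlds not yet generated
        lo = mass_qe / (mass_e + unseen) if mass_e + unseen > 0 else 0.0
        hi = (mass_qe + unseen) / (mass_e + unseen) if mass_e + unseen > 0 else 1.0
        return lo, hi

    print(posterior_bounds("B", {"A": True}))     # brackets (and here reaches) P(B|A) = 0.9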

TR-92-25 The Rapid Recovery of three-Dimensional Orientation from Line Drawings, September 1992 R. A. Rensink, 188 pages

A computational theory is developed that explains how line drawings of polyhedral objects can be interpreted rapidly and in parallel at early levels of human vision. The key idea is that a time-limited process can correctly recover much of the three-dimensional structure of these objects when split into concurrent streams, each concerned with a single aspect of scene structure.

The work proceeds in five stages. The first extends the framework of Marr to allow a process to be analyzed in terms of resource limitations. Two main concerns are identified: (i) reducing the amount of nonlocal information needed, and (ii) making effective use of whatever information is obtained. The second stage traces the difficulty of line interpretation to a small set of constraints. When these are removed, the remaining constraints can be grouped into several relatively independent sets. It is shown that each set can be rapidly solved by a separate processing stream, and that co-ordinating these streams can yield a low-complexity ``approximation'' that captures much of the structure of the original constraints. In particular, complete recovery is possible in logarithmic time when objects have rectangular corners and the scene-to-image projection is orthographic. The third stage is concerned with making good use of the available information when a fixed time limit exists. This limit is motivated by the need to obtain results within a time independent of image content, and by the need to limit the propagation of inconsistencies. A minimal architecture is assumed, viz., a spatiotopic mesh of simple processors. Constraints are developed to guide the course of the process itself, so that candidate interpretations are considered in order of their likelihood. The fourth stage provides a specific algorithm for the recovery process, showing how it can be implemented on a cellular automaton. Finally, the theory itself is tested on various line drawings. It is shown that much of the three-dimensional structure of a polyhedral scene can indeed be recovered in very little time. It also is shown that the theory can explain the rapid interpretation of line drawings at early levels of human vision.

TR-92-26 The Complexity of Constraint Satisfaction Revisited, September 1992 Alan K. Mackworth and Eugene C. Freuder, 5 pages

This paper is a retrospective account of some of the developments leading up to, and ensuing from, the analysis of the complexity of some polynomial network consistency algorithms for constraint satisfaction problems.

TR-92-27 Starshaped Sets, The Radial Function and 3-D Attitude Determination, October 1992 Ying Li and Robert J. Woodham, 18 pages

Attitude characterizes the three rotational degrees of freedom between the coordinate system of a known object and that of a viewer. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. The domain of the representation is the entire sphere. Imaging from a single viewpoint typically determines a hemisphere of the representation. Matching the visible region to the full spherical model for a known object estimates 3-D attitude.

The radial function is used to define a new orientation-based representation of shape. The radial function is well-defined for a class of sets called {\em starshaped} in mathematics. A starshaped set contains at least one interior point from which all boundary points are visible. The radial function records the distance from the origin of the coordinate system to each boundary point. The novel contribution of this paper is to extend previous mathematical results on the matching problem for convex objects to starshaped objects. These results then allow one to transform the attitude determination problem for starshaped sets into an optimization problem for which standard numerical solutions exist. Numerical optimization determines the 3-D rotation that brings a sensed surface into correspondence with a known model.
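
In standard notation (the paper's conventions may differ): for a starshaped set $S$, with the origin placed at an interior point from which the whole boundary is visible, the radial function on the unit sphere is $$ \rho_{S}(u) = \max \{ \lambda \ge 0 : \lambda u \in S \}, \qquad u \in S^{2}, $$ and it rotates with the object: $\rho_{RS}(u) = \rho_{S}(R^{-1} u)$ for any rotation $R$. Attitude determination can then be posed, for instance, as minimizing a discrepancy such as $\int_{\Omega} ( \rho_{\rm model}(R^{-1} u) - \rho_{\rm data}(u) )^{2} \, du$ over rotations $R$, where $\Omega$ is the visible portion of the sphere.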

The required surface data can be obtained, for example, from laser range finding or from shape-from-shading. A proof-of-concept system has been implemented and experiments conducted on real objects using surface data derived from photometric stereo.

TR-92-28 The Psychology of Visualization, November 1992 Andrew Csinger, 27 pages

This document is a review of the literature of three related areas: psychophysical vision research, automatic display generation, and multi-dimensional data visualization. Common threads are explored, and a model of the visualization process is proposed which integrates aspects of these three areas.

In the review of psychophysics, attempts to find a set of primitive perceptual channels are explored. In the literature on automatic generation and visualization, attempts to employ these psychophysical findings are investigated. Finally, the proposed model is a framework which might facilitate this kind of cooperation.

TR-92-29 A Proposed Framework for Characterization of Robotic Systems, November 1992 Jane Mulligan, 14 pages

The plethora of approaches to planning and action in robotic systems calls for a unified framework onto which we can map various systems and compare their structure and performance. We propose such a framework, which allows us to examine these systems in a uniform way and discuss their adequacy for robotic problems. The inclusion of an environment specification in problem specifications is proposed as a means of clarifying a robot's abilities and creating a notion of context. Robotic systems are described as a set of {\em $<$information-source; computation$>$} strategies combined with a set of actuators.

TR-92-30 Parallel and Distributed Finite Constraint Satisfaction: Complexity, Algorithms and Experiments, November 1992 Ying Zhang and Alan K. Mackworth, 36 pages

This paper explores the parallel complexity of finite constraint satisfaction problems (FCSPs) by developing three algorithms for deriving minimal constraint networks in parallel. The first is a parallel algorithm for the EREW PRAM model, the second is a distributed algorithm for fine-grain interconnected networks, and the third is a distributed algorithm for coarse-grain interconnected networks. Our major results are as follows: given an FCSP represented by an acyclic constraint network (or join tree) of size n with treewidth bounded by a constant, (1) the parallel algorithm takes O(log n) time using O(n) processors, (2) there is an equivalent network of size poly(n), with treewidth also bounded by a constant, which can be solved by the fine-grain distributed algorithm in O(log n) time using poly(n) processors, and (3) the distributed algorithm for coarse-grain interconnected networks has linear speedup and linear scaleup. In addition, we have simulated the fine-grain distributed algorithm based on the logical time assumption, experimented with the coarse-grain distributed algorithm on a network of transputers, and evaluated the results against the theory.

TR-92-31 Will the Robot Do the Right Thing?, November 1992 Ying Zhang and Alan K. Mackworth, 20 pages

Constraint Nets have been developed as an algebraic on-line computational model of robotic systems. A robotic system consists of a robot and its environment. A robot consists of a plant and a controller. A constraint net is used to model the dynamics of each component and the complete system. The overall behavior of the system emerges from the coupling of each of its components. The question posed in the title is decomposed into two questions: first, what is the right thing? second, how does one guarantee the robot will do it? We answer these questions by establishing a formal approach to the specification and verification of robotic behaviors. In particular, we develop a real-time temporal logic for the specification of behaviors and a new verification method, based on timed $\forall$-automata, for showing that the constraint net model of a robotic system satisfies the specification of a desired global behavior of the system. Since the constraint net model of the controller can also serve as the on-line controller of the real plant, this is a practical way of building well-behaved robots. Running examples of a coordinator for a two-handed robot performing an assembly task and a reactive maze traveler illustrate the approach.

TR-92-32 The Support Function, Curvature Functions and 3-D Attitude, November 1992 Ying Li and Robert J. Woodham

Attitude determination finds the rotation between the coordinate system of a known object and that of a sensed portion of its surface. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. They are useful in attitude determination because they rotate in the same way as the object rotates. Three such representations are defined using, respectively, the support function and the first and second curvature functions. The curvature representations are unique for smooth, strictly convex objects. The support function representation is unique for any convex object.
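
For reference, in standard notation (not necessarily the paper's): the support function of a convex body $K$ is $$ h_{K}(u) = \max_{x \in K} \langle x, u \rangle, \qquad u \in S^{2}, $$ and under a rotation $R$ it transforms as $h_{RK}(u) = h_{K}(R^{T} u)$. This equivariance is what makes orientation-based representations rotate with the object, so that measured surface data can be matched to the stored model by optimizing over $R$.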

The essential mathematical basis for these representations is provided. The paper extends previous results on convex polyhedra to the domain of smooth, strictly convex surfaces. Combinations of the support function of a known object with curvature measurements from a visible surface transform attitude determination into an optimization problem for which standard numerical solutions exist.

Dense measurements of surface curvature are required. Surface data can be obtained from laser range finding or from shape-from-shading methods, including photometric stereo. A proof-of-concept system has been implemented and experiments conducted on a real object using surface orientation and surface curvature data obtained directly from photometric stereo.

TR-92-34 A Mathematically Precise Two-Level Formal Hardware Verification Methodology*, December 1992 Carl-Johan H. Seger and Jeffrey J. Joyce, 34 pages

Theorem-proving and symbolic trajectory evaluation are both described as methods for the {\em formal verification of hardware}. They are both used to achieve a common goal -- correctly designed hardware -- and both are intended to be an alternative to conventional methods based on non-exhaustive simulation. However, they have different strengths and weaknesses. The main significance of this paper is the description of a two-level approach to formal hardware verification, in which the HOL theorem prover is combined with the Voss verification system. From symbolic trajectory evaluation we inherit a high degree of automation and accurate models of circuit behavior and timing. From interactive theorem-proving we gain access to powerful mathematical tools such as induction and abstraction. The interface between HOL and Voss is, however, more than just an ad hoc translation of verification results obtained by one tool into input for the other tool. We have developed a ``mathematical'' interface in which the results of the Voss system are embedded in HOL. We have also prototyped a hybrid tool and used it to obtain verification results that could not easily be obtained with previously published techniques.

TR-92-33 Bringing Mathematical Research to Life in the Schools, November 1992 Maria M. Klawe, 16 pages

(Abstract not available on-line)

TR-92-35 Solving the Classic Radiosity Equation Using Multigrid Techniques, February 4, 1992 Robert R. Lewis, 8 pages

We investigate the application of multigrid techniques to the solution of the ``classic'' radiosity equation. After overviews of the global illumination problem and of radiosity, we describe the latter's solution via multigrid methods. An implementation of the multigrid algorithm presented here is able to solve the classic radiosity equation in about 50% of the time required by the more commonly used Gauss-Seidel approach. Although few researchers currently use classic radiosity, we discuss possibilities for the adaptation of multigrid methods to more recent radiosity solution techniques.
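
The ``classic'' radiosity equation referred to above is the linear system $B_i = E_i + \rho_i \sum_j F_{ij} B_j$ over patch radiosities. As a point of reference, here is a minimal sketch of the Gauss-Seidel baseline against which the timing comparison is made (the multigrid solver itself is not reproduced, and the patch data are hypothetical):

    # Gauss-Seidel sweeps for the classic radiosity system B = E + diag(rho) F B
    # (the baseline for the comparison above; the multigrid solver is not shown).
    import numpy as np

    def gauss_seidel_radiosity(E, rho, F, sweeps=50):
        """E: emission, rho: reflectivity, F: form-factor matrix, one entry per patch."""
        B = E.copy()
        for _ in range(sweeps):
            for i in range(len(B)):               # update each patch in place
                B[i] = E[i] + rho[i] * (F[i] @ B)
        return B

    # Tiny two-patch example with made-up form factors.
    E = np.array([1.0, 0.0])
    rho = np.array([0.5, 0.8])
    F = np.array([[0.0, 0.4], [0.4, 0.0]])
    print(gauss_seidel_radiosity(E, rho, F))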

TR-92-36 Multi-Resolution Surface Approximation for Animation, October 30, 1992 David Forsey and LiFeng Wang, 11 pages

This paper considers the problem of approximating a digitized surface in $R^3$ with a hierarchical bicubic B-spline to produce a manipulatable surface for further modeling or animation. The 3D data's original mapping from $R^3$ (multiple rows of cylindrical scans) is mapped into the parametric domain of the B-spline (also in $R^3$) using a modified chord-length parameterization. This mapping is used to produce a gridded sampling of the surface, and a modified full multi-grid (FMG) technique is employed to obtain a high-resolution B-spline approximation. The intermediate results of the FMG calculations generate the component overlays of a hierarchical spline surface reconstruction. Storage requirements of the hierarchical representation are reduced by eliminating offsets wherever their removal will not increase the error in the approximation by more than a given amount. The resulting hierarchical spline surface is interactively modifiable (modulo the size of the dataset and computing power) using the editing capabilities of the hierarchical surface representation, allowing either local or global changes to surface shape while retaining details of the scanned data.
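
For reference, the standard (unmodified) chord-length parameterization mentioned above assigns each sample a parameter proportional to accumulated polyline length; a minimal sketch:

    # Standard chord-length parameterization of a sequence of 3D samples (the paper
    # uses a modified version, which is not reproduced here).
    import math

    def chord_length_params(points):
        """Parameters in [0, 1] for a list of (x, y, z) samples."""
        d = [0.0]
        for p, q in zip(points, points[1:]):
            d.append(d[-1] + math.dist(p, q))     # accumulated polyline length
        total = d[-1] or 1.0                      # guard against coincident samples
        return [t / total for t in d]

    print(chord_length_params([(0, 0, 0), (1, 0, 0), (3, 0, 0)]))   # [0.0, 0.333..., 1.0]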

TR-92-37 A Ray Tracing Accelerator Based on a Hierarchy of 1D Sorted Lists, October 30, 1992 Alain Fournier and Pierre Poulin, 9 pages

Since the introduction of ray tracing as a rendering technique, several approaches have been proposed to reduce the number of ray/object tests. This paper presents yet another such approach based on a hierarchy of 1D sorted lists. A bounding box aligned with the axes encloses an object. The coordinates of each bounding box are ordered in three sorted lists (one for each axis) and are treated as events. Traversing a scene with a ray consists of traversing each sorted list in order, intersecting an object only when for this object a first event has been encountered (entered) in every dimension before a second event has been encountered (exited) in any dimension. To reduce the number of events (entries and exits) traversed, a hierarchy of sorted lists is constructed from a hierarchy of bounding boxes. The results are favourable for scenes ranging from moderate to high complexity. Further applications of the technique to hardware assist for ray tracing and to collision detection are discussed.

TR-92-38 Common Illumination between Real and Computer Generated Scenes, October 30, 1992 Alain Fournier, Atjeng S. Gunawan and Chris Romanzin, 9 pages

The ability to merge a real video image (RVI) with a computer-generated image (CGI) enhances the usefulness of both. To go beyond ``cut and paste'' and chroma-keying, and merge the two images successfully, one must solve the problems of common viewing parameters, common visibility and common illumination. The results can be dubbed Computer Augmented Reality (CAR). We present in this paper techniques for approximating the common global illumination for RVIs and CGIs, assuming some elements of the scene geometry of the real world and common viewing parameters are known. Since the real image is a projection of the exact solution for the global illumination in the real world (done by nature), we approximate the global illumination of the merged image by making the RVI part of the solution to the common global illumination computation. The objects in the real scene are replaced by a few boxes covering them; the image intensity of the RVI is used as the initial surface radiosity of the visible part of the boxes; the surface reflectance of the boxes is approximated by subtracting an estimate of the illuminant intensity based on the concept of ambient light; finally, global illumination using a classic radiosity computation is used to render the surfaces of the CGIs with respect to their new environment and to calculate the amount of image intensity correction needed for surfaces of the real image. An example animation testing these techniques has been produced. Most of the geometric problems have been solved in a relatively ad hoc manner. The viewing parameters were extracted by interactively matching the synthetic scene with the RVIs. The visibility is determined by the relative position of the ``blocks'' representing the real objects and the computer-generated objects, and a moving computer-generated light has been inserted. The results of the merging are encouraging, and would be effective for many applications.

TR-92-39 Harnessing Preattentive Processes for Multivariate Data Visualization, October 30, 1992 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 11 pages

A new method for designing multivariate data visualization tools is presented. These tools allow users to perform simple tasks such as estimation, target detection, and detection of data boundaries rapidly and accurately. Our design technique is based on principles arising from an area of cognitive psychology called preattentive processing. Preattentive processing involves visual features that can be detected by the human visual system without focusing attention on particular regions in an image. Examples of preattentive features include colour, orientation, intensity, size, shape, curvature, and line length. Detection is performed very rapidly by the visual system, almost certainly using a large degree of parallelism. We studied two known preattentive features, hue and orientation. The particular question investigated is whether rapid and accurate estimation is possible using these preattentive features. Experiments that simulated displays using our preattentive visualization tool were run. Analysis of the results of the experiments showed that rapid and accurate estimation is possible with both hue and orientation. A second question, whether interaction occurs between the two features, was answered negatively. This suggests that these and perhaps other preattentive features can be used to create visualization tools which allow high-speed multivariate data analysis.

TR-92-40 Investigating the Effectiveness of Direct Manipulation of 3D B-Spline Curves Using the Shape-Matching Paradigm, October 30, 1992 Stanley Jang, Kellogg S. Booth, David R. Forsey and Peter Graf, 10 pages

There are many mathematical formulations for free-form curves. These formulations are found in a variety of applications, including interactive curve design. Previous research has shown that the B-spline is an effective formulation for this setting. However, a possible drawback for the novice user of the B-spline is the fact that control vertices may lie far away from the curve, making its manipulation unintuitive. This problem is compounded in three dimensions. A direct manipulation technique, allowing a curve to be manipulated with points that lie on the curve itself, offers an alternative to control vertex manipulation. An experiment was conducted to compare the interactive design of 3D curves using control vertex manipulation of B-spline curves and a particular type of direct manipulation of B-spline curves. The results of the experiment revealed that direct manipulation was significantly faster than control vertex manipulation, without sacrificing accuracy in the shape of the final 3D curve. A general testbed designed for this investigation and related studies of 3D interaction techniques was used to conduct the experiment.

TR-92-41 Filtering Normal Maps and Creating Multiple Surfaces, October 30, 1992 Alain Fournier, 14 pages

``Bump'' mapping is a variant of texture mapping where the texture information is used to alter the surface normal. Current techniques to pre-filter textures all rely on the fact that the texture information can be linearly ``factored out'' of the shading equation, and therefore can be pre-averaged in some way. This is not the case with bump maps, and those techniques fail to filter them correctly. We propose here a technique to pre-filter bump maps by building a pyramid where each level stores distributions of normal vectors reconstructed from the distribution given by the underlying bump map. The distributions are represented as sums of a small number of Phong-like spreads of normal vectors. The technique, besides allowing an effective and smooth transition between a bump map and a single surface description, gives rise to the concept of a multiple surface, where each point of the single surface is characterized by more than one normal vector. This allows the description of visually complex surfaces by a trivial modification of current local illumination models. When a surface has an underlying microstructure, masking and self-shadowing are important factors in its appearance. Along with the filtering of normals we include the filtering of the masking and self-shadowing information. This is accomplished by computing the limiting angles of visibility and their variance along the two texture axes for the reconstructed distribution of normals. These techniques allow the modeling of any surface whose microstructure we can model geometrically. This includes complex but familiar surfaces such as anisotropic surfaces, many kinds of woven cloth, and stochastic surfaces.

TR-92-48 Spline Overlay Surfaces, October 30, 1993 Richard H. Bartels and David R. Forsey, 9 pages

We consider the construction of spline features on spline surfaces. The approach taken is a generalization of the hierarchical surface introduced in [Forsey88]. Features are regarded as spline-defined vector displacement fields that are overlain on existing surfaces. No assumption is made that the overlays are derived from the base surface. They may be applied with any orientation in a non-hierarchical fashion. In particular, we present a ``cheap'' version of the concept in which the displacement field is mapped to the base surface approximately, through the mapping of its control vectors alone. The result is a feature that occupies the appropriate position in space with respect to the base surface. It may be manipulated and rendered as an independent spline, thus avoiding the costs of a true displacement mapping. This approach is useful for prototyping and previewing during design. When a finished product is desired, of course, true displacement mapping is employed.

