
1993 UBC CS Technical Report Abstracts

TR-93-01 Multi-evidential Correlation & Visual Echo Analysis, January 1993 Esfandiar Bandari and James J. Little, 22 pages

Visual motion, stereo, texture, and symmetric boundaries are all repetitions of similar patterns in time or space. These repetitions can be viewed as "echoes" of one another, and the measurement of disparities, segmentation of textons, or detection of boundary symmetries translates into detection of echo arrival periods.

Cepstral filtering, polyspectral techniques, and waveform analysis are among the techniques used successfully for echo detection. This paper examines the application of cepstral analysis to computational vision, introduces improvements to the traditional methods, and provides a comparison with other routines presently in use.

Finally, we introduce a general multi-evidential correlation approach which lends itself to several computational routines. CepsCorr, as we call it, is a simple general technique that can accept different matching routines, such as cepstrum and/or phase correlation, as its measurement kernel. The evidence provided by each iteration of CepsCorr can then be combined to provide a more accurate estimate of motion or binocular disparity.
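As a concrete illustration of the echo-detection idea (a minimal sketch, not the authors' cepsCorr; the signal, echo delay, and constants are invented for the demonstration), the power cepstrum of a signal containing a delayed, attenuated copy of itself exhibits a peak at the delay:

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay, alpha = 1024, 37, 0.6

s = rng.standard_normal(n)          # underlying random pattern
x = s.copy()
x[delay:] += alpha * s[:-delay]     # superimpose an "echo" at lag 37

# Power cepstrum: inverse FFT of the log power spectrum.
power = np.abs(np.fft.rfft(x)) ** 2
cepstrum = np.fft.irfft(np.log(power + 1e-12), n)

# The echo appears as a peak at quefrency == delay.
peak = 8 + int(np.argmax(cepstrum[8 : n // 2]))  # skip low quefrencies
print("estimated echo period:", peak)            # expect 37
```

In the same way, the arrival period of a visual echo corresponds to a stereo disparity or a motion displacement.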

TR-93-02 Free Speech, Pornography, Sexual Harassment, and Electronic Networks, February 1993 Richard Rosenberg, 38 pages

Linking most universities and many companies around the world is a vast computer network called the Internet. More than 7 million people, at 1.2 million attached hosts, in 117 countries are able to receive and send messages to about 4,000 newsgroups representing the diverse interests of their users. Some of these newsgroups deal with technical computer issues, some are frivolous, and some carry obscene or pornographic material. For the purposes of this essay, it will be assumed that by most standards the postings of concern, consisting of stories with themes of bestiality, bondage, and incest, and encrypted pictures with scenes of nude women, men, and even children, are pornographic and offensive to many people. The issue under discussion is what to do about such offensive material. Issues of free speech, censorship, and sexual harassment arise. These, as well as many others, are explored and recommendations are made.

TR-93-03 Automatic Synthesis of Sequential Synchronizations, March 1993 Zheng Zhu and Steven D. Johnson, 17 pages

(Abstract not available on-line)

TR-93-04 Design and Analysis of Embedded Real-Time Systems: An Elevator Case Study, February 1993 Yang Zhang and Alan K. Mackworth, 44 pages

Constraint Nets have been developed as an algebraic on-line computational model of robotic systems. Timed forall-automata have been studied as a logical specification language of real-time behaviors. A general verification method has been proposed for checking whether a constraint net model satisfies a timed forall-automaton specification. In this paper, we illustrate constraint net modeling and timed forall-automata analysis using an elevator example. We start with a description of the functions and user interfaces of a simple elevator system, and then model the complete system in Constraint Nets. The analysis of a well-designed elevator system should guarantee that any request will be served within some bounded time. We specify such requirements in timed forall-automata, and show that the constraint net model of the elevator system satisfies the timed forall-automaton specification.

TR-93-05 On Seeing Robots, March 1993 Alan K. Mackworth, 13 pages

Good Old Fashioned Artificial Intelligence and Robotics (GOFAIR) relies on a set of restrictive Omniscient Fortune Teller Assumptions about the agent, the world and their relationship. The emerging Situated Agent paradigm is challenging GOFAIR by grounding the agent in space and time, relaxing some of those assumptions, proposing new architectures and integrating perception, reasoning and action in behavioral modules. GOFAIR is typically forced to adopt a hybrid architecture for integrating signal-based and symbol-based approaches because of the inherent mismatch between the corresponding on-line and off-line computational models. It is argued that Situated Agents should be designed using a unitary on-line computational model. The Constraint Net model of Zhang and Mackworth satisfies that requirement. Two systems for situated perception built in our laboratory are described to illustrate the new approach: one for visual monitoring of a robot's arm, the other for real-time visual control of multiple robots competing and cooperating in a dynamic world.

TR-93-06 A Computational Theory of Decision Networks, March 1993 Nevin Lianwen Zhang, Runping Qi and David Poole, 47 pages

Decision trees (Raiffa 1968) are the first paradigm in which an agent can deal with multiple decisions. The non-forgetting influence diagram formalism (Howard and Matheson 1983, Shachter 1986) improves on decision trees by exploiting the independence of random variables from decision variables and other random variables. In this paper, we introduce a notion of decision networks that further exploits the independence of decision variables from random variables and other decision variables. We also drop the semantic constraints of a total ordering of decisions and of a single value node. Only the fundamental constraint of acyclicity is kept.

From a computational point of view, it is desirable if a decision network is stepwise-solvable, i.e., if it can be evaluated by considering one decision at a time. However, decision networks in the most general sense need not be stepwise-solvable. A syntactic constraint called stepwise-decomposability is therefore imposed. We show that stepwise-decomposable decision networks can be evaluated not only by considering one decision at a time, but also by considering one portion of the network at a time.

TR-93-07 On Finite Covering of Infinite Spaces for Protocol Test Selection, April 1993 Masaaki Mori and Son T. Vuong, 17 pages

The core of the protocol test selection problem lies in how to derive a finite test suite from an infinite set of possible execution sequences (protocol behaviors). This paper presents two promising approaches to this problem: (i) the metric based topological approach, and (ii) the formal language theoretic approach; both aim at producing finite coverings of an infinite set of execution sequences. The former approach makes use of the compactness of metric spaces, which guarantees that the infinite metric space can be fully covered by a finite number of open "balls" (subspaces). The latter approach relies on the property that the Parikh mapping of the set of all execution sequences can be represented by a finite union of linear sets. Two simple protocol examples are given to elucidate the formal language theoretic approach.
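To make the language theoretic notion concrete, here is a minimal sketch of the Parikh mapping (the event alphabet and traces are invented for illustration): each execution sequence maps to a vector of event counts, and trace sets whose Parikh images form a finite union of linear sets admit finite coverings.

```python
from collections import Counter

alphabet = ["connect", "data", "ack", "disconnect"]

def parikh(sequence):
    """Map an execution sequence to its vector of event occurrence counts."""
    counts = Counter(sequence)
    return tuple(counts[a] for a in alphabet)

trace = ["connect", "data", "ack", "data", "ack", "disconnect"]
print(parikh(trace))  # (1, 2, 2, 1)

# A different interleaving with the same counts lands on the same vector,
# so one linear set can stand for infinitely many execution sequences.
print(parikh(["connect", "data", "data", "ack", "ack", "disconnect"]))
```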

TR-93-08 Formal Verification by Symbolic Evaluation of Partially-Ordered Trajectories, April 1993 Carl-Johan H. Seger and Randal E. Bryant, 38 pages

Symbolic trajectory evaluation provides a means to formally verify properties of a sequential system by a modified form of symbolic simulation. The desired system properties are expressed in a notation combining Boolean expressions and the temporal logic "next-time" operator. In its simplest form, each property is expressed as an assertion [ A => C ], where the antecedent A expresses some assumed conditions on the system state over a bounded time period, and the consequent C expresses conditions that should result. A generalization allows simple invariants to be established and proven automatically.

The verifier operates on system models in which the state space is ordered by "information content". By suitable restrictions to the specification notation, we guarantee that for every trajectory formula, there is a unique weakest state trajectory that satisfies it. Therefore, we can verify an assertion [ A => C ] by simulating the system over the weakest trajectory for A and testing adherence to C. Also, establishing invariants corresponds to simple fixed point calculations.

This paper presents the general theory underlying symbolic trajectory evaluation. It also illustrates the application of the theory to the task of verifying switch-level circuits as well as more abstract implementations.
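The ordering by information content can be illustrated with ordinary three-valued simulation (a toy sketch only, with an invented one-gate circuit; the actual verifier manipulates OBDD-encoded symbolic values rather than a single scalar unknown):

```python
X = "X"  # "unknown": the least informative value, below both 0 and 1

def and3(a, b):
    """Three-valued AND: returns 0/1 only when the inputs force it."""
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

# Antecedent "input a is 1": its weakest satisfying state leaves b at X.
print(and3(1, X))  # X -> the consequent "out = 1" is NOT verified

# A stronger antecedent constraining both inputs verifies the consequent.
print(and3(1, 1))  # 1
```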

TR-93-09 Decision Graph Search, April 1993 Runping Qi and David Poole, 43 pages

A decision graph is an AND/OR graph with a certain evaluation function. Decision graphs have been found to be a very useful representation for a variety of decision making problems. This article presents a number of search algorithms for computing an optimal solution from a given decision graph. These algorithms include one depth-first heuristic-search algorithm, one best-first heuristic-search algorithm, one anytime algorithm and two iterative-deepening depth-first heuristic-search algorithms. Similar to the *-minimax search algorithms of Ballard, our depth-first heuristic-search algorithm is developed from the alpha-beta algorithm for minimax tree search. In addition, we show how heuristic knowledge can be used to improve search efficiency. Furthermore, we present an anytime algorithm which is conveniently obtained from the depth-first heuristic-search algorithm without incurring much overhead. The best-first heuristic-search algorithm is obtained by modifying the well known AO* algorithm for AND/OR graphs with additive costs. The iterative-deepening algorithms result from combining iterative-deepening techniques with depth-first search techniques. Some experimental data on the performance of some of these algorithms are given.

TR-93-10 A New Method for Influence Diagram Evaluation, May 1993 Runping Qi and David Poole, 40 pages

As influence diagrams become a popular representational tool for decision analysis, influence diagram evaluation attracts more and more research interest. In this article, we present a new, two-phase method for influence diagram evaluation. In our method, an influence diagram is first mapped into a decision graph and then the analysis is carried out by evaluating the decision graph. Our method is more efficient than Howard and Matheson's two-phase method because, among other reasons, the decision graph generated by our method from an influence diagram can be much smaller than the one generated by Howard and Matheson's method for the same influence diagram. Like the most recent algorithms reported in the literature, our method can exploit independence relationships among the variables of decision problems, and it provides a clean interface between influence diagram evaluation and Bayesian net evaluation; thus, various well-established algorithms for Bayesian net evaluation can be used in influence diagram evaluation. In this sense, our method is as efficient as those algorithms. Furthermore, our method has a few unique merits. First, it can take advantage of asymmetric processing in influence diagram evaluation. Second, by using heuristic search techniques, it provides an explicit mechanism for making use of heuristic information that may be available in a domain-specific form. These additional merits make our method more efficient than the current algorithms in general. Finally, by using decision graphs as an intermediate representation, the value of perfect information can be computed in a more efficient way.

TR-93-11 A Framework for Interoperability Testing of Network Protocols, April 1993 Jadranka Alilovic-Curgus and Son T. Vuong, 29 pages

In this report, we extend the testing theory based on formal specifications by formalizing testing for interoperability with a new relation intop. Intuitively, P intop_S Q if, for every event offered by either P or Q, the concurrent execution of P and Q will be able to proceed with the traces in S, where S is their (common) specification. This theory is applicable to formal description methods that allow a semantic interpretation of specifications in terms of labelled transition systems. Existing notions of implementation preorders and equivalences in protocol testing theory are placed in this framework and their discriminating power for identifying processes which will interoperate is examined. As an example, a subset of the ST-II protocol is formally specified and its possible implementations are shown to interoperate if each implementation satisfies the intop relation with respect to S, the specification of the ST-II protocol (subset).

TR-93-12 Orientation-Based Representations of Shape and Attitude Determination, April 1993 Ying Li, 162 pages

The three rotational degrees of freedom between the coordinate system of a sensed object and that of a viewer define the attitude of the object. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. All orientation-based representations considered share a desirable property: the representation of a rotated object is equal to the rotated representation of the object before rotation. This makes the orientation-based representations well-suited to the task of attitude determination.

The mathematical background for orientation-based representations of shape is presented in a consistent framework. Among the orientation-based representations considered, the support function is one-to-one for convex bodies, the curvature functions are one-to-one for convex bodies up to a translation and the radial function is one-to-one for starshaped sets.

Using combinations of the support function and the curvature functions for convex bodies, the problem of attitude determination is transformed into an optimization problem. Previous mathematical results on the radial function for convex objects are extended to starshaped objects and the problem of attitude determination by the radial function also is transformed into an optimization problem. Solutions to the optimization problems exist and can be effectively computed using standard numerical methods.

A proof-of-concept system has been implemented and experiments conducted both on synthesized data and on real objects using surface data derived from photometric stereo. Experimental results verify the theoretical solutions.

Novel contributions of the thesis include: the representation of smooth convex objects by the support function and curvature functions; the definition of a new orientation-based representation for starshaped sets using the 3-D radial function; and solutions to the 3-D attitude determination problem using the aforementioned representations. In particular, the scope of orientation-based representations has been extended, both in theory and in practice, from convexity to starshapedness.

TR-93-13 Symplectic Integration of Constrained Hamiltonian Systems by Runge-Kutta Methods, April 1993 Sebastian Reich, 24 pages

Recent work reported in the literature suggests that for the long-term integration of Hamiltonian dynamical systems one should use methods that preserve the symplectic structure of the flow. In this paper we investigate the symplecticity of numerical integrators for constrained Hamiltonian systems. In the first part of the paper we show that those implicit Runge-Kutta methods which result in symplectic integrators for unconstrained Hamiltonian systems can be directly applied to constrained Hamiltonian systems. The resulting discretization scheme is symplectic but does not, in general, preserve the constraints. In the second part of the paper we discuss partitioned Runge-Kutta methods. Again it turns out that those partitioned Runge-Kutta methods which are symplectic for unconstrained systems can be applied to constrained Hamiltonian systems. We show that, in contrast to implicit Runge-Kutta methods, the class of symplectic partitioned Runge-Kutta methods includes methods that also preserve the constraints. In the third part of the paper we discuss constrained Hamiltonian systems with separable Hamiltonians from a Lie algebraic point of view. This approach not only provides a different perspective on the numerical integration of Hamiltonian systems but also allows for a straightforward backward error analysis.

TR-93-14 The use of conflicts in Searching Bayesian networks, May 1993 David Poole, 24 pages

This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted for use in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian networks. This is an "anytime" algorithm that at any stage can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit the probability distributions; this algorithm is most suited to the case with extreme probabilities. This paper presents a solution to the inefficiencies found in naive algorithms, and shows how the tools of the consistency-based diagnosis community (namely conflicts) can be used effectively to improve the efficiency. Empirical results with networks having tens of thousands of nodes are presented.
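The anytime bound can be sketched on a two-node network (a minimal illustration with invented numbers; Poole's conflict machinery for pruning the search is not shown): enumerating joint assignments in decreasing prior probability brackets the query probability by the mass found so far and the mass still unexplored.

```python
import itertools

p_a = {True: 0.99, False: 0.01}                   # P(A)
p_b_given_a = {True: {True: 0.95, False: 0.05},   # P(B | A)
               False: {True: 0.20, False: 0.80}}

# All joint assignments (worlds), most probable first.
worlds = sorted(
    ((p_a[a] * p_b_given_a[a][b], a, b)
     for a, b in itertools.product([True, False], repeat=2)),
    reverse=True)

mass_b = mass_seen = 0.0
for prob, a, b in worlds:
    mass_seen += prob
    if b:
        mass_b += prob
    # The true P(B) lies between the B-mass found so far and that
    # plus the probability mass not yet explored.
    print(f"{mass_b:.4f} <= P(B) <= {mass_b + 1.0 - mass_seen:.4f}")
```

With extreme probabilities, a few most-probable worlds already give tight bounds, which is the regime the paper targets.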

TR-93-15 Implicit-Explicit Methods for Time-Dependent PDEs, May 1993 Uri M. Ascher, Steven J. Ruuth and Brian Wetton, 27 pages

Implicit-explicit (IMEX) schemes have been widely used, especially in conjunction with spectral methods, for the time integration of spatially discretized PDEs of diffusion-convection type. Typically, an implicit scheme is used for the diffusion term and an explicit scheme is used for the convection term. Reaction-diffusion problems can also be approximated in this manner. In this work we systematically analyze the performance of such schemes, propose improved new schemes and pay particular attention to their relative performance in the context of fast multigrid algorithms and of aliasing reduction for spectral methods.

For the prototype linear advection-diffusion equation, a stability analysis for first, second, third and fourth order multistep IMEX schemes is performed.

Stable schemes permitting large time steps for a wide variety of problems and yielding appropriate decay of high frequency error modes are identified.

Numerical experiments demonstrate that weak decay of high frequency modes can lead to extra iterations on the finest grid when using multigrid computations with finite difference spatial discretization, and to aliasing when using spectral collocation for spatial discretization. When this behaviour occurs, use of weakly damping schemes such as the popular combination of Crank-Nicolson with second order Adams-Bashforth is discouraged and better alternatives are proposed.

Our findings are demonstrated on several examples.
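To convey the flavor of the schemes analyzed, here is a minimal first-order IMEX step for the periodic advection-diffusion model problem (a sketch with invented grid parameters, not the paper's higher-order multistep schemes): advection is advanced explicitly and diffusion implicitly.

```python
import numpy as np

n, a, nu = 64, 1.0, 0.1
dx, dt = 2 * np.pi / n, 0.01
x = np.arange(n) * dx
u = np.sin(x)

# Periodic centered-difference matrices: (up @ u)[i] = u[i+1], etc.
I = np.eye(n)
up = np.roll(I, 1, axis=1)
dn = np.roll(I, -1, axis=1)
D1 = (up - dn) / (2 * dx)            # first derivative
D2 = (up - 2 * I + dn) / dx ** 2     # second derivative

A = I - dt * nu * D2                 # implicit (diffusion) operator
for _ in range(200):
    rhs = u - dt * a * (D1 @ u)      # explicit (advection) part
    u = np.linalg.solve(A, rhs)

# Exact solution: sin(x - a t) damped by exp(-nu t).
t = 200 * dt
err = np.abs(u - np.exp(-nu * t) * np.sin(x - a * t)).max()
print("max error:", err)
```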

TR-93-16 A Real-Time 3D Motion Tracking System, April 1993 Johnny Wai Yee Kam, 95 pages

Vision allows one to react to rapid changes in the surrounding environment. The ability of animals to control their eye movements and follow a moving target has always been a focus in biological research. The biological control system that governs the eye movements is known as the oculomotor control system. Generally, the control of eye movements to follow a moving visual target is known as gaze control.

The primary goal of motion tracking is to keep an object of interest, generally known as the visual target, in the view of the observer at all times. Tracking can be driven by changes perceived in the real world. One obvious change introduced by a moving object is the change in its location, which can be described in terms of displacement. In this project, we will show that by using stereo disparity and optical flow, two significant types of displacement, as the major source of directing signals in a robotic gaze control system, we can determine where the moving object is located and perform the tracking duty, without recognizing what the object is.

The recent advances in computer hardware, exemplified by our Datacube MaxVideo 200 system and a network of Transputers, make it possible to perform image processing operations at video rates, and to implement real-time systems with input images obtained from video cameras. The main purposes of this project are to establish some simple control theories to monitor changes perceived in the real world, and to apply such theories in the implementation of a real-time three-dimensional motion tracking system on a binocular camera head system installed in the Laboratory for Computational Intelligence (LCI) at the Department of Computer Science of the University of British Columbia (UBC).

The control scheme of our motion tracking system is based on the Perception-Reasoning-Action (PRA) regime. We will describe an approach of using an active monitoring process together with a process for accumulating temporal data to allow different hardware components running at different rates to communicate and cooperate in a real-time system working on real world data. We will also describe a cancellation method to reduce the unstable effects of background optical flow generated from ego-motion, and to create a "pop-out" effect in the motion field that eases the burden of target selection. The results of various experiments, and the difficulties of tracking without any knowledge of the world or the objects, will also be discussed.

TR-93-17 Multiresolution Geometric Algorithms Using Wavelets I: Representation for Parametric Curves and Surfaces, May 1993 L. M. Reissell, 75 pages

We apply wavelet methods to the representation of multiscale digitized parametric curves and surfaces. These representations allow the derivation of efficient multiresolution geometric algorithms. In this report, we outline the definitions and basic properties of the resulting surface and curve hierarchies, and discuss the criteria on wavelets used in geometric representation. We then construct a new family of compactly supported symmetric biorthogonal wavelets, pseudo-coiflets, well suited to geometric multiresolution representation: these wavelets have coiflet-like moment properties, and in particular, the reconstructing scaling functions are interpolating. The methods have been implemented in C/C++ software, which has been tested on geographic image data and other examples, comparing different choices of underlying wavelets.
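The structure of such a representation can be sketched with the simplest wavelet standing in for the report's pseudo-coiflets (whose filters are constructed there): each coordinate function of a curve is decomposed into coarse coefficients plus details, and the coarse coefficients alone give a smoothed approximation.

```python
import numpy as np

def haar_step(c):
    """One analysis step along the parameter axis: averages + differences."""
    return (c[:, 0::2] + c[:, 1::2]) / 2.0, (c[:, 0::2] - c[:, 1::2]) / 2.0

def haar_inverse(coarse, detail):
    c = np.empty((coarse.shape[0], 2 * coarse.shape[1]))
    c[:, 0::2], c[:, 1::2] = coarse + detail, coarse - detail
    return c

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
curve = np.stack([np.cos(t), np.sin(2 * t)])   # a 2 x 256 Lissajous curve

coarse, details = curve, []
for _ in range(3):                             # three levels of hierarchy
    coarse, d = haar_step(coarse)
    details.append(d)

# Reconstruct with all details zeroed: an approximation driven by
# only the 32 coarse samples per coordinate.
approx = coarse
for d in reversed(details):
    approx = haar_inverse(approx, np.zeros_like(d))
print("coarse points:", coarse.shape[1],
      "max deviation:", np.abs(approx - curve).max())
```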

TR-93-18 Linking BDD-Based Symbolic Evaluation to Interactive Theorem-Proving, May 1993 Jeffrey J. Joyce and Carl-Johan H. Seger, 6 pages

A novel approach to formal hardware verification results from the combination of symbolic trajectory evaluation and interactive theorem-proving. From symbolic trajectory evaluation we inherit a high degree of automation and accurate models of circuit behaviour and timing. From interactive theorem-proving we gain access to powerful mathematical tools such as inductions and abstraction. We have prototyped a hybrid tool and used this tool to obtain verification results that could not be easily obtained with previously published techniques.

TR-93-19 Fault Coverage Evaluation of Protocol Test Sequences, June 1993 Jinsong Zhu and Samuel T. Chanson, 22 pages

In this paper, we investigate the quality of a given protocol test sequence in detecting faulty implementations of the specification. The underlying model is a deterministic finite state machine (FSM). The basic idea is to construct all FSMs having n + i states (where n is the number of states in the specification and i a small integer) that will accept the test sequence but do not conform to the specification. It differs from the conventional simulation method in that it is not necessary to consider various forms of fault combinations and is guaranteed to identify any faulty machines. Preprocessing and backjumping techniques are used to reduce the computational complexity. We have constructed a tool based on the model and used it in assessing several UIO-based optimization techniques. We observed that the use of multiple UIO sequences and overlaps can sometimes weaken the fault coverage of the test sequence. The choice of the transfer sequences and order of the test subsequences during optimization may also affect fault coverage. Other observations and analysis on the properties of test sequences are also made.
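The coverage idea can be sketched on a toy machine (invented here; the report's tool enumerates all machines with n + i states rather than single-transition mutants): a faulty machine escapes detection exactly when it reproduces the specification's outputs on the whole test sequence.

```python
import copy, itertools

# spec[state][input] = (next_state, output); a two-state Mealy machine.
spec = {0: {"a": (1, 0), "b": (0, 1)},
        1: {"a": (0, 1), "b": (1, 0)}}
test = ["a", "a", "b", "a", "b", "b"]

def run(fsm, seq, start=0):
    state, out = start, []
    for sym in seq:
        state, o = fsm[state][sym]
        out.append(o)
    return out

expected = run(spec, test)
undetected = 0
for s, i in itertools.product(spec, ["a", "b"]):
    for target in spec:                  # redirect transition s --i--> target
        if target == spec[s][i][0]:
            continue                     # unchanged: not a mutant
        mutant = copy.deepcopy(spec)
        mutant[s][i] = (target, spec[s][i][1])
        if run(mutant, test) == expected:
            undetected += 1              # mutant accepts the test sequence
print("undetected transfer-fault mutants:", undetected)
```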

TR-93-20 Numerical Integration of the Generalized Euler Equations, June 1993 Sebastian Reich, 16 pages

(Abstract not available on-line)

TR-93-21 Performance Measures for Robot Manipulators: A Unified Approach, June 1993 Kees van den Doel and Dinesh Pai, 60 pages

(Abstract not available on-line)

TR-93-22 How Fast Can ASN.1 Encoding Rules Go?, July 1993 Mike Sample, 6 pages

(Abstract not available on-line)

TR-93-23 Abduction As Belief Revision, July 1993 Craig Boutilier and Veronica Becher, 16 pages

(Abstract not available on-line)

TR-93-24 Sequential Regularization Methods for Higher Index DAEs with Constraint Singularities: I. Linear Index-2 Case, July 1993 Uri Ascher and Ping Lin, 25 pages

Standard stabilization techniques for higher index DAEs often involve elimination of the algebraic solution components. This may not work well if there are singularity points where the constraints Jacobian matrix becomes rank-deficient. This paper proposes instead a sequential regularization method (SRM) -- a functional iteration procedure for solving problems with isolated singularities which have smooth differential solution components.

For linear index-2 DAEs we consider both initial and boundary value problems. The convergence of the SRM is described and proved in detail. Various aspects of the subsequent numerical discretization of the regularized problems are discussed as well and some numerical verifications are carried out.

TR-93-25 Testgen+: An Environment for Protocol Test Suite Generation, Selection and Validation, July 1993 Son T. Vuong and Sangho Leon, 31 pages

(Abstract not available on-line)

TR-93-26 Computational Methods for the Shape from Shading Problem, July 1993 Paul M. Carter, 159 pages

(Abstract not available on-line)

TR-93-27 Unit Disk Graph Recognition is NP-Hard, August 1993 Heinz Breu and David Kirkpatrick, 22 pages

Unit disk graphs are the intersection graphs of unit diameter closed disks in the plane. This paper reduces SATISFIABILITY to the problem of recognizing unit disk graphs. Equivalently, it shows that determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have sphericity at most 3, is NP-hard. We show how this reduction can be extended to 3 dimensions, thereby showing that unit sphere graph recognition, or determining if a graph has sphericity 3 or less, is also NP-hard. We conjecture that K-sphericity is NP-hard for all fixed K greater than 1.
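For contrast with the hardness result, the forward direction is easy: given centers, the unit disk graph joins two vertices exactly when their points lie at distance at most 1 (a small sketch with invented points). Recognition asks whether such centers exist for a given abstract graph, and that is what the paper proves NP-hard.

```python
import itertools, math

points = {"u": (0.0, 0.0), "v": (0.8, 0.0), "w": (1.6, 0.0)}

edges = {(p, q) for p, q in itertools.combinations(sorted(points), 2)
         if math.dist(points[p], points[q]) <= 1.0}
print(sorted(edges))  # [('u', 'v'), ('v', 'w')]: a path, not a triangle
```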

TR-93-28 Generating Random Monotone Polygons, September 1993 Jack Snoeyink and Chong Zhu, 20 pages

We propose an algorithm that generates x-monotone polygons for any given set of n points uniformly at random. The time complexity of our algorithm is O(K), where n <= K <= n^2 is the number of edges of the visibility graph of the x-monotone chain whose vertices are the given n points. The space complexity of our algorithm is O(n).
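A rejection-sampling sketch conveys the object being generated (also uniform, since each simple x-monotone polygon corresponds to exactly one split of the interior points into upper and lower chains, but potentially exponentially slow, unlike the authors' O(K) algorithm):

```python
import itertools, random

def crosses(p, q, r, s):
    """Proper crossing test for segments pq and rs (general position)."""
    def orient(a, b, c):
        v = (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s) and
            orient(r, s, p) != orient(r, s, q))

def random_monotone_polygon(points):
    pts = sorted(points)
    while True:
        # Assign each interior point to the upper or lower chain.
        upper = [p for p in pts[1:-1] if random.random() < 0.5]
        lower = [p for p in pts[1:-1] if p not in upper]
        poly = [pts[0]] + upper + [pts[-1]] + lower[::-1]
        edges = list(zip(poly, poly[1:] + poly[:1]))
        if not any(crosses(*e, *f)
                   for e, f in itertools.combinations(edges, 2)
                   if len({*e, *f}) == 4):   # skip edges sharing a vertex
            return poly

random.seed(2)
print(random_monotone_polygon([(0, 0), (1, 3), (2, 1), (3, 2), (4, 0)]))
```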

TR-93-29 A Compact Piecewise-Linear Voronoi Diagram for Convex Sites in the Plane, October 1993 Mike McAllister, D. Kirkpatrick and J. Snoeyink, 30 pages

In the plane, the post-office problem, which asks for the closest site to a query site, and retraction motion planning, which asks for a one-dimensional retract of the free space of a robot, are both classically solved by computing a Voronoi diagram. When the sites are k disjoint convex sets, we give a compact representation of the Voronoi diagram, using O(k) line segments, that is sufficient for logarithmic time post-office location queries and motion planning. If these sets are polygons with n total vertices given in standard representations, we compute this diagram optimally in O(k log n) deterministic time for the Euclidean metric and in O(k log n log m) deterministic time for the convex distance function defined by a convex m-gon.

TR-93-30 Tentative Prune-and-Search for Computing Fixed-Points with Applications to Geometric Computation, October 1993 D. Kirkpatrick and J. Snoeyink, 16 pages

Motivated by problems in computational geometry, we investigate the complexity of finding a fixed-point of the composition of two or three continuous functions that are defined piecewise. We show that certain cases require nested binary search taking Theta(log^2(n)) time. Others can be solved in logarithmic time by using a prune-and-search technique that may make tentative discards and later revoke or certify them. This work finds application in optimal subroutines that compute approximations to convex polygons, dense packings, and Voronoi vertices for Euclidean and polygonal distance functions.

TR-93-31 Objects that cannot be taken apart with two hands, October 1993 Jack Snoeyink and J. Stolfi, 15 pages

It has been conjectured that every configuration C of convex objects in 3-space with disjoint interiors can be taken apart by translation with two hands: that is, some proper subset of C can be translated to infinity without disturbing its complement. We show that the conjecture holds for five or fewer objects and give a counterexample with six objects. We extend the counterexample to a configuration that cannot be taken apart with two hands using arbitrary isometries (rigid motions).

Note: some figures have been omitted from the online version to save space.

TR-93-32 Counting and Reporting Red/Blue Segment Intersections, October 1993 Larry Palazzi, 20 pages

We simplify the red/blue segment intersection algorithm of Chazelle et al.: Given sets of n disjoint red and n disjoint blue segments, we count red/blue intersections in O(n log(n)) time using O(n) space, or report them in additional time proportional to their number. Our algorithm uses a plane sweep to presort the segments; then it operates on a list of slabs that efficiently stores a single level of a segment tree. With no dynamic memory allocation, low pointer overhead, and mostly sequential memory references, our algorithm performs well even with inadequate physical memory.

TR-93-33 Analysis of a Recurrence Arising from a construction for Non-Blocking Networks, October 1993 Nicholas Pippenger, 32 pages

Define f on the integers n > 1 by the recurrence

f(n) = min{ n, min_{m|n} [2f(m) + 3f(n/m)] }

The function f has f(n) = n as its upper envelope, attained for all prime n. Our goal in this paper is to determine the corresponding lower envelope. We shall show that this has the form f(n) ~ C(log n)^(1+1/g) for certain constants g and C, in the sense that for any epsilon > 0, the inequality f(n) <= (C + epsilon)(log n)^(1 + 1/g) holds for infinitely many n, while f(n) <= (C - epsilon)(log n)^(1 + 1/g) holds for only finitely many. In fact, g = 0.7878... is the unique real solution of the equation 2^(-g) + 3^(-g) = 1, and C = 1.5595... is given by the expression

C = g [2^(-g) log^g(2) + 3^(-g) log^g(3)]^(1/g) / ( (g+1) [15^(-g) log^(g+1)(5/2) + 3^(-g) sum_{5 <= k <= 7} log^(g+1)((k+1)/k) + sum_{8 <= k <= 15} log^(g+1)((k+1)/k)]^(1/g) )

We also consider the function f0 defined by replacing the integers n > 1 with the reals x > 1 in the above recurrence:

f0(x) = min{ x, inf_{1 < y < x} [2f0(y) + 3f0(x/y)] }

We shall show that f0(x) ~ C0(log x)^(1+1/g), where C0 = 1.5586... is given by

C0 = 6e [2^(-g) log^(-g)(2) + 3^(-g) log^(-g)(3)]^(1/g) (g/(g+1))^(1 + 1/g)

and is smaller than C by a factor of 0.9994...
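A direct memoized computation of f (a minimal sketch; the recurrence and constants are the paper's, the test values are arbitrary) lets one watch the lower envelope emerge numerically:

```python
import math
from functools import lru_cache

g = 0.7878   # root of 2^(-g) + 3^(-g) = 1, from the paper
C = 1.5595

@lru_cache(maxsize=None)
def f(n):
    best = n
    for m in range(2, math.isqrt(n) + 1):
        if n % m == 0:
            # try both orderings of the divisor pair (m, n/m)
            best = min(best,
                       2 * f(m) + 3 * f(n // m),
                       2 * f(n // m) + 3 * f(m))
    return best

for n in [2 ** 20, 3 ** 12, 6 ** 8]:
    print(n, f(n), round(C * math.log(n) ** (1 + 1 / g), 1))
```

Composite n with many small factors track the lower envelope, while prime n sit on the upper envelope f(n) = n.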

TR-93-34 Self-Routing Superconcentrators, October 1993 Nicholas Pippenger, 17 pages

Superconcentrators are switching systems that solve the generic problem of interconnecting clients and servers during sessions, in situations where either the clients or the servers are interchangeable (so that it does not matter which client is connected to which server). Previous constructions of superconcentrators have required an external agent to find the interconnections appropriate in each instance. We remedy this shortcoming by constructing superconcentrators that are "self-routing", in the sense that they compute for themselves the required interconnections.

Specifically, we show how to construct, for each n, a system S_n with the following properties. (1) The system S_n has n inputs, n outputs, and O(n) components, each of which is a copy of one of a fixed finite number of finite automata, and is connected to a fixed finite number of other components through cables, each of which carries signals from a fixed finite alphabet. (2) When some of the inputs, and an equal number of outputs, are "marked" (by the presentation of a certain signal), then after O(log n) steps (a time proportional to the "diameter" of the network) the system will establish a set of disjoint paths from the marked inputs to the marked outputs.

TR-93-35 A Model Checker for Statecharts, October 1993 Nancy Day, 98 pages

Computer-Aided Software Engineering (CASE) tools encourage users to codify the specification for the design of a system early in the development process. They often use graphical formalisms, simulation and prototyping to help express ideas concisely and unambiguously. Some tools provide little more than syntax checking of the specification but others can test the model for reachability of conditions, nondeterminism or deadlock.

Formal methods include powerful tools like automatic model checking to exhaustively check a model against certain requirements. Integrating formal techniques into the system development process is an effective method of providing more thorough analysis of specifications than the conventional approaches employed by CASE tools. In order to create this link, the formalism used by the CASE tool must have a precise formal semantics that can be understood by the verification tool.

The CASE tool STATEMATE makes use of an extended state transition notation called statecharts. We have formalized an operational semantics for statecharts by embedding them in the logical framework of an interactive proof-assistant system called HOL. A software interface is provided to extract a statechart directly from the STATEMATE database.

Using HOL in combination with Voss, a binary decision diagram-based verification tool, we have developed a model checker for statecharts which tests whether an operational specification, given by a statechart, satisfies a descriptive specification of the system requirements. The model checking procedure is a simple higher-order logic function which executes the semantics of statecharts.

In this technical report, we describe the formal semantics of statecharts and the model checking algorithm. Various examples, including an intersection with a traffic light and an arbiter, are presented to illustrate the method.

TR-93-36 The Raven Kernel: A Microkernel for Shared Memory Multiprocessors, October 1993 Stuart Ritchie

The Raven kernel is a small, lightweight operating system for shared memory multiprocessors. Raven is characterized by its movement of several traditional kernel abstractions into user space. The kernel itself implements tasks, virtual memory management, and low level exception dispatching. All thread management, device drivers, and message passing functions are implemented completely in user space. This movement of typical kernel-level abstractions into user space can drastically reduce the overall number of user/kernel interactions for fine-grained parallel applications.

TR-93-37 Ranking and Unranking of Trees Using Regular Reductions, October 1993 Pierre Kelsen, 19 pages

(Abstract not available on-line)

TR-93-38 Detection and Estimation of Multiple Disparities by Multi-Evidential Correlation, October 1993 Esfandiar Bandari and James J. Little, 21 pages

(Abstract not available on-line)

TR-93-39 Tridiagonalization Costs of the Bandwidth Contraction and Rutishauser-Schwarz Algorithms, November 1993 Ian Cavers, 20 pages

In this paper we perform detailed complexity analyses of the Bandwidth Contraction and Rutishauser-Schwarz tridiagonalization algorithms using a general framework for the analysis of algorithms employing sequences of either standard or fast Givens transformations. Each algorithm's analysis predicts the number of flops required to reduce a generic densely banded symmetric matrix to tridiagonal form. The high accuracy of the analyses is demonstrated using novel symbolic sparse tridiagonalization tools, Xmatrix and Trisymb.

TR-93-40 Automatic Verification of Asynchronous Circuits, November 1993 Trevor W. S. Lee, Mark R. Greenstreet and Carl-Johan H. Seger, 28 pages

Asynchronous circuits are often used in interface circuitry where traditional, synchronous design methods are not applicable. However, the verification of asynchronous designs is difficult, because standard simulation techniques will often fail to reveal design errors that are only manifested under rare circumstances. In this paper, we show how asynchronous designs can be modeled as programs in the Synchronized Transitions language, and how this representation facilitates rigorous and efficient verification of the designs using ordered binary decision diagrams (OBDDs). We illustrate our approach with two examples: a novel design of a transition arbiter and a design of a toggle element from the literature. The arbiter design was derived by correcting an error in an earlier attempt. It is noteworthy that the error in the original design, found very quickly using the methods described in this paper, went unnoticed during more than 50 hours of CPU time spent simulating 2^31 state transitions.

TR-93-41 A Simple Theorem Prover Based on Symbolic Trajectory Evaluation and OBDDs, November 1993 Scott Hazelhurst and Carl-Johan H. Seger, 44 pages

Formal hardware verification based on symbolic trajectory evaluation shows considerable promise in verifying medium to large scale VLSI designs with a high degree of automation. However, in order to verify today's designs, a method for composing partial verification results is needed. One way of accomplishing this is to use a general purpose theorem prover to combine the verification results obtained by other tools. However, a special purpose theorem prover is more attractive since it can more easily exploit symbolic trajectory evaluation (and may be easier to use). Consequently, we explore the possibility of developing a much simpler, tailor-made theorem prover designed specifically for combining verification results based on trajectory evaluation. In the paper we discuss the underlying inference rules of the prover as well as more practical issues regarding the user interface. We conclude with a couple of examples in which we are able to verify designs that could not have been verified directly. In particular, the complete verification of a 64 bit multiplier takes approximately 15 minutes on a Sparc 10 machine.

TR-93-42 Juggling Networks, November 1993 Nicholas Pippenger, 13 pages

Switching networks of various kinds have come to occupy a prominent position in computer science as well as communication engineering. The classical switching network technology has been space-division-multiplex switching, in which each switching function is performed by a spatially separate switching component (such as a crossbar switch). A recent trend in switching network technology has been the advent of time-division-multiplex switching, wherein a single switching component performs the function of many switches at successive moments of time according to a periodic schedule. This technology has the advantage that nearly all of the cost of the network is in inertial memory (such as delay lines), with the cost of switching elements growing much more slowly as a function of the capacity of the network.

In order for a classical space-division-multiplex network to be adaptable to time-division-multiplex technology, its interconnection pattern must satisfy stringent requirements. For example, networks based on randomized interconnections (an important tool in determining the asymptotic complexity of optimal networks) are not suitable for time-division-multiplex implementation. Indeed, time-division-multiplex implementations have been presented for only a few of the simplest classical space-division-multiplex constructions, such as rearrangeable connection networks.

This paper shows how interconnection patterns based on explicit constructions for expanding graphs can be implemented in time-division-multiplex networks. This provides time-division-multiplex implementations for switching networks that are within constant factors of optimal in memory cost, and that have asymptotically more slowly growing switching costs. These constructions are based on a metaphor involving teams of jugglers whose throwing, catching and passing patterns result in intricate permutations of the balls. This metaphor affords a convenient visualization of time-division-multiplex activities that should be of value in devising networks for a variety of switching tasks.

TR-93-43 Similarity Metric Learning for a Variable-Kernel Classifier, November 1993 David G. Lowe, 15 pages

Nearest-neighbour interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than back-propagation and most other learning methods. An important advantage is that the system can operate as a black box in which no model minimization parameters need to be experimentally set by the user. The number of parameters that must be determined through optimization is orders of magnitude smaller than for back-propagation or RBF networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning are discussed that make it relevant to models for biological learning in the brain.
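A minimal sketch of the classification step follows (the data, metric weights, and kernel sizing rule are invented stand-ins; VSM learning obtains the weights and kernel size by conjugate-gradient optimization rather than fixing them by hand):

```python
import numpy as np

def vsm_predict(X, y, query, weights, R=5, kernel_scale=1.0):
    """Weighted-metric, variable-kernel vote over the R nearest neighbours."""
    d2 = (((X - query) * weights) ** 2).sum(axis=1)  # weighted sq. distances
    nn = np.argsort(d2)[:R]
    sigma2 = kernel_scale * d2[nn].mean() + 1e-12    # kernel sized to the
    w = np.exp(-d2[nn] / (2 * sigma2))               # local neighbourhood
    votes = [(w[y[nn] == c].sum(), c) for c in np.unique(y)]
    return max(votes)[1]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(vsm_predict(X, y, np.array([2.5, 2.5]), weights=np.array([1.0, 1.0])))
```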

TR-93-44 Discrete Conservative Approximation of Hybrid Systems, November 1993 Carl-Johan H. Seger and Andrew Martin, 39 pages

Systems that are modeled using both continuous and discrete mathematics are commonly called hybrid systems. Although much work has been done to develop frameworks in which both types of systems can be handled at the same time, this is often a very difficult task. Verifying that desired properties hold in such hybrid models is even more daunting. In this paper we attack the problem from a different direction. First we make a distinction between two models of the system. A detailed model is developed as accurately as possible. Ultimately, one must trust in its correctness. An abstract model, which is typically less detailed, is actually used to verify properties of the system. The detailed model is typically defined in terms of both continuous and discrete mathematics, whereas the abstract one is typically discrete. We formally define the concept of conservative approximation, a relationship between models, that holds with respect to a translation between the questions that can be asked of them. We then progress by developing a theory that allows us to build a complicated detailed model by combining simple primitives, while simultaneously, building a conservative approximation, by similarly combining pre-defined parameterised approximations of those primitives.

TR-93-45 VOSS - A Formal Hardware Verification System User's Guide, November 1993 Carl-Johan Seger

The Voss system is a formal verification system aimed primarily at hardware verification. In particular, verification using symbolic trajectory evaluation is strongly supported. The Voss system consists of a set of programs. The main one is called fl and is the core of the verification system. Since the metalanguage in fl is a fully general functional language in which Ordered Binary Decision Diagrams (OBDDs) have been built in, the verification system is not only useful for carrying out trajectory evaluation, but also for experimenting with various verification (formal and informal) techniques that require the use of OBDDs. This document is intended as both a user's guide and (to some extent) a reference guide. For the Voss alpha release, this document is still quite incomplete, but work is underway to remedy this.

TR-93-47 We Have Never-Forgetful Flowers In Our Garden: Girls' Responses to Electronic Games, December 1993 Kori Inkpen, Rena Upitis, Maria Klawe, Joan Lawry, Ann Anderson, Mutindi Ndunda, Kamran Sedighian, Steve Leroux and David Hsu

Electronic Games for Education in Math and Science (E-GEMS) is a large-scale research project designed to increase the proportion of children who enjoy learning and mastering mathematical concepts through the use of electronic games. This paper describes one piece of research that examines how girls interact within an electronic games environment. Three interrelated questions are addressed in this paper: What interest do girls show in electronic games when the games are presented in an informal learning environment? How do girls play and watch others play? How does the presence of others in the immediate vicinity influence the ways that girls play?

The research described was conducted at an interactive science museum, Science World BC, during the summer of 1993. Children were observed while they played with various electronic games, both video and computer. In addition, interviews were conducted with the children and timed samplings were recorded. Our observations and interviews show that girls have an interest in electronic games and enjoy playing. Girls were particularly interested when given the opportunity to socially interact with others. In addition, they indicated a preference for playing on computers over video game systems.

TR-93-49 Simulated Annealing for Profile and Fill Reduction of Sparse Matrices, March 21, 1993 Robert R. Lewis, 49 pages

Simulated annealing can minimize both profile and fill of sparse matrices. We applied these techniques to a number of sparse matrices from the Harwell-Boeing Sparse Matrix Collection. We were able to reduce profile typically to about 80% of that attained by conventional profile minimization techniques (and sometimes much lower), but fill reduction was less successful (85% at best). We present a new algorithm that significantly speeds up profile computation during the annealing process. Simulated annealing is, however, still much more time-consuming than conventional techniques and is therefore likely to be useful only in situations where the same sparse matrix is being used repeatedly.
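The core loop is easy to sketch (a toy version on an invented 5-vertex sparsity pattern; the report's contribution includes a much faster incremental profile computation, and it also treats fill):

```python
import math, random

# Symmetric sparsity pattern of a small matrix, as an adjacency map.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}

def profile(order):
    """Sum over rows of (row index - first nonzero column), lower triangle."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(pos[v] - min(pos[u] for u in adj[v] | {v} if pos[u] <= pos[v])
               for v in order)

random.seed(0)
order = list(adj)
energy, temp = profile(order), 2.0
for _ in range(2000):
    i, j = random.sample(range(len(order)), 2)   # move: swap two positions
    order[i], order[j] = order[j], order[i]
    new = profile(order)
    if new <= energy or random.random() < math.exp((energy - new) / temp):
        energy = new                             # accept the move
    else:
        order[i], order[j] = order[j], order[i]  # reject: undo the swap
    temp *= 0.998                                # cool the temperature
print("profile:", energy, "ordering:", order)
```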

TR-93-50 Volume Models for Volumetric Data, May 30, 1993 Vishwa Ranjan and Alain Fournier, 26 pages

In order to display, transform, and compare volumetric data, it is often convenient or necessary to use different representations derived from the original discrete voxel values. In particular, several methods have been proposed to compute and display an iso-surface defined by some threshold value. In this paper we describe a method to represent the volume enclosed by an iso-surface as the union of simple volume primitives. The needed properties (displayed image, volume, surface, etc.) are derived from this representation. After a survey of properties that might be needed or useful for such representations, we show that some important ones are lacking in the representations used so far. Basic properties include efficiency of computation, storage, and display. Some other properties of interest include stability (the fact that the representation changes little for a small change in the data, such as noise or small distortions), the ability to determine the similarities between two data sets, and the computation of simplified models. We illustrate the concept with two distinct representations, one based on the union of tetrahedra derived from a Delaunay tetrahedralization of boundary points, and another based on overlapping spheres. The former is simple and efficient in most respects, but is not stable, while the latter needs heuristics to be simplified, but can be made stable and useful for shape comparisons. This approach also helps to develop metrics indispensable for studying and comparing such representations.

TR-93-51 High-Speed Visual Estimation Using Preattentive Processing, March 3, 1993 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 27 pages

A new method is presented for performing rapid and accurate numerical estimation. It is derived from principles arising in an area of cognitive psychology called preattentive processing. Preattentive processing refers to an initial organization of the human visual system based on operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, intensity, orientation, size, and motion. We believe that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high speed target, boundary, and region detection are important. In our present study, we investigated two known preattentive features (hue and orientation) in the context of a new task (numerical estimation) in order to see whether preattentive estimation was possible. Our experiments tested displays that were designed to visualize data from simulations being run in the Department of Oceanography. The results showed that rapid and accurate estimation is indeed possible using either hue or orientation. Furthermore, random variation of one of these features resulted in no interference when subjects estimated the numerosity of the other. To determine the robustness of our results, we varied two important display parameters, display duration and feature difference, and found boundary conditions for each. Implications of our results for application to real-world data and tasks are discussed.

TR-93-52 From the Look of Things, October 30, 1993 Alain Fournier, 10 pages

We can't help wondering occasionally about what we do. The following is the result of such wondering, using a unique opportunity to get a paper in without much scrutiny.

TR-93-53 A Model for Coordinating Interacting Agents, October 30, 1993 Paul Lalonde, Robert Walker, Jason Harrison and David Forsey, 8 pages

SPAM (Simulated Platform for Animating Motion) is a simulation software system designed to address synchronization issues pertaining to both animation and simulation. SPAM provides application programs with the manipulation, configuration, and synchronization tools needed when simulations are combined to create animations. It is designed to be used as the glue between applications that supply lists of the parameters to animate and the callback procedures to invoke when a user wishes to modify the parameters directly. SPAM does not impose a particular model of simulation, accommodating keyframing, physical simulation, or a variety of other models, provided they can be abstracted into a set of externally modifiable values. In SPAM we recognize that the important part of simulation is not the state of the system at each time step, but rather the change in states between steps. Thus SPAM uses an interval representation of time, explicitly representing the intervals over which change occurs. In a complex animation or simulation, multiple actions will access the same resource at the same time. SPAM defines a strategy for recognizing such conflicts that increases the use and re-use of sequences.


If you have any questions or comments regarding this page please send mail to help@cs.ubc.ca.