Technical Reports

The ICICS/CS Reading Room


All UBC CS Technical Report Abstracts

TR-73-01 A Comparison of Some Numerical Methods for Two-point Boundary Value Problems, January 1973 Jim M. Varah

(Abstract not available on-line)

TR-73-02 On the Efficiency of Clique Detection in Graphs, January 1973 A. H. Dixon

(Abstract not available on-line)

TR-74-01 A Comparison of Global Methods for Linear Two-point Boundary Value Problems, January 1974 R. D. Russell and Jim M. Varah

(Abstract not available on-line)

TR-74-02 On the Condition of Piecewise Polynomial Finite Element Bases, January 1974 Jim M. Varah

(Abstract not available on-line)

TR-74-03 Alternate Row and Column Elimination for Certain Linear Systems, January 1974 Jim M. Varah

(Abstract not available on-line)

TR-75-01 Stiffly Stable Linear Multistep Methods of Extended Order, January 1975 Jim M. Varah

(Abstract not available on-line)

TR-75-02 Code Compaction for Minicomputers with INTCODE and MINICODE, January 1975 J.E.L. Peck, V. S. Manis and W. E. Webb

(Abstract not available on-line)

TR-75-03 Consistency in Networks of Relations, January 1975 Alan K. Mackworth

(Abstract not available on-line)

TR-75-04 How To See A Simple World, January 1975 Alan K. Mackworth

(Abstract not available on-line)

TR-75-05 A Case Driven Parser For Natural Language, January 1975 E.H. Taylor and R. S. Rosenberg

(Abstract not available on-line)

TR-77-01 On the Invariance of the Interpolation Points of the Discrete l1-approximation, February 1977 Uri Ascher, 14 pages

Consider discrete $l_{1}$-approximations to a data function $f$, on some finite set of points $X$, by functions from a linear space of dimension $m < \infty$. It is known that there always exists a best approximation which interpolates $f$ on a subset of $m$ points of $X$. This does not generally hold for the ``continuous'' $L_{1}$-approximation on an interval, as we show by means of an example.

We investigate the invariance of the interpolation points of the discrete $l_{1}$-approximation under a change in the approximated function. Conditions are given, under which the interpolant to a function $g$ on a set of ``best $l_{1}$ points'' of a function $f$ is a best $l_{1}$-approximant to $g$. Additional results are then obtained for the particular case of spline $l_{1}$-approximation.
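
As a purely illustrative aside (not taken from the report), the interpolation property of discrete $l_{1}$-approximation can be observed numerically by posing the problem as a linear program: minimize $\sum_{i} e_{i}$ subject to $-e \leq Ax - f \leq e$. The point set, data function and approximating space below are invented.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

X = np.linspace(0.0, 1.0, 15)            # the finite point set
A = np.vander(X, 3, increasing=True)     # approximating space: quadratics, so m = 3
f = np.sin(3.0 * X)                      # data function sampled on X

n, m = A.shape
c = np.concatenate([np.zeros(m), np.ones(n)])         # minimize the sum of the e_i
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])  # A x - f <= e  and  f - A x <= e
b_ub = np.concatenate([f, -f])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)] * n)

residual = A @ res.x[:m] - f
# The theorem above guarantees a best approximation interpolating f on m points;
# the vertex solution returned by the LP solver typically exhibits exactly that.
print(np.sum(np.isclose(residual, 0.0, atol=1e-6)))
\end{verbatim}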

TR-77-02 On Reading Sketch Maps, May 1977 A. K. Mackworth, 28 pages

A computer program, named MAPSEE, for interpreting maps sketched freehand on a graphical data tablet is described. The emphasis in the program is on discovering cues that invoke descriptive models which capture the requisite cartographic and geographic knowledge. A model interprets ambiguously the local environment of a cue. By resolving these interpretations using a new network consistency algorithm for n-ary relations, MAPSEE achieves an interpretation of the map. It is demonstrated that this approach can be made viable even though the map cannot initially be properly segmented. A thoroughly conservative, initial, partial segmentation is described. The effects of its necessary deficiencies on the interpretation process are shown. The ways in which the interpretation can refine the segmentation are indicated.
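
As an illustration of the general idea of enforcing consistency over $n$-ary relations (this is only a generic propagation sketch, not the algorithm of the report; the variable names and the toy constraint are invented):

\begin{verbatim}
from itertools import permutations

def revise(domains, scope, relation):
    """Remove values that occur in no tuple of `relation` whose other
    components all lie in the current domains; return the changed variables."""
    changed = set()
    for i, var in enumerate(scope):
        supported = {t[i] for t in relation
                     if all(t[j] in domains[scope[j]] for j in range(len(scope)))}
        if domains[var] - supported:
            domains[var] = domains[var] & supported
            changed.add(var)
    return changed

def propagate(domains, constraints):
    """constraints: list of (scope, relation) pairs; relation is a set of tuples."""
    queue = list(constraints)
    while queue:
        scope, relation = queue.pop()
        for var in revise(domains, scope, relation):
            queue.extend(c for c in constraints if var in c[0])
    return domains

# Toy example: one ternary "all different" relation prunes z down to {3}.
domains = {"x": {1, 2}, "y": {1, 2}, "z": {1, 2, 3}}
alldiff = set(permutations([1, 2, 3]))
print(propagate(domains, [(("x", "y", "z"), alldiff)]))
\end{verbatim}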

TR-77-03 Computers and the Mechanization of Judgement, April 1977 A. Mowshowitz

Computer-based information systems are playing an increasingly important role in organizational decision-making. Although high level managers are not in imminent danger of extinction, many managerial functions have been substantially altered or replaced by computer systems. These developments are viewed here as an extension of bureaucratic rationalism, the peculiar innovative spirit of large-scale enterprise. Advanced information technology in large organizations appears to promote the elaboration of hierarchically structured control mechanisms, and to further the resolution of complex decision tasks into routine procedures. Since the technology could in principle be used to support radically different modes of organization, an explanation must be sought in the evolution of bureaucracy.

Efforts to improve productivity and efficiency affect the distribution of power and authority, so that technical innovation in management raises serious ethical and political problems. Historical observations and empirical results point to a contradiction between bureaucratic rationalism and individual autonomy. This contradiction is revealed in the impact of computer applications on the conduct of certain classes of decision-makers. Policy issues are transformed into technical questions, and opportunities for exercising independent judgment are diminished as analysis of means displaces exploration of ends. I will attempt to show how this transformation is accomplished in the rationalization of functions which typically accompanies the introduction of computer systems.

TR-77-04 A New Notation for Derivations, June 1977 J. L. Baker

Ordered directed graphs (generalizing ordered trees) are defined and used in a new formal definition of grammatical derivation. The latter is shown equivalent to the currently accepted definition. The new scheme is illustrated by detailed proofs of two familiar results: the equivalence of the notions ``context-sensitive'' and ``length-nondecreasing'' as applied to grammars, and an important lemma in the theory of deterministic context-free parsing.

TR-77-05 Lexic Scanners, June 1977 R. A. Fraley

A fast algorithm for a general purpose scanner is presented. It includes a mechanism for permitting user-defined special character tokens. The scanner is able to separate strings of special characters without imposing arbitrary spacing rules on the programmer. An analysis shows that most special character tokens from selected languages could be handled properly by the scanner, even if they were in the same language. Many of the omitted tokens could be confused for combinations of operators, demonstrating the utility of the scanner for preventing lexical ambiguity. The special character analysis is extended to other classes of tokens.
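
The idea of splitting runs of special characters against a user-defined token table can be illustrated with a longest-match rule. The sketch below is generic and hypothetical, not the scanner of the report, and the token table is invented.

\begin{verbatim}
# A user-defined table of special-character tokens (invented for illustration).
SPECIAL_TOKENS = {":=", "<=", ">=", "->", "**", "+", "-", "*", "/",
                  "<", ">", "=", ":", "(", ")"}
MAX_LEN = max(len(t) for t in SPECIAL_TOKENS)

def split_specials(run):
    """Split a run of adjacent special characters into tokens, always taking
    the longest table entry that matches at the current position."""
    tokens, i = [], 0
    while i < len(run):
        for k in range(min(MAX_LEN, len(run) - i), 0, -1):
            if run[i:i + k] in SPECIAL_TOKENS:
                tokens.append(run[i:i + k])
                i += k
                break
        else:
            raise ValueError("no token matches at " + repr(run[i:]))
    return tokens

print(split_specials("<=:=(**->"))   # ['<=', ':=', '(', '**', '->']
\end{verbatim}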

TR-77-06 Unlanguage Grammars and Their Uses, August 1977 R. A. Fraley

A new technique is presented for using context free grammars for the definition of programming languages. Rather than accumulating a number of specialized statement formats, generalized productions specify the format of all statements. A sample unlanguage grammar is presented, and the use of this grammar is described. Some of the difficulties in parsing the language are described.

TR-77-07 Simulation in a Theory of Programmable Machines, July 1977 J. L. Baker

In a theory of machines controlled by programs, automata-theoretic simulation can be presented simply and directly, and can be understood as an aspect of the algebraic structure of such machines.

Here the notions of product and homomorphism of devices in such a theory are presented, along with a notion of (computational) reducibility of one device to another. Simulation in the automata-theoretic sense is formally defined, and its validity as a technique for proving reducibility established uniformly.

The notions of device, product, homomorphism, and reducibility are then extended to model costs of computation evaluated a posteriori (in the manner of concrete complexity studies), and the validity of simulation as a proof technique established in this extended setting.

TR-77-08 Coroutines in a Theory of Programmable Machines, July 1977 J. L. Baker

It is shown that, in the author's theory of programmable machines, the composition of functions computable by programs is in some important cases computable by a program constructed to use the given programs as coroutines. To illustrate the utility of this result, a characterization of the full AFLs in terms of programmable machines is established with its help.

TR-77-09 FUNL Semantics: Work Towards UNCOL, August 1977 R. A. Fraley

An intermediate semantics language, applicable to many source languages and machines, is proposed in this paper. Over its domain and range it promises many of the advantages of the original UNCOL project. Data abstraction is used to hide machine features. The language hides from the source compiler all implementation representations and conventions, except for a few descriptive constants. The semantic model is expandable by means of a library. Higher level semantic models may be implemented in FUNL, reducing compiler writing effort.

TR-77-10 A Simulation Study of Adaptive Scheduling Policies in Interactive Computer Systems, November 1977 Samuel T. Chanson and C. Bishop

A number of adaptive processor scheduling algorithms (i.e., those that will adjust to varying workload conditions so as to maximize performance) for interactive computing systems are examined and new ones proposed. The performance indices chosen are the mean response time and the mean of those response times less than the x percentile for some x. The robustness of the algorithms is studied and a brief discussion of the overheads involved is included.

Simulation is used throughout the study and because of this, the simulator and the workload used are described in some detail. The target machine is a somewhat simplified version of the UBC system, which operates an IBM 370/168 running under MTS (Michigan Terminal System). The UBC system is principally used interactively.

TR-77-11 Duals of Intuitionistic Tableaus, October 1977 G. Criscuolo and R. Tortora

We present a dual version of the intuitionistic Beth tableaus with signed formulas introduced in Fitting [9], proving their correctness and completeness with respect to Kripke models.

TR-77-12 Assaulting the Tower of Babel: Experiences with a Translator Writing System, November 1977 Harvey Abramson, W. F. Appelbe and M. S. Johnson, 11 pages

TRUST is a translator writing system (TWS) which evolved from several available TWS components, including an LR(k) parser generator and a lexical scanner generator. The design and historical development of TRUST are briefly presented, but the paper is primarily concerned with relating critically the experiences gained in applying the TWS to various practical software projects and to the classroom environment. These experiences lead to a discussion of how a modular TWS should be designed and implemented.

TR-77-13 A Collocation Solver for Mixed Order Systems of BVP's, November 1977 Uri Ascher, J. Christiansen and R. D. Russell

(Abstract not available on-line)

TR-77-14 Evaluation of B-splines for Solving Systems of Boundary Value Problems, November 1977 Uri Ascher and R. D. Russell

A general purpose collocation code COLSYS has been written, which is capable of solving mixed order systems of multi-point boundary value ordinary differential equations. The piecewise polynomial solution is given in terms of a B-spline basis.

Efficient implementation of algorithms to calculate with B-splines is a necessary condition for the code to be competitive. Here we describe these algorithms and the special features incorporated to take advantage of the specific environment in which they are used.

TR-77-15 Deductive Question-Answering on Relational Databases, November 1977 R. Reiter

(Abstract not available on-line)

TR-77-16 On Closed World Data Bases, October 1977 R. Reiter

Deductive question-answering systems generally evaluate queries under one of two possible assumptions which we in this paper refer to as the open and closed world assumptions. The open world assumption corresponds to the usual first order approach to query evaluation: Given a data base DB and a query Q, the only answers to Q are those which obtain from proofs of Q given DB as hypotheses. Under the closed world assumption, certain answers are admitted as a result of failure to find a proof. More specifically, if no proof of a positive ground literal exists, then the negation of that literal is assumed true.

In this paper, we show that closed world evaluation of an arbitrary query may be reduced to open world evaluation of so-called atomic queries. We then show that the closed world assumption can lead to inconsistencies, but for Horn data bases no such inconsistencies can arise.

Presented at the Workshop on Logic and Data Bases, Toulouse, France, November 16-18, 1977.
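
A minimal sketch of the two evaluation regimes on ground literals described above (illustrative only; the facts and predicate names are invented, and this is not the reduction developed in the report):

\begin{verbatim}
facts = {("Teaches", "ann", "cs115"), ("Teaches", "bob", "cs215")}

def holds_cwa(literal):
    """literal = (positive, predicate, *arguments). Under the closed world
    assumption a positive ground atom with no proof is taken to be false,
    so its negation is answered 'yes'."""
    positive, atom = literal[0], literal[1:]
    return atom in facts if positive else atom not in facts

def holds_owa(literal):
    """Under the open world assumption only provable facts yield answers;
    anything unprovable is simply 'unknown'."""
    positive, atom = literal[0], literal[1:]
    if positive and atom in facts:
        return True
    return "unknown"

query = (False, "Teaches", "ann", "cs215")   # "Ann does not teach cs215?"
print(holds_cwa(query))   # True: no proof of the positive atom, so assume its negation
print(holds_owa(query))   # 'unknown': the open world makes no such assumption
\end{verbatim}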

TR-77-18 Some Connections Between the Minimal Polynomial and the Automorphism Group of a Graph, November 1977 A. Mowshowitz, G. Criscuolo, R. Tortora and Chung-Mo Kwok

The relationship between the spectrum and the automorphism group of a graph is probed with the aid of the theory of finite group representations. Three related topics are explored: 1) graphs with non-derogatory adjacency matrix, 2) point-symmetric graphs, and 3) an algorithm for constructing the automorphism group of a prime, point-symmetric graph. First, we give an upper bound on the order of the automorphism group of a graph with non-derogatory adjacency matrix; and show, in a special case, that the degree of each irreducible factor of the minimal polynomial has a natural interpretation in terms of the automorphism group. Second, we prove that the degree of the minimal polynomial of a point-symmetric graph is bounded above by the number of orbits of the stabilizer of any given element. For point-symmetric graphs with a prime number of points, we exhibit a formula linking the degree of the minimal polynomial with the order of the group. Finally, we give a simple algorithm for constructing the automorphism group of a point-symmetric graph with a prime number of points.

TR-77-19 Topics in Discourse Analysis, November 1977 J. E. Davidson

This thesis deals with the theory and analysis of connected English discourse.

The abstract theory of discourse, and its distinguishing characteristics, are discussed. Some problems in computer analysis of discourse are delineated; a method of analysis, based on a modified system of predictions, is introduced, and illustrated with examples from simple stories. A program embodying these concepts is described. Finally, possibilities for discourse analysis, and its place in computational linguistics, are discussed, and directions for further work indicated.

TR-77-20 On the Separation of Two Matrices, December 1977 Jim M. Varah

The sensitivity of the solution $X$ of the matrix equation $AX - XB = C$ is primarily dependent on the quantity sep$(A,B)$ introduced by Stewart in connection with the resolution of invariant subspaces. In this paper, we discuss some properties of sep$(A,B)$, give some examples to show how very small it can be for seemingly harmless matrices, and examine the iteration $AX^{(k+1)} = X^{(k)}B + C$ for solving the matrix equation.
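
As a small numerical aside (not from the report): for modest dimensions, sep$(A,B)$ in the Frobenius norm can be computed as the smallest singular value of the Kronecker matrix of the map $X \mapsto AX - XB$. The matrices below are invented and merely illustrate how much smaller sep$(A,B)$ can be than the separation of the spectra.

\begin{verbatim}
import numpy as np

def sep(A, B):
    """sep(A,B) in the Frobenius norm: the smallest singular value of the
    matrix of the linear map X -> AX - XB (columns of X stacked by vec)."""
    m, n = A.shape[0], B.shape[0]
    K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
    return np.linalg.svd(K, compute_uv=False).min()

# Invented example: the spectra of A and B are separated by 2, yet sep(A,B)
# is more than an order of magnitude smaller, and it is sep(A,B) that governs
# the sensitivity of the solution X of AX - XB = C.
A = np.array([[1.0, 10.0], [0.0, 2.0]])
B = np.array([[-1.0, 10.0], [0.0, -2.0]])
print(sep(A, B))   # roughly 0.1
print(2.0)         # the smallest eigenvalue separation, for comparison
\end{verbatim}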

TR-78-01 Exploiting Spectral, Spatial and Semantic Constraints in the Segmentation of Landsat Images, February 1978 Dale Starr and Alan K. Mackworth

A critique of traditional classification techniques for LANDSAT images and consideration of some scene analysis techniques, exploiting spatial organization and meaning, lead to a new approach to computer programs for LANDSAT image understanding. To justify this approach, a program that combines modified maximum likelihood techniques with interpretation-controlled region merging methods to interpret forest cover in LANDSAT images is described. For comparison purposes, a pure supervised classifier using the same data made 43% more errors and produced a segmentation twice as complex.

TR-78-02 Forests and Pyramids: Using Image Hierarchies to Understand Landsat Images, March 1978 Ezio Catanzariti and Alan K. Mackworth

Computer-based Landsat image interpretation has neglected the spatial organization of the image in favour of the spectral and temporal organization. A brief survey of techniques that exploit spatial information, including multistage sampling, is given. Semantically-guided region-merging methods have been used successfully but they require sophisticated and expensive list processing facilities. Similar semantic and spatial sensitivity can be introduced by exploiting a pyramidal, hierarchical representation of the image advocated by Kelly, Tanimoto and Levine. The image pyramid is constructed bottom-up with the original image as the base. Each level is a reduced resolution version of the level below, constructed by averaging the signatures of adjacent pixels at the lower level. By classifying pixels at the higher levels one is efficiently classifying semantically uniform regions in the original image. If, however, a region's signature lies in the spectral overlap of two or more classes, its subregions will have to be considered for classification. Several refinements of this technique, including the use of semantically-based region splitting and merging techniques at each level of the pyramid, are described.

These techniques are used to classify forest cover types on Vancouver Island in a Landsat image. The results of several initial experiments indicate that, compared to a baseline of a traditional supervised maximum-likelihood classifier, the cost of maintaining the pyramid is balanced by the vast reduction in the number of pixel classifications. The spatial homogeneity or readability of the segmented image, as measured by the number of regions, is improved by a factor of three while the accuracy of the classification is unaffected or slightly improved. When the region splitting and merging techniques are applied at each level of the image pyramid, the accuracy and the readability of the final segmentation both increase markedly. It is thereby demonstrated that these pyramidal techniques offer many of the advantages of the semantically-driven region-merging approach in a more flexible and efficient fashion. Indeed the two approaches have been combined to achieve substantial benefits for Landsat image interpretation.
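
A minimal sketch of the pyramid construction described above (illustrative only: a single made-up band, no classification, and none of the split/merge refinements):

\begin{verbatim}
import numpy as np

def build_pyramid(image):
    """image: 2-D array whose side lengths are powers of two."""
    levels = [image.astype(float)]
    while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
        a = levels[-1]
        # Average each disjoint 2x2 block to halve the resolution.
        a = (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
        levels.append(a)
    return levels

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for one Landsat band
for level in build_pyramid(img):
    print(level.shape)   # (8, 8), (4, 4), (2, 2), (1, 1)
\end{verbatim}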

TR-78-03 A Procedural Model of Recognition for Machine Perception, March 1978 William S. Havens

This thesis is concerned with aspects of a theory of machine perception. It is shown that a comprehensive theory is emerging from research in computer vision, natural language understanding, cognitive psychology, and Artificial Intelligence programming language technology. A number of aspects of machine perception are characterized. Perception is a recognition process which composes new descriptions of sensory experience in terms of stored stereotypical knowledge of the world. Perception requires both a schema-based formalism for the representation of knowledge and a model of the processes necessary for performing search and deduction on that representation. As an approach towards the development of a theory of machine perception, a computational model of recognition is presented. The similarity of the model to formal mechanisms in parsing theory is discussed. The recognition model integrates top-down, hypothesis-driven search with bottom-up, data-driven search in hierarchical schemata representations. Heuristic procedural methods are associated with particular schemata as models to guide their recognition. Multiple methods may be applied concurrently in both top-down and bottom-up search modes. The implementation of the recognition model as an Artificial Intelligence programming language called MAYA is described. MAYA is a multiprocessing dialect of LISP that provides data structures for representing schemata networks and control structures for integrating top-down and bottom-up processing. A characteristic example from scene analysis, written in MAYA, is presented to illustrate the operation of the model and the utility of the programming language. A programming reference manual for MAYA is included. Finally, applications for both the recognition model and MAYA are discussed and some promising directions for future research proposed.

TR-78-04 An Approach to the Organization of Knowledge for the Modelling of Conversation, February 1978 Gordon I. McCalla

This report describes an approach to modelling conversation. It is suggested that to succeed at this endeavour, the problem must be tackled principally as a problem in pragmatics rather than as one in language analysis alone. Several pragmatic aspects of conversation are delineated and it is shown that the attempt to account for them raises a number of general issues in the representation of knowledge.

A scheme for resolving some of these issues is presented and given computational description as a set of (non-implemented) LISP-based control structures called $\mid$LISP. Central to this scheme are several different types of objects that encode knowledge and communicate this knowledge by passing messages. One particular kind of object, the pattern expression ($\mid$PEXPR), turns out to be the most versatile. $\mid$PEXPRs can encode an arbitrary amount of procedural or declarative information; are capable, as a by-product of their message passing behaviour, of providing both a context for future processing decisions and a record of past processing decisions; and make contributions to the resolution of several artificial intelligence problems.

Some examples of typical conversations that might occur in the general context of attending a symphony concert are then explored, and a particular model of conversation to handle these examples is detailed in $\mid$LISP. The model is goal-oriented in its behaviour, and, in fact, is described in terms of four main goal levels: higher level non-linguistic goals; scripts directing both sides of a conversation; speech acts guiding one conversant's actions; and, finally, language level goals providing a basic parsing component for the model. In addition, a place is delineated for belief models of the conversants, necessary if utterances are to be properly understood or produced. The embedding of this kind of language model in a $\mid$LISP base yields a rich pragmatic environment for analyzing conversation.

TR-78-05 On the Efficient Implementation of Implicit Runge-Kutta Methods, May 1978 James M. Varah

Extending some recent ideas of Butcher, we show how one can efficiently implement general implicit Runge-Kutta methods, including those based on Gaussian quadrature formulas which are particularly useful for stiff equations. With this implementation, it appears that these methods are more efficient than the recently proposed semi-explicit methods and their variants.

TR-78-06 The Design and Implementation of a Run-Time Analysis and Interactive Debugging Environment, January 1978 Mark Scott Johnson

TR-78-07 Optimization of Memory Hierarchies in Multi-programmed Computer Systems with Fixed Cost Constraint, January 1978 Samuel T. Chanson and Prem Swarup Sinha

This paper presents, using queuing theory and optimization techniques, a methodology for estimating the optimal capacities and speeds of the memory levels in a computer system memory hierarchy operating in the multiprogrammed environment. Optimality is with respect to mean system response time under a fixed cost constraint. It is assumed that the number of levels in the hierarchy as well as the capacity of the lowest level are known. The effect of the storage management strategy is characterized by the hit ratio function which, together with the device technology cost functions, is assumed to be representable by power functions. It is shown that as the arrival rate of processes and/or the number of active processes in the system increase, the optimal solution deviates considerably from that under a uniprogrammed environment.

TR-78-08 Solving Boundary Value Problems with a Spline-Collocation Code, January 1978 Uri Ascher

TR-78-09 Stability Restrictions on Second Order, Three Level Finite Difference Schemes for Parabolic Equations, December 1978 James M. Varah

In this paper we are concerned with second order schemes which are easy to use, and apply readily to nonlinear equations. We examine the stability restrictions for such schemes using linear stability analysis, and illustrate their behaviour on Burgers' equation.

TR-79-01 In Search of an Optimal Machine Architecture for BCPL, January 1979 R. Agarwal and Samuel T. Chanson

This paper investigates the problem of generating optimal space-efficient code for the language BCPL. Designing such a code was seen to be a two-phase process. The first phase was to describe an internal representation scheme for BCPL programs which preserved those program features which are salient to translation and at the same time minimized the number of instructions generated. The second phase consisted of the realization of the internal representation as an actual machine taking into account the usage frequencies of instructions and other real world constraints such as word size and addressing space. The \underline{i}ntermediate \underline{c}od\underline{e}, called ICE, and an encoding scheme (known as ESO, standing for \underline{e}ncoding \underline{s}cheme \underline{0}) are described. ICE/ESO is seen to reduce code size by an average of about 32% compared to BCODE, which is a realization of OCODE, the intermediate language currently used in BCPL program translation.

TR-79-02 Anaphora in Natural Language Understanding: A Survey, May 1979 Graeme Hirst

A problem that all computer-based natural language understanding (NLU) systems encounter is that of linguistic reference, and in particular anaphora (abbreviated reference). For example, in a text as simple as: \begin{quote} Nadia showed Sue her new car. The seats were Day-Glo orange. \end{quote} knowing that ``her'' probably means Nadia and not Sue and that ``the seats'' means the seats of Nadia's new car is not a simple task.

This thesis is an extensive review of the reference and anaphor problem, and the approaches to it that NLU systems have taken, from early systems such as STUDENT through to current discourse-oriented ones such as PAL.

The problem is first examined in detail, and examples are given of many different types of anaphor, some of which have been ignored by previous authors. The approaches taken in traditional systems are then described and abstracted and it is shown why they were inadequate, and why discourse theme and anaphoric focus need to be taken into account. The strengths and weaknesses of current anaphora theories and approaches are evaluated. The thesis closes with a list of some remaining research problems.

The thesis has been written so as to be as comprehensible as possible to both AI workers who know no linguistics, and linguists who have not studied artificial intelligence.

TR-79-03 Equality and Domain Closure in First Order Data Bases, January 1979 Raymond Reiter

(Abstract not available on-line)

TR-79-04 Programming Skill Acquisition --- Progress Report, January 1979 V. Manis

(Abstract not available on-line)

TR-79-05 Multi-process Structuring and the THOTH Operating System, March 1979 David R. Cheriton

This report explores the idea of structuring programs as many concurrent processes. It is based on work done in designing, implementing, and using the Thoth operating system. Thoth implements an abstraction that provides facilities to make this type of structuring attractive, namely inexpensive processes, efficient interprocess communication, dynamic process creation and destruction, and groups of processes sharing a common address space.

The Thoth abstraction is described, including measurements of its performance and comments on its portability. This abstraction is motivated by considering various design and implementation tradeoffs. Then, the design of multi-process programs is discussed, both in terms of general principles and by giving specific uses and examples to demonstrate the adequacy of the abstraction. Examples are drawn from the operating system and the Thoth text editor. Finally, the feasibility of verifying the system is considered. This is motivated by the desire to exploit the multi-process structure of the system to aid in verification.

We conclude that structuring programs as multiple processes can have significant benefits, especially for programs that respond to asynchronous events.

TR-79-06 Message-Passing, Spaces and Agents, January 1979 David R. Cheriton

(Abstract not available on-line)

TR-79-07 Saturation Estimation in Interactive Computer Systems, June 1979 Samuel T. Chanson

This paper presents a systematic method for estimating the saturation point of interactive computer systems in an environment of incomplete information (in particular, where per interaction information is unavailable). This method is an improvement over the ad hoc and time-consuming iterative approach commonly used. Following an operational analysis approach and using only commonly available data, a simple model was constructed to estimate the mean response time of a task, characterized by its CPU and I/O requests, running in a system with given CPU, disk, drum and channel load factors. The system's saturation point is obtained from the model using a modified version of the method first proposed by Kleinrock. The interesting concept that tasks with different resource demands experience different saturation loads is discussed. It is shown that the saturation loads as seen by different tasks vary only slightly as the characteristics of the tasks change. The method has been applied to an IBM 370/168 running under the Michigan Terminal System at the University of British Columbia.
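
The flavour of a saturation estimate can be conveyed with the textbook asymptotic bounds of operational analysis (this is not the model of the report, and the service demands and think time below are invented): with total demand $D$, bottleneck demand $D_{max}$ and think time $Z$, the response-time bound bends at $N^{*} = (D+Z)/D_{max}$, a common definition of the saturation point.

\begin{verbatim}
# Made-up per-interaction service demands (seconds) and think time.
demands = {"cpu": 0.40, "disk": 0.25, "drum": 0.10, "channel": 0.05}
Z = 10.0

D = sum(demands.values())        # total demand per interaction
Dmax = max(demands.values())     # demand at the bottleneck device
N_star = (D + Z) / Dmax          # the "knee" of the asymptotic bounds
print(round(N_star, 1), "terminals")

def response_lower_bound(N):
    """Asymptotic lower bound on mean response time with N active terminals."""
    return max(D, N * Dmax - Z)

for N in (5, 27, 40):
    print(N, round(response_lower_bound(N), 2))
\end{verbatim}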

TR-79-08 A Logic for Default Reasoning, July 1979 Raymond Reiter

The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults.

In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
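
For concreteness, a normal default has the form $\frac{\alpha(x) : M\beta(x)}{\beta(x)}$, read ``if $\alpha(x)$ is believed and $\beta(x)$ is consistent with what is believed, infer $\beta(x)$''. The standard textbook instance (not necessarily an example used in the report) is $Bird(x) : M Flies(x) / Flies(x)$: with the fact $Bird(tweety)$ it sanctions the belief $Flies(tweety)$, a belief that must be withdrawn if $Penguin(tweety)$ and $\forall x. Penguin(x) \supset \neg Flies(x)$ are later added; this withdrawal is exactly the non-monotonicity referred to above.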

TR-79-09 Designing an Operating System to be Verifiable, 1979 David R. Cheriton

(Abstract not available on-line)

TR-79-10 Process Identification in THOTH, October 1979 David R. Cheriton

A scheme is presented for the identification of processes in a minicomputer operating system. This scheme was designed and implemented as part of the development of the Thoth operating system. The scheme is efficient in time and space as well as exhibiting a reasonable lower bound on the minimum recycle time for process identifiers.

TR-79-11 Three BCPL Machines, January 1979 Harvey Abramson

We describe three virtual BCPL machines designed in the Department of Computer Science at the University of British Columbia. The first machine is the Pica-B which is an Intcode machine with an added interrupt register, plus several execute instructions to manipulate this register, and a modification of the routine calling mechanism so that interrupts can be treated as unexpected routine calls. An inline code command, the \underline{vile} command, allows the writing of interrupt routines in BCPL, and also facilitates the portability of the BCPL library by allowing certain routines such as level, longjump, and aptovec to be written in BCPL. The second machine is the SLIM (\underline{S}tack \underline{L}anguage for \underline{I}ntermediate \underline{M}achine) machine which is a stack machine with an accumulator. This architecture permits the representation of BCPL programs with fewer instructions than the OCODE representation requires. The third machine is the ICE machine which permits highly space-efficient representations of BCPL programs by having a large instruction set which includes many variants of the BCPL operators.

TR-79-12 The Pica-B Computer, January 1979 Harvey Abramson, Mark Fox, J. Peck, V. Manis and M. Gorlick

The Pica-B computer is a simple abstract machine designed to: \begin{enumerate} \item facilitate the portability of a simple single user operating environment written in BCPL. \item serve the pedagogic goal of providing a basis for teaching concepts of hardware and system architecture, systems programming and programming language design in a unified setting, and \item serve as a possible solution to the current and future software crisis caused by the advent of the micro-computer. \end{enumerate} The Pica-B is based on Richards' Intcode machine but differs from it in the addition of an (interrupt) status register and a PDP-11 style memory map of I/O devices. The status register and hence interrupt and device handlers may be programmed in Pica-B code (an extension of Intcode) or in a version of BCPL with an added inline code facility, the so-called \underline{vile} command. An example is given of how interrupts and I/O are handled in the Pica-B computer.

TR-79-13 Approaching Discourse Computationally: A Review, January 1979 Richard S. Rosenberg

(Abstract not available on-line)

TR-79-14 Representing Spatial Experience \& Solving Spatial Problems in a Simulated Robot Environment, October 1979 Peter Forbes Rowat

This thesis is concerned with spatial aspects of perception and action in a simple robot. To this end, the problem of designing a robot-controller for a robot in a simulated robot-environment system is considered. The environment is a two-dimensional tabletop with movable polygonal shapes on it. The robot has an eye which `sees' an area of the tabletop centred on itself, with a resolution which decreases from the centre to the periphery. Algorithms are presented for simulating the motion and collision of two-dimensional shapes in this environment. These algorithms use representations of shape both as a sequence of boundary points and as a region in a digital image. A method is outlined for constructing and updating the world model of the robot as new visual input is received from the eye. It is proposed that, in the world model, the spatial problems of path-finding and object-moving be based on algorithms that find the skeleton of the shape of empty space and of the shape of the moved object. A new iterative algorithm for finding the skeleton, with the property that the skeleton of a connected shape is connected, is presented. This is applied to path-finding and simple object-moving problems. Finally, directions for future work are outlined.

TR-79-15 The Design of a Verifiable Operating System Kernel, January 1979 T. Lockhart

(Abstract not available on-line)

TR-80-01 Stiff Stability Considerations for Implicit Runge-Kutta Methods, January 1980 James M. Varah

In this paper, we discuss some recent stiff stability concepts introduced for implicit Runge-Kutta methods, and focus on those properties which are really important from a practical point of view. In particular, the concept of stiff decay is introduced and examined, and applied to the common IRK methods available. The Radau IIA formulas are shown to be particularly useful from this point of view.

TR-80-02 Reformulation of Boundary Value Problems in ``Standard'' Form, February 1980 Uri Ascher and Robert D. Russell

Boundary value problems in ODE's arising in various applications are frequently not in the ``standard'' form required by the currently existing software. However, many problems can be converted to such a form, thus enabling the practitioner to take advantage of the availability and reliability of this general purpose software. Here, various conversion devices are surveyed.

TR-80-03 Automating Physical Reorganizational Requirements at the Access Path Level of a Relational Database System, March 1980 Grant Edwin Weddell

Any design of an access path level of a database management system must make allowance for physical reorganization requirements. The facilities provided for such requirements at the access path level have so far been primitive in nature (almost always, in fact, requiring complicated human intervention). This thesis begins to explore the notion of increasing the degree of automation of such requirements at the access path level; to consider the practical basis for self-adapting or self-organizing data management systems. Consideration is first given to the motivation (justification) of such a notion. Then, based on a review of the relevant aspects of a number of existing data management systems, we present a complete design specification and outline for a proposed access path level. Regarding this system we consider in detail the automation of two major aspects of physical organization: the clustering of records on mass storage media and the selection of secondary indices. The results of our analysis of these problems provide a basis for the ultimate demonstration of feasibility of such automation.

TR-80-04 On the Covering Relaxation Approach for the 0-1 Positive Polynomial Programming Problem, May 1980 Willem Vaessen

Covering relaxation algorithms were first developed by Granot et al for solving positive 0-1 polynomial programming (PP) problems which maximize a linear objective function in 0--1 variables subject to a set of polynomial inequalities containing only positive coefficients [``Covering Relaxation for Positive 0--1 Polynomial Programs'', Management Science, Vol. 25, (1979)]. The covering relaxation approach appears to cope successfully with the non-linearity of the PP problem and is able to solve modest size (40 variables and 40 constraints) sparse PP problems. This thesis develops a more sophisticated covering relaxation method which accelerates the performance of this approach, especially when solving PP problems with many terms in a constraint. Both the original covering relaxation algorithm and the newly introduced accelerated algorithm are cutting plane algorithms in which the relaxed problem is the set covering problem and the cutting planes are linear covering constraints. In contrast with other cutting plane methods in integer programming, the accelerated covering relaxation algorithm developed in this thesis does not solve the relaxed problem to optimality after the introduction of the cutting plane constraints. Rather, the augmented relaxed problem is first solved approximately and only if the approximate solution is feasible to the PP problem is the relaxed problem solved to optimality. The promise of this approach stems from the excellent performance of approximate procedures for solving integer programming problems. Indeed, the extensive computational experiments that were performed show that the accelerated algorithm has reduced both the number of set covering problems to be solved and the overall time required to solve a PP problem. The improvements are particularly significant for PP problems with many terms in a constraint.

TR-80-05 Optimal Load Control in Combined Batch-Interactive Computer Systems, April 1980 Samuel T. Chanson and Prem Swarup Sinha

(Abstract not available on-line)

TR-80-06 On the Integrity of Typed First Order Data Bases, April 1980 Raymond Reiter

A typed first order data base is a set of first order formulae, each quantified variable of which is constrained to range over some type. Formally, a type is simply a distinguished monadic relation, or some Boolean combination of these. Assume that with each data base relation other than the types is associated an integrity constraint which specifies which types of individuals are permitted to fill the argument positions of that relation. The problem addressed in this paper is the detection of violations of these integrity constraints in the case of data base updates with universally quantified formulae. The basic approach is to first transform any such formula to its so-called reduced typed normal form, which is a suitably determined set of formulae whose conjunction turns out to be equivalent to the original formula. There are then simple criteria which, when applied to this normal form, determine whether that formula violates any of the argument typing integrity constraints.

TR-80-07 Some Representational Issues in Default Reasoning, August 1980 Raymond Reiter and Giovanni Criscuolo

Although most commonly occurring default rules are normal when viewed in isolation, they can interact with each other in ways that lead to the derivation of anomalous default assumptions. In order to deal with such anomalies it is necessary to re-represent these rules, in some cases by introducing non-normal defaults. The need to consider such potential interactions leads to a new concept of integrity, distinct from the conventional integrity issues of first order data bases.

The non-normal default rules required to deal with default interactions all have a common pattern. Default theories conforming to this pattern are considerably more complex than normal default theories. For example, they need not have extensions, and they lack the property of semi-monotonicity.

Current semantic network representations fail to reason correctly with defaults. However, when viewed as indexing schemes on logical formulae, networks can be seen to provide computationally feasible heuristics for the consistency checks required by default reasoning.

TR-80-08 A Spline Least Squares Method for Numerical Estimation in Differential Equations, January 1980 James M. Varah

In this paper, we describe a straightforward least squares approach to the problem of finding numerical values for parameters occurring in differential equations, so that the solution best fits some observed data. The method consists of first fitting the given data by least squares using cubic spline functions with knots chosen interactively, and then finding the parameters by least squares solution of the differential equation sampled at a set of points. We illustrate the method by three problems from chemical and biological modelling.
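
A minimal sketch of the two-step procedure under invented assumptions (a made-up decay model $y' = -ky$, synthetic noisy data, and a smoothing spline standing in for the interactively knotted cubic-spline fit of the report):

\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
k_true = 0.7
t = np.linspace(0.0, 5.0, 40)
y_obs = np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)

# Step 1: least-squares (smoothing) spline fit to the observed data.
spline = UnivariateSpline(t, y_obs, k=3, s=t.size * 0.01 ** 2)

# Step 2: least squares on the differential equation y' = -k*y sampled at a
# set of points: minimize sum_i (s'(t_i) + k * s(t_i))^2 over the parameter k.
ts = np.linspace(0.2, 4.8, 25)
sv, dsv = spline(ts), spline.derivative()(ts)
k_est, *_ = np.linalg.lstsq(sv.reshape(-1, 1), -dsv, rcond=None)
print(k_true, float(k_est[0]))   # the estimate should land close to 0.7
\end{verbatim}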

TR-80-09 Why Is a Goto Like a Dynamic Vector in the BCPL-Slim Computing System, November 1980 Harvey Abramson

The Slim computer is a new virtual machine which can be used in the translation and porting of the BCPL compiler, and eventually, in the porting of an operating system written in BCPL. For the purposes of this paper, the Slim computer is a stack machine with a single accumulator and a register which points to the top of the stack. The procedures LEVEL and LONGJUMP, traditionally used to implement transfers of control across BCPL procedures, and which are usually written in the assembler language of a host machine, cannot be used with this architecture. In developing procedures to implement \underline{all} transfers of control, we show how these essential procedures, though highly dependent on the Slim architecture, can be written portably in BCPL, and discover an interesting connection between implementing jumps and dynamic vectors (by means of Aptovec) in the BCPL-Slim computing system. Some parameters of portability in mapping an abstract machine to host machines are identified, and it is shown how to maintain the portability of the above mentioned procedures in the face of various mapping problems. Finally, we are led to a comment on the design of BCPL to the effect that \underline{goto}'s are an unnecessary feature of the language.

TR-80-10 Automatic Rectification of Landsat Images Using Features Derived from Digital Terrain Models, December 1980 James Joseph Little

Before two images of the same object can be compared, they must be brought into correspondence with some reference datum. This process is termed registration. The reference can be one of the images, a synthetic image, a map or other symbolic representation of the area imaged. A novel method is presented for automatically determining the transformation to align a Landsat image to a digital terrain model, a structure which represents the topography of an area. Parameters of an affine transformation are computed from the correspondence between features of terrain derived from the digital terrain model, and brightness discontinuities extracted from the Landsat image.

TR-81-01 Distributed I/O Using an Object-Based Protocol, January 1981 David R. Cheriton

The design of a distributed I/O system is described. The system is distributed in being implemented by server processes, client processes and a message communication mechanism between them. Data transfer between processes is achieved using a ``connectionless'' object-based protocol. The concept of \underline{file} is generalized to that of a \underline{view} of an object or activity managed by a server. This allows many objects, including application-defined objects, to be viewed or accessed within the program I/O paradigm. Files are instantiated as \underline{file instance} objects to allow access to the associated data. Conventional byte-stream program input/output facilities are supported by a subroutine library which makes the message-based implementation transparent to applications.

TR-81-02 Collocation for Singular Perturbation Problems I: First Order Systems with Constant Coefficients, February 1981 Uri Ascher and R. Weiss

The application of collocation methods for the numerical solution of singularly perturbed ordinary differential equations is investigated. Collocation at Gauss, Radau and Lobatto points is considered, for both initial and boundary value problems for first order systems with constant coefficients. Particular attention is paid to symmetric schemes for boundary value problems; these problems may have boundary layers at both interval ends.

Our analysis shows that certain collocation schemes, in particular those based on Gauss or Lobatto points, do perform very well on such problems, provided that a fine mesh with steps proportional to the layers' width is used in the layers only, and a coarse mesh, just fine enough to resolve the solution of the reduced problem, is used in between. Ways to construct appropriate layer meshes are proposed. Of all methods considered, the Lobatto schemes appear to be the most promising class of methods, as they essentially retain their usual superconvergence power for the smooth, reduced solution, whereas Gauss-Legendre schemes do not.

We also investigate the conditioning of the linear systems of equations arising in the discretization of the boundary value problem. For a row equilibrated version of the discretized system we obtain a pleasantly small bound on the maximum norm condition number, which indicates that these systems can be solved safely by Gaussian elimination with scaled partial pivoting.

TR-81-03 On Pseudo-Similar Vertices in Trees, April 1981 David G. Kirkpatrick, Maria M. Klawe and D. G. Corneil

Two dissimilar vertices $u$ and $v$ in a graph $G$ are said to be pseudo-similar if $G \backslash u \cong G \backslash v$. A characterization theorem is presented for trees (later extended to forests and block-graphs) with strictly pseudo-similar (i.e. pseudo-similar but dissimilar) vertices. It follows from this characterization that it is not possible to have three or more mutually strictly pseudo-similar vertices in trees. Furthermore, pseudo-similarity combined with an extension of pseudo-similarity to include the removal of first neighbourhoods of vertices is sufficient to imply similarity in trees. Neither of these results holds if we replace trees by arbitrary graphs.

TR-81-04 On Spline Basis Selection for Solving Differential Equations, April 1981 Uri Ascher, S. Pruess and Robert D. Russell

The suitability of B-splines as a basis for piecewise polynomial solution representation for solving differential equations is challenged. Two alternative local solution representations are considered in the context of collocating ordinary differential equations: ``Hermite-type'' and ``monomial''. Both are much easier and shorter to implement and somewhat more efficient than B-splines.

A new condition number estimate for the B-splines and Hermite-type representations is presented. One choice of the Hermite-type representation is experimentally determined to produce roundoff errors at most as large as those for B-splines. The monomial representation is shown to have a much smaller condition number than the other ones, and correspondingly produces smaller roundoff errors, especially for extremely nonuniform meshes. The operation counts for the two local representations considered are about the same, the Hermite-type representation being slightly cheaper. It is concluded that both representations are preferable, and the monomial representation is particularly recommended.

TR-81-05 The Application of Optimal Stochastic Control Theory in Computer System Load Regulation, June 1981 Samuel T. Chanson and Raymond Lo

A method using some results and techniques of Optimal Stochastic Control Theory is introduced to compute the optimal admission policy for paged batch-interactive computer systems. The admission policy determines the optimal number of batch and terminal jobs that should be activated at each system state to maximize throughput. The system state is defined as the vector $(N_{1},N_{2})$ where $N_{1}$ and $N_{2}$ are respectively the total number of terminal and batch jobs in the system. Thus the policy is adaptive to workload variation. As well, the quality of service given to each class of jobs (specifically their mean response times) can be adjusted by choosing a suitable weight for the terminal jobs. A large weight reduces the mean response time of the terminal jobs at the expense of the mean batch response time while maintaining the total system throughput at its maximum level.

Unlike most existing adaptive control algorithms, the approach is based on mathematical modelling and its extension to cover the case of more than two classes of jobs is straightforward.

TR-81-06 The Computer and the State, July 1981 Richard S. Rosenberg

It is obviously difficult (if not impossible) to predict the impact on society of technological innovation. However, it is clear that such major events as the industrial revolution recreate society in a profound and enduring manner. In our own time, the development of the computer promises to transform dramatically the major industrial countries of the world. The resulting effects on the so-called Third World countries will hardly be less significant.

The purpose of this paper is twofold. First, we wish to catalogue many of the ways computers have affected and are likely to affect our daily lives. A second purpose is to employ this analysis to explore the effect of the massive ``computerization'' of society on a number of its institutions. It is hoped that the material provided will be useful to those whose major concern is the evolution of the modern state in response to technological innovation.

TR-81-07 On the Complexity of General Graph Factor Problems, August 1981 David G. Kirkpatrick and P. Hell

For arbitrary graphs G and H, a G-factor of H is a spanning subgraph of H composed of disjoint copies of G. G-factors are natural generalizations of 1-factors (or perfect matchings), in which G replaces the complete graph on two vertices. Our results show that the perfect matching problem is essentially the only instance of the G-factor problem that is likely to admit a polynomial time bounded solution. Specifically, if G has any component with three or more vertices then the existence question for G-factors is NP-complete. (In all other cases the question can be resolved in polynomial time.)

The notion of a G-factor is further generalized by replacing G by an arbitrary family of graphs. This generalization forms the foundation for an extension of the traditional theory of matching. This theory, whose details will be developed elsewhere, includes, in addition to further NP-completeness results, new polynomial algorithms and simple duality results. Some indication of the nature and scope of this theory is presented here.

TR-81-08 Solvable Cases of the Travelling Salesman Problem, September 1981 Paul C. Gilmore

This paper is a chapter in a book on the travelling salesman problem edited by Eugene L. Lawler, Jan Karel Lenstra and Alexander H.G. Rinnooy Kan. By a solvable case of the travelling salesman problem is meant a case of the distance matrix for which a polynomial algorithm exists. In this paper several previously known special cases are related and extended. Further, an upper bound is obtained on the cost of an optimal tour for a broad class of matrices.

TR-81-09 Strategy-Independent Program Restructuring Based on Bounded Locality Intervals, August 1981 Samuel T. Chanson and Bernard Law

A new program restructuring algorithm based on the phase/transition model of program behaviour is presented. The scheme places much more emphasis on those blocks in the transition phases in the construction of the connectivity matrix than the existing algorithms do. This arises from the observation that the page fault rate during the transition phases is several orders of magnitude higher than that during the major phases. The strategy is found, for our reference strings, to outperform the critical working set strategy (considered to be the current best) by non-negligible amounts. Furthermore, the overhead involved is lower than that of CWS and not much higher than that of the Nearness method which is the simplest scheme known. Being strategy-independent, it also seems to respond better than CWS when the memory management strategy used is not the working set policy.

TR-81-10 Pitfalls in the Numerical Solution of Linear Ill-Posed Problems, January 1981 James M. Varah

Very special computational difficulties arise when attempting to solve linear systems arising from integral equations of the first kind. We examine here existence and uniqueness questions associated with so-called \underline{reasonable} solutions for such problems, and present results using the best-known methods on inverse Laplace transform problems. We also discuss the choice of free parameters occurring in these methods, from the same point of view.
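
As a generic illustration of the difficulty and of the role of a free parameter (this is plain Tikhonov regularization on an invented, Laplace-type kernel, not one of the methods examined in the report):

\begin{verbatim}
import numpy as np

n = 60
t = np.linspace(0.05, 3.0, n)                 # quadrature nodes
s = np.linspace(0.5, 5.0, n)                  # measurement abscissae
K = np.exp(-np.outer(s, t)) * (t[1] - t[0])   # discretized Laplace-type kernel
f_true = np.exp(-(t - 1.5) ** 2 / 0.1)
g = K @ f_true + 1e-6 * np.random.default_rng(1).standard_normal(n)

print("condition number of K:", np.linalg.cond(K))   # astronomically large

def tikhonov(K, g, lam):
    """Solve min ||K f - g||^2 + lam * ||f||^2 via the normal equations."""
    return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ g)

for lam in (1e-2, 1e-6, 1e-12):               # the free parameter matters enormously
    f = tikhonov(K, g, lam)
    print("lambda = %.0e   error = %.2e" % (lam, np.linalg.norm(f - f_true)))
\end{verbatim}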

TR-81-11 Optimal Macro-Scheduling, August 1981 Samuel T. Chanson and Prem Swarup Sinha

A multi-class macro-scheduler is described in this paper. The scheduler periodically determines the number of jobs from each class that should be activated to minimize a weighted sum of the mean system residence times without saturating the system. The computation is based on the estimated system workload in the next interval. Thus it is adaptive to workload variation. The service provided to each class (specifically, the mean response time) may be adjusted by changing the weight associated with the job class.

The scheme is based on mathematical modelling. The solution is obtained through the use of operational analysis methods and optimization theory. An exponential smoothing technique is employed to reduce the error in estimating the values of the model parameters. From our simulation results, the scheme appears to be both stable and robust. Performance improvement over some of the well-known load control criteria (e.g., the $L=S$ and the Knee criteria) is significant under some workloads. The overhead involved in its implementation is acceptable, and the errors due to some of the assumptions used in the formulation and solution of the model are discussed.
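
The exponential smoothing step mentioned above, in isolation (the smoothing weight and the sample observations below are invented):

\begin{verbatim}
def smooth(observations, alpha=0.3, estimate=None):
    """Return exponentially smoothed estimates of a workload parameter:
    each new observation is blended with the previous estimate."""
    out = []
    for x in observations:
        estimate = x if estimate is None else alpha * x + (1 - alpha) * estimate
        out.append(estimate)
    return out

print(smooth([10, 12, 30, 11, 10]))   # the spike at 30 is damped
\end{verbatim}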

TR-81-12 Upper Bounds for Sorting Integers on Random Access Machines, September 1981 David G. Kirkpatrick and Stefan Reisch

Two models of Random Access Machines suitable for sorting integers are presented. Our main results show that i) a RAM with addition, subtraction, multiplication, and integer division can sort $n$ integers in the range $[0,2^{cn}]$ in $O(n \log c + n)$ steps; ii) a RAM with addition, subtraction, and left and right shifts can sort any $n$ integers in linear time; iii) a RAM with addition, subtraction, and left and right shifts can sort $n$ integers in the range $[0,n^{c}]$ in $O(n \log c + n)$ steps, where all intermediate results are bounded in value by the largest input.
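
The RAM algorithms themselves are not reproduced here, but the flavour of such bounds can be seen in an ordinary base-$n$ radix sort, which sorts $n$ integers drawn from $[0,n^{c})$ with $c$ stable passes, i.e. $O(cn)$ operations (illustrative sketch only; the data are invented):

\begin{verbatim}
def radix_sort(a, c):
    """Sort a list of n integers drawn from [0, n**c) in c stable passes."""
    n = max(len(a), 2)                        # base of the representation
    for d in range(c):                        # one pass per base-n digit
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n ** d) % n].append(x)
        a = [x for b in buckets for x in b]   # stable concatenation
    return a

data = [57, 3, 55, 21, 49, 3, 62, 14]         # n = 8, all values below 8**2 = 64
print(radix_sort(data, c=2))
\end{verbatim}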

TR-81-13 Optimal Search in Planar Subdivisions, January 1981 David G. Kirkpatrick

A planar subdivision is any partition of the plane into (possibly unbounded) polygonal regions. The subdivision search problem is the following: given a subdivision $S$ with $n$ line segments and a query point $p$, determine which region of $S$ contains $p$. We present a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage. Our subdivision search structure can be constructed in linear time from the subdivision representation used in many applications.

TR-81-14 A Convex Hull Algorithm Optimal for Point Sets in Even Dimensions, September 1981 Raimund Seidel

Finding the convex hull of a finite set of points is important not only for practical applications but also for theoretical reasons: a number of geometrical problems, such as constructing Voronoi diagrams or intersecting hyperspheres, can be reduced to the convex hull problem, and a fast convex hull algorithm yields fast algorithms for these other problems.

This thesis deals with the problem of constructing the convex hull of a finite point set in $R^{d}$. Mathematical properties of convex hulls are developed, in particular, their facial structure, their representation, bounds on the number of faces, and the concept of duality. The main result of this thesis is an $O(n \log n + n^{\lfloor(d+1)/2\rfloor})$ algorithm for the construction of the convex hull of $n$ points in $R^{d}$. It is shown that this algorithm is worst case optimal for even $d \geq 2$.
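
The general $R^{d}$ algorithm is not reproduced here; for the planar case $d = 2$ a standard $O(n \log n)$ construction is Andrew's monotone-chain scan, sketched below on an invented point set:

\begin{verbatim}
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def chain(pts):
    """Build one hull chain by popping points that would create a clockwise turn."""
    out = []
    for p in pts:
        while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
            out.pop()
        out.append(p)
    return out

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = chain(pts), chain(list(reversed(pts)))
    return lower[:-1] + upper[:-1]   # hull vertices in counter-clockwise order

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
\end{verbatim}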

TR-81-15 On the Shape of a Set of Points in the Plane, September 1981 H. Edelsbrunner, David G. Kirkpatrick and Raimund Seidel

A generalization of the convex hull of a finite set of points in the plane is introduced and analyzed. This generalization leads to a family of straight-line graphs, called ``shapes'', which seem to capture the intuitive notion of ``fine shape'' and ``crude shape'' of point sets.

Additionally, close relationships with Delaunay triangulations are revealed and, relying on these results, an optimal algorithm that constructs ``shapes'' is developed.

TR-81-16 Optimization Techniques in Computer System Design \& Load Control, September 1981 Prem Swarup Sinha

Analytic modelling has proven to be cost-effective in the performance evaluation of computer systems. So far, queueing theory has been employed as the main tool. This thesis extends the scope of analytic modelling by using optimization techniques along with queueing theory in solving the decision-making problems of performance evaluation. Two different problems have been attempted in this thesis.

First, a queueing network model is developed to find the optimal capacities and speeds of the memory levels in a memory hierarchy system operating in a multiprogrammed environment. Optimality is defined with respect to mean system response time under a fixed cost constraint. It is assumed that the number of levels in the hierarchy as well as the capacity of the lowest level are known. The effects of storage management strategy and program behaviour are characterised by the miss ratio function which, together with the device technology cost function, is assumed to be represented by power functions. It is shown that the solution obtained is globally optimal.

Next, two adaptive schemes, SELF and MULTI-SELF, are developed to control the flow of jobs in a multiprogrammed computer system. They periodically determine the number of jobs from each class that should be activated to minimize the mean system residence time without saturating the system. The computation is based on the estimated system workload in the next interval. An exponential smoothing technique is used to reduce the error in estimating the values of the model parameters. The service provided to each class (specifically, the mean response time) may be adjusted by changing the weight associated with the job class. From our simulation results, the schemes appear to be both stable and robust. Performance improvement over well-known load control criteria (such as the $L=S$ and the Knee criteria) is significant under some workloads. The overhead involved in their implementation is acceptable, and the errors due to some of the assumptions in the formulation and solution of the model are discussed.

TR-82-01 The Representation of Presuppositions Using Defaults, March 1982 Robert Ernest Mercer and Raymond Reiter

This paper is a first step towards the computation of an inference based on \underline{language use}, termed \underline{presupposition}. Natural languages, unlike formal languages, can be \underline{semantically ambiguous}. These ambiguities are resolved according to \underline{pragmatic rules}. We take the position that presuppositions are inferences generated from these pragmatic rules. Presuppositions are then used to generate the \underline{preferred interpretation} of the ambiguous natural language sentence. A preferred interpretation can be circumvented by an explicit inconsistency. This paper discusses the appropriateness of using \underline{default rules} (Reiter (1980)) to represent certain common examples of presupposition in natural language. We believe that default rules are not only appropriate for representing presuppositions, but also provide a formal explanation for a precursory consistency-based presuppositional theory (Gazdar (1979)).
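
For reference, the normal default rules of Reiter (1980) mentioned above have the form shown below; the bird example is the standard illustration from Reiter's paper rather than an example drawn from this report. Read informally: if $\alpha(x)$ holds and it is consistent to assume $\beta(x)$, then conclude $\beta(x)$.

\[ \frac{\alpha(x) : \beta(x)}{\beta(x)} \qquad \mbox{e.g.} \qquad \frac{Bird(x) : Flies(x)}{Flies(x)} \]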

TR-82-02 On Fitting Exponentials by Nonlinear Least Squares, January 1982 James M. Varah

This paper is concerned with the problem of fitting discrete data, or a continuous function, by least squares using exponential functions. We examine the questions of uniqueness and sensitivity of the best least squares solution, and provide analytic and numerical examples showing the possible non-uniqueness and extreme sensitivity of these solutions.

TR-82-03 The Complexity of Regular Expressions with Goto and Boolean Variables, March 1982 Karl Abrahamson

Regular expressions can be extended by adding gotos and Boolean variables. Although such extensions do not increase the class of expressible languages, they do permit shorter expressions for some languages. The output space complexity of eliminating Boolean variables is shown to be double exponential. The complexity of eliminating a single goto from a regular expression is shown to be \( \Omega (n \log n) \), a surprising result considering that $n$ gotos can be eliminated in single exponential space.

TR-82-04 Collocation for Singular Perturbation Problems II, May 1982 Uri Ascher and R. Weiss

We consider singularly perturbed linear boundary value problems for ODEs, with variable coefficients, but without turning points. Convergence results are obtained for collocation schemes based on Gauss and Lobatto points, showing that highly accurate numerical solutions for these problems can be obtained at a very reasonable cost using such schemes, provided that appropriate meshes are used. The implementation of the numerical schemes and the practical construction of corresponding meshes are discussed.

These results extend those of a previous paper which deals with systems with constant coefficients.

TR-82-05 A Regression Model of a Swapping System, July 1982 Samuel T. Chanson

This paper describes a measurement experiment performed on a PDP 11/45 system running under UNIX (version six), which employs swapping rather than paging in managing memory. Regression equations relating the system's responsiveness to certain system and workload parameters are obtained. Sample applications are presented, such as predicting the system's performance under workload and system changes, load control, and representing the swapping behaviour in simulation and analytic models. The similarities between the paging and swapping dynamics are discussed. The paper also includes a brief discussion of the accuracy of the model as well as the advantages and disadvantages of the regression technique.

TR-82-06 The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problems, August 1982 Alan K. Mackworth and E. C. Freuder

Constraint satisfaction problems play a central role in artificial intelligence. A class of network consistency algorithms for eliminating local inconsistencies in such problems has previously been described. In this paper we analyze the time complexity of several node, arc and path consistency algorithms. Arc consistency is achievable in time linear in the number of binary constraints. The Waltz filtering algorithm is a special case of the arc consistency algorithm. In that computational vision application the constraint graph is planar and so the complexity is linear in the number of variables.
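
To make the notion of arc consistency concrete, here is a minimal AC-3 style sketch in Python; it illustrates the elimination of locally inconsistent values but is not claimed to be any of the particular algorithms analyzed in the report, and all names in it are illustrative.

\begin{verbatim}
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of values}; constraints: {(x, y): pred(vx, vy)}.
    Removes values with no support until every arc is consistent."""
    arcs = deque(constraints.keys())
    while arcs:
        x, y = arcs.popleft()
        pred = constraints[(x, y)]
        # keep only the values of x supported by some value of y
        unsupported = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            # re-examine the arcs pointing into x
            arcs.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return domains

# Example: x < y over domains {1,2,3}; pruning removes x=3 and y=1.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda vx, vy: vx < vy,
        ("y", "x"): lambda vy, vx: vy > vx}
print(ac3(doms, cons))    # {'x': {1, 2}, 'y': {2, 3}}
\end{verbatim}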

TR-82-07 Unification Based Conditional Binding Constructs, September 1982 Harvey Abramson

The unification algorithm, heretofore used primarily in the mechanization of logic, can be used in applicative programming languages as a pattern matching tool. Using SASL (St. Andrews Static Language) as a typical applicative programming language, we introduce several unification based conditional binding (i.e., pattern matching) constructs, show how these can promote clarity and conciseness of expression in applicative languages, and indicate some applications of these constructs. In particular, we present an interpreter for SASL functions defined by recursion equations.
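
As an illustration of the pattern-matching primitive referred to above, here is a small Python sketch of first-order unification (without an occurs check); it is not SASL code, and the term representation is an assumption made for the example.

\begin{verbatim}
# Terms: variables are strings starting with an upper-case letter;
# compound terms are tuples (functor, arg1, ..., argn); anything else
# is a constant.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a substitution extending s that unifies a and b, or None."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# cons(X, cons(2, nil)) unified with cons(1, Y) binds X=1, Y=cons(2, nil).
print(unify(("cons", "X", ("cons", 2, "nil")), ("cons", 1, "Y")))
\end{verbatim}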

TR-82-08 Performance of Some Local Area Network Technologies, August 1982 Samuel T. Chanson, A. Kumar and A. Nadkarni

This paper classifies local area network (LAN) technologies according to their topology and access method. The characteristics of the popular LAN technologies (namely Ring/Token passing, Ring/Message slots and Bus/Contention) are discussed. Analytic models are developed to estimate the mean packet delay time of each technology as a function of the network loading for various packet sizes and numbers of active stations. It is found that in the case of slotted rings (but not the other two technologies) an optimal value of the number of active stations exists which minimizes the mean delay time at all load levels given a packet arrival rate. The LAN technologies are compared with regard to their performance, reliability, availability, maintainability, extensibility, fairness and complexity.

It is hoped that potential users may be able to select the appropriate technology for their intended applications based on their specific performance requirements and operating environment. As well, LAN designers may benefit from the insight provided by the analysis.

TR-82-09 Collocation for Singular Perturbation Problems III: Nonlinear Problems without Turning Points, October 1982 Uri Ascher and R. Weiss

A class of nonlinear singularly perturbed boundary value problems is considered, with restrictions which allow only well-posed problems with possible boundary layers, but no turning points. For the numerical solution of these problems, a close look is taken at a class of general purpose, symmetric finite difference schemes arising from collocation.

It is shown that if locally refined meshes, whose practical construction is discussed, are employed, then high order uniform convergence of the numerical solution is obtained. Nontrivial examples are used to demonstrate that highly accurate solutions to problems with extremely thin boundary layers can be obtained in this way at a very reasonable cost.

TR-82-10 Standard Image Files, January 1982 William S. Havens

(Abstract not available on-line)

TR-82-11 Multi Process Structuring of X.25 Software, October 1982 Stephen Edward Deering

Modern communication protocols present the software designer with problems of asynchrony, real-time response, high throughput, robust exception handling, and multi-level interfacing. An operating system which provides lightweight processes and inexpensive inter-process communication offers solutions to all of these problems. This thesis examines the use of the multi-process structuring facilities of one such operating system, Verex, to implement the protocols defined by CCITT Recommendation X.25. The success of the multi-process design is confirmed by a working implementation that has linked a Verex system to the Datapac public network for over a year.

The processes which make up the Verex X.25 software are organized into layers according to the layered definition of X.25. Within the layers, some processes take the form of finite-state machines which execute the state transitions specified in the protocol definition. Matching the structure of the software to the structure of the specification results in software which is easy to program, easy to understand, and likely to be correct.

Multi-process structuring can be applied with similar benefits to protocols other than X.25 and systems other than Verex.

TR-82-12 Knowledge-Based Visual Interpretation Using Declarative Schemata, November 1982 Roger A. Browse

One of the main objectives of computer vision systems is to produce structural descriptions of the scenes depicted in images. Knowledge of the class of objects being imaged can facilitate this objective by providing models to guide interpretation, and by furnishing a basis for the structural descriptions. This document describes research into techniques for the representation and use of knowledge of object classes, carried out within the context of a computational vision system which interprets line drawings of human-like body forms.

A declarative schemata format has been devised which represents structures of image features which constitute depictions of body parts. The system encodes relations between these image constructions and an underlying three dimensional model of the human body. Using the component hierarchy as a structural basis, two layers of representation are developed. One references the fine resolution features, and the other references the coarse resolution features. These layers are connected with links representative of the specialization/generalization hierarchy. The problem domain description is declarative, and makes no commitment to the nature of the subsequent interpretation processes. As a means of testing the adequacy of the representation, portions have been converted into a PROLOG formulation and used to ``prove'' body parts in a data base of assertions about image properties.

The interpretation phase relies on a cue/model approach, using an extensive cue table which is automatically generated from the problem domain description. The primary mechanisms for control of interpretation possibilities are fashioned after network consistency methods. The operation of these mechanisms is localized and separated between operations at the feature level and at the model level.

The body drawing interpretation system is consistent with aspects of human visual perception. The system is capable of intelligent selection of processing locations on the basis of the progress of interpretation. A dual resolution retina is moved about the image collecting fine level features in a small foveal area and coarse level features in a wider peripheral area. Separate interpretations are developed locally on the basis of the two different resolution levels, and the relation between these two interpretations is analyzed by the system to determine locations of potentially useful information.

TR-82-13 A Cooperative Scheme for Image Understanding Using Multiple Sources of Information, November 1982 Jay Glicksman

One method of resolving the ambiguity inherent in interpreting images is to add different sources of information. The multiple information source paradigm emphasizes the ability to utilize knowledge gained from one source that may not be present in another. However, utilizing disparate information may create situations in which data from different sources are inconsistent.

A schemata-based system has been developed that can take advantage of multiple sources of information. Schemata are combined into a semantic network via the relations decomposition, specialization, instance of, and neighbour. Control depends on the structure of the evolving network and a cycle of perception. Schemata cooperate by message passing so that attention can be directed where it will be most advantageous.

This system has been implemented to interpret aerial photographs of small urban scenes. Geographic features are identified using up to three information sources: the intensity image, a sketch map, and information provided by the user. The product is a robust system where the accuracy of the results reflects the quality and amount of data provided. Images of several geographic locales are analyzed, and positive results are reported.

TR-83-01 Formalizing Non-Monotonic Reasoning Systems, January 1983 David W. Etherington

In recent years, there has been considerable interest in non-monotonic reasoning systems. Unfortunately, formal rigor has not always kept pace with the enthusiastic propagation of new systems. Formalizing such systems may yield dividends in terms of both clarity and correctness. We show that Default Logic is a useful tool for the specification and description of non-monotonic systems, and present new results which enhance this usefulness.

TR-83-02 Data Types as Term Algebras, March 1983 Akira Kanda and Karl Abrahamson

Data types in programming have been mathematically studied from two different viewpoints, namely data types as (initial) algebras and data types as complete partial orders. In this paper, we explore the possibility of a finitaristic approach. For finitarists, the only sets accepted are ``recursively defined'' sets. We observe that a recursive definition not only defines a set of terms but also basic operations over them; thus it induces an algebra of terms. We compare this approach to the existing two approaches. Using our approach we present a finer classification of data types.

TR-83-03 Numeration Models of $\lambda$-Calculus, April 1983 Akira Kanda

Models of $\lambda$-calculus have been studied by Church [2] and Scott [7]. In these studies, finding solutions to the isomorphism equation \(S \approx [S \rightarrow S] \), where \( [S \rightarrow S] \) is a certain set of functions from $S$ to $S$, is the main issue. In this paper, we present an example of such solutions which fails to be a model of $\lambda$-calculus. This example indicates the necessity of careful consideration of the syntax of $\lambda$-calculus, especially for the study of constructive models of $\lambda$-calculus. Taking this into account, we axiomatically show when a numeration of Er\u{s}ov [3] forms a model of $\lambda$-calculus. This serves as a general framework for countable models of $\lambda$-calculus. Various examples of such numerations are studied. An algebraic characterization of this class of numerations is also given.

TR-83-04 On the Complexity of Achieving K-Consistency, January 1983 Raimund Seidel

A number of combinatorial search problems can be formulated as constraint satisfaction problems. Typically backtrack search is used to solve these problems. To counteract the frequent thrashing behaviour of backtrack search, methods have been proposed to precondition constraint satisfaction problems. These methods remove inconsistencies involving only a small number of variables from the problem. In this note we analyze the time complexity of the most general of these methods, Freuder's $k$-consistency algorithm. We show that it takes worst case time $O(n^{k})$, where $n$ is the number of variables in the problem.

TR-83-05 A Linear Algorithm for Determining the Separation of Convex Polyhedra, January 1983 David P. Dobkin and David G. Kirkpatrick

The separation of two convex polyhedra is defined to be the minimum distance from a point (not necessarily an extreme point) of one to a point of the other. We present a linear algorithm for constructing a pair of points that realize the separation of two convex polyhedra in three dimensions. Our algorithm is based on a simple hierarchical description of polyhedra that is of interest in its own right.

Our result provides a linear algorithm for detecting the intersection of convex polyhedra. Separation and intersection detection algorithms have applications in clustering, the intersection of half-spaces, linear programming, and robotics.

TR-83-06 (g,f) - Factors \& Packings, When g

(Abstract not available on-line)

TR-83-07 Marriage Before Conquest: A Variation on the Divide \& Conquer Paradigm, October 1983 David G. Kirkpatrick and Raimund Seidel

We present a new planar convex hull algorithm with worst case time complexity $O(n \log H)$ where $n$ is the size of the input set and $H$ is the size of the output set, i.e. the number of vertices found to be on the hull. We also show that this algorithm is asymptotically worst case optimal on a rather realistic model of computation even if the complexity of the problem is measured in terms of input as well as output size. The algorithm relies on a variation of the divide-and-conquer paradigm which we call the ``marriage-before-conquest'' principle and which appears to be interesting in its own right.

TR-83-08 A Prological Definition of HASL, a Purely Functional Language with Unification Based Expressions, January 1983 Harvey Abramson

We present a definition in Prolog of a new purely functional (applicative) language HASL ({\em HA}rvey's {\em S}tatic {\em L}anguage). HASL is a descendant of Turner's SASL and differs from the latter in several significant points: it includes Abramson's unification based conditional binding constructs; it restricts each clause in a definition of a HASL function to have the same arity, thereby complicating somewhat the compilation of clauses to combinators, but simplifying considerably the HASL reduction machine; and it includes the single element domain \{fail\} as a component of the domain of HASL data structures. It is intended to use HASL to express the functional dependencies in a translator writing system based on denotational semantics, and to study the feasibility of using HASL as a functional sublanguage of Prolog or some other logic programming language. Regarding this latter application we suggest that since a reduction mechanism exists for HASL, it may be easier to combine it with a logic programming language than it was for Robinson and Sibert to combine LISP and LOGIC into LOGLISP: in that case a fairly complex mechanism had to be invented to reduce uninterpreted LOGIC terms to LISP values.

The definition is divided into four parts. The first part defines the lexical structure of the language by means of a simple Definite Clause Grammar which relates character strings to ``token'' strings. The second part defines the syntactic structure of the language by means of a more complex Definite Clause Grammar and relates token strings to a parse tree. The third part is semantic in nature and translates the parse tree definitions and expressions to a variable-free string of combinators and global names. The fourth part of the definition consists of a set of Prolog predicates which specifies how strings of combinators and global names are reduced to ``values'' (i.e., integers, truth values, characters, lists, functions, and fail), and has an operational flavour: one can think of this fourth part as the definition of a normal order reduction machine.

TR-83-09 R-Maple: A Concurrent Programming Language Based on Predicate Logic, Part I: Syntax \& Computation, January 1983 Paul J. Voda

(Abstract not available on-line)

TR-83-10 A Fast Data Compression Method, August 1983 Samuel T. Chanson and Jee Fung Pang

This paper presents a new data compression scheme. The scheme uses both fixed and variable length codes and gives a compression ratio of about one-third for English text and program source files (without leading and trailing blank suppression). This is very respectable compared to existing schemes. The compression ratio for numbers ranges from 52\% for numbers in scientific notation to 65\% for integers. The major advantage of the scheme is its simplicity. The scheme is at least six times faster than Huffman's code and takes about half the main memory space to execute.

TR-83-11 A Sound and Sometimes Complete Query Evaluation Algorithm for Relational Databases with Null Values, June 1983 Raymond Reiter

This paper presents a sound and, in certain cases, complete method for evaluating queries in relational databases with null values where these nulls represent existing but unknown individuals. The soundness and completeness results are proved relative to a formalization of such databases as suitable theories of first order logic.

TR-83-12 Acceptable Numerations of Function Spaces, October 1983 Akira Kanda

We study when a numeration of the set of morphisms from one numeration to another is well-behaved. We call such well-behaved numerations ``acceptable numerations''. We characterize acceptable numerations by two axioms and show that acceptable numerations are recursively isomorphic to each other. We also show that for each acceptable numeration a fixed point theorem holds. The relation between Cartesian closedness and the S-m-n property is discussed in terms of acceptable numerations. As an example of acceptable numerations, we study directed indexings of effective domains.

TR-84-01 Scale-Based Descriptions of Planar Curves, March 1984 Alan K. Mackworth and Farzin Mokhtarian

The problem posed in this paper is the description of planar curves at varying levels of detail. Five necessary conditions are imposed on any candidate solution method. Two candidate methods are rejected. A new method that uses well-known Gaussian smoothing techniques but applies them in a path-based coordinate system is described. By smoothing with respect to a path-length parameter the difficulties of other methods are overcome. An example shows how the method extracts the major features of a curve, at varying levels of detail, based on segmentation at zeroes of the curvature, $\kappa$. The method satisfies the five necessary criteria.
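
A minimal sketch of the smoothing step described above, assuming a closed curve given as samples of $x(s)$ and $y(s)$ that are roughly equally spaced in path length; it applies periodic Gaussian smoothing and locates the zero crossings of the curvature $\kappa$. The sampling scheme and library choices are assumptions made for this sketch, not details taken from the report.

\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_zeros(x, y, sigma):
    """x, y: samples of a closed curve, roughly equally spaced in path length.
    Returns indices where the curvature of the smoothed curve changes sign."""
    xs = gaussian_filter1d(x, sigma, mode="wrap")  # smooth w.r.t. path length
    ys = gaussian_filter1d(y, sigma, mode="wrap")
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return np.where(np.diff(np.sign(kappa)) != 0)[0]

# Example: a wavy closed curve has curvature zero crossings; a circle has none.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.3 * np.sin(5 * t)
print(curvature_zeros(r * np.cos(t), r * np.sin(t), sigma=3))
\end{verbatim}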

TR-84-02 On Gapping Grammars, April 1984 Harvey Abramson and Veronica Dahl

A Gapping Grammar (GG) has rewriting rules of the form \[\alpha_{1}, {\em gap}(x_{1}), \alpha_{2}, {\em gap}(x_{2}), \ldots, \alpha_{n-1}, {\em gap}(x_{n-1}), \alpha_{n} \rightarrow \beta\] where \[\alpha_{i} \in V_{N} \cup V_{T}, \qquad x_{i} \in V_{T}^{*}, \qquad G = \{{\em gap}(x_{1}), {\em gap}(x_{2}), \ldots, {\em gap}(x_{n-1})\}, \qquad \beta \in (V_{N} \cup V_{T} \cup G)^{*},\] and $V_{T}$ and $V_{N}$ are the terminal and non-terminal vocabularies of the Gapping Grammar. Intuitively, a GG rule allows one to deal with unspecified strings of terminal symbols called {\em gaps}, represented by $x_{1}, x_{2}, \ldots, x_{n-1}$, in a given context of specified terminals and non-terminals, represented by $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}$, and then to distribute them in the right hand side $\beta$ in any order. GGs are a generalization of Fernando Pereira's {\em Extraposition Grammars} (XGs), whose rules have the form (using our notation): \begin{eqnarray*} \alpha_{1},{\em gap}(x_{1}), \alpha_{2}, {\em gap}(x_{2}), \ldots, {\em gap}(x_{n-1}), \alpha_{n} \rightarrow \\ \beta, {\em gap}(x_{1}), {\em gap}(x_{2}), \ldots, {\em gap}(x_{n-1}) \end{eqnarray*} i.e., gaps are rewritten in their sequential order in the rightmost positions of the rewriting rule. In this paper we motivate GGs by presenting grammatical examples where XGs are not adequate, and we describe and discuss alternative implementations of GGs in logic.

TR-84-03 Definite Clause Translation Grammars, April 1984 Harvey Abramson

In this paper we introduce Definite Clause Translation Grammars, a new class of logic grammars which generalizes Definite Clause Grammars and which may be thought of as a logical implementation of Attribute Grammars. Definite Clause Translation Grammars permit the specification of the syntax and semantics of a language: the syntax is specified as in Definite Clause Grammars; but the semantics is specified by one or more semantic rules in the form of Horn clauses attached to each node of the parse tree (automatically created during syntactic analysis), and which control traversal(s) of the parse tree and computation of attributes of each node. The semantic rules attached to a node therefore constitute a local data base for that node. The separation of syntactic and semantic rules is intended to promote modularity, simplicity and clarity of definition, and ease of modification as compared to Definite Clause Grammars, Metamorphosis Grammars, and Restriction Grammars.
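
The formalism above is Prolog-based; purely to illustrate the idea of attaching a semantic rule to each node of the parse tree and evaluating attributes by a separate traversal, the Python sketch below parses sums of digits and attaches a ``value'' rule to every node. The grammar and the attribute name are invented for the example and are not taken from the report.

\begin{verbatim}
# Illustration only: each parse-tree node carries its own semantic rule,
# evaluated by a traversal performed after syntactic analysis.
class Node:
    def __init__(self, kind, children, rule):
        self.kind, self.children, self.rule = kind, children, rule
    def value(self):                  # attribute computation = tree traversal
        return self.rule([c.value() for c in self.children])

def parse_digit(s, i):
    return Node("digit", [], rule=lambda _, d=int(s[i]): d), i + 1

def parse_expr(s, i=0):
    """expr ::= digit ('+' digit)*   -- semantic rule: sum of the children."""
    node, i = parse_digit(s, i)
    children = [node]
    while i < len(s) and s[i] == "+":
        node, i = parse_digit(s, i + 1)
        children.append(node)
    return Node("expr", children, rule=sum), i

tree, _ = parse_expr("1+2+3")
print(tree.value())    # 6
\end{verbatim}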

TR-84-04 Classes of Numeration Models of $\lambda$-Calculus, April 1984 Akira Kanda

In [4] the reflexive structures in the category of numerations were studied. It was shown that every numerated reflexive set forms a ``numeration model of $\lambda$-calculus''. In this short note we formalize the concept of numeration models of $\lambda$-calculus, and study several interesting subclasses. Even though the class of numeration models does not coincide with the class of numerated reflexive sets, we can show that the class of numeration models with the ``$\lambda$-definability'' property is equivalent to the class of numerated reflexive sets with the ``$\lambda$-representability'' property. Through this we observe a relation between $\lambda$-definability and the acceptability of numerations discussed in [5].

TR-84-05 On The Adequacy of Predicate Circumscription For Closed-World Reasoning, September 1984 David W. Etherington, Robert E. Mercer and Raymond Reiter

We focus on McCarthy's method of predicate circumscription in order to establish various results about its consistency, and about its ability to conjecture new information. A basic result is that predicate circumscription cannot account for the standard kinds of default reasoning. Another is that predicate circumscription yields no new information about the equality predicate. This has important consequences for the unique names and domain closure assumptions.

TR-84-06 A Unified Approach to the Geometric Rectification of Remotely Sensed Imagery, May 1984 Frank Hay-Chee Wong

Many applications of remotely sensed digital imagery require images that are corrected for geometric distortions. It is often desirable to rectify different types of satellite imagery to a common data base. A high throughput rectification system is required for commercial application. High geometric and radiometric precision must be maintained.

The thesis has accomplished the following tasks: \begin{enumerate} \item The sensors used to obtain remotely sensed imagery have been investigated and the associated geometric distortions inherent with each sensor are identified. \item The transformation between image coordinates and datum coordinates has been determined and the values of the parameters in the transformation are estimated. \item A unified rectification approach has been developed, for all types of remotely sensed digital imagery, which yields a high system throughput. \item Use of digital terrain models in the rectification process to correct for relief displacement has been incorporated. \item An efficient image interpolation algorithm has been developed. This algorithm takes into account the fact that imagery does not always correspond to sampling on a uniform grid. \item The applications of rectified imagery such as mosaicking and multisensor integration have been studied. \item Extension of the rectification algorithm to a future planetary mission has been investigated. \end{enumerate}

The sensors studied include the TIROS-N, Landsat-1, -2 and -3 multispectral scanners, Seasat synthetic aperture radar, Landsat-4 thematic mapper and SPOT linear array detectors. Imagery from the last two sensors is simulated.

TR-84-07 Numeration Models of $\lambda\beta$-Calculus, May 1984 Akira Kanda

Numeration models of extensional $\lambda$-calculus have been studied (see [5,7]). In this paper, we study numeration models of $\lambda \beta$-calculus. Engeler's graph algebra construction [3] is applied to the category of numerations and is used as a tool to obtain numeration models of $\lambda \beta$-calculus. Several classes of numeration models are studied and several examples of them are presented.

TR-84-08 RF-Maple: A Logic Programming Language with Functions, Types \& Concurrency, April 1984 Paul J. Voda and Benjamin Yu

Currently there is wide interest in the combination of functional programs with logic programs. The advantage is that both the composition of functions and the non-determinism of relations can be obtained. The language RF-Maple is an attempt to combine the logic programming style with the functional programming style. ``RF'' stands for ``Relational and Functional''. It is a true union of a relational programming language, R-Maple, and a functional programming language, F-Maple.

R-Maple is a concurrent relational logic programming language which tries to strike a balance between control and meaning. Sequential and parallel execution of programs can be specified in finer details than in Concurrent Prolog. R-Maple uses explicit quantifiers and has negation. As a result, the declarative reading of R-Maple programs is never compromised by the cuts and commits of both Prologs.

F-Maple is a very simple typed functional programming language (it has only four constructs) which was designed as an operating system at the same time. It is a syntactically extensible language where the syntax of types and functions is entirely under the programmer's control.

In combining the two concepts of R-Maple and F-Maple to produce RF-Maple, the readability of programs and the speed of execution are improved. The latter is due to the fact that many relations are functional and therefore do not require backtracking. We believe its power as well as its expressiveness and ease of use go a little beyond the possibilities of the currently available languages.

TR-84-09 A View of Programming Languages as Symbiosis of Meaning \& Control, May 1984 Paul J. Voda

TR-84-10 Photometric Method for Determining Shape from Shading, July 1984 R. J. Woodham

A smooth opaque object produces an image in which brightness varies spatially even if the object is illuminated evenly and is covered by a surface material with uniform optical properties. Photometric methods relate image irradiance to object shape and surface material using physical models of the way surfaces reflect light. A reflectance map allows image irradiance to be written as a function of surface orientation, for a given surface material and light source distribution. Shape from shading algorithms use a reflectance map to analyze what is seen.

The development of photometric methods for determining shape from shading is discussed, beginning with examples from lunar astronomy. The results presented distinguish shape information that can be determined from geometric measurements at object boundaries from shape information that can be determined from intensity measurements over sections of smooth surface. Recent work of Ikeuchi and Horn is presented which relaxes the requirement that the image irradiance equation be satisfied exactly. Instead, the image irradiance equation specifies one constraint that is combined with another constraint derived from general surface smoothness criteria. Shape from shading is expressed as a constrained minimization problem.

Another method uses multiple images in a technique called photometric stereo. In photometric stereo, the illumination is varied between successive images while the viewing direction remains constant. Multiple images obtained in this way provide enough information to determine surface orientation at each image point, without smoothness assumptions.
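
A minimal sketch of the photometric stereo computation described in the last paragraph, assuming a Lambertian surface and three known, non-coplanar light source directions; the data below are synthetic, and the setup is illustrative rather than taken from the report.

\begin{verbatim}
import numpy as np

def photometric_stereo(I, L):
    """I: (3, npix) intensities under 3 lights; L: (3, 3) unit light directions.
    Returns per-pixel unit surface normals and albedo (Lambertian model)."""
    G = np.linalg.solve(L, I)                  # G = albedo * normal, per pixel
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)    # guard against division by zero
    return normals, albedo

# Synthetic check: one pixel with a known normal and albedo 0.8.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.2, -0.1, 0.97])
n_true /= np.linalg.norm(n_true)
I = 0.8 * L @ n_true[:, None]                  # image irradiance per light
normals, albedo = photometric_stereo(I, L)
print(normals.ravel(), albedo)                 # recovers n_true and 0.8
\end{verbatim}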

TR-84-11 Definite Clause Translation Grammars \& the Logical Specification of Data Types as Unambiguous Context Free Grammars, August 1984 Harvey Abramson

Data types may be considered as unambiguous context free grammars. The elements of such a data type are the derivation trees of sentences generated by the grammars. Furthermore, the generators and recognizers of non-terminals specified by such grammars provide the composition and decomposition operators which can be used to define functions or predicates over such data types. We present a modification of our Definite Clause Translation Grammars (Abramson 1984) which is used to logically specify data types as unambiguous context free grammars. For example, here is a grammatical specification of binary trees: \begin{center} tree ::= leaf : string. \\ tree ::= branch : ``('', left : tree, ``,'', right : tree, ``)''. \end{center} The decomposition ``operators'', left, right, and (implicitly) string, are semantic attributes generated by the compiler which translates these grammar rules to Prolog clauses; these operators, together with the parser for $tree$s, and the predicates $leaf$ and $branch$, can be used to construct more complex predicates over the data type $tree$. We show how such grammars can be used to impose a typing system on logic programs; and indicate how such grammars can be used to implement Kaviar, a functional programming language based on data types as context free grammars.

TR-84-12 A Generalization of the Frank Matrix, January 1984 James M. Varah

In this paper, we give a generalization of the well-known Frank matrix and show how to compute its eigensystem accurately. As well, we attempt to explain the ill-condition of its eigenvalues by treating it as a perturbation of a defective matrix.

TR-84-13 Stability of Collocation at Gaussian Points, October 1984 Uri Ascher and G. Bader

Symmetric Runge-Kutta schemes are particularly useful for solving stiff two-point boundary value problems. Such A-stable schemes perform well in many cases, but it is demonstrated that in some instances the stronger property of algebraic stability is required.

A characterization of symmetric, algebraically stable Runge-Kutta schemes is given. The class of schemes thus defined turns out not to be very rich: The only collocation schemes in it are those based on Gauss points, and other schemes in the class do not seem to offer any advantage over collocation at Gaussian points.

TR-84-14 Photometric Method for Radiometric Correction of Multispectral Scanner Data, October 1984 R. J. Woodham and T. K. Lee

Radiometric correction of multispectral scanner data requires physical models of image formation in order to deal with variations in topography, scene irradiance, atmosphere and viewing conditions. The scene radiance equation is more complex for rugged terrain than for flat terrain since it must model elevation, slope and aspect dependent effects. A simple six parameter model is presented to account for differential amounts of solar irradiance, sky irradiance and path radiance across a scene. The model uses the idea of a reflectance map to represent scene radiance as a function of surface orientation. Scene radiance is derived from the bidirectional reflectance distribution function (BRDF) of the surface material and a distribution of light sources. The sun is treated as a collimated source and the sky is treated as a uniform hemispherical source. The atmosphere adds further complication and is treated as an optically thin, horizontally uniform layer.

The required six parameters account for atmospheric effects and can be estimated when a suitable digital terrain model (DTM) is available. This is demonstrated for Landsat MSS images using a test site near St. Mary Lake in southeastern British Columbia. An intrinsic surface albedo is estimated at each point, independent of how that point is illuminated and viewed.

It is argued that earlier conclusions about the usefulness of the Lambertian assumption for the radiometric correction of multispectral scanner data were premature. Correction methods proposed in the literature fail even if the surface is Lambertian. This is because sky irradiance is significant and must be dealt with explicitly, especially for slopes approaching the grazing angle of solar incidence.

TR-84-15 Scale-Based Description and Recognition of Planar Curves and Two-Dimensional Shapes, October 1984 Farzin Mokhtarian and Alan K. Mackworth

The problem of finding a description for planar curves and two-dimensional shapes at varying levels of detail and matching two such descriptions is posed and solved in this paper. A number of necessary criteria are imposed on any candidate solution method. Path-based Gaussian smoothing techniques are applied to the curve to find zeroes of curvature at varying levels of detail. The result is the `generalized scale space' image of a planar curve which is invariant under rotation, uniform scaling and translation of the curve. These properties make the scale space image suitable for matching. The matching algorithm is a modification of the uniform cost algorithm and finds the lowest cost match of contours in the scale space images. It is argued that this is preferable to matching in a stable scale of the curve because no such scale may exist for a given curve. This technique is applied to register a Landsat aerial image of the Strait of Georgia, B.C. (manually corrected for skew) to a map containing the shorelines of an overlapping area.

TR-84-16 A Theory of Schema Labelling, June 1984 William S. Havens

Schema labelling is a representation theory which focuses on composition and specialization as two major aspects of machine perception. Previous research in computer vision and knowledge representation has identified computational mechanisms for these tasks. We show that the representational adequacy of schema knowledge structures can be combined advantageously with the constraint propagation capabilities of network consistency techniques. In particular, composition and specialization can be realized as mutually interdependent cooperative processes which operate on the same underlying knowledge representation. In this theory, a schema is a generative representation for a class of semantically related objects. Composition builds a structural description of the scene from rules defined in each schema. The scene description is represented as a network consistency graph which makes explicit the objects found in the scene and their semantic relationships. The graph is hierarchical and describes the input scene at varying levels of detail. Specialization applies network consistency techniques to refine the graph towards a global scene description. Schema labelling is being used for interpreting hand-printed Chinese characters [10], and for recognizing VLSI circuit designs from their mask layouts [2].

TR-84-17 Collocation for Two-Point Boundary Value Problems Revisited, November 1984 Uri Ascher

Collocation methods for two-point boundary value problems for higher order differential equations are considered. By using appropriate monomial bases, we relate these methods to corresponding one-step schemes for 1st order systems of differential equations. This allows us to present the theory for nonstiff problems in relatively simple terms, refining at the same time some convergence results and discussing stability. No restriction is placed on the meshes used.

TR-84-18 The Design of a Distributed Interpreter for Concurrent Prolog, November 1984 Chun Man Tam

Prolog is a programming language based on predicate logic. Its successor, Concurrent Prolog, was designed to meet the needs of a multiprocessing environment to the extent that it may be desirable as a succinct language for writing operating systems. Here, we demonstrate the feasibility of implementing a distributed interpreter for Concurrent Prolog using traditional programming tools under a multiprocess structuring methodology. We will discuss the considerations that must be made in a distributed environment and how the constructs of the language may be implemented. In particular, several subtle pitfalls associated with the implementation of read-only variables and the propagation of new bindings will be illustrated. In addition, a modification to Shapiro's treatment of read-only variables is proposed in an attempt to ``clean up'' the semantics of the language.

(The discussion will centre around a primitive version of an interpreter for the language written in Zed (a language similar to C) on a Unix-like operating system, Verex. Although a brief introduction to Prolog and Concurrent Prolog will be given, it is assumed that the reader is familiar with the paper \underline{A Subset of Concurrent Prolog and Its Interpreter} by E.Y. Shapiro [Shapiro83].)

TR-84-19 Natural Deduction Based Set Theories: A New Resolution of the Old Paradoxes, January 1984 Paul C. Gilmore

The comprehension principle of set theory asserts that a set can be formed from the objects satisfying any given property. The principle leads to immediate contradictions if it is formalized as an axiom scheme within classical first order logic. A resolution of the set paradoxes results if the principle is formalized instead as two rules of deduction in a natural deduction presentation of logic. This presentation of the comprehension principle for sets as semantic rules, instead of as a comprehension axiom scheme, can be viewed as an extension of classical logic, in contrast to the assertion of extra-logical axioms expressing truths about a pre-existing or constructed universe of sets. The paradoxes are disarmed in the extended classical semantics because truth values are only assigned to those sentences that can be grounded in atomic sentences.

TR-84-20 An Alternative Characterization of Precomplete Numerations, November 1984 Akira Kanda

Er\u{s}ov [1] characterized precomplete numerations as those numerations which satisfy the 2nd recursion theorem. In this short note we show that they are exactly those numerations which satisfy the strongest form of the 2nd recursion theorem.

TR-84-21 The File System of a Logic Operating System, November 1984 Anthony J. Kusalik

This paper describes the file system of an operating system for a logic inference machine. The file system is composed of a file system device and a collection of file system servers. The former provides the basic services of creation, access (reading or writing), removal, and stable storage of files. It realizes a simple, though powerful model: a file store as a special type of name server maintaining associations between identifiers and entities. A file is then a pair, $<$file name, file content$>$, of terms. Clients gain access to a file by sharing the file content term with the file system device. Reading the file corresponds to examination of the term; writing, to instantiation. There is no need of explicit read or write operations, or of file closure. File system servers enhance or modify this basic file abstraction. They can provide features of more conventional file systems, such as hierarchical directories or fixed, structured file formats.

Concurrent Prolog is assumed as the underlying machine language and the operating system implementation language. However, the ideas are also applicable to other parallel logic programming languages, such as PARLOG.

As a prerequisite to describing the file system, the Concurrent Prolog machine model is presented, as well as an overview of the entire operating system design.

TR-84-22 Recursion Theorems and Effective Domains, January 1984 Akira Kanda

Every acceptable numbering of an effective domain is complete. Hence every effective domain admits the 2nd recursion theorem of Er\u{s}ov [1]. On the other hand for every effective domain, the 1st recursion theorem holds. In this note, we establish that for effective domains, the 2nd recursion theorem is strictly more general than the 1st recursion theorem, a generalization of an important result in recursive function theory.

TR-84-23 Nystrom's Method vs. Fourier Type Methods for the Numerical Solution of Integral Equations, December 1984 Manfred R. Trummer

It is shown that Nystrom's method and Fourier type methods produce the same approximation to a solution of an integral equation at the collocation points for Nystrom's method. The quadrature rule for the numerical integration must have these collocation points as its abscissae.

TR-84-24 An Efficient Implementation of a Conformal Mapping Method Using the Szego Kernel, December 1984 Manfred R. Trummer

An implementation, based on iterative techniques, of a method to compute the Riemann mapping function is presented. The method has been recently introduced by N. Kerzman and the author; it expresses the Szego kernel as the solution of an integral equation of the second kind. It is shown how to treat symmetric regions. The algorithm is tested on five examples. The numerical results show that the method is competitive, with respect to accuracy, stability, and efficiency.

TR-84-25 Theory of Pairs, Part I: Provably Recursive Functions, January 1984 Paul J. Voda

TR-85-01 Recognizing VLSI Circuits from Mask Artwork, January 1985 Amir Alon and William S. Havens

The design of Very Large Scale Integrated (VLSI) circuits remains an art despite recent advances in Computer Assisted Design (CAD) techniques. Unfortunately, the sophistication of the design process has not kept pace with the VLSI hardware technology. Very expensive errors proliferate into fabrication despite powerful design rule checkers and circuit simulators. We have developed an alternative approach derived from research in knowledge representation and schema-based computer vision. The system implemented recognizes an abstract logic function description of the VLSI circuit from its mask layout artwork. Our technique reverses the design process, thereby recovering the logical function actually fabricated in the chip. No simulation is necessary and conceptually all logical design errors can be detected. The work is a direct application of schema labelling techniques which were developed for the Mapsee2 sketch map understanding system. This prototype system has been tested on a number of logical chip designs with correct results. Some results are presented.

TR-85-02 Recovering Shape \& Determining Attitude from Extended Gaussian Images, April 1985 James Joseph Little

This dissertation is concerned with surface representations which record surface properties as a function of surface orientation. The Extended Gaussian Image (EGI) of an object records the variation of surface area with surface orientation. When the object is polyhedral, the EGI takes the form of a set of vectors, one for each face, parallel to the outer surface normal of the face. The length of a vector is the area of the corresponding face.

The EGI uniquely represents convex objects and is easily derived from conventional models of an object. An iterative algorithm is described which converts an EGI into an object model in terms of coordinates of vertices, edges, and faces. The algorithm converges to a solution by constrained optimization. There are two aspects to describing shape for polyhedral objects: first, the way in which faces intersect each other, termed the adjacency structure, and, second, the location of the faces in space. The latter may change without altering the former, but not vice versa. The algorithm for shape recovery determines both elements of shape. The continuous support function is described in terms of the area function for curves, permitting a qualitative comparison of the smoothness of the two functions. The next section describes a method of curve segmentation based on extrema of the support function. Because the support function varies with translation, its behaviour under translation is studied, leading to proposals for several candidate centres of support. The study of these ideas suggests some interesting problems in computational geometry.

The EGI has been applied to determine object attitude, the rotation in 3-space bringing a sample object into correspondence with a prototype. The methods developed for the inversion problem can be applied to attitude determination. Experiments show attitude determination using the new method to be more robust than area matching methods. The method given here can be applied at lower resolution of orientation, so that it is possible to sample the space of attitudes more densely, leading to increased accuracy in attitude determination.

The discussion finally is broadened to include non-convex objects, where surface orientation is not unique. The generalizations of the EGI do not support shape reconstruction for arbitrary non-convex objects. However, surfaces of revolution do allow a natural generalization of the EGI. The topological structure of regions of constant sign of curvature is invariant under Euclidean motion and may be useful for recognition tasks.

TR-85-03 A Fast Divide \& Conquer Protocol for Contention Resolution on Broadcast Channels, April 1985 Karl Abrahamson

We describe a contention resolution protocol for an Ethernet-like broadcast channel. The protocol is based on tree algorithms, particularly that of Greenberg. We show how to obtain a simpler and more accurate estimate of the number of contending stations than Greenberg's method, and use the new estimation method to obtain an improved protocol.
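
For readers unfamiliar with tree algorithms, the Python sketch below simulates the elementary address-based splitting scheme on an idealized collision channel; it is only a textbook baseline, not Greenberg's estimation method or the improved protocol of the report, and the station model is an assumption.

\begin{verbatim}
def resolve(stations, bit=0):
    """Resolve a collision among stations (distinct integer addresses) by
    splitting on successive address bits.  Returns the number of slots used:
    each call consumes one slot (idle, success, or collision)."""
    if len(stations) <= 1:
        return 1                                   # idle or successful slot
    group0 = [s for s in stations if (s >> bit) & 1 == 0]
    group1 = [s for s in stations if (s >> bit) & 1 == 1]
    return 1 + resolve(group0, bit + 1) + resolve(group1, bit + 1)

n = 20
print(n, "contending stations resolved in", resolve(list(range(n))), "slots")
\end{verbatim}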

TR-85-04 LNTP --- An Efficient Transport Protocol for Local Area Networks, February 1985 Samuel T. Chanson, K. Ravindran and Stella Atkins

As interest in local area networks (LANs) grows, so does the demand for protocols that run on them. It is convenient, and a common practice, to adopt existing transport protocols that were designed for long haul networks (LHNs) for use in LANs. DARPA's Transmission Control Protocol/Internet Protocol (TCP/IP), for example, is available in 4.2 BSD UNIX for interface to Ethernet and other LANs. This is not desirable from a performance standpoint as the control structure is usually much more complex than is necessary, and LANs and LHNs have very different characteristics. Though there exist simpler transport protocols such as the User Datagram Protocol (UDP), most do not provide adequate flow control which, because of the much higher channel speed, is critical in the LAN environment.

This paper discusses the unique characteristics and requirements of LANs and describes a new transport level protocol (LNTP) specifically designed for use on LANs. The fundamental philosophy in the design of LNTP is simplicity. Any feature irrelevant to the LAN environment is not included. As well, LNTP uses a simple but effective deferred flow control mechanism which is activated only when the traffic intensity exceeds some value. This protocol has been implemented and runs under 4.2 BSD UNIX in place of TCP/IP. Detailed comparisons between LNTP, TCP and a few other protocols are given in the paper. Measurement data showed an improvement in network throughput rate of at least 30% over that of TCP. The problem of internet communication is also addressed.

TR-85-05 On Process Aliases in Distributed Kernel Design, April 1985 K. Ravindran and Samuel T. Chanson

As distributed computing systems become popular because of their functional, economic and reliability characteristics, a new class of problems has emerged. These problems are characterized by the fact that the resources being used by a process, as well as the system state, are distributed. The management of the processes and resources in such an environment presents a challenge that cannot be satisfactorily met by the traditional procedural-based methods, which often assume the existence of shared memories. This paper presents a multiagent structure consisting of corporate processes and their associated aliases as an efficient and systematic solution to this class of problems. The model and the kernel primitives necessary to implement the model, together with some design considerations, are outlined. An example described in terms of the model is also given.

TR-85-06 Performance Evaluation of the ARPANET Transmission Control Protocol in a Local Area Network Environment, July 1985 Samuel T. Chanson, K. Ravindran and Stella Atkins

The Transmission Control Protocol (TCP) of ARPANET is one of the most popular transport level communication protocols in use today. Originally designed to handle unreliable and hostile subnets in a long-haul network, TCP has been adopted by many local area networks (LANs) as well. It is, for example, available in 4.2 BSD UNIX for interface to Ethernet and several other LAN technologies. This is convenient but not desirable from a performance standpoint since the control structure is far more complex than is necessary for LANs.

This paper describes what we learned in measuring and tuning the performance of TCP in transferring large files between two hosts of different speeds over the Ethernet. Models are presented which allow the optimal buffer size and the flow control parameter to be determined. Based on observed traffic patterns and those reported elsewhere, we formulated guidelines for the design of transport protocols for a single LAN environment. We then present a new, much simpler LAN transport level protocol which replaces TCP with significant improvement in network throughput. Internet packets will use the full TCP. This is done at the gateway. Since the majority of the packets in a LAN are for local usage, this scheme improves the overall network throughput rate as well as the mean packet delay time.

TR-85-07A Hierarchical Arc Consistency: Exploiting Structured Domains in Constraint Satisfaction Problems, June 1985 Alan K. Mackworth, Jan A. Mulder and William S. Havens

Constraint satisfaction problems can be solved by network consistency algorithms that eliminate local inconsistencies before constructing global solutions. We describe a new algorithm that is useful when the variable domains can be structured hierarchically into recursive subsets with common properties and common relationships to subsets of the domain values for related variables. The algorithm, HAC, uses a technique known as hierarchical arc consistency. Its performance is analyzed theoretically and the conditions under which it is an improvement are outlined. The use of HAC in a program for understanding sketch maps, Mapsee3, is briefly discussed and experimental results consistent with the theory are reported.

TR-85-07 State Inconsistency Issues in Local Area Network-Based Distributed Kernels, August 1985 Samuel T. Chanson and K. Ravindran

State inconsistency is an inherent problem in distributed computing systems (DCS) because of the high degree of autonomy of the executing entities and the inherent delays and errors in communicating events among them. Thus any reliable DCS should provide means to recover from such errors. This paper discusses the state inconsistency issues and their solution techniques in local area network based distributed kernels. In particular, we deal with state inconsistencies due to i) failures of processes, machines and/or the network, ii) packet losses, iii) new machines joining or exiting from the network, and iv) processes or hosts migrating from one machine to another in the network. The solutions presented are mostly provided within the kernel itself and are transparent to the applications.

TR-85-08 Computation of Full Logic Programs Using One-Variable Environments, January 1985 Paul J. Voda

(Abstract not available on-line)

TR-85-09 A Generic and Portable Image Processing Environment for Computer Vision, January 1985 William S. Havens

Computer Vision research is flourishing although its growth has been hindered by the lack of good image processing systems. Existing systems are neither general nor portable despite various attempts at establishing standard image representations and software. Issues of hardware architecture and processing efficiency have frequently dominated system design. Often standard representations are primarily data formats for exchanging data among researchers working at affiliated laboratories using similar equipment. We argue that generality, portability and extensibility are the important criteria for developing image processing systems. The system described here, called {\em PIPS}, is based on these principles. An abstract image datatype is defined which is capable of representing a wide variety of imagery. The representation makes few assumptions about the spatial resolution, intensity resolution, or type of information contained in the image. A simple set of primitive operations is defined for direct and sequential access of images. These primitives are based on a bit stream access method that considers files and devices to be a long contiguous stream of bits that can be randomly read and written. Bit streams allow the word boundaries and file system architecture of the host computer system to be completely ignored and require only standard byte-wide direct-access I/O support. The standard image representation has encouraged the development of a library of portable generic image operators. These operators support interactive experimentation and make it easy to combine existing functions into new, more complex operations. Finally, graphics device interfaces are defined in order to isolate graphics hardware from image processing algorithms. The system has been implemented under the Unix operating system.
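The bit stream access method is easy to picture in code. The sketch below is an illustration only, not the PIPS primitive set; the function name and interface are assumed. It reads an arbitrary run of bits starting at an arbitrary bit offset of a byte-oriented file, which is all the method requires of the host I/O system.

\begin{verbatim}
# Sketch of a bit-stream read over an ordinary byte-oriented file: the
# image is treated as one long string of bits, so word boundaries and
# file-system block structure never appear in the interface.
# Illustrative only; this is not the PIPS primitive set.

def read_bits(f, bit_offset, nbits):
    """Return nbits starting at bit_offset as an integer (MSB first)."""
    first_byte, start_bit = divmod(bit_offset, 8)
    nbytes = (start_bit + nbits + 7) // 8
    f.seek(first_byte)
    chunk = int.from_bytes(f.read(nbytes), "big")
    surplus = nbytes * 8 - start_bit - nbits      # bits to drop on the right
    return (chunk >> surplus) & ((1 << nbits) - 1)

# e.g. the 101st pixel of a 12-bit-deep image stored as a packed bit stream:
# with open("image.raw", "rb") as f:
#     pixel = read_bits(f, 100 * 12, 12)
\end{verbatim}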

TR-85-10 Specification \& Initialization of a Logic Computer System, July 1985 Anthony J. Kusalik

A logic computer system consists of an inference machine and a compatible logic operating system. This paper describes prospective models for a logic computer system, and its hardware and software components. The language Concurrent Prolog serves as the single implementation, specification, and machine language. The computer system is represented as a logic programming goal {\em logic\_computer\_system}. Specification of the system corresponds to resolution of this goal. Clauses used to solve the goal --- and ensuing subgoals --- progressively refine the machine, operating system, and computer system designs. In addition, the accumulation of all clauses describing the logic operating system constitutes its implementation. Logic computer systems with vastly different fundamental characteristics can be concisely specified in this manner. Two contrasting examples are given and discussed. An important characteristic of both peripheral devices and the overall computer system, whether they are restartable or perpetual, is examined. As well, a method for operational initialization of the logic computer system is presented. The same clauses which incrementally specify characteristics of the computer system also describe the manner in which this initialization takes place.

TR-85-11 A New Basis Implementation for a Mixed Order Boundary Value ODE Solver, January 1985 Uri Ascher and G. Bader

The numerical approximation of mixed order systems of multipoint boundary value ordinary differential equations by collocation requires appropriate representation of the piecewise polynomial solutions. B-splines were originally implemented in the general purpose code COLSYS, but better alternatives exist. One promising alternative was proposed by Osborne and discussed by Ascher, Pruess and Russell.

In this paper we analyze the performance of the latter solution representation for cases not previously covered, where the mesh is not necessarily dense. This analysis and other considerations have led us to implement a basis replacement in COLSYS and we discuss some implementation details. Numerical results are given which demonstrate the improvement in performance of the code.

TR-85-12 A Functional Programming Language with Context Free Grammars as Data Types, August 1985 Violet R. Syrotiuk

(Abstract not available on-line)

TR-85-13 ``Coaxial Stereo \& Scale Based Matching'', September 1985 Itzhak Katz

The past decade has seen a growing interest in {\em computer stereo vision}: the recovery of the depth map of a scene from two-dimensional images. The main problem of computer stereo is in establishing correspondence between features or regions in two or more images. This is referred to as the {\em correspondence problem}.

One way to reduce the difficulty of the above problem is to constrain the {\em camera modeling}. Conventional stereo systems use two or more cameras, which are positioned in space at a uniform distance from the scene. These systems use {\em epipolar geometry} for their camera modeling, in order to reduce the search space to one dimension --- along {\em epipolar lines}.

Following Jain's approach, this thesis exploits a non-conventional camera modeling: the cameras are positioned in space one behind the other, such that their optical axes are collinear (hence the name {\em coaxial stereo}), and their distance apart is known. This approach complies with a simple case of epipolar geometry which further reduces the magnitude of the correspondence problem.

The displacement of the projection of a stationary point occurs along a {\em radial line}, and depends only on its spatial depth and the distance between the cameras. Thus, to simplify (significantly) the recovery of depth from {\em disparity}, complex logarithmic mapping is applied to the original images. The logarithmic part of the transformation introduces great distortion to the image's resolution. Therefore, to minimize this distortion, the mapping is applied to the features used in the matching process.

The search for matching features is conducted along radial lines. Following Mokhtarian and Mackworth's approach, a {\em scale-space} image is constructed for each radial line by smoothing its intensity profile with a {\em Gaussian filter} and finding {\em zero-crossings} in the second derivative at varying scale levels. Scale-space images of corresponding radial lines are then matched, based on a modified uniform cost algorithm. The matching algorithm is written with generality in mind. As a consequence, it can be easily adapted to other stereoscopic systems.

Some new results on the structure of scale-space images of one dimensional functions are presented.
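The scale-space construction for a single radial line described above can be sketched as follows; this is a generic rendering of the Gaussian smoothing and zero-crossing step with illustrative parameter choices, not the matching algorithm developed in the thesis.

\begin{verbatim}
# Sketch of a scale-space image for one radial intensity profile: smooth
# with Gaussians of increasing sigma and mark zero-crossings of the second
# derivative.  Parameter choices are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space(profile, sigmas=np.linspace(1.0, 16.0, 32)):
    profile = np.asarray(profile, dtype=float)
    rows = []
    for s in sigmas:
        d2 = gaussian_filter1d(profile, sigma=s, order=2)   # smoothed 2nd derivative
        zc = d2[:-1] * d2[1:] < 0                            # sign changes between samples
        rows.append(np.concatenate([zc, [False]]))
    return np.array(rows)        # rows indexed by scale, columns by position
\end{verbatim}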

TR-85-14 Using Discrimination Graphs to Represent Visual Knowledge, September 1985 Jan A. Mulder

This dissertation is concerned with the representation of visual knowledge. Image features often have many different local interpretations. As a result, visual interpretations are often ambiguous and hypothetical. In many model-based vision systems the problem of representing ambiguous and hypothetical interpretations is not very specifically addressed. Generally, specialization hierarchies are used to suppress a potential explosion in local interpretations. Such a solution has problems, as many local interpretations cannot be represented by a single hierarchy. As well, ambiguous and hypothetical interpretations tend to be represented along more than one knowledge representation dimension, limiting modularity in representation and control. In this dissertation a better solution is proposed.

Classes of objects which have local features with similar appearance in the image are represented by discrimination graphs. Such graphs are directed and acyclic. Their leaves represent classes of elementary objects. All other nodes represent abstract (and sometimes unnatural) classes of objects which intensionally represent the set of elementary object classes that descend from them. Rather than interpreting each image feature as an elementary object, we use the abstract class that represents the complete set of possible (elementary) objects. Following the principle of least commitment, the interpretation of each image feature is repeatedly forced into more restrictive classes as the context for the image feature is expanded, until the image no longer provides subclassification information.

This approach is called discrimination vision and it has several attractive features. First, hypothetical and ambiguous interpretations can be represented along one knowledge representation dimension. Second, the number of hypotheses represented for a single image feature can be kept small. Third, in an interpretation graph, competing hypotheses can be represented in the domain of a single variable. This often eliminates the need for restructuring the graph when a hypothesis is invalidated. Fourth, the problem of resolving ambiguity can be treated as a constraint satisfaction problem, which is a well researched problem in Computational Vision.

Our system has been implemented as Mapsee-3, a program for interpreting sketch maps. A hierarchical arc consistency algorithm has been used to deal with the inherently hierarchical discrimination graphs. Experimental data show that, for the domain implemented, this algorithm is more efficient than the standard arc consistency algorithm.

TR-85-15 Constraint Satisfaction, September 1985 Alan K. Mackworth

(Abstract not available on-line)

TR-85-16 Remote Interprocess Communication \& Its Performance in Team Shoshin, November 1985 Don Acton

Team Shoshin is an extension of Shoshin, a testbed for distributed software developed at the University of Waterloo. Part of the functionality of Shoshin can be attributed to its transparent treatment of remote interprocess communication. This is accomplished by having a special system process, the communications manager, handle the exchange of messages between machines. Shoshin's new hardware environment is significantly different from that for which it was originally designed. This thesis describes the problems the new hardware presented and how those problems were overcome. Performance measurements of the time required for both local and remote message exchanges are made and compared. Using this empirical data, a simple model of the remote message exchange protocol is developed to try to determine how to improve performance. The software and hardware enhancements made to Shoshin have resulted in an improvement in system interprocess communication performance by a factor of four. Finally, as a demonstration of Shoshin's interprocess communications facilities, a simple UNIX based file server is implemented.

TR-85-17 Typed Recursion Theorems --- out of print, November 1985 Akira Kanda

In recursion theory, recursion theorems are usually considered for effective functions over an effective universal set, like the set $N$ of all natural numbers or the set $RE$ of all recursively enumerable sets.

We observe that certain effective subsets of these effective universes have rich structure, and we study recursion theorems for these effective subsets.

TR-86-01 Retracts of Numerations --- out of print, January 1986 Akira Kanda

In this paper we study some important properties of numerations which can be passed on to their retracts. Furthermore we show a sufficient condition for a category $Ret(\alpha)$ of retracts of a numeration $\alpha$ and morphisms to be Cartesian closed, in terms of $\alpha$.

TR-86-02 Choices in, \& Limitations of, Logic Programming, January 1986 Paul J. Voda

(Abstract not available on-line)

TR-86-03 The Bit Complexity of Randomized Leader Election on a Ring, February 1986 Karl Abrahamson, Andrew Adler, Rachel Gelbart, Lisa Higham and David G. Kirkpatrick

The inherent bit complexity of leader election on asynchronous unidirectional rings of processors is examined under various assumptions about global knowledge of the ring. If processors have unique identities with a maximum of $m$ bits, then the expected number of communication bits sufficient to elect a leader with probability 1, on a ring of (unknown) size $n$ is $O(nm)$. If the ring size is known to within a multiple of 2, then the expected number of communication bits sufficient to elect a leader with probability 1 is $O(n \log n)$.

These upper bounds are complemented by lower bounds on the communication complexity of a related problem called solitude verification that reduces to leader election in $O(n)$ bits. If processors have unique identities chosen from a sufficiently large universe of size $s$, then the average, over all choices of identities, of the communication complexity of verifying solitude is $\Omega (n \log s)$ bits. When the ring size is known only approximately, then $\Omega (n \log n)$ bits are required for solitude verification. The lower bounds address the complexity of certifying solitude. This is modelled by the best case behaviour of non-deterministic solitude verification algorithms.

TR-86-04 A Distributed Kernel for Reliable Group Communication, February 1986 Samuel T. Chanson and K. Ravindran

Multicasting provides a convenient and efficient way to perform one-to-many process communication. This paper presents a kernel model which supports reliable group communication in a distributed computing environment. We introduce new semantic tools which capture the nondeterminism of the underlying low level events concisely and describe a process alias-based structuring technique for the kernel to handle the reliability problems that may arise during group communication. The scheme works by maintaining a close association between group messages and their corresponding reply messages. We also introduce a dynamic binding scheme which maps group id's to multicast addresses. The scheme allows the detection and subsequent recovery from inconsistencies in the binding information. Sample programs illustrating how the semantic tools may be used are also included.

TR-86-05 Host Identification in Reliable Distributed Kernels, February 1986 Samuel T. Chanson and K. Ravindran

Acquisition of a host identifier (id) is the first and foremost network level activity initiated by a machine joining a network. It allows the machine to assume an identity in the network and build higher levels of abstraction that may be integrated with those on other machines. In order that the system may work properly, host id's must satisfy certain properties such as uniqueness.

In recent years, distributed systems consisting of workstations connected by a high speed local area network have become popular. Hosts in such systems interact with one another more frequently and in a manner much more complex than is possible in the long haul network environment. The kernels in such systems are called upon to support various inter-host operations, requiring additional properties for host id's. This paper discusses the implications of these properties with respect to distributed kernel reliability. A new scheme to generate and manage host id's satisfying the various properties is also presented. The scheme is distributed and robust, and works even under network partitioning.

TR-86-06 Semi-Automatic Implementation of Network Protocols, February 1986 Daniel A. Ford

A compiler which achieves automatic implementation of network protocols by transforming specifications written in {\em FDT} into {\em C} programs is presented. A brief introduction to the fundamentals of {\em FDT}, a standard language developed by ISO/TC97/SC 16/WG 1 Subgroup B for specifying network protocols, is given. We then present an overview of the compiler and discuss the problem of {\em PASCAL} to {\em C} translation. Transformation of a {\em FDT} specification into code is explained and illustrated by two implementation examples. The first example illustrates the implementation strategy by tracing the processing of a simple protocol. The second example demonstrates the validity of using automatically generated implementations by showing how a communication path was established between two hosts using code generated for the alternating bit protocol.

TR-86-07 Implementation of Microcomputers in Elementary Schools: A Survey and Evaluation --- don't reprint, May 1986 Christine Chan

The objective of this thesis is to investigate the uses of computer aided learning (CAL) at the elementary level. Some recent publications on CAL are summarized and discussed. A questionnaire was used and interviews were conducted with elementary teachers in four chosen school districts in Vancouver and Toronto. From this field research, information was collected on teachers' perceptions of the use of CAL in the elementary classroom. This data is compared with observations presented in the relevant literature, and the comparison is discussed within Robert Taylor's framework of using the computer as tutor, tool, and tutee. Included are the results from the questionnaire. The thesis concludes with a discussion of the role of the teacher in the use of computers in the classroom, a flexible approach to adopting CAL, and possible areas for future research.

TR-86-08 Implementation of Team Shoshin: An Exercise in Porting and Multiprocess Structuring of the Kernel, March 1986 Huay-Yong Wang

Team Shoshin is an extension of Shoshin, a testbed for distributed software originally developed on the LSI 11/23s at the University of Waterloo. This thesis presents a description of the implementation of Team Shoshin on the Sun Workstation. Given the wide disparity in the underlying hardware, a major part of our initial development effort was to port Shoshin to its new hardware. The problems and design decisions faced in the porting effort, and how they were overcome, are discussed. The development of Team Shoshin has provided us with the opportunity to investigate the use of multiprocess structuring techniques at the kernel level. We describe the design and implementation of the proposed kernel multiprocess structure and the rationale behind it. The applicability of the proposed kernel multiprocess structure and its impact on operating system design are discussed, drawing from experience gained through actual implementation.

TR-86-09 Precomplete Negation \& Universal Quantification, April 1986 Paul J. Voda

This paper is concerned with negation in logic programs. We propose to extend negation as failure by a stronger form of negation called precomplete negation. In contrast to negation as failure, precomplete negation has a simple semantic characterization given in terms of computational theories which deliberately abandon the law of the excluded middle (and thus classical negation) in order to attain computational efficiency. The computation with precomplete negation proceeds with the direct computation of negated formulas even in the presence of free variables. Negated formulas are computed in a mode which is dual to the standard positive mode of logic computations. With negation as failure, the formulas with free variables must be delayed until the latter obtain values. Consequently, in situations where delayed formulas are never sufficiently instantiated, precomplete negation can find solutions unattainable with negation as failure. As a consequence of delaying, negation as failure cannot compute unbounded universal quantifiers whereas precomplete negation can. Instead of concentrating on the model-theoretical side of precomplete negation this paper deals with questions of complete computations and efficient implementations.

TR-86-11 Model \& Solution Strategy for Placement of Rectangular Blocks in the Euclidean Plane, May 1986 Amir Alon and Uri Ascher

This paper describes a nonlinear optimization model for the placement of rectangular blocks with some wire connections among them in the Euclidean plane, such that the total wire length is minimized. Such a placement algorithm is useful as a CAD tool for VLSI and PCB layout designs.

The mathematical model presented here ensures that the blocks will not overlap and minimizes the sum of the distances of the interconnections of the blocks with respect to their orientation as well as their position. We also present mechanisms for solving more restrictive placement problems, including one in which there is a set of equally spaced, discrete angles to be used in the placement. The mathematical model is based on the Lennard-Jones 6-12 potential equation, on a sine wave shaped penalty function, and on minimizing the sum of the squares of the Euclidean distances of the block interconnections. We also present some experimental results which show that good placements are achieved with our techniques.

TR-86-12 Shape Analysis, May 1986 Robert J. Woodham

(Abstract not available on-line)

TR-86-13 Structuring Reliable Interactions in Distributed Server Architectures, January 1986 K. Ravindran and Samuel T. Chanson

(Abstract not available on-line)

TR-86-14 Reasoning with Incomplete Information: Investigations of Non-Monotonic Reasoning, January 1986 David W. Etherington

(Abstract not available on-line)

TR-86-15 Productive Sets and Constructively Nonpartial-Recursive Functions, August 1986 Akira Kanda

(Abstract not available on-line)

TR-86-16 On the Visual Discrimination of Self-Similar Random Textures, September 1986 R. Rensink

This work investigates the ability of the human visual system to discriminate self-similar Gaussian random textures. The power spectra of such textures are similar to themselves when rescaled by some factor $h > 1$. As such, these textures provide a natural domain for testing the hypothesis that texture perception is based on a set of spatial-frequency channels characterized by filters of similar shape.

Some general properties of self-similar random textures are developed. In particular, the relations between their covariance functions and power spectra are established, and are used to show that many self-similar random textures are stochastic fractals. These relations also lead to a simple texture-generation algorithm that allows independent and orthogonal variation of several properties of interest.
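The texture-generation step mentioned above can be illustrated by the standard spectral-synthesis recipe for Gaussian random textures with a power-law spectrum; the exponent convention and parameters below are assumptions for illustration, not the report's own algorithm.

\begin{verbatim}
# Spectral synthesis of a self-similar Gaussian texture: shape white
# Gaussian noise with a power-law amplitude spectrum and invert the FFT.
# The exponent convention (power spectrum ~ f^-beta) is illustrative only.
import numpy as np

def self_similar_texture(n=256, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)               # amplitude ~ f^(-beta/2)
    amplitude[0, 0] = 0.0                        # zero-mean texture
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(amplitude * noise))
\end{verbatim}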

Several sets of psychophysical experiments are carried out to determine the statistical properties governing the discrimination of self-similar line textures. Results show that both the similarity parameter $H$ and the scaling ratio $h$ influence discriminability. These two quantities, however, are insufficient to completely characterize perceived texture.

The ability of the visual system to discriminate between various classes of self-similar random texture is analyzed using a simple multichannel model of texture perception. The empirical results are found to be compatible with the hypothesis that texture perception is mediated by the set of spatial-frequency channels putatively involved in form vision.

TR-86-17 Addition Requirements for Matrix \& Transposed Matrix Products, October 1986 M. Kaminski, David G. Kirkpatrick and N. H. Bshouty

Let $M$ be an $s \times t$ matrix and let $M^{T}$ be the transpose of $M$. Let {\bf x} and {\bf y} be $t$- and $s$-dimensional indeterminate column vectors, respectively. We show that any linear algorithm $A$ that computes $M${\bf x} has associated with it a natural dual linear algorithm denoted $A^{T}$ that computes $M^{T}${\bf y}. Furthermore, if $M$ has no zero rows or columns then the number of additions used by $A^{T}$ exceeds the number of additions used by $A$ by exactly $s-t$. In addition, a strong correspondence is established between linear algorithms that compute the product $M{\bf x}$ and bilinear algorithms that compute the bilinear form ${\bf y}^{T}M{\bf x}$.
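A minimal worked instance of the stated $s - t$ gap (our illustration, not an example from the report): take
\[
M = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad M{\bf x} = \begin{pmatrix} x_{1} \\ x_{1} \end{pmatrix}, \qquad M^{T}{\bf y} = y_{1} + y_{2},
\]
so that $s = 2$ and $t = 1$. Computing $M{\bf x}$ requires no additions, while the dual computation of $M^{T}{\bf y}$ requires exactly one, and the counts indeed differ by $s - t = 1$.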

TR-86-18 Conditioning of the Steady State Semiconductor Device Problem, January 1986 Uri Ascher, P. A. Markowich, C. Schmeiser, H. Steinruck and R. Weiss

When solving numerically the steady state semiconductor device problem using appropriate discretizations, extremely large condition numbers are often encountered for the linearized discrete device problem. These condition numbers are so large that, if they represented a sharp bound on the amplification of input errors, or even of roundoff errors, then the obtained numerical solution would be meaningless.

As it turns out, one major reason for these large condition numbers is poor row and column scaling, which is essentially harmless and/or can be fixed. But another reason could be an ill-conditioned device, which yields a true loss of significant digits in the numerical calculation.

In this paper we carry out a conditioning analysis for the steady state device problem. We consider various quasilinearizations as well as Gummel-type iterations and obtain stability bounds which may indeed allow ill-conditioning in general. These bounds are exponential in the potential variation, and are sharp e.g. for a thyristor. But for devices where each smooth subdomain has an Ohmic contact, e.g. a pn-diode, moderate bounds guaranteeing well-conditioning are obtained. Moreover, the analysis suggests how various row and column scalings should be applied in order for the measured condition numbers to correspond more realistically to the true loss of significant digits in the calculations.

TR-86-19 On Collocation Implementation for Singularly Perturbed Two-Point Problems, November 1986 Uri Ascher and Simon Jacobs

We consider the numerical solution of singularly perturbed two-point boundary value problems in ordinary differential equations. Implementation methods for general purpose solvers of first order linear systems are examined, with the basic difference scheme being collocation at Gaussian points. Adaptive mesh selection is based on localized error estimates at the collocation points. These methods are implemented as modifications to the successful collocation code COLSYS, which was originally designed for mildly stiff problems only. Efficient high order approximations to extremely stiff problems are obtained, and comparisons to COLSYS show that the modifications work relatively much better as the singular perturbation parameter gets small (i.e. the problem gets stiff), for both boundary layer and turning point problems.
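A standard model problem of the class treated here (included only for illustration) is
\[
\varepsilon\, y'' + a(x)\, y' + b(x)\, y = f(x), \qquad y(0) = \alpha, \quad y(1) = \beta, \qquad 0 < \varepsilon \ll 1,
\]
written as a first order system for the solver; its solution typically exhibits boundary layers near an endpoint, or interior layers at turning points where $a(x)$ changes sign, and these layers sharpen as $\varepsilon$ decreases.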

TR-86-20 A Semi-Automatic Approach to Protocol Implementation --- The ISO Class 2 Transport Protocol as an Example, November 1986 Allen C. Lau

Formal Description Techniques (FDTs) for specifying communication protocols, and the adopted FDT standards such as Estelle, have opened a new door to the possibility of automating the implementation of a complex communication protocol directly from its specification. After a brief overview of the Estelle FDT, we present the basic ideas and the problems encountered in developing a C-written Estelle compiler, which accepts an Estelle specification of protocols and produces a protocol implementation in C. The practicality of this tool --- the Estelle compiler --- has been examined via a semi-automatic implementation of the ISO class 2 Transport Protocol using the tool. A manual implementation in C/UNIX 4.2bsd of this protocol is also performed and compared with the semi-automatic implementation. We find that the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one. These advantages include correctness and modularity in protocol implementation code and reduction in implementation development time. In this thesis, we discuss our experience in using the semi-automatic approach to implement the ISO class 2 Transport Protocol.

TR-86-21 Handling Call Idempotency Issues in Replicated Distributed Programs, January 1986 K. Ravindran and Samuel T. Chanson

(Abstract not available on-line)

TR-86-22 Factors and Flows, November 1986 P. Hell and David G. Kirkpatrick

(Abstract not available on-line)

TR-86-23 An Environment Theory with Precomplete Negation over Pairs, November 1986 James H. Andrews

A formal semantics of Voda's Theory of Pairs is given which takes the natural-deduction form of Gilmore's first-order set theory. The complete proof theory corresponding to this semantics is given. Then, a logic programming system is described in the form of a computational proof theory for the Gilmore semantics. This system uses parallel disjunction and the technique of precomplete negation; these features are shown to make it more complete than conventional logic programming languages.

Finally, some alternative formulations are explored which would bring the logic programming system described closer to conventional systems. The semantic problems arising from these alternatives are explored.

Included in appendices are the proof of completeness of the complete proof theory, and the environment solution algorithm which is at the heart of precomplete negation over pairs.

TR-86-24 Compiling Functional Programming Constructs to a Logic Engine, November 1986 Harvey Abramson and Peter Ludemann

In this paper we consider how various constructs used in functional programming can be efficiently translated to code for a Prolog engine (designed by L\"{u}demann) similar in spirit but not identical to the Warren machine. It turns out that this Prolog engine, which contains a delay mechanism designed to permit coroutining and a sound implementation of negation, is sufficient to handle all common functional programming constructs; indeed, such a machine has about the same efficiency as a machine designed specifically for a functional programming language. This machine has been fully implemented.

TR-86-25 Efficiently Implementing Pure Prolog or: Not ``YAWAM'', November 1986 Peter Ludemann

High performance hardware and software implementations of Prolog are now being developed by many people, using the Warren Abstract Machine (or ``WAM'') [Warr83]. We have designed a somewhat different machine which supports a more powerful language than Prolog, featuring: \begin{itemize} \item efficiency similar to the WAM for sequential programs, \item tail recursion optimization (TRO) [Warr86], \item sound negation, \item pseudo-parallelism (co-routining) with full backtracking, \item dynamic optimization of clause order, \item efficient {\em if-then-else} (``shallow'' backtracking), \item simple, regular instruction set designed for easily optimized compilation, \item efficient memory utilization, \item integrated object-oriented virtual memory, \item predicates as first class objects. \end{itemize}

Our design gives the programmer more flexibility in designing programs than is provided by standard Prolog, yet it retains the efficiency of more limited designs.

TR-86-26 Probabilistic Solitude Detection on Rings of Known Size, December 1986 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Upper and lower bounds that match to within a constant factor are found for the expected bit complexity of a problem on asynchronous unidirectional rings of known size $n$, for algorithms that must reach a correct conclusion with probability at least $1 - \epsilon$ for some small preassigned $\epsilon \geq 0$. The problem is for a nonempty set of contenders to determine whether there is precisely one contender. If distributive termination is required, the expected bit complexity is \( \Theta (n \min ( \log \nu (n) + \sqrt{ \log \log (1 / \epsilon)}, \sqrt{ \log n}, \log \log (1 / \epsilon))) \), where $ \nu (n) $ is the least nondivisor of $n$. For nondistributive termination, $ \sqrt{\log \log (1 / \epsilon)} $ and $ \sqrt{\log n}$ are replaced by $\log \log \log (1/ \epsilon)$ and $\log \log n$ respectively. The lower bounds hold even for probabilistic algorithms that exhibit some nondeterministic features.

TR-87-01 Analytic Method for Radiometric Correction of Satellite Multispectral Scanner Data, January 1987 R. J. Woodham and M. H. Gray

The problem of radiometric correction of multispectral scanner data is posed as the problem of determining an intrinsic reflectance factor characteristic of the surface material being imaged and invariant to topography, position of the sun, atmosphere and position of the viewer. A scene radiance equation for remote sensing is derived based on an idealized physical model of image formation. The scene radiance equation is more complex for rugged terrain than for flat terrain since it must model slope, aspect and elevation dependent effects. Scene radiance is determined by the bidirectional reflectance distribution function (BRDF) of the surface material and the distribution of light sources. The sun is treated as a collimated source and the sky is treated as a uniform hemispherical source. The atmosphere is treated as an optically thin, horizontally uniform layer. The limits of this approach are reviewed using results obtained with Landsat MSS images and a digital terrain model (DTM) of a test site near St. Mary Lake, British Columbia, Canada.

New results, based on regression analysis, are described for the St. Mary Lake site. Previous work is extended to take advantage of explicit forest cover data and to consider numeric models of sky radiance. The calculation of sky irradiance now takes occlusion by adjacent terrain into account. The results for St. Mary Lake suggest that the cosine of the incident solar angle and elevation are the two most important correction terms. Skylight and inter-reflection from adjacent terrain, however, also are significant.
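As a rough indication of the kind of scene radiance equation involved, one common idealized Lambertian form (our assumption for illustration, not the report's equation) is
\[
L \;\approx\; \frac{\rho}{\pi}\left( E_{0}\, T\, \cos\theta_{i} \;+\; E_{\mathrm{sky}}\, V \right),
\]
where $\rho$ is the intrinsic reflectance factor to be recovered, $E_{0}$ the exoatmospheric solar irradiance, $T$ the atmospheric transmittance, $\theta_{i}$ the local solar incidence angle on the sloped terrain facet, $E_{\mathrm{sky}}$ the diffuse sky irradiance and $V$ the fraction of sky visible from the facet; the $\cos\theta_{i}$ and sky terms correspond to the correction terms found significant above.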

TR-87-02 A Schema \& Constraint-Based Representation to Understanding Natural Language, January 1987 Eliza Wing-Mun Kuttner

This thesis attempts to represent the syntax and semantics of English sentences using a schema and constraint-based approach. In this approach, syntactic and semantic knowledge represented by schemata is processed in parallel, using network consistency techniques and an augmented version of Earley's context-free parsing algorithm. A sentence's syntax and semantics are disambiguated incrementally as the interpretation proceeds left to right, word by word. Each word and recognized grammatical constituent provide additional information that helps to guide the interpretation process.

It is desirable to apply network consistency techniques and schema-based knowledge representations to natural language understanding, since the former have proven to be quite efficient and the latter provide modularity in representing knowledge. In addition, this approach is appealing because it can cope with ambiguities in an efficient manner. Multiple interpretations are retained if ambiguity exists as indicated by the words processed so far. However, incorrect interpretations are eliminated as soon as their inappropriateness is discovered. Thus, backtracking search, which is known to be inefficient, is avoided.

TR-87-03A Application-Driven Failure Semantics of Interprocess Communication in Distributed Programs, June 1988 K. Ravindran, Samuel T. Chanson and Ramakrishnam

Distributed systems are often modelled after the client-server paradigm where resources are managed by servers, and clients communicate with servers for operations on the resources. These client-server communications fall into two categories --- connection-oriented and connection-less, depending on whether the servers maintain state information about the clients or not. Additionally, each of the servers may itself be distributed, i.e., structured as a group of identical processes; these processes communicate with one another to manage shared resources (intra-server communications). Thus, the activities of a distributed program may be viewed as a sequence of client-server communications interspersed with intra-server communications.

In this paper, we identify suitable interprocess communication (IPC) abstractions for such communications --- remote procedure calls for client-server communications and {\em application-driven shared variables} (a shared memory-like abstraction) for intra-server group communication. We specify the properties of these abstractions to handle partial failures that may occur during program execution.

The issues of orphans and consistency arising from partial failures are examined, together with solution techniques that may be incorporated in the run-time system. Examples are given to illustrate the use of these abstractions as primitives for constructing distributed programs.

TR-87-03 Failure Transparency in Remote Procedure Calls, 1987 K. Ravindran and Samuel T. Chanson

Remote procedure call (RPC) is a communication abstraction widely used in distributed programs. The general premise entwined in existing approaches to handle machine and communication failures during RPC is that the applications which interface to the RPC layer cannot tolerate the failures. The premise manifests as a top level constraint on the failure recovery algorithms used in the RPC layer in these approaches. However, our premise is that applications can tolerate certain types of failures under certain situations. This may in turn relax the top level constraint on failure recovery algorithms and allow exploiting the inherent tolerance of applications to failures in a systematic way to simplify failure recovery. Motivated by this premise, the paper presents a model of RPC. The model reflects certain generic properties of the application layer that may be exploited by the RPC layer during failure recovery. Based on the model, a new technique of adopting orphans caused by failures is described. The technique minimizes the rollback which may be required in orphan killing techniques. Algorithmic details of the adoption technique are described, followed by a quantitative analysis. The model has been implemented as a prototype on a local area network. The simplicity and generality of the failure recovery render the RPC model useful in distributed systems, particularly those that are large and heterogeneous and hence have complex failure modes.

TR-87-04 Adequacy Criteria for Visual Knowledge Representation, January 1987 Alan K. Mackworth

(Abstract not available on-line)

TR-87-05 Stable Representation of Shape, February 1987 R. J. Woodham

(Abstract not available on-line)

TR-87-06 Semi-Automatic Implementation of Protocols Using an Estelle-C Compiler, March 1987 Son T. Vuong, Allen Chakming Lau and Robin Isaac Man-Hang Chan

In this paper, we present the basic ideas underlying an {\em Estelle-C} compiler, which accepts an {\em Estelle} protocol specification and produces a protocol implementation in C. We discuss our experience gained from using the semi-automatic approach to implement the ISO class 2 transport protocol. A manual implementation of the protocol is performed and compared with the semi-automatic implementation. We find the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one, including correctness and modularity in protocol implementation code, conformance to the specification and reduction in implementation time. Finally, we present our ongoing development of a new {\em Estelle-C} compiler.

TR-87-07 The Set Conceptual Model and the Domain Graph Method of Table Design, March 1987 Paul C. Gilmore

A purely set-based conceptual model SET is described along with a specification/query language DEFINE. SET is intended for modelling all phases of database design and data processing. The model for an enterprise is its set schema, consisting of all the sets that are declared for it.

The domain graph method of table design translates the set schema for an enterprise into a table schema in which each table is a defined user view declared as a set in DEFINE. But for one initial step, the method can be fully automated. The method makes no use of normalization.

Two kinds of integrity constraints are supported in the SET model. Although these constraints take a simple form in the set schema for an enterprise, they can translate into referential integrity constraints for the table schema of a complexity not previously considered.

The simplicity of the constraints supported in the SET model, together with the simplicity of the domain graph table design method, suggests that a conceptual view of an enterprise provided by the SET model is superior to the lower level data presentation view provided by the relational model.

TR-87-08 Probabilistic Solitude Detection I: Ring Size Known Approximately, March 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Matching upper and lower bounds for the bit complexity of a problem on asynchronous unidirectional rings are established, assuming that algorithms must reach a correct conclusion with probability $1 - \epsilon$, for some $\epsilon > 0$. Processors can have identities, but the identities are not necessarily distinct. The problem is that of a distinguished processor determining whether it is the only distinguished processor. The complexity depends on the processors' knowledge of the size $n$ of the ring. When no upper bound is known, only nondistributive termination is possible, and $\Theta (n \log (1 / \epsilon)) $ bits are necessary and sufficient. When only an upper bound $N$ is known, distributive termination is possible, but the complexity of achieving distributive termination is $ \Theta (n \sqrt{\log ( \frac{N}{n})} + n \log (\frac{1}{\epsilon}))$. When processors know that $(\frac{1}{2} + \rho)N \leq n \leq N$ for $\rho > 0$, then the bound drops to $\Theta (n \log \log (\frac{1}{\epsilon}) + n \log (\frac{1}{\rho}))$, for both distributive and nondistributive termination, for sufficiently large $N$.

TR-87-09 Justification and Applications of the Set Conceptual Model, April 1987 Paul C. Gilmore

In an earlier paper, the SET conceptual model was described, along with the domain graph method of table design. In this paper a justification for the method is provided, and a simple condition is shown to be sufficient for the satisfaction of the degree constraints of a set schema. The basis for the consistency of the model is also described. Applications of the SET model to the full range of data processing are suggested, as well as to the problems raised by incomplete information.

TR-87-10 A Foundation for the Entity Relationship Model: Why \& How, April 1987 Paul C. Gilmore

(Abstract not available on-line)

TR-87-11 Probabilistic Solitude Detection II: Ring Size Known Exactly, April 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Upper and lower bounds that match to within a constant factor are found for the expected bit complexity of a problem on asynchronous unidirectional rings of known size $n$, for algorithms that must reach a correct conclusion with probability at least $1 - \epsilon$ for some small preassigned $\epsilon \geq 0$. The problem is for a nonempty set of contenders to determine whether there is precisely one contender. If distributive termination is required, the expected bit complexity is \( \Theta (n \min ( \log \nu (n) + \sqrt{\log \log (\frac{1}{\epsilon})}, \sqrt{\log n}, \log \log (\frac{1}{\epsilon}))) \), where $\nu (n)$ is the least nondivisor of $n$. For nondistributive termination, $ \sqrt{\log \log (\frac{1}{\epsilon})}$ and $\sqrt{\log n}$ are replaced by $\log \log \log(\frac{1}{\epsilon})$ and $\log \log n$ respectively. The lower bounds hold even for probabilistic algorithms that exhibit some nondeterministic features.

TR-87-12 Establishing Order in Planar Subdivisions, May 1987 David G. Kirkpatrick

A planar subdivision is the partition of the plane induced by an embedded planar graph. A representation of such a subdivision is {\em ordered} if, for each vertex $\upsilon$ of the associated graph $G$, the (say) clockwise sequence of edges in the embedding of $G$ incident with $\upsilon$ appears explicitly.

The worst-case complexity of establishing order in a planar subdivision --- i.e. converting an unordered representation into an ordered one --- is shown to be \( \Theta (n + \log \lambda(G)) \), where $n$ is the size (number of vertices) of the underlying graph $G$ and $\lambda (G)$ is the number of topologically distinct embeddings of $G$ in the plane.

TR-87-13 Parallel Construction of Subdivision Hierarchies, May 1987 Norm Dadoun and David G. Kirkpatrick

A direct, simple and general parallel algorithm is described for the preprocessing of a planar subdivision for fast (sequential) search. In essence, the hierarchical subdivision search structure described by Kirkpatrick [K] is constructed in parallel. The method relies on an efficient parallel algorithm for constructing large independent sets in planar graphs. This is accomplished by a simple reduction to the same problem for lists.

Applications to the manipulation of convex polyhedra are described including an \( O(\log^{2}n \log^{*}n) \) parallel time algorithm for constructing the convex hull of $n$ points in $R^{3}$ and an \( O( \log n \log^{*}n) \) parallel time algorithm for detecting the separation of convex polyhedra.

TR-87-14 A Simple Optimal Parallel List Ranking Algorithm, May 1987 Karl Abrahamson, Norm Dadoun, David G. Kirkpatrick and Teresa Maria Przytycka

We describe a randomized parallel algorithm to solve list ranking in $O(\log n)$ expected time using $n/ \log n$ processors, where $n$ is the length of the list. The algorithm requires considerably less load rebalancing than previous algorithms.
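For context, the classical deterministic pointer-jumping scheme that such list ranking algorithms build on can be written as a sequential simulation of the synchronous parallel rounds; the sketch below is not the randomized $n / \log n$-processor algorithm of the report.

\begin{verbatim}
# Sequential simulation of pointer-jumping list ranking: every element
# adds its successor's rank and jumps its pointer, so distances double
# each round and O(log n) rounds suffice.  This is the classical scheme,
# not the randomized n/log n-processor algorithm described above.
import math

def list_rank(succ):
    """succ[i] is the successor of i in the list; the tail points to itself.
    Returns rank[i] = number of links from i to the tail."""
    n = len(succ)
    succ = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(n)))):        # O(log n) rounds
        rank = [rank[i] + rank[succ[i]] for i in range(n)]   # done "in parallel"
        succ = [succ[succ[i]] for i in range(n)]
    return rank

# list_rank([1, 2, 3, 3]) == [3, 2, 1, 0]
\end{verbatim}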

TR-87-15 A Parallel Algorithm for Finding Maximal Independent Sets in Planar Graphs, June 1987 Norm Dadoun and David G. Kirkpatrick

The problem of constructing parallel algorithms for finding Maximal Independent Sets in graphs has received considerable attention. In the case that the given graph is planar, the simple efficient parallel algorithm described here may be employed. The method relies on an efficient parallel algorithm for constructing large independent sets in bounded degree graphs. This is accomplished by a simple reduction to the same problem for lists.

Using a linear number of EREW processors, the algorithm identifies a maximal independent set in an arbitrary planar graph in O$(\log n \log^{*} n)$ parallel time. A randomized version of the algorithm runs in O$(\log n)$ expected parallel time.

TR-87-19 Time-Space Tradeoffs for Branching Programs Contrasted With Those for Straight-Line Programs, June 1987 Karl Abrahamson

This paper establishes time-space tradeoffs for some algebraic problems in the branching program model, including convolution of vectors, matrix multiplication, matrix inversion, computing the product of three matrices and computing $PAQ$ where $P$ and $Q$ are permutation matrices. While some of the results agree well with known results for straight-line programs, one of them (for matrix multiplication) surprisingly is stronger, and one (for computing $PAQ$) is necessarily weaker. Some of the tradeoffs are proved for expected time and space, where all inputs are equally likely.

TR-87-20 Randomized Function Evaluation on a Ring, May 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Let $R$ be a unidirectional asynchronous ring of $n$ processors each with a single input bit. Let $f$ be any cyclic non-constant function of $n$ boolean variables. Moran and Warmuth [8] prove that any deterministic algorithm for $R$ that evaluates $f$ has communication complexity $\Omega (n \log n)$ bits. They also construct a cyclic non-constant boolean function that can be evaluated in $O(n \log n)$ bits by a deterministic algorithm.

This contrasts with the following new results: \begin{enumerate} \item There exists a cyclic non-constant boolean function which can be evaluated with expected complexity $O (n \sqrt{\log n})$ bits by a randomized algorithm for $R$.

\item Any nondeterministic algorithm for $R$ which evaluates any cyclic non-constant function has communication complexity $\Omega (n \sqrt{\log n})$ bits. \end{enumerate}

TR-87-21 Knowledge Structuring \& Constraint Satisfaction: the Mapsee Approach, June 1987 Jan A. Mulder, Alan K. Mackworth and William S. Havens

This paper shows how to integrate constraint satisfaction techniques with schema-based representations for visual knowledge. This integration is discussed in a progression of three sketch map interpretation programs: Mapsee-1, Mapsee-2, and Mapsee-3. The programs are evaluated by the criteria of descriptive and procedural adequacy. The evaluation indicates that a schema-based representation used in combination with a hierarchical arc consistency algorithm constitutes a modular, efficient, and effective approach to the structured representation of visual knowledge. The schemata used in this representation are embedded in composition and specialization hierarchies. Specialization hierarchies are further expanded into discrimination graphs.

TR-87-24 The Logic of Depiction, June 1987 Raymond Reiter and Alan K. Mackworth

We propose a theory of depiction and interpretation that formalizes image domain knowledge, scene domain knowledge and the depiction mapping between the image and scene domains. This theory is illustrated by specifying some general knowledge about maps, geographic objects and their depiction relationships in first order logic with equality.

An interpretation of an image is defined to be a logical model of the general knowledge and a description of that image. For the simple map world we show how the task level specification may be refined to a provably correct implementation by invoking model preserving transformations on the logical representation. In addition, we sketch logical treatments for querying an image, incorporating contingent scene knowledge into the interpretation process, occlusion, ambiguous image descriptions, and composition.

This approach provides a formal framework for analyzing existing systems such as Mapsee, and for understanding the use of constraint satisfaction techniques. It also can be used to design and implement vision and graphics systems that are correct with respect to the task and algorithm levels.

TR-87-25 On the Modality of Convex Polygons, June 1987 Karl Abrahamson

Under two reasonable definitions of random convex polygons, the expected modality of a random convex polygon grows without bound as the number of vertices grows. This refutes a conjecture of Aggarwal and Melville.

TR-87-26 Formalizing Attribution by Default, July 1987 Paul C. Gilmore

Attribution by default occurs when, in the absence of information to the contrary, an entity is assumed to have a property. The provision of information to the contrary results in the withdrawal of the attribution. It has been argued that classical logic cannot formalize such commonsense reasoning and that the development of a nonmonotonic logic is necessary. Evidence is offered in this note that this is not the case for some important defaults.

TR-87-27 On Numerical Differential Algebraic Problems with Application to Semiconductor Device Simulation, July 1987 Uri Ascher

This paper considers questions of conditioning of and numerical methods for certain differential algebraic equations subject to initial and boundary conditions. The approach taken is that of separating ``differential'' and ``algebraic'' solution components, at least theoretically.

This yields conditioning results for differential algebraic boundary value problems in terms of ``pure'' differential problems, for which existing theory is well-developed. We carry the process out for problems with (global) index 1 or 2.

For semi-explicit boundary value problems of index 1 (where solution components are separated) we give a convergence theorem for a special class of collocation methods. For general index 1 problems we discuss advantages and disadvantages of certain symmetric difference schemes. For initial value problems with index 2 we discuss the use of BDF schemes, summarizing conditions for their successful and stable utilization.
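For reference, a semi-explicit index-1 problem of the kind referred to above has the standard form (notation ours, not the paper's)
\[
x'(t) = f\bigl(t, x(t), y(t)\bigr), \qquad 0 = g\bigl(t, x(t), y(t)\bigr),
\]
subject to boundary conditions on $x$, where index 1 means that $\partial g / \partial y$ is nonsingular along the solution, so that the algebraic variables $y$ can, at least locally, be eliminated in favour of the differential variables $x$.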

Finally, the present considerations and analysis are applied to two problems involving differential algebraic equations which arise in semiconductor device simulation.

TR-87-28 General Framework, Stability and Error Analysis for Numerical Stiff Boundary Value Methods, July 1987 Uri Ascher and R. M. Mattheij

This paper provides a general framework, called ``theoretical multiple shooting'', within which various numerical methods for stiff boundary value ordinary differential problems can be analyzed. A global stability and error analysis is given, allowing (as much as possible) the specificities of an actual numerical method to come in only locally. We demonstrate the use of our results for both one-sided and symmetric difference schemes. The class of problems treated includes some with internal (e.g. ``turning point'') layers.

TR-87-29 Update on Computational Vision: Shape Representation, Object Recognition \& Constraint Satisfaction --- replaced, see 89-12, July 1987 Alan K. Mackworth

(Abstract not available on-line)

TR-87-30 A Parallel Tree Contraction Algorithm, August 1987 Karl Abrahamson, Norm Dadoun, David G. Kirkpatrick and Teresa Maria Przytycka

A simple reduction from the tree contraction problem to the list ranking problem is presented. The reduction takes O$(\log n)$ time for a tree with $n$ nodes, using O$(n / \log n)$ EREW processors. Thus tree contraction can be done as efficiently as list ranking.

A broad class of parallel tree computations to which the tree contraction techniques apply is described. This subsumes earlier characterizations. Applications to the computation of certain properties of cographs are presented in some detail.

TR-87-31 Concepts \& Methods for Database Design, August 1987 Paul C. Gilmore

This report consists of drafts of chapters of a book prepared as course material for CSCI 404 at the University of British Columbia.

TR-87-32 Generalized LL(K) grammars for Concurrent Logic Programming Languages, October 1987 Harvey Abramson

We examine the compilation of LL(k) deterministic context-free grammars to Horn clause logic programs and the sequential and concurrent execution of these programs. In the sequential case, one is able to take advantage of the determinism to eliminate the generation of unnecessary backtracking information during execution of the compiled logic program. In the concurrent case, grammar rules are simply and directly translated to clauses of Concurrent Prolog, Parlog, or Guarded Horn Clause programs, allowing grammatical processing directly in the setting of committed or ``don't care'' nondeterminism. LL(k) grammar rules are generalized so that grammatical processing of streams involving derivations of infinite length is possible. A top-down analogue of Marcus's deterministic parser is a possible application of these generalized LL(k) grammars.
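The underlying grammar-rule-to-clause translation is the familiar difference-list encoding; the toy generator below (our illustration, omitting the LL(k) lookahead machinery that the paper exploits) shows the shape of the clauses produced.

\begin{verbatim}
# Toy generator for the difference-list translation of a context-free
# rule into a Horn clause.  Illustrative only; the compiler described
# above additionally exploits LL(k) lookahead to avoid backtracking.

def rule_to_clause(head, body):
    """rule_to_clause('s', ['np', 'vp']) -> 's(S0, S2) :- np(S0, S1), vp(S1, S2).'"""
    args = [f"S{i}" for i in range(len(body) + 1)]
    head_lit = f"{head}({args[0]}, {args[-1]})"
    if not body:
        return f"{head_lit}."
    goals = ", ".join(f"{nt}({args[i]}, {args[i + 1]})" for i, nt in enumerate(body))
    return f"{head_lit} :- {goals}."

print(rule_to_clause("s", ["np", "vp"]))   # s(S0, S2) :- np(S0, S1), vp(S1, S2).
\end{verbatim}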

TR-87-33 Towards an Expert System for Compiler Development, October 1987 Harvey Abramson

(Abstract not available on-line)

TR-87-34 The Design \& Control of Visual Routines for the Computation of Simple Geometric Properties \& Relations, October 1987 Marc H. J. Romanycia

The present work is based on the Visual Routine theory of Shimon Ullman. This theory holds that efficient visual perception is managed by first applying spatially parallel methods to an initial input image in order to construct the basic representation-maps of features within the image. Then, this phase is followed by the application of serial methods --- visual routines --- which are applied to the most salient items in these and other subsequently created maps.

Recent work in the visual routine tradition is reviewed, as well as relevant psychological work on preattentive and attentive vision. An analysis is made of the problem of devising a visual routine language for computing geometric properties and relations. The most useful basic representations to compute directly from a world of 2-D geometric shapes are determined. An argument is made for the case that an experimental program is required to establish which basic operations and which methods for controlling them will lead to the efficient computation of geometric properties and relations.

A description is given of an implemented computer system which can correctly compute, in images of simple 2-D geometric shapes, the properties {\em vertical}, {\em horizontal}, {\em closed}, and {\em convex}, and the relations {\em inside}, {\em outside}, {\em touching}, {\em centred-in}, {\em connected}, {\em parallel}, and {\em being-part-of}. The visual routines which compute these, the basic operations out of which the visual routines are composed, and the important logic which controls the goal-directed application of the routines to the image are all described in detail. The entire system is embedded in a Question-and-Answer system which is capable of answering questions of an image, such as ``Find all the squares inside triangles'' or ``Find all the vertical bars outside of closed convex shapes.'' By asking many such questions about various test images, the effectiveness of the visual routines and their controlling logic is demonstrated.

TR-87-35 A Default Logic Approach to the Derivation of Natural Language Presuppositions, October 1987 Robert E. Mercer

(Abstract not available on-line)

TR-87-36 An Estelle-C Compiler for Automatic Protocol Implementation, November 1987 Robin Isaac Man-Hang Chan

Over the past few years, much experience has been gained in semi-automatic protocol implementation using an existing Estelle-C compiler developed at the University of British Columbia. However, with the continual evolution of the Estelle language, that compiler is now obsolete. The present study found the syntactic and semantic differences between the Estelle language as implemented by the existing compiler and that specified in the latest ISO document substantial enough to warrant the construction of a new Estelle-C compiler. The result is a new compiler which translates Estelle as defined in the second version of the ISO Draft Proposal 9074 into the programming language C. The new Estelle-C compiler addresses issues such as dynamic reconfiguration of modules and maintenance of priority relationships among nested modules. A run-time environment capable of supporting the new Estelle features is also presented. The implementation strategy used in the new Estelle-C compiler is illustrated by using the alternating bit protocol found in the ISO Draft Proposal 9074 document.

TR-87-37 The Renormalized Curvature Scale Space and the Evolution Properties of Planar Curves, November 1987 Alan K. Mackworth and Farzin Mokhtarian

The Curvature Scale Space Image of a planar curve is computed by convolving a path-based parametric representation of the curve with a Gaussian function of variance $\sigma^{2}$, extracting the zeroes of curvature of the convolved curves and combining them in a scale space representation of the curve. For any given curve $\Gamma$, the process of generating the ordered sequence of curves $\{ \Gamma_{\sigma} \mid \sigma \geq 0 \}$ is known as the evolution of $\Gamma$.

It is shown that the normalized arc length parameter of a curve is, in general, not the normalized arc length parameter of a convolved version of that curve. A new method of computing the curvature scale space image reparametrizes each convolved curve by its normalized arc length parameter. Zeroes of curvature are then expressed in that new parametrization. The result is the Renormalized Curvature Scale Space Image and is more suitable for matching curves similar in shape.

Scaling properties of planar curves and the curvature scale space image are also investigated. It is shown that no new curvature zero-crossings are created at the higher scales of the curvature scale space image of a planar curve in $C_{2}$ if the curve remains in $C_{2}$ during evolution. Several positive and negative results are presented on the preservation of various properties of planar curves under the evolution process. Among these results is the fact that every polynomially represented planar curve in $C_{2}$ intersects itself just before forming a cusp point during evolution.
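
For concreteness, the following is a minimal numerical sketch (not the authors' implementation) of the path-based computation underlying such scale space images: smooth a closed curve with Gaussians of increasing width and record where the curvature changes sign. It assumes numpy and scipy.ndimage are available; the example curve is illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Smooth a closed curve (x(u), y(u)) with a Gaussian of width sigma and
# return the parameter indices where the curvature changes sign.
def curvature_zero_crossings(x, y, sigma):
    xs = gaussian_filter1d(x, sigma, mode='wrap')   # 'wrap': closed curve
    ys = gaussian_filter1d(y, sigma, mode='wrap')
    xu, yu = np.gradient(xs), np.gradient(ys)
    xuu, yuu = np.gradient(xu), np.gradient(yu)
    kappa = (xu * yuu - yu * xuu) / (xu**2 + yu**2) ** 1.5
    return np.where(np.sign(kappa[:-1]) != np.sign(kappa[1:]))[0]

# Example: a noisy ellipse examined at a few scales.
u = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x = 2.0 * np.cos(u) + 0.05 * np.random.randn(u.size)
y = np.sin(u) + 0.05 * np.random.randn(u.size)
for sigma in (1.0, 4.0, 16.0):
    print(sigma, len(curvature_zero_crossings(x, y, sigma)))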

TR-87-38 Multi-Scale Description of Space Curves and Three-Dimensional Objects, November 1987 Farzin Mokhtarian

This paper addresses the problem of representing the shape of three-dimensional or space curves. This problem is important since space curves can be used to model the shape of many three-dimensional objects effectively and economically. A number of shape representation methods that operate on two-dimensional objects and can be extended to apply to space curves are reviewed briefly and their shortcomings discussed.

Next, the concepts of curvature and torsion of a space curve are explained. The curvature and torsion functions of a space curve specify it uniquely up to rotation and translation. Arc-length parametrization followed by Gaussian convolution is used to compute curvature and torsion on a space curve at varying levels of detail. Larger values of the scale parameter of the Gaussian bring out more basic features of the curve. Information about the curvature and torsion of the curve over a continuum of scales are combined to produce the curvature and torsion scale space images of the curve. These images are essentially invariant under rotation, uniform scaling and translation of the curve and are used as a representation for it. Using this representation, a space curve can be successfully matched to another one of similar shape.

The application of this technique to a common three-dimensional object is demonstrated. Finally, the proposed representation is evaluated according to several criteria that any shape representation method should ideally satisfy. It is shown that the curvature and torsion scale space representation satisfies those criteria better than other possible candidate methods.

TR-87-39 Advanced Topics in Automated Deduction, November 1987 Wolfgang Bibel

(Abstract not available on-line)

TR-87-40 Constraint Satisfaction from a Deductive Viewpoint, December 1987 Wolfgang Bibel

This paper reports the result of testing the author's proof techniques on the class of constraint satisfaction problems (CSP). This experiment has been successful in the sense that a completely general proof technique turns out to behave well also for this special class of problems, which has itself received considerable attention in the community. At the same time, the paper thus presents a new (deductive) mechanism for solving constraint satisfaction problems that is of interest in its own right. This mechanism may be characterized as a bottom-up, lazy-evaluation technique which reduces any such problem to the problem of evaluating a database expression typically involving a number of joins. A way of computing such an expression is proposed.

TR-88-01 Parallel Recognition of Complement Reducible Graphs and Cotree Construction, January 1988 David G. Kirkpatrick and Teresa Maria Przytycka

A simple parallel algorithm is presented for constructing parse tree representations of graphs in a rich family known as cographs. From the parse tree representation of a cograph it is possible to compute in an efficient way many properties which are difficult for general graphs. The presented algorithm runs in O$(\log^{2} n)$ parallel time using O$(n^{3} / \log^{2} n)$ processors on a CREW PRAM.

TR-88-02 On Lower Bound for Short Noncontractible Cycles in Embedded Graphs, January 1988 Teresa Maria Przytycka and J. H. Przytycki

Let $C_{g,n}$ be a constant such that for each triangulation of a surface of genus $g$ with a graph of $n$ vertices there exists a noncontractible cycle of length at most $C_{g,n}$. Hutchinson in [H87] conjectures that $C_{g,n} = O(\sqrt{n/g})$ for $g > 0$. In this paper, we present a construction of a triangulation which disproves this conjecture.

TR-88-03 Protocol Specification and Verification using the Significant Event Temporal Logic, January 1988 George K. Tsiknis and Son T. Vuong

In this report we discuss the Significant Event Temporal Logic specification technique (SIGETL), a method for protocol specification and verification using a temporal logic axiomatic system. This technique is based on the idea that the state and the behaviour of a module can be completely described by the sequence of significant events in which the module has been involved in communicating with its environment up to the present time. The behaviour of a module at any time is specified by simple temporal logic formulas, called transition axioms or properties of the module. Both the safety and liveness properties of a module, as well as the global properties of a system, can be proven from its axioms using the axiomatic temporal logic system. As an example, we apply SIGETL to specify and verify a simple data transfer protocol. The general correspondence between SIGETL and ESTELLE FDT is also discussed.

TR-88-04 The Inconsistency of Belief Revision System, January 1988 George K. Tsiknis

In 1987 Chern Seet developed a belief revision algorithm and a deduction system by which, he claims, default reasoning can be accomplished. We show that his deduction system is inconsistent. Some obvious corrections are suggested but the resulting system is still inconsistent. Its behaviour is similar to that of a closed-world-assumption reasoner. We examine a case in which the modified system behaves like predicate circumscription and also has reasonable performance. Finally, we discuss some problems pertaining to Seet's revision strategy. A similar revision algorithm for normal default logic is outlined and the use of the SET model for handling exceptions --- and default reasoning --- is briefly discussed.

TR-88-05 The Connection Method for Non-Monotonic \& Autoepistemic Logic, January 1988 George K. Tsiknis

In this paper, first we present a connection method for non-monotonic logic, together with its soundness and completeness proof. Then, we extend this to a proof procedure for autoepistemic logic. In the last section, we also discuss some improvements on the method through structure sharing techniques.

TR-88-06 On the Comparative Complexity of Resolution and the Connection Method, February 1988 Wolfgang Bibel

Quadratic proofs of the pigeonhole formulas are presented using the connection method proof techniques. For this class of formulas, exponential lower bounds are known for the length of resolution refutations. This indicates a significant difference in the power of these two proof techniques. While short proofs of these formulas are known using extended resolution, this particular proof technique, in contrast to both the connection method and resolution, seems unsuitable for actual proof search.
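
For reference, a small Python sketch (illustrative, not from the paper) of the standard CNF encoding of the pigeonhole formula with $n+1$ pigeons and $n$ holes, the family for which exponential resolution lower bounds are known:

from itertools import combinations

# Clauses of PHP(n): n+1 pigeons, n holes.  Variable var(i, j) means
# "pigeon i sits in hole j"; the formula is unsatisfiable.
def pigeonhole_clauses(n):
    def var(i, j):                    # 1-based, DIMACS-style numbering
        return i * n + j + 1
    clauses = [[var(i, j) for j in range(n)] for i in range(n + 1)]
    for j in range(n):                # no two pigeons share a hole
        for i, k in combinations(range(n + 1), 2):
            clauses.append([-var(i, j), -var(k, j)])
    return clauses

print(len(pigeonhole_clauses(3)))     # 4 pigeon clauses + 18 hole-conflict clauses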

TR-88-07 The Technological Change of Reality Opportunities and Dangers, March 1988 Wolfgang Bibel

This essay discusses the trade-off between the opportunities and the dangers involved in technological change mainly from the perspective of Artificial Intelligence technology. In order to lay the foundation for the discussion, the symptoms of general unease which are associated with current technological progress, the concept of reality, and the field of Artificial Intelligence are very briefly discussed. In the main body of the essay, the dangers are contrasted with the potential benefits of such high technology. Besides discussing more well known negative and positive aspects we elaborate on the disadvantages of executive systems and the advantages of legislative systems. It is argued that only the latter might enable the re-establishment of the feedback-mechanism which proved so successful in earlier phases of evolution.

TR-88-08 Evolution Properties of Space Curves, February 1988 Farzin Mokhtarian

The Curvature Scale Space and Torsion Scale Space Images of a space curve are a multi-scale representation for that curve which satisfies several criteria for shape representation and is therefore a preferred representation method for space curves.

The torsion scale space image of a space curve is computed by convolving a path-based parametric representation of the curve with Gaussian functions of varying widths, extracting the torsion zero-crossings of the convolved curves and combining them in a torsion scale space image of the curve. The curvature scale space image of the curve is computed similarly but curvature level-crossings are extracted instead. An evolved version of a space curve $\Gamma $ is obtained by convolving a parametric representation of that curve with a Gaussian function of variance $\sigma ^{2}$ and denoted by $\Gamma _{\sigma }$. The process of generating the ordered sequence of curves $ \{ \Gamma _{\sigma }\mid \sigma \geq 0\} $ is referred to as the {\it evolution} of $\Gamma $.

A number of evolution properties of space curves are investigated in this paper. It is shown that the evolution of space curves is invariant under rotation, uniform scaling and translation of those curves. This property makes the representation suitable for recognition purposes. It is also shown that properties such as connectedness and closedness of a space curve are preserved during evolution of the curve and that the center of mass of a space curve remains the same as the curve evolves. Among other results is the fact that a space curve contained inside a simple, convex object, remains inside that object during evolution.

The two main theorems of the paper examine a space curve during its evolution just before and just after the formation of a cusp point. It is shown that strong constraints on the shape of the curve in the neighborhood of the cusp point exist just before and just after the formation of that point.

TR-88-09 Fingerprint Theorems for Curvature and Torsion Zero-Crossings, April 1988 Farzin Mokhtarian

The {\em scale space image} of a signal $f(x)$ is constructed by extracting the zero-crossings of the second derivative of a Gaussian of variable size $\sigma$ convolved with the signal, and recording them in the $x-\sigma $ map.

Likewise, the {\em curvature scale space image} of a planar curve is computed by extracting the curvature zero-crossings of a parametric representation of the curve convolved with a Gaussian of variable size. The curvature level-crossings and torsion zero-crossings are used to compute the {\em curvature} and {\em torsion scale space images} of a space curve respectively.

It has been shown [Yuille and Poggio 1983] that the scale space image of a signal determines that signal uniquely up to constant scaling and a harmonic function. This paper presents a generalization of the proof given in [Yuille and Poggio 1983]. It is shown that the curvature scale space image of a planar curve determines the curve uniquely, up to constant scaling and a rigid motion. Furthermore, it is shown that the torsion scale space of a space curve determines the function $\tau (u) \kappa ^{2} (u)$ modulo a scale factor, where $\tau (u)$ and $\kappa (u)$ represent the torsion and curvature functions of the curve respectively. Our results show that a 1-D signal can be reconstructed using only one point from its scale space image. This is an improvement of the result obtained by Yuille and Poggio.

The proofs are constructive and assume that the parametrizations of the curves can be represented by polynomials of finite order. The scale maps of planar and space curves have been proposed as representations for those curves [Mokhtarian and Mackworth 1986, Mokhtarian 1988]. The result that such maps determine the curves they are computed from uniquely shows that they satisfy an important criterion for any shape representation technique.

TR-88-10 Solving Diagnostic Problems using Extended Truth Maintenance Systems, July 1988 Gregory M. Provan

We describe the use of efficient, extended Truth Maintenance Systems (TMSs) for diagnosis. We show that for complicated diagnostic problems, existing ATMSs need some method of ranking competing explanations and pruning the search space in order to maintain computational efficiency. We describe a specific implementation of an ATMS for efficient problem solving that incorporates the full Dempster Shafer theory in a semantically clear and efficient manner. Such an extension allows the Problem Solver to rank competing solutions and explore only the ``most likely'' solutions. We also describe several efficient algorithms for computing both exact and approximate values for Dempster Shafer belief functions.

TR-88-11 The Computational Complexity of Truth Maintenance Systems, July 1988 Gregory M. Provan

We define the complexity of the problems that the Assumption-Based TMS (ATMS) solves. Defining the conjunction of the set of input clauses as a Boolean expression, it is shown that an ATMS solves two distinct problems: (1) generating a set of minimal supports (or label) for each database literal; and (2) computing a minimal expression (or set of maximal contexts) from the set of minimal supports. The complexity of determining the set of minimal supports for a set $x$ of literals with respect to a set $X$ of clauses is exponential in the number of assumptions for almost all Boolean expressions, even though a satisfying assignment for the literals occurring in $X$ can be found in linear time. Generating a minimal expression is an NP-hard problem. The ATMS algorithms can be used with many control mechanisms to improve their performance for both problems; however, we argue that manipulating the label set (which is exponential in the number of assumptions) requires considerable computational overhead (in terms of space and time), and that it will be infeasible to solve moderately large problems without problem restructuring.

TR-88-12 On Symmetric Schemes and Differential-Algebraic Equations., June 1988 Uri Ascher

An example is given which demonstrates a potential risk in using symmetric difference schemes for initial value differential-algebraic equations (DAEs) or for very stiff ODEs. The basic difficulty is that the stability of the scheme is controlled by the stability of an auxiliary (ghost) ODE problem which is not necessarily stable even when the given problem is.

The stability of symmetric schemes is better understood in the context of boundary value problems. In this context, such schemes are more naturally applied as well. For initial value problems, better alternatives may exist. A computational algorithm is proposed for boundary value index-1 DAEs.

TR-88-13 Using Multigrid for Semiconductor Device Simulation in 1-D, September 1988 Uri Ascher and Stephen E. Adams

This paper examines the application of the multigrid method to the steady state semiconductor equations in one dimension. A number of attempts reported in the literature have yielded only limited success in applying multigrid algorithms to this sensitive problem, suggesting that a more careful look in relatively simple circumstances is worthwhile.

Several modifications to the basic multigrid algorithm are evaluated based on their performance for a one-dimensional model problem. It was found that use of a symmetric Gauss-Seidel relaxation scheme, a special prolongation based on using the difference operator, and local relaxation sweeps near junctions, produced a robust and efficient code. This modified algorithm is also successful for a wide variety of cases, and its performance compares favourably with other multigrid algorithms that have been applied to the semiconductor equations.

TR-88-14 Spatial and Spectral Description of Stationary Gaussian Fractals, July 1988 R. Rensink

A general treatment of stationary Gaussian fractals is presented. Relations are established between the fractal properties of an $n$-dimensional random field and the form of its correlation function and power spectrum. These relations are used to show that the second-order parameter $H$ commonly used to describe fractal texture (e.g., in [4][5]) is insufficient to characterize all fractal aspects of the field. A larger set of measures --- based on the power spectrum --- is shown to provide a more complete description of fractal texture.

Several interesting types of ``non-fractal'' self-similarity are also developed. These include a generalization of the fractional Gaussian noises of Mandelbrot and van Ness [6], as well as a form of ``locally'' self-similar behaviour. It is shown that these have close relations to the Gaussian fractals, and consequently, that textures containing these types of self-similarity can be described by the same set of measures as used for fractal texture.
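
As a concrete illustration of the spectral point of view (a sketch under stated assumptions, not taken from the report), the following Python fragment synthesizes a one-dimensional signal whose power spectrum follows the power law $S(f) \sim f^{-(2H+1)}$ often associated with the parameter $H$; all names are illustrative.

import numpy as np

# Random-phase spectral synthesis of a signal with power-law spectrum
# S(f) ~ f^(-(2H+1)).  H alone fixes only the spectral slope, not the
# further spectral measures a fuller description would use.
def power_law_signal(n, H, rng=np.random.default_rng(0)):
    freqs = np.fft.rfftfreq(n)                        # 0, 1/n, ..., 1/2
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(2 * H + 1) / 2)         # sqrt of the spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    return np.fft.irfft(amp * np.exp(1j * phases), n)

x = power_law_signal(4096, H=0.7)
print(x.mean(), x.std())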

TR-88-15 Probabilistic Evaluation of Common Functions On Rings of Known Size, June 1988 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

In [5], Duris and Galil prove an $ \Omega (n \log n)$ lower bound for the average number of messages required by any deterministic algorithm which elects a leader on an asynchronous ring with distinct identifiers where the ring size $n$ is known and is a power of 2. Their results imply the same lower bound for the expected complexity of any randomized leader election algorithm for an anonymous ring of known size $2^{k}$. If their new techniques are used to achieve the randomized result directly, the resulting proof is significantly simpler than the original deterministic one. This simplicity facilitates extension of the result in two directions; namely, for arbitrary known ring size, and for algorithms that permit error with probability at most $\epsilon $. Specifically, we prove that the expected message complexity of any probabilistic algorithm that selects a leader with probability at least $1 - \epsilon $ on an anonymous ring of known size $n$ is $\Omega (n \min(\log n, \log \log(1 / \epsilon ))) $. A number of common function evaluation problems (including AND, OR, PARITY, and SUM) on rings of known size are shown to inherit this complexity bound, which is tight to within a constant factor.

TR-88-16 An Incremental Method for Generating Prime Implicants/Implicates, July 1988 Alex Kean and George K. Tsiknis

Given the recent investigation of clause management systems (CMSs) for Artificial Intelligence applications, there is an urgent need for an efficient incremental method for generating prime implicants. Given a set of clauses $\cal F$, a set of prime implicants $\Pi$ of $\cal F$ and a clause $C$, the problem can be formulated as finding the set of prime implicants for $\Pi \bigcup \{ C \} $. Intuitively, the fact that the implicants are prime implies that any effort to generate prime implicants from a set of prime implicants will yield no prime implicants other than themselves. In this paper, we exploit the properties of prime implicants and propose an incremental method for generating prime implicants from a set of existing prime implicants plus a new clause. The correctness proof and complexity analysis of the incremental method are presented, and the intricacy of subsumption in the incremental method is also examined. The role of prime implicants in the CMS is also discussed.
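
For orientation, a minimal (non-incremental) Python sketch of the computation being improved upon: closing a clause set under resolution while discarding subsumed and tautological clauses yields its prime implicates (a dual, consensus-based computation yields prime implicants). This brute-force baseline is not the incremental method of the paper; the clause encoding is illustrative.

from itertools import combinations

# Clauses are frozensets of integer literals; -v is the negation of v.
def resolve(c1, c2):
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-x in r for x in r):      # drop tautologies
                out.append(frozenset(r))
    return out

def prime_implicates(clauses):
    cls = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        resolvents = set()
        for c1, c2 in combinations(cls, 2):
            resolvents.update(resolve(c1, c2))
        for r in resolvents:
            if not any(d <= r for d in cls):     # keep only unsubsumed resolvents
                cls.add(r)
                changed = True
        cls = {c for c in cls if not any(d < c for d in cls)}   # prune subsumed clauses
    return cls

# (a or b), (not a or c)  -->  prime implicates: a or b, not a or c, b or c
print(prime_implicates([{1, 2}, {-1, 3}]))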

TR-88-17 A Logical Framework for Depiction and Image Interpretation, August 1988 Raymond Reiter and Alan K. Mackworth

We propose a logical framework for depiction and interpretation that formalizes image domain knowledge, scene domain knowledge and the depiction mapping between the image and scene domains. This framework requires three sets of axioms: image axioms, scene axioms and depiction axioms. An interpretation of an image is defined to be a logical model of these axioms.

The approach is illustrated by a case study, a reconstruction in first order logic of a simplified map understanding program, Mapsee. The reconstruction starts with a description of the map and a specification of general knowledge of maps, geographic objects and their depiction relationships. For the simple map world we show how the task level specification may be refined to a provably correct implementation by applying model-preserving transformations to the initial logical representation to produce a set of propositional formulas. The implementation may use known constraint satisfaction techniques to find the set of models of these propositional formulas. In addition, we sketch preliminary logical treatments for image queries, contingent scene knowledge, ambiguity in image description, occlusion, complex objects, preferred interpretations and image synthesis.

This approach provides a formal framework for analyzing and going beyond existing systems such as Mapsee, and for understanding the use of constraint satisfaction techniques. It can be used as a foundation for the specification, design and implementation of vision and graphics systems that are correct with respect to the task and algorithm levels.

TR-88-18 A Principle-Based System for Natural Language Analysis and Translation, August 1988 Matthew Walter Crocker

Traditional views of grammatical theory hold that languages are characterized by sets of constructions. This approach entails the enumeration of all possible constructions for each language being described. Current theories of transformational generative grammar have established an alternative position. Specifically, Chomsky's Government-Binding theory proposes a system of principles which are common to human language. Such a theory is referred to as a ``Universal Grammar'' (UG). Associated with the principles of grammar are parameters of variation which account for the diversity of human languages. The grammar for a particular language is known as a ``Core Grammar'', and is characterized by an appropriately parametrized instance of UG. Despite these advances in linguistic theory, construction-based approaches have remained the status quo within the field of natural language processing. This thesis investigates the possibility of developing a principle-based system which reflects the modular nature of the linguistic theory. That is, rather than stipulating the possible constructions of a language, a system is developed which uses the principles of grammar and language specific parameters to parse language. Specifically, a system is presented which performs syntactic analysis and translation for a subset of English and German. The cross-linguistic nature of the theory is reflected by the system which can be considered a procedural model of UG.

TR-88-19 Valira/Valisyn-Protocol Validator/Synthesizer User's Manual (Version 1.2), January 1988 Son T. Vuong and T. Lau

(Abstract not available on-line)

TR-88-20 The Impact of Artificial Intelligence on Society, September 1988 Richard S. Rosenberg

This paper presents an introduction to a number of social issues which may arise as a result of the diffusion of Artificial Intelligence (AI) applications from the laboratory to the workplace and marketplace. Four such applications are chosen for discussion: expert systems, image processing, robotics, and natural language understanding. These are briefly characterized and possible areas of misuse are explored. Of the many social issues of concern, four are selected for treatment here as representative of other potential problems likely to follow such a powerful technology as AI. These four are work (how much and of what kind), privacy (on which the assault continues), decision-making (by whom and for whose benefit), and social organization (what form it takes in a society in which intelligent systems perform so many functions). Finally, it is argued both that a major programme of study in this field should be launched and that practitioners should assume the responsibility to inform the public about their work.

TR-88-21 Clause Management System, October 1988 George K. Tsiknis and Alex Kean

In this paper, we study the full extent of the Clause Management System (CMS) proposed by Reiter and de Kleer. The CMS is adapted specifically for aiding a reasoning system (Reasoner) in generating explanations. The Reasoner transmits propositional formulae representing its knowledge to the CMS and, in return, the Reasoner can query the CMS for concise explanations w.r.t. the CMS knowledge base. We argue that, based on the type of tasks the CMS performs, it should represent its knowledge base $\Sigma $ using a set of prime implicates $PI(\Sigma )$. The classification of implicates as prime, minimal, trivial and minimal trivial is carefully examined. Similarly, the notion of a support (or roughly, an explanation) for a clause, including prime, minimal, trivial and minimal trivial supports, is also elaborated. The methods to compute these supports from implicates and a preference ordering schema for the set of supports for a given clause are also presented. The generalization of the notion of a minimal support to a conjunction of clauses is also shown. Finally, two logic-based diagnostic reasoning paradigms aided by the CMS are presented to exemplify the functionality of the CMS.

TR-88-22 Invariants of Chromatic Graphs, November 1988 Teresa Maria Przytycka and J. H. Przytycki

In the paper we construct abstract algebras which yield invariants of graphs (including graphs with colored edges --- chromatic graphs). We analyse properties of those algebras. We show that various polynomials of graphs are yielded by models of the algebras (including the Tutte and matching polynomials). In particular we consider a generalization of Tutte's polynomial to a polynomial of chromatic graphs. We analyse the relation of graph polynomials to recently discovered link polynomials.

It is known that computing the Tutte polynomial is NP-hard. We show that a part of the Tutte polynomial (and its generalization) can be computed faster than in exponential time.

TR-88-23 Resampled Curvature and Torsion Scale Space Representation of Planar and Space Curves, December 1988 Farzin Mokhtarian

The curvature scale space representations of planar curves are computed by combining information about the curvature of those curves at multiple levels of detail. Similarly, curvature and torsion scale space representations of space curves are computed by combining information about the curvature and torsion of those curves at varying levels of detail.

Curvature and torsion scale space representations satisfy a number of criteria such as efficiency, invariance, detail, sensitivity, robustness and uniqueness [Mokhtarian \& Mackworth 1986] which makes them suitable for recognizing a noisy curve at any scale or orientation.

The renormalized curvature and torsion scale space representations [Mackworth \& Mokhtarian 1988] are more suitable for recognition of curves with non-uniform noise added to them but can only be computed for closed curves.

The resampled curvature and torsion scale space representations introduced in this paper are shown to be more suitable than the renormalized curvature and torsion scale space representations for recognition of curves with non-uniform noise added to them. Furthermore, these representations can also be computed for open curves.

A number of properties of the representation are also investigated and described. An important new property presented in this paper is that no new curvature zero-crossing points can be created in the resampled curvature scale space representation of simple planar curves.

TR-88-24 Design and Implementation of a Ferry-based Protocol Test System, December 1988 Samuel T. Chanson, B. P. Lee and N. J. Parakh

The Ferry Clip concept can be used to build a Test System for protocol testing. By structuring the system into a set of modules, it is possible to minimize the effort required in using such a system to test different protocol implementations. In this paper we describe a method for structuring and implementing a Ferry Clip based Test System. Implementation issues encountered in building such a system under different environments are also discussed.

TR-89-01 Organization of Smooth Image Curves at Multiple Scales, January 1989 David G. Lowe

While edge detection is an important first step for many vision systems, the linked lists of edge points produced by most existing edge detectors lack the higher level of curve description needed for many visual tasks. For example, they do not specify the tangent direction or curvature of an edge or the locations of tangent discontinuities. In this paper, a method is presented for describing linked edge points at a range of scales by selecting intervals of the curve and scales of smoothing that are most likely to represent the underlying structure of the scene. This multi-scale analysis of curves is complementary to any multi-scale detection of the original edge points. A solution is presented for the problem of shrinkage of curves during Gaussian smoothing, which has been a significant impediment to the use of smoothing for practical curve description. The curve segmentation method is based on a measure of smoothness minimizing the third derivative of Gaussian convolution. The smoothness measure is used to identify discontinuities of curve tangents simultaneously with selecting the appropriate scale of smoothing. The averaging of point locations during smoothing provides for accurate subpixel curve localization. This curve description method can be implemented efficiently and should prove practical for a wide range of applications including correspondence matching, perceptual grouping, and model-based recognition.

TR-89-02 Using Deficiency Measure For Tiebreaking the Minimum Degree Algorithm, January 1989 Ian A. Cavers

The minimum degree algorithm is known as an effective scheme for identifying a fill reduced ordering for symmetric, positive definite, sparse linear systems. Although the original algorithm has been enhanced to improve the efficiency of its implementation, ties between minimum degree elimination candidates are still arbitrarily broken. For many systems, the fill levels of orderings produced by the minimum degree algorithm are very sensitive to the precise manner in which these ties are resolved. This paper introduces several tiebreaking schemes for the minimum degree algorithm. Emphasis is placed upon a tiebreaking strategy based on the deficiency of minimum degree elimination candidates, which can consistently identify low fill orderings for a wide spectrum of test problems. The tiebreaking strategies are integrated into a quotient graph form of the minimum degree algorithm with uneliminated supernodes. Implementations of the enhanced forms of the algorithm are tested on a wide variety of sparse systems to investigate the potential of the tiebreaking strategies.
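
To make the role of the tiebreaker concrete, here is a small Python sketch (illustrative, not the implementation studied in the paper): a greedy minimum degree ordering on an explicit symmetric sparsity pattern, with ties broken by deficiency, i.e. the number of fill edges a candidate's elimination would create.

# Greedy minimum degree ordering with a deficiency tiebreaker.
# adj maps each vertex to the set of its neighbours (symmetric pattern).
def deficiency(adj, v):
    nbrs = list(adj[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in adj[nbrs[i]])

def min_degree_order(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    order = []
    while adj:
        # Primary key: degree; tiebreaker: fill (deficiency) produced.
        v = min(adj, key=lambda u: (len(adj[u]), deficiency(adj, u)))
        order.append(v)
        nbrs = adj.pop(v)
        for a in nbrs:                       # eliminating v cliques its
            adj[a] |= nbrs - {a}             # remaining neighbours together
            adj[a].discard(v)
    return order

# 5-vertex example: a path 0-1-2-3-4 plus the chord 0-2.
print(min_degree_order({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}))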

TR-89-03 A New Approach To Test Sequence Derivation Based on External Behavior Expression (EBE), January 1989 Jianping Wu and Samuel T. Chanson

This paper presents a new approach to test sequence derivation from formal protocol specifications for protocol conformance testing. The approach is based on a model of External Behaviour Expression (EBE) which specifies only the external behaviour of a protocol in terms of the input/output sequences and their logical (function and predicate) relations, and can be obtained from formal protocol specifications in either Estelle or LOTOS. A basic test derivation theory is defined for the purpose of formalizing test sequence derivation strategies. Based on the EBE of a protocol, a test sequence derivation method is proposed to identify associations between inputs and outputs through the interaction paths and their I/O subpaths. Generic Test Cases generated from these I/O subpaths are based on specific testing purposes. Abstract Test Cases are selected in terms of particular test methods and additional requirements. Comparison to other existing schemes shows the method proposed here is simple and concise, and the resulting set of test sequences is complete and effective. It is our belief that this approach to test sequence derivation can provide the basis of a formalized framework for protocol conformance testing.

TR-89-04 Explanation and Prediction: An Architecture for Default and Abductive Reasoning, March 1989 David Poole

Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.

TR-89-05 Randomized Distributed Computing on Rings, January 1989 Lisa Higham

The communication complexity of fundamental problems in distributed computing on an asynchronous ring is examined from both the algorithmic and lower bound perspective. A detailed study is made of the effect on complexity of a number of assumptions about the algorithms. Randomization is shown to influence both the computability and complexity of several problems. Communication complexity is also shown to exhibit varying degrees of sensitivity to additional parameters including admissibility of error, kind of error, knowledge of ring size, termination requirements, and the existence of identifiers.

A unified collection of formal models of distributed computation on asynchronous rings is developed which captures the essential characteristics of a spectrum of distributed algorithms --- those that are error free (deterministic, Las Vegas, and nondeterministic) and those that err with small probability (Monte Carlo and nondeterministic/probabilistic). The nondeterministic and nondeterministic/probabilistic models are introduced as natural generalizations of the Las Vegas and Monte Carlo models respectively, and prove useful in deriving lower bounds. The unification helps to clarify the essential differences between the progressively more general notions of a distributed algorithm. In addition, the models reveal the sensitivity of various problems to the parameters listed above.

Complexity bounds derived using these models typically vary depending on the type of algorithm being investigated. The lower bounds are complemented by algorithms with matching complexity, while frequently the lower bounds hold on even more powerful models than those required by the algorithms.

Among the algorithms and lower bounds presented are two specific results which stand out because of their relative significance.

\begin{enumerate} \item If $g$ is any nonconstant cyclic function of $n$ variables, then any nondeterministic algorithm for computing $g$ on an anonymous ring of size $n$ has complexity $\Omega (n \sqrt{\log n})$ bits of communication; and there is a nonconstant cyclic boolean function $f$ such that $f$ can be computed by a Las Vegas algorithm in $O (n \sqrt{\log n})$ expected bits of communication on a ring of size $n$.

\item The expected complexity of computing AND (and a number of other natural functions) on a ring of fixed size $n$ in the Monte Carlo model is \( \Theta (n \min \{ \log n, \log \log ( 1 / \epsilon ) \} ) \) messages and bits, where $\epsilon $ is the allowable probability of error. \end{enumerate}

TR-89-06 A Completeness Theorem for NaD Set, January 1989 Paul C. Gilmore

(Abstract not available on-line)

TR-89-07 How Many Real Numbers Are There?, August 1989 Paul C. Gilmore

The question posed in the title of this paper is raised by a reexamination of Cantor's diagonal argument. Cantor used the argument in its most general form to prove that no mapping of the natural numbers into the reals could have all the reals as its range. It has subsequently been used in a more specific form to prove, for example, that the computable reals cannot be enumerated by a Turing machine. The distinction between these two forms of the argument can be expressed within a formal logic as a distinction between using the argument with a parameter F, denoting an arbitrary map from the natural numbers to the reals, and with a defined term F, representing a particular defined map.

The setting for the reexamination is a natural deduction based set theory, NaDSet, presented within a Gentzen sequent calculus. The elementary and logical syntax of NaDSet, as well as its semantics, is described in the paper. The logic extends an earlier form by removing a restriction on abstraction, and by replacing first and second order quantifiers by a single quantifier. The logic remains second order, however; this is necessary to avoid an abuse of use and mention that would otherwise arise from the interpretation of atomic sentences.

That there can be doubt about the number of reals is suggested by the failure of the general form of Cantor's diagonal argument in NaDSet. To provide a basis for discussion, a formalization of Godel-Bernays set theory is provided within NaDSet. The subsequent discussion reinforces Skolem's relativistic answer to the question posed.

Although the general form of Cantor's argument fails in NaDSet, a rule of deduction which formalizes the argument for defined maps is derived. A simple application of the rule is provided.

Feferman has argued that a type-free logic is required for the formalization of category theory since no existing logic or set theory permits self-reference of the kind required by the theory. A demonstration is provided elsewhere that category theory can be formalized within NaDSet. An elementary form of the demonstration is provided in the paper by proving a theorem to the effect that the collection of structures $\langle A, \oplus_{A} \rangle$, for which $\oplus_{A}$ is a binary, commutative, and associative operation, is itself such a structure under cartesian product and isomorphism.

TR-89-08 A Logic-Based Analysis of Dempster Shafer Theory, December 1989 Gregory M. Provan

We formulate Dempster Shafer Theory in terms of Propositional Logic, using the implicit notion of probability underlying Dempster Shafer Theory. Dempster Shafer theory can be modeled in terms of propositional logic by the tuple $(\Sigma , \varrho )$, where $\Sigma $ is a set of propositional clauses and $\varrho $ is an assignment of measure to each clause $\Sigma_i \in \Sigma $. We show that the disjunction of minimal support clauses for a clause $\Sigma_{i}$ with respect to a set $\Sigma $ of propositional clauses, $ \xi ( \Sigma_{i}, \Sigma )$, is a symbolic representation of the Dempster Shafer Belief function for $\Sigma_{i}$. The combination of Belief functions using Dempster's Rule of Combination corresponds to a combination of the corresponding support clauses. The disjointness of the Boolean formulae representing DS Belief functions is shown to be necessary. Methods of computing disjoint formulae using Network Reliability techniques are discussed.

In addition, we explore the computational complexity of deriving Dempster Shafer Belief functions, including that of the logic-based methods which are the focus of this paper. Because of intractability even for moderately-sized problem instances, we propose the use of efficient approximation methods for such computations. Finally, we examine implementations of Dempster Shafer theory, based on domain restrictions of DS theory, hypertree embeddings, and the ATMS.
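
For reference, the purely numerical operation whose symbolic counterpart the paper studies is Dempster's rule of combination; a minimal Python sketch (illustrative only, with made-up masses and frame) follows.

from itertools import product

# Dempster's rule of combination for two mass functions defined on
# subsets (frozensets) of a common frame of discernment.
def combine(m1, m2):
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y              # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

frame = frozenset({'flu', 'cold', 'allergy'})
m1 = {frozenset({'flu', 'cold'}): 0.6, frame: 0.4}
m2 = {frozenset({'flu'}): 0.7, frame: 0.3}
print(combine(m1, m2))    # masses on {flu}, {flu, cold}, and the frame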

TR-89-10 Cooperative Systems for Perceptual Tasks in a Remote Sensing Environment, January 1989 Alan K. Mackworth

To design and implement knowledge-based systems for perceptual tasks, such as interpreting remotely-sensed data, we must first evaluate the appropriateness of current expert system methodology for these tasks. That evaluation leads to four conclusions which form the basis for the theoretical and practical work described in this paper. The first conclusion is that we should build `cooperative systems' that advise and cooperate with a human interpreter rather than `expert systems' that replace her. The second conclusion is that cooperative systems should place the user and the system in symmetrical roles where each can query the other for facts, rules, explanations and interpretations. The third conclusion is that most current expert system technology is {\em ad hoc}. Formal methods based on logic lead to more powerful, and better understood systems that are just as efficient when implemented using modern Prolog technology. The fourth conclusion is that, although the first three conclusions can be, arguably, accepted for high-level rule-based symbol-manipulation tasks, there are difficulties in accepting them for perceptual tasks that rely on visual expertise. In the rest of the paper work on overcoming those difficulties in the remote sensing environment is described. In particular, the issues of representing and reasoning about image formation, map-based constraints, shape descriptions and the semantics of depiction are discussed with references to theories and prototype systems that address them.

TR-89-11 Tool Box-Based Routines for Macintosh Timing and Display, June 1989 R. Rensink

Pascal routines are described for performing and testing various timing and display operations on Macintosh computers. Millisecond timing of internal operations is described, as is a method to time inputs more accurately than tick timing. Techniques are also presented for placing arbitrary bit-image displays on the screen within one screen refresh. All routines are based on Toolbox procedures applicable to the entire range of Macintosh computers.

TR-89-12 Computer-Vision Update, June 1989 R. M. Haralick, Alan K. Mackworth and S. L. Tanimoto

(Abstract not available on-line)

TR-89-13 A Model-Based Vision System for Manipulator Position Sensing, June 1989 I. Jane Mulligan, Alan K. Mackworth and Lawrence

The task and design requirements for a vision system for manipulator position sensing in a telerobotic system are described. Model-based analysis-by-synthesis techniques offer generally applicable methods with the potential to meet the system's requirement for accurate, fast and reliable results. Edge-based chamfer matching allows efficient computation of a measure, E, of the local difference between the real image and a synthetic image generated from arm and camera models. Gradient descent techniques are used to minimize E by adjusting joint angles. The dependence of each link position on the position of the link preceding it allows the search to be broken down into lower dimensional problems. Intensive exploitation of geometric constraints on the possible position and orientation of manipulator components results in a correct and efficient solution to the problem. Experimental results demonstrate the use of the implemented prototype system to locate the boom, stick and bucket of an excavator, given a single video image.
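
To illustrate the kind of measure involved (a sketch under simplifying assumptions, not the system's code), the following Python fragment evaluates a chamfer-style error E as the mean distance from synthetic model edge points to the nearest real image edge, using a precomputed distance transform; the toy edge map and point list are made up.

import numpy as np
from scipy.ndimage import distance_transform_edt

# Chamfer-style error: mean distance from model edge points to the
# nearest edge pixel of the real image.
def chamfer_error(real_edges, model_points):
    dist = distance_transform_edt(~real_edges)   # distance to nearest edge pixel
    return dist[model_points[:, 0], model_points[:, 1]].mean()

real = np.zeros((64, 64), dtype=bool)
real[20, 10:50] = True                           # a horizontal edge segment
model = np.stack([np.full(40, 23), np.arange(10, 50)], axis=1)
print(chamfer_error(real, model))                # 3.0: the model line is 3 pixels off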

TR-89-14 A Theory of Multi-Scale Curvature-Based Shape Representation for Planar Curves, August 1989 Farzin Mokhtarian and Alan K. Mackworth

This paper presents a multi-scale, curvature-based shape representation technique for planar curves which satisfies several criteria, considered necessary for any shape representation method, better than other shape representation techniques. As a result, the representation is suitable for tasks which call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation.

The method rests on the concept of describing a curve at varying levels of detail using features that are invariant with respect to transformations which do not change the shape of the curve. Three different ways of computing the representation are described in this paper. These three methods result in three different representations: the curvature scale space image, the renormalized curvature scale space image, and the resampled curvature scale space image.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution of that curve. Several evolution properties of planar curves are described in this paper. Some of these properties show that evolution is a physically plausible operation and characterize possible behaviours of planar curves during evolution. Some show that the representations proposed in this paper in fact satisfy some of the required criteria. Others impose constraints on the location of a planar curve as it evolves. Together, these evolution properties provide a theoretical foundation for the representation methods introduced in this paper.

TR-89-15 The Asymptotic Optimality of Spider-Web Networks, January 1989 Nicholas Pippenger

We determine the limiting behavior of the linking probability for large spider web networks. The result confirms a conjecture made by Ikeno in 1959. We also show that no balanced crossbar network, in which the same components are interconnected according to a different pattern, can have an asymptotically larger linking probability.

TR-89-16 A Simple Linear Time Algorithm for Concave One-Dimensional Dynamic Programming, January 1989 Maria M. Klawe

Following [KK89] we will say that an algorithm for finding the column minima of a matrix is ordered if the algorithm never evaluates the $(i,j)$ entry of the matrix until the minima of columns $1, 2, \ldots , i$ are known. This note presents an extremely simple linear time ordered algorithm for finding column minima in triangular totally monotone matrices. Analogously to [KK89], this immediately yields a linear time algorithm for the concave one-dimensional dynamic programming problem. Wilber [W88] gave the first linear time algorithm for the concave one-dimensional dynamic programming problem, but his algorithm was not ordered and hence could not be applied in some situations. Examples of these situations are given in [GP89] and [L89]. Galil and Park [GP89] and Larmore [L89] independently found quite different ordered linear time algorithms. All of these algorithms, and ours as well, rely on the original linear-time algorithm known as SMAWK for finding column minima in totally monotone matrices [AKMSW87]. The constant in our algorithm is essentially the same as that of the Galil-Park algorithm, and since our algorithm is so simple to program, we expect it to be the algorithm of choice in implementations.
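
For orientation, a quadratic-time Python reference for the recurrence these linear-time algorithms accelerate (the concave one-dimensional dynamic programming problem); the weight function below is an illustrative choice, not one taken from the paper.

import math

# E[j] = min over 0 <= i < j of E[i] + w(i, j), where w satisfies the
# concave (inverse quadrangle) inequality.  This naive version is
# O(n^2); the ordered algorithms discussed above run in linear time.
def one_d_dp(n, w):
    E = [0.0] * (n + 1)
    for j in range(1, n + 1):
        E[j] = min(E[i] + w(i, j) for i in range(j))
    return E

# A concave function of interval length, such as sqrt, satisfies the
# required inequality.
print(one_d_dp(10, lambda i, j: math.sqrt(j - i)))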

TR-89-17 Exactly Solvable Telephone Switching Problems, January 1989 Nicholas Pippenger

For a certain class of telephone switching problems, much of our understanding arises from an analogy with statistical mechanics that was proposed by Benes in 1963. This analogy has led to the exact solution of a number of idealized problems, which we survey in this paper.

TR-89-18 The Expected Capacity of Concentrators, January 1989 Nicholas Pippenger

We determine the ``expected capacity'' of a class of sparse concentrators called ``modular concentrators''. In these concentrators, each input is connected to exactly two outputs, each output is connected to exactly three inputs, and the ``girth'' (the length of the shortest cycle in the connexion graph) is large. We consider two definitions of expected capacity. For the first (which is due to Masson and Morris), we assume that a batch of customers arrives at a random set of inputs and that a maximum matching of these customers to servers at the outputs is found. The number of unsatisfied requests is negligible if customers arrive at fewer than one-half of the inputs, and it grows quite gracefully even beyond this threshold. We also consider the situation in which customers arrive sequentially, and the decision as to how to serve each is made randomly, without knowledge of future arrivals. In this case, the number of unsatisfied requests is larger, but still quite modest.

TR-89-19 On Parallel Methods for Boundary Value Odes, September 1989 Uri Ascher and S. Y. Pat Chan

Some of the traditional methods for boundary value ODEs, such as standard multiple shooting, finite difference and collocation methods, lend themselves well to parallelization in the independent variable: the first stage of the construction of a solution approximation is performed independently on each subinterval of a mesh. However, the underlying possibly fast bidirectional propagation of information by fundamental modes brings about stability difficulties when information from the different subintervals is combined to form a global solution. Additional difficulties occur when a very stiff problem is to be efficiently and stably solved on a parallel architecture.

In this paper parallel shooting and difference methods are examined, a parallel algorithm for the stable solution of the resulting algebraic system is proposed and evaluated, and a parallel algorithm for stiff boundary value problems is proposed.

TR-89-20 A Methodology for Using a Default and Abductive Reasoning System, September 1989 David Poole

This paper investigates two different activities that involve making assumptions: predicting what one expects to be true and explaining observations. In a companion paper, a logic-based architecture for both prediction and explanation is proposed and an implementation is outlined. In this paper, we show how such a hypothetical reasoning system can be used to solve recognition, diagnostic and prediction problems. Part of this methodology is the assumption that the default reasoner must be ``programmed'' to get the right answer; it is not just a matter of ``stating what is true'' and hoping the system will magically find the right answer. A number of distinctions have been found in practice to be important: between predicting whether something is expected to be true versus explaining why it is true; and between conventional defaults (assumptions as a communication convention), normality defaults (assumed for expediency) and conjectures (assumed only if there is evidence). The effects of these distinctions on recognition and prediction problems are presented. Examples from a running system are given.

TR-89-21 Optimal Parallel Algorithms for Convex Polygon Separation, September 1989 Norm Dadoun and David G. Kirkpatrick

Cooperative parallel algorithms are presented for determining convex polygon separation and constructing convex polygon mutual tangents. Given two $n$-vertex convex polygons, using $k$ CREW processors $(1 \leq k \leq n)$, each of these algorithms has an $\Theta (\log n/(1 + \log k))$ time bound. This provides algorithms for these problems which run in $O(\log n)$ time sequentially or in constant time using a quasi-linear ($n^{\alpha}$ for some $\alpha > 0$) number of processors.

These algorithms make use of hierarchical data structures to solve their respective problems. The polygonal hierarchy used by our algorithms is available implicitly (with no additional preprocessing) within standard representations of polygons.

TR-89-22 A New Proof of the NP Completeness of Visual Match, September 1989 R. Rensink

A new proof is presented of Tsotsos' result [1] that the VISUAL MATCH problem is NP-complete when no (high-level) constraints are imposed on the search space. Like the proof given by Tsotsos, it is based on the polynomial reduction of the NP-complete problem KNAPSACK [2] to VISUAL MATCH. Tsotsos' proof, however, involves limited-precision real numbers, which introduces an extra degree of complexity to his treatment. The reduction of KNAPSACK to VISUAL MATCH presented here makes no use of limited-precision numbers, leading to a simpler and more direct proof of the result.

TR-89-23 A Data Management Strategy for Transportable Natural Language Interfaces, January 1989 J. Johnson

This thesis focuses on the problem of designing a highly portable domain independent natural language interface for standard relational database systems. It is argued that a careful strategy for providing the natural language interface (NLI) with morphological, syntactic, and semantic knowledge about the subject of discourse and the database is needed to make the NLI portable from one subject area and database to another. There has been a great deal of interest recently in utilizing the database system to provide that knowledge. Previous approaches attempted to solve this challenging problem by capturing knowledge from the relational database (RDB) schema, but were unsatisfactory for the following reasons: 1.) RDB schemas contain referential ambiguities which seriously limit their usefulness as a knowledge representation strategy for NL understanding. 2.) Knowledge captured from the RDB schema is sensitive to arbitrary decisions made by the designer of the schema. In our work we provide a new solution by applying a conceptual model for database schema design to the design of a portable natural language interface. It has been our observation that the process used for adapting the natural language interface to a new subject area and database overlaps considerably with the process of designing the database schema. Based on this important observation, we design an enhanced natural language interface with the following significant features: complete independence of the linguistic component from the database component, economies in attaching the natural language and DB components, and sharing of knowledge about the relationships in the subject of discourse for database schema design and NL understanding.

TR-89-24 Bar-Representable Visibility Graphs and a Related Network Flow Problem, August 1989 Stephen Kenneth Wismath

A bar layout is a set of vertically oriented non-intersecting line segments in the plane called bars. The visibility graph associated with a layout is defined as the graph whose vertices correspond to the bars and whose edges represent the horizontal visibilities between pairs of bars.

This dissertation is concerned with the characterization of bar-representable graphs: those graphs which are the visibility graphs of some bar layout. A polynomial time algorithm for determining if a given graph is bar-representable, and the subsequent construction of an associated layout, are provided. Weighted and directed versions of the problem are also formulated and solved; in particular, polynomial time algorithms for the layout of such graphs are developed.

The Planar Full Flow problem is to determine a plane embedding and an (acyclic) orientation of an undirected planar network that admits a feasible flow which uses all arcs (except those incident upon the source or sink) to full capacity and maintains planarity. The connection of this flow problem to bar-representable graphs is exploited to solve the weighted case of the latter. As evidence that both the acyclicity and planarity constraints are necessary to obtain a polynomial algorithm for this problem, two natural variants of the Full Flow problem are shown to be strongly NP-Complete.

TR-89-25 Efficient Construction of Binary Trees with Almost Optimal Weighted Path Length, January 1989 David G. Kirkpatrick and Teresa Maria Przytycka

We present sequential and parallel algorithms to construct binary trees with almost optimal weighted path length. Specifically, assuming that weights are normalized (to sum up to one) and error refers to the (absolute) difference between the weighted path length of a given tree and that of the optimal tree with the same weights, we present: an $O(\log n)$ time and $n \frac{\log \log n}{\log n}$ EREW processor algorithm which constructs a tree with error less than 0.172; an $O(k \log n \log^{*} n)$ time and $n^{2}$ CREW processor algorithm which produces a tree with error at most $\frac{1}{n^{k}}$; and an $O(k^{2} \log n)$ time and $n^{2}$ CREW processor algorithm which produces a tree with error at most $\frac{1}{n^{k}}$. As well, we present two sequential algorithms: an $O(kn)$ time algorithm which produces a tree with error at most $\frac{1}{n^{2^{k}}}$, and an $O(kn)$ time algorithm which produces a tree with error at most $\frac{1}{2^{n^{2^{k}}}}$. The last two algorithms use different computation models.
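
To make the quantities concrete: the weighted path length being approximated is simply the depth-weighted sum over leaves. The short sketch below (a hypothetical illustration in Python, not one of the algorithms above; the tree encoding is an assumption) computes it for normalized leaf weights.

    # Hypothetical illustration: weighted path length of a binary tree.
    # A tree is either a leaf, encoded as its weight (a float), or an
    # internal node, encoded as a pair (left, right). Weights sum to one.
    def weighted_path_length(tree, depth=0):
        if isinstance(tree, tuple):                 # internal node
            left, right = tree
            return (weighted_path_length(left, depth + 1) +
                    weighted_path_length(right, depth + 1))
        return tree * depth                         # leaf: weight * depth

    # Weights 0.25, 0.25, 0.5 arranged as ((0.25, 0.25), 0.5):
    print(weighted_path_length(((0.25, 0.25), 0.5)))   # 0.25*2 + 0.25*2 + 0.5*1 = 1.5

The error of a constructed tree is then the difference between this value and the same quantity for the optimal tree on the same weights.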

TR-89-26 Fitting Parameterized 3-D Models to Images, December 1989 David G. Lowe

Model-based recognition and tracking from 2-D images depends upon the ability to solve for projection and model parameters that will best fit a 3-D model to matching image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulations, variable dimensions, or surface deformations. Numerical stabilization methods are developed that take account of inherent inaccuracies in the image measurements and allow useful solutions to be determined even when there are fewer matches than unknown parameters. A standardized modeling language has been developed that can be used to define models and their internal parameters for efficient application to model-based vision. These new techniques allow model-based vision to be used for a much wider class of problems than was possible with earlier methods.

TR-89-27 Towards Structured Parallel Computing --- Part 1 --- A Theory of Algorithm Design and Analysis for Distributed-Memory Architectures, December 1989 Feng Gao

This paper advocates an architecture-independent, hierarchical approach to algorithm design and analysis for distributed-memory architectures, in contrast to the current trend of tailoring algorithms towards specific architectures. We show that, rather surprisingly, this new approach can achieve uniformity without sacrificing optimality. In our particular framework there are three levels of algorithm design: design of a network-independent algorithm in a network-independent programming environment, design of virtual architectures for the algorithm, and design of emulations of the virtual architectures on physical architectures. We propose, and substantiate through a complete complexity analysis of the example of ordinary matrix multiplication, the following thesis: architecture-independent optimality can lead to portable optimality. Namely, a single network-independent algorithm, when optimized network-independently, with the support of properly chosen virtual architectures, can be implemented on a wide spectrum of networks to achieve optimality on each of them with respect to both computation and communication. Besides its implications for the methodology of parallel algorithm design, our theory also suggests new questions for theoretical research in parallel computation on interconnection networks.

TR-90-01 A Theory of Multi-Scale, Curvature- and Torsion-Based Shape Representation for Space Curves, January 1990 Farzin Mokhtarian

This paper introduces a novel multi-scale shape representation technique for space curves which satisfies several criteria considered necessary for any shape representation method. These properties make the representation suitable for tasks which call for recognition of a noisy curve at any scale or orientation.

The method rests on the concept of describing a curve at varying levels of detail using features that are invariant with respect to transformations which do not change the shape of the curve. Three different ways of computing the representation are described in this paper. These three methods result in the following representations: the curvature and torsion scale space images, the renormalized curvature and torsion scale space images, and the resampled curvature and torsion scale space images.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution of that curve. Several evolution properties of space curves are described in this paper. Some of these properties show that evolution is a physically plausible operation and characterize possible behaviours of space curves during evolution. Some show that the representations proposed in this paper in fact satisfy the required criteria. Others impose constraints on the location of a space curve as it evolves. Together, these evolution properties provide a theoretical foundation for the representation methods introduced in this paper.

TR-90-02 Logical Foundations for Category Theory, May 1990 Paul C. Gilmore and George K. Tsiknis

Category theory provides an abstract and uniform treatment for many mathematical structures, and increasingly has found applications in computer science. Nevertheless, no suitable logic within which the theory can be developed has been provided. The classical set theories of Zermelo-Fraenkel and Godel-Bernays, for example, are not suitable because of the use category theory makes of self-referencing abstractions, such as in the theorem that the set of categories forms a category. That a logic for the theory must be developed, Feferman has argued, follows from the use in the theory of concepts fundamental to logic, namely, propositional logic, quantification and the abstractions of set theory.

In this paper a demonstration that the logic and set theory NaDSet is suitable for category theory is provided. Specifically, a proof of the cited theorem of category theory is provided within NaDSet.

NaDSet succeeds as a logic for category theory because the resolution of the paradoxes provided for it is based on a reductionist semantics similar to the classical semantics of Tarski for first and second order logic. Self-membership and self-reference are not explicitly excluded. The reductionist semantics is most simply presented as a natural deduction logic. In this paper a sketch of the elementary and logical syntax, or proof theory, of the logic is provided.

Formalizations for most of the fundamental concepts and constructs in category theory are presented. NaDSet definitions for natural transformations and functor categories are given and an equivalence relation on categories is defined. Additional definitions and discussions on products, comma categories, universals, limits and adjoints are presented. They provide enough evidence to support the claim that any construct, not only in categories, but also in toposes, sheaves, triples and similar theories, can be formalized within NaDSet.

TR-90-03 Optimal Algorithms for Probabilistic Solitude Detection On Anonymous Rings, January 1990 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Probabilistic algorithms that err with probability at most $\epsilon \geq 0$ are developed for the Solitude Detection problem on anonymous asynchronous unidirectional rings. Solitude Detection requires that a nonempty set of distinguished processors determine whether or not there is only one distinguished processor. The algorithms transmit an optimal expected number of bits, to within a constant factor. Las Vegas and Monte Carlo algorithms that terminate both distributively and nondistributively are developed. Their bit complexities display a surprisingly rich dependence on the kind of algorithm and on the processors' knowledge of the size of the ring.

TR-90-04 Tight Lower Bounds for Probabilistic Solitude Verification on Anonymous Rings, January 1990 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Tight lower bounds on the expected bit complexity of the Solitude Verification problem on anonymous asynchronous unidirectional rings are established that match the upper bounds demonstrated in a companion paper [5]. In the algorithms of [5], a variety of techniques are applied; in contrast, we find that a single technique, applied carefully, suffices for all of the lower bounds. The bounds demonstrate that, for this problem, the expected bit complexity depends subtly on the processors' knowledge of the size of the ring, and on the type of algorithm (Las Vegas or Monte Carlo / distributive or nondistributive termination).

TR-90-05 Direct Evidence of Occlusion in Stereo and in Motion, January 1990 James Joseph Little and Walter E. Gillett

Discontinuities of surface properties are the most important locations in a scene; they are crucial for segmentation because they often coincide with object boundaries [TMB85]. Standard approaches to discontinuity detection decouple detection of disparity discontinuities from disparity computation. We have developed techniques for locating disparity discontinuities using information internal to the stereo algorithm of [DP86], rather than by post-processing the stereo data. The algorithm determines displacements by maximizing the sum, at overlapping small regions, of local comparisons. The detection methods are motivated by analysis of the geometry of matching and occlusion and the fact that detection is not just a pointwise decision. Our methods can be used in combination to produce robust performance. This research is part of a project to build a ``Vision Machine'' [PLG+88] at MIT that integrates outputs from early vision modules. Our techniques have been extensively tested on real images.

TR-90-06 A Measure of Semantic Relatedness for Resolving Ambiguities in Natural Language Database Requests, January 1990 Julia A. Johnson and Richard S. Rosenberg

A measure of semantic relatedness based on distance between objects in the database schema has previously been used as a basis for solving a variety of natural language understanding problems including word sense disambiguation, resolution of semantic ambiguities, and attachment of post noun modifiers. The use of min/max values which are usually recorded as part of the process of designing the database schema is proposed as a basis for solving the given problems as they arise in natural language database requests. The min/max values provide a new source of knowledge for resolving ambiguities and a semantics for understanding what knowledge has previously been used by distance measures in database schemas.

TR-90-07 Automatic Generation of Interactive Applications, February 1990 Emanuel G. Noik

As user interfaces become more powerful and easier to use, they are often harder to design and implement. This has created a great demand for tools which help programmers create interactive applications. While existing interface tools simplify interface creation, they typically focus only on the interface, do not provide facilities for simplifying application generation, and are too low-level. We have developed a tool which automatically generates complete interactive applications from a high-level description of the application's semantics. We argue that our system provides a very simple yet powerful environment for application development. Key advantages include: ease of use, separation of interface and application, interface and machine independence, more comprehensive programming aids, and greater potential for software reusability. While we tend to focus on the practical motivations for using such a tool, we conclude that this approach should form the basis of an important category of interface tools and deserves further study.

TR-90-08 Characterizing Diagnoses and Systems, June 1990 Johan de Kleer, Alan K. Mackworth and Raymond Reiter

Most approaches to model-based diagnosis describe a diagnosis for a system as a set of failing components that explains the symptoms. In order to characterize the typically very large number of diagnoses, usually only the minimal such sets of failing components are represented. This method of characterizing all diagnoses is inadequate in general, in part because not every superset of the faulty components of a diagnosis necessarily provides a diagnosis. In this paper we analyze the notion of diagnosis in depth exploiting the notions of implicate/implicant and prime implicate/implicant. We use these notions to propose two alternative approaches for addressing the inadequacy of the concept of minimal diagnosis. First, we propose a new concept, that of kernel diagnosis, which is free of the problems of minimal diagnosis. Second, we propose to restrict the axioms used to describe the system to ensure that the concept of minimal diagnosis is adequate.

TR-90-09 Assumption Based Reasoning and Clause Management Systems, May 1990 Alex Kean and George K. Tsiknis

A {\em truth maintenance system} is a subsystem that manages the utilization of assumptions in the reasoning process of a problem solver. Doyle's original motivation for creating a truth maintenance system was to augment a reasoning system with a control strategy for activities concerning its non-monotonic state of beliefs. Hitherto, much effort has been invested in designing and implementing the concept of truth maintenance and little effort has been dedicated to the formalization that is essential to understanding it. This paper provides a complete formalization of the principle of truth maintenance. Motivated by Reiter and de Kleer's preliminary report on the same subject, this paper extends their study and gives a formal account of the concept of truth maintenance under the general title of {\em assumption based reasoning}. The concept of assumption based theory is defined and the notions of explanation and direct consequence are presented as forms of plausible conclusion with respect to this theory. Additionally, the concepts of extension and irrefutable sentences are discussed together with other variations of explanation and direct consequence. A set of algorithms for computing these conclusions for a given theory is presented using the notion of prime implicates. Finally, an extended example on Boolean circuit diagnosis is shown to exemplify these ideas.

TR-90-10 Parallel Algorithms for Routing in Non-Blocking Networks, January 1990 Lin and Nicholas Pippenger

(Abstract not available on-line)

TR-90-11 Multiple Light Source Optical Flow, January 1990 Robert J. Woodham

(Abstract not available on-line)

TR-90-12 Selection Networks, May 1990 Nicholas Pippenger

We establish an upper bound asymptotic to $2n \log_{2}n$ for the number of comparators required in a network that classifies $n$ values into two classes each containing $n/2$ values, with each value in one class less than or equal to each value in the other. (The best lower bound known for this problem is asymptotic to $(n/2) \log_{2}n.$)

TR-90-13 On a Lower Bound for the Redundancy of Reliable Networks with Noisy Gates, May 1990 Nicholas Pippenger, George D. Stamoulis and John N. Tsitsiklis

We prove that a logarithmic redundancy factor is necessary for the reliable computation of the parity function by means of a network with noisy gates. This result is the same as one claimed by Dobrushin and Ortyukov in 1977, but the proof they gave appears to be incorrect.

TR-90-14 Convergence Properties of Curvature and Torsion, May 1990 Farzin Mokhtarian

Multi-scale, curvature-based shape representation techniques for planar curves and multi-scale, torsion-based shape representation techniques for space curves have been proposed to the computer vision community by Mokhtarian \& Mackworth [1986], Mackworth \& Mokhtarian [1988] and Mokhtarian [1988]. These representations are referred to as the regular, renormalized and resampled curvature and torsion scale space images and are computed by combining information about the curvature or torsion of the input curve at a continuum of detail levels.

Arc length parametric representations of planar or space curves are convolved with Gaussian functions of varying standard deviation to compute evolved versions of those curves. The process of generating evolved versions of a curve as the standard deviation of the Gaussian function goes from 0 to $\infty$ is referred to as the evolution of that curve. When evolved versions of the curve are computed through an iterative process in which the curve is reparametrized by arc length in each iteration, the process is referred to as arc length evolution.

This paper contains a number of important results on the convergence properties of curvature and torsion scale space representations. It has been shown that every closed planar curve will eventually become simple and convex during evolution and arc length evolution and will remain in that state. This result is very important and shows that curvature scale space images are well-behaved in the sense that we can always expect to find a scale level at which the number of curvature zero-crossing points goes to zero and know that new curvature zero-crossing points will not be created beyond that scale level, which can be considered to be the high end of the curvature scale space image.

It has also been shown that every closed space curve will eventually tend to a closed planar curve during evolution and arc length evolution, and that every closed space curve will eventually enter a state in which new torsion zero-crossing points will not be created during evolution and arc length evolution and will remain in that state.

Furthermore, the proofs are not difficult to comprehend. They can be understood by readers without an extensive knowledge of mathematics.
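
As a rough illustration of the evolution process for planar curves (a minimal sketch assuming sampled coordinate functions and SciPy's Gaussian filter; it is not the analysis carried out in the paper), one can smooth the parameterization and count curvature zero-crossings at a given scale:

    # Hypothetical sketch: evolve a closed planar curve by Gaussian smoothing
    # of its coordinate functions and locate curvature zero-crossings at one
    # scale level (one column of a curvature scale space image).
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def curvature(x, y):
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

    t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    x = np.cos(t) + 0.3 * np.cos(5.0 * t)     # a wiggly closed curve
    y = np.sin(t) + 0.3 * np.sin(5.0 * t)

    sigma = 5.0                               # scale (standard deviation, in samples)
    xs = gaussian_filter1d(x, sigma, mode="wrap")
    ys = gaussian_filter1d(y, sigma, mode="wrap")

    k = curvature(xs, ys)
    zero_crossings = np.where(np.diff(np.sign(k)) != 0)[0]
    print(len(zero_crossings))                # fewer crossings as sigma grows

The convergence results above say, in this setting, that beyond some scale the count of zero-crossings reaches zero and stays there.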

TR-90-15 Mathematical Foundation for Orientation Based Representations of Shape, May 1990 Ying Li

Mathematical foundations for orientation based shape representation are reviewed. Basic tools, including the support function, mixed volume, vector addition, Blaschke addition and the corresponding decompositions, as well as some basic facts about convex bodies, are presented. Results on several types of curvature measures, such as spherical images and $m$-th order area functions, are summarized. As a case study, the EGI approach is examined to see how the classical results on Minkowski's problem are utilized in computational vision. Finally, results on Christoffel's problem are surveyed, including constructive proofs.

TR-90-16 An Analysis of Exact and Approximation Algorithms for Dempster-Shafer Theory, January 1990 Gregory M. Provan

(Abstract not available on-line)

TR-90-17 Polygon Triangulation in $O(N \log \log N)$ Time with Simple Data Structures, June 1990 David G. Kirkpatrick, Maria M. Klawe and Robert E. Tarjan

We give a new $O (n \log \log n )$-time deterministic algorithm for triangulating simple $n$-vertex polygons, which avoids the use of complicated data structures. In addition, for polygons whose vertices have integer coordinates of polynomially bounded size, the algorithm can be modified to run in $O (n \log^{*} n )$ time. The major new techniques employed are the efficient location of horizontal visibility edges that partition the interior of the polygon into regions of approximately equal size, and a linear-time algorithm for obtaining the horizontal visibility partition of a subchain of a polygonal chain, from the horizontal visibility partition of the entire chain. The latter technique has other interesting applications, including a linear-time algorithm to convert a Steiner triangulation of a polygon into a true triangulation.

TR-90-18 The Blocking Probability of Spider-Web Networks, June 1990 Nicholas Pippenger

We determine the limiting behaviour of the blocking probability for spider-web networks, a class of crossbar switching networks proposed by Ikeno. We use a probabilistic model proposed by the author, in which the busy links always form disjoint routes through the network. We show that if the occupancy probability is below the threshold $0.5857\ldots$, then the blocking probability tends to zero, whereas above this threshold it tends to one. This provides a theoretical explanation for results observed empirically in simulations by Bassalygo, Neiman and Vvedenskaya.

TR-90-19 The Effect of Knowledge on Belief: Conditioning, Specificity and the Lottery Paradox in Default Reasoning, June 1990 David Poole

How should what one knows about an individual affect default conclusions about that individual? This paper contrasts two views of ``knowledge'' in default reasoning systems. The first is the traditional view that one knows just what is in one's knowledge base. It is shown how, under this interpretation, having to know an exception is too strong for default reasoning. It is argued that we need to distinguish ``background'' and ``contingent'' knowledge in order to be able to handle specificity, and that this is a natural distinction. The second view of knowledge is what is contingently known about the world under consideration. Using this view of knowledge, a notion of conditioning that seems like a minimal property of a default is defined. Finally, a qualitative version of the lottery paradox is given; if we want to be able to say that individuals that are typical in every respect do not exist, we should not expect to conclude the conjunction of our default conclusions.

This paper expands on work in the proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning [35].

TR-90-20 Projected Implicit Runge-Kutta Methods for Differential-Algebraic Equations, August 1990 Uri Ascher and Linda R. Petzold

In this paper we introduce a new class of numerical methods, Projected Implicit Runge-Kutta methods, for the solution of index-two Hessenberg systems of initial and boundary value differential-algebraic equations (DAEs). These types of systems arise in a variety of applications, including the modelling of singular optimal control problems and parameter estimation for differential-algebraic equations such as multibody systems. The new methods appear to be particularly promising for the solution of DAE boundary value problems, where the need to maintain stability in the differential part of the system often necessitates the use of methods based on symmetric discretizations. Previously defined symmetric methods have severe limitations when applied to these problems, including instability, oscillation and loss of accuracy; the new methods overcome these difficulties. For linear problems we define an essential underlying boundary value ODE and prove well-conditioning of the differential (or state-space) solution components. This is then used to prove stability and superconvergence for the corresponding numerical approximations for linear and nonlinear problems.
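
For reference, an index-two Hessenberg DAE of the kind considered here can be written in the standard semi-explicit form (generic notation, assumed rather than quoted from the paper):
\[
x' = f(t, x, y), \qquad 0 = g(t, x),
\]
where the product $g_{x} f_{y}$ is assumed nonsingular; $x$ collects the differential (state-space) variables and $y$ the algebraic variables. Roughly speaking, the projected methods apply an implicit Runge-Kutta step and then project the result back onto the constraint manifold $g(t,x) = 0$.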

TR-90-21 Automating the Generation of Interactive Applications, January 1990 Emanuel G. Noik

As user interfaces become more powerful and easier to use they are often harder to design and implement. This has caused a great demand for interface tools. While existing tools ease interface creation, they typically do not provide mechanisms to simplify application development and are too low-level. Furthermore, existing tools do not provide effective mechanisms to port interactive applications across user interfaces. While some tools provide limited mechanisms to port applications across user interfaces which belong to the same class (e.g., the class of all standard graphical direct-manipulation user interfaces), very few can provide the ability to port applications across different interface classes (e.g., command-line, hypermedia, speech recognition and voice synthesis, virtual reality, etc.).

With my approach, the programmer uses an abstract model to describe the structure of the application, including the information that the application must exchange with the user, rather than describing a user interface which realizes these characteristics. By specifying application semantics at a very high level of abstraction it is possible to obtain a much greater separation between the application and the user interface. Consequently, the resulting applications can be ported not only across user interfaces which belong to a common interface class, but across interfaces which belong to distinct classes. This can be realized through simple recompilation --- source code does not have to be modified.

NAAG (Not Another Application Generator), a tool which embodies these ideas, enables programmers to create interactive applications with minimal effort. An application is modelled as a set of operations which manipulate objects belonging to user-defined object classes. The input to NAAG is a source program which describes classes, operations and their inputs and outputs, and the organization of operations within the application. Classes and operations are implemented as data structures and functions in a conventional programming language such as C. This model simplifies not only the specification and generation of the user interface, but the design and implementation of the underlying application.

NAAG utilizes existing technology such as macro-preprocessors, compilers, make programs, and low-level interface tools, to reduce the programming task. An application that is modified by adding, removing, or reorganizing artifacts (classes, operations, and menus) can be regenerated with a single command. Traditionally, software maintenance has been a very difficult task as well. Due to the use of a simple abstract model, NAAG applications are also easier to maintain. Furthermore, this approach encourages software reuse: applications consisting of arbitrary collections of original and pre-existing artifacts can be composed easily; functions which implement abstract operations are independent of both user interface aspects and the context in which they are employed.

Application development is further simplified in the following ways: the programmer describes the semantics of the user interface --- a conventional explicit specification is not required; output primitives are defined in an interface-independent manner; many programming tasks such as resource management, event processing, and communication are either handled directly by the tool or else simplified greatly for the programmer.

NAAG is currently used by the members of the Laboratory for Computational Vision at the University of British Columbia to maintain a sophisticated image processing system.

TR-90-22 Logical Foundations for Programming Semantics, August 1990 Paul C. Gilmore and George K. Tsiknis

This paper was presented to the Sixth Workshop on Mathematical Foundations of Programming Semantics held at Queen's University, May 15-19, 1990.

The paper provides an introduction to a natural deduction based set theory, NaDSet, and illustrates its use in programming semantics. The need for such a set theory for the development of programming semantics is motivated by contrasting the presentation of recursive definitions within first order logic with their presentation within NaDSet. Within first order logic such definitions are always incomplete in a very simple sense: Induction axioms must be added to the given definitions and extended with every new recursive definition. Within a set theory such as NaDSet, recursive definitions of sets are represented as terms in the theory and are complete in the sense that all properties of the set can be derived from its definition. Such definitions not only have this advantage of completeness, but they also permit recursively defined sets to be members of the universe of discourse of the logic and thereby be shown to be members of other defined sets.

The resolution of the paradoxes provided by NaDSet is dependent upon replacing the naive comprehension axiom scheme of an inconsistent first order logic with natural deduction rules for the introduction of abstraction terms into arguments. The abstraction terms admitted are a generalization of the abstraction terms usually admitted into set theory. In order to avoid a confusion of use and mention, the nominalist interpretation of the atomic formulas of the logic forces NaDSet to be second order, although only a single kind of quantifier and variable is required.

The use of NaDSet for programming semantics is illustrated for a simple flow diagram language that has been used to illustrate the principles of denotational semantics. The presentation of the semantics within NaDSet is not only fully formal, in contrast to the simply mathematical presentation of denotational semantics, but because NaDSet is formalized as a natural deduction logic, its derivations can be simply checked by machine.

TR-90-23 A Formalization of Category Theory in NaDSet, August 1990 Paul C. Gilmore and George K. Tsiknis

This paper was presented to the Sixth Workshop on Mathematical Foundations of Programming Semantics held at Queen's University, May 15-19, 1990.

Because of the increasing use of category theory in programming semantics, the formalization of the theory, that is the provision of an effective definition of what constitutes a derivation for category theory, takes on an increasing importance. Nevertheless, no suitable logic within which the theory can be formalized has been provided. The classical set theories of Zermelo-Fraenkel and Godel-Bernays, for example, are not suitable because of the use category theory makes of self-referencing abstractions, such as in the theorem that the set of categories forms a category. In this paper, a formalization of category theory and a proof of the cited theorem is provided within the logic and set theory NaDSet. NaDSet definitions for natural transformations and functor categories are given and an equivalence relation on categories is defined. Additional definitions and discussions on products, comma categories, universals, limits and adjoints are presented. They provide evidence that any construct, not only in categories, but also in toposes, sheaves, triples and similar theories, can be formalized within NaDSet.

TR-90-24 Errors and Perturbations in Vandermonde Systems, July 1990 James M. Varah

The Bjorck-Pereyra algorithm for Vandermonde systems is known to produce extremely accurate results in some cases, even when the matrix is very ill-conditioned. Recently, Higham has produced an error analysis of the algorithm which identifies when this behaviour will take place. In this paper, we observe that this analysis also predicts the error behaviour very well in general, and illustrate this with a series of extensive numerical tests. Moreover, we relate the computational error to that caused by perturbations in the matrix elements, and show that they are not always commensurate. We also discuss the relationship between these error and perturbation estimates and the ``effective well-condition'' of Chan and Foulser.
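
To give a flavour of the phenomenon being analyzed (a hypothetical toy comparison, not one of the paper's tests): a generic LU solve of the dual, polynomial-interpolation Vandermonde system can be set against Newton's divided differences, the mechanism underlying the Bjorck-Pereyra recurrences. The nodes and data function below are arbitrary choices.

    # Hypothetical comparison: generic solve of a dual Vandermonde system
    # versus Newton divided-difference interpolation (the idea behind the
    # Bjorck-Pereyra algorithm).
    import numpy as np

    def divided_differences(x, f):
        c = np.array(f, dtype=float)
        for j in range(1, len(x)):
            c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
        return c                                  # Newton-form coefficients

    def newton_eval(x, c, t):
        p = np.full_like(t, c[-1])
        for k in range(len(c) - 2, -1, -1):
            p = p * (t - x[k]) + c[k]
        return p

    n = 15
    x = np.linspace(0.0, 1.0, n)                  # interpolation nodes
    f = np.exp(3.0 * x)                           # data values
    V = np.vander(x, increasing=True)             # V @ a = f, a = monomial coefficients
    a = np.linalg.solve(V, f)                     # generic LU solve

    t = np.linspace(0.0, 1.0, 101)
    err_lu = np.max(np.abs(np.polyval(a[::-1], t) - np.exp(3.0 * t)))
    err_dd = np.max(np.abs(newton_eval(x, divided_differences(x, f), t) - np.exp(3.0 * t)))
    print(err_lu, err_dd)                         # the divided-difference route is typically far more accurate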

TR-90-25 A Tight Lower Bound on the Size of Planar Permutation Networks, July 1990 Maria M. Klawe and Tom Leighton

(Abstract not available on-line)

TR-90-26 Superlinear Bounds for Matrix Searching Problems, July 1990 Maria M. Klawe

Matrix searching in classes of totally monotone partial matrices has many applications in computer science, operations research, and other areas. This paper gives the first superlinear bound for matrix searching in classes of totally monotone partial matrices, and also contains some new upper bounds for a class with applications in computational geometry and dynamic programming.

The precise results of this paper are as follows. We show that any algorithm for finding row maxima or minima in totally monotone partial $2n \times n$ matrices with the property that the non-blank entries in each column form a contiguous segment, can be forced to evaluate $\Omega (n \alpha (n))$ entries of the matrix in order to find the row maxima or minima, where $\alpha (n)$ denotes the very slowly growing inverse of Ackermann's function. A similar result is obtained for $n \times 2n$ matrices with contiguous non-blank segments in each row. The lower bounds are proved by introducing the concept of an independence set in a partial matrix and showing that any matrix searching algorithm for these types of partial matrices can be forced to evaluate every element in the independence set. A result involving lower bounds for Davenport-Schinzel sequences is then used to construct an independence set of size $\Omega (n \alpha (n))$ in the matrices of size $2n \times n$ and $n \times 2n$.

We also give two algorithms to find row maxima and minima in totally monotone partial $n \times m$ matrices with the property that the non-blank entries in each column form a contiguous segment ending at the bottom row. The first algorithm evaluates at most $O (m \alpha (n) + n)$ entries of the skyline matrix and performs at most that many comparisons, but may have $O (m \alpha (n) \log \log n + n)$ total running time. The second algorithm is simpler and has $O (m \log \log n + n)$ total running time.

A preliminary version of this paper appeared in the Proceedings of the First ACM/SIAM Symposium on Discrete Algorithms, 1990. The research in this paper was partially supported by an NSERC Operating Grant.

TR-90-27 Generic Specification of Digital Hardware, September 1990 Jeffrey J. Joyce

This paper argues that generic description is a powerful concept in the context of formal verification, in particular, the formal verification of digital hardware. The paper also describes a technique for creating generic specifications in any language with (at least) the expressive power of higher-order logic. This technique is based on the use of higher-order predicates parameterized by function variables and type variables. We believe that this technique is a very direct (if not the most direct) way to specify hardware generically. Two examples of generic specification are given in the paper: a resettable counter and the programming level model of a very simple microprocessor.
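
To give a hypothetical flavour of the technique (illustrative notation only, not the report's actual specification), a counter can be specified by a higher-order predicate whose parameters include a type variable for the representation and function variables for its operations:
\[
\mathsf{COUNTER}\,(\mathit{zero}\colon\alpha,\; \mathit{inc}\colon\alpha\rightarrow\alpha)\,(\mathit{reset},\, \mathit{out}) \;=\; \forall t.\;\; \mathit{out}(t+1) = (\mathit{reset}(t) \Rightarrow \mathit{zero} \mid \mathit{inc}(\mathit{out}(t)))
\]
Here $\alpha$ is a type variable and $\mathit{zero}$, $\mathit{inc}$ are function variables; instantiating them with a concrete representation yields a concrete specification, which is what makes the description generic.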

TR-90-28 Parallel Techniques for Construction of Trees and Related Problems, October 1990 Teresa Maria Przytycka

The concept of a tree has been used in various areas of mathematics for over a century. In particular, trees appear to be one of the most fundamental notions in computer science. Sequential algorithms for trees are generally well studied. Unfortunately many of these sequential algorithms use methods which seem to be inherently sequential. One of the contributions of this thesis is the introduction of several parallel techniques for the construction of various types of trees and the presentation of new parallel tree construction algorithms using these methods. Along with the parallel tree construction techniques presented here, we develop techniques which have broader applications.

We use the Parallel Random Access Machine as our model of computation. We consider two basic methods of constructing trees: {\em tree expansion} and {\em tree synthesis}.

In the {\em tree expansion method}, we start with a single vertex and construct a tree by adding nodes of degree one and/or by subdividing edges. We use the parallel tree expansion technique to construct the tree representation for graphs in the family of graphs known as cographs.

In the {\em tree synthesis method}, we start with a forest of single node subtrees and construct a tree by adding edges or (for rooted trees) by creating parent nodes for some roots of the trees in the forest. We present a family of parallel and sequential algorithms to construct various approximations to the Huffman tree. All these algorithms apply the tree synthesis method by constructing a tree in a level-by-level fashion. To support one of the algorithms in the family we develop a technique which we call the {\em cascading sampling technique}.

One might suspect that the parallel tree synthesis method can be applied only to trees of polylogarithmic height, but this is not the case. We present a technique which we call the {\em valley filling technique} and develop its accelerated version called the {\em accelerated valley filling technique}. We present an application of this technique to an optimal parallel algorithm for construction of minimax trees.

TR-90-29 Surface Curvature from Photometric Stereo, October 1990 R. J. Woodham

A method is described to compute the curvature at each point on a visible surface. The idea is to use the intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. This is the idea of photometric stereo. Previously, photometric stereo has been used to obtain local estimates of surface orientation. Here, an extension to photometric stereo is described in which the spatial derivatives of the intensity values are used to determine the principal curvatures, and associated directions, at each point on a visible surface. The result shows that it is possible to obtain reliable local estimates of both surface orientation and surface curvature without making global smoothness assumptions or requiring prior image segmentation.

The method is demonstrated using images of several pottery vases. No prior assumption is made about the reflectance characteristics of the objects to be analyzed. Instead, one object of known shape, a solid of revolution, is used for calibration purposes.
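
For context, the orientation-from-intensities step that this work extends can be sketched as follows (a minimal single-pixel illustration under the usual Lambertian model with known light directions; the light vectors and intensities are made-up numbers, and the curvature extension itself is not shown):

    # Hypothetical sketch: classic photometric stereo at a single pixel.
    # Solve L @ (rho * n) = I for the albedo-scaled surface normal.
    import numpy as np

    L = np.array([[0.0, 0.0, 1.0],     # rows: directions to the three light sources
                  [0.7, 0.0, 0.7],
                  [0.0, 0.7, 0.7]])
    I = np.array([0.9, 0.75, 0.6])     # intensities measured at one pixel

    g = np.linalg.solve(L, I)          # g = rho * n
    rho = np.linalg.norm(g)            # albedo
    n = g / rho                        # unit surface normal
    print(rho, n)

The curvature method described above additionally differentiates the intensity measurements spatially to get at how the normal changes across the surface.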

TR-90-30 A Theory of Multi-Scale, Curvature and Torsion Based Shape Representation for Planar and Space Curves, October 1990 Farzin Mokhtarian

This thesis presents a theory of multi-scale, curvature and torsion based shape representation for planar and space curves. The theory presented has been developed to satisfy various criteria considered useful for evaluating shape representation methods in computer vision. The criteria are: invariance, uniqueness, stability, efficiency, ease of implementation and computation of shape properties. The regular representation for planar curves is referred to as the curvature scale space image and the regular representation for space curves is referred to as the torsion scale space image. Two variants of the regular representations, referred to as the renormalized and resampled curvature and torsion scale space images, have also been proposed. A number of experiments have been carried out on the representations which show that they are very stable under severe noise conditions and very useful for tasks which call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation.

Planar or space curves are described at varying levels of detail by convolving their parametric representations with Gaussian functions of varying standard deviations. The curvature or torsion of each such curve is then computed using mathematical equations which express curvature and torsion in terms of the convolutions of derivatives of Gaussian functions and parametric representations of the input curves. Curvature or torsion zero-crossing points of those curves are then located and combined to form one of the representations mentioned above.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution or arc length evolution of that curve. This thesis contains a number of theorems about evolution and arc length evolution of planar and space curves along with their proofs. Some of these theorems demonstrate that evolution and arc length evolution do not change the physical interpretation of curves as object boundaries and others are in fact statements on the global properties of planar and space curves during evolution and arc length evolution and their representations. Other theoretical results shed light on the local behavior of planar and space curves just before and just after the formation of a cusp point during evolution and arc length evolution. Together these results provide a sound theoretical foundation for the representation methods proposed in this thesis.

TR-90-31 The Approximation of Implicates and Explanations, January 1990 Alex Kean

This paper studies the continuum between implicates and minimal implicates; and the continuum between explanations and minimally consistent explanations. The study is based on the approximation of the set of these objects. A general definition for approximated minimal implicates, called selective implicates, is presented. Three specific instances of selective implicates: query-based, ATMS and length-based are studied. Using the set of query-based minimal implicates, explanations are generated and the properties of these explanations are studied. The goal of these studies is to extract computationally feasible properties in order to achieve tractable abduction. The setting is the compiled approach using minimal implicates in Clause Management Systems.

TR-90-32 Performance Monitoring in Multi-transputer Networks, October 1990 Jie Cheng Jiang

Parallel architectures, like the transputer-based multicomputer network, offer potentially enormous computational power at modest cost. However, writing programs on a multicomputer to exploit parallelism is very difficult due to the lack of tools to help users understand the run-time behavior of the parallel system and detect performance bottlenecks in their programs. This thesis examines the performance characteristics of parallel programs in a multicomputer network, and describes the design and implementation of a real-time performance monitoring tool on transputers.

We started with a simple graph theoretical model in which a parallel computation is represented as a weighted directed acyclic graph, called the {\em execution graph}. This model allows us to easily derive a variety of performance metrics for parallel programs, such as program execution time, speedup, efficiency, etc. From this model, we also developed a new analysis method called {\em weighted critical path analysis} (WCPA), which incorporates the notion of parallelism into critical path analysis and helps users identify the program activities which have the most impact on performance. Based on these ideas, the design of a real-time performance monitoring tool was proposed and implemented on a 74-node transputer-based multicomputer. Major problems in parallel and distributed monitoring addressed in this thesis are: global state and global clock, minimization of monitoring overhead, and the presentation of meaningful data. New techniques and novel approaches to these problems have been investigated and implemented in our tool. Lastly, benchmarks are used to measure the accuracy and the overhead of our monitoring tool. We also demonstrate how this tool was used to improve the performance of an actual parallel application by more than 50\%.
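
The execution-graph model lends itself to a standard longest-path computation; the sketch below (a toy stand-in in Python with made-up activities and costs, not the thesis's WCPA implementation) finds the critical path of a weighted DAG in topological order.

    # Hypothetical sketch: critical (longest) path in a weighted execution DAG.
    # Node weights model activity durations; edges model dependencies.
    from collections import defaultdict

    weights = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 4.0}          # activity costs
    edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]    # dependencies

    succ = defaultdict(list)
    indeg = {v: 0 for v in weights}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Kahn's topological order, then a longest-path sweep.
    order, ready = [], [v for v in weights if indeg[v] == 0]
    while ready:
        u = ready.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)

    dist = {v: weights[v] for v in weights}
    pred = {v: None for v in weights}
    for u in order:
        for v in succ[u]:
            if dist[u] + weights[v] > dist[v]:
                dist[v] = dist[u] + weights[v]
                pred[v] = u

    end = max(dist, key=dist.get)
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    print(list(reversed(path)), dist[path[0]])                  # e.g. ['a', 'b', 'd'] 9.0

Weighted critical path analysis, as described above, additionally weights activities by the available parallelism rather than by raw duration alone.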

TR-90-33 On the Power of a Posteriori Error Estimation for Numerical Integration and Function Approximation, November 1990 Feng Gao

We show that using a type of divided-difference test as an a posteriori error criterion, the solutions of a class of simple adaptive algorithms for numerical integration and function approximation, such as a piecewise Newton-Cotes rule or a piecewise Lagrange interpolation, are guaranteed to have an approximation-theoretic property of near-optimality. Namely, upon successful termination of the algorithm the solution is guaranteed to be close to the solution given by the spline interpolation method on the same mesh to within any prescribed tolerance.
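
A toy version of the kind of algorithm analyzed might look like the following (a sketch under assumed details, not the paper's algorithm): an adaptive composite trapezoid rule that refines an interval whenever a scaled second divided difference of the integrand exceeds the tolerance, so the a posteriori test is a divided-difference test.

    # Hypothetical sketch: adaptive piecewise trapezoid rule whose refinement
    # criterion is a divided-difference test rather than a standard estimate.
    import math

    def divided_diff2(f, a, m, b):
        # second divided difference f[a, m, b]
        return ((f(b) - f(m)) / (b - m) - (f(m) - f(a)) / (m - a)) / (b - a)

    def adaptive_trapezoid(f, a, b, tol, depth=0):
        m = 0.5 * (a + b)
        # Refine while the second divided difference, scaled by the squared
        # interval width (a rough local error proxy), exceeds the tolerance.
        if depth < 30 and abs(divided_diff2(f, a, m, b)) * (b - a) ** 2 > tol:
            return (adaptive_trapezoid(f, a, m, tol, depth + 1) +
                    adaptive_trapezoid(f, m, b, tol, depth + 1))
        return 0.5 * (b - a) * (f(a) + f(b))

    print(adaptive_trapezoid(math.sin, 0.0, math.pi, 1e-4))   # close to 2.0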

TR-90-34 Embedding All Binary Trees in the Hypercube, January 1990 Alan S. Wagner

An ${\cal O} (N^{2})$ heuristic algorithm is presented that embeds all binary trees, with dilation 2 and small average dilation, into the optimal sized hypercube. The heuristic relies on a conjecture about all binary trees with a perfect matching. It provides a practical and robust technique for mapping binary trees into the hypercube and ensures that the communication load is evenly distributed across the network assuming any shortest path routing strategy. One contribution of this work is the identification of a rich collection of binary trees that can be easily mapped into the hypercube.

TR-90-35 More Reasons Why Higher-Order Logic is a Good Formalism for Specifying and Verifying Hardware, January 1990 Jeffrey J. Joyce

(Abstract not available on-line)

TR-90-36 From Formal Verification to Silicon Compilation, January 1990 Jeffrey J. Joyce, Liu, Rushby, Shankar, Suaya and von Henke

Formal verification is emerging as a viable method for increasing design assurance for VLSI circuits. Potential benefits include reduction in the time and costs associated with testing and redesign, improved documentation and ease of modification, and greater confidence in the quality of the final product. This paper reports on an experiment whose main purpose was to identify the difficulties of integrating formal verification with conventional VLSI CAD methodology. Our main conclusion is that the most effective use of formal hardware verification will be at the higher levels of VLSI system design, with lower levels best handled by conventional VLSI CAD tools.

TR-90-37 The UBC OSI Distributed Application Programming Environment, January 1991 G. Neufeld, M. Goldberg and B. Brachman

(Abstract not available on-line)

TR-90-38 The Generation of Phrase-Structure Representation from Principles, January 1990 David C. LeBlanc

Implementations of grammatical theory have traditionally been based upon Context-Free Grammar (CFG) formalisms which all but ignore questions of learnability. Even implementations which are based upon theories of Generative Grammar (GG), a paradigm which is supposedly motivated by learnability, rarely address such questions. In this thesis we will examine a GG theory which has been formulated primarily to address questions of learnability and present an implementation based upon this theory. The theory argues from Chomsky's definition of epistemological priority that principles which match elements and structures from prelinguistic systems with elements and structures in linguistic systems are preferable to those which are defined purely linguistically or non-linguistically. A procedure for constructing phrase-structure representations from prelinguistic relations using principles of node percolation (rather than the traditional $\overline{X}$-theory of GG theories or phrase-structure rules of CFG theories) is presented and this procedure is integrated into a left-right, primarily bottom-up parsing mechanism. Specifically, we present a parsing mechanism which derives phrase-structure representations of sentences from Case- and $\Theta $-relations using a small number of Percolation Principles. These Percolation Principles simply determine the categorial features of the dominant node of any two adjacent nodes in a representational tree, doing away with explicit phrase structure rules altogether. The parsing mechanism also instantiates appropriate empty categories using a filler-driven paradigm for leftward argument and non-argument movement. Procedures modelling learnability are not implemented in this work, but the applicability of the presented model to a computational model of language is discussed.

TR-90-39 Finding Extrema With Unary Predicates, January 1990 Feng Gao, Leonidas J. Guibas, David G. Kirkpatrick, William T. Laaser and James Saxe

We consider the problem of determining the maximum and minimum elements of a set $\{x_{1}, \ldots ,x_{n}\}$, drawn from some finite universe ${\cal U}$ of real numbers, using only unary predicates of the inputs. It is shown that $\Theta (n + \log |{\cal U} |)$ unary predicate evaluations are necessary and sufficient, in the worst case. Results are applied to i) the problem of determining approximate extrema of a set of real numbers, in the same model, and ii) the multiparty broadcast communication complexity of determining the extrema of an arbitrary set of numbers held by distinct processors.

TR-90-40 Markov Random Fields in Visual Reconstruction, January 1990 Ola Siksik

Markov Random Fields (MRFs) are used in computer vision as an effective method for reconstructing a function starting from a set of noisy or sparse data, or in the integration of early vision processes to label physical discontinuities. The MRF formalism is attractive because it enables the assumptions used to be explicitly stated in the energy function. The drawbacks of such models have been the computational complexity of the implementation, and the difficulty in estimating the parameters of the model.

In this thesis, the deterministic approximation to the MRF models derived by Girosi and Geiger [10] is investigated, and following that approach, a MIMD-based algorithm is developed and implemented on a network of T800 transputers under the Trollius operating system. A serial version of the algorithm has also been implemented on a SUN 4 under Unix.

The network of transputers is configured as a 2-dimensional mesh of processors (currently 16, configured as a $4 \times 4$ mesh), and the input partitioning method is used to distribute the original image across the network.

The implementation of the algorithm is described, and the suitability of the transputer for image processing tasks is discussed.

The algorithm was applied to a number of images for edge detection, and produced good results in a small number of iterations.

TR-91-01 Revision in ACMS, April 1991 Alex Kean, 19 pages

The motivation for creating truth maintenance systems is twofold: first, to support the abduction process of generating explanations; and second, to perform the necessary {\em bookkeeping} for revision of the knowledge base. The process of revision is defined as addition and deletion of knowledge from the knowledge base. A logical scheme for tracking conclusions in an assumption based clause management system (ACMS) for the purpose of abduction and revision is proposed. As a consequence, an incremental deletion scheme is derived. A protocol for assumption revision is demonstrated by a backtrack search example. The proposed ACMS is the first truth maintenance system that employs incremental deletion as part of its capability.

TR-91-02 A Multigrid Method for Shape from Shading, April 1991 Uri M. Ascher and Paul M. Carter, 17 pages

The shape-from-shading problem has received much attention in the Computer Vision literature in recent years. The basic problem is to recover the {\em shape z(x,y)} of a surface from a given map of its {\em shading}, i.e. its variation of brightness over a given domain. Mathematically, one has to solve approximately the {\em image irradiance equation} \begin{center} $R(p,q) = E(x,y)$ \end{center} relating a given image irradiance {\em E(x,y)} to the radiance of the surface at each point {\em (x,y)}, with {\em R(p,q)} a given {\em reflectance map}, where $p = z_{x}$ and $q = z_{y}$.

A possible presence of noise and lack of adequate boundary conditions adds to the difficulty of this problem. A number of different approaches towards its solution have been proposed in the Vision literature, including various regularization models. However, a reliable, efficient solution method for practical instances has remained elusive so far.

In this paper we analyze the various solution models proposed with the aim of applying an efficient multigrid solver. A combination of an FMG-continuation technique with an appropriate discretization of one such solution model proposed by B. Horn yields an efficient solver. Our results are demonstrated by examples.
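
One common regularization model for this problem, given here only as a generic illustration (the paper analyzes several proposed models, including Horn's), seeks gradient fields $(p,q)$ minimizing a data term plus a smoothness term:
\[
\min_{p,q}\; \iint \left( E(x,y) - R(p,q) \right)^{2} dx\, dy \;+\; \lambda \iint \left( p_{x}^{2} + p_{y}^{2} + q_{x}^{2} + q_{y}^{2} \right) dx\, dy,
\]
possibly augmented by an integrability penalty on $p_{y} - q_{x}$; discretizing the associated Euler-Lagrange equations yields the large sparse nonlinear systems to which FMG-type multigrid solvers can be applied.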

TR-91-03 Stability of computational methods for constrained dynamics systems, May 1991 Uri M. Ascher and Linda R. Petzold, 28 pages

Many methods have been proposed for numerically integrating the differential-algebraic systems arising from the Euler-Lagrange equations for constrained motion. These are based on various problem formulations and discretizations. We offer a critical evaluation of these methods from the standpoint of stability.

Considering a linear model, we first give conditions under which the differential-algebraic problem is well-conditioned. This involves the concept of an essential underlying ODE. We review a variety of reformulations which have been proposed in the literature and show that most of them preserve the well-conditioning of the original problem. Then we consider stiff and nonstiff discretizations of such reformulated models. In some cases, the same implicit discretization may behave in a very different way when applied to different problem formulations, acting as a stiff integrator on some formulations and as a nonstiff integrator on others. We present the approach of projected invariants as a method for yielding problem reformulations which are desirable in this sense.
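
For concreteness, the constrained Euler-Lagrange equations referred to above can be written in the usual descriptor form (standard notation, assumed here rather than taken from the paper):
\[
M(q)\, q'' = f(q, q', t) - G^{T}(q)\, \lambda, \qquad 0 = g(q), \qquad G(q) = \partial g / \partial q,
\]
an index-three DAE; the reformulations compared commonly include differentiating the constraint to lower the index, stabilization, and projection onto invariants.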

TR-91-04 Starshaped Sets, Their Distance Functions and Star Hulls, May 1991 Ying Li, 23 pages

(Abstract not available on-line)

TR-91-05 FDT Tools For Protocol Development, May 1991 A. A. F. Loureiro, Samuel T. Chanson and Song T. Vuong, 45 pages

FDT tools support protocol development by making certain activities feasible, easier to perform, more reliable, and faster. This paper discusses the desirable properties of FDT tools and classifies them according to the different stages of the protocol development cycle. An assessment of the tools available so far and projections (or suggestions) of the tools to come are given. A list of the tools that have appeared since the mid 1980's is also included.

TR-91-06 Parallel and Distributed Algorithms for Constraint Networks, May 1991 Ying Zhang and Alan K. Mackworth, 21 pages

This paper develops two new algorithms for solving a finite constraint satisfaction problem (FCSP) in parallel. In particular, we give a parallel algorithm for the EREW PRAM model and a distributed algorithm for networks of interconnected processors. Both of these algorithms are derived from arc consistency algorithms, which are preprocessing algorithms in general, but can be used to solve an FCSP when it is represented by an acyclic constraint network. If an FCSP can be represented by an acyclic constraint network of size $n$ with width bounded by a constant, then (1) the parallel algorithm takes $O(\log n)$ time using $O(n)$ processors and (2) there is a mapping of this problem to a distributed computing network of $\mathit{poly}(n)$ processors which stabilizes in $O(\log n)$ time.
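
For background, the sequential arc consistency procedure from which both algorithms are derived is, in outline, the familiar AC-3 loop below (a generic sketch with a made-up toy CSP; the paper's contribution is the parallel and distributed formulation, not this loop).

    # Hypothetical sketch: AC-3 style arc consistency for a finite CSP.
    # domains: variable -> set of values; constraints: (x, y) -> predicate.
    from collections import deque

    def revise(domains, constraints, x, y):
        test = constraints[(x, y)]
        removed = {a for a in domains[x]
                   if not any(test(a, b) for b in domains[y])}
        domains[x] -= removed
        return bool(removed)

    def ac3(domains, constraints):
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            if revise(domains, constraints, x, y):
                if not domains[x]:
                    return False              # a domain was wiped out: inconsistent
                queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
        return True

    # Toy example: three variables that must be pairwise different.
    doms = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2, 3}}
    cons = {(x, y): (lambda a, b: a != b)
            for x in doms for y in doms if x != y}
    print(ac3(doms, cons), doms)

On an acyclic constraint network of bounded width, achieving arc consistency in this sense is enough to read off a solution, which is why the parallel and distributed versions above solve the FCSP outright in that case.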

TR-91-15 Formulations of an Extended NaDSet, August 1991 Paul C. Gilmore and George K. Tsiknis, 28 pages

The NaDSet of this paper, a {\underline N}atural {\underline D}eduction based {\underline Set} theory and logic, is an extension of an earlier logic of the same name. It and some of its applications have been described in earlier papers. A proof of the consistency and completeness of NaDSet is provided elsewhere. In all these earlier papers NaDSet has been formulated as a Gentzen sequent calculus similar to the formulation LK by Gentzen of classical first order logic, although it was claimed that any natural deduction formalization of first order logic, such as Gentzen's natural deduction formulation NK, could be simply extended to be a formalization of NaDSet. This is indeed the case for the method of semantic tableaux of Beth or for Smullyan's version of the tableaux, but the extensions needed for other formalizations, including NK and the intuitionistic version NJ, require some care. The consistency of NaDSet is dependent upon restricting its axioms to those of the form ${\bf A} \rightarrow {\bf A}$, where {\bf A} is an atomic formula; an equivalent restriction for the natural deduction formulation is not obvious. The main purpose of this paper is to describe the needed restriction and to prove the equivalence of the resulting natural deduction logic with the Gentzen sequent calculus formulation for both the intuitionistic and the classical versions of NaDSet. Additionally the paper provides a brief sketch of the motivation for NaDSet and some of its proven and potential applications.

The authors gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada.

TR-91-16 A Simple Primal Algorithm for Intersecting 3-Polyhedra in Linear Time, July 1991 Andrew K. Martin, 44 pages

This thesis presents, in full, a simple linear time algorithm for intersecting two convex 3-polyhedra $P$ and $Q$. This differs from the first such algorithm -- due to Chazelle -- in that it operates entirely in primal space, whereas Chazelle's algorithm relies heavily on duality transforms. We use the hierarchical representations of polyhedra due to Dobkin and Kirkpatrick to induce cell complexes between coarse approximations $P^k$ and $P_k$ of $P$ satisfying $P_k \subseteq P \subseteq P^k$, and similar approximations $Q^k$ and $Q_k$ of $Q$ satisfying $Q_k \subseteq Q \subseteq Q^k$. We show that the structure of such complexes allows intersection queries to be answered efficiently. In particular, the sequence of cells intersected by a ray can be identified in time proportional to the length of the sequence. The algorithm operates by recursively computing the intersections $P^k \cap Q_k$ and $P_k \cap Q^k$. Then edges of the union of approximations $P \cap Q^k$ and $Q \cap P^k$ are traversed by tracing their intersection with the two cell complexes. We show that each such edge can be traversed in constant time. In the process, most of the edges of $P \cap Q$ which lie simultaneously on the boundary of $P$ and $Q$ will be traced. We show that the total time needed to construct those which remain is linear in the size of $P$ and $Q$. Though based on the same general principles, the algorithm presented here is somewhat simpler than that described by Chazelle, which uses only the cell complexes induced by the inner hierarchical representations of $P$ and $Q$. By extending Chazelle's search structure to the space exterior to the given polyhedra, we avoid having to operate simultaneously in primal and dual spaces. This permits us to conceptualise the algorithm as traversing the edges of the boundary of $(P \cap Q^k) \cup (Q \cap P^k)$. As a side effect, we avoid one half of Chazelle's recursive calls, which leads to a modest improvement in the asymptotic constants.

TR-91-17 On the CONSISTENCY and COMPLETENESS of an EXTENDED NaDSet, August 1991 Paul C. Gilmore, 54 pages

NaDSet in its extended form has been defined in several previous papers describing its applications. It is a {\underline N}atural {\underline D}eduction based {\underline Set} theory and logic. In this paper the logic is shown to enjoy a form of $\omega$-consistency from which simple consistency follows. The proof uses transfinite induction over the ordinals up to $\varepsilon_0$, in the style of Gentzen's consistency proof for arithmetic. A completeness proof in the style of Henkin is also given. Finally the cut rule of deduction is shown to be redundant.

TR-91-18 Photometric Stereo: Lambertian Reflectance and Light Sources with Unknown Direction and Strength, August 1991 R. J. Woodham, Y. Iwahori and Rob A. Barman, 11 pages

This paper reconsiders the familiar case of photometric stereo under the assumption of Lambertian surface reflectance and three distant point sources of illumination. Here, it is assumed that the directions to and the relative strengths of the three light sources are not known {\it a priori}. Rather, estimation of these parameters becomes part of the problem formulation.

Each light source is represented by a 3-D vector that points in the direction of the light source and has magnitude proportional to the strength of the light source. Thus, nine parameters are required to characterize the three light sources. It is shown that, regardless of object shape, triples of measured intensity values are constrained to lie on a quadratic surface having six degrees of freedom. Estimation of the six parameters of the quadratic surface allows the determination of the nine parameters of the light sources up to an unknown rotation.

This is sufficient to determine object shape, although attitude with respect to the world-based or the camera-based coordinate system can not be simultaneously recovered without additional information.
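
The underlying algebra is roughly as follows (a sketch assuming, for simplicity, a common albedo $\rho$, with the three light-source vectors forming the rows of a matrix $L$; details and normalization may differ from the report):

$$
I_i = \rho\,\mathbf{n}\cdot\mathbf{L}_i,\quad i=1,2,3
\qquad\Longrightarrow\qquad
\mathbf{I} = L\,(\rho\,\mathbf{n})
\qquad\Longrightarrow\qquad
\mathbf{I}^{T}(LL^{T})^{-1}\mathbf{I} = \rho^{2},
$$

so the measured triples $\mathbf{I}$ lie on a quadratic (ellipsoidal) surface determined by the symmetric matrix $(LL^{T})^{-1}/\rho^{2}$, which has six independent entries; recovering $LL^{T}$ fixes $L$ only up to an orthogonal factor, i.e. up to the unknown rotation mentioned above.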

TR-91-20 Backward Error Estimates for Toeplitz and Vandermonde Systems, September 1991 Jim M. Varah, 14 pages

Given a computed solution $\overline{x}$ to a structured linear system $Ax = b$, it is of interest to find nearby systems with $\overline{x}$ as exact solution, and which have the same structure as $A$. In this paper, we show that the distance to these nearby structured systems can be much larger than for the corresponding general perturbation for Toeplitz and Vandermonde systems. In fact, even the correctly rounded solution $\hat{x}$ may require a structured perturbation of $O(\eta\parallel\hat{x}\parallel)$, not $O(\eta)$ as might be expected.
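
One common way to formalize the quantity at issue (a standard definition of structured backward error, not necessarily the exact normalization used in the report) is

$$
\eta_{\mathcal{S}}(\overline{x}) \;=\; \min\bigl\{\, \varepsilon \;:\; (A+\Delta A)\,\overline{x} = b,\ \ \|\Delta A\| \le \varepsilon\,\|A\|,\ \ A+\Delta A \in \mathcal{S} \,\bigr\},
$$

where $\mathcal{S}$ is the class of matrices with the given structure (Toeplitz or Vandermonde); the point of the paper is that $\eta_{\mathcal{S}}$ can be much larger than the corresponding unstructured backward error.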

TR-91-21 Leaders Election Without a Conflict Resolution Rule - Fast and Efficient Randomized Simulations among CRCW PRAMs, October 1991 Joseph Gil and Yossi Matias, 26 pages

We study the question of fast leader election on TOLERANT, a CRCW PRAM model which tolerates concurrent writes but does not support symmetry breaking. We give a randomized simulation of MAXIMUM (a very strong CRCW PRAM) on TOLERANT. The simulation is optimal, reliable, and runs in nearly doubly logarithmic time and linear space. This is the first simulation which is fast, optimal {\it and} space-efficient, and therefore permits a true comparison of algorithms running on different CRCW PRAMs. Moreover, it implies that the memory for which concurrent read or concurrent write is assumed need never be more than linear -- the rest of the memory can always be addressed under the EREW convention. The techniques presented in this paper tackle fundamental difficulties in the design of fast parallel algorithms.

TR-91-22 Model-Guided Grouping for 3-D Motion Tracking, October 1991 Xun Li and David G. Lowe, 14 pages

The objective of this paper is to develop a robust solution to the correspondence problem in model-based motion tracking, even when the frame-to-frame motion is relatively fast. A new approach called Model-Guided Grouping, which is used to derive intermediate-level structures as our matching tokens, is introduced. The groupings are guided and derived locally, with the concurrent use of model structures, around the predicted model during object tracking. We choose junctions and parallel pairs as our matching tokens because the information coded in these structures is relatively invariant across consecutive frames. The matching strategy is coarse-to-fine, and partial matching is also allowed when occlusions are present. A method for evaluating the probability of an accidental match based on junction groupings is discussed. Systematic testing shows that matches based on these new methods improve correspondence reliability by about an order of magnitude over previous methods based on matching individual line segments.

TR-91-23 The Tree Model for Hashing: Lower and Upper Bounds, December 1991 Joseph Gil, Friedhelm Meyer auf der Heide and Avi Wigderson, 29 pages

TR-91-24 Surface Reconstruction by Coupled Depth/Slope Model with Natural Boundary, November 1991 Ying Li, 27 pages

This report describes an IRIS project on reconstructing surface height from gradient. The coupled depth/slope model developed by J.G. Harris has been used and augmented with natural boundary conditions. Experiments have been conducted with emphasis on how to deal with uncertainties about boundary values. Experiments have shown that the reconstructed surfaces conform to the original shapes if accurate boundary values are given. The algorithm fails to produce correct shapes when inaccurate boundary values are used. Natural boundary conditions are necessary conditions for the problem of variational calculus to be solved. Experiments have shown that natural boundary conditions can be relied upon when no estimates of boundary values can be made, except on occluding boundaries. When relative boundary values of occluding boundaries can be assumed, good reconstruction results can be obtained.

TR-91-25 Computational Architectures for Responsive Vision: the Vision Engine, November 1991 James J. Little, Rod Barman, Stewart Kingdon and Jiping Lu, 10 pages

To respond actively to a dynamic environment, a vision system must process perceptual data in real time, and in multiple modalities. The structure of the computational load varies across the levels of vision, requiring multiple architectures. We describe the Vision Engine, a system with a pipelined early vision architecture, Datacube image processors, connected to a MIMD intermediate vision system, a set of Transputers. The system uses a controllable eye/head for tasks involving motion, stereo and tracking.

A simple pipeline model describes image transformation through multiple functional stages in early vision. Later processing (e.g., segmentation, edge linking, perceptual organization) cannot easily proceed on a pipeline architecture. A MIMD architecture is more appropriate for the irregular data and functional parallelism of later visual processing.

The Vision Engine is designed for general vision tasks. Early vision processing, both optical flow and stereo, is implemented in near real-time using the Datacube, producing dense vector fields with confidence measures, transferred at near video rates to the Transputer subsystem. We describe a simple implementation combining, in the Transputer system, stereo and motion information from the Datacube.

TR-91-26 The Logic of Constraint Satisfaction, November 1991 Alan K. Mackworth, 20 pages

The Constraint Satisfaction Problem (CSP) formalization has been a productive tool within Artificial Intelligence and related areas. The Finite CSP (FCSP) framework is presented here as a restricted logical calculus within a space of logical representation and reasoning systems. FCSP is formulated in a variety of logical settings: theorem proving in first order predicate calculus, propositional theorem proving (and hence SAT), the Prolog and Datalog approaches, constraint network algorithms, a logical interpreter for networks of constraints, the Constraint Logic Programming (CLP) paradigm and propositional model finding (and hence SAT, again). Several standard, and some not-so-standard, logical methods can therefore be used to solve these problems. By doing this we obtain a specification of the semantics of the common approaches. This synthetic treatment also allows algorithms and results from these disparate areas to be imported, and specialized, to FCSP; the special properties of FCSP are exploited to achieve, for example, completeness and to improve efficiency. It also allows export to the related areas. By casting CSP both as a generalization of FCSP and as a specialization of CLP it is observed that some, but not all, FCSP techniques lift to CSP and, perhaps, thereby to CLP. Various new connections are uncovered, in particular between the proof-finding approaches and the alternative model-finding approaches that have arisen in depiction and diagnosis applications.

TR-91-28 On Detecting Regularity of Functions: A Probabilistic Analysis, November 1991 F. Gao and G.W. Wasilkowski, 11 pages

TR-91-29 The EAN X.500 Directory Service, November 1991 Barry Brachman, Murray Goldberg Gerald Neufeld and Duncan Stickings, 38 pages

The OSI directory system manages a distributed directory information database of named objects, defining a hierarchical relationship between the objects. An object consists of a set of attributes as determined by a particular class. Attributes are tuples that include a type and one or more values. The directory database is partitioned among a set of directory system agents. The directory service is provided by a collection of agents and incorporates distributed algorithms for name resolution and search, resulting in a network transparent service. The objects can represent many real-world entities. The service is intended to serve a very large and diverse user community. This paper describes experiences gained in implementing the directory service. It also points out a number of areas in the current OSI directory design that require further work and then describes how the EAN directory system has addressed these difficulties.

TR-91-30 Implementing a Normative Theory of Communication in a Framework for Default Reasoning, November 1991 Andrew Csinger, 58 pages

This thesis presents a framework for inter-agent communication, represented and partially implemented with default reasoning. I focus on the limited goal of determining the meaning for a Hearer-agent of an utterance $\omega$ by a Speaker-agent, in terms of the beliefs of the interlocutors. This meaning is generally more than just the explicit propositional contents of $\omega$, and more than just the Speaker's goal to convey her belief that $\omega$.

One way of determining this meaning is to let the Hearer take stock of the implicit components of the Speaker's utterances. Among the implicit components of the meaning of $\omega$, I show in particular how to derive certain of its presuppositions with a set of default schemata using a framework for default reasoning.

More information can be extracted from the communications channel between interlocutors by adopting a normative model of inter-agent communication, and using this model to explain or 'make sense' of the Speaker's utterances. I construct such a model expressed in terms of a set of default principles of communication using the same framework for default reasoning.

The task of deriving the meaning of an utterance is similar to the job required of a user-interface, where the user is the Speaker-agent, and the interface itself is the Hearer-agent. The goal of a user-interface as Hearer is to make maximal use of the data moving along the communications channel between user and application.

The result is an integrated theory of normative, inter-agent communications expressed within an ontologically and logically minimal framework. This work demonstrates the development and application of a methodology for the use of default reasoning. The implementation of the theory is also presented, along with a discussion of its applicability to practical user-interfacing. A view emerges of user-modelling as a component of a user-interface.

TR-91-31 Solving Domain Equations in NaDSet, December 1991 Paul C. Gilmore and George K. Tsiknis, 23 pages

The solution of systems of domain equations is the basis for what is called the Strachey-Scott approach to providing a denotational semantics for programming languages. The solutions offered by the mathematical methods developed by Scott, however, do not provide a computational basis for the semantics in the form of a proof theory. The purpose of this paper is to provide such a theory using the logic and set theory NaDSet.

The development of NaDSet was motivated by the following three principles:
1. Abstraction, along with truth functions and quantification, is one of the three fundamental concepts of logic and should be formalized in the same manner as the other two.
2. Natural deduction presentations of logic provide a transparent formalization of Tarski's reductionist semantics.
3. Atomic formulas receive their truth values from a nominalist interpretation.

That these three principles lead to a successful resolution of the set theoretic paradoxes and to a sound formulation of NaDSet has been demonstrated elsewhere with proofs of consistency and completeness. Applications of NaDSet to programming language semantics, category theory, non-well-founded sets, and to foundational questions of mathematics have also been demonstrated.

The financial support of the Natural Science and Engineering Research Council of Canada is gratefully acknowledged.

TR-91-32 Character Animation using Hierarchical B-Splines, September 1, 1991 David R. Forsey, 15 pages

The challenge of building and animating computer generated human and animal figures is complicated by the need to smoothly and realistically control the deformation of the surface around the articulations of the underlying skeleton. This paper reviews approaches to surface modelling for character animation and describes a geometric (as opposed to physically-based) approach to character modelling using an extension to the hierarchical B-spline. This technique provides differential attachment of the surface to the skeleton and allows multi-resolution control of surface deformation during animation. The attachment mechanism is simple, easy to use, inexpensive, extensible and can drastically reduce the effort required to animate a surface. The techniques introduced are illustrated using examples of human and animal forms.

TR-92-01 Conditional Logics for Default Reasoning and Belief Revision, January 1992 Craig Boutilier, 223 pages

Much of what passes for knowledge about the world is defeasible, or can be mistaken. Our perceptions and premises can never be certain, we are forced to jump to conclusions in the presence of incomplete information, and we have to cut our deliberations short when our environment closes in. For this reason, any theory of artificial intelligence requires at its heart a theory of default reasoning, the process of reaching plausible, but uncertain, conclusions; and a theory of belief revision, the process of retracting and adding certain beliefs as information becomes available.

In this thesis, we will address both of these problems from a logical point of view. We will provide a semantic account of these processes and develop conditional logics to represent and reason with default or normative statements, about normal or typical states of affairs, and statements of belief revision. The conditional logics will be based on standard modal systems, and the possible worlds approach will provide a uniform framework for the development of a number of such systems.

Within this framework, we will compare the two types of reasoning, determining that they are remarkably similar processes at a formal level of analysis. We will also show how a number of disparate types of reasoning may be analyzed within these modal systems, and to a large extent unified. These include normative default reasoning, probabilistic default reasoning, autoepistemic reasoning, belief revision, subjunctive, hypothetical or counterfactual reasoning, and abduction.

TR-92-02 Probabilistic Horn abduction and Bayesian networks, January 1992 David Poole, 57 pages

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates some assumptions about the rule base and some independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language, and shows a relationship between probability and argument-based systems.
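
As a toy illustration of the flavour of the framework (a minimal sketch only: the example rules, hypothesis names and numbers are invented, and the computation simply assumes the framework's independence and exclusivity assumptions hold):

    # Hypotheses (assumables) with prior probabilities; assumed mutually independent.
    prior = {"fire": 0.01, "tampering": 0.02}

    # Minimal explanations of a goal: sets of hypotheses that, together with the
    # Horn rules (say, alarm <- fire and alarm <- tampering), imply the goal.
    explanations_of_alarm = [frozenset({"fire"}), frozenset({"tampering"})]

    def explanation_prob(expl):
        """Probability of one explanation: the product of its hypotheses' priors."""
        p = 1.0
        for h in expl:
            p *= prior[h]
        return p

    # Under the framework's assumption that distinct explanations are mutually
    # exclusive, the probability of the goal is the sum over its explanations.
    p_alarm = sum(explanation_prob(e) for e in explanations_of_alarm)
    print(p_alarm)    # 0.03 in this toy example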

TR-92-03 Shallow Grates, January 1992 Maria M. Klawe, 7 pages

This note proves the existence of acyclic directed graphs of logarithmic depth, such that a superlinear number of input-output pairs remain connected after the removal of any sufficiently small linearly sized subset of the vertices. The technique can be used to prove the analogous, and asymptotically optimal, result for graphs of arbitrary depth, generalizing Schnitger's grate construction for graphs of large depth. Interest in this question relates to efforts to use graph theoretic methods to prove circuit complexity lower bounds for algebraic problems such as matrix multiplication. In particular, it establishes the optimality of Valiant's depth reduction technique as a method of reducing the number of connected input-output pairs. The proof uses Schnitger's grate construction, but also involves a lemma on expanding graphs which may be of independent interest.

TR-92-04 Two Algorithms for Decision Tree Search, February 1992 Runping Qi and David Poole, 26 pages

In this paper two algorithms for decision tree search are presented. The basic idea behind these algorithms is to make use of domain-dependent information, in the form of an evaluation function such as that used by AO*, along with a search mechanism similar to the alpha-beta technique for minimax trees. One of the advantages of our algorithms over AO* is that our algorithms need only linear space. The solution computed by the first algorithm can be either optimal or sub-optimal, depending on the admissibility of the evaluation function. The second algorithm is an approximate algorithm which allows a tradeoff between computational efficiency and solution quality. Some results are presented on the correctness of the algorithms and on the quality of the solutions computed by the algorithms.
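
For concreteness, a plain linear-space depth-first evaluation of a decision tree looks roughly as follows (a minimal sketch without the evaluation-function pruning the paper adds; the node representation is an assumption made for illustration):

    # A decision tree built from ("decision", [children]),
    # ("chance", [(prob, child), ...]) and ("leaf", utility) nodes.
    # The recursion visits the tree depth-first and so uses only linear space.

    def value(node):
        kind, data = node
        if kind == "leaf":
            return data                                        # utility of an outcome
        if kind == "chance":
            return sum(p * value(child) for p, child in data)  # expected utility
        if kind == "decision":
            return max(value(child) for child in data)         # best available choice
        raise ValueError("unknown node kind: %r" % kind)

    # toy usage: a sure 5 versus a gamble worth 0.5*10 + 0.5*0 = 5
    tree = ("decision", [("leaf", 5),
                         ("chance", [(0.5, ("leaf", 10)), (0.5, ("leaf", 0))])])
    print(value(tree))    # 5: both options have the same value here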

TR-92-05 Approximating Polygons and Subdivisions with Minimum-Link Paths, March 1992 Leonidas J. Guibas, John E. Hershberger, Joseph S. B. Mitchell and Jack Scott Snoeyink, 27 pages

We study several variations on one basic approach to the task of simplifying a plane polygon or subdivision: Fatten the given object and construct an approximation inside the fattened region. We investigate fattening by convolving the segments or vertices with disks and attempt to approximate objects with the minimum number of line segments, or with a number near the minimum, by using efficient greedy algorithms. We give some variants that have linear or $O(n \log n)$ algorithms for approximating polygonal chains of $n$ segments, and show that for subdivisions or chains with no self-intersections it is {\em NP}-hard to compute the best approximation.

TR-92-06 Cepstral Analysis of Optical Flow, November 1992 Esfandiar Bandari and James J. Little, 23 pages

Visual flow analysis from image sequences can be viewed as detection and retrieval of echoes or repeated patterns in two dimensional signals. In this paper we introduce a new methodology for optical flow analysis based on the cepstral filtering method. Cepstral filtering is a non-linear adaptive correlation technique used extensively in phoneme chunking and echo removal in speech understanding and signal processing. Different cepstral methodologies, in particular power cepstrum, are reviewed and more efficient variations for real image analysis are discussed.

Power cepstrum is extended to multiframe analysis. A correlative cepstral technique, cepsCorr, is developed; cepsCorr significantly increases the signal to noise ratio, virtually eliminates errors, and provides a predictive or multi-evidence approach to visual motion analysis.
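
For reference, the power cepstrum referred to above is conventionally defined as follows (standard definition; the report's normalization may differ). For a signal containing an echo, $f(t) = s(t) + a\,s(t-\tau)$, it exhibits peaks at multiples of the delay $\tau$; for optical flow, the analogous two-dimensional delay between frames is the displacement being sought:

$$
c_f(q) \;=\; \Bigl|\,\mathcal{F}^{-1}\bigl\{\,\log |\mathcal{F}\{f\}|^{2}\,\bigr\}\Bigr|^{2}.
$$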

TR-92-07 Speeding Up the Douglas-Peucker Line-Simplification Algorithm, April 1992 John Hershberger and Jack Snoeyink, 16 pages

We analyze the line simplification algorithm reported by Douglas and Peucker and show that its worst case is quadratic in $n$, the number of input points. Then we give an algorithm, based on path hulls, that uses the geometric structure of the problem to attain a worst-case running time proportional to $n \log_2 n$, which is the best case of the Douglas algorithm. We give complete C code and compare the two algorithms theoretically, by operation counts, and practically, by machine timings.
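
For orientation, the basic Douglas-Peucker recursion being analyzed is the following (a minimal Python sketch of the classical algorithm, not the path-hull variant developed in the paper):

    from math import hypot

    def line_distance(p, a, b):
        """Distance from p to the line through a and b (or to a if a == b)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return hypot(px - ax, py - ay)
        return abs(dy * (px - ax) - dx * (py - ay)) / hypot(dx, dy)

    def douglas_peucker(points, tol):
        """Simplify a polyline: keep the endpoints and recurse on the farthest point."""
        if len(points) <= 2:
            return list(points)
        a, b = points[0], points[-1]
        dists = [line_distance(p, a, b) for p in points[1:-1]]
        i = max(range(len(dists)), key=dists.__getitem__) + 1
        if dists[i - 1] <= tol:
            return [a, b]                 # every interior point is close enough
        left = douglas_peucker(points[:i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return left[:-1] + right          # avoid duplicating the split point

    # toy usage
    print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))

The quadratic worst case arises when each recursive call scans its whole subchain but removes only one point; roughly speaking, the path-hull structure of the paper speeds up the farthest-point computation so that the chain need not be rescanned at every level.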

TR-92-08 Symmetry in Self-Correcting Cellular Automata, April 1992 Nicholas Pippenger, 11 pages

We study a class of cellular automata that are capable of correcting finite configurations of errors within a finite amount of time. Subject to certain natural conditions, we determine the geometric symmetries such automata may possess. In three dimensions the answer is particularly simple: such an automaton may be invariant under all proper rotations that leave the underlying lattice invariant, but cannot be invariant under the inversion that takes each configuration into its mirror image.

TR-92-09 An Elementary Approach to Some Analytic Asymptotics, April 1992 Nicholas Pippenger, 20 pages

(Abstract not available on-line)

TR-92-10 Constraint Nets: A Semantic Model for Real-Time Embedded Systems, May 1992 Ying Zhang and Alan K. Mackworth, 16 pages

We take a real-time embedded system to be the control system of a plant in an open environment, where the control is realized by computation in digital or analog form. The key characteristic of a real-time embedded system is that computation is interleaved or in parallel with actions of the plant and events in the environment. Various models for real-time embedded systems have been proposed in recent years, most of which are extensions of existing concurrency models with delays or time bounds on transitions. In this paper, we present a different approach to modeling real-time systems. We take the overall system as a dynamic system, in which time or event structures are considered as an intrinsic dimension. Our model, called the Constraint Net model (CN), is capable of expressing dynamic behaviors in real-time embedded systems. It captures the most general structure of dynamic systems so that systems with discrete as well as dense time and asynchronous as well as synchronous event structures can be modeled in a unified framework. It models the dynamics of the environment as well as the dynamics of the plant and the dynamics of the computation and control. It provides multiple levels of abstraction so that a system can be modeled and developed hierarchically. By explicitly representing locality, CN can be used to explore true concurrency in distributed systems. With its rigorous formalization, CN provides a programming semantics for the design of real-time embedded systems. It also serves as a foundation for specification, verification, analysis and simulation of the complete dynamic system.

TR-92-11 Robust Model-based Motion Tracking Through the Integration of Search and Estimation, May 1992 David G. Lowe, 15 pages

A computer vision system has been developed for real-time motion tracking of 3-D objects, including those with variable internal parameters. This system provides for the integrated treatment of matching and measurement errors that arise during motion tracking. These two sources of error have very different distributions and are best handled by separate computational mechanisms. These errors can be treated in an integrated way by using the computation of variance in predicted feature measurements to determine the probability of correctness for each potential matching feature. In return, a best-first search procedure uses these probabilities to find consistent sets of matches, which eliminates the need to treat outliers during the analysis of measurement errors. The most reliable initial matches are used to reduce the parameter variance on further iterations, minimizing the amount of search required for matching more ambiguous features. These methods allow for much larger frame-to-frame motions than most previous approaches. The resulting system can robustly track models with many degrees of freedom while running on relatively inexpensive hardware. These same techniques can be used to speed verification during model-based recognition.

TR-92-12 A Correct Optimized Algorithm for Incrementally Generating Prime Implicates, November 1992 Alex Kean and George Tsiknis, 15 pages

In response to the demands of some applications, we had developed an algorithm, called IPIA, to incrementally generate the prime implicates/implicants of a set of clauses. In an attempt to improve IPIA, some optimizations were also presented. It was pointed out to us that some of these optimizations, namely {\em subsumption} and {\em history restriction}, are in conflict. Subsumption is a necessary operation to guarantee the primeness of implicates/implicants, and {\em history restriction} is a scheme that exploits the history of the consensus operation to avoid generating non-prime implicants/implicates. The original IPIA, where {\em history restriction} was not considered, was proven correct. However, when {\em history restriction} was introduced later in the optimized version, it interacted with the subsumption operation to produce an incomplete set of prime implicants/implicates. This paper explains the problem in more detail, proposes a solution and provides a proof of its correctness.
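
For background, the prime implicates of a clause set can be generated by a brute-force fixpoint of consensus (resolution) and subsumption, as sketched below; this is only a baseline illustration of the two operations the abstract refers to, not the incremental IPIA algorithm itself. Clauses are represented as frozensets of integer literals, with negation encoded by sign:

    def resolvents(c1, c2):
        """All consensus (resolution) results of two clauses, skipping tautologies."""
        out = []
        for lit in c1:
            if -lit in c2:
                r = (c1 - {lit}) | (c2 - {-lit})
                if not any(-l in r for l in r):    # drop tautologous resolvents
                    out.append(frozenset(r))
        return out

    def remove_subsumed(clauses):
        """Keep only the subsumption-minimal clauses."""
        return {c for c in clauses if not any(d < c for d in clauses)}

    def prime_implicates(clauses):
        """Fixpoint of consensus plus subsumption; the survivors are the prime implicates."""
        current = remove_subsumed({frozenset(c) for c in clauses})
        while True:
            new = {r for c1 in current for c2 in current if c1 != c2
                   for r in resolvents(c1, c2)}
            merged = remove_subsumed(current | new)
            if merged == current:
                return current
            current = merged

    # toy usage: {a or b, not-b or c} has prime implicates {a or b, not-b or c, a or c}
    print(prime_implicates([{1, 2}, {-2, 3}]))

The conflict discussed in the abstract concerns the interaction of exactly these two ingredients, subsumption deletions and history-based pruning of consensus steps; the brute-force version above applies subsumption freely and keeps no history.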

TR-92-13 An Introduction to Formal Hardware Verification, June 1992 Carl-Johan Seger, 27 pages

Formal hardware verification has recently attracted considerable interest. The need for ``correct'' designs in safety-critical applications and the major cost associated with products delivered late are two of the main factors behind this. In addition, as the complexity of the designs increases, an ever smaller percentage of the possible behaviors of the designs will be simulated. Hence, the confidence in the designs obtained by simulation is rapidly diminishing. This paper provides an introduction to the topic by describing three of the main approaches to formal hardware verification: theorem-proving, model checking and symbolic simulation. We outline the underlying theory behind each approach, we illustrate the approaches by applying them to simple examples and we discuss their strengths and weaknesses. We conclude the paper by describing current on-going work on combining the approaches to achieve multi-level verification.

TR-92-14 Rearrangeable Circuit-Switching Networks, June 1992 Nicholas Pippenger, 11 pages

We present simple proofs of the basic results concerning the complexity of rearrangeable connectors and superconcentrators. We also define several types of networks whose connectivity properties interpolate between these extremes, and show that their complexities also interpolate.

TR-92-15 The Raven System, August 1992 Donald Acton, Terry Coatta and Gerald Neufeld, 43 pages

This report describes the distributed object-oriented system, Raven. Raven is both a distributed operating system and a programming language. The language will be familiar to C programmers in that it has many constructs similar to those of the C programming language. Raven uses a simple, uniform object model in which all entities, at least conceptually, are objects. The object model supports classes for implementation inheritance and type checking. Both static and dynamic typing as well as static and dynamic method binding are supported. Object behavior is defined by the class of the object. The language is compiled (rather than interpreted) and supports several features that improve performance. Support also exists for parallel and distributed computing as well as persistent data. Raven is designed specifically for high-performance parallel and distributed applications.

Note:

The Raven System has undergone many changes since this report was published. The report accurately reflects the basic operating principles of Raven, but the syntactic details of Raven are no longer the same.

TR-92-16 The Parallel Protocol Framework, August 1992 Murray W. Goldberg, Gerald W. Neufeld and Mabo R. Ito, 48 pages

(Abstract not available on-line)

TR-92-17 Stabilization of DAEs and invariant manifolds, August 1992 Uri M. Ascher, Hongsheng Qin and Sebastian Reich, 28 pages

Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice.

Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible.

We discuss the best methods thus obtained and make concrete algorithmic suggestions.
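
For reference, the Baumgarte stabilization discussed above replaces the position constraint $g(q) = 0$ of the index-3 problem by a damped combination of the constraint and its derivatives (standard formulation; parameter conventions vary):

$$
\frac{d^{2}}{dt^{2}}\,g(q) \;+\; 2\alpha\,\frac{d}{dt}\,g(q) \;+\; \beta^{2}\,g(q) \;=\; 0, \qquad \alpha,\beta > 0,
$$

so that the constraint residual decays along exact solutions; the practical difficulty addressed in the paper is that good choices of $\alpha$ and $\beta$ are problem- and discretization-dependent.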

TR-92-18 Collocation Software for Boundary Value Differential - Algebraic Equations, December 1992 Uri M. Ascher and Raymond J. Spiteri, 20 pages

We describe the methods and implementation of a general-purpose code, COLDAE. This code can solve boundary value problems for nonlinear systems of semi-explicit differential-algebraic equations (DAEs) of index at most 2. Fully implicit index-1 boundary value DAE problems can be handled as well.

The code COLDAE is an extension of the package COLNEW (COLSYS) for solving boundary value ODEs. The implemented method is piecewise polynomial collocation at Gaussian points, extended as needed by the projection method of Ascher-Petzold. For general semi-explicit index-2 problems, as well as for fully implicit index-1 problems, we define a {\em selective projected collocation} method, and demonstrate its use. The mesh selection procedure of COLSYS is modified for the case of index-2 constraints. We also discuss shooting for initial guesses.

The power and generality of the code are demonstrated by examples.
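
For context, the semi-explicit boundary value DAEs handled by the code have the general form (standard notation, not necessarily that of the report)

$$
x' = f(x, y, t), \qquad 0 = g(x, y, t),
$$

subject to boundary conditions; the problem has index 1 when $g_y$ is nonsingular, and it is of (Hessenberg) index 2 when $g$ does not depend on $y$ and $g_x f_y$ is nonsingular, the case for which the projected collocation of Ascher-Petzold is applied.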

TR-92-19 The Numerical Solution of Delay-Differential-Algebraic Equations of Retarded and Neutral Type, December 1992 Uri M. Ascher and Linda R. Petzold, 29 pages

In this paper we consider the numerical solution of initial value delay-differential-algebraic equations (DDAEs) of retarded and neutral types, with a structure corresponding to that of Hessenberg DAEs. We give conditions under which the DDAE is well-conditioned, and show how the DDAE is related to an underlying retarded or neutral delay-ODE (DODE). We present convergence results for linear multistep and Runge-Kutta methods applied to DDAEs of index 1 and 2, and show how higher-index Hessenberg DDAEs can be formulated in a stable way as index-2 Hessenberg DDAEs.

TR-92-20 Probabilistic Horn abduction and Bayesian networks, August 1992 David Poole, 61 pages

This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.

TR-92-23 Sequences of Revisions: On the Semantics of Nested Conditionals, September 1992 Craig Boutilier, 63 pages

The truth conditions for conditional sentences have been well-studied, but few compelling attempts have been made to define means of evaluating iterated or nested conditionals. We start with a semantic account of subjunctive conditionals based on the AGM model of revision, and extend this model in a natural fashion to account for right-nesting of conditionals, describing a process called ``natural revision''. These sentences capture sequences of propositional revisions of a knowledge base. We examine the properties of this model, demonstrating that the maximum amount of conditional information in a belief set is preserved after revision. Furthermore, we show how any sequence of revisions can be reduced to natural revision by a single sentence. This demonstrates that any arbitrarily nested sentence is equivalent to a sentence without nesting of the conditional connective. We show cases where revision models, even after the processing of an arbitrary sequence of revisions, can be described purely propositionally, and often in a manner that permits tractable inference. We also examine a form of revision known as ``paranoid revision'' which appears to be the simplest form of belief revision that fits within the AGM framework, and captures semantically the notion of full meet revision.

TR-92-24 Search for computing posterior probabilities in Bayesian networks, September 1992 David Poole, 35 pages

This paper provides a general purpose search-based technique for computing posterior probabilities in arbitrary discrete Bayesian Networks. This is an ``anytime'' algorithm, that at any stage can estimate prior and posterior probabilities with a known error bound. It is shown how well it works for systems that have normality conditions that dominate the probabilities, as is the case in many diagnostic situations where we are diagnosing systems that work most of the time, and for commonsense reasoning tasks where normality assumptions (allegedly) dominate. We give a characterisation of those cases where it works well, and discuss how well it can be expected to work on average. Finally we discuss a range of implementations, and explain why some promising approaches do not work as well as might be expected.
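
A generic way to realize the ``anytime with a known error bound'' idea is to enumerate complete assignments of the network in best-first order of their prior probability and to track how much probability mass has been seen (a sketch only, not Poole's actual algorithm; the network is assumed to be given as variables in topological order together with a function returning $P(\mathrm{var}{=}\mathrm{val} \mid \mathrm{parents})$):

    import heapq
    from itertools import count

    def bound_evidence(variables, domains, cond_prob, evidence, budget):
        """Best-first enumeration of complete assignments by prior probability.
        Returns (lower, upper) bounds on P(evidence) after `budget` queue pops."""
        tie = count()
        frontier = [(-1.0, next(tie), {})]   # (-prefix probability, tiebreak, partial assignment)
        lower, enumerated = 0.0, 0.0
        steps = 0
        while frontier and steps < budget:
            steps += 1
            neg_p, _, assign = heapq.heappop(frontier)
            p = -neg_p
            if len(assign) == len(variables):            # complete assignment
                enumerated += p
                if all(assign[v] == val for v, val in evidence.items()):
                    lower += p
                continue
            var = variables[len(assign)]                 # next variable, topological order
            for val in domains[var]:
                child = dict(assign)
                child[var] = val
                heapq.heappush(frontier,
                               (-(p * cond_prob(var, val, assign)), next(tie), child))
        # the assignments not yet enumerated carry at most (1 - enumerated) mass
        return lower, lower + (1.0 - enumerated)

Bounds on a posterior $P(h \mid e)$ then follow from bounds on $P(h, e)$ and $P(e)$ obtained in the same way.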

TR-92-25 The Rapid Recovery of Three-Dimensional Orientation from Line Drawings, September 1992 R. A. Rensink, 188 pages

A computational theory is developed that explains how line drawings of polyhedral objects can be interpreted rapidly and in parallel at early levels of human vision. The key idea is that a time-limited process can correctly recover much of the three-dimensional structure of these objects when split into concurrent streams, each concerned with a single aspect of scene structure.

The work proceeds in five stages. The first extends the framework of Marr to allow a process to be analyzed in terms of resource limitations. Two main concerns are identified: (i) reducing the amount of nonlocal information needed, and (ii) making effective use of whatever information is obtained. The second stage traces the difficulty of line interpretation to a small set of constraints. When these are removed, the remaining constraints can be grouped into several relatively independent sets. It is shown that each set can be rapidly solved by a separate processing stream, and that co-ordinating these streams can yield a low-complexity ``approximation'' that captures much of the structure of the original constraints. In particular, complete recovery is possible in logarithmic time when objects have rectangular corners and the scene-to-image projection is orthographic. The third stage is concerned with making good use of the available information when a fixed time limit exists. This limit is motivated by the need to obtain results within a time independent of image content, and by the need to limit the propagation of inconsistencies. A minimal architecture is assumed, viz., a spatiotopic mesh of simple processors. Constraints are developed to guide the course of the process itself, so that candidate interpretations are considered in order of their likelihood. The fourth stage provides a specific algorithm for the recovery process, showing how it can be implemented on a cellular automaton. Finally, the theory itself is tested on various line drawings. It is shown that much of the three-dimensional structure of a polyhedral scene can indeed be recovered in very little time. It also is shown that the theory can explain the rapid interpretation of line drawings at early levels of human vision.

TR-92-26 The Complexity of Constraint Satisfaction Revisited, September 1992 Alan K. Mackworth and Eugene C. Freuder, 5 pages

This paper is a retrospective account of some of the developments leading up to, and ensuing from, the analysis of the complexity of some polynomial network consistency algorithms for constraint satisfaction problems.

TR-92-27 Starshaped Sets, The Radial Function and 3-D Attitude Determination, October 1992 Ying Li and Robert J. Woodham, 18 pages

Attitude characterizes the three rotational degrees of freedom between the coordinate system of a known object and that of a viewer. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. The domain of the representation is the entire sphere. Imaging from a single viewpoint typically determines a hemisphere of the representation. Matching the visible region to the full spherical model for a known object estimates 3-D attitude.

The radial function is used to define a new orientation-based representation of shape. The radial function is well-defined for a class of sets called {\em starshaped} in mathematics. A starshaped set contains at least one interior point from which all boundary points are visible. The radial function records the distance from the origin of the coordinate system to each boundary point. The novel contribution of this paper is to extend previous mathematical results on the matching problem for convex objects to starshaped objects. These results then allow one to transform the attitude determination problem for starshaped sets into an optimization problem for which standard numerical solutions exist. Numerical optimization determines the 3-D rotation that brings a sensed surface into correspondence with a known model.

The required surface data can be obtained, for example, from laser range finding or from shape-from-shading. A proof-of-concept system has been implemented and experiments conducted on real objects using surface data derived from photometric stereo.
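
For reference, the radial function and the resulting matching problem have roughly the following form (standard definitions; the report's exact objective may differ). For a starshaped set $K$ whose origin is chosen as a point from which the whole boundary is visible, and for unit vectors $u$,

$$
\rho_K(u) \;=\; \max\{\lambda \ge 0 : \lambda u \in K\},
\qquad
\min_{R \in SO(3)} \int_{\Omega} \bigl(\rho_{\mathrm{sensed}}(u) - \rho_{\mathrm{model}}(Ru)\bigr)^{2}\,du,
$$

where $\Omega$ is the visible (roughly hemispherical) portion of the unit sphere; the minimizing rotation $R$ is the estimated 3-D attitude.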

TR-92-28 The Psychology of Visualization, November 1992 Andrew Csinger, 27 pages

This document is a review of the literature of three related areas: psychophysical vision research, automatic display generation, and multi-dimensional data visualization. Common threads are explored, and a model of the visualization process is proposed which integrates aspects of these three areas.

In the review of psychophysics, attempts to find a set of primitive perceptual channels are explored. In the literature on automatic generation and visualization, attempts to employ these psychophysical findings are investigated. Finally, the proposed model is a framework which might facilitate this kind of cooperation.

TR-92-29 A Proposed Framework for Characterization of Robotic Systems, November 1992 Jane Mulligan, 14 pages

The plethora of approaches to planning and action in robotic systems calls for a unified framework onto which we can map various systems and compare their structure and performance. We propose such a framework, which allows us to examine these systems in a uniform way and discuss their adequacy for robotic problems. The inclusion of an environment specification in problem specifications is proposed as a means of clarifying a robot's abilities and creating a notion of context. Robotic systems are described as a set of {\em $<$information-source; computation$>$} strategies combined with a set of actuators.

TR-92-30 Parallel and Distributed Finite Constraint Satisfaction: Complexity, Algorithms and Experiments, November 1992 Ying Zhang and Alan K. Mackworth, 36 pages

This paper explores the parallel complexity of finite constraint satisfaction problems (FCSPs) by developing three algorithms for deriving minimal constraint networks in parallel. The first is a parallel algorithm for the EREW PRAM model, the second is a distributed algorithm for fine-grain interconnected networks, and the third is a distributed algorithm for coarse-grain interconnected networks. Our major results are: given an FCSP represented by an acyclic constraint network (or a join tree) of size $n$ with treewidth bounded by a constant, then (1) the parallel algorithm takes $O(\log n)$ time using $O(n)$ processors, (2) there is an equivalent network, of size $\mathrm{poly}(n)$ with treewidth also bounded by a constant, which can be solved by the fine-grain distributed algorithm in $O(\log n)$ time using $\mathrm{poly}(n)$ processors and (3) the distributed algorithm for coarse-grain interconnected networks has linear speedup and linear scaleup. In addition, we have simulated the fine-grain distributed algorithm based on the logical time assumption, experimented with the coarse-grain distributed algorithm on a network of transputers, and evaluated the results against the theory.

TR-92-31 Will the Robot Do the Right Thing?, November 1992 Ying Zhang and Alan K. Mackworth, 20 pages

Constraint Nets have been developed as an algebraic on-line computational model of robotic systems. A robotic system consists of a robot and its environment. A robot consists of a plant and a controller. A constraint net is used to model the dynamics of each component and the complete system. The overall behavior of the system emerges from the coupling of each of its components. The question posed in the title is decomposed into two questions: first, what is the right thing? second, how does one guarantee the robot will do it? We answer these questions by establishing a formal approach to the specification and verification of robotic behaviors. In particular, we develop a real-time temporal logic for the specification of behaviors and a new verification method, based on timed $\forall$-automata, for showing that the constraint net model of a robotic system satisfies the specification of a desired global behavior of the system. Since the constraint net model of the controller can also serve as the on-line controller of the real plant, this is a practical way of building well-behaved robots. Running examples of a coordinator for a two-handed robot performing an assembly task and a reactive maze traveler illustrate the approach.

TR-92-32 The Support Function, Curvature Functions and 3-D Attitude, November 1992 Ying Li and Robert J. Woodham

Attitude determination finds the rotation between the coordinate system of a known object and that of a sensed portion of its surface. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. They are useful in attitude determination because they rotate in the same way as the object rotates. Three such representations are defined using, respectively, the support function and the first and second curvature functions. The curvature representations are unique for smooth, strictly convex objects. The support function representation is unique for any convex object.

The essential mathematical basis for these representations is provided. The paper extends previous results on convex polyhedra to the domain of smooth, strictly convex surfaces. Combinations of the support function of a known object with curvature measurements from a visible surface transform attitude determination into an optimization problem for which standard numerical solutions exist.

Dense measurements of surface curvature are required. Surface data can be obtained from laser range finding or from shape-from-shading methods, including photometric stereo. A proof-of-concept system has been implemented and experiments conducted on a real object using surface orientation and surface curvature data obtained directly from photometric stereo.
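
For reference, the support function referred to above is the standard convex-geometry notion

$$
h_K(u) \;=\; \max_{x \in K} \langle x, u \rangle, \qquad u \in S^{2},
$$

and it rotates with the object, $h_{RK}(u) = h_K(R^{T}u)$, which is what makes orientation-based representations of this kind convenient for attitude determination.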

TR-92-34 A Mathematically Precise Two-Level Formal Hardware Verification Methodology*, December 1992 Carl-Johan H. Seger and Jeffrey J. Joyce, 34 pages

Theorem-proving and symbolic trajectory evaluation are both described as methods for the {\em formal verification of hardware}. They are both used to achieve a common goal -- correctly designed hardware -- and both are intended to be an alternative to conventional methods based on non-exhaustive simulation. However, they have different strengths and weaknesses. The main significance of this paper is the description of a two-level approach to formal hardware verification, in which the HOL theorem prover is combined with the Voss verification system. From symbolic trajectory evaluation we inherit a high degree of automation and accurate models of circuit behavior and timing. From interactive theorem-proving we gain access to powerful mathematical tools such as induction and abstraction. The interface between HOL and Voss is, however, more than just an ad hoc translation of verification results obtained by one tool into input for the other tool. We have developed a ``mathematical'' interface in which the results of the Voss system are embedded in HOL. We have also prototyped a hybrid tool and used this tool to obtain verification results that could not be easily obtained with previously published techniques.

TR-92-33 Bringing Mathematical Research to Life in the Schools, November 1992 Maria M. Klawe, 16 pages

(Abstract not available on-line)

TR-92-35 Solving the Classic Radiosity Equation Using Multigrid Techniques, February 4, 1992 Robert R. Lewis, 8 pages

We investigate the application of multigrid techniques to the solution of the ``classic'' radiosity equation. After overviews of the global illumination problem and of radiosity, we describe the latter's solution via multigrid methods. An implementation of the multigrid algorithm presented here is able to solve the classic radiosity equation in about 50% of the time required by the more commonly used Gauss-Seidel approach. Although few researchers currently use classic radiosity, we discuss possibilities for the adaptation of multigrid methods to more recent radiosity solution techniques.
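
For reference, the classic radiosity equation being solved is (in its standard discrete form)

$$
B_i \;=\; E_i \;+\; \rho_i \sum_{j=1}^{N} F_{ij}\,B_j, \qquad i = 1, \dots, N,
$$

i.e. a linear system $(I - \mathrm{diag}(\rho)\,F)\,B = E$ in the patch radiosities $B_i$, with emissions $E_i$, reflectances $\rho_i$ and form factors $F_{ij}$; Gauss-Seidel and multigrid are simply two different iterative solvers for this system.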

TR-92-36 Multi-Resolution Surface Approximation for Animation, October 30, 1992 David Forsey and LiFeng Wang, 11 pages

This paper considers the problem of approximating a digitized surface in $R^3$ with a hierarchical bicubic B-spline to produce a manipulatable surface for further modeling or animation. The 3D data, originally multiple rows of cylindrical scans in $R^3$, are mapped into the parametric domain of the B-spline (also in $R^3$) using a modified chord-length parameterization. This mapping is used to produce a gridded sampling of the surface, and a modified full multi-grid (FMG) technique is employed to obtain a high-resolution B-spline approximation. The intermediate results of the FMG calculations generate the component overlays of a hierarchical spline surface reconstruction. Storage requirements of the hierarchical representation are reduced by eliminating offsets wherever their removal will not increase the error in the approximation by more than a given amount. The resulting hierarchical spline surface is interactively modifiable (modulo the size of the dataset and computing power) using the editing capabilities of the hierarchical surface representation, allowing either local or global changes to surface shape while retaining details of the scanned data.

TR-92-37 A Ray Tracing Accelerator Based on a Hierarchy of 1D Sorted Lists, October 30, 1992 Alain Fournier and Pierre Poulin, 9 pages

Since the introduction of ray tracing as a rendering technique, several approaches have been proposed to reduce the number of ray/object tests. This paper presents yet another such approach based on a hierarchy of 1D sorted lists. A bounding box aligned with the axes encloses an object. The coordinates of each bounding box are ordered in three sorted lists (one for each axis) and are treated as events. Traversing a scene with a ray consists of traversing each sorted list in order, intersecting an object only when for this object a first event has been encountered (entered) in every dimension before a second event has been encountered (exited) in any dimension. To reduce the number of events (entries and exits) traversed, a hierarchy of sorted lists is constructed from a hierarchy of bounding boxes. The results are favourable for scenes ranging from moderate to high complexity. Further applications of the technique to hardware assist for ray tracing and to collision detection are discussed.

TR-92-38 Common Illumination between Real and Computer Generated Scenes, October 30, 1992 Alain Fournier, Atjeng S. Gunawan and Chris Romanzin, 9 pages

The ability to merge a real video image (RVI) with a computer-generated image (CGI) enhances the usefulness of both. To go beyond ``cut and paste'' and chroma-keying, and merge the two images successfully, one must solve the problems of common viewing parameters, common visibility and common illumination. The results can be dubbed Computer Augmented Reality (CAR). We present in this paper techniques for approximating the common global illumination for RVIs and CGIs, assuming some elements of the scene geometry of the real world and common viewing parameters are known. Since the real image is a projection of the exact solution for the global illumination in the real world (done by nature), we approximate the global illumination of the merged image by making the RVI part of the solution to the common global illumination computation. The objects in the real scene are replaced by a few boxes covering them; the image intensity of the RVI is used as the initial surface radiosity of the visible part of the boxes; the surface reflectance of the boxes is approximated by subtracting an estimate of the illuminant intensity based on the concept of ambient light; finally, global illumination using a classic radiosity computation is used to render the surfaces of the CGIs with respect to their new environment and to calculate the amount of image intensity correction needed for surfaces of the real image. An example animation testing these techniques has been produced. Most of the geometric problems have been solved in a relatively ad hoc manner. The viewing parameters were extracted by interactively matching the synthetic scene with the RVIs. The visibility is determined by the relative position of the ``blocks'' representing the real objects and the computer generated objects, and a moving computer generated light has been inserted. The results of the merging are encouraging, and would be effective for many applications.

TR-92-39 Harnessing Preattentive Processes for Multivariate Data Visualization, October 30, 1992 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 11 pages

A new method for designing multivariate data visualization tools is presented. These tools allow users to perform simple tasks such as estimation, target detection, and detection of data boundaries rapidly and accurately. Our design technique is based on principles arising from an area of cognitive psychology called preattentive processing. Preattentive processing involves visual features that can be detected by the human visual system without focusing attention on particular regions in an image. Examples of preattentive features include colour, orientation, intensity, size, shape, curvature, and line length. Detection is performed very rapidly by the visual system, almost certainly using a large degree of parallelism. We studied two known preattentive features, hue and orientation. The particular question investigated is whether rapid and accurate estimation is possible using these preattentive features. Experiments that simulated displays using our preattentive visualization tool were run. Analysis of the results of the experiments showed that rapid and accurate estimation is possible with both hue and orientation. A second question, whether interaction occurs between the two features, was answered negatively. This suggests that these and perhaps other preattentive features can be used to create visualization tools which allow high-speed multivariate data analysis.

TR-92-40 Investigating the Effectiveness of Direct Manipulation of 3D B-Spline Curves Using the Shape-Matching Paradigm, October 30, 1992 Stanley Jang, Kellogg S. Booth, David R. Forsey and Peter Graf, 10 pages

There are several mathematical formulations for curves. These formulations are found in a variety of applications, including interactive curve design. Previous research has shown that the B-spline is an effective formulation for this setting. However, a possible drawback for the novice user in using the B-spline is the fact that control vertices may lie far away from the curve, making its manipulation unintuitive. This problem is compounded in three dimensions. A direct manipulation technique, allowing a curve to be manipulated with points that lie on the curve itself, offers an alternative to control vertex manipulation. An experiment was conducted to compare the interactive design of 3D curves using control vertex manipulation of B-spline curves and a particular type of direct manipulation of B-spline curves. The results of the experiment revealed that direct manipulation was significantly faster than control vertex manipulation, without sacrificing accuracy in the shape of the final 3D curve. A general testbed designed for this investigation and related studies of 3D interaction techniques was used to conduct the experiment.
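
For background, one standard form of direct manipulation of a B-spline curve (not necessarily the exact scheme evaluated in the report) moves a selected curve point $C(t_0) = \sum_i B_i(t_0)\,V_i$ by a displacement $\Delta$ using the minimum-norm update of the control vertices

$$
\Delta V_i \;=\; \frac{B_i(t_0)}{\sum_j B_j(t_0)^{2}}\;\Delta,
$$

so that $\sum_i B_i(t_0)\,\Delta V_i = \Delta$ and the curve passes through the dragged point, while the control vertices move as little as possible in the least-squares sense.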

TR-92-41 Filtering Normal Maps and Creating Multiple Surfaces, October 30, 1992 Alain Fournier, 14 pages

``Bump'' mapping is a variant of texture mapping where the texture information is used to alter the surface normal. Current techniques to pre-filter textures all rely on the fact that the texture information can be linearly ``factored out'' of the shading equation, and therefore can be pre-averaged in some way. This is not the case with bump maps, and those techniques fail to filter them correctly. We propose here a technique to pre-filter bump maps by building a pyramid where each level stores distributions of normal vectors reconstructed from the distribution given by the underlying bump map. The distributions are represented as sums of a small number of Phong-like spreads of normal vectors. The technique, besides allowing an effective and smooth transition between a bump map and a single surface description, gives rise to the concept of a multiple surface, where each point of the single surface is characterized by more than one normal vector. This allows the description of visually complex surfaces by a trivial modification of current local illumination models. When a surface has an underlying microstructure, masking and self-shadowing are important factors in its appearance. Along with the filtering of normals we include the filtering of the masking and self-shadowing information. This is accomplished by computing the limiting angles of visibility and their variance along the two texture axes for the reconstructed distribution of normals. These techniques allow the modeling of any surface whose microstructure we can model geometrically. This includes complex but familiar surfaces such as anisotropic surfaces, many kinds of woven cloth, and stochastic surfaces.
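
A simplified, single-lobe sketch of the filtering step described above, assuming an (H, W, 3) numpy array of unit normals; the paper keeps sums of several Phong-like lobes, so this stand-in only illustrates how averaging normals shortens the mean vector and widens the spread.

    import numpy as np

    def filter_normals_one_level(normals):
        # One pyramid level built from an (H, W, 3) array of unit normals:
        # each 2x2 block is reduced to a single lobe, i.e. a mean direction
        # plus a Phong-like spread exponent derived from how much the mean
        # vector shortens when the normals disagree.
        h, w, _ = normals.shape
        blocks = normals.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
        length = np.linalg.norm(blocks, axis=-1, keepdims=True)  # < 1 when normals disagree
        mean_dir = blocks / np.maximum(length, 1e-6)
        # Heuristic mapping: shorter mean vector -> wider spread -> smaller exponent.
        spread_exponent = length[..., 0] / np.maximum(1.0 - length[..., 0], 1e-6)
        return mean_dir, spread_exponent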

TR-92-48 Spline Overlay Surfaces, October 30, 1993 Richard H. Bartels and David R. Forsey, 9 pages

We consider the construction of spline features on spline surfaces. The approach taken is a generalization of the hierarchical surface introduced in [Forsey88]. Features are regarded as spline-defined vector displacement fields that are overlain on existing surfaces. No assumption is made that the overlays are derived from the base surface. They may be applied with any orientation in a non-hierarchical fashion. In particular, we present a ``cheap'' version of the concept in which the displacement field is mapped to the base surface approximately, through the mapping of its control vectors alone. The result is a feature that occupies the appropriate position in space with respect to the base surface. It may be manipulated and rendered as an independent spline, thus avoiding the costs of a true displacement mapping. This approach is useful for prototyping and previewing during design. When a finished product is desired, of course, true displacement mapping is employed.

TR-93-01 Multi-evidential Correlation & Visual Echo Analysis, January 1993 Esfandiar Bandari and James J. Little, 22 pages

Visual motion, stereo, texture, and symmetric boundaries are all repetitions of similar patterns in time or space. These repetitions can be viewed as ``echoes'' of one another, and the measurement of disparities, segmentation of textons, or detection of boundary symmetries translates into detection of echo arrival periods.

Cepstral filtering, polyspectral techniques, and waveform analysis are among the techniques used successfully for echo detection. This paper examines the application of cepstral analysis to computational vision, introduces improvements to the traditional methods, and provides a comparison with other routines presently used.

Finally, we introduce a general multi-evidential correlation approach which lends itself to several computational routines. CepsCorr, as we call it, is a simple general technique that can accept different matching routines, such as cepstrum and/or phase correlation, as its measurement kernel. The evidence provided by each iteration of cepsCorr can then be combined to provide a more accurate estimate of motion or binocular disparity.
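
To make the echo-detection idea concrete, a minimal power-cepstrum sketch (the standard definition, using numpy; the signal and delay value are illustrative only and not taken from the report):

    import numpy as np

    def power_cepstrum(signal):
        # Power cepstrum: inverse FFT of the log power spectrum. An echo with
        # delay d shows up as a peak near quefrency index d, which is the
        # property exploited for disparity and motion estimation.
        spectrum = np.fft.fft(signal)
        log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)
        return np.real(np.fft.ifft(log_power))

    # Toy example: a random signal plus a delayed, attenuated echo of itself.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(256)
    delay = 40
    echoed = x.copy()
    echoed[delay:] += 0.6 * x[:-delay]
    ceps = power_cepstrum(echoed)
    print(int(np.argmax(ceps[5:128])) + 5)   # expected to land at or near `delay`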

TR-93-02 Free Speech, Pornography, Sexual Harassment, and Electronic Networks, February 1993 Richard Rosenberg, 38 pages

Linking most universities and many companies around the world is a vast computer network called the Internet. More than 7 million people, at 1.2 million attached hosts, in 117 countries are able to receive and send messages to about 4,000 newsgroups, representing the diverse interests of its users, as they are usually called. Some of these newsgroups deal with technical computer issues, some are frivolous, and some carry obscene or pornographic material. For purposes of this essay, it will be assumed that by most standards the postings of concern, consisting of stories with themes of bestiality, bondage, and incest, and encrypted pictures with scenes of nude women and men, and even children, are pornographic and offensive to many people. The issue under discussion is what to do about such offensive material. Issues of free speech, censorship, and sexual harassment arise. These as well as many others are explored and recommendations are made.

TR-93-03 Automatic Synthesis of Sequential Synchronizations, March 1993 Zheng Zhu and Steven D. Johnson, 17 pages

(Abstract not available on-line)

TR-93-04 Design and Analysis of Embedded Real-Time Systems: An Elevator Case Study, February 1993 Yang Zhang and Alan K. Mackworth, 44 pages

Constraint Nets have been developed as an algebraic on-line computational model of robotic systems. Timed forall-automata have been studied as a logical specification language of real-time behaviors. A general verification method has been proposed for checking whether a constraint net model satisfies a timed forall-automaton specification. In this paper, we illustrate constraint net modeling and timed forall-automata analysis using an elevator example. We start with a description of the functions and user interfaces of a simple elevator system, and then model the complete system in Constraint Nets. The analysis of a well-designed elevator system should guarantee that any request will be served within some bounded time. We specify such requirements in timed forall-automata, and show that the constraint net model of the elevator system satisfies the timed forall-automaton specification.

TR-93-05 On Seeing Robots, March 1993 Alan K. Mackworth, 13 pages

Good Old Fashioned Artificial Intelligence and Robotics (GOFAIR) relies on a set of restrictive Omniscient Fortune Teller Assumptions about the agent, the world and their relationship. The emerging Situated Agent paradigm is challenging GOFAIR by grounding the agent in space and time, relaxing some of those assumptions, proposing new architectures and integrating perception, reasoning and action in behavioral modules. GOFAIR is typically forced to adopt a hybrid architecture for integrating signal-based and symbol-based approaches because of the inherent mismatch between the corresponding on-line and off-line computational models. It is argued that Situated Agents should be designed using a unitary on-line computational model. The Constraint Net model of Zhang and Mackworth satisfies that requirement. Two systems for situated perception built in our laboratory are described to illustrate the new approach: one for visual monitoring of a robot's arm, the other for real-time visual control of multiple robots competing and cooperating in a dynamic world.

TR-93-06 A Computational Theory of Decision Networks, March 1993 Nevin Lianwen Zhang, Runping Qi and David Poole, 47 pages

Decision trees (Raiffa 1968) are the first paradigm where an agent can deal with multiple decisions. The non-forgetting influence diagram formalism (Howard and Matheson 1983, Shachter 1986) improves on decision trees by exploiting random variables' independencies of decision variables and other random variables. In this paper, we introduce a notion of decision networks that further explores decision variables' independencies of random variables and other decision variables. We also drop the semantic constraints of a total ordering of decisions and of a single value node. Only the fundamental constraint of acyclicity is kept.

From a computational point of view, it is desirable if a decision network is stepwise-solvable, i.e., if it can be evaluated by considering one decision at a time. However, decision networks in the most general sense need not be stepwise-solvable. A syntactic constraint called stepwise-decomposability is therefore imposed. We show that stepwise-decomposable decision networks can be evaluated not only by considering one decision at a time, but also by considering one portion of the network at a time.

TR-93-07 On Finite Covering of Infinite Spaces for Protocol Test Selection, April 1993 Masaaki Mori and Son T. Vuong, 17 pages

The core of the protocol test selection problem lies in how to derive a finite test suite from an infinite set of possible execution sequences (protocol behaviors). This paper presents two promising approaches to this problem: (i) the metric based topological approach, and (ii) the formal language theoretic approach; both aim at producing finite coverings of an infinite set of execution sequences. The former approach makes use of the compactness of metric spaces, which guarantees that the infinite metric space can be fully covered by a finite number of open ``balls'' (subspaces). The latter approach relies on the property that the Parikh mapping of a set of all execution sequences can be represented by a finite union of linear sets. Two simple protocol examples are given to elucidate the formal language theoretic approach.

TR-93-08 Formal Verification by Symbolic Evaluation of Partially-Ordered Trajectories, April 1993 Carl-Johan H. Seger and Randal E. Bryant, 38 pages

Symbolic trajectory evaluation provides a means to formally verify properties of a sequential system by a modified form of symbolic simulation. The desired system properties are expressed in a notation combining Boolean expressions and the temporal logic ``next-time'' operator. In its simplest form, each property is expressed as an assertion [ A => C ], where the antecedent A expresses some assumed conditions on the system state over a bounded time period, and the consequent C expresses conditions that should result. A generalization allows simple invariants to be established and proven automatically.

The verifier operates on system models in which the state space is ordered by ``information content''. By suitable restrictions to the specification notation, we guarantee that for every trajectory formula, there is a unique weakest state trajectory that satisfies it. Therefore, we can verify an assertion [ A => C ] by simulating the system over the weakest trajectory for A and testing adherence to C. Also, establishing invariants corresponds to simple fixed point calculations.

This paper presents the general theory underlying symbolic trajectory evaluation. It also illustrates the application of the theory to the task of verifying switch-level circuits as well as more abstract implementations.

TR-93-09 Decision Graph Search, April 1993 Runping Qi and David Poole, 43 pages

A decision graph is an AND/OR graph with a certain evaluation function. Decision graphs have been found to be a very useful representation for a variety of decision making problems. This article presents a number of search algorithms for computing an optimal solution from a given decision graph. These algorithms include one {\it depth-first heuristic-search algorithm}, one {\it best-first heuristic-search algorithm}, one {\it anytime algorithm} and two {\it iterative-deepening depth-first heuristic-search algorithms}. Similar to the *-minimax search algorithms of Ballard, our depth-first heuristic-search algorithm is developed from the alpha--beta algorithm for minimax tree search. In addition, we show how heuristic knowledge can be used to improve search efficiency. Furthermore, we present an anytime algorithm which is conveniently obtained from the depth-first heuristic-search algorithm without incurring much overhead. The best-first heuristic-search algorithm is obtained by modifying the well known AO* algorithm for AND/OR graphs with additive costs. The iterative-deepening algorithms result from combining iterative-deepening techniques with depth-first search techniques. Some experimental data on the performance of these algorithms are given.

TR-93-10 A New Method for Influence Diagram Evaluation, May 1993 Runping Qi and David Poole, 40 pages

As influence diagrams become a popular representational tool for decision analysis, influence diagram evaluation is attracting more and more research interest. In this article, we present a new, two--phase method for influence diagram evaluation. In our method, an influence diagram is first mapped into a decision graph and then the analysis is carried out by evaluating the decision graph. Our method is more efficient than Howard and Matheson's two--phase method because, among other reasons, the decision graph generated by our method from an influence diagram can be much smaller than the one generated by Howard and Matheson's method for the same influence diagram. Like the most recent algorithms reported in the literature, our method can also exploit independence relationships among variables of decision problems, and it provides a clean interface between influence diagram evaluation and Bayesian net evaluation; thus, various well--established algorithms for Bayesian net evaluation can be used in influence diagram evaluation. In this sense, our method is as efficient as those algorithms. Furthermore, our method has a few unique merits. First, it can take advantage of asymmetric processing in influence diagram evaluation. Second, by using heuristic search techniques, it provides an explicit mechanism for making use of heuristic information that may be available in a domain--specific form. These additional merits make our method more efficient than the current algorithms in general. Finally, by using decision graphs as an intermediate representation, the value of perfect information can be computed in a more efficient way.

TR-93-11 A Framework for Interoperability Testing of Network Protocols, April 1993 Jadranka Alilovic-Curgus and Son T. Vuong, 29 pages

In this report, we extend the testing theory based on formal specifications by formalizing testing for interoperability with a new relation {\em intop}. Intuitively, $P \; intop_{S} \; Q$ if, for every event offered by either $P$ or $Q$, the concurrent execution of $P$ and $Q$ will be able to proceed with the traces in $S$, where $S$ is their (common) specification. This theory is applicable to formal description methods that allow a semantic interpretation of specifications in terms of labelled transition systems. Existing notions of implementation preorders and equivalences in protocol testing theory are placed in this framework and their discriminating power for identifying processes which will interoperate is examined. As an example, a subset of the ST-II protocol is formally specified and its possible implementations are shown to interoperate if each implementation satisfies the $intop$ relation with respect to $S$, the specification of the ST-II protocol (subset).

TR-93-12 Orientation-Based Representations of Shape and Attitude Determination, April 1993 Ying Li, 162 pages

The three rotational degrees of freedom between the coordinate system of a sensed object and that of a viewer define the attitude of the object. Orientation-based representations record 3-D surface properties as a function of position on the unit sphere. All orientation-based representations considered share a desirable property: the representation of a rotated object is equal to the rotated representation of the object before rotation. This makes the orientation-based representations well-suited to the task of attitude determination.

The mathematical background for orientation-based representations of shape is presented in a consistent framework. Among the orientation-based representations considered, the support function is one-to-one for convex bodies, the curvature functions are one-to-one for convex bodies up to a translation and the radial function is one-to-one for starshaped sets.

Using combinations of the support function and the curvature functions for convex bodies, the problem of attitude determination is transformed into an optimization problem. Previous mathematical results on the radial function for convex objects are extended to starshaped objects and the problem of attitude determination by the radial function also is transformed into an optimization problem. Solutions to the optimization problems exist and can be effectively computed using standard numerical methods.

A proof-of-concept system has been implemented and experiments conducted both on synthesized data and on real objects using surface data derived from photometric stereo. Experimental results verify the theoretical solutions.

Novel contributions of the thesis include: the representation of smooth convex objects by the support function and curvature functions; the definition of a new orientation-based representation for starshaped sets using the 3-D radial function; and solutions to the 3-D attitude determination problem using the aforementioned representations. In particular, the scope of orientation-based representations has been extended, both in theory and in practice, from convexity to starshapedness.

TR-93-13 Symplectic Integration of Constrained Hamiltonian Systems by Runge-Kutta Methods, April 1993 Sebastian Reich, 24 pages

Recent work reported in the literature suggests that for the long--term integration of Hamiltonian dynamical systems one should use methods that preserve the symplectic structure of the flow. In this paper we investigate the symplecticity of numerical integrators for constrained Hamiltonian systems. In the first part of the paper we show that those implicit Runge--Kutta methods which result in symplectic integrators for unconstrained Hamiltonian systems can be directly applied to constrained Hamiltonian systems. The resulting discretization scheme is symplectic but does not, in general, preserve the constraints. In the second part of the paper we discuss partitioned Runge--Kutta methods. Again it turns out that those partitioned Runge--Kutta methods which are symplectic for unconstrained systems can be applied to constrained Hamiltonian systems. We show that, in contrast to implicit Runge--Kutta methods, the class of symplectic partitioned Runge--Kutta methods includes methods that also preserve the constraints. In the third part of the paper we discuss constrained Hamiltonian systems with separable Hamiltonians from a Lie algebraic point of view. This approach not only provides a different perspective on the numerical integration of Hamiltonian systems but also allows for a straightforward backward error analysis.
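
For reference, a constrained Hamiltonian system of the kind discussed above can be written in the standard form (textbook notation, not quoted from the report):

\[
\dot{q} = \nabla_{p} H(q,p), \qquad
\dot{p} = -\nabla_{q} H(q,p) - G(q)^{T}\lambda, \qquad
0 = g(q),
\]

where $g(q) = 0$ is the holonomic constraint and $G(q) = \partial g/\partial q$ is its Jacobian. An implicit Runge--Kutta method with coefficients $(a_{ij}, b_i)$ is symplectic in the unconstrained case exactly when $b_i a_{ij} + b_j a_{ji} = b_i b_j$ for all $i, j$; the report shows that such methods can be applied directly to the constrained system.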

TR-93-14 The Use of Conflicts in Searching Bayesian Networks, May 1993 David Poole, 24 pages

This paper discusses how conflicts (as used by the consistency-based diagnosis community) can be adapted for use in a search-based algorithm for computing prior and posterior probabilities in discrete Bayesian networks. This is an ``anytime'' algorithm that, at any stage, can estimate the probabilities and give an error bound. Whereas the most popular Bayesian net algorithms exploit the structure of the network for efficiency, we exploit the probability distributions for efficiency; this algorithm is most suited to the case with extreme probabilities. This paper presents a solution to the inefficiencies found in naive algorithms, and shows how the tools of the consistency-based diagnosis community (namely conflicts) can be used effectively to improve the efficiency. Empirical results with networks having tens of thousands of nodes are presented.

TR-93-15 Implicit-Explicit Methods for Time-Dependent PDE's, May 1993 Uri M. Ascher, Steven J. Ruuth and Brian Wetton, 27 pages

Implicit-explicit (IMEX) schemes have been widely used, especially in conjunction with spectral methods, for the time integration of spatially discretized PDEs of diffusion-convection type. Typically, an implicit scheme is used for the diffusion term and an explicit scheme is used for the convection term. Reaction-diffusion problems can also be approximated in this manner. In this work we systematically analyze the performance of such schemes, propose improved new schemes and pay particular attention to their relative performance in the context of fast multigrid algorithms and of aliasing reduction for spectral methods.

For the prototype linear advection-diffusion equation, a stability analysis for first, second, third and fourth order multistep IMEX schemes is performed.

Stable schemes permitting large time steps for a wide variety of problems and yielding appropriate decay of high frequency error modes are identified.

Numerical experiments demonstrate that weak decay of high frequency modes can lead to extra iterations on the finest grid when using multigrid computations with finite difference spatial discretization, and to aliasing when using spectral collocation for spatial discretization. When this behaviour occurs, use of weakly damping schemes such as the popular combination of Crank-Nicolson with second order Adams-Bashforth is discouraged and better alternatives are proposed.

Our findings are demonstrated on several examples.
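
As a concrete instance of the class of schemes analyzed, the standard second order combination mentioned above can be written for a model splitting $u_t = f(u) + \nu u_{xx}$, with the convection term $f$ treated explicitly by Adams--Bashforth and the diffusion term implicitly by Crank--Nicolson (the splitting shown is a common illustration, not one of the new schemes proposed in the report):

\[
\frac{u^{n+1} - u^{n}}{\Delta t}
  \;=\; \frac{3}{2}\, f(u^{n}) \;-\; \frac{1}{2}\, f(u^{n-1})
  \;+\; \frac{\nu}{2}\,\bigl( u^{n+1}_{xx} + u^{n}_{xx} \bigr).
\]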

TR-93-16 A Real-Time 3D Motion Tracking System, April 1993 Johnny Wai Yee Kam, 95 pages

Vision allows one to react to rapid changes in the surrounding environment. The ability of animals to control their eye movements and follow a moving target has always been a focus in biological research. The biological control system that governs the eye movements is known as the oculomotor control system. Generally, the control of eye movements to follow a moving visual target is known as gaze control.

The primary goal of motion tracking is to keep an object of interest, generally known as the visual target, in the view of the observer at all times. Tracking can be driven by changes perceived from the real world. One obvious change introduced by a moving object is the change in its location, which can be described in terms of displacement. In this project, we will show that by using stereo disparity and optical flow, two significant types of displacements, as the major source of directing signals in a robotic gaze control system, we can determine where the moving object is located and perform the tracking duty, without recognizing what the object is.

The recent advances in computer hardware, exemplified by our Datacube MaxVideo 200 system and a network of Transputers, make it possible to perform image processing operations at video rates, and to implement real-time systems with input images obtained from video cameras. The main purposes of this project are to establish some simple control theories to monitor changes perceived in the real world, and to apply such theories in the implementation of a real-time three-dimensional motion tracking system on a binocular camera head system installed in the Laboratory for Computational Intelligence (LCI) at the Department of Computer Science of the University of British Columbia (UBC).

The control scheme of our motion tracking system is based on the Perception-Reasoning-Action (PRA) regime. We will describe an approach of using an active monitoring process together with a process for accumulating temporal data to allow different hardware components running at different rates to communicate and cooperate in a real-time system working on real world data. We will also describe a cancellation method to reduce the unstable effects of background optical flow generated from ego-motion, and create a ``pop-out'' effect in the motion field to ease the burden of target selection. The results of various experiments conducted, and the difficulties of tracking without any knowledge of the world and the objects will also be discussed.

TR-93-17 Multiresolution Geometric Algorithms Using Wavelets I: Representation for Parametric Curves and Surfaces, May 1993 L. M. Reissell, 75 pages

We apply wavelet methods to the representation of multiscale digitized parametric curves and surfaces. These representations will allow the derivation of efficient multiresolution geometric algorithms. In this report, we outline the definitions and basic properties of the resulting surface and curve hierarchies, and discuss the criteria on wavelets used in geometric representation. We then construct a new family of compactly supported symmetric biorthogonal wavelets, {\it pseudo-coiflets}, well suited to geometric multiresolution representation: these wavelets have coiflet-like moment properties, and in particular, the reconstructing scaling functions will be interpolating. The methods have been implemented in C/C++ software, which has been tested on geographic image data and other examples, comparing different choices of underlying wavelets.
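
A small illustration of the kind of multiresolution decomposition described above, applied to the coordinate functions of a sampled curve. It assumes the PyWavelets package and uses a standard biorthogonal wavelet as a stand-in, since the pseudo-coiflets constructed in the report are not part of standard libraries.

    import numpy as np
    import pywt  # PyWavelets; 'bior2.2' stands in for the pseudo-coiflets

    # A sampled parametric curve: each coordinate function is decomposed independently.
    t = np.linspace(0.0, 2.0 * np.pi, 256)
    curve = np.stack([np.cos(t), np.sin(3.0 * t)])          # shape (2, 256)

    # Three-level multiresolution decomposition of each coordinate function.
    coeffs = [pywt.wavedec(coord, 'bior2.2', level=3) for coord in curve]

    # A coarse approximation of the curve: keep only the coarsest coefficients.
    coarse = np.stack([
        pywt.waverec([c[0]] + [np.zeros_like(d) for d in c[1:]], 'bior2.2')
        for c in coeffs
    ])
    print(curve.shape, coarse.shape)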

TR-93-18 Linking BDD-Based Symbolic Evaluation to Interactive Theorem-Proving, May 1993 Jeffrey J. Joyce and Carl-Johan H. Seger, 6 pages

A novel approach to formal hardware verification results from the combination of symbolic trajectory evaluation and interactive theorem-proving. From symbolic trajectory evaluation we inherit a high degree of automation and accurate models of circuit behaviour and timing. From interactive theorem-proving we gain access to powerful mathematical tools such as induction and abstraction. We have prototyped a hybrid tool and used this tool to obtain verification results that could not be easily obtained with previously published techniques.

TR-93-19 Fault Coverage Evaluation of Protocol Test Sequences, June 1993 Jinsong Zhu and Samuel T. Chanson, 22 pages

In this paper, we investigate the quality of a given protocol test sequence in detecting faulty implementations of the specification. The underlying model is a deterministic finite state machine (FSM). The basic idea is to construct all FSMs having n + i states (where n is the number of states in the specification and i a small integer) that will accept the test sequence but do not conform to the specification. It differs from the conventional simulation method in that it is not necessary to consider various forms of fault combinations and is guaranteed to identify any faulty machines. Preprocessing and backjumping techniques are used to reduce the computational complexity. We have constructed a tool based on the model and used it in assessing several UIO-based optimization techniques. We observed that the use of multiple UIO sequences and overlaps can sometimes weaken the fault coverage of the test sequence. The choice of the transfer sequences and order of the test subsequences during optimization may also affect fault coverage. Other observations and analysis on the properties of test sequences are also made.

TR-93-20 Numerical Integration of the Generalized Euler Equations, June 1993 Sebastian Reich, 16 pages

(Abstract not available on-line)

TR-93-21 Performance Measures for Robot Manipulators: A Unified Approach, June 1993 Kees van den Doel and Dinesh Pai, 60 pages

(Abstract not available on-line)

TR-93-22 How Fast can ASN.1 Encoding Rules Go?, July 1993 Mike Sample, 6 pages

(Abstract not available on-line)

TR-93-23 Abduction As Belief Revision, July 1993 Craig Boutilier and Veronica Becher, 16 pages

(Abstract not available on-line)

TR-93-24 Sequential Regularization Methods for Higher Index DAEs with Constraint Singularities: I. Linear Index-2 Case, July 1993 Uri Ascher and Ping Lin, 25 pages

Standard stabilization techniques for higher index DAEs often involve elimination of the algebraic solution components. This may not work well if there are singularity points where the constraints Jacobian matrix becomes rank-deficient. This paper proposes instead a sequential regularization method (SRM) -- a functional iteration procedure for solving problems with isolated singularities which have smooth differential solution components.

For linear index-2 DAEs we consider both initial and boundary value problems. The convergence of the SRM is described and proved in detail. Various aspects of the subsequent numerical discretization of the regularized problems are discussed as well and some numerical verifications are carried out.

TR-93-25 Testgen+: An Environment for Protocol Test Suite Generation, Selection and Validation, July 1993 Son T. Vuong and Sangho Leon, 31 pages

(Abstract not available on-line)

TR-93-26 Computational Methods for the Shape from Shading Problem, July 1993 Paul M. Carter, 159 pages

(Abstract not available on-line)

TR-93-27 Unit Disk Graph Recognition is NP-Hard, August 1993 Heinz Breu and David Kirkpatrick, 22 pages

Unit disk graphs are the intersection graphs of unit diameter closed disks in the plane. This paper reduces SATISFIABILITY to the problem of recognizing unit disk graphs. Equivalently, it shows that determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have sphericity at most 3, is NP-hard. We show how this reduction can be extended to 3 dimensions, thereby showing that unit sphere graph recognition, or determining if a graph has sphericity 3 or less, is also NP-hard. We conjecture that K-sphericity is NP-hard for all fixed K greater than 1.

TR-93-28 Generating Random Monotone Polygons, September 1993 Jack Snoeyink and Chong Zhu, 20 pages

We propose an algorithm that generates x-monotone polygons on a given set of n points uniformly at random. The time complexity of our algorithm is O(K), where n <= K <= n^2 is the number of edges of the visibility graph of the x-monotone chain whose vertices are the given n points. The space complexity of our algorithm is O(n).

TR-93-29 A Compact Piecewise-Linear Voronoi Diagram for Convex Sites in the Plane, October 1993 Mike McAllister, D. Kirkpatrick and J. Snoeyink, 30 pages

In the plane, the post-office problem, which asks for the closest site to a query site, and retraction motion planning, which asks for a one-dimensional retract of the free space of a robot, are both classically solved by computing a Voronoi diagram. When the sites are k disjoint convex sets, we give a compact representation of the Voronoi diagram, using O(k) line segments, that is sufficient for logarithmic time post-office location queries and motion planning. If these sets are polygons with n total vertices given in standard representations, we compute this diagram optimally in O(k log n) deterministic time for the Euclidean metric and in O(k log n log m) deterministic time for the convex distance function defined by a convex m-gon.

TR-93-30 Tentative Prune-and-Search for Computing Fixed-Points with Applications to Geometric Computation, October 1993 D. Kirkpatrick and J. Snoeyink, 16 pages

Motivated by problems in computational geometry, we investigate the complexity of finding a fixed-point of the composition of two or three continuous functions that are defined piecewise. We show that certain cases require nested binary search taking Theta(log^2(n)) time. Others can be solved in logarithmic time by using a prune-and-search technique that may make tentative discards and later revoke or certify them. This work finds application in optimal subroutines that compute approximations to convex polygons, dense packings, and Voronoi vertices for Euclidean and polygonal distance functions.

TR-93-31 Objects that cannot be taken apart with two hands, October 1993 Jack Snoeyink and J. Stolfi, 15 pages

It has been conjectured that every configuration C of convex objects in 3-space with disjoint interiors can be taken apart by translation with two hands: that is, some proper subset of C can be translated to infinity without disturbing its complement. We show that the conjecture holds for five or fewer objects and give a counterexample with six objects. We extend the counterexample to a configuration that cannot be taken apart with two hands using arbitrary isometries (rigid motions).

Note: some figures have been omitted from the online version to save space.

TR-93-32 Counting and Reporting Red/Blue Segment Intersections, October 1993 Larry Palazzi, 20 pages

We simplify the red/blue segment intersection algorithm of Chazelle et al.: Given sets of n disjoint red and n disjoint blue segments, we count red/blue intersections in O(n log(n)) time using O(n) space, or report them in additional time proportional to their number. Our algorithm uses a plane sweep to presort the segments; then it operates on a list of slabs that efficiently stores a single level of a segment tree. With no dynamic memory allocation, low pointer overhead, and mostly sequential memory references, our algorithm performs well even with inadequate physical memory.

TR-93-33 Analysis of a Recurrence Arising from a Construction for Non-Blocking Networks, October 1993 Nicholas Pippenger, 32 pages

Define f on the integers n > 1 by the recurrence

\[
f(n) \;=\; \min\Bigl\{\, n,\; \min_{m \mid n} \bigl( 2f(m) + 3f(n/m) \bigr) \Bigr\}.
\]

The function $f$ has $f(n) = n$ as its upper envelope, attained for all prime $n$. Our goal in this paper is to determine the corresponding lower envelope. We shall show that this has the form $f(n) \sim C(\log n)^{1+1/g}$ for certain constants $g$ and $C$, in the sense that for any $\epsilon > 0$, the inequality $f(n) \le (C + \epsilon)(\log n)^{1+1/g}$ holds for infinitely many $n$, while $f(n) \le (C - \epsilon)(\log n)^{1+1/g}$ holds for only finitely many. In fact, $g = 0.7878\ldots$ is the unique real solution of the equation $2^{-g} + 3^{-g} = 1$, and $C = 1.5595\ldots$ is given by the expression

\[
C \;=\; \frac{g \,\Bigl[\, 2^{-g} \log^{g} 2 + 3^{-g} \log^{g} 3 \,\Bigr]^{1/g}}
            {(g+1) \,\Bigl[\, 15^{-g} \log^{g+1}(5/2)
               + 3^{-g} \sum_{5 \le k \le 7} \log^{g+1}\!\bigl((k+1)/k\bigr)
               + \sum_{8 \le k \le 15} \log^{g+1}\!\bigl((k+1)/k\bigr) \,\Bigr]^{1/g}}.
\]

We also consider the function $f_0$ defined by replacing the integers $n > 1$ with the reals $x > 1$ in the above recurrence:

\[
f_0(x) \;=\; \min\Bigl\{\, x,\; \inf_{1 < y < x} \bigl( 2f_0(y) + 3f_0(x/y) \bigr) \Bigr\}.
\]

We shall show that $f_0(x) \sim C_0(\log x)^{1+1/g}$, where $C_0 = 1.5586\ldots$ is given by

\[
C_0 \;=\; 6e \,\Bigl[\, 2^{-g} \log^{-g} 2 + 3^{-g} \log^{-g} 3 \,\Bigr]^{1/g}
          \Bigl( \frac{g}{g+1} \Bigr)^{1 + 1/g},
\]

and is smaller than $C$ by a factor of $0.9994\ldots$.
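
To make the recurrence concrete, a minimal memoized evaluation (illustrative code, not part of the report):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(n):
        # f(n) = min(n, min over divisors m of n, 1 < m < n, of 2 f(m) + 3 f(n/m)).
        best = n
        m = 2
        while m * m <= n:
            if n % m == 0:
                # Trying both orders of the factor pair (m, n // m) covers
                # every proper divisor of n.
                best = min(best, 2 * f(m) + 3 * f(n // m), 2 * f(n // m) + 3 * f(m))
            m += 1
        return best

    print([f(n) for n in (4, 6, 12, 64, 1024, 6561)])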

TR-93-34 Self-Routing Superconcentrators, October 1993 Nicholas Pippenger, 17 pages

Superconcentrators are switching systems that solve the generic problem of interconnecting clients and servers during sessions, in situations where either the clients or the servers are interchangeable (so that it does not matter which client is connected to which server). Previous constructions of superconcentrators have required an external agent to find the interconnections appropriate in each instance. We remedy this shortcoming by constructing superconcentrators that are ``self-routing'', in the sense that they compute for themselves the required interconnections.

Specifically, we show how to construct, for each n, a system S_n with the following properties. (1) The system S_n has n inputs, n outputs, and O(n) components, each of which is one of a fixed finite number of finite automata, and is connected to a fixed finite number of other components through cables, each of which carries signals from a fixed finite alphabet. (2) When some of the inputs, and an equal number of outputs, are ``marked'' (by the presentation of a certain signal), then after O(log n) steps (a time proportional to the ``diameter'' of the network) the system will establish a set of disjoint paths from the marked inputs to the marked outputs.

TR-93-35 A Model Checker for Statecharts, October 1993 Nancy Day, 98 pages

Computer-Aided Software Engineering (CASE) tools encourage users to codify the specification for the design of a system early in the development process. They often use graphical formalisms, simulation and prototyping to help express ideas concisely and unambiguously. Some tools provide little more than syntax checking of the specification but others can test the model for reachability of conditions, nondeterminism or deadlock.

Formal methods include powerful tools like automatic model checking to exhaustively check a model against certain requirements. Integrating formal techniques into the system development process is an effective method of providing more thorough analysis of specifications than conventional approaches employed by Computer-Aided Software Engineering (CASE) tools. In order to create this link, the formalism used by the CASE tool must have a precise formal semantics that can be understood by the verification tool.

The CASE tool STATEMATE makes use of an extended state transition notation called statecharts. We have formalized an operational semantics for statecharts by embedding them in the logical framework of an interactive proof-assistant system called HOL. A software interface is provided to extract a statechart directly from the STATEMATE database.

Using HOL in combination with Voss, a binary decision diagram-based verification tool, we have developed a model checker for statecharts which tests whether an operational specification, given by a statechart, satisfies a descriptive specification of the system requirements. The model checking procedure is a simple higher-order logic function which executes the semantics of statecharts.

In this technical report, we describe the formal semantics of statecharts and the model checking algorithm. Various examples, including an intersection with a traffic light and an arbiter, are presented to illustrate the method.

TR-93-36 The Raven Kernel: A Microkernel for Shared Memory Multiprocessors, October 1993 Stuart Ritchie

The Raven kernel is a small, lightweight operating system for shared memory multiprocessors. Raven is characterized by its movement of several traditional kernel abstractions into user space. The kernel itself implements tasks, virtual memory management, and low level exception dispatching. All thread management, device drivers, and message passing functions are implemented completely in user space. This movement of typical kernel-level abstractions into user space can drastically reduce the overall number of user/kernel interactions for fine-grained parallel applications.

TR-93-37 Ranking and Unranking of Trees Using Regular Reductions, October 1993 Pierre Kelsen, 19 pages

(Abstract not available on-line)

TR-93-38 Detection and Estimation of Multiple Disparities by Multi-evidential Correlation, October 1993 Esfandiar Bandari and James J. Little, 21 pages

(Abstract not available on-line)

TR-93-39 Tridiagonalization Costs of the Bandwidth Contraction and Rutishauser-Schwarz Algorithms, November 1993 Ian Cavers, 20 pages

In this paper we perform detailed complexity analyses of the Bandwidth Contraction and Rutishauser-Schwarz tridiagonalization algorithms using a general framework for the analysis of algorithms employing sequences of either standard or fast Givens transformations. Each algorithm's analysis predicts the number of flops required to reduce a generic densely banded symmetric matrix to tridiagonal form. The high accuracy of the analyses is demonstrated using novel symbolic sparse tridiagonalization tools, Xmatrix and Trisymb.

TR-93-40 Automatic Verification of Asynchronous Circuits, November 1993 Trevor W. S. Lee, Mark R. Greenstreet and Carl-Johan H. Seger, 28 pages

Asynchronous circuits are often used in interface circuitry where traditional, synchronous design methods are not applicable. However, the verification of asynchronous designs is difficult, because standard simulation techniques will often fail to reveal design errors that are only manifested under rare circumstances. In this paper, we show how asynchronous designs can be modeled as programs in the Synchronized Transitions language, and how this representation facilitates rigorous and efficient verification of the designs using ordered binary decision diagrams (OBDDs). We illustrate our approach with two examples: a novel design of a transition arbiter and a design of a toggle element from the literature. The arbiter design was derived by correcting an error in an earlier attempt. It is noteworthy that the error in the original design, found very quickly using the methods described in this paper, went unnoticed during more than 50 hours of CPU time spent simulating 2^31 state transitions.

TR-93-41 A Simple Theorem Prover Based on Symbolic Trajectory Evaluation and OBDD's, November 1993 Scott Hazelhurst and Carl-Johan H. Seger, 44 pages

Formal hardware verification based on symbolic trajectory evaluation shows considerable promise in verifying medium to large scale VLSI designs with a high degree of automation. However, in order to verify today's designs, a method for composing partial verification results is needed. One way of accomplishing this is to use a general purpose theorem prover to combine the verification results obtained by other tools. However, a special purpose theorem prover is more attractive since it can more easily exploit symbolic trajectory evaluation (and may be easier to use). Consequently we explore the possibility of developing a much simpler, but more tailor made, theorem prover designed specifically for combining verification results based on trajectory evaluation. In the paper we discuss the underlying inference rules of the prover as well as more practical issues regarding the user interface. We finally conclude with a couple of examples in which we are able to verify designs that could not have been verified directly. In particular, the complete verification of a 64 bit multiplier takes approximately 15 minutes on a Sparc 10 machine.

TR-93-42 Juggling Networks, November 1993 Nicholas Pippenger, 13 pages

Switching networks of various kinds have come to occupy a prominent position in computer science as well as communication engineering. The classical switching network technology has been space-division-multiplex switching, in which each switching function is performed by a spatially separate switching component (such as a crossbar switch). A recent trend in switching network technology has been the advent of time-division-multiplex switching, wherein a single switching component performs the function of many switches at successive moments of time according to a periodic schedule. This technology has the advantage that nearly all of the cost of the network is in inertial memory (such as delay lines), with the cost of switching elements growing much more slowly as a function of the capacity of the network.

In order for a classical space-division-multiplex network to be adaptable to time-division-multiplex technology, its interconnection pattern must satisfy stringent requirements. For example, networks based on randomized interconnections (an important tool in determining the asymptotic complexity of optimal networks) are not suitable for time-division-multiplex implementation. Indeed, time-division-multiplex implementations have been presented for only a few of the simplest classical space-division-multiplex constructions, such as rearrangeable connection networks.

This paper shows how interconnection patterns based on explicit constructions for expanding graphs can be implemented in time-division-multiplex networks. This provides time-division-multiplex implementations for switching networks that are within constant factors of optimal in memory cost, and that have asymptotically more slowly growing switching costs. These constructions are based on a metaphor involving teams of jugglers whose throwing, catching and passing patterns result in intricate permutations of the balls. This metaphor affords a convenient visualization of time-division-multiplex activities that should be of value in devising networks for a variety of switching tasks.

TR-93-43 Similarity Metric Learning for a Variable-Kernel Classifier, November 1993 David G. Lowe, 15 pages

Nearest-neighbour interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on several standard classification data sets, and on these problems it shows better generalization than back-propagation and most other learning methods. An important advantage is that the system can operate as a black box in which no model minimization parameters need to be experimentally set by the user. The number of parameters that must be determined through optimization is orders of magnitude smaller than for back-propagation or RBF networks, which may indicate that the method better captures the essential degrees of variation in learning. Other features of VSM learning are discussed that make it relevant to models for biological learning in the brain.

TR-93-44 Discrete Conservative Approximation of Hybrid Systems, November 1993 Carl-Johan H. Seger and Andrew Martin, 39 pages

Systems that are modeled using both continuous and discrete mathematics are commonly called hybrid systems. Although much work has been done to develop frameworks in which both types of systems can be handled at the same time, this is often a very difficult task. Verifying that desired properties hold in such hybrid models is even more daunting. In this paper we attack the problem from a different direction. First we make a distinction between two models of the system. A detailed model is developed as accurately as possible. Ultimately, one must trust in its correctness. An abstract model, which is typically less detailed, is actually used to verify properties of the system. The detailed model is typically defined in terms of both continuous and discrete mathematics, whereas the abstract one is typically discrete. We formally define the concept of conservative approximation, a relationship between models, that holds with respect to a translation between the questions that can be asked of them. We then progress by developing a theory that allows us to build a complicated detailed model by combining simple primitives, while simultaneously, building a conservative approximation, by similarly combining pre-defined parameterised approximations of those primitives.

TR-93-45 VOSS - A Formal Hardware Verification System User's Guide, November 1993 Carl-Johan Seger

The Voss system is a formal verification system aimed primarily at hardware verification. In particular, verification using symbolic trajectory evaluation is strongly supported. The Voss system consists of a set of programs. The main one is called fl and is the core of the verification system. Since the metalanguage in fl is a fully general functional language in which Ordered Binary Decision Diagrams (OBDDs) have been built in, the verification system is not only useful for carrying out trajectory evaluation, but also for experimenting with various verification (formal and informal) techniques that require the use of OBDDs. This document is intended as both a user's guide and (to some extent) a reference guide. For the Voss alpha release, this document is still quite incomplete, but work is underway to remedy this.

TR-93-47 We Have Never-Forgetful Flowers In Our Garden: Girls' Responses to Electronic Games, December 1993 Kori Inkpen, Rena Upitis, Maria Klawe, Joan Lawry, Ann Anderson, Mutindi Ndunda, Kamran Sedighian, Steve Leroux and David Hsu

Electronic Games for Education in Math and Science (E-GEMS) is a large-scale research project designed to increase the proportion of children who enjoy learning and mastering mathematical concepts through the use of electronic games. This paper describes one piece of research that examines how girls interact within an electronic games environment. Three interrelated questions are addressed in this paper: What interest do girls show in electronic games when the games are presented in an informal learning environment? How do girls play and watch others play? How does the presence of others in the immediate vicinity influence the ways that girls play?

The research described was conducted at an interactive science museum, Science World BC, during the summer of 1993. Children were observed while they played with various electronic games, both video and computer. In addition, interviews were conducted with the children and timed samplings were recorded. Our observations and interviews show that girls have an interest in electronic games and enjoy playing. Girls were particularly interested when given the opportunity to socially interact with others. In addition, they indicated a preference for playing on computers over video game systems.

TR-93-49 Simulated Annealing for Profile and Fill Reduction of Sparse Matrices, March 21, 1993 Robert R. Lewis, 49 pages

Simulated annealing can minimize both profile and fill of sparse matrices. We applied these techniques to a number of sparse matrices from the Harwell-Boeing Sparse Matrix Collection. We were able to reduce profile typically to about 80% of that attained by conventional profile minimization techniques (and sometimes much lower), but fill reduction was less successful (85% at best). We present a new algorithm that significantly speeds up profile computation during the annealing process. Simulated annealing is, however, still much more time-consuming than conventional techniques and is therefore likely to be useful only in situations where the same sparse matrix is being used repeatedly.
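
A bare-bones sketch of the general approach, assuming a hypothetical adjacency-list sparsity pattern and helper names; the report speeds up the profile computation during annealing, whereas this sketch recomputes the profile from scratch at every step, and it omits the fill objective entirely.

    import numpy as np

    def profile(adj, order):
        # Profile (envelope size) of a symmetric sparsity pattern under the
        # ordering `order`: for each vertex, the distance from its diagonal
        # position back to its leftmost earlier neighbour.
        pos = {v: i for i, v in enumerate(order)}
        total = 0
        for v in order:
            i = pos[v]
            earliest = min((pos[u] for u in adj[v] if pos[u] < i), default=i)
            total += i - earliest
        return total

    def anneal_profile(adj, n, steps=20000, t0=2.0, cooling=0.9995, seed=0):
        # Plain simulated annealing over vertex orderings: swap two positions,
        # accept uphill moves with the usual Boltzmann probability.
        rng = np.random.default_rng(seed)
        order = list(range(n))
        cost = profile(adj, order)
        t = t0
        for _ in range(steps):
            i, j = rng.integers(0, n, size=2)
            if i == j:
                continue
            order[i], order[j] = order[j], order[i]
            new_cost = profile(adj, order)
            if new_cost <= cost or rng.random() < np.exp((cost - new_cost) / t):
                cost = new_cost
            else:
                order[i], order[j] = order[j], order[i]   # reject: undo the swap
            t *= cooling
        return order, cost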

TR-93-50 Volume Models for Volumetric Data, May 30, 1993 Vishwa Ranjan and Alain Fournier, 26 pages

In order to display, transform, and compare volumetric data, it is often convenient or necessary to use different representations derived from the original discrete voxel values. In particular, several methods have been proposed to compute and display an iso-surface defined by some threshold value. In this paper we describe a method to represent the volume enclosed by an iso-surface as the union of simple volume primitives. The needed properties (displayed image, volume, surface, etc.) are derived from this representation. After a survey of properties that might be needed or useful for such representations, we show that some important ones are lacking in the representations used so far. Basic properties include efficiency of computation, storage, and display. Some other properties of interest include stability (the fact that the representation changes little for a small change in the data, such as noise or small distortions), the ability to determine the similarities between two data sets, and the computation of simplified models. We illustrate the concept with two distinct representations, one based on the union of tetrahedra derived from a Delaunay tetrahedralization of boundary points, and another based on overlapping spheres. The former is simple and efficient in most respects, but is not stable, while the latter needs heuristics to be simplified, but can be made stable and useful for shape comparisons. This approach also helps to develop metrics indispensable to study and compare such representations.

TR-93-51 High-Speed Visual Estimation Using Preattentive Processing, March 3, 1993 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 27 pages

A new method is presented for performing rapid and accurate numerical estimation. It is derived from principles arising in an area of cognitive psychology called preattentive processing. Preattentive processing refers to an initial organization of the human visual system based on operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, intensity, orientation, size, and motion. We believe that studies from preattentive vision should be used to assist in the design of visualization tools, especially those for which high speed target, boundary, and region detection are important. In our present study, we investigated two known preattentive features (hue and orientation) in the context of a new task (numerical estimation) in order to see whether preattentive estimation was possible. Our experiments tested displays that were designed to visualize data from simulations being run in the Department of Oceanography. The results showed that rapid and accurate estimation is indeed possible using either hue or orientation. Furthermore, random variation of one of these features resulted in no interference when subjects estimated the numerosity of the other. To determine the robustness of our results, we varied two important display parameters, display duration and feature difference, and found boundary conditions for each. Implications of our results for application to real-world data and tasks are discussed.

TR-93-52 From the Look of Things, October 30, 1993 Alain Fournier, 10 pages

We can't help wonder occasionally about what we do. The following is the result of such wondering, using a unique opportunity to get a paper in without much scrutiny.

TR-93-53 A Model for Coordinating Interacting Agents, October 30, 1993 Paul Lalonde, Robert Walker, Jason Harrison and David Forsey, 8 pages

SPAM (Simulated Platform for Animating Motion) is a simulation software system designed to address synchronization issues pertaining to both animation and simulation. SPAM provides application programs with the manipulation, configuration, and synchronization tools needed when simulations are combined to create animations. It is designed to be used as the glue between applications that supply lists of the parameters to animate and the callback procedures to invoke when a user wishes to modify the parameters directly. SPAM does not impose a particular model of simulation, accommodating keyframing, physical simulation, or a variety of other models, providing they can be abstracted into a set of externally modifiable values. In SPAM we recognize that the important part of simulation is not the state of the system at each time step, but rather the change in states between steps. Thus SPAM uses an interval representation of time, explicitly representing the intervals over which change occurs. In a complex animation or simulation, multiple actions will access the same resource at the same time. SPAM defines a strategy for recognizing such conflicts that increases the use and re-use of sequences.

TR-94-01 Exploring Common Conceptions About Boys and Electronic Games, January 1994 Joan Lawry, Rena Upitis, Maria Klawe, Kori Inkpen, Ann Anderson, Mutindi Ndunda, David Hsu, Stephen Leroux and Kamran Sedighian, 20 pages

Electronic games are an integral part of many boys' lives. Based on observations made over a two-month period at an electronic games exhibit in an interactive science museum in Vancouver, Canada, we examine three commonly held views about boys and electronic game culture: (a) electronic games and boys' behaviour while playing them contain elements of aggression, violence, competition, fast-action, and speed; (b) electronic games encourage anti-social, ``loner'' behaviour; and (c) boys who play electronic games are susceptible to becoming so devoted to playing the games that they neglect other areas of their lives, such as school, physical activity, and family. Our findings indicate the following: (a) while violent games are popular, many boys prefer games that challenge them mentally; (b) there appears to be little connection between anti-social behavior and electronic game playing; and (c) many boys who play electronic games have interests also in music, programming, reading, and school.

This paper depicts one facet of the first, exploratory phase of the Electronic Games for Education in Math and Science (E-GEMS) enterprise. E-GEMS is an ongoing research project with the ultimate goal of increasing the proportion of children who enjoy learning and using math and science---specifically by engaging children's interest in these subjects through the play of electronic games in the context of existing classroom educational methods. Hence, we also consider some of the implications for educational electronic game design in view of our findings about current commercial electronic games.

TR-94-02 Generalized Ternary Simulation of Sequential Circuits, January 1994 C. J. Seger and J. A. Brzozowski, 20 pages

(Abstract not available on-line)

TR-94-03 A Multigrid Solver for the Steady State Navier-Stokes Equations Using The Pressure-Poisson Formulation, January 1994 David Sidilkover and Uri M. Ascher, 13 pages

This paper presents an efficient multigrid solver for the steady-state Navier-Stokes equations in 2D on non-staggered grids. The pressure Poisson equation formulation is used, together with a finite volume discretization. A discretization of the boundary conditions for pressure and velocities is presented. An efficient multigrid algorithm for solving the resulting discrete equations is then developed. The issue of the numerical treatment of advection is also addressed: a family of stable and accurate difference schemes for advection-dominated flows is presented. This family also includes second-order accurate schemes.

TR-94-04 Model-Based Object Recognition - A Survey of Recent Research, January 1994 Arthur R. Pope, 33 pages

We survey the main ideas behind recent research in model-based object recognition. The survey covers representations for models and images and the methods used to match them. Perceptual organization, the use of invariants, indexing schemes, and match verification are also reviewed. We conclude that there is still much room for improvement in the scope, robustness, and efficiency of object recognition methods. We identify what we believe are the ways improvements will be achieved.

TR-94-05 Cooperative Learning in the Classroom: The Importance of a Collaborative Environment for Computer-Based Education, February 1994 Kori Inkpen, Kellogg S. Booth, Maria Klawe and Rena Upitis, 11 pages

Cooperative behavior of students playing an educational computer game was investigated. The combination of gender and whether one or two computers were present significantly affected the level of achievement as measured by the number of puzzles completed in the game. Female/Female pairs playing on two computers, on average, completed fewer puzzles than pairs in any other condition. Differences were also observed for gender pairs sharing control of the mouse while playing on a single computer. Male/Male pairs had a higher number and percentage of refusals to give up control of the mouse.

TR-94-06 Constant Time Parallel Indexing of Points in a Triangle, February 1994 Simon Kahan and Pierre Kelsen, 9 pages

Consider a triangle whose three vertices are grid points. Let k denote the number of grid points in the triangle. We describe an indexing of the triangle: a bijective mapping from {0, ..., k-1} to the grid points in the triangle. Computing such a mapping is a fundamental subroutine in fine-grained parallel computation arising in graphics applications such as ray-tracing. We describe a very fast indexing algorithm: after a preprocessing phase requiring time proportional to the number of bits in the vertices of the triangle, a grid point in the triangle can be computed in constant time from its index. The method requires only constant space.
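
For concreteness, the bijection itself can be illustrated by a naive enumeration; this is emphatically not the constant-time method of the report (which needs only preprocessing proportional to the bit length of the vertices), just a sketch of the mapping, with function names of our own choosing.

# Naive illustration of the index <-> grid point bijection (not the
# constant-time algorithm of the report): enumerate all grid points in a
# lattice triangle in scanline order and number them 0..k-1.
def grid_points_in_triangle(p, q, r):
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def inside(pt):
        d1, d2, d3 = cross(p, q, pt), cross(q, r, pt), cross(r, p, pt)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)    # on the boundary or strictly inside
    xs, ys = [p[0], q[0], r[0]], [p[1], q[1], r[1]]
    return [(x, y)
            for y in range(min(ys), max(ys) + 1)
            for x in range(min(xs), max(xs) + 1)
            if inside((x, y))]

pts = grid_points_in_triangle((0, 0), (4, 0), (0, 3))
index_to_point = dict(enumerate(pts))                   # i -> grid point
point_to_index = {pt: i for i, pt in enumerate(pts)}    # grid point -> i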

TR-94-07 A Novel Constraint-Based Data Fusion System for Limited-Angle Computed Tomography, March 1994 Jeffrey E. Boyd, 136 pages

(Abstract not available on-line)

TR-94-08 A Computational Theory of Decision Networking, March 1994 Nevin L. Zhang, 196 pages

(Abstract not available on-line)

TR-94-09 Abduction to Plausible Causes: An Event-Based Model of Belief Update, March 1994 Craig Boutilier, 29 pages

The Katsuno and Mendelzon theory of belief update has been proposed as a reasonable model for revising beliefs about a changing world. However, the semantics of update relies on information which is not readily available. We describe an alternative semantical view of update in which observations are incorporated into a belief set by: a) explaining the observation in terms of a set of plausible events that might have caused that observation; and b) predicting further consequences of those explanations. We also allow the possibility of conditional explanations. We show that this picture naturally induces an update operator under certain assumptions. However, we argue that these assumptions are not always reasonable, and they restrict our ability to integrate update with other forms of revision when reasoning about action.

TR-94-10 Probabilistic Conflicts in a Search Algorithm for Estimating Posterior Probabilities in Bayesian Networks, March 1994 David Poole, 40 pages

This paper presents a search algorithm for estimating prior and posterior probabilities in discrete Bayesian networks. It shows how conflicts (as used by the consistency-based diagnosis community) can be adapted to speed up the search. This is an `anytime' algorithm that at any stage can estimate the probabilities and give an error bound. This algorithm is especially suited to the case where there are skewed distributions, although nothing about the algorithm or the definitions depends on skewness of conditional distributions. Empirical results with Bayesian networks having tens of thousands of nodes are presented.

TR-94-11 Semantics, Consistency and Query Processing of Empirical Deductive Databases, April 1994 Raymond T. Ng, 35 pages

In recent years, there has been growing interest in reasoning with uncertainty in logic programming and deductive databases. However, most frameworks proposed thus far are either non-probabilistic in nature or based on subjective probabilities. In this paper, we address the problem of incorporating empirical probabilities -- that is, probabilities obtained from statistical findings -- in deductive databases. To this end, we develop a formal model-theoretic basis for such databases. We also present a sound and complete algorithm for checking the consistency of such databases. Moreover, we develop consistency-preserving ways to optimize the algorithm for practical usage. Finally, we show how query answering for empirical deductive databases can be carried out.

Keywords: deductive databases, empirical probabilities, model semantics, constraint satisfaction, optimizations, query answering

TR-94-12 Topology Building and Random Polygon Generation, April 1994 Chong Zhu, 93 pages

(Abstract not available on-line)

TR-94-13 Efficient and Effective Clustering Methods for Spatial Data Mining, May 1994 Raymond T. Ng and Jiawei Han, 25 pages

Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLARANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLARANS. Our analysis and experiments show that with the assistance of CLARANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLARANS with that of existing clustering methods show that CLARANS is the most efficient.
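
The randomized-search idea behind CLARANS can be sketched as follows; this is a simplified k-medoids search with parameter names of our own (num_local restarts, max_neighbor random swap attempts), not the authors' implementation.

# Simplified sketch of a CLARANS-style randomized search for k medoids.
# Parameter names (num_local, max_neighbor) are ours; distances are Euclidean.
import math
import random

def cost(points, medoids):
    """Total distance from each point to its closest medoid."""
    return sum(min(math.dist(p, m) for m in medoids) for p in points)

def clarans_like(points, k, num_local=5, max_neighbor=20, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(num_local):                        # independent restarts
        current = rng.sample(points, k)
        tries = 0
        while tries < max_neighbor:
            # Random neighbor: swap one medoid for one non-medoid.
            candidate = current[:]
            candidate[rng.randrange(k)] = rng.choice(
                [p for p in points if p not in current])
            if cost(points, candidate) < cost(points, current):
                current, tries = candidate, 0         # move and reset the counter
            else:
                tries += 1
        c = cost(points, current)
        if c < best_cost:
            best, best_cost = current, c
    return best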

TR-94-14 Topological Aspects of Regular Languages, May 1994 Nicholas Pippenger, 17 pages

We establish a number of new results (and rederive some old results) concerning regular languages, using essentially topological methods. Our development is based on the duality (established by Stone) between Boolean algebras and certain topological spaces (which are now called "Stone spaces"). (This duality does not seem to have been recognized in the literature on regular languages, even though it is well known that the regular languages over a fixed alphabet form a Boolean algebra and that the "implicit operations" with a fixed number of operands form a Stone space!) By exploiting this duality, we are able to obtain a much more accessible account of the Galois correspondence between varieties of regular languages (in the sense of Eilenberg) and certain sets of "implicit identities". The new results include an analogous Galois correspondence for a generalization of varieties, and an explicit characterization by means of closure conditions of the sets of implicit identities involved in these correspondences.

TR-94-15 Computing Common Tangents Without a Separating Line, May 1994 David Kirkpatrick and Jack Snoeyink, 10 pages

Given two disjoint convex polygons in standard representations, one can compute outer common tangents in logarithmic time without first obtaining a separating line. If the polygons are not disjoint, there is an additional factor of the logarithm of the intersection or union size, whichever is smaller.

TR-94-16 Transformations in High Level Synthesis: Axiomatic Specification and Efficient Mechanical Verification, May 1994 P. Sreeranga Rajan, 71 pages

In this work, we investigate the specification and mechanical verification of the correctness of transformations used during high level synthesis in hardware design. The high level synthesis system we address here is due, in part, to the SPRITE project at Philips Research Labs. The transformations in this system are used for refinement and optimization of descriptions specified in a signal flow graph language called SPRITE Input Language (SIL). SIL is an intermediate language used during the synthesis of hardware described using languages such as VHDL. Besides being an intermediate language, it also forms the backbone of the TRADES synthesis system from the University of Twente. SIL has been used in the design of hardware for audio and video applications.

We use the Prototype Verification System (PVS) from SRI International to specify and verify the correctness of the transformations. The PVS specification language allows us to investigate the correctness problem at a convenient level of representation, while the PVS verifier provides automatic procedures and interactive verification rules for checking properties of specifications. This has permitted us to examine not only the correctness of the SIL transformations, but also their generalization and composition.

TR-94-17 Link Strength in Bayesian Networks, May 1994 Brent Boerlage, 102 pages

This thesis introduces the concept of a connection strength (CS) between the nodes in a propositional Bayesian network (BN). Connection strength generalizes node independence from a binary property to a graded measure. The connection strength from node A to node B is a measure of the maximum amount that the belief in B will change when the truth value of A is learned. If the belief in B does not change, they are independent (zero CS), and if it changes a great deal, they are strongly connected (high CS).
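
Read literally, this prose definition suggests a measure along the lines of (our formalization, not necessarily the exact one used in the thesis)

$$CS(A, B) \;=\; \max\bigl(\,\lvert P(b \mid a) - P(b)\rvert,\; \lvert P(b \mid \neg a) - P(b)\rvert\,\bigr),$$

which is zero exactly when A and B are independent and grows as learning the truth value of A moves the belief in B further.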

Another concept introduced is the link strength (LS) between two adjacent nodes, which is an upper bound on that part of their connection strength which is due only to the link between them (and not other paths which may connect them). Calculating connection strengths is computationally expensive, while calculating link strengths is not. A linear complexity algorithm is provided which finds a bound on the connection strength between any two nodes by combining link strengths along the paths connecting them. Such an algorithm lends substance to notions of an "effect" or "influence" flowing along paths, and "effect" being attenuated by "weak" links, which is terminology that has appeared often in the literature, but only as an intuitive idea.

An algorithm for faster, approximate BN inference is presented, and connection strengths are used to provide bounds for its error. A system is proposed for BN diagrams to be drawn with strong links represented by heavy lines and weak links by fine lines, as a visualization aid for humans. Another visualization aid which is explored is the CS contour map, in which connection strengths from one node to the rest are represented as contour lines super-imposed on a regular BN diagram, allowing the viewer to quickly assess which nodes that node influences the most (or which nodes influence it the most). A non-trivial example BN is presented, some of its connection strengths are calculated, CS contour maps are constructed for it, and it is displayed with link strength indicated by line width.

TR-94-18 Prescriptions: A Language for Describing Software Configurations, June 1994 Jim Thornton, 44 pages

Automation of software configuration management is an important practical problem. Any automated tool must work from some specifications of correct or desired configurations. This report introduces a language for describing acceptable configurations of common systems, so that automated management is possible. The proposed language is declarative, while at the same time there are efficient algorithms for modifying a system to conform to a specification in many cases of practical importance.

TR-94-19 Vision Servers and Their Clients, October 1994 James J. Little, 13 pages

Robotic applications impose hard real-time demands on their vision components. To accommodate the real-time constraints, the visual components of robotic systems are often simplified by narrowing the scope of the vision system to a particular task. Another option is to build a generalized vision (sensor) processor that provides multiple interfaces, of differing scales and content, to other modules in the robot. Both options can be implemented in many ways, depending on computational resources.

The tradeoffs among these alternatives become clear when we study the vision process as a server whose clients request information about the world. We model the interface on client-server relations in user interfaces and operating systems. We examine the relation of this model to robot and vision sensor architecture and explore its application to a variety of vision sensor implementations.

TR-94-20 An Analysis of Buffer Sharing and Prefetching Techniques for Multimedia Systems, September 1994 Raymond T. Ng and Jinhai Yang, 30 pages

In this paper, we study the problem of how to maximize the throughput of a continuous-media system, given a fixed amount of buffer space and disk bandwidth both pre-determined at design-time. Our approach is to maximize the utilizations of disk and buffers. We propose doing so in two ways. First, we analyze a scheme that allows multiple streams to share buffers. Our analysis and preliminary simulation results indicate that buffer sharing could lead to as much as 50\% reduction in total buffer requirement. Second, we develop three prefetching strategies: SP, IP1 and IP2. As will be demonstrated by SP, straightforward prefetching is not effective at all. In contrast, IP1 and IP2, which prefetch more intelligently than does SP, could be valuable in maximizing the effective use of buffers and disk. Our preliminary simulation results show that IP1 and IP2 could lead to a 40\% improvement in throughput.

TR-94-21 Incremental Algorithms for Optimizing Model Computation Based on Partial Instantiation, September 1994 Raymond T. Ng and Xiaomei Tian, 28 pages

It has been shown that mixed integer programming methods can effectively support minimal model, stable model and well-founded model semantics for ground deductive databases. Recently, a novel approach called partial instantiation has been developed which, when integrated with mixed integer programming methods, can handle non-ground logic programs. The goal of this paper is to explore how this integrated framework based on partial instantiation can be optimized. In particular, we develop an incremental algorithm that minimizes repetitive computations. We also develop several optimization techniques to further enhance the efficiency of our incremental algorithm. Experimental results indicate that our algorithm and optimization techniques can bring about very significant improvement in run-time performance.

TR-94-22 How Fast Will The Flip Flop?, September 1994 Mark R. Greenstreet and Peter Cahoon, 10 pages

This report describes an experimental investigation of the application of dynamical systems theory to the verification of digital VLSI circuits. We analyze the behavior of a nine-transistor toggle element using a simple, SPICE-like model. We show how such properties as minimum and maximum clock frequency can be identified from topological features of solutions to the corresponding system of differential equations. This dynamical systems perspective also gives a clear, continuous-model interpretation of such phenomena as dynamic storage and timing hazards.

TR-94-23 Informal, Semi-Formal, and Formal Approaches to the Specification of Software Requirements, September 1994 Helene Marie Wong, 376 pages

(Abstract not available on-line)

TR-94-24 Defeasible Preferences and Goal Derivations, October 1994 Craig Boutilier, 47 pages

(Abstract not available on-line)

TR-94-25 RASP - Robotics and Animation Simulation Platform, October 1994 Gene S. Lee, 221 pages

(Abstract not available on-line)

TR-94-26 A Foundation for the Design and Analysis of Robotic Systems and Behaviors, October 1994 Zhang Ying, 245 pages

Robots are generally composed of electromechanical parts with multiple sensors and actuators. The overall behavior of a robot emerges from coordination among its various parts and interaction with its environment. Developing intelligent, reliable, robust and safe robots, or real-time embedded systems, has become a focus of interest in recent years. In this thesis, we establish a foundation for modeling, specifying and verifying discrete/continuous hybrid systems and take an integrated approach to the design and analysis of robotic systems and behaviors.

A robotic system in general is a hybrid dynamic system, consisting of continuous, discrete and event-driven components. We develop a semantic model for dynamic systems that we call Constraint Nets (CN). CN introduces an abstraction and a unitary framework to model discrete/continuous hybrid systems. CN provides aggregation operators to model a complex system hierarchically. CN supports multiple levels of abstraction, based on abstract algebra and topology, to model and analyze a system at different levels of detail. CN, because of its rigorous foundation, can be used to define programming semantics of real-time languages for control systems.

While modeling focuses on the underlying structure of a system --- the organization and coordination of its components --- requirements specification imposes global constraints on a system's behavior, and behavior verification ensures the correctness of the behavior with respect to its requirements specification. We develop a timed linear temporal logic and timed $\forall$-automata to specify timed as well as sequential behaviors. We develop a formal verification method for timed $\forall$-automata specification, by combining a generalized model checking technique for automata with a generalized stability analysis method for dynamic systems.

A good design methodology can simplify the verification of a robotic system. We develop a systematic approach to control synthesis from requirements specification, by exploring a relation between constraint satisfaction and dynamic systems using constraint methods. With this approach, control synthesis and behavior verification are coupled through requirements specification.

To model, synthesize, simulate, and understand various robotic systems we have studied in this research, we develop a visual programming and simulation environment that we call ALERT: A Laboratory for Embedded Real-Time systems.

TR-94-27 DECISION GRAPHS: Algorithms and Applications to Influence Diagram Evaluation and High-Level Path Planning Under Uncertainty, October 1994 Runping Qi, 225 pages

(Abstract not available on-line)

TR-94-28 Computing the largest inscribed isothetic rectangle, December 1994 Helmut Alt, David Hsu and Jack Snoeyink, 6 pages

This paper describes an algorithm to compute, in Theta(log n) time, a rectangle that is contained in a convex n-gon, has sides parallel to the coordinate axes, and has maximum area. With a slight modification it will compute the rectangle of maximum perimeter. The algorithm uses a tentative prune-and-search approach, even though this problem does not appear to fit into the functional framework of Kirkpatrick and Snoeyink.

TR-94-29 Conservative Approximations of Hybrid Systems, October 1994 Andrew K. Martin and Carl-Johan Seger, 29 pages

Systems that are modeled using both continuous and discrete mathematics are commonly called hybrid systems. Although much work has been done to develop frameworks in which both types of systems can be modeled at the same time, this is often a very difficult task. Verifying that desired properties hold in such hybrid models is even more daunting. In this paper we attack the problem from a different direction. First we make a distinction between two models of the system. A detailed model is developed as accurately as possible. Ultimately, one must trust in its correctness. An abstract model, which is typically less detailed, is actually used to verify properties of the system. The detailed model is typically defined in terms of both continuous and discrete mathematics, whereas the abstract one is typically discrete. We formally define the concept of conservative approximation, a relationship between models, that holds with respect to a translation between specification languages. We then progress by developing a theory that allows us to build a complicated detailed model by combining simple primitives. Simultaneously, we build a conservative approximation by similarly combining pre-defined parameterized approximations of those primitives.

TR-94-30 Weird-a-gons and Other Folded Objects: The Influence of Computer Animation, Paper Models and Cooperative Mediation on Spatial Understanding, October 1994 Rena Upitis, Richard Dearden, Kori Inkpen, Joan Lawry, Maria Klawe, Kelly Davidson, Stephen Leroux, David Hsu, Nic Thorne, Kamran Sedighian, Robert Scharein and Ann Anderson, 20 pages

(Abstract not available on-line)

TR-94-31 Exploiting Structure in Policy Construction, November 1994 Craig Boutilier, Richard Dearden and M. Goldszmidt

(Abstract not available on-line)

TR-94-32 Modeling Positional Uncertainty in Object Recognition, November 1994 Arthur R. Pope and David G. Lowe, 22 pages

Iterative alignment is one method for feature-based matching of an image and a model for the purpose of object recognition. The method alternately hypothesizes feature pairings and estimates a viewpoint transformation from those pairings; at each stage a refined transformation estimate is used to suggest additional pairings.

This paper extends iterative alignment in the domain of 2D similarity transformations so that it represents the uncertainty in the position of each model and image feature, and that of the transformation estimate. A model describes probabilistically the significance, position, and intrinsic attributes of each feature, plus topological relations among features. A measure of the match between a model and an image integrates all four of these, and leads to an efficient matching procedure called probabilistic alignment. That procedure supports both recognition and a learning procedure for acquiring models from training images.

By explicitly representing uncertainty, one model can satisfactorily describe appearance over a wider range of viewing conditions. Thus, when models represent 2D characteristic views of a 3D object, fewer models are needed. Experiments demonstrating the effectiveness of this approach are reported.

TR-94-33 Multiresolution Rough Terrain Motion Planning, November 1994 Dinesh K. Pai and L. M. Reissell, 24 pages

(Abstract not available on-line)

TR-94-34 Langwidere: A New Facial Animation System, January 31, 1994 David R. Forsey and Carol L.-Y. Wang, 12 pages

This paper presents Langwidere, a facial animation system. Langwidere is the basis for a flexible system capable of imitating a wide range of characteristics and actions, such as speech or expressing emotion. Langwidere integrates a hierarchical spline modeling system with simulated muscles based on local area surface deformation. The multi-level shape representation allows control over the extent of deformations, at the same time reducing the number of control vertices needed to define the surface. The head model is constructed from a single closed surface allowing the modeling of internal structures such as tongue and teeth, rather than just a mask. Simulated muscles are attached to various levels of the surface with more rudimentary levels substituting for bone such as the skull and jaw. The combination of a hierarchical model and simulated muscles provides precise, flexible surface control and supports easy generation of new characters with a minimum of recoding.

TR-94-35 A Multilevel Approach to Surface Response in Dynamically Deformable Models, January 31, 1994 Larry Palazzi and David R. Forsey, 12 pages

Discretized representations of deformable objects, based upon simple dynamic point-mass systems, rely upon the propagation of forces between neighbouring elements to produce a global change in the shape of the surface. Attempting to make such a surface rigid produces stiff equations that are costly to evaluate with any numerical stability. This paper introduces a new multilevel approach for controlling the response of a deformable object to external forces. The user specifies the amount of flexibility or stiffness of the surface by controlling how the applied forces propagate through the levels of a multi-resolution representation of the object. A wide range of surface behaviour is possible, and rigid motion is attained without resort to special numerical methods. This technique is applied to the displacement constraints method of Gascuel and Gascuel to provide explicit graduated control of the response of a deformable object to imposed forces.

TR-94-36 Chebyshev Polynomials for Boxing and Intersections of Parametric Curves and Surfaces, January 31, 1994 Alain Fournier and John Buchanan, 17 pages

Ray-tracing is a versatile and popular rendering technique. There is therefore a strong incentive to develop fast, accurate and reliable algorithms to intersect rays with parametric curves and surfaces. We propose and demonstrate the use of Chebyshev basis functions to speed up the computation of the intersections between rays and parametric curves or surfaces. The properties of Chebyshev polynomials result in the computation of better and tighter enclosing boxes. For surfaces they provide a better termination criterion to decide on the limits of subdivision, and allow the use of bilinear surfaces for the computation of the intersection when needed. The efficiency of the techniques used depends on the relative magnitude of the coefficients of the Chebyshev basis functions. We show from a statistical analysis of the characteristics of several thousand surfaces of different origins that these techniques will result most of the time in a significant improvement in speed and accuracy over other boxing and subdivision techniques.
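
The boxing property being exploited can be made concrete: since $|T_k(t)| \le 1$ on $[-1,1]$, each coordinate of a curve written in the Chebyshev basis is confined to an interval read off from its coefficients. The sketch below (our own function names, one coordinate at a time) computes that interval and evaluates the coordinate by the Clenshaw recurrence; it is only an illustration of the principle, not the boxing and subdivision machinery of the report.

# Sketch: one coordinate of a parametric curve written as
# x(t) = sum_k c[k] * T_k(t) on t in [-1, 1].
def chebyshev_box(c):
    """Interval guaranteed to contain x(t) for all t in [-1, 1], using |T_k| <= 1."""
    spread = sum(abs(ck) for ck in c[1:])
    return c[0] - spread, c[0] + spread

def clenshaw(c, t):
    """Evaluate x(t) by the Clenshaw recurrence for Chebyshev polynomials."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + ck, b1
    return t * b1 - b2 + c[0]

coeffs = [0.5, 0.3, -0.1, 0.02]    # example Chebyshev coefficients
print(chebyshev_box(coeffs))       # enclosing interval for this coordinate
print(clenshaw(coeffs, 0.25))      # point evaluation inside that interval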

TR-94-37 A Kinematic Model for Collision Response, July 13, 1994 Jason Harrison and David Forsey, 20 pages

One aspect of traditional 3D animation using clay or plasticine is the ease with which the object can be deformed. Animators take for granted the ability to interactively press complex objects together. In 3D computer animation, this ability is severely restricted and any improvement would drastically increase the range and style of animations that can be created within a production environment. This paper presents a simple, fast, geometric approach to controlling the nature, extent and timing of the surface deformations arising from the interpenetration of kinematically controlled animated objects. Rather than using dynamic simulations, which are difficult to configure, code, and control, the algorithm presented here formulates collision response kinematically by moving points on a multiresolution surface towards goal points at a certain rate. This new multi-resolution approach to deformation provides control over the response of the surface using a small number of parameters that determine how each level in the multi-resolution representation of the surface reacts to the interpenetration. The deformations are calculated in linear time and space proportional to the number of points used to define the surface.

TR-94-38 Making Shaders More Physically Plausible, March 4, 1993 Robert R. Lewis, 14 pages

There is a need to develop shaders that not only "look good", but are more physically plausible. From physical and geometric considerations, we review the derivation of a shading equation expressing reflected radiance in terms of incident radiance and the bidirectional reflectance distribution function (BRDF). We then examine the connection between this equation and conventional shaders used in computer graphics. Imposing the additional physical constraints of energy conservation and Helmholtz reciprocity allows us to create variations of the conventional shaders that are more physically plausible.
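
In the notation usually used for this equation (our transcription; the report's symbols may differ), the reflected radiance leaving a surface point in direction $\omega_o$ is

$$L_r(\omega_o) \;=\; \int_{\Omega} f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, \cos\theta_i \, d\omega_i,$$

and the two plausibility constraints mentioned are Helmholtz reciprocity, $f_r(\omega_i, \omega_o) = f_r(\omega_o, \omega_i)$, and energy conservation, $\int_{\Omega} f_r(\omega_i, \omega_o)\, \cos\theta_o \, d\omega_o \le 1$ for every $\omega_i$.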

TR-94-39 Real-Time Multivariate Data Visualization Using Preattentive Processing, January 31, 1994 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 35 pages

A new method is presented for visualizing data as they are generated from real-time applications. These techniques allow viewers to perform simple data analysis tasks such as detection of data groups and boundaries, target detection, and estimation. The goal is to do this rapidly and accurately on a dynamic sequence of data frames. Our techniques take advantage of an ability of the human visual system called preattentive processing. Preattentive processing refers to an initial organization of the visual system based on operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, orientation, intensity, size, curvature, and line length. We believe that studies from preattentive processing should be used to assist in the design of visualization tools, especially those for which high speed target, boundary, and region detection are important. Previous work has shown that results from research in preattentive processing can be used to build visualization tools which allow rapid and accurate analysis of individual, static data frames. We extend these techniques to a dynamic real-time environment. This allows users to perform similar tasks on dynamic sequences of frames, exactly like those generated by real-time systems such as visual interactive simulation. We studied two known preattentive features, hue and curvature. The primary question investigated was whether rapid and accurate target and boundary detection in dynamic sequences is possible using these features. Behavioral experiments were run that simulated displays from our preattentive visualization tools. Analysis of the results of the experiments showed that rapid and accurate target and boundary detection is possible with both hue and curvature. A second question, whether interactions occur between the two features in a real-time environment, was answered positively. This suggests that these and perhaps other visual features can be used to create visualization tools that allow high-speed multidimensional data analysis for use in real-time applications. It also shows that care must be taken in the assignment of data elements to preattentive features to avoid creating certain visual interference effects.

TR-94-40 Three-Dimensional Analysis of Scoliosis Surgery Using Stereophotogrammetry, January 31, 1994 Stanley B. Jang, Kellogg S. Booth, Chris W. Reily, Bonita J. Sawatzky and Stephen J. Tredwell, 8 pages

Scoliosis is a deformity characterized by coronal, sagittal and axial rotation of the spine. Surgical instrumentation (metal pins and rods) and eventual fusion are required in severe cases. Assessment of the correction requires enough accuracy to allow rational proactive planning of individual interventions or implant design. Conventional 2-D radiography and newer 3-D CT scanning do not provide this accuracy. A new stereophotogrammetric analysis and 3-D visualization allow accurate assessment of the scoliotic spine during instrumentation. Stereophoto pairs taken at each stage of the operation and robust statistical techniques are used to compute 3-D transformations of the vertebrae between stages. These determine rotation, translation, goodness of fit, and overall spinal contour. A polygonal model of the spine using a commercial 3-D modeling package is used to produce an animation sequence of the transformation. The visualizations have provided some important observations. Correction of the scoliosis is achieved largely through vertebral translation and coronal plane rotation, contrary to claims that large axial rotations are required. The animations provide valuable qualitative information for surgeons assessing the results of scoliotic correction. A detailed study of derotation provided by different instrumentation systems and the assessment of hook position patterns is underway.

TR-95-01 Buffer Sharing Schemes for Continuous-Media Systems, January 1995 Dwight J. Makaroff and Raymond T. Ng, 28 pages

Buffer management in continuous-media systems is a frequently studied topic. One of the most interesting recent proposals is the idea of buffer sharing for concurrent streams. As analyzed in~\cite{ny94}, by taking advantage of the temporal behaviour of concurrent streams, buffer sharing can lead to a 50\% savings in total buffer space. In this paper, we study how to actually implement buffer sharing. To this end, we develop the CES Buffer Sharing scheme that is very efficient to implement, and that permits savings asymptotically very close to the ideal savings predicted by the analysis in~\cite{ny94}. We show that the CES scheme can operate effectively under varying degrees of disk utilizations, and during transition periods when the number of concurrent streams changes. We also demonstrate how the scheme can be further improved, particularly for situations when the number of concurrent streams is small. In ongoing work, we will integrate the proposed scheme into a distributed continuous-media file system which is under development at the University of British Columbia.

TR-95-02 Geometric and Computational Aspects of Manufacturing Processes, January 1995 Prosenjit K. Bose, 103 pages

Two of the fundamental questions that arise in the manufacturing industry concerning every type of manufacturing process are:

1) Given an object, can it be built using a particular process? 2) Given that an object can be built using a particular process, what is the best way to construct the object?

The latter question gives rise to many different problems depending on how ``best'' is qualified. We address these problems for two complementary categories of manufacturing processes: rapid prototyping systems and casting processes. The method we use to address these problems is to first define a geometric model of the process in question and then answer the questions on that model.

In the category of rapid prototyping systems, we concentrate on stereolithography, which is emerging as one of the most popular rapid prototyping systems. We model stereolithography geometrically and then study the class of objects that admit a construction in this model. For the objects that admit a construction, we find the orientations that allow a construction of the object.

In the category of casting processes, we concentrate on gravity casting and injection molding. We first model the process and its components geometrically. We then characterize and recognize the objects that can be formed using a re-usable two-part cast. Given that a cast of an object can be formed, we determine a suitable location for the pin gate, the point from which liquid is poured or injected into a mold. Finally, we compute an orientation of a mold that ensures a complete fill and minimizes the number of venting holes for molds used in gravity casting processes.

TR-95-03 No Quadrangulation is Extremely Odd, January 1995 Prosenjit Bose and Godfried Toussaint, 17 pages

Given a set S of n points in the plane, a quadrangulation of S is a planar subdivision whose vertices are the points of S, whose outer face is the convex hull of S, and each of whose faces (except possibly the outer face) is a quadrilateral. We show that S admits a quadrangulation if and only if S does not have an odd number of extreme points. If S admits a quadrangulation, we present an algorithm that computes a quadrangulation of S in O(n log n) time even in the presence of collinear points. If S does not admit a quadrangulation, then our algorithm can quadrangulate S with the addition of one extra point, which is optimal. We also provide an \Omega(n \log n) time lower bound for the problem. Finally, our results imply that a k-angulation of a set of points can be achieved with the addition of at most k-3 extra points within the same time bound.

TR-95-04 Performance Measures for Constrained Systems, February 1995 Kees van den Doel and Dinesh K. Pai, 27 pages

We present a geometric theory of the performance of robot manipulators, applicable to systems with constraints, which may be non-holonomic. The performance is quantified by a geometrical object, the induced metric tensor, from which scalars may be constructed by invariant tensor operations to give performance measures. The measures thus defined depend on the metric structure of configuration and work space, which should be chosen appropriately for the problem at hand. The generality of this approach allows us to specify a system of joint connected rigid bodies with a large class of metrics. We describe how the induced metric can be computed for such a system of joint connected rigid bodies and describe a MATLAB program that allows the automatic computation of the performance measures for such systems. We illustrate these ideas with some computations of measures for the SARCOS dextrous arm, and the Platonic Beast, a multi-legged walking machine.
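
A standard way to obtain such an induced metric (our sketch of the usual pullback construction, not necessarily the report's exact formulation) is to pull the workspace metric $G$ back through the forward kinematic map $x = f(q)$ with Jacobian $J = \partial f / \partial q$:

$$g_{ij}(q) \;=\; \sum_{a,b} \frac{\partial f^a}{\partial q^i}\, G_{ab}\, \frac{\partial f^b}{\partial q^j}, \qquad \text{i.e.}\quad g = J^{\mathsf{T}} G J,$$

after which invariant scalars such as $\det g$ or the extreme eigenvalues of $g$ serve as coordinate-independent performance measures.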

TR-95-05 Rigidity Checking of 3D Point Correspondences Under Perspective Projection, March 1995 Daniel P. McReynolds and David G. Lowe, 39 pages

An algorithm is described which rapidly verifies the potential rigidity of three dimensional point correspondences from a pair of two dimensional views under perspective projection. The output of the algorithm is a simple yes or no answer to the question ``Could these corresponding points from two views be the projection of a rigid configuration?'' Potential applications include 3D object recognition from a single previous view and correspondence matching for stereo or motion over widely separated views. Our analysis begins with the observation that it is often the case that two views cannot provide an accurate structure-from-motion estimate because of ambiguity and ill-conditioning. However, it is argued that an accurate yes/no answer to the rigidity question is possible and experimental results support this assertion with as few as six pairs of corresponding points over a wide range of scene structures and viewing geometries. Rigidity checking verifies point correspondences by using 3D recovery equations as a matching condition. The proposed algorithm improves upon other methods that fall under this approach because it works with as few as six corresponding points under full perspective projection, handles correspondences from widely separated views, makes full use of the disparity of the correspondences, and is integrated with a linear algorithm for 3D recovery due to Kontsevich. The rigidity decision is based on the residual error of an integrated pair of linear and nonlinear structure-from-motion estimators. Results are given for experiments with synthetic and real image data. A complete implementation of this algorithm is being made publicly available.

TR-95-06 The UBC Distributed Continuous Media File System: Internal Design of Server, March 1995 Dwight J. Makaroff, Norman C. Hutchinson and Gerald W. Neufeld, 21 pages

This report describes the internal design of the UBC Distributed Continuous Media File Server as of April 1995. The most significant unique characteristic of this system is its approach to admission control which utilizes the time-varying requirements of the variable bit-rate data streams currently admitted into the system to properly allocate disk resources. The structure of the processes which implement the file server are described in detail as well as the communication between client processes and server processes. Each major client interface interaction is covered, as well as the detailed operation of the server in response to client requests. Buffer management considerations are introduced as they affect the admission control and disk operations. We conclude with the status of the implementation and plans for completion of the design and implementation. This document provides a snapshot of our design which has not yet been fully implemented and we expect to see significant evolution of the design as the implementation proceeds.

TR-95-07 Real Time Threads Interface, March 1995 David Finkelstein, Norman C. Hutchinson and Dwight J. Makaroff, 22 pages

The Real Time Threads package (abbreviated RT Threads) provides a user-level, preemptive kernel running inside a single address space (e.g., within a UNIX process). RT Threads implements thread management, synchronization, and communication functions, including communication between RT Threads environments (i.e., with different address spaces, possibly on different machines and different architectures). Threads are scheduled using a real-time, multi-priority, preemptive scheduling algorithm. Each thread is scheduled on the basis of its modifiable scheduling attributes: starting time, priority and deadline. No thread is scheduled before its starting time. Schedulable threads (i.e., threads whose starting time has passed) are scheduled on a highest priority first basis. Schedulable threads of equal priority use an earliest deadline first (EDF) scheduling policy. An RT Threads environment is cooperative in the sense that memory is shared among all threads, and each thread runs to completion unless preempted on the basis of priorities and deadlines. Alternate scheduling policies, such as time slicing, can be implemented at the application level using the scheduling mechanisms provided by RT Threads. This report describes the interface to the RT Threads package.
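
The scheduling policy described above fits in a few lines; the sketch below uses our own names (it is not the RT Threads interface itself) and assumes larger numbers mean higher priority.

# Sketch of the described policy (not the actual RT Threads API): among threads
# whose starting time has passed, pick the highest priority, breaking ties by
# earliest deadline (EDF).
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    start: float      # not schedulable before this time
    priority: int     # larger value = higher priority (an assumption)
    deadline: float   # EDF tie-break within a priority level

def pick_next(threads, now):
    runnable = [t for t in threads if t.start <= now]
    if not runnable:
        return None
    return min(runnable, key=lambda t: (-t.priority, t.deadline))

ready = [Thread("audio", 0.0, 10, 5.0), Thread("logger", 0.0, 1, 2.0),
         Thread("video", 3.0, 10, 4.0)]
print(pick_next(ready, now=1.0).name)    # "audio": "video" is not yet schedulable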

TR-95-09 Reflectance and Shape from a Rotating Object, April 1995 Jiping Lu and Jim Little, 26 pages

In this paper we show that the reflectance function of a rotating object illuminated under a collinear light source (where the light source lies on or near the optical axis) can be estimated from the image sequence of the object and applied to surface recovery. We first calculate the 3D locations of some singular points from the image sequence, and extract the brightness values of these singular points during the object rotation to estimate the surface reflectance function. Then we use the estimated reflectance function for surface recovery from the images of the rotating object. Two subprocedures are used in surface recovery. The first subprocedure computes the depth around a point of known depth and surface orientation by using first-order Taylor series approximation. The other computes the surface orientation of a surface point from its image brightness values in the two different images by applying the estimated reflectance function. Starting from surface points of known depth values and surface orientations and iteratively applying the two subprocedures, the surface depth and orientation are recovered simultaneously over the whole object surface. The experimental results on real image sequences of both matte and specular surfaces show that the technique is feasible and robust.

TR-95-11 Numerical Simulations of Semiconductor Devices by Streamline-Diffusion Methods, April 1995 Xunlei Jiang, 153 pages

Theoretical and practical aspects of the design and implementation of the streamline-diffusion (SD) method for semiconductor device models are explored systematically. Emphasis is placed on the hydrodynamic (HD) model, which is computationally more challenging than the drift-diffusion (DD) model, but provides some important physical information missing in the DD model.

We devise a non-symmetric SD method for device simulations. This numerical method is uniformly used for the HD model (including a proposed simplification (SHD)) and the DD model. An appropriate SD operator is derived for the general non-symmetric convection-diffusion system. Linear stability analysis shows that our proposed numerical method is stable if the system can be symmetrized. Stability arguments and numerical experiments also suggest that the combination of the method of lines and the semi-discrete SD method may not be appropriate for the transient problem, a fact which often has been ignored in the literature.

An efficient method, consistent with the SD method used for conservation laws, is developed for the potential equation. The method produces a more accurate electric field than the conventional Galerkin method. Moreover, it solves for the potential and electric field in a decoupled manner.

We apply our numerical method to the diode and MESFET devices. Shocks for the diode in one and two space dimensions and the electron depletion near the gate for the MESFET in two space dimensions are simulated. Model comparisons are implemented. We observe that the difference in solutions between the HD and DD models is significant. The solution discrepancy between the full HD and SHD models is almost negligible in MESFET simulation, as in many other engineering applications. However, an exceptional case is found in our experiments.

TR-95-12 Forward Dynamics, Elimination Methods, and Formulation Stiffness in Robot Simulation, April 1995 Dinesh Pai, Uri Ascher and Benoit Cloutier, 15 pages

The numerical simulation problem of tree-structured multibody systems, such as robot manipulators, is usually treated as two separate problems: (i) the forward dynamics problem for computing system accelerations, and (ii) the numerical integration problem for advancing the state in time. The interaction of these two problems can be important and has led to new conclusions about the overall efficiency of multibody simulation algorithms [ClPaAs95]. In particular, the fastest forward dynamics methods are not necessarily the most numerically stable, and in ill-conditioned cases may slow down popular adaptive step-size integration methods. This phenomenon is called "formulation stiffness".

In this paper, we first unify the derivation of both the composite rigid body method [WalkerOrin82] and the articulated-body method [Featherstone83,Featherstone87] as two elimination methods to solve the same linear system, with the articulated body method taking advantage of sparsity. Then the numerical instability phenomenon for the composite rigid body method is explained as a cancellation error that can be avoided, or at least minimized, when using an appropriate version of the articulated body method. Specifically, we show that the articulated-body method is better suited to deal with certain types of ill-conditioning than the composite rigid body method. The unified derivation also clarifies the underlying linear algebra of forward dynamics algorithms and is therefore of interest in its own right.

TR-95-13 A Simple Proof Checker for Real-Time Systems, June 1995 Catherine Leung, 193 pages

This thesis presents a practical approach to verifying real-time properties of VLSI designs. A simple proof checker with built-in decision procedures for linear programming and predicate calculus offers a pragmatic approach to verifying real-time systems in return for a slight loss of formal rigor when compared with traditional theorem provers. In this approach, an abstract data type represents the hypotheses, claim, and pending proof obligations at each step. A complete proof is a program that generates a proof state with the derived claim and no pending obligations. The user provides replacements for obligations and relies on the proof checker to validate the soundness of each operation. This design decision distinguishes the proof checker from traditional theorem provers, and enhances the view of ``proofs as programs''. This approach makes proofs robust to incremental changes, and there are few ``surprises'' when applying rewrite rules or decision procedures to proof obligations. A hand-written proof constructed to verify the timing correctness of a high bandwidth communication protocol was verified using this checker.

TR-95-14 Sequential Regularization Methods for Nonlinear Higher Index DAE's, July 1995 Uri M. Ascher and Ping Lin, 25 pages

Sequential regularization methods relate to a combination of stabilization methods and the usual penalty method for differential equations with algebraic equality constraints. The present paper extends an earlier work \cite{al} to nonlinear problems and to DAEs with index higher than 2. Rather than having one ``winning'' method, this is a class of methods from which a number of variants are singled out as being particularly effective methods in certain circumstances.

We propose sequential regularization methods for index-2 and index-3 DAEs, both with and without constraint singularities. In the case of no constraint singularity we prove convergence results. Numerical experiments confirm our theoretical predictions and demonstrate the viability of the proposed methods. The examples include constrained multibody systems.

TR-95-15 The Creation, Presentation and Implications of Selected Auditory Illusions, July 1995 Scott Flinn and Kellogg S. Booth, 43 pages

This report describes the initial phase of a project whose goal is to produce a rich acoustic environment in which the behaviour of multiple independent activities is communicated through perceptually distinguishable auditory streams. While much is known about the perception of isolated auditory phenomena, there are few general guidelines for the selection of auditory elements that can be composed to achieve a display that is effective in situations where the ambient acoustic conditions are uncontrolled. Several auditory illusions and effects are described in the areas of relative pitch discrimination, perception of auditory streams, and the natural association of visual and auditory stimuli. The effects have been evaluated informally through a set of demonstration programs that have been presented to a large and varied audience. Each auditory effect is introduced, suggestions for an effective demonstration are given, and our experience with the demonstration program is summarized. Implementation issues relevant to the reproduction of these effects on other platforms are also discussed. We conclude by describing several experiments aimed at resolving issues raised by our experience with these effects.

TR-95-16 Coordinating Heterogeneous Time-Based Media Between Independent Applications, July 1995 Scott Flinn, 42 pages

This report discusses the requirements and design of an event scheduler that facilitates the synchronization of independent, heterogeneous media streams. The work is motivated by the synchronization requirements of multiple, periodic, logically independent auditory streams, but extends naturally to include time-based media of arbitrary type. The scheduler design creates a framework within which existing synchronization techniques are composed to coordinate the presentation activities of cooperating or independent application programs. The scheduler is especially effective for the presentation of repetitive sequences, and guarantees long term synchronization with a hardware clock, even when scheduler capacity is temporarily exceeded on platforms lacking real time system support. The implementations of the scheduler and of several application programs, class libraries and other tools designed to use or support it are described in detail.

TR-95-17 XTP Application Programming Interface, July 1995 Rol Mechler and Gerald W. Neufeld, 24 pages

The Xpress Transport Protocol (XTP) is a lightweight transport protocol intended for high-speed networks. High-speed networks provide bandwidths of 100 Mbps and beyond, enabling a new class of applications (e.g., multimedia). So as not to be a bottleneck in the delivery of data, a transport protocol must provide high performance. Features of XTP which enhance performance include implicit connection setup, sender driven acknowledgement, selective retransmission, fixed format word aligned packet structures and suitability for parallel implementation. Since the new generation of applications may require a variety of services from the transport layer, a transport protocol designed for high-speed networks should also be flexible enough to provide these services. XTP provides the mechanisms to allow applications to tailor the functionality of the protocol to their individual needs. In particular, XTP provides flow control, rate control and error control, the use of each being optional and orthogonal to the others. This report describes an Application Programming Interface designed for a multi-threaded implementation of XTP. The API allows all XTP parameters to be set from the application level, and addresses the issue of performance by providing a mechanism for zero copy transmission and reception of data.

TR-95-18 Model Checking Partially Ordered State Spaces, July 1995 Scott Hazelhurst and Carl J. H. Seger, 31 pages

The state explosion problem is the fundamental limitation of verification through model checking. In many cases, representing the state space of a system as a lattice is an effective way of ameliorating this problem. The partial order of the state space lattice represents an information ordering. The paper shows why using a lattice structure is desirable, and why a quaternary temporal logic rather than a traditional binary temporal logic is suitable for describing properties in systems represented this way. The quaternary logic not only has the necessary technical properties, it also expresses degrees of truth. This is useful when dealing with a state space with an information ordering defined on it, where in some states there may be insufficient or contradictory information available. The paper presents the syntax and semantics of a quaternary-valued temporal logic.
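
One common way to arrange the four values (our sketch; the report's notation may differ) is as an information lattice with a bottom element $\bot$ for `no information', a top element $\top$ for `contradictory information', and the classical values in between:

$$\bot \;\sqsubseteq\; \mathrm{F}, \qquad \bot \;\sqsubseteq\; \mathrm{T}, \qquad \mathrm{F} \;\sqsubseteq\; \top, \qquad \mathrm{T} \;\sqsubseteq\; \top.$$

Under this ordering, states carrying more information sit higher in the lattice, matching the idea that a formula may be true, false, undetermined, or over-determined at a given state.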

Symbolic trajectory evaluation (STE) has been used to model check partially ordered state spaces with some success. The limitation of STE so far has been that the temporal logic used (a two-valued logic) has been restricted, whereas a more expressive temporal logic is often useful. This paper generalises the theory of symbolic trajectory evaluation to the quaternary temporal logic, which potentially provides an effective method of model checking an important class of formulas of the logic. Some practical model checking algorithms are briefly described and their use illustrated. This shows that not only can STE be used to check more expressive logics in principle, but that it is feasible to do so.

TR-95-19 A Shared 4-D Workspace, August 1995 Mir Ko, a and Peter Cahoon, 18 pages

A shared four-dimensional workspace is a shared animation of three-dimensional databases. Since shared animated workspaces are important to many different types of users, a pilot project was undertaken to implement a shared, time-varying workspace. This software permitted the give and take of control by users running the same application across an ATM fiber link running at 100 Mbits/sec.

The project was divided into two parts. In the first phase, animation of three-dimensional databases in stereo was implemented. In the second phase, sharing of the animation across the ATM network was added.

This paper discusses the interfaces to the program and presents outlines of implementations of the features in each of the project's two phases.

TR-95-20 On the Maximum Tolerable Noise for Reliable Computation by Formulas, September 1995 William Evans and Nicholas Pippenger, 16 pages

It is shown that if a formula is constructed from noisy 2-input NAND gates, with each gate failing independently with probability e, then reliable computation can or cannot take place according as e is less than or greater than e_0 = (3-sqrt(7))/4 = 0.08856....

TR-95-21 Verification of Benchmarks 17 and 22 of the IFIP WG10.5 Benchmark Circuit Suite, October 1995 Scott Hazelhurst and Carl J. H. Seger, 32 pages

This paper reports on the verification of two of the IFIP WG10.5 benchmarks --- the multiplier and systolic matrix multiplier. The circuit implementations are timed, detailed gate-level descriptions, and the specification is given using the temporal logic TLn, a quaternary-valued temporal logic. A practical, integrated theorem-proving/model checking system based on the compositional theory for TLn and symbolic trajectory evaluation is used to verify the circuits. A 64-bit version of the multiplier circuit (Benchmark 17) containing approximately 28~000 gates takes about 18 minutes of computation time to verify. A 4 by 4, 32-bit version of the matrix multiplier (Benchmark 22) containing over 110~000 gates takes about 170 minutes of computation time to verify. A significant timing error was discovered in this benchmark.

Keywords: symbolic trajectory evaluation, benchmarks, compositional verification, temporal logic, theorem proving.

TR-95-22 Optimal Algorithms to Embed Trees in a Point Set, October 1995 Prosenjit Bose, Michael McAllister and Jack Snoeyink, 11 pages

We present optimal Theta(n log n) time algorithms to solve two tree embedding problems whose solution previously took quadratic time or more: rooted-tree embeddings and degree-constrained embeddings. In the rooted-tree embedding problem we are given a rooted-tree T with n nodes and a set of n points P with one designated point p and are asked to find a straight-line embedding of T into P with the root at point p. In the degree-constrained embedding problem we are given a set of n points P where each point is assigned a positive degree and the degrees sum to 2n-2 and are asked to embed a tree in P using straight lines that respects the degrees assigned to each point of P. In both problems, the points of P must be in general position and the embeddings have no crossing edges.

TR-95-23 Pure versus Impure Lisp, October 1995 Nicholas Pippenger, 11 pages

The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, set-car! and set-cdr! in Scheme) that change the state of pairs without creating new pairs. It is well known that cyclic list structures can be created by impure programs, but not by pure ones. In this sense, impure Lisp is ``more powerful'' than pure Lisp. If the inputs and outputs of programs are restricted to be sequences of atomic symbols, however, this difference in computability disappears. We shall show that if the temporal sequence of input and output operations must be maintained (that is, if computations must be ``on-line''), then a difference in complexity remains: for a pure program to do what an impure program does in n steps, O(n log n) steps are sufficient, and in some cases Omega(n log n) steps are necessary.
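
The flavour of the O(n log n) upper bound can be conveyed by the standard trick of replacing mutable storage with a purely functional balanced structure, where each "update" copies only a logarithmic-length path. The sketch below (a generic Python illustration, not the paper's construction) implements a persistent array of size 2**depth this way:

    # Persistent (purely functional) array of size 2**depth.
    # "Updating" never mutates a node; it rebuilds the O(log n) path to the leaf.

    def make(depth, fill=None):
        if depth == 0:
            return fill
        sub = make(depth - 1, fill)
        return (sub, sub)                      # shared subtrees are fine: nothing mutates

    def get(tree, depth, i):
        if depth == 0:
            return tree
        half = 1 << (depth - 1)
        left, right = tree
        return get(left, depth - 1, i) if i < half else get(right, depth - 1, i - half)

    def put(tree, depth, i, value):
        if depth == 0:
            return value
        half = 1 << (depth - 1)
        left, right = tree
        if i < half:
            return (put(left, depth - 1, i, value), right)
        return (left, put(right, depth - 1, i - half, value))

    # Each put costs O(log n) instead of O(1), matching the stated overhead.
    a0 = make(4)                # capacity 16
    a1 = put(a0, 4, 3, "x")
    assert get(a1, 4, 3) == "x" and get(a0, 4, 3) is None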

TR-95-24 Three-Dimensional Analysis of Scoliosis Surgery Using Stereo Photogrammetry, December 1995 Kellogg S. Booth, Stanley B. Jang, Chris W. Reily and Bonita J. Sawatzky

Scoliosis is a deformity characterized by coronal, sagittal and axial rotation of the spine. Surgical instrumentation (metal pins and rods) and eventual fusion of the spine are required in severe cases. Assessment of the deformity requires enough accuracy to allow proactive planning of individual interventions or implant designs. Conventional 2-D radiography and even 3-D CT scanning do not provide this, but our new stereophotogrammetric analysis and 3-D visualization tools do. Stereophoto pairs taken at each stage of the operation and robust statistical techniques can be used to determine rotation, translation, goodness of fit, and overall spinal contour before, during and after the surgical instrumentation.

Novel features of our software include 3-D digitizing software that improves existing stereophotogrammetry methods, robust statistical methods for measuring 3-D deformity and estimating errors, full 3-D visualization of spinal deformities with optional head-coupled stereo ("fish tank virtual reality"), full integration with commercial animation software, use of consumer PhotoCD technology that significantly lowers costs for data collection and storage while increasing accuracy, and simultaneous 3-D viewing and control at remote locations over high-speed ATM networks.
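
For a sense of the underlying computation, a basic least-squares rigid alignment between two digitized 3D marker sets (the classical SVD-based fit, which the report's robust statistical methods refine) looks like this; the marker coordinates below are invented:

    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t with Q ~= P @ R.T + t."""
        Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
        H = (P - Pc).T @ (Q - Qc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = Qc - R @ Pc
        return R, t

    # Hypothetical marker positions before and after an intervention.
    P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    theta = np.radians(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])

    R, t = rigid_fit(P, Q)
    residual = np.linalg.norm(P @ R.T + t - Q)      # goodness of fit
    print(R, t, residual)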

TR-95-25 Separating Reflection Functions for Linear Radiosity, December 1995 Alain Fournier

Classic radiosity assumes diffuse reflectors in order to consider only pair-wise exchanges of light between elements. It has been previously shown that one can use the same system of equations with separable bi-directional reflection distribution functions (BRDFs), that is, BRDFs that can be put in the form of a product of two functions, one of the incident direction and one of the reflected direction.

We show here that this can be easily extended to BRDFs that can be approximated by sums of such terms. The classic technique of Singular Value Decomposition (SVD) can be used to compute those terms given an analytical or experimental BRDF. We use the example of the traditional Phong model for specular-like reflection to extract a separable model, and show the results in terms of closeness to ordinary Phong shading. We also show an example with experimental BRDF data. Further work will indicate whether the quality of linear radiosity images will be improved by this modification.
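
A small numerical illustration of the idea, assuming the BRDF has been sampled into a matrix indexed by incident and reflected angles (each direction reduced to a single angle for simplicity, with an arbitrary Phong-like lobe as test data):

    import numpy as np

    # Sample a Phong-like lobe f(theta_in, theta_out) on a coarse angular grid.
    n_in, n_out, shininess = 32, 32, 20
    ti = np.linspace(0, np.pi / 2, n_in)
    to = np.linspace(0, np.pi / 2, n_out)
    M = np.cos(np.abs(ti[:, None] - to[None, :])) ** shininess   # BRDF samples

    # SVD expresses M as a sum of separable (rank-1) terms u_k(in) * v_k(out).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = 3                                                        # number of terms kept
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

    rel_err = np.linalg.norm(M - approx) / np.linalg.norm(M)
    print(f"relative error with {k} separable terms: {rel_err:.4f}")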

TR-95-26 From Local to Global Illumination and Back, December 1995 Alain Fournier

The following are musings about illumination problems and illumination answers, more particularly about the evolution of and the interplay between local and global illumination concerns.

TR-95-27 Vide Hoc: A Visualization for Homogeneous Coordinates, December 1995 Robert R. Lewis

VideHoc is an interactive graphical program that visualizes two-dimensional homogeneous coordinates. Users manipulate data in one of four views and all views are dynamically updated to reflect the change.

TR-95-28 Light-Driven Global Illumination with a Wavelet Representation of Light Transport, December 1995 Robert R. Lewis and Alain Fournier

We describe the basis of the work we have currently under way to implement a new rendering algorithm called "light-driven global illumination". This algorithm is a departure from conventional raytracing and radiosity renderers which addresses a number of deficiencies intrinsic to those approaches.

TR-95-29 Union of Spheres Model for Volumetric Data, December 1995 Vishwa Ranjan and Alain Fournier

A stable representation of an object means that the representation is unique, is independent of the sampling geometry, resolution, noise, and other small distortions in the data, and is instead linked to the shape of the object. Stable representations help characterize shapes for comparison or recognition; skeletal (or medial axis) and volumetric primitive models have been popular in vision for the same reason. Piecewise polyhedral representations, e.g., tetrahedra, and voxel representations, e.g., octrees, generally tend to be unstable. We propose a representation for 3D objects based on the set union of overlapping sphere primitives. This union of spheres (UoS) model has some attractive properties for computer graphics, computational vision, and scientific visualization.

TR-95-30 Shape Transformations Using Union of Spheres, December 1995 Vishwa Ranjan and Alain Fournier

Shape interpolation is the process of transforming one object continuously into another. This is useful in applications such as object recognition, object registration and computer animation. Unfortunately, "good" shape interpolation is as ill-defined as "shape" itself. To be able to control the process in a useful way, we need a representation for the objects using primitives which capture at least some aspects of their shape, with methods to convert other representations to this one. We present here a method to interpolate between two objects represented as a union of spheres. We briefly describe the representation and its properties, and show how to use it to interpolate. Once a distance metric between the spheres is defined (we show different metrics producing controlled effects), the algorithm optimally matches the spheres in the two models using a bipartite graph. The transformation then consists of interpolating between the matched spheres. If the union of spheres has been simplified, the other spheres are matched as a function of their positions within their representative cluster. Examples are shown and discussed with two- and three-dimensional objects. The results show that the union of spheres helps capture some notion of shape, and helps to automatically match and interpolate shapes.
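
A minimal sketch of the matching-and-interpolation step, assuming both models are reduced to equally many spheres (x, y, z, r) and using a plain squared-difference cost; the paper's metrics and cluster handling are richer than this:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def interpolate_spheres(A, B, t):
        """A, B: (n, 4) arrays of spheres (x, y, z, r); t in [0, 1]."""
        # Cost of matching sphere i of A to sphere j of B (one possible metric).
        diff = A[:, None, :] - B[None, :, :]
        cost = np.sum(diff ** 2, axis=-1)
        rows, cols = linear_sum_assignment(cost)       # optimal bipartite matching
        # Linear interpolation between matched centres and radii.
        return (1 - t) * A[rows] + t * B[cols]

    A = np.array([[0.0, 0, 0, 1.0], [2, 0, 0, 0.5]])
    B = np.array([[0.0, 3, 0, 0.8], [2, 3, 0, 0.7]])
    print(interpolate_spheres(A, B, 0.5))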

TR-95-31 Multiresolution Surface Approximation, December 1995 David R. Forsey and David Wong

This paper presents a method for automatically generating a hierarchical B-spline surface from an initial set of control points. Given an existing mesh of control points, a mesh with half the resolution is constructed by simultaneously approximating the finer mesh while minimizing a smoothness constraint using weighted least squares. Curvature measures are used to identify features that need only be represented in the finer mesh. The resulting hierarchical surface accurately and economically reproduces the original mesh, is free from excessive undulations in the intermediate levels and produces a multiresolution representation suitable for animation and interactive modelling.

TR-95-32 Pasting Spline Surfaces, December 1995 C. Banghiel, Richard H. Bartels and David R. Forsey

Details can be added to spline surfaces by applying displacement maps. However, the computational intensity of displacement mapping prevents its use for interactive design. In this paper we explore a form of simulated displacement that can be used for interactive design by providing a preview of true displacements at low computational cost.

TR-95-33 Surface Fitting with Hierarchical Splines, December 1995 David R. Forsey and Richard H. Bartels

We consider the fitting of tensor product parametric spline surfaces to gridded data. The continuity of the surface is provided by the basis chosen. When tensor product splines are used with gridded data, the surface fitting problem decomposes into a sequence of curve fitting processes, making the computations particularly efficient. The use of a hierarchical representation for the surface adds further efficiency by adaptively decomposing the fitting process into subproblems involving only a portion of the data. Hierarchy also provides a means of storing the resulting surface in a compressed format. Our approach is compared to multiresolution analysis and the use of wavelets.
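
The decomposition into curve fits is visible directly in the least-squares algebra: minimizing $\|B_u C B_v^T - Z\|_F$ over the control points $C$ splits into a fit along one parameter direction followed by a fit along the other. A generic NumPy sketch, with small random matrices standing in for the B-spline basis matrices:

    import numpy as np

    def tensor_product_fit(Bu, Bv, Z):
        """Least-squares control points C minimizing ||Bu @ C @ Bv.T - Z||_F."""
        # Stage 1: a curve fit down each column of the gridded data.
        D, *_ = np.linalg.lstsq(Bu, Z, rcond=None)        # shape (p, n)
        # Stage 2: a curve fit along each row of the intermediate result.
        C_T, *_ = np.linalg.lstsq(Bv, D.T, rcond=None)    # solves Bv @ C.T ~= D.T
        return C_T.T

    # Toy example with random basis matrices standing in for B-splines.
    rng = np.random.default_rng(0)
    Bu, Bv = rng.random((12, 5)), rng.random((10, 4))
    C_true = rng.random((5, 4))
    Z = Bu @ C_true @ Bv.T
    C = tensor_product_fit(Bu, Bv, Z)
    print(np.allclose(C, C_true))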

TR-95-34 Regularization Methods for Differential Equations and Their Numerical Solution, December 1995 Ping Lin, 168 pages

The objectives of the thesis are to propose and to investigate approximate methods for various differential equations with or without constraints. Most attention is paid to ordinary and partial differential equations with constraints (where the solution is known to lie in an explicitly defined invariant manifold). We propose and analyze a regularization method called the sequential regularization method (SRM) and its numerical approximation. A very important improvement of the SRM over usual regularization methods is that the problem after regularization need not be stiff. Hence explicit difference schemes can be used to avoid solving nonlinear systems. This makes the computation much simpler. The method is applied in several application fields such as mechanical constrained multi-body systems, the nonstationary incompressible Navier-Stokes equations, which are an example of partial differential equations with constraints (PDAEs), and miscible displacement in porous media in reservoir simulation. Improvements over stabilization methods that stabilize the invariant manifold over long time intervals and extra benefits for these applied problems are also achieved.

We finally discuss the numerical solution of several singular perturbation problems, which arise in many applied areas and from regularized problems. Schemes that converge uniformly with respect to the perturbation parameter are constructed and their convergence is proved. A spurious solution phenomenon for an upwinding scheme is analyzed.

TR-95-35 Illumination Problems in Computer Augmented Reality, January 31, 1994 Alain Fournier, 22 pages

The ability to merge a real video image (RVI) with a computer-generated image (CGI) enhances the usefulness of both. To go beyond "cut and paste" and chroma-keying, and merge the two images successfully, one must solve the problems of common viewing parameters, common visibility and common illumination. The result can be dubbed Computer Augmented Reality (CAR). The solution needs contributions from both computer graphics and computer vision. The problems of common illumination are especially challenging, because they test our understanding and practice of shadow and global illumination computation. In this paper we will describe and illustrate work in our laboratory where the emphasis is on extracting illumination information from real images and computing the common illumination between the real and the computer-generated scene.

TR-96-01 Diamonds are not a Minimum Weight Triangulation's Best Friend, January 1996 Prosenjit Bose, Luc Devroye and William Evans, 10 pages

Two recent methods have increased hopes of finding a polynomial time solution to the problem of computing the minimum weight triangulation of a set $S$ of $n$ points in the plane. Both involve computing what was believed to be a connected or nearly connected subgraph of the minimum weight triangulation, and then completing the triangulation optimally. The first method uses the light graph of $S$ as its initial subgraph. The second method uses the \lmt-skeleton of $S$. Both methods rely, for their polynomial time bound, on the initial subgraphs having only a constant number of components. Experiments performed by the authors of these methods seemed to confirm that randomly chosen point sets displayed this desired property. We show that there exist point sets where the number of components is linear in $n$. In fact, the expected number of components in either graph on a randomly chosen point set is linear in $n$, and the probability of the number of components exceeding some constant times $n$ tends to one.

TR-96-02 Compositional Model Checking of Partially Ordered State Spaces, January 1996 Scott Hazelhurst, 268 pages

Symbolic trajectory evaluation (STE) --- a model checking technique based on partial order representations of state spaces --- has been shown to be effective for large circuit models. However, the temporal logic that it supports is restricted, and, as with all verification techniques, it has significant performance limitations. The demand for verifying larger circuits and the need for greater expressiveness require that both these problems be examined.

The thesis develops a suitable logical framework for model checking partially ordered state spaces: the temporal logic \TL\ and its associated satisfaction relations, based on the quaternary logic $\Q$. \TL\ is appropriate for expressing the truth of propositions about partially ordered state spaces, and has suitable technical properties that allow STE to support a richer temporal logic. Using this framework, verification conditions called \emph{assertions} are defined, a generalised version of STE is developed, and three STE-based algorithms are proposed for investigation. Advantages of this style of proof include: models of time are incorporated; circuits can be described at a low level; and correctness properties are expressed at a relatively high level.

A primary contribution of the thesis is the development of a compositional theory for \TL\ assertions. This compositional theory is supported by the partial order representation of state space. To show the practical use of the compositional theory, two prototype verification systems were constructed, integrating theorem proving and STE. Data is manipulated efficiently by using binary decision diagrams as well as symbolic data representation methods. Simple heuristics and a flexible interface reduce the human cost of verification.

Experiments were undertaken using these prototypes, including verifying two circuits from the IFIP WG 10.5 Benchmark suite. These experiments showed that the generalised STE algorithms were effective, and that through the use of the compositional theory it is possible to verify very large circuits completely, including detailed timing properties.

TR-96-03 The Sounds of Physical Shapes, February 1996 Kees van den Doel and Dinesh K. Pai, 17 pages

We propose a general framework for the simulation of sounds produced by colliding physical objects in a real time graphics environment. The framework is based on the vibration dynamics of bodies. The computed sounds depend on the material of the body, its shape, and the location of the impact.

Specifically, we show how to compute (1) the spectral signature of each body (its natural frequencies), which depends on the material and the shape, (2) the ``timbre'' of the vibration (the relative amplitudes of the spectral components) generated by an impulsive force applied to the object at a grid of locations, (3) the decay rates of the various frequency components, which correlate with the type of material based on its internal friction parameter, and finally (4) the mapping of sounds onto the object's geometry for real-time rendering of the resulting sound.

The framework has been implemented in a Sonic Explorer program, which simulates a room with several objects such as a chair, tables, and rods. After a preprocessing stage, the user can hit the objects at different points to interactively produce realistic sounds.
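
Put together, the quantities listed above drive the usual modal-synthesis formula: the rendered sound is a sum of exponentially decaying sinusoids. A compact illustration follows; the frequencies, amplitudes and decay rates are invented rather than taken from the Sonic Explorer system:

    import numpy as np

    def modal_sound(freqs, amps, decays, duration=1.0, sr=44100):
        """Sum of damped sinusoids: sum_k a_k * exp(-d_k t) * sin(2 pi f_k t)."""
        t = np.arange(int(duration * sr)) / sr
        out = np.zeros_like(t)
        for f, a, d in zip(freqs, amps, decays):
            out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
        return out / np.max(np.abs(out))       # normalize for playback

    # A few hypothetical modes of a struck metal bar.
    samples = modal_sound(freqs=[440.0, 1210.0, 2050.0],
                          amps=[1.0, 0.5, 0.25],
                          decays=[3.0, 5.0, 8.0])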

TR-96-04 Heterogeneous Process Migration: The Tui System, February 1996 Peter Smith and Norman C. Hutchinson, 32 pages

Heterogeneous Process Migration is a technique whereby an active process is moved from one machine to another. It must then continue normal execution and communication. The source and destination processors can have a different architecture, that is, different instruction sets and data formats. Because of this heterogeneity, the entire process memory image must be translated during the migration.

"Tui" is a prototype migration system that is able to translate the memory image of a program (written in ANSI-C) between four common architectures (m68000, SPARC, i486 and PowerPC). This requires detailed knowledge of all data types and variables used with the program. This is not always possible in non type-safe (but popular) languages such as C, Pascal and Fortran.

The important features of the Tui algorithm are discussed in great detail. This includes the method by which a program's entire set of data values can be located, and eventually reconstructed on the target processor. Initial performance figures demonstrating the viability of using Tui for real migration applications are given.
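
At the level of a single datum, translating between architectures amounts to decoding a value in the source format and re-encoding it in the destination format. The fragment below shows only the byte-order aspect for 32-bit integers, using Python's struct module as a stand-in for Tui's compiler-assisted translation of a whole memory image:

    import struct

    # A little-endian (e.g., i486-style) encoding of three 32-bit integers.
    little = struct.pack("<3i", 1, 2, 70000)

    # Decode using the source format, re-encode using the destination
    # (big-endian, e.g., SPARC-style) format.
    values = struct.unpack("<3i", little)
    big = struct.pack(">3i", *values)

    assert struct.unpack(">3i", big) == (1, 2, 70000)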

TR-96-05 Simplifying Terrain Models and Measuring Terrain Model Accuracy, May 1996 David Scott Andrews, 48 pages

We describe a set of TIN simplification methods that enable the use of the triangulation hierarchy introduced by Kirkpatrick and modified by de Berg and Dobrindt. This triangulation hierarchy can be used to form a terrain model combining areas with varying levels of detail. One variant of the delete simplification method formed simplifications with accuracy close to the greedy method.

We also investigated different variables that can be used to measure the accuracy of our simplified terrain models. Although the use of derivative statistics did not significantly alter our evaluation of the performance of our simplification methods, we recommend that future comparisons take these alternative measures of surface characterization into account.

TR-96-06 Importance Ordering for Real-Time Depth of Field, February 1996 Paul Fearing

Depth of field (DOF) is an important component of real photography. As such, it is a valuable addition to the library of techniques used in photorealistic rendering. Several methods have been proposed for implementing DOF effects. Unfortunately, all existing methods require a great deal of computation. This prohibitive cost has precluded DOF effects from being used with any great regularity.

This paper introduces a new way of computing DOF that is particularly effective for sequences of related frames (animations). It computes the most noticeable DOF effects first, and works on areas of lesser importance only if there is enough time. Areas that do not change between frames are not computed.

All pixels in the image are assigned an importance value. This importance gives priority to pixels that have recently changed in color, depth, or degree of focus. Changes originate from object and light animation, or from variation in the camera's position or focus.

Image pixels are then recomputed in order of importance. At any point, the computation can be interrupted and the results displayed. Varying the interruption point allows a smooth tradeoff between image accuracy and result speed. If enough time is provided, the algorithm generates the exact solution.

Practically, this algorithm avoids the continual recomputing of large numbers of unchanging pixels. This can provide order-of-magnitude speedups in many common animation situations. This increase in speed brings DOF effects into the realm of real-time graphics.
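
In outline, the recomputation loop is a priority queue of pixels keyed by importance, drained until the queue empties or a frame deadline arrives. A schematic version follows; the importance scores and the shade() routine are placeholders for the quantities described above:

    import heapq, time

    def refine(pixels, importance, shade, budget_s):
        """Recompute the most important pixels first, stopping at the deadline.

        pixels      -- iterable of pixel coordinates
        importance  -- dict mapping pixel -> importance score (higher = sooner)
        shade       -- function computing the (expensive) DOF result for a pixel
        budget_s    -- time budget in seconds; interruption yields a partial result
        """
        heap = [(-importance[p], p) for p in pixels if importance[p] > 0]
        heapq.heapify(heap)                      # max-importance first
        deadline = time.monotonic() + budget_s
        results = {}
        while heap and time.monotonic() < deadline:
            _, p = heapq.heappop(heap)
            results[p] = shade(p)                # exact value for this pixel
        return results                           # untouched pixels keep last frame's value

    # Example: refine a 4x4 image for at most 5 ms with a dummy shader.
    pix = [(x, y) for x in range(4) for y in range(4)]
    imp = {p: p[0] + p[1] for p in pix}
    print(len(refine(pix, imp, shade=lambda p: 0.0, budget_s=0.005)))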

TR-96-07 Wavelet Radiative Transfer and Surface Interaction, May 1996 Robert R. Lewis

In this work-in-progress paper, we present a solution to the illumination problem that is intermediate between the conventional local and global approaches to illumination. It involves the representation of radiance on a surface as a finite element expansion in terms of wavelets.

By expanding in terms of ``Nusselt coordinates'', we show how irradiance, transport, and surface interaction can be evaluated simply and directly in terms of wavelet coefficients. We present an example of transport.

TR-96-09 A Perceptual Colour Segmentation Algorithm, June 1996 Christopher G. Healey and James T. Enns, 170 pages

This paper presents a simple method for segmenting colour regions into categories like red, green, blue, and yellow. We are interested in studying how colour categories influence colour selection during scientific visualization. The ability to name individual colours is also important in other problem domains like real-time displays, user-interface design, and medical imaging systems. Our algorithm uses the Munsell and CIE LUV colour models to automatically segment a colour space like RGB or CIE XYZ into ten colour categories. Users are then asked to name a small number of representative colours from each category. This provides three important results: a measure of the perceptual overlap between neighbouring categories, a measure of a category's strength, and a user-chosen name for each strong category.

We evaluated our technique by segmenting known colour regions from the RGB, HSV, and CIE LUV colour models. The names we obtained were accurate, and the boundaries between different colour categories were well defined. We concluded our investigation by conducting an experiment to obtain user-chosen names and perceptual overlap for ten colour categories along the circumference of a colour wheel in CIE LUV.
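
As a deliberately simplified stand-in for the segmentation described above (which works in the Munsell and CIE LUV models, not in HSV), hue-based bucketing already conveys the flavour of mapping colours to a small set of named categories; the hue boundaries below are illustrative guesses rather than the paper's measured category boundaries:

    import colorsys

    # Approximate hue ranges (degrees) for a few named categories.
    CATEGORIES = [(15, "red"), (45, "orange"), (70, "yellow"),
                  (165, "green"), (260, "blue"), (330, "purple"), (360, "red")]

    def name_colour(r, g, b):
        """Map an RGB triple in [0, 1] to a coarse colour-category name."""
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if v < 0.15:
            return "black"
        if s < 0.15:
            return "white" if v > 0.85 else "grey"
        hue = h * 360.0
        for upper, name in CATEGORIES:
            if hue <= upper:
                return name
        return "red"

    print(name_colour(0.9, 0.1, 0.1), name_colour(0.2, 0.4, 0.9))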

TR-96-10 Choosing Effective Colours for Data Visualization, June 1996 Christopher G. Healey, 10 pages

In this paper we describe a technique for choosing multiple colours for use during data visualization. Our goal is a systematic method for maximizing the total number of colours available for use, while still allowing an observer to rapidly and accurately search a display for any one of the given colours. Previous research suggests that we need to consider three separate effects during colour selection: colour distance, linear separation, and colour category. We describe a simple method for measuring and controlling all of these effects. Our method was tested by performing a set of target identification studies; we analysed the ability of thirty-eight observers to find a colour target in displays that contained differently coloured background elements. Results showed that our method can be used to select a group of colours that will provide good differentiation between data elements during visualization.

TR-96-11 Experimental Design: Input Device Protocols and Collaborative Learning, June 1996 Joanna McGrenere, Kori Inkpen, Kellogg Booth and Maria Klawe, 49 pages

This document outlines an experimental design for a study that investigates peer collaboration in a computer-supported learning environment. Sections of this document have been adapted from a CPSC 533b term report by McGrenere et al. (1995) which outlined a similar experimental design. In the proposed study we examine different ways of supporting peer collaboration which, for the purposes of this study, refers to two students working on a single computer playing an electronic game. The standard computer is configured with only one mouse and therefore when two students share a computer they need to share the mouse as well. We want to investigate the impact of adding a second mouse to the configuration such that each child would have their own mouse.

TR-96-12 Design: Educational Multi-Player Games A Literature Review, June 1996 Joanna McGrenere, 92 pages

Over the past two decades electronic games have become ingrained in our culture. Children's fixation with these games initially alarmed parents and educators, but educational researchers soon questioned whether the motivation to play could be tapped and harnessed for educational purposes. A number of educational electronic games have been developed and their success has been mixed. The great majority of these games are designed for single players; if there is more than one player, the players are usually required to take turns playing. Although learning within a cooperative group setting has been found to be extremely effective, designing educational games to support multiple players working together has received little attention. Using a multi-player game format could provide the motivation that children need to learn and at the same time enhance both the achievement and the social interactions of the children. In order to design multi-player educational games we must understand what motivates children to play electronic games, how to incorporate educational content into electronic games, and how to develop appropriate multi-person educational tasks. An understanding of design issues for multi-user software is also required.

This essay is a literature review that addresses the issues involved in the design of educational electronic multi-player games. The relevant bodies of literature include human-computer interaction, electronic games, educational electronic games, and electronic multi-player games. Two of the most relevant areas of the human-computer interaction literature are Computer-Supported Cooperative Work (CSCW) and Computer-Supported Collaborative Learning (CSCL). All of the bodies of literature are discussed with respect to educational electronic multi-player games, areas where further research is required are noted, and general design guidelines for educational electronic multi-player games are offered.

TR-96-13 Shared 3D Workspaces, July 31, 1996 Joanna McGrenere and Kellogg S. Booth, 24 pages

The literature on Computer-Supported Cooperative Work (CSCW) includes considerable research on 2D shared spaces and metaphors. Much less exists concerning 3D shared spaces and metaphors. This report defines terminology relevant to shared 3D workspaces, summarizes a number of areas within the literature on 2D and 3D interaction and collaboration, and identifies pertinent issues for further research. Among the issues identified are:
- Can a 2D shared space metaphor be extended to 3D?
- How is interaction different in 3D than in 2D?
- Can metaphors for single-user 3D interaction be extended to shared 3D interaction?

TR-96-15 Algorithmic Aspects of Constrained Unit Disk Graphs, September 1996 Heinz Breu, 369 pages

Computational problems on graphs often arise in two- or three- dimensional geometric contexts. Such problems include assigning channels to radio transmitters (graph colouring), physically routing traces on a printed circuit board (graph drawing), and modelling molecules. It is reasonable to expect that natural graph problems have more efficient solutions when restricted to such geometric graphs. Unfortunately, many familiar NP-complete problems remain NP-complete on geometric graphs.

Indifference graphs arise in a one-dimensional geometric context; they are the intersection graphs of unit intervals on the line. Many NP-complete problems on arbitrary graphs do have efficient solutions on indifference graphs. Yet these same problems remain NP-complete for the intersection graphs of unit disks in the plane (unit disk graphs), a natural two-dimensional generalization of indifference graphs. What accounts for this situation, and how can algorithms be designed to deal with it?

To study these issues, this thesis identifies a range of subclasses of unit disk graphs in which the second spatial dimension is gradually introduced. More specifically, tau-strip graphs ``interpolate'' between unit disk graphs and indifference graphs; they are the intersection graphs of unit-diameter disks whose centres are constrained to lie in a strip of thickness tau. This thesis studies algorithmic and structural aspects of varying the value tau for tau-strip graphs.

The thesis takes significant steps towards characterizing, recognizing, and laying out strip graphs. We will also see how to develop algorithms for several problems on strip graphs, and how to exploit their geometric representation. In particular, we will see that problems become especially tractable when the strips are ``thin'' (tau is small) or ``discrete'' (the number of possible y-coordinates for the disks is small). Note again that indifference graphs are the thinnest (tau=0) and most discrete (one y-coordinate) of the nontrivial tau-strip graphs.

The immediate results of this research concern algorithms for a specific class of graphs. The real contribution of this research is the elucidation of when and where geometry can be exploited in the development of efficient graph theoretic algorithms.

TR-96-16 Civil Law and the Development of Software Engineering, September 1996 Martina Shapiro, 40 pages

This paper provides software engineers with an understanding of the basic tenets of civil law and how legal liability principles may be applied by the courts so as to affect the future direction of software engineering. The issues discussed are based on a review of selected literature in software engineering, civil law reported case decisions and texts as well as on interviews with legal consultants. Examples of some court rulings in cases involving software malfunction are included, along with some projections relating to the possible implications for software development arising from the anticipated reaction of the courts to the novel issues presented by software engineering.

TR-96-19 Lower Bounds for Noisy Boolean Decision Trees, September 1996 William Evans and Nicholas Pippenger, 18 pages

We present a new method for deriving lower bounds to the expected number of queries made by noisy decision trees computing Boolean functions. The new method has the feature that expectations are taken with respect to a uniformly distributed random input, as well as with respect to the random noise, thus yielding stronger lower bounds. It also applies to many more functions than do previous results. The method yields a simple proof of the result (previously established by Reischuk and Schmeltz) that almost all Boolean functions of n arguments require Omega(n log n) queries, and strengthens this bound from the worst-case over inputs to the average over inputs. The method also yields bounds for specific Boolean functions in terms of their spectra (their Fourier transforms). The simplest instance of this spectral bound yields the result (previously established by Feige, Peleg, Raghavan and Upfal) that the parity function of n arguments requires Omega(n log n) queries, and again strengthens this bound from the worst-case over inputs to the average over inputs. In its full generality, the spectral bound applies to the "highly resilient" functions introduced by Chor, Friedman, Goldreich, Hastad, Rudich and Smolensky, and it yields non-linear lower bounds whenever the resiliency is asymptotic to the number of arguments.

TR-96-18 Temporally coherent stereo: improving performance through knowledge of motion, September 1996 Vladimir Tucakov and David G. Lowe, 8 pages

This paper introduces the idea of temporally extending results of a stereo algorithm in order to improve the algorithm's performance. This approach anticipates the changes between two consecutive depth maps resulting from the motion of the cameras. Uncertainties in motion are accounted for by computation of an ambiguity area and a resulting disparity range for each pixel. The computation is used to verify and refine the anticipated values, rather than calculate them without prior knowledge. The paper compares the performance of the algorithm under different constraints on motion. Speedups of up to 400\% are achieved without significant errors.

TR-96-20 Drag-and-Drop vs. Point-and-Click Mouse Interaction for Children, November 1996 Kori Inkpen, Kellogg S. Booth and Maria Klawe, 7 pages

This paper presents the results of a study on girls' and boys' usage of two common mouse interaction techniques. The two techniques, drag-and-drop and point-and-click, were compared to determine whether one method was superior to the other in terms of speed, error rate, and preference. For girls, significant differences between the two methods were found for speed, error rate and preference. Point-and-click was faster, fewer errors were committed, and it was preferred over drag-and-drop. For boys, a significant difference was found for speed but not for error rate or preference. Point-and-click was faster than drag-and-drop, the error rates were comparable and, although more boys preferred point-and-click, the difference was not significant.

TR-97-01 Soundness and Cut-Elimination in NaDSyL, February 1997 Paul C. Gilmore, 27 pages

NaDSyL, a Natural Deduction based Symbolic Logic, like some earlier logics, is motivated by the belief that a confusion of use and mention is the source of the set theoretic paradoxes. However NaDSyL differs from the earlier logics in several important respects.

"Truth gaps", as they have been called by Kripke, are essential to the consistency of the earlier logics, but are absent from NaDSyL; the law of the excluded middle is derivable for all the sentences of NaDSyL. But the logic has an undecidable elementary syntax, a departure from tradition that is of little importance, since the semantic tree presentation of the proof theory can incorporate the decision process for the elementary syntax.

The use of the lambda calculus notation in NaDSyL, rather than the set theoretic notation of the earlier logics, reflects much more than a change of notation. For a second motivation for NaDSyL is the provision of a higher order logic based on the original term models of the lambda calculus rather than on the Scott models. These term models are the "natural" interpretation of the lambda calculus for the naive nominalist view that justifies the belief in the source of the paradoxes. They provide the semantics for the first order domain of the second order logic NaDSyL.

The elementary and logical syntax or proof theory of NaDSyL is fully described, as well as its semantics. Semantic proofs of the soundness of NaDSyL with cut and of the completeness of NaDSyL without cut are given. That cut is a redundant rule follows from these results. Some applications of the logic are also described.

TR-97-02 On Digital Money and Card Technologies, January 1997 Edwin M. Knorr, 24 pages

We survey two related fields: digital money and card technologies (especially smart cards), for possible PhD research topics. We believe that digital money and card technologies will revolutionize life in the 21st century. It will be shown that privacy issues are of serious concern, but that well-designed implementations can have long-term strategic and economic benefits to society. We have been following these two fields for a number of years. It is only very recently that digital money and card technologies have captured the attention of the North American marketplace. We believe that there will be significant research opportunities in these areas for years to come, as evidenced by recent commercial interest in supporting financial transactions via the Internet. This paper examines various aspects of digital money and card technologies, and attempts to provide a comprehensive overview of these fields and their research prospects.

TR-97-03 Video and Audio Streams Over an IP/ATM Wide Area Network, July 28, 1997 Mark McCutcheon, Mabo R. Ito and Gerald W. Neufeld, 101 pages

This is a survey of the state of the art in delivering IP services over ATM networks, as it stands in the second quarter of 1997. It also includes a look at the alternatives to that set of technologies. The technology and the choices are changing "on the fly", and have evolved significantly during the course of this project. Moreover, the issues are not exclusively technical, but in many respects reflect the great schism in the data communications world: connection-oriented versus connectionless networks. We have tried to present the technical issues and solutions along with an unbiased overview of the more "philosophical" issues. We indicate how we think the technology and the installed base of equipment are going to develop over the next few years, in order to give a picture of the future of ATM in data networking.

TR-97-04 Random Interval Graphs, February 1997 Nicholas Pippenger, 22 pages

We consider models for random interval graphs that are based on stochastic service systems, with vertices corresponding to customers and edges corresponding to pairs of customers that are in the system simultaneously. The number N of vertices in a connected component thus corresponds to the number of customers arriving during a busy period, while the size K of the largest clique (which for interval graphs is equal to the chromatic number) corresponds to the maximum number of customers in the system during a busy period. We obtain the following results for both the M/D/Infinity and the M/M/Infinity models, with arrival rate lambda per mean service time. The expected number of vertices is e^lambda, and the distribution of N/e^lambda tends to an exponential distribution with mean 1 as lambda tends to infinity. This implies that log N is very strongly concentrated about lambda-gamma (where gamma is Euler's constant), with variance just pi^2/6. The size K of the largest clique is very strongly concentrated about e lambda. Thus the ratio K/log N is strongly concentrated about e, in contrast with the situation for random graphs generated by unbiased coin flips, where K/log N is very strongly concentrated about 2/log 2.
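
Restated in symbols, exactly as asserted above (for both the M/D/$\infty$ and M/M/$\infty$ models, as $\lambda \to \infty$):

$$ \mathbf{E}[N] = e^{\lambda}, \qquad \frac{N}{e^{\lambda}} \Rightarrow \mathrm{Exponential}(1), \qquad \log N \approx \lambda - \gamma \ (\text{variance } \pi^2/6), \qquad K \approx e\lambda, \qquad \frac{K}{\log N} \to e . $$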

TR-97-05 Surface Reflectance and Shape from Images Using Collinear Light Source, July 28, 1997 Jiping Lu and Jim Little, 219 pages

The purpose of computer vision is to extract useful information from images. Image features such as occluding contours, edges, flow, brightness, and shading provide geometric and photometric constraints on the surface shape and reflectance of physical objects in the scene. In this thesis, two novel techniques are proposed for surface reflectance extraction and surface recovery. They integrate geometric and photometric constraints in images of a rotating object illuminated under a collinear light source (where the illuminant direction of the light source lies on or near the viewing direction of the camera). The rotation of the object can be precisely controlled. The object surface is assumed to be $C^2$ and its surface reflectance function is uniform. The first technique, called the photogeometric technique, uses geometric and photometric constraints on surface points with surface normal perpendicular to the image plane to calculate 3D locations of surface points, then extracts the surface reflectance function by tracking these surface points in the images. Using the extracted surface reflectance function and two images of the surface, the technique recovers the depth and surface orientation of the object surface simultaneously. The second technique, named the wire-frame technique, further exploits geometric and photometric constraints on the surface points whose surface normals are coplanar with the viewing direction and the rotation axis to extract a set of 3D curves. The set of 3D curves comprises a wire frame on the surface. The depth and surface orientation between curves on the wire frame are interpolated by using geometric or photogeometric methods. The wire-frame technique is superior because it does not need the surface reflectance function to extract the wire frame. It also works on piecewise uniform surfaces and requires only that the light source be coplanar with the viewing direction and the rotation axis. In addition, by interpolating the depth and surface orientation from a dense wire frame, the surface recovered is more accurate. The two techniques have been tested on real images of surfaces with different reflectance properties and geometric structures. The experimental results and comprehensive analysis show that the proposed techniques are efficient and robust. As an attempt to extend our research to computer graphics, work on extracting the shading function from real images for graphics rendering shows some promising results.

Key words: Physics based vision, Reflectance, Shape recovery, Integrating multiple cues, Integrating multiple views, surface modeling, surface rendering.

TR-97-06 An Object-Oriented Graphics Kernel, April 1997 Gene Lee, 22 pages

A graphics kernel serves as interface between an application program and an underlying graphics subsystem. Developers interact with kernel primitives while the primitives interact with a graphics subsystem. Although the two forms of interaction closely relate, their optimal designs conflict. The former interaction prefers a process that closely follows the mental (or object-based) model of application development while the latter prefers a process that parses display-lists.

This paper describes RDI, an object-oriented graphics kernel that resolves the differences between the two interactions with one design. Developers explicitly assign intuitive relationships between primitives while an underlying process interprets the primitives in an orderly manner. The kernel's extensible design decouples the processes of modeling and rendering. Primitives dynamically communicate with graphics subsystems to express their purpose and functions. The discussion of the kernel's design entails its optimizations, its benefits toward simulation, and its application toward parallel rendering.

TR-97-10 Surface and Shading Models from Real Images for Computer Graphics, August 01, 1997 Jiping Lu and Jim Little, 23 pages

In this technical report we present an object modeling and rendering technique from real images for computer graphics. The technique builds the surface geometric model and extracts the surface shading model from a real image sequence of a rotating object illuminated under a collinear light source (where the illuminant direction of the light source is the same as the viewing direction of the camera). In building the surface geometric model, the object surface reflectance function is extracted from the real images and used to recover the surface depth and orientation of the object. In building the surface shading model, the different shading components (the ambient component, the diffuse component and the specular component) are calculated from the surface reflectance function extracted from the real images. Then the obtained shading model is used to render the recovered geometric model of the surface in arbitrary viewing and illuminant directions. Experiments have been conducted on diffuse and specular surfaces. The synthetic images of the recovered object surface rendered with the extracted shading model are compared with the real images of the same objects. The results show that the technique is feasible and promising.

Index terms: Computer vision, computer graphics, Surface recovery, Surface modeling, Surface reflectance, Shading function, Graphics rendering, Virtual reality.

TR-97-11 A Fast Heuristic For Finding The Minimum Weight Triangulation, July 10, 1997 Ronald Beirouti

No polynomial time algorithm is known to compute the minimum weight triangulation (MWT) of a point set. In this thesis we present an efficient implementation of the LMT-skeleton heuristic. This heuristic computes a subgraph of the MWT of a point set from which the MWT can usually be completed. For uniformly distributed sets of tens of thousands of points our algorithm constructs the exact MWT in expected linear time and space. A fast heuristic, other than being useful in areas such as stock cutting, finite element analysis, and terrain modeling, allows one to experiment with different point sets in order to explore the complexity of the MWT problem. We present point sets constructed with this implementation such that the LMT-skeleton heuristic does not produce a complete graph and cannot compute the MWT in polynomial time, or that can be used to prove the NP-hardness of the MWT problem.

TR-97-12 Formalization and Analysis of the Separation Minima for the North Atlantic Region: Complete Specification and Analysis Results, October 30, 1997 Nancy A. Day, Jeffrey J. Joyce and Gerry Pelletier, 74 pages

This report describes work to formalize and validate a specification of the separation minima for aircraft in the North Atlantic (NAT) region completed by researchers at the University of British Columbia in collaboration with Hughes International Airspace Management Systems. Our formal representation of these separation minima is given in a mixture of a tabular style of specification and textual predicate logic. We analyzed the tables for completeness, consistency and symmetry. This report includes the full specification and complete analysis results.

TR-97-13 Average-Case Bounds on the Complexity of Path-Search, August 19, 1997 Nicholas Pippenger

A channel graph is the union of all paths between a given input and a given output in an interconnection network. At any moment in time, each vertex in such a graph is either idle or busy. The search problem we consider is to find a path (from the given input to the given output) consisting entirely of idle vertices, or to find a cut (separating the given input from the given output) consisting entirely of busy vertices. We shall also allow the search to fail to find either a path or a cut with some probability bounded by a parameter called the failure probability. This is to be accomplished by sequentially probing the idle-or-busy status of vertices, where the vertex chosen for each probe may depend on the outcome of previous probes. Thus a search algorithm may be modelled as a decision tree. For average-case analysis, we assume that each vertex is independently idle with some fixed probability, called the vacancy probability (and therefore busy with the complementary probability).

For one commonly studied type of channel graph, the parallel graph, we show that the expected number of probes is at most proportional to the length of a path, irrespective of the vacancy probability, and even if the allowed failure probability is zero. Another type of channel graph we study is the spider-web graph, which is superior to the parallel graph as regards linking probability (the probability that an idle path, rather than a busy cut, exists). For this graph we give an algorithm for which, as the vacancy probability is varied while the positive failure probability is held fixed, the expected number of probes reaches its maximum near the critical vacancy probability (where the linking probability makes a rapid transition from a very small value to a substantial value). This maximum expected number of probes is about the cube-root of the diversity (the number of paths between the input and output).

TR-97-14 Conceptual Module Querying for Software Reengineering, September 2, 1997 E. Baniassad and G.C. Murphy, 10 pages

Many tools have been built to analyze source. Most of these tools do not adequately support reengineering activities because they do not allow a software engineer to simultaneously perform queries about both the existing and the desired source structure. This paper introduces the conceptual module approach that overcomes this limitation. A conceptual module is a set of lines of source that are treated as a logical unit. We show how the approach simplifies the gathering of source information for reengineering tasks, and describe how a tool to support the approach was built as a front-end to existing source analysis tools.

TR-97-15 Extending and Managing Software Reflexion Models, September 14, 1997 G.C. Murphy, D. Notkin and K. Sullivan, 16 pages

The artifacts comprising a software system often "drift" apart over time. Design documents and source code are a good example. The software reflexion model technique was developed to help engineers exploit---rather than remove---this drift to help them perform various software engineering tasks. More specifically, the technique helps an engineer compare artifacts by summarizing where one artifact (such as a design) is consistent with and inconsistent with another artifact (such as source). The use of the technique to support a variety of tasks, including the successful use of the technique to support an experimental reengineering of a system comprising a million lines of code, identified a number of shortcomings. In this paper, we present two categories of extensions to the technique. The first category concerns the typing of software reflexion models to allow different kinds of interactions to be distinguished. The second category concerns techniques to ease the investigation of reflexion models. These extensions are aimed at making the engineer more effective in performing various tasks by improving the management and understanding of the inconsistencies---the drift---between artifacts.

TR-97-16 The Measured Access Characteristics of World-Wide-Web Client Proxy Caches, October 22, 1997 Bradley M. Duska, David Marwood and Michael J. Feeley

The growing popularity of the World Wide Web is placing tremendous demands on the Internet. A key strategy for scaling the Internet to meet these increasing demands is to cache data near clients and thus improve access latency and reduce network and server load. Unfortunately, research in this area has been hampered by a poor understanding of the locality and sharing characteristics of Web-client accesses. The recent popularity of Web proxy servers provides a unique opportunity to improve this understanding, because a small number of proxy servers see accesses from thousands of clients. This paper presents an analysis of access traces collected from seven proxy servers deployed in various locations throughout the Internet. The traces record a total of 47.4 million requests made by 23,700 clients over a twenty-one day period. We use a combination of static analysis and trace-driven cache simulation to characterize the locality and sharing properties of these accesses. Our analysis shows that a 2- to 10-GB second-level cache yields hit rates between 24% and 45% with 85% of these hits due to sharing among different clients. Caches with more clients exhibit more sharing and thus higher hit rates. Between 2% and 7% of accesses are consistency misses to unmodified objects, using the Squid and CERN proxy cache coherence protocols. Sharing is bimodal. Requests for shared objects are divided evenly between objects that are narrowly shared and those that are shared by many clients; widely shared objects also tend to be shared by clients from unrelated traces.
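
Trace-driven cache simulation of the kind used in the study reduces, in its simplest form, to replaying the request stream through a fixed-size cache and counting hits. The LRU simulator below is a generic illustration only; object sizes, coherence protocols and the per-client analysis of the paper are omitted:

    from collections import OrderedDict

    def lru_hit_rate(trace, capacity):
        """Replay a sequence of object identifiers through an LRU cache of
        `capacity` objects and return the fraction of requests that hit."""
        cache, hits = OrderedDict(), 0
        for obj in trace:
            if obj in cache:
                hits += 1
                cache.move_to_end(obj)           # mark as most recently used
            else:
                cache[obj] = True
                if len(cache) > capacity:
                    cache.popitem(last=False)    # evict the least recently used
        return hits / len(trace)

    print(lru_hit_rate(["a", "b", "a", "c", "b", "a", "d", "a"], capacity=2))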

TR-97-17 A Logic For Default Reasoning, July 30, 1979 Raymond Reiter

(Abstract not available on-line)

TR-97-19 Supporting Learners in a Remote Computer-Supported Collaborative Learning Environment: The Importance of Task and Communication, February 25, 1998 David Graves

This paper describes novel research in the area of remote Computer-Supported Collaborative Learning. A multimedia activity (Builder) was designed to allow a pair of players to build a house together, each working from his or her own computer. Features of the activity include: interactive graphical interface, two- and three-dimensional views, sound feedback, and real-time written and spoken communication. Mathematical concepts, including area, perimeter, volume, and tiling of surfaces, are embedded in the task. A field study with 134 elementary school children was undertaken to assess the learning and collaborative potential of the activity. Specifically, the study addressed how different modes of communication and different task directives affected learning, interpersonal attitudes, and the perceived value and enjoyment of the task. It was found that playing led to academic gains in the target math areas, and that the nature of how the task was specified had a significant impact on the size of the gains. The mode of communication was found to affect attitudes toward the game and toward the player's partner. Gender differences were found in attitude toward the game, perceived collaboration and attitude toward partner.

TR-97-21 A Network-Enhanced Volume Renderer, August 01, 1997 Jeff LaPorte, 14 pages

Volume rendering is superior in many respects to conventional methods of medical visualization. Extending this ability into the realm of telemedicine provides the opportunity for health professionals to offer expertise not normally available to smaller communities, via computer networking of health centers. This report describes such a software system for collaboration, a network-enhanced version of an existing program called Volren. The methods used to provide network functionality in Volren can serve as a prototype for future multiuser applications.

TR-98-01 Singularity-Robust Trajectory Generation for Robotic Manipulators, April 16, 1998 John E. Lloyd, 42 pages

An algorithm is presented which, given a prescribed manipulator path and corresponding kinematic solution, computes a feasible trajectory in the presence of kinematic singularities. The resulting trajectory is close to minimum time, subject to explicit bounds on joint velocities and accelerations, and follows the path with precision. The algorithm has complexity O(M log M), with respect to the number of joint coordinates M, and works using "coordinate pivoting", in which the path timing near singularities is controlled using the fastest changing joint coordinate. This allows the handling of singular situations, including linear self-motions (e.g., wrist singularities), where the path speed is zero but other joint velocities are non-zero. To compute the trajectory, knot points are inserted along the input path, with increased knot density near singularities. Appropriate path velocities are then computed at each knot point, and the resulting knot-velocity sequence can be integrated to yield the path timing. Examples involving the PUMA manipulator are shown.
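
Setting aside the singularity handling and acceleration bounds that are the report's actual contribution, the velocity-then-integration step can be pictured as follows: at each knot the admissible path speed is limited by the slowest joint, and the knot speeds are then integrated to obtain the timing. The joint paths and limits in this sketch are invented:

    import numpy as np

    def path_timing(q, vmax):
        """q: (m, M) joint coordinates at m knots along the path;
        vmax: (M,) joint velocity limits. Returns time at each knot."""
        s = np.linspace(0.0, 1.0, len(q))            # path parameter at the knots
        dq_ds = np.gradient(q, s, axis=0)            # joint derivatives w.r.t. s
        s_dot = np.min(vmax / np.maximum(np.abs(dq_ds), 1e-12), axis=1)
        # Integrate dt = ds / s_dot between consecutive knots (trapezoidal rule).
        dt = np.diff(s) * 0.5 * (1.0 / s_dot[:-1] + 1.0 / s_dot[1:])
        return np.concatenate([[0.0], np.cumsum(dt)])

    # Two-joint example path sampled at 50 knots.
    s = np.linspace(0, 1, 50)
    q = np.column_stack([np.sin(np.pi * s), 0.5 * s ** 2])
    print(path_timing(q, vmax=np.array([1.0, 0.5]))[-1])   # total traversal time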

TR-98-02 Entropy and Enumeration, April 03, 1998 Nicholas Pippenger, 14 pages

Shannon's notion of the entropy of a random variable is used to give simplified proofs of asymptotic formulas for the logarithms of the numbers of monotone Boolean functions and Horn functions, and for equivalent results concerning families of sets and closure operations.
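
The device at work, entropy counting, rests on a standard inequality, stated here for orientation rather than quoted from the report: if $F$ is drawn uniformly at random from a finite family $\mathcal{F}$ whose members are determined by coordinates $F_1, \ldots, F_k$, then

$$ \log_2 |\mathcal{F}| \;=\; H(F) \;\le\; \sum_{i=1}^{k} H(F_i), $$

so bounding the entropy of each coordinate bounds the logarithm of the count, which is the shape of the asymptotic formulas mentioned above.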

TR-98-03 Assessing Aspect-Oriented Programming and Design: Preliminary Results, May 1, 1998 Robert J. Walker, Elisa L. A. Baniassad and Gail C. Murphy, 6 pages

Aspect-oriented programming is a new software design and implementation technique proposed by researchers at Xerox PARC. This project is assessing the claims of aspect-oriented programming to improve the software development cycle for particular kinds of applications. The project is divided into three experiments, the first of which has been completed. These experiments have been designed to investigate, separately, such characteristics of aspect-oriented development as the creation of new aspect-oriented programs and ease of debugging aspect-oriented programs.

TR-98-04 Automatically Generated Test Frames from an S Specification of Separation Minima for the North Atlantic Region, April 30, 1998 M.R. Donat, 225 pages

A partially automated process for generating tests has been experimentally applied to a formalization of a real-world specification of air traffic separation minima. This report discusses the problems addressed by this process along with how and why this automation was achieved.

TR-98-05 Automatically Generated Test Frames from a Q Specification of ICAO Flight Plan Form Instructions, April 30, 1998 M.R. Donat, 367 pages

A partially automated process for generating tests has been experimentally applied to a portion of a real world system-level requirements specification. This paper discusses the problems addressed by this process along with how and why this automation was achieved. The requirements were formalized using a notation designed to be readable by a large proportion of requirements stakeholders. This report also addresses traceability of requirements to tests and introduces the requirements specification language Q.

TR-98-06 Ensuring the Inspectability, Repeatability and Maintainability of the Safety Verification of a Critical System, May 11, 1998 Ken Wong, Jeff Joyce and Jim Ronback, 18 pages

This paper proposes an approach to the safety verification of the source code of a software-intensive system. This approach centers upon the production of a document intended to ensure the inspectability, maintainability and repeatability of the source code safety verification. This document, called a "safety verification case", is intended to be a part of the overall system safety case. Although the approach was designed for large software-intensive real-time information systems, it may also be useful for other kinds of large software systems with safety-related functionality. The approach involves the construction of a rigorous argument that the source code is safe. The steps of the argument include simplifying the safety verification case structure by isolating the relevant details of the source code, and reducing the "semantic gap" between the source code and the system level hazards through a series of hierarchical refinement steps. Some of the steps in a process based on this approach may be partially automated with tool-based support. Current research and industry practices are reviewed in this paper for supporting tools and techniques.

TR-98-07 The Computational Complexity of Knot and Link Problems, May 28, 1998 Joel Hass, Jeffrey C. Lagarias and Nicholas Pippenger, 32 pages

We consider the problem of deciding whether a polygonal knot in three-dimensional space, or alternatively a knot diagram, is unknotted (that is, whether it is capable of being deformed continuously without self-intersection so that it lies in a plane). We show that this problem, UNKNOTTING PROBLEM, is in NP. We also consider the problem, SPLITTING PROBLEM, of determining whether two or more such polygons can be split (that is, whether they are capable of being continuously deformed without self-intersection so that they occupy both sides of a plane without intersecting it) and show that it is also in NP. We show that the problem of determining the genus of a polygonal knot (a generalization of the problem of determining whether it is unknotted) is in PSPACE. We also give exponential worst-case running time bounds for deterministic algorithms to solve each of these problems. These algorithms are based on the use of normal surfaces and decision procedures due to W. Haken, with recent extensions by W. Jaco and J. F. Tollefson.

TR-98-08 Galois Theory for Minors of Finite Functions, May 28, 1998 Nicholas Pippenger, i+15 pages

A Boolean function f is a minor of a Boolean function g if f is obtained from g by substituting an argument of f, the complement of an argument of f, or a Boolean constant for each argument of g. The theory of minors has been used to study threshold functions (also known as linearly separable functions) and their generalization to functions of bounded order (where the degree of the separating polynomial is bounded, but may be greater than one). We construct a Galois theory for sets of Boolean functions closed under taking minors, as well as for a number of generalizations of this situation. In this Galois theory we take as the dual objects certain pairs of relations that we call ``constraints'', and we explicitly determine the closure conditions on sets of constraints.

TR-98-09 Extending Applications to the Network, August 31, 1998 David Marwood, 73 pages

Network applications are applications capable of selecting, at run-time, portions of their code to execute at remote network locations. By executing remote code in a restricted environment and providing convenient communication mechanisms within the application, network applications enable the implementation of tasks that cannot be implemented using traditional techniques. Even existing applications can realize significant performance improvements and reduced resource consumption when redesigned as network applications.

By examining several application domains, we expose specific desirable capabilities of a software infrastructure to support network applications. These capabilities entail a variety of interacting software development challenges for which we recommend solutions.

The solutions are applied in the design and implementation of a network application infrastructure, Jay, based on the Java language. Jay meets most of the desired capabilities, particularly demonstrating a cohesive and expressive communication framework and an integrated yet simple security model.

In all, network applications combine the best qualities of intelligent networks, active networks, and mobile agents into a single framework to provide a unique and effective development environment.

TR-98-10 Evaluating Emerging Software Development Technologies: Lessons Learned from Assessing Aspect-oriented Programming, July 24, 1998 Gail C. Murphy, Robert J. Walker and Elisa L.A. Baniassad, 31 pages

Two of the most important and most difficult questions one can ask about a new software development technique are whether the technique is useful and whether the technique is usable. Various flavours of empirical study are available to evaluate these questions, including surveys, case studies, and experiments. These different approaches have been used extensively in a number of domains, including management science and human-computer interaction. A growing number of software engineering researchers are using experimental methods to statistically validate hypotheses about relatively mature software development aids. Less guidance is available for a developer of a new and evolving software development technique who is attempting to determine, within some cost bounds, if the technique shows some usefulness. We faced this challenge when assessing a new programming technique called aspect-oriented programming. To assess the technique, we chose to apply both a case study approach and a series of four experiments because we wanted to understand and characterize the kinds of information that each approach might provide when studying a technique that is in its infancy. Our experiences suggest some avenues for further developing empirical methods aimed at evaluating software engineering questions. For instance, guidelines on how different observational techniques can be used as multiple sources of data would be helpful when planning and conducting a case study. For the experimental situation, more guidance is needed on how to balance the precision of measurement with the realism necessary to investigate programming issues. In this paper, we describe and critique the evaluation methods we employed, and discuss the lessons we have learned. These lessons are applicable to researchers attempting to assess other new programming techniques that are in an early stage of development.

TR-98-11 Trajectory Generation Implemented as a Non-linear Filter, August 17, 1998 John E. Lloyd, 20 pages

An algorithm is presented which, given a prescribed manipulator path and corresponding kinematic solution, computes a feasible trajectory in the presence of kinematic singularities. The resulting trajectory is close to minimum time, subject to explicit bounds on joint velocities and accelerations, and follows the path with precision. The algorithm has complexity O(M log M), with respect to the number of joint coordinates M, and works using ``coordinate pivoting'', in which the path timing near singularities is controlled using the fastest changing joint coordinate. This allows the handling of singular situations, including linear self-motions (e.g., wrist singularities), where the path speed is zero but other joint velocities are non-zero. To compute the trajectory, knot points are inserted along the input path, with increased knot density near singularities. Appropriate path velocities are then computed at each knot point, and the resulting knot-velocity sequence can be integrated to yield the path timing. Examples involving the PUMA manipulator are shown.

TR-98-12 An Initial Assessment of Aspect-Oriented Programming, August 31, 1998 Robert J. Walker, Elisa L. A. Baniassad and Gail C. Murphy, 10 pages

The principle of separation of concerns has long been used by software engineers to manage the complexity of software system development. Programming languages help software engineers explicitly maintain the separation of some concerns in code. As another step towards increasing the scope of concerns that can be captured cleanly within the code, Kiczales and colleagues have introduced aspect-oriented programming. In aspect-oriented programming, explicit language support is provided to help modularize design decisions that cross-cut a functionally-decomposed program. Aspect-oriented programming is intended to make it easier to reason about, develop, and maintain certain kinds of application code. To investigate these claims, we conducted two exploratory experiments that considered the impact of aspect-oriented programming, as found in AspectJ version 0.1, on two common programming activities: debugging and change. Our experimental results provide insights into the usefulness and usability of aspect-oriented programming. Our results also raise questions about the characteristics of the interface between aspects and functionally-decomposed core code that are necessary to accrue programming benefits. Most notably, the separation provided by aspect-oriented programming seems most helpful when the interface is narrow (i.e., the separation is more complete); partial separation does not necessarily provide partial benefit.

TR-98-13 Using MMX Technology in Digital Image Processing, December 11, 1998 Vladimir Kravtchenko, 11 pages

MMX technology is designed to accelerate multimedia and communications applications. The technology adds new processor instructions and data types that exploit data parallelism by operating on multiple data elements at once. In this work we demonstrate how to use MMX technology in digital image processing applications and discuss important aspects of practical implementation with the GCC compiler. We also present experimental results for common image processing operations and provide a comparative performance analysis.
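
The operations MMX accelerates are packed, saturating arithmetic steps applied to many small pixel values at once. The sketch below is only an analogy in NumPy, not MMX code and not taken from the report: it contrasts a per-pixel loop with a whole-array saturating add, the data-parallel pattern that packed MMX instructions implement in hardware.

    import numpy as np

    def brighten_scalar(img, delta):
        """Per-pixel loop with explicit saturation at 255 (what a plain C loop does)."""
        out = img.copy()
        flat = out.ravel()
        for i in range(flat.size):
            flat[i] = min(int(flat[i]) + delta, 255)
        return out

    def brighten_packed(img, delta):
        """Whole-array saturating add: the analogue of a packed unsigned
        saturating addition applied to eight pixels per instruction."""
        return np.minimum(img.astype(np.uint16) + delta, 255).astype(np.uint8)

    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    assert np.array_equal(brighten_scalar(img, 40), brighten_packed(img, 40))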

TR-98-14 Verifying a Self-Timed Divider, March 30, 1998 Tarik Ono-Tesfaye, Christoph Kern and Mark Greenstreet, 13 pages

This paper presents an approach to verifying timed designs based on refinement: first, correctness is established for a speed-independent model; then, the timed design is shown to be a refinement of this model. Although this approach is less automatic than methods based on timed state space enumeration, it is tractable for larger designs. Our method is implemented using a proof checker with a built-in model checker for verifying properties of high-level models, a tautology checker for establishing refinement, and a graph-based timing verification procedure for showing timing properties of transistor level models. We demonstrate the method by proving the timing correctness of Williams' self-timed divider.

TR-98-15 Trajectory Generation Implemented as a Non-linear Filter, August 16, 1998 John E. Lloyd, 20 pages

Trajectory generation typically involves computing a timed motion along a prescribed path which is sufficiently well behaved that it can be tracked by a manipulator. However, the creation of good paths becomes somewhat problematic in situations where a manipulator is required to follow a target whose position is varying erratically (for instance, if the target is specified using a position sensor held in an operator's hand). This paper presents a simple solution for such situations, in which the ``trajectory generator'' is implemented as a non-linear filter which tries to bring its output (manipulator setpoints) to the input (target position) as quickly as possible, subject to constraints on velocity and acceleration. The solution to this problem in one dimension is quite easy. For multiple dimensions, the problem can be handled by applying one-dimensional solutions to a pair of appropriately chosen coordinate axes. An interesting feature of the approach is that it can handle spatial rotations as well as vector quantities.
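
The one-dimensional case can be sketched directly; the code below is a minimal illustration under the stated velocity and acceleration bounds, not the paper's filter. At each control step it chooses the largest velocity toward the target from which it can still brake in time, and moves the current velocity toward that value no faster than the acceleration bound allows.

    import math

    def track(target, x0=0.0, v0=0.0, vmax=1.0, amax=2.0, dt=0.01, steps=400):
        """Drive the output x toward target(t) as quickly as possible,
        subject to |v| <= vmax and |dv/dt| <= amax (one dimension)."""
        x, v = x0, v0
        out = []
        for k in range(steps):
            err = target(k * dt) - x
            # Largest speed toward the target that still allows stopping at it.
            v_des = math.copysign(min(vmax, math.sqrt(2.0 * amax * abs(err))), err)
            # Move the current velocity toward v_des, limited by the acceleration bound.
            dv = max(-amax * dt, min(amax * dt, v_des - v))
            v += dv
            x += v * dt
            out.append(x)
        return out

    # Step input: the output ramps up, cruises at vmax, then brakes smoothly.
    print(track(target=lambda t: 1.0)[-1])   # close to 1.0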

TR-99-01 A Light-Weight Framework for Hardware Verification, February 24, 1999 Christoph Kern, Tarik Ono-Tesfaye and Mark R. Greenstreet, 15 pages

We present a verification framework that combines deductive reasoning, general-purpose decision procedures, and domain-specific reasoning. We address the integration of formal as well as informal domain-specific reasoning, which is encapsulated in the form of user-defined inference rules. To demonstrate our approach, we describe the verification of an SRT divider where a transistor-level implementation with timing is shown to be a refinement of its high-level specification.

TR-99-02 Analyzing Exception Flow in Java Programs, March 02, 1999 Martin R. Robillard and Gail C. Murphy, 15 pages

Exception handling mechanisms provided by programming languages are intended to ease the difficulty of developing robust software systems. Using these mechanisms, a software developer can describe the exceptional conditions a module might raise, and the response of the module to exceptional conditions that may occur as it is executing. Creating a robust system from such a localized view requires a developer to reason about the flow of exceptions across modules. The use of unchecked exceptions, and in object-oriented languages, subsumption, makes it difficult for a software developer to perform this reasoning manually. In this paper, we describe a tool called Jex that analyzes the flow of exceptions in Java code to produce views of the exception structure. We demonstrate how Jex can help a developer identify program points where exceptions are caught accidentally, where there is an opportunity to add finer-grained recovery code, and where error-handling policies are not being followed.

TR-99-03 Revised Characterizations of 1-Way Quantum Finite Automata, July 20, 2000 Alex Brodsky and Nicholas Pippenger, 23 pages

The 2-way quantum finite automaton introduced by Kondacs and Watrous\cite{KoWa97} can accept non-regular languages with bounded error in polynomial time. If we restrict the head of the automaton to moving classically and to moving only in one direction, the acceptance power of this 1-way quantum finite automaton is reduced to a proper subset of the regular languages. In this paper we study two different models of 1-way quantum finite automata. The first model, termed measure-once quantum finite automata, was introduced by Moore and Crutchfield\cite{MoCr98}, and the second model, termed measure-many quantum finite automata, was introduced by Kondacs and Watrous\cite{KoWa97}. We characterize the measure-once model when it is restricted to accepting with bounded error and show that, without that restriction, it can solve the word problem over the free group. We also show that it can be simulated by a probabilistic finite automaton and describe an algorithm that determines if two measure-once automata are equivalent. We prove several closure properties of the classes of languages accepted by measure-many automata, including inverse homomorphisms, and provide a new necessary condition for a language to be accepted by the measure-many model with bounded error. Finally, we show that piecewise testable languages can be accepted with bounded error by a measure-many quantum finite automaton, in the process introducing new construction techniques for quantum automata.

TR-99-04 Atlas: A Case Study in Building a Web-Based Learning Environment Using Aspect-oriented Programming, March 31, 1999 Mik A. Kersten and Gail C. Murphy, 20 pages

The Advanced Teaching and Learning Academic Server (Atlas) is a software system that supports web-based learning. Students can register for courses, and can navigate through personalized views of course material. Atlas has been built according to Sun Microsystem's Java (TM) Servlet specification using Xerox PARC's aspect-oriented programming support called AspectJ (TM). Since aspect-oriented programming is still in its infancy, little experience with employing this paradigm is currently available. In this paper, we start filling this gap by describing the aspects we used in Atlas and by discussing the effect of aspects on our object-oriented development practices. We describe some rules and policies that we employed to achieve our goals of maintainability and modifiability, and introduce a straightforward notation to express the design of aspects. Although we faced some small hurdles along the way, this combination of technology helped us build a fast, well-structured system in a reasonable amount of time.

TR-99-06 Systematic vs. Local Search for SAT, May 03, 1999 Holger H. Hoos, 14 pages

Due to its prominence in artificial intelligence and theoretical computer science, the propositional satisfiability problem (SAT) has received considerable attention in the past. Traditionally, this problem was attacked with systematic search algorithms, but more recently, local search methods were shown to be very effective for solving large and hard SAT instances. Especially in the light of recent, significant improvements in both approaches, it is not very well understood which type of algorithm performs best on a specific type of SAT instances. In this article, we present the results of a comprehensive empirical study, comparing the performance of some of the best known stochastic local search and systematic search algorithms for SAT on a wide range of problem instances, including Random-3-SAT and SAT-encoded problems from different domains. We show that while for Random-3-SAT local search is clearly superior, more structured instances are often, but not always, more efficiently solved by systematic search algorithms. This suggests that considering the specific strengths and weaknesses of both approaches, hybrid algorithms or portfolio combinations might be most effective for solving SAT problems in practice.
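
For orientation, the local-search side of the comparison can be reduced to a few lines; the sketch below is a bare-bones WalkSAT-style solver, not one of the tuned implementations evaluated in the article, and it recomputes the unsatisfied clauses from scratch instead of using the incremental bookkeeping real solvers rely on.

    import random

    def walksat(clauses, n_vars, p=0.5, max_flips=100_000):
        """Clauses are lists of non-zero ints; literal v denotes variable |v|,
        negated if v < 0.  Returns a satisfying assignment or None."""
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign
            clause = random.choice(unsat)
            if random.random() < p:
                var = abs(random.choice(clause))      # random-walk move
            else:                                     # greedy move: flip the variable
                def unsat_after_flip(v):              # leaving the fewest unsatisfied clauses
                    assign[v] = not assign[v]
                    count = sum(1 for c in clauses if not any(sat(l) for l in c))
                    assign[v] = not assign[v]
                    return count
                var = min((abs(l) for l in clause), key=unsat_after_flip)
            assign[var] = not assign[var]
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))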

TR-99-07 Using Embedded Network Processors to Implement Global Memory Management in a Workstation Cluster, June 28, 1999 Yvonne Coady, Joon Suan Ong and Michael J. Feeley, 10 pages

Advances in network technology continue to improve the communication performance of workstation and PC clusters, making high-performance workstation-cluster computing increasingly viable. These hardware advances, however, are taxing traditional host-software network protocols to the breaking point. A modern gigabit network can swamp a host's IO bus and processor, limiting communication performance and slowing computation unacceptably. Fortunately, host-programmable network processors used by these networks present a potential solution. Offloading selected host processing to these embedded network processors lowers host overhead and improves latency. This paper examines the use of embedded network processors to improve the performance of workstation-cluster global memory management. We have implemented a revised version of the GMS global memory system that reduces host overhead by as much as 29% on active nodes and improves page fault latency by as much as 39%.

TR-99-08 Spirale Reversi: Reverse decoding of the Edgebreaker encoding, October 4, 1999 Martin Isenburg and Jack Snoeyink, 12 pages

We present a simple linear time algorithm for decoding Edgebreaker encoded triangle meshes in a single traversal. The Edgebreaker compression technique, introduced by Rossignac, encodes the topology of meshes homeomorphic to a sphere with a guaranteed 2 bits per triangle or less. The encoding algorithm visits every triangle of the mesh in a depth-first order. The original decoding algorithm recreates the triangles in the same order they have been visited by the encoding algorithm and exhibits a worst case time complexity of O(n^2). More recent work by Szymczak and Rossignac uses the same traversal order and improves the worst case to O(n). However, for meshes with handles multiple traversals are needed during both encoding and decoding. We introduce here a simpler decompression technique that performs a single traversal and recreates the triangles in reverse order.

TR-99-09 Computing Contour Trees in All Dimensions, August 26, 1999 Hamish Carr, Jack Snoeyink and Ulrike Axen, 10 pages

We show that contour trees can be computed in all dimensions by a simple algorithm that merges two trees. Our algorithm extends, simplifies, and improves work of Tarasov and Vyalyi and of van Kreveld et al.
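
One half of that merge, the join tree, follows from a single downward sweep with union-find; the sketch below shows only that standard piece (the paper's algorithm also builds the split tree and merges the two, which is not attempted here). Vertices are processed from highest to lowest value over an arbitrary neighbourhood graph.

    def join_tree(values, neighbours):
        """values: dict vertex -> scalar; neighbours: dict vertex -> iterable.
        Returns (child, parent) arcs: each superlevel-set component is joined,
        at the vertex where the sweep connects it, to that lower vertex."""
        parent = {}                    # union-find forest over processed vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        lowest = {}                    # component root -> lowest vertex seen so far
        arcs = []
        for v in sorted(values, key=values.get, reverse=True):   # sweep downward
            parent[v] = v
            lowest[v] = v
            for u in neighbours[v]:
                if u in parent:        # u was already processed (higher value)
                    ru = find(u)
                    if ru != find(v):
                        arcs.append((lowest[ru], v))   # that component reaches v
                        parent[ru] = find(v)
                        lowest[find(v)] = v
        return arcs

    vals = {"a": 5, "b": 4, "c": 3, "d": 1}
    nbrs = {"a": ["c"], "b": ["c"], "c": ["a", "b", "d"], "d": ["c"]}
    print(join_tree(vals, nbrs))   # [('a', 'c'), ('b', 'c'), ('c', 'd')]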

TR-99-10 Mesh Collapse Compression, November 15, 1999 Martin Isenburg and Jack Snoeyink, 10 pages

Efficiently encoding the topology of triangular meshes has recently been the subject of intense study and many representations have been proposed. The sudden interest in this area is fueled by the emerging demand for transmitting 3D data sets over the Internet (e.g. VRML). Since transmission bandwidth is a scarce resource, compact encodings for 3D models are of great advantage. In this report we present a novel algorithm for encoding the topology of triangular meshes. Our encoding algorithm is based on the edge contract operation, which has been used extensively in the area of mesh simplification, but not for efficient mesh topology compression. We perform a sequence of edge contract and edge divide operations that collapse the entire mesh into a single vertex. With each edge contraction we store a vertex degree and with each edge division we store a start and an end symbol. This uniquely determines all inverse operations. For meshes that are homeomorphic to a sphere, the algorithm is especially simple. Surfaces of higher genus are encoded at the expense of a few extra bits per handle.

TR-99-11 Deciding When to Forget in the Elephant File System, December 12, 1999 Douglas S. Santry, Michael J. Feeley, Norman Hutchinson, Alistair C. Veitch, Ross W. Carton and Jacob Ofir, 12 pages

Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. Storage in Elephant is managed by the system using file-grain user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel.
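
The central idea, a file as a history of immutable versions that can be named by time, can be shown with a toy in-memory store; this is a conceptual sketch only, not Elephant's FreeBSD implementation, and it ignores directories, retention policies, and storage reclamation.

    import bisect, time

    class VersionedFile:
        """Every write appends an immutable (timestamp, contents) version;
        reads may name any point in the past.  Timestamps must be increasing."""
        def __init__(self):
            self.versions = []                    # list of (timestamp, contents)

        def write(self, data, when=None):
            self.versions.append((time.time() if when is None else when, data))

        def read(self, as_of=None):
            """Return the contents as of time `as_of` (default: latest version)."""
            if as_of is None:
                return self.versions[-1][1]
            stamps = [ts for ts, _ in self.versions]
            i = bisect.bisect_right(stamps, as_of) - 1
            if i < 0:
                raise FileNotFoundError("no version existed at that time")
            return self.versions[i][1]

    f = VersionedFile()
    f.write("draft", when=100.0)
    f.write("final", when=200.0)
    print(f.read(as_of=150.0))   # "draft": the later overwrite did not destroy it
    print(f.read())              # "final"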

TR-99-12 Practical Point-in-Polygon Tests Using CSG Representations of Polygons, November 12, 1999 Robert J. Walker and Jack Snoeyink, 22 pages

We investigate the use of a constructive solid geometry (CSG) representation of polygons in testing if points fall within them; this representation consists of a tree whose nodes are either Boolean operators or edges. By preprocessing the polygons, we seek (1) to construct a space-conserving data structure that supports point-in-polygon tests, (2) to prune as many edges as possible while maintaining the semantics of our tree, and (3) to obtain a tight inner loop to make testing the remaining edges as fast as possible. We utilize opportunities to optimize the pruning by permuting sibling nodes. We find that this process is less memory-intensive than the grid method and faster than existing one-shot methods.
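
For the special case of a convex polygon the CSG tree degenerates to a single AND node over edge half-plane tests, which is all the sketch below shows; general polygons need a full AND/OR tree plus the pruning and node permutation discussed above, none of which is attempted here.

    def inside_convex(point, polygon):
        """Point-in-convex-polygon as a conjunction (AND) of edge half-plane tests.

        polygon: list of (x, y) vertices in counter-clockwise order.  Each edge
        contributes the test 'point lies on or to the left of the edge'."""
        px, py = point
        n = len(polygon)
        for i in range(n):
            ax, ay = polygon[i]
            bx, by = polygon[(i + 1) % n]
            cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
            if cross < 0:            # strictly to the right of edge AB -> outside
                return False
        return True

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]      # counter-clockwise
    print(inside_convex((1, 1), square))   # True
    print(inside_convex((3, 1), square))   # False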

TR-99-13 Using Implicit Context to Ease Software Evolution and Reuse, November 11, 1999 Robert J. Walker and Gail C. Murphy, 11 pages

Software systems should consist of simple, conceptually clean components interacting along narrow, well-defined paths. All too often, this is not reality: complex components end up interacting for reasons unrelated to the functionality they provide. We refer to knowledge within a component that is not conceptually required for the individual behaviour of that component as extraneous embedded knowledge (EEK). EEK creeps into a system in many forms, including dependences upon particular names and the passing of extraneous parameters. This paper proposes implicit context as a means for reducing EEK in systems. Implicit context combines a mechanism to reflect upon what has happened in a system through queries on the call history with a mechanism for altering calls to and from a component. We demonstrate the benefits of implicit context by describing its use to reduce EEK in the Java Swing library.

TR-99-14 Regaining Control of Exception Handling, December 01, 1999 Martin P. Robillard and Gail C. Murphy, 10 pages

Just as the structure of a software system tends to degrade as the system evolves, the structure of its exception handling also degrades. In this paper, we draw on our experience building and analyzing the exception structure of Java programs to describe why and how exception structure degrades. Fortunately, we need not let our exception structure languish. We also relate our experience at regaining control of exception structure in several existing programs using a technique based on software containment.

TR-99-15 The Virtual Hand Laboratory Architecture, December 15, 1999 Valerie A. Summers, 29 pages

The Virtual Hand Lab (VHL) is an augmented reality environment for conducting experiments in human perception and motor performance that involve grasping, manipulation, and other complex 3D tasks that people perform with their hands. Our system supports a wide range of experiments and is used by (non-programmer) experimenters. Our system co-locates the hand and the manipulated objects (whether physical or virtual) in the same visual space. Spatial and temporal accuracy are maintained by using a high precision tracker and an efficient implementation of the software architecture which carefully synchronizes the timing of equipment and software. There are many issues that influence architectural design, two of which are modularization and performance. We balance these concerns by creating a layered set of modules upon which the application is built and an animation control loop which cuts across module boundaries to control the timing of equipment and application. Augmented objects are composed of both physical and graphical components. The graphical object inheritance hierarchy has several unusual features. First, we provide a mechanism to decouple the movement of the graphical component of an augmented object from its physical object counterpart. This provides flexibility in the types of experiments supported by the testbed. Second, we create subclasses based on properties of the physical component of the augmented objects before creating subclasses based on the virtual components. Specifically, we categorize physical objects as either rigid or flexible based on their level of deformation. This allows us to efficiently implement many of the manipulation techniques. Third, after subclasses based on the physical objects have been created, the implementation of concrete virtual object classes is driven by the goal of creating an easy interface for the experimenters. This reflects our user-centered design approach.

TR-99-16 Heavy-Tailed Behaviour in Randomised Systematic Search Algorithms for SAT?, November 30, 1999 Holger H. Hoos, 16 pages

Prompted by recent results reported by Carla Gomes, Bart Selman, and Henry Kautz, and in the context of my ongoing project with Thomas Stuetzle on characterising the behaviour of state-of-the-art algorithms for SAT, I measured some run-time distributions for Satz-Rand, the randomised version of the Satz algorithm, when applied to problem instances from various domains. I could not find truly heavy-tailed behaviour (in the sense of the definition used by Carla Gomes et al.). Instead, I found evidence for multimodal distributions which might be characterisable using mixtures of the generalised exponential distributions introduced in (Hoos, 1999). However, the observed RTDs typically have long tails and random restart, using suitable cutoff times, increases the efficiency of the algorithm, as claimed by Gomes et al. Furthermore, taking another look at the issue of heavy tails at the left-hand side of run-time distributions, I raise some questions regarding the arguments found in (Gomes et al., 2000).

TR-2000-01 Stochastic Local Search Methods for Dynamic SAT - an Initial Investigation, February 01, 2000 Holger H. Hoos and Kevin O'Neill, 13 pages

We introduce the dynamic SAT problem, a generalisation of the satisfiability problem in propositional logic which allows changes of a problem over time. DynSAT can be seen as a particular form of a dynamic CSP, but considering results and recent success in solving conventional SAT problems, we believe that the conceptual simplicity of SAT will allow us to more easily devise and investigate high-performing algorithms for DynSAT than for dynamic CSPs. In this article, we motivate the DynSAT problem, discuss various formalisations of it, and investigate stochastic local search (SLS) algorithms for solving it. In particular, we apply SLS algorithms which perform well on conventional SAT problems to dynamic problems and we analyse and characterise their performance empirically; this initial investigation indicates that the performance differences between various algorithms of the well-known WalkSAT family of SAT algorithms generally carry over when applied to DynSAT. We also study different generic approaches of solving DynSAT problems using SLS algorithms and investigate their performance differences when applied to different types of DynSAT problems.

TR-2000-02 R-Simp to PR-Simp: Parallelizing A Model Simplification Algorithm, February 07, 2000 Dmitry Brodsky, 8 pages

As modelling and visualization applications proliferate, there arises a need to simplify large polygonal models at interactive rates. Unfortunately, existing polygon mesh simplification algorithms are not well suited for this task because they are either too slow (requiring pre-computation) or produce models that are too poor in quality. Given a multi-processor environment, a common approach for obtaining the required performance is to parallelize the algorithm. Many non-trivial issues need to be taken into account when parallelizing a sequential algorithm. We present PR-Simp, a parallel model simplification algorithm, and discuss the issues involved in parallelizing R-Simp.

TR-2000-04 Enumeration of Equicolourable Trees, February 22, 2000 Nicholas Pippenger, 30 pages

A tree, being a connected acyclic graph, can be bicoloured in two ways, which differ from each other by exchange of the colours. We shall say that a tree is equicolourable if these bicolourings assign the two colours to equal numbers of vertices. Labelled equicoloured trees have been enumerated several times in the literature, and from this result it is easy to enumerate labelled equicolourable trees. The result is that the probability that a randomly chosen n-vertex labelled tree is equicolourable is asymptotically just twice the probability that its vertices would be equicoloured if they were assigned colours by independent unbiased coin flips. Our goal in this paper is the enumeration of unlabelled equicolourable trees (that is, trees up to isomorphism), both exactly (in terms of generating functions) and asymptotically. We treat both the rooted and unrooted versions of this problem, and conclude that in either case the probability that a randomly chosen n-vertex unlabelled tree is equicolourable is asymptotically 1.40499... times as large as the probability that it would be equicoloured if its vertices were assigned colours by independent unbiased coin flips.

TR-2000-05 Optimal and Approximate Stochastic Planning using Decision Diagrams, June 10, 2000 Jesse Hoey, Robert St.Aubin, Alan Hu and Craig Boutilier, 35 pages

Structured methods for solving factored Markov decision processes (MDPs) with large state spaces have been proposed recently to allow dynamic programming to be applied without the need for complete state enumeration. We present algebraic decision diagrams (ADDs) as efficient data structures for solving very large MDPs. We describe a new value iteration algorithm for exact dynamic programming, using an ADD input representation of the MDP. We extend this algorithm with an approximate version for generating near-optimal value functions and policies with much lower time and space requirements than the exact version. We demonstrate our methods on a class of large MDPs (up to 34 billion states). We show that significant gains can be had with our optimal value iteration algorithm when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions). We then demonstrate our approximate algorithm and compare results with the optimal ones. Finally, we examine various variable reordering techniques and demonstrate their use within the context of our methods.
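
The computation being accelerated is ordinary value iteration; the sketch below shows it in plain tabular form for orientation only. The report's contribution lies in representing the value function and transition model as ADDs so that the same Bellman update can be applied without enumerating states, which this sketch does not do.

    def value_iteration(states, actions, P, R, gamma=0.95, eps=1e-6):
        """P[s][a] is a list of (next_state, probability); R[s][a] is the reward.
        Returns the optimal value function and a greedy policy as explicit tables."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                           for a in actions)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                break
        policy = {s: max(actions, key=lambda a: R[s][a] +
                         gamma * sum(p * V[t] for t, p in P[s][a]))
                  for s in states}
        return V, policy

    # Two-state example: 'go' reaches the rewarding state s1 with probability 0.8.
    states, actions = ["s0", "s1"], ["stay", "go"]
    P = {"s0": {"stay": [("s0", 1.0)], "go": [("s1", 0.8), ("s0", 0.2)]},
         "s1": {"stay": [("s1", 1.0)], "go": [("s1", 1.0)]}}
    R = {"s0": {"stay": 0.0, "go": 0.0}, "s1": {"stay": 1.0, "go": 1.0}}
    print(value_iteration(states, actions, P, R))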

TR-2000-06 Using Idle Workstations to Implement Predictive Prefetching, June 12, 2000 Jasmine Y.Q. Wang, Joon Suan Ong, Yvonne Coady and Michael J. Feeley, 8 pages

The benefits of Markov-based predictive prefetching have been largely overshadowed by the overhead required to produce high quality predictions. While both theoretical and simulation results for prediction algorithms appear promising, substantial limitations exist in practice. This outcome can be partially attributed to the fact that practical implementations ultimately make compromises in order to reduce overhead. These compromises limit the level of algorithm complexity, the variety of access patterns, and the granularity of trace data the implementation supports. This paper describes the design and implementation of GMS-3P, an operating-system kernel extension that offloads prediction overhead to idle network nodes. GMS-3P builds on the GMS global memory system, which pages to and from remote workstation memory. In GMS-3P, the target node sends an on-line trace of an application's page faults to an idle node that is running a Markov-based prediction algorithm. The prediction node then uses GMS to prefetch pages to the target node from the memory of other workstations in the network. Our preliminary results show that predictive prefetching can reduce remote-memory page fault time by 60% or more and that by offloading prediction overhead to an idle node, GMS-3P can reduce this improved latency by between 24% and 44%, depending on Markov-model order.
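
The prediction step that runs on the idle node can be sketched as a first-order Markov transition table; this is an illustration only, and it models none of the tracing, offloading, or GMS prefetch machinery that the paper is actually about.

    from collections import defaultdict, Counter

    class MarkovPrefetcher:
        """First-order Markov model over a page-fault stream: after observing
        page p, predict the pages that most often followed p so far."""
        def __init__(self, depth=2):
            self.counts = defaultdict(Counter)   # page -> Counter of successors
            self.prev = None
            self.depth = depth                   # how many candidates to prefetch

        def observe(self, page):
            if self.prev is not None:
                self.counts[self.prev][page] += 1
            self.prev = page

        def predict(self):
            if self.prev is None:
                return []
            return [p for p, _ in self.counts[self.prev].most_common(self.depth)]

    prefetcher = MarkovPrefetcher()
    for page in [1, 2, 3, 1, 2, 4, 1, 2, 3]:
        prefetcher.observe(page)
    print(prefetcher.predict())   # after ...2, 3 the model expects page 1 next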

TR-2000-07 Eliminating Cycles in Composed Class Hierarchies, July 8, 2000 Robert J. Walker, 13 pages

Multiple class hierarchies can be used each to represent a separate requirement or design concern. To yield a working system, these disparate hierarchies must be composed in a semantically meaningful way. However, cycles can arise in the composed inheritance graph that restrict the space of composable hierarchies. This work presents an approach to eliminating these cycles by means of separating the type hierarchy from the implementation hierarchy; separate solutions are provided for languages permitting multiple inheritance, such as C++, and those permitting only interfaces, such as Java. The resulting acyclic class hierarchy will maintain the significant constraints imposed by the original, separate hierarchies, such as type-safety.

TR-2000-08 Determination of Intensity Thresholds via Shape Gradients, August 01, 2000 Roger Tam and Alain Fournier, 9 pages

The shapes of objects in an image provide high-level information that is essential for many image processing tasks. Accurate analysis of medical images is often dependent upon an appropriate greyscale thresholding of the image for reliable feature extraction. The determination of object thresholds can be a time-consuming task because the thresholds can vary greatly depending upon the quality and type of image. Thus, an efficient method for determining suitable thresholds is highly desirable. This paper presents a method that uses shape information to accurately determine the intensity ranges of objects present in a greyscale image. The technique introduced is based on the computation of the \emph{shape gradient}, a numerical value for the difference in shape. In this case, the difference in shape is caused by the change in threshold value applied to the image. The use of this gradient allows us to determine significant shape change \emph{events} in the evolution of object forms as the threshold varies. The gradient is computed using \emph{Union of Circles} matching, a method previously shown to be effective in computing shape differences. We show the results of applying this method to artificially computed images and to real medical images. The quality of these results shows that the method is potentially viable in practical applications.
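
The mechanics of the threshold sweep can be illustrated with a crude stand-in for the shape gradient; in the sketch below the symmetric-difference area between consecutive binary segmentations replaces the Union of Circles matching that the paper actually uses, so it captures only the idea of detecting abrupt shape-change events.

    import numpy as np

    def threshold_events(image, thresholds, spike=3.0):
        """Sweep greyscale thresholds and report values where the segmented
        shape changes abruptly.  The 'gradient' here is the symmetric-difference
        area between consecutive masks, a crude proxy for a shape difference."""
        grads, prev = [], image >= thresholds[0]
        for t in thresholds[1:]:
            mask = image >= t
            grads.append(int(np.logical_xor(mask, prev).sum()))
            prev = mask
        grads = np.array(grads, dtype=float)
        cutoff = spike * (grads.mean() + 1e-9)
        return [thresholds[i + 1] for i, g in enumerate(grads) if g > cutoff]

    # Synthetic image: background around 50, a square object around 180.
    img = np.full((64, 64), 50.0)
    img[16:48, 16:48] = 180.0
    print(threshold_events(img, thresholds=list(range(40, 220, 10))))  # [60, 190]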

TR-2000-09 Efficient Mapping of Software System Traces to Architectural Views, July 07, 2000 Robert J. Walker, Gail C. Murphy, Jeffrey Steinbok and Martin P. Robillard, 9 pages

Information about a software system's execution can help a developer with many tasks, including software testing, performance tuning, and program understanding. In almost all cases, this dynamic information is reported in terms of source-level constructs, such as procedures and methods. For some software engineering tasks, source-level information is not optimal because there is a wide gap between the information presented (i.e., procedures) and the concepts of interest to the software developer (i.e., subsystems). One way to close this gap is to allow developers to investigate the execution information in terms of a higher-level, typically architectural, view. In this paper, we present a straightforward encoding technique for dynamic trace information that makes it tractable and efficient to manipulate a trace from a variety of different architecture-level viewpoints. We also describe how this encoding technique has been used to support the development of two tools: a visualization tool and a path query tool. We present this technique to enable the development of additional tools that manipulate dynamic information at a higher-level than source.

TR-2000-10 The Rectilinear Crossing Number of K_10 is 62, August 10, 2000 Alex Brodsky, Stephane Durocher and Ellen Gethner, 19 pages

A drawing of a graph G in the plane is said to be a rectilinear drawing of G if the edges are required to be line segments (as opposed to Jordan curves). We assume no three vertices are collinear. The rectilinear crossing number of G is the fewest number of edge crossings attainable over all rectilinear drawings of G. Thanks to Richard Guy, exact values of the rectilinear crossing number of K_n, the complete graph on n vertices, for n = 3,...,9, are known (Guy 1972, White and Beineke 1978). Since 1971, thanks to the work of David Singer, the rectilinear crossing number of K_10 has been known to be either 61 or 62, a deceptively innocent and tantalizing statement. The difficulty of determining the correct value is evidenced by the fact that Singer's result has withstood the test of time. In this paper we use a purely combinatorial argument to show that the rectilinear crossing number of K_10 is 62. Moreover, using this result, we improve an asymptotic lower bound for a related problem. Finally, we close with some new and old open questions that were provoked, in part, by the results of this paper, and by the tangled history of the problem itself.

TR-2000-11 Toward the Rectilinear Crossing Number of K_n: New Drawings, Upper Bounds, and Asymptotics, September 14, 2000 Alex Brodsky, Stephane Durocher and Ellen Gethner, 13 pages

Scheinerman and Wilf (1994) assert that `an important open problem in the study of graph embeddings is to determine the rectilinear crossing number of the complete graph K_n.' A rectilinear drawing of K_n is an arrangement of n vertices in the plane, every pair of which is connected by an edge that is a line segment. We assume that no three vertices are collinear, and that no three edges intersect in a point unless that point is an endpoint of all three. The rectilinear crossing number of K_n is the fewest number of edge crossings attainable over all rectilinear drawings of K_n. For each n we construct a rectilinear drawing of K_n that has the fewest number of edge crossings and the best asymptotics known to date. Moreover, we give some alternative infinite families of drawings of K_n with good asymptotics. Finally, we mention some old and new open problems.

TR-2001-01 Quantum Signal Propagation in Depolarizing Channels, March 28, 2001 Nicholas Pippenger, 7 pages

Let X be an unbiassed random bit, let Y be a qubit whose mixed state depends on X, and let the qubit Z be the result of passing Y through a depolarizing channel, which replaces Y with a completely random qubit with probability p. We measure the quantum mutual information between X and Y by T(X; Y) = S(X) + S(Y) - S(X,Y), where S(...) denotes von Neumann's entropy. (Since X is a classical bit, the quantity T(X; Y) agrees with Holevo's bound chi(X; Y) to the classical mutual information between X and the outcome of any measurement of Y.) We show that T(X;Z) <= (1-p)^2 T(X;Y). This generalizes an analogous bound for classical mutual information due to Evans and Schulman, and provides a new proof of their result.

TR-2001-02 Analysis of Carry Propagation in Addition: An Elementary Approach, March 28, 2001 Nicholas Pippenger, 23 pages

Our goal in this paper is to analyze carry propagation in addition using only elementary methods (that is, those not involving residues, contour integration, or methods of complex analysis). Our results concern the length of the longest carry chain when two independent uniformly distributed n-bit numbers are added. First, we show using just first- and second-moment arguments that the expected length C_n of the longest carry chain satisfies C_n = log_2 n + O(1). Second, we use a sieve (inclusion-exclusion) argument to give an exact formula for C_n. Third, we give an elementary derivation of an asymptotic formula due to Knuth, C_n = log_2 n + Phi(log_2 n) + O((log n)^4 / n), where Phi(x) is a bounded periodic function of x, with period 1, for which we give both a simple integral expression and a Fourier series. Fourth, we give an analogous asymptotic formula for the variance V_n of the length of the longest carry chain: V_n = Psi(log_2 n) + O((log n)^5 / n), where Psi(x) is another bounded periodic function of x, with period 1. Our approach can be adapted to addition with the "end-around" carry that occurs in the sign-magnitude and 1s-complement representations. Finally, our approach can be adapted to give elementary derivations of some asymptotic formulas arising in connection with radix-exchange sorting and collision-resolution algorithms, which have previously been derived using contour integration and residues.
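
The quantity being analysed is easy to measure empirically; the sketch below estimates C_n by direct simulation, as a numerical check on the log_2 n behaviour rather than any part of the report's elementary derivation, and it takes the chain length to be the longest run of consecutive bit positions whose carry-out is 1.

    import random

    def longest_carry_chain(a, b, n):
        """Longest run of consecutive positions that produce a carry when the
        n-bit numbers a and b are added."""
        carry = run = best = 0
        for i in range(n):
            x, y = (a >> i) & 1, (b >> i) & 1
            carry = 1 if x + y + carry >= 2 else 0
            run = run + 1 if carry else 0
            best = max(best, run)
        return best

    def mean_chain(n, trials=2000):
        total = sum(longest_carry_chain(random.getrandbits(n),
                                        random.getrandbits(n), n)
                    for _ in range(trials))
        return total / trials

    for n in (64, 256, 1024):
        print(n, round(mean_chain(n), 2))   # grows by about 2 each time n quadruples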

TR-2001-03 Proving Sequential Consistency by Model Checking, May 17, 2001 Tim Braun, Anne Condon, Alan J. Hu, Kai S. Juse, Marius Laza, Michael Leslie and Rita Sharma, 23 pages

Sequential consistency is a multiprocessor memory model of both practical and theoretical importance. The general problem of deciding whether a finite-state protocol implements sequential consistency is undecidable. In this paper, however, we show that for the protocols that arise in practice, proving sequential consistency can be done automatically in theory and can be reduced to regular language inclusion via a small amount of manual effort. In particular, we introduce an approach to construct finite-state ``observers'' that guarantee that a protocol is sequentially consistent. We have developed possible observers for several cache coherence protocols and present our experimental model checking results on a substantial directory-based cache coherence protocol. From a theoretical perspective, our work characterizes a class of protocols, which we believe encompasses all real protocols, for which sequential consistency can be decided. From a practical perspective, we are presenting a methodology for designing memory protocols such that sequential consistency may be proven automatically via model checking.

TR-2001-05 Separating Crosscutting Concerns Across the Lifecycle: From Composition Patterns to AspectJ and Hyper/J, August 08, 2001 Siobhan Clarke and Robert J. Walker, 13 pages

Crosscutting requirements (such as distribution or persistence) present many problems for software development that manifest themselves throughout the lifecycle. Inherent properties of crosscutting requirements, such as scattering (where their support is scattered across multiple classes) and tangling (where their support is tangled with elements supporting other requirements), reduce the reusability, extensibility, and traceability of the affected software artefacts. Scattering and tangling exist both in designs and code and must therefore be addressed in both. To remove scattering and tangling properties, a means to separate the designs and code of crosscutting behaviour into independent models or programs is required. This paper discusses approaches that achieve exactly that in either designs or code, and presents an investigation into a means to maintain this separation of crosscutting behaviour seamlessly across the lifecycle. To achieve this, we work with composition patterns at the design level, AspectJ and Hyper/J at the code level, and investigate a mapping between the two levels. Composition patterns are a means to separate the design of crosscutting requirements in an encapsulated, independent, reusable, and extensible way. AspectJ and Hyper/J are technologies that provide similar levels of separation for Java code. We discuss each approach, and map the constructs from composition patterns to those of AspectJ and Hyper/J. We first illustrate composition patterns with the design of the Observer pattern, and then map that design to the appropriate code. As this is achieved with varying levels of success, the exercise also serves as a case study in using those implementation techniques.

TR-2001-06 Aspect-Oriented Incremental Customization of Middleware Services, May 28, 2001 Alex Brodsky, Dima Brodsky, Ida Chan, Yvonne Coady, Jody Pomkoski and Gregor Kiczales, 12 pages

As distributed applications evolve, incremental customization of middleware services is often required; these customizations should be unpluggable, modular, and efficient. This is difficult to achieve because the customizations depend on both application-specific needs and the services provided. Although middleware allows programmers to separate application-specific functionality from lower-level details, traditional methods of customization do not allow efficient modularization. Currently, making even minor changes to customize middleware is complicated by the lack of locality. Programmers may have to compromise between the two extremes: to interpose a simple, well-localized layer of functionality between the application and middleware, or to make a large number of small, poorly localized, invasive changes to all execution points which interact with middleware services. Although the invasive approach allows a more efficient customization, it is harder to ensure consistency, more tedious to implement, and exceedingly difficult to unplug. Thus, a common approach is to add an extra layer for systemic concerns such as robustness, caching, filtering, and security. Aspect-oriented programming (AOP) offers a potential alternative between the interposition and invasive approaches by providing modular support for the implementation of crosscutting concerns. AOP enables the implementation of efficient customizations in a structured and unpluggable manner. We demonstrate this approach by comparing traditional and AOP customizations of fault tolerance in a distributed file system model, JNFS. Our results show that using AOP can reduce the amount of invasive code to almost zero, improve efficiency by leveraging the existing application behaviour, and facilitate incremental customization and extension of middleware services.

TR-2001-07 Using Versioning to Simplify the Implementation of a Highly-Available File System, January 23, 2001 Dima Brodsky, Jody Pomkoski, Mike Feeley, Norm Hutchinson and Alex Brodsky, 5 pages

(Abstract not available on-line)

TR-2001-08 Image-Based Measurement of Light Sources With Correct Filtering, July 30, 2002 Wolfgang Heidrich and Michael Goesele, 9 pages

In this document we explore the theory and potential experimental setups for measuring the near field of a complex luminaire. This work extends near field photometry by taking filtering issues into account. The physical measurement setups described here had not been tested at the time of writing; we simply describe several possibilities. Once actual tests have been performed, the results will be published elsewhere.

TR-2001-09 Constraint-Based Agents: A Formal Model for Agent Design, May 25, 2001 Alan K. Mackworth and Ying Zhang, 20 pages

Formal models for agent design are important for both practical and theoretical reasons. The Constraint-Based Agent (CBA) model includes a set of tools and methods for specifying, designing, simulating, building, verifying, optimizing, learning and debugging controllers for agents embedded in an active environment. The agent and the environment are modelled symmetrically as, possibly hybrid, dynamical systems in Constraint Nets. This paper is an integrated presentation of the development and application of the CBA framework, emphasizing the important special case where the agent is an online constraint-satisfying device. Using formal modeling and specification, it is often possible to verify complex agents as obeying real-time temporal constraint specifications and, sometimes, to synthesize controllers automatically. In this paper, we take an engineering point of view, using requirements specification and system verification as measurement tools for intelligent systems. Since most intelligent systems are real-time dynamic systems, the requirements specification must be able to represent timed properties. We have developed timed $\forall$-automata for this purpose. We present this formal specification, examples of specifying requirements and a general procedure for verification. The CBA model demonstrates the power of viewing constraint programming as the creation of online constraint-solvers for dynamic constraints.

TR-2001-10 The Shortest Disjunctive Normal Form of a Random Boolean Function, June 08, 2001 Nicholas Pippenger, 28 pages

This paper gives a new upper bound for the average length l(n) of the shortest disjunctive normal form for a random Boolean function of n arguments, as well as new proofs of two old results related to this quantity. We consider a random Boolean function of n arguments to be uniformly distributed over all 2^{2^n} such functions. (This is equivalent to considering each entry in the truth-table to be 0 or 1 independently and with equal probabilities.) We measure the length of a disjunctive normal form by the number of terms. (Measuring it by the number of literals would simply introduce a factor of n into all our asymptotic results.) We give a short proof using martingales of Nigmatullin's result that almost all Boolean functions have the length of their shortest disjunctive normal form asymptotic to the average length l(n). We also give a short information-theoretic proof of Kuznetsov's lower bound l(n) >= (1+o(1)) 2^n / log n log log n. (Here log denotes the logarithm to base 2.) Our main result is a new upper bound l(n) <= (1+o(1)) H(n) 2^n / log n log log n, where H(n) is a function that oscillates between 1.38826... and 1.54169.... The best previous upper bound, due to Korshunov, had a similar form, but with a function oscillating between 1.581411... and 2.621132.... The main ideas in our new bound are (1) the use of Rödl's "nibble" technique for solving packing and covering problems, (2) the use of correlation inequalities due to Harris and Janson to bound the effects of weakly dependent random variables, and (3) the solution of an optimization problem that determines the sizes of "nibbles" and larger "bites" to be taken at various stages of the construction.

TR-2001-11 Characterizations of Random Set-Walks, August 1, 2001 Joseph H. T. Wong, 58 pages

In this thesis, we introduce a new class of set-valued random processes called random set-walks, which is an extension of the classical random walk that takes into account both the nonhomogeneity of the walk's environment, and the additional factor of nondeterminism in the choices of such environments. We also lay down the basic framework for studying random set-walks. We define the notion of a characteristic tuple as a 4-tuple of first-exit probabilities which characterizes the behaviour of a random walk in a nonhomogeneous environment, and a characteristic tuple set as its analogue for a random set-walk. We prove several properties of random set-walks and characteristic tuples, from which we derive our main result: the long-run behaviour of a sequence of random set-walks, relative to the endpoints of the walks, converges as the length of the walks tends to infinity.

TR-2001-12 Enumeration of Matchings in the Incidence Graphs of Complete and Complete Bipartite Graphs, September 10, 2001 Nicholas Pippenger, 23 pages

If G = (V, E) is a graph, the incidence graph I(G) is the graph with vertices the union of V and E and an edge joining v in V and e in E when and only when v is incident with e in G. For G equal to K_n (the complete graph on n vertices) or K_{n,n} (the complete bipartite graph on n + n vertices), we enumerate the matchings (sets of edges, no two having a vertex in common) in I(G), both exactly (in terms of generating functions) and asymptotically. We also enumerate the equivalence classes of matchings (where two matchings are considered equivalent if there is an automorphism of G that induces an automorphism of I(G) that takes one to the other).
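
For very small instances the counts can be checked by brute force. The sketch below (illustrative only; the helper names are hypothetical) builds the edge set of I(K_n) and counts its matchings, including the empty matching, by simple recursion:

    from itertools import combinations

    def incidence_graph_edges(n):
        """Edges of I(K_n): join vertex v to each K_n edge e incident with v."""
        kn_edges = list(combinations(range(n), 2))
        return [(v, e) for e in kn_edges for v in e]

    def count_matchings(edges):
        """Count all matchings (including the empty one) by simple recursion."""
        if not edges:
            return 1
        (u, v), rest = edges[0], edges[1:]
        # Either skip this edge, or take it and drop every edge touching u or v.
        without = count_matchings(rest)
        with_it = count_matchings([e for e in rest if u not in e and v not in e])
        return without + with_it

    for n in range(1, 5):
        print(n, count_matchings(incidence_graph_edges(n)))

For n = 3, for example, I(K_3) is a 6-cycle and the count is 18.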

TR-2001-13 Concern Graphs: Finding and Describing Concerns Using Structural Program Dependencies, September 10, 2001 Martin P. Robillard and Gail C. Murphy, 11 pages

Many maintenance tasks address concerns, or features, that are not well modularized in the source code comprising a system. Existing approaches available to help software developers locate and manage scattered concerns use a representation based on lines of source code, complicating the analysis of the concerns. In this paper, we introduce the Concern Graph representation that abstracts the implementation details of a concern and makes explicit the relationships between different parts of the concern. The abstraction used in a Concern Graph has been designed to allow an obvious and inexpensive mapping back to the corresponding source code. To investigate the practical tradeoffs related to this approach, we have built the Feature Exploration and Analysis tool (FEAT) that allows a developer to manipulate a concern representation extracted from a Java system, and to analyze the relationships of that concern to the code base. We have used this tool to find and describe concerns related to software change tasks. We have performed case studies to evaluate the feasibility, usability, and scalability of the approach. Our results indicate that Concern Graphs can be used to document a concern for change, that developers unfamiliar with Concern Graphs can use them effectively, and that the underlying technology scales to industrial-sized programs.

TR-2001-14 Loosely Coupled Optimistic Replication for Highly Available, Scalable Storage, September 13, 2001 Dima Brodsky, Jody Pomkoski, Michael J. Feeley, Norman Hutchinson and Alex Brodsky, 12 pages

People are becoming increasingly reliant on computing devices and are trusting increasingly important data to persistent storage. These systems should protect this data from failure and ensure that it is available anytime, from anywhere. Unfortunately, traditional mechanisms for ensuring high availability suffer from the complexity of maintaining consistent, distributed replicas of data. This paper describes Mammoth, a novel file system that uses a loosely-connected set of nodes to replicate data and maintain consistency. The key idea of Mammoth is that files and directories are stored as histories of immutable versions and that all meta-data is stored in append-only change logs. Users specify availability policies for their files and the system uses these policies to replicate certain, but not necessarily all, versions to remote nodes to protect them from a variety of failures. Because file data is immutable, it can be freely replicated without complicating the file's consistency. File and directory meta-data is replicated using an optimistic policy that allows partitioned nodes to read and write whatever file versions are currently accessible. When network partitions heal, inconsistent meta-data is reconciled by merging the meta-data updates made in each partition; conflicting updates manifest as branches in the file's or directory's history and can thus be further resolved by higher-level software or users. We describe our design and the implementation and performance of an early prototype.

TR-2001-15 Bayesian Latent Semantic Analysis of Multimedia Databases, October 11, 2001 Nando de Freitas and Kobus Barnard, 35 pages

We present a Bayesian mixture model for probabilistic latent semantic analysis of documents with images and text. The Bayesian perspective allows us to perform automatic regularisation to obtain sparser and more coherent clustering models. It also enables us to encode a priori knowledge, such as word and image preferences. The learnt model can be used for browsing digital databases, information retrieval with image and/or text queries, image annotation (adding words to an image) and text illustration (adding images to a text).

TR-2001-17 Clustering Facial Displays in Context, November 13, 2001 Jesse Hoey, 16 pages

A computer user's facial displays will be context dependent, especially in the presence of an embodied agent. Furthermore, each interactant will use their face in different ways, for different purposes. These two hypotheses motivate a method for clustering patterns of motion in the human face. Facial motion is described using optical flow over the entire face, projected to the complete orthogonal basis of Zernike polynomials. A context-dependent mixture of hidden Markov models (cmHMM) clusters the resulting temporal sequences of feature vectors into facial display classes. We apply the clustering technique to sequences of continuous video, in which a single face is tracked and spatially segmented. We discuss the classes of patterns uncovered for a number of subjects.

TR-2001-18 The Optimized Segment Support Map for the Mining of Frequent Patterns, November 15, 2001 Carson Kai-Sang Leung, Raymond T. Ng and Heikki Mannila, 25 pages

Computing the frequency of a pattern is a key operation in data mining algorithms. We describe a simple, yet powerful, way of speeding up any form of frequency counting satisfying the monotonicity condition. Our method, the optimized segment support map (OSSM), is based on a simple observation about data: real-life data sets are not necessarily uniformly distributed. The OSSM is a light-weight structure that partitions the collection of transactions into m segments, so as to reduce the number of candidate patterns that require frequency counting. We study the following problems: (i) What is the optimal value of m, the number of segments to be used (the segment minimization problem)? (ii) Given a user-determined m, what is the best segmentation/composition of the m segments (the constrained segmentation problem)? For the segment minimization problem, we provide a thorough analysis and a theorem establishing the minimum value of m for which no accuracy is lost in using the OSSM. For the constrained segmentation problem, we develop various algorithms and heuristics to help facilitate segmentation. Our experimental results on both real and synthetic data sets show that our segmentation algorithms and heuristics can efficiently generate OSSMs that are compact and effective.
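
To make the pruning idea concrete, here is a small illustrative sketch, not the authors' implementation (the function names, the contiguous segmentation, and the single-item bound are simplifying assumptions): per-segment supports of single items give an upper bound on the support of any candidate itemset, and candidates whose bound falls below the support threshold need not be counted exactly.

    from collections import Counter

    def build_ossm(transactions, m):
        """Per-segment single-item supports for m contiguous segments."""
        size = (len(transactions) + m - 1) // m
        segments = [transactions[i:i + size] for i in range(0, len(transactions), size)]
        return [Counter(item for t in seg for item in set(t)) for seg in segments]

    def support_upper_bound(ossm, itemset):
        """Sum over segments of the least single-item support in the segment."""
        return sum(min(seg[i] for i in itemset) for seg in ossm)

    transactions = [{"a", "b"}, {"a"}, {"b", "c"}, {"a", "b", "c"}, {"c"}, {"a", "c"}]
    ossm = build_ossm(transactions, m=3)
    # If the bound is below the minimum-support threshold, {"a","b"} need not be counted.
    print(support_upper_bound(ossm, {"a", "b"}))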

TR-2001-19 Animation of Fish Swimming, January 30, 2002 William F. Gates, 8 pages

We present a simple, two-part model of the locomotion of slender-bodied aquatic animals designed specifically for the needs of computer animation. The first part of the model is kinematic and addresses body deformations for three swimming modes: steady swimming, rapid starting, and turning. The second part of the model is dynamic and addresses the resulting propulsion of the aquatic animal. While this approach is not as general as a fully dynamic model, it provides the animator with a small set of intuitive parameters that directly control how the fish model moves and is more efficient to simulate.
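
A common kinematic parameterization of slender-body swimming, given here only to illustrate the kind of model meant and not as the authors' exact formulation, drives the lateral displacement of the body midline with an amplitude-modulated travelling wave: $y(s,t) = A(s)\sin(2\pi s/\lambda - 2\pi f t)$, where $s$ is arc length along the body, $A(s)$ is an amplitude envelope that typically grows toward the tail, $\lambda$ is the body-wave length and $f$ its frequency; different swimming modes correspond to different choices of envelope and wave parameters.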

TR-2001-20 Free-Surface Conditions in the Realistic Animation of Liquids, January 30, 2002 William F. Gates, 13 pages

The realistic animation of liquids based on the dynamic simulation of free-surface flow requires appropriate conditions on the liquid-gas interface. These conditions can be painstaking to implement and are in general not unique. We present the conditions we use in our implementation of a fluid animation system and discuss our rationale behind them.

TR-2001-22 Controlling Fluid Flow Simulation, January 30, 2002 William F. Gates and Alain Fournier, 3 pages

Simulating fluid dynamics can be a powerful approach to animating liquids and gases, but it is often difficult to ``direct'' the simulation to ``perform'' as desired. We introduce a simple yet powerful technique of controlling incompressible flow simulation for computer animation purposes that works for any simulation method using a projection scheme for numerically solving the Navier-Stokes equations. In our technique, an abstract vector field representing the desired influence over the simulated flow is modelled using simple primitives. This technique allows an arbitrary degree of control over the simulated flow at every point while still conserving mass, momentum, and energy.
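
In a projection-based solver, such a control field can be treated like any other body force applied before the pressure projection. As a generic sketch of that step (not necessarily the authors' exact formulation): $\mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t\,(\mathbf{f} + \mathbf{f}_{\mathrm{control}})$, followed by solving $\nabla^{2} p = \nabla\cdot\mathbf{u}^{*}/\Delta t$ and setting $\mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla p$, so that $\nabla\cdot\mathbf{u}^{n+1} = 0$ and the projected velocity field remains divergence-free regardless of the added control field.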

TR-2002-01 Understanding Design Patterns with Design Rationale Graphs, March 01, 2002 Elisa L. A. Baniassad, Gail C. Murphy and Christa Schwanninger, 8 pages

A Design Pattern presents a proven solution to a common design problem using a combination of informal text, diagrams, and examples. Often, to suitably describe an issue, the author of a Design Pattern must spread and repeat information throughout the Pattern description. Unfortunately, spreading the information can make it difficult for a reader to grasp subtleties in the design, leading to possible misuses of the Pattern. In this paper, we introduce the Design Rationale Graph (DRG) representation that connects and visualizes related concepts described in a Design Pattern. The localization of concept information is intended to help improve a reader's understanding of a Design Pattern. Improved comprehension of a Pattern could aid the use of a Pattern during implementation, and the reading of code built upon the Pattern. In addition to describing the DRG representation, we present a tool we have built to support the semi-automatic creation of a DRG from Design Pattern text, and we report on a small study conducted to explore the utility of DRGs. The study showed that readers with access to a DRG were able to answer questions about the Pattern more completely and with more confidence than those given the Design Pattern alone.

TR-2002-02 Proceedings of the First AOSD Workshop on Aspects, Components, and Patterns for Infrastructure Software, April 23, 2002 Yvonne Coady (Ed.), 85 pages

Aspect-oriented programming, component models, and design patterns are modern and actively evolving techniques for improving the modularization of complex software. In particular, these techniques hold great promise for the development of "systems infrastructure" software, e.g., application servers, middleware, virtual machines, compilers, operating systems, and other software that provides general services for higher-level applications. The developers of infrastructure software are currently faced with increasing demands from application programmers needing higher-level support for application development. Meeting these demands requires careful use of software modularization techniques, since infrastructural concerns are notoriously hard to modularize. Aspects, components, and patterns provide very different means to deal with infrastructure software, but despite their differences, they have much in common. For instance, component models try to free the developer from the need to deal directly with services like security or transactions. These are primary examples of crosscutting concerns, and modularizing such concerns is the main target of aspect-oriented languages. Similarly, design patterns like Visitor and Interceptor facilitate the clean modularization of otherwise tangled concerns. This workshop aims to provide a highly interactive forum for researchers and developers to discuss the application of and relationships between aspects, components, and patterns within modern infrastructure software. The goal is to put aspects, components, and patterns into a common reference frame and to build connections between the software engineering and systems communities.

TR-2002-03 Motion Perturbation Based on Simple Neuromotor Control Models, May 23, 2002 Michael B. Cline, KangKang Yin and Dinesh K. Pai, 8 pages

Motion capture is widely used for character animation. One of the major challenges in this area is modifying human motion in plausible ways. Previous work has focused on transformations based on kinematics and dynamics, but has not explicitly taken into account the emerging knowledge of how humans control their motion. In this paper we show how this can be done using a simple human neuromuscular control model. Our model of muscle forces includes a feedforward term and low gain passive feedback. The feedforward component is calculated from motion capture data using inverse dynamics. The feedback component generates reaction forces to unexpected external disturbances. The perturbed animation is then resynthesized using forward dynamics. This allows us to create animations where the character reacts to unexpected external forces in a natural way (e.g., when the character is hit by a flying object), but still retains qualities of the original animation. Such a technique is useful for applications such as interactive sports video games.

TR-2002-04 The Inequalities of Quantum Information Theory, May 28, 2002 Nicholas Pippenger, i+46 pages

Given an n-part quantum state, we consider the 2^n substates obtained by restricting attention to a subset of the parts. The entropies (in the sense of von Neumann) of these substates may be regarded as a point, called the allocation of entropy, in a (2^n)-dimensional real vector space. We show that the topological closure of the set of allocations of entropy forms a convex cone. We show that a set of inequalities due to Lieb and Ruskai characterizes this cone when n is at most 3. We also consider the symmetric situation in which the entropy depends only on the number of parts in the substate. In this case, the topological closure of the set of allocations of entropy (in (n+1)-dimensional space) again forms a convex cone, and we give inequalities characterizing this cone for all n.
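
For reference, the Lieb-Ruskai constraints referred to above are usually written, for a tripartite state with von Neumann entropy $S(\cdot)$, as strong subadditivity $S(ABC) + S(B) \le S(AB) + S(BC)$ and weak monotonicity $S(A) + S(B) \le S(AC) + S(BC)$; these are the standard statements of the inequalities, quoted here only as background and not taken from the report itself.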

TR-2002-05 Scaling an Object-Oriented System Execution Visualizer Through Sampling, December 11, 2002 Andrew Chan, Reid Holmes, Gail C. Murphy and Annie T.T. Ying, 11 pages

Increasingly, applications are being built by combining existing software components. For the most part, a software developer can treat the components as black-boxes. However, for some tasks, such as when performance tuning, a developer must consider how the components are implemented and how they interact. In these cases, a developer may be able to perform the task more effectively by using dynamic information about how the system executes. In previous work, we demonstrated the utility of a tool, called AVID (Architectural VIsualization of Dynamics), that animates dynamic information in terms of developer-chosen architectural views. One limitation of this earlier work was that AVID relied on trace information collected about the system's execution, limiting the duration of execution that could be considered. To enable AVID to scale to larger, longer-running systems, we have been investigating the visualization and animation of sampled dynamic information. In this paper, we discuss the addition of sampling support to AVID, and we present two case studies in which we experimented with animating sampled dynamic information to help with performance tuning tasks.

TR-2002-06 Entropy and Expected Acceptance Counts for Finite Automata, September 03, 2002 Nicholas Pippenger, 24 pages

If a sequence of independent unbiased random bits is fed into a finite automaton, it is straightforward to calculate the expected number of acceptances among the first n prefixes of the sequence. This paper deals with the situation in which the random bits are neither independent nor unbiased, but are nearly so. We show that, under suitable assumptions concerning the automaton, if the difference between the entropy of the first n bits and n converges to a constant exponentially fast, then the change in the expected number of acceptances also converges to a constant exponentially fast. We illustrate this result with a variety of examples in which numbers following the reciprocal distribution, which governs the significands of floating-point numbers, are recoded in the execution of various multiplication algorithms.
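
For the baseline case of independent unbiased bits, the expected number of accepting prefixes can be computed by propagating the automaton's state distribution. The sketch below is illustrative only (the function name and transition-table encoding are assumptions, and it does not address the nearly-independent, nearly-unbiased case the paper analyzes):

    def expected_acceptances(delta, accepting, start, n):
        """Expected number of accepting prefixes among the first n prefixes.

        `delta[q]` is a pair (next state on bit 0, next state on bit 1); bits
        are independent and unbiased, so each branch carries probability 1/2.
        """
        dist = {start: 1.0}
        expected = 0.0
        for _ in range(n):
            new_dist = {}
            for q, p in dist.items():
                for b in (0, 1):
                    r = delta[q][b]
                    new_dist[r] = new_dist.get(r, 0.0) + 0.5 * p
            dist = new_dist
            expected += sum(p for q, p in dist.items() if q in accepting)
        return expected

    # Automaton accepting exactly the prefixes ending in a 1: expectation is n/2.
    delta = {0: (0, 1), 1: (0, 1)}
    print(expected_acceptances(delta, accepting={1}, start=0, n=10))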

TR-2002-07 Extended Canonical Recoding, September 05, 2002 Nicholas Pippenger

Canonical recoding transforms a sequence of bits into a sequence of signed digits, preserving the numerical value of the sequence while reducing the number of non-zero digits. It is used to reduce the number of additions and subtractions when performing multiplication (or equivalently, the number of multiplications and divisions when performing exponentiation). Standard canonical recoding uses the digits 0, 1 and -1. Any two non-zero digits are separated by at least one zero, so in the worst case n/2 + O(1) non-zero digits are used to recode an n-bit sequence; in the average case n/3 + O(1) non-zero digits are used. We introduce extended canonical recoding, which uses the digits 0, 1, -1, 3 and -3. Any two non-zero digits are separated by at least two zeroes, so at most n/3 + O(1) non-zero digits are used in the worst case. We show that the scheme is uniquely determined by this condition, and that it is optimal among schemes using the same set of signed digits. Finally, we show that in the average case, extended canonical recoding uses n/4 + O(1) non-zero digits.
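
The digit set {0, +-1, +-3} and the at-least-two-zeros spacing described here coincide with those of the standard width-3 non-adjacent form, so a w-NAF-style recoder is one way to experiment with such representations. The sketch below implements that standard construction and is offered as an illustration, not as the paper's exact scheme:

    def recode_width3(n):
        """Signed-digit recoding with digits {0, +-1, +-3} (width-3 NAF style).

        Any two non-zero digits end up separated by at least two zeros, and the
        digits, read least-significant first, sum to n when weighted by powers of 2.
        """
        digits = []
        while n > 0:
            if n % 2:
                d = n % 8            # signed residue in (-4, 4]
                if d > 4:
                    d -= 8
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    d = recode_width3(478)
    assert sum(di * 2**i for i, di in enumerate(d)) == 478
    print(d)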

TR-2002-08 Choosing the Right Neighbourhood: a Way to Improve a Stochastic Local Search Algorithm for DNA Word Design, October 10, 2002 D. C. Tulpan and H. Hoos, 15 pages

We study a stochastic local search (SLS) algorithm for the design of DNA codes, namely sets of equal-length words over the nucleotide alphabet $\{A,C,G,T\}$ that satisfy certain combinatorial constraints. Using empirical analysis of the algorithm, we gain insight on good design principles and in turn improve the performance of the SLS algorithm. We report several cases in which our algorithm finds word sets that match or exceed the best previously known constructions.
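
As a toy illustration of stochastic local search for word design (this is not the authors' algorithm; the single Hamming-distance constraint and all parameters are simplifying assumptions), the sketch below repeatedly mutates a random position of a word involved in a violated pair until every pair of words is at least a minimum Hamming distance apart:

    import random

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def sls_dna_words(num_words, length, min_dist, steps=50_000, seed=1):
        """Try to find `num_words` words over {A,C,G,T} with pairwise Hamming
        distance at least `min_dist`, by randomized repair of violated pairs."""
        rng = random.Random(seed)
        alphabet = "ACGT"
        words = [[rng.choice(alphabet) for _ in range(length)] for _ in range(num_words)]
        for _ in range(steps):
            violated = [(i, j) for i in range(num_words) for j in range(i + 1, num_words)
                        if hamming(words[i], words[j]) < min_dist]
            if not violated:
                return ["".join(w) for w in words]
            i, j = rng.choice(violated)
            w = rng.choice((i, j))
            words[w][rng.randrange(length)] = rng.choice(alphabet)
        return None  # no satisfying set found within the step budget

    print(sls_dna_words(num_words=8, length=8, min_dist=4))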

TR-2002-11 Back to the Future: A Retroactive Study of Aspect Evolution in Operating System Code, October 21, 2002 Yvonne Coady and Gregor Kiczales, 10 pages

The FreeBSD operating system more than doubled in size between version 2 and version 4. Many changes to primary modularity are easy to spot at a high-level. For example, new device drivers account for 38% of the growth. Not surprisingly, changes to crosscutting concerns are more difficult to track. In order to better understand how an aspect-oriented implementation would have fared during this evolution, we introduced several aspects to version 2 code, and then rolled them forward into their subsequent incarnations in versions 3 and 4 respectively. This paper describes the impact evolution had on these concerns, and provides a comparative analysis of the changes required to evolve the original versus aspect-oriented implementations. Our results show that for the concerns we chose, the aspect-oriented implementation facilitated evolution in four key ways: (1) changes were better localized, (2) configurability was more explicit, (3) redundancy was reduced, and (4) extensibility aligned with an aspect was more modular. Additionally, we found the aspect-oriented implementation had negligible impact on performance.

TR-2003-01 Giving a Compass to a Robot - Probabilistic Techniques for Simultaneous Localisation and Map Building (SLAM) in Mobile Robotics, December 20, 2002 R. W. v. L. Wenzel, 6 pages

An important feature of an autonomous mobile robotic system is its ability to accurately localize itself while simultaneously constructing a map of its environment. This problem is complicated because of its chicken-and-egg nature: in order to determine its location the robot needs to know the map, and in order to build an accurate map the robot must know where it is. In addition, a robust system must account for the noise in odometry and sensor readings. This project explores the probabilistic methods of solving the SLAM problem using Rao-Blackwellisation.

TR-2003-02 The Boolean Functions Computed by Random Boolean Formulas OR How to Grow the Right Function, January 30, 2003 Alex Brodsky and Nicholas Pippenger

Through probabilistic amplification, random Boolean functions were used for constructing reliable networks from unreliable components and for deriving complexity bounds of various classes of functions. Hence, determining the initial conditions for such processes is an important and challenging problem. In this paper we characterize growth processes by their initial conditions and derive conditions under which results such as Valiant's (Valiant, 1984) hold. First, we completely characterize growth processes that use linear connectives. Second, by extending Savický's (Savický, 1990) analysis, via ``Restriction Lemmas'', we characterize growth processes that use monotone connectives, and show that our technique is applicable to growth processes that use other connectives as well.

TR-2003-03 On the Complexity of Buffer Allocation in Message Passing Systems, January 30, 2003 Alex Brodsky, Jan B. Pedersen and Alan Wagner, 35 pages

Message passing programs commonly use buffers to avoid unnecessary synchronizations and to improve performance by overlapping communication with computation. Unfortunately, using buffers makes the program no longer portable, potentially unable to complete on systems without a sufficient number of buffers. Effective buffer use entails that the minimum number needed for a safe execution be allocated. We explore a variety of problems related to buffer allocation for safe and efficient execution of message passing programs. We show that determining the minimum number of buffers or verifying a buffer assignment are intractable problems. However, we give a polynomial time algorithm to determine the minimum number of buffers needed to allow for asynchronous execution. We extend these results to several different buffering schemes, which in some cases make the problems tractable.

TR-2003-04 Flexible and Local Isosurfaces - Using Topology for Exploratory Visualization, February 10, 2003 Hamish Carr and Jack Snoeyink, 18 pages

The contour tree is a topological abstraction of a scalar field, used to accelerate isosurface extraction and to represent the topology of the scalar field visually. We simplify the minimal seed sets of van Kreveld et al. by extracting isosurface seeds directly from the contour tree at run-time, and by guaranteeing that no redundant seeds are generated. We then extend the contour spectrum of Bajaj et al. as an interface for flexible isosurfaces, in which individual contour surfaces with different isovalues can be displayed, manipulated and annotated. Finally, we show that the largest contour segmentation of Manders et al., in which separate surfaces are generated for each local maximum of the field, is in fact a special case of the flexible isosurface.

TR-2003-05 Bayesian Models for Massive Multimedia Databases: A New Frontier, February 18, 2003 Nando de Freitas, Eric Brochu, Kobus Barnard, Pinar Duygulu and David Forsyth, 12 pages

Modelling the increasing number of digital databases (the web, photo-libraries, music collections, news archives, medical databases) is one of the greatest challenges of statisticians in the new century. Despite the large amounts of data, the models are so large that they motivate the use of Bayesian models. In particular, the Bayesian perspective allows us to perform automatic regularisation to obtain sparse and coherent models. It also enables us to encode a priori knowledge, such as word, music and image preferences. The learned models can be used for browsing digital databases, information retrieval with image, music and/or text queries, image annotation (adding words to an image), text illustration (adding images to a text), and object recognition.

TR-2003-06 A Study of Program Evolution Involving Scattered Concerns, March 26, 2003 Martin P Robillard and Gail C. Murphy, 11 pages

Before making a change to a system, software developers typically explore the source code to find and understand the subset relevant to their task. Software changes often involve code addressing different conceptually-related segments of the implementation (concerns), which can be scattered across multiple modules. These scattered concerns make it difficult to reason about the code relevant to a change. We carried out a study to investigate how developers discover and manage scattered concerns during a software evolution task, and the role that structural queries play during the investigation. The task we studied consists of a developer adding a feature to a 65kloc Java code base. The study involved eight subjects: four who were not briefed about scattered concerns, and four who were trained to use a concern-oriented investigation tool. Analysis of the navigation among the different program elements examined by the subjects during the task shows evidence that, independent of whether concern-oriented tool support was available, the most successful subjects focused on specific concerns when planning their task, and used a high proportion of structural queries to investigate the code. In addition to the study results, this paper introduces two novel analyses: navigation graphs, which support the analysis of a subject's behavior when investigating source code, and variant analysis, which is used for evaluating the results of a program evolution task.

TR-2003-09 Motion Doodles: A Sketching Interface for Character Animation, April 17, 2003 Matthew Thorne, David Burke and Michiel van de Panne, 11 pages

We present a novel system which allows a character to be sketched and animated within a minute and with a limited amount of training. The process involves two steps. First, a character is sketched by drawing the links representing the torso, head, arms, and legs. An articulated skeleton and mass distribution is then inferred from the sketched links. Further annotations can be added to the skeleton sketch and are automatically bound to the underlying links. Second, the desired animated motion is sketched using a series of arcs and loops that are interpreted as an appropriate sequence of steps, leaps, jumps, and flips. The motion synthesis process then ensures that the animated motion mirrors key features extracted from the input sketch. For example, the height, distance, and timing of an arc that represents a jump are all reflected in the resulting animated motion. The current prototype allows for walking, walking with a stomp, tip-toeing, leaps, jumps, in-place stomps, front flips, and back flips, including the backwards versions of all these motions.

TR-2003-10 MayaJala: A Framework for Multiparty Communication, September 12, 2003 Chamath Keppitiyagama and Norman C. Hutchinson, 12 pages

The availability of higher bandwidth at the end-points has resulted in the proliferation of distributed applications with diverse communication needs. Programmers have to express the communication needs of such applications in terms of a set of point-to-point communication primitives. Such an approach has several disadvantages. These include the increase in development time, the difficulty in getting the complex communication structure right and the redundancy in several developers doing the same work to achieve the same communication pattern. If common communication patterns are available as well defined communication types these problems can be avoided. A similar approach has already been used in the parallel programming domain; message passing parallel programs use collective communication primitives to express communication patterns instead of composing them with point-to-point communication primitives. This is not the case for distributed programs in general. In this paper we discuss different multiparty communication types and also a framework to implement and make them available to distributed programs.

TR-2003-11 Mammoth: A Peer-to-Peer File System, June 14, 2002 Dmitry Brodsky, Shihao Gong, Alex Brodsky, Michael J. Feeley and Norman C. Hutchinson, 15 pages

Peer-to-peer storage systems organize a collection of symmetric nodes to store data without the use of centralized data structures or operations. As a result, they can scale to many thousands of nodes and can be formed directly by clients. This paper describes the design and implementation of Mammoth, which implements a traditional UNIX-like hierarchical file system in a peer-to-peer fashion. Each Mammoth node stores a potentially arbitrary collection of directories and files that may be replicated on multiple nodes. The only thing that links these nodes together is that each metadata object encodes the network addresses of the nodes that store it. Data is replicated by a background process whose operation is simplified by the fact that files are stored as journals of immutable versions. An optimistic replication technique is used to allow nodes to read and write whatever version of data they can access, while also ensuring consistency when nodes are connected. In the event of temporary failure, eventual consistency is achieved by ensuring that every replica of a directory or file metadata object receives all updates to the object, irrespective of delivery order. While an update is being propagated every node that receives it cooperates to ensure that the update is delivered, even if the original sender fails. Our prototype is implemented as a user-level NFS server. Its performance is comparable to that of a standard NFS server and it will be publicly available soon.

TR-2003-12 RAPPID SYNCHRONIZATION, September 08, 2003 Satyajit Chakrabarti and Sukanta Pramanik, 8 pages

This paper proposes a solution to the synchronization issue in RAPPID that has prevented it from being used in synchronous processors like the Pentium family of processors, in spite of its higher average case throughput. Our first approach explores the possibility of using an early count of the number of instructions pending in the decoder. If the availability of these instructions can be predicted within a bounded time then the execution unit can carry on up to that point without error. Our second approach moves the possibility of metastability from the data path to the control path. The data is placed in the path before the control signal, which ensures stable data when the control is stable or metastable. If the metastability in the control signal is not resolved within a single clock-cycle it is re-sampled and the re-sampled signal overwrites the previous signal, thereby ensuring a worst case latency of one clock-cycle.

TR-2003-13 Multiparty Communication Types for Distributed Applications, September 25, 2003 Chamath Keppitiyagama and Norman C. Hutchinson, 6 pages

Communication is an important and complex component of distributed applications. It requires considerable effort and expertise to implement communication patterns in distributed applications. Therefore, it is prudent to separate the two tasks of implementing the communication and implementing the application's main functionality which needs the communication. Traditionally this separation is achieved through a standard interface. A good example is the Message Passing Interface (MPI) for message-passing parallel programs. It takes considerable experience, effort and general agreement in the community to define such an interface. However, a standard interface is not flexible enough for the rapidly changing requirements of distributed applications. We propose that the separation of communication and application specific functionality should be achieved through the abstraction of communication types. In this paper we present a communication type system for multiparty communication. The type system can be used to express the communication requirements of an application, describe an implementation of a communication type, and make a match between these two. Our type system is only a single component of a framework for multiparty communication that we are developing.

TR-2003-14 Implementing a Connected Dominating Set Algorithm for Clustering Mobile Ad Hoc Networks, October 20, 2003 Kan Cai, Suprio Ray, Michael J. Feeley and Norman C. Hutchinson, 14 pages

Advances in network technology have tremendously changed the way people communicate. The relatively recent introduction of wireless networking technology has the potential to play an even more influential role in our daily lives. However, the nature of wireless technology makes it a more difficult platform on which to build useful systems. Some particularly troubling aspects of wireless networks include high mobility, frequent failures, limited power, and low bandwidth. A core component of a networked system is its routing protocol. Several ad-hoc wireless routing protocols have been proposed which depend on a flood-and-cache design. However, these algorithms suffer from scalability problems whenever the hit rate in the cache is low, such as when most connections are short-lived. This paper describes the design and implementation of a Deployable Connected Dominating Set (DCDS) algorithm. As in other CDS algorithms, DCDS provides a scalable routing scheme by constructing and maintaining a backbone across the network. To make our CDS algorithm truly deployable in an IEEE 802.11 network, we eliminate three unrealistic assumptions on which previous designs are based: reliable broadcast, accurate neighbouring information, and a static setup phase. We have implemented the DCDS algorithm and simulated it using Glomosim. The evaluations show that DCDS has significantly better scalability than AODV. We also show that our algorithm can maintain an effective backbone which appropriately balances setup time and size.
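
To illustrate what a connected-dominating-set backbone is (purely as background; this is not the DCDS algorithm, which additionally has to cope with unreliable broadcast, inaccurate neighbour information and mobility), the internal vertices of any breadth-first spanning tree of the connectivity graph already form a CDS:

    from collections import deque

    def spanning_tree_cds(adj, root):
        """Internal (non-leaf) vertices of a BFS tree: they dominate every leaf
        and induce a connected subgraph, so they form a connected dominating set."""
        parent = {root: None}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        internal = {p for p in parent.values() if p is not None}
        return internal or {root}   # degenerate case: a one-node graph

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(spanning_tree_cds(adj, root=0))

Every node outside the returned backbone has a backbone neighbour, so the backbone suffices to relay traffic between any pair of nodes.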

TR-2003-15 Policy Driven Replication, October 7, 2003 Dmitry Brodsky, Alex Brodsky, Michael J. Feeley and Norman C. Hutchinson, 14 pages

The increasingly commodity nature of storage and our insatiable tendency to produce, store, and use large amounts of data exacerbates the problem of ensuring data survivability. The advent of large robust networks has brought wide-spread acceptance to the idea of replicating data on remote hosts. Unfortunately, the growth of network bandwidth is far outstripped by both the growth of storage capacity [patterson03jimgray] and our ability to fill it. Thus, most replication systems, which traditionally replicate data blindly, fail under the onslaught of this lopsided mismatch. We propose a Policy Driven Replication (PDR) system that prioritizes the replication of data, based on user-defined policies that specify which data is to be protected, from which failures, and to what extent. By prioritizing which data is replicated, our system conserves limited resources and ensures that data which is deemed most important to and by the user is protected from failures that are deemed most likely to occur.

TR-2003-16 Apostle: A Simple Incremental Weaver for a Dynamic Aspect Language, October 22, 2003 Brian de Alwis and Gregor Kiczales, 9 pages

This paper describes the incremental weaving implementation of Apostle, an aspect-oriented language extension to Smalltalk modelled on AspectJ. Apostle implements incremental weaving in order to make aspect-oriented programming (AOP) a natural extension of the incremental edit-run-debug cycle of Smalltalk environments. The paper analyzes build dependencies for aspect declarations, and shows that two simple dependency table structures are sufficient to produce reasonable re-weaving efficiency. The resulting incremental weaver provides re-weaving performance proportional to the change in the program.

TR-2003-17 Energy Efficient Peer-to-Peer Storage, November 10, 2003 Geoffrey Lefebvre and Michael J. Feeley, 6 pages

This paper describes key issues for building an energy efficient peer-to-peer (P2P) storage system. Current P2P systems waste large amounts of energy because of the false assumption that participating nodes' resources are free. Environmentally and economically, this is not true. Instead this paper argues that idle nodes in a P2P system should sleep to save energy. We derive an upper bound on the time an idle node can sleep without affecting the durability of the data stored in the system. This upper bound is parameterized by the replication factor and expected failure rates. We also outline a protocol for failure detection in an environment where only a small fraction of the nodes are alive at any time.
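
As a back-of-the-envelope illustration of how such a bound can be parameterized (this particular model, with independent exponentially distributed failures and a target risk, is my assumption and not necessarily the paper's derivation), requiring $(1 - e^{-\lambda T})^{r} \le \varepsilon$ for $r$ replicas failing at rate $\lambda$ gives $T \le -\ln(1 - \varepsilon^{1/r})/\lambda$:

    import math

    def max_sleep_time(failure_rate, replicas, risk):
        """Longest sleep T with (1 - exp(-failure_rate*T)) ** replicas <= risk,
        assuming independent exponential failures (illustrative model only)."""
        return -math.log(1.0 - risk ** (1.0 / replicas)) / failure_rate

    # e.g. 3 replicas, about one failure per node-year, target risk 1e-6
    print(max_sleep_time(failure_rate=1 / 365.0, replicas=3, risk=1e-6), "days")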

TR-2003-18 A Bayesian Network Model of a Collaborative Interactive Tabletop Display, December 09, 2003 Mark S. Hancock, 8 pages

In this paper, I explore the use of Bayesian Networks to model the use of an interactive tabletop display in a collaborative environment. Specifically, this model is intended to extract user-profile information for each user including their location at the table as well as their handedness. The network uses input from a six-degrees-of-freedom stylus device as its source of observable information. This paper introduces a first attempt at a model to support these requirements as well as a preliminary evaluation of the model. Results show that the model is sufficiently accurate to obtain a user profile in real time in a Tabletop Display environment.

TR-2003-20 Aspect Weaving with C# and .NET, December 19, 2003 Michael A. Blackstock, 9 pages

Since current object oriented programming languages don’t have existing support for aspects, aspects are often supported through language extensions. Another approach is to use the existing language to encapsulate aspect behaviors, and provide an additional language to express cross cutting statements. Finally, other systems including the one described in this paper use features of the existing language to specify aspect behavior and cross cutting. This paper presents a prototype weaver called AOP.NET that demonstrates the feasibility of supporting aspect oriented programming in C# without the need for language extensions, or a cross cutting statement file. All of the information related to supporting AOP including the cross cutting statements is contained in the aspect declaration. The cross cutting statements are expressed using a language feature called attributes which are used to annotate methods, fields and classes with meta data in languages targeting the Common Language Runtime (CLR) such as C#. Since attributes are supported in all CLR languages it should be possible to maintain .NET language independence with this approach. AOP.NET demonstrates the feasibility of static and transparent dynamic weaving in .NET. Unlike other .NET dynamic weavers, no changes are required to the source code of clients of functional components for dynamic weaving, the same weaving engine is used in both a static tool and dynamic weaving run time host, and it is implemented completely in C#.

TR-2004-01 Demonstrating Numerical Convergence to the Analytic Solution of some Backwards Reachable Sets with Sharp Features, January 29, 2004 Ian M. Mitchell, 42 pages

We examine the convergence properties of a level set algorithm designed to track evolving interfaces; in particular, its convergence properties on a series of two and three dimensional backwards reachable sets whose flow fields involve kink formation (sharp features) and, in some cases, rarefaction fans introduced by input parameters in the dynamics. The chosen examples have analytic solutions to facilitate the convergence analysis. We describe the error analysis method, the formulation of reachability in terms of a Hamilton-Jacobi equation, and our implementation of the level set method in some detail. In addition to the convergence analysis presented here, these techniques and examples could be used to validate either other nonlinear reachability algorithms or other level set implementations.

TR-2004-02 Decision Theoretic Learning of Human Facial Displays and Gestures, March 11, 2004 Jesse Hoey and James J. Little, 45 pages

This paper concerns Bayesian learning of facial displays and gestures in interaction. Changes in the human face occur due to many factors, including communication, emotion, speech, and physiology. Most systems for facial expression analysis attempt to recognize one or more of these factors, resulting in a machine whose inputs are video sequences or static images, and whose outputs are, for example, basic emotion categories. Our approach is fundamentally different. We make no prior commitment to some particular recognition task. Instead, we consider that the meaning of a facial display for an observer is contained in its relationship to actions and outcomes. Agents must distinguish facial displays according to their affordances, or how they help an agent to maximize utility. To this end, our system learns relationships between the movements of a person's face, the context in which they are acting, and a utility function. The model is a partially observable Markov decision process, or POMDP. The video observations are integrated into the POMDP using a dynamic Bayesian network, which creates spatial and temporal abstractions amenable to decision making at the high level. The parameters of the model are learned from training data using an a-posteriori constrained optimization technique based on the expectation-maximization algorithm. One of the most significant advantages of this type of learning is that it does not require labeled data from expert knowledge about which behaviors are significant in a particular interaction. Rather, the learning process discovers clusters of facial motions and their relationship to the context automatically. As such, it can be applied to any situation in which non-verbal gestures are purposefully used in a task. We present an experimental paradigm in which we record two humans playing a collaborative game, or a single human playing against an automated agent, and learn the human behaviors. We use the resulting model to predict human actions. We show results on three simple games.

TR-2004-04 Logarithmic Complexity for a class of XML Queries, April 01, 2004 Jeremy Barbay, 14 pages

The index of an XML document typically consists of a set of lists of node references. For each node type, a list gives the references of all nodes of this type, in the prefix traversal order. A twig pattern query is answered by the list of all occurrences of a labeled tree structure, and is computed faster using an index. While previous results propose index structures and algorithms which answer twig pattern queries with a complexity linear in the size of the document, we propose an index which allows us to answer twig pattern queries with a number of comparisons logarithmic in the size of the document. As efficiently answering twig pattern matching queries necessitates a sophisticated encoding of the output, we present our technique on two simpler problems, and we claim that the technique can be applied to answer twig pattern queries using a logarithmic number of comparisons as well.

TR-2004-05 Constraint-Based Approach to Hybrid Dynamical Systems with Uncertainty, April 01, 2004 Robert St-Aubin and Alan K. Mackworth, 41 pages

Due to the recent technological advances, real-time hybrid dynamical systems are becoming ubiquitous. Most of these systems behave unpredictably, and thus, exhibit uncertainty. Hence, a formal framework to model systems with unpredictable behaviours is needed. We develop Probabilistic Constraint Nets (PCN), a new framework that can handle a wide range of uncertainty, whether it be probabilistic, stochastic or non-deterministic. In PCN, we view probabilistic dynamical systems as online constraint-solvers for dynamic probabilistic constraints and requirements specification as global behavioural constraints on the systems. We demonstrate the power of PCN by applying it to a fully hybrid model of an elevator system which encompasses several different types of uncertainty. We present verification rules, which have been fully implemented, to perform automatic behavioural constraint verification.

TR-2004-06 Topology Sensitive Replica Selection, June 03, 2004 Dmitry Brodsky, Michael J. Feeley and Norman C. Hutchinson, 14 pages

With the proliferation of peer-to-peer storage it is now possible to protect one's data at a level that is comparable to traditional replication systems but at reduced cost and complexity. These systems provide the needed flexibility, reliability, and scalability to operate in present day environments, and handle present day loads. These peer-to-peer storage systems must be able to replicate data on hosts that are trusted, secure, and available. However, recent research has shown that the traditional model, where nodes are assumed to have identical levels of trust, to behave independently, and to have similar failure modes, is incorrect. Thus, there is a need for a mechanism that automatically, correctly, and efficiently selects replica nodes from a large number of available hosts with varying capabilities and trust levels. In this paper we present an algorithm to handle node selection either for new replica groups or to replace failed replicas in a peer-to-peer replication system. We show through simulation that our algorithm maintains the interconnection topology such that the cost of recovery from a failed replica, measured by the number of messages and bandwidth, is minimized.

TR-2004-07 A New Approach to Upward-Closed Set Backward Reachability Analysis, June 21, 2004 Jesse Bingham, 18 pages

In this paper we present a new framework for computing the backward reachability from an upward-closed set in a class of parameterized (i.e. infinite state) systems that includes broadcast protocols and Petri nets. In contrast to the standard approach, which performs a single least fixpoint computation, we consecutively compute the finite state least fixpoint for constituents of increasing size, which allows us to employ binary decision diagram (BDD)-based symbolic model checking. In support of this framework, we prove necessary and sufficient conditions for convergence and intersection with the initial states, and provide an algorithm that uses BDDs as the underlying data structure. We give experimental results that demonstrate the existence of a Petri net for which our algorithm is two orders of magnitude faster than the standard approach, and speculate on properties that might suggest which approach to apply.

TR-2004-09 A Toolbox of Level Set Methods (version 1.0) (Replaced by TR-2007-11), March 14, 2005 Ian M Mitchell, 94 pages

This document describes a toolbox of level set methods for solving time-dependent Hamilton-Jacobi partial differential equations (PDEs) in the MATLAB programming environment. Level set methods are often used for simulation of dynamic implicit surfaces in graphics, fluid and combustion simulation, image processing, and computer vision. Hamilton-Jacobi and related PDEs arise in fields such as control, robotics, differential games, dynamic programming, mesh generation, stochastic differential equations, financial mathematics, and verification. The algorithms in the toolbox can be used in any number of dimensions, although computational cost and visualization difficulty make dimensions four and higher a challenge. All source code for the toolbox is provided as plain text in the MATLAB m-file programming language. The toolbox is designed to allow quick and easy experimentation with level set methods, although it is not by itself a level set tutorial and so should be used in combination with the existing literature.

TR-2004-10 Rendering Color Information Using Haptic Feedback, July 22, 2004 S. Chakrabarti, S. Pramanik, D. Du and R. Paul, 9 pages

This paper investigates a novel approach to rendering color information from pictures as haptic feedback at the fingers. Our approach uses a 1D haptic rotary display to render the color information to the fingers using sinusoidal textures of different frequency and amplitude. We tested 12 subjects on their ability to associate colors laid out in a spatially irregular pattern with haptic feedback displayed to their fingers, with the numbers of color/haptic stimuli pairs presented increasing in successive trials. The experiment results suggest that subjects are able to comfortably learn and distinguish up to 8 color/haptic stimuli pairs based on this particular mapping; and with some effort, many can distinguish as many as 16 pairs. The results also raise key issues for further investigation in subsequent studies including the role of multimodal inputs like audio along with haptics.

TR-2004-11 Index-Trees for Descendant Tree Queries in the Comparison Model, July 27, 2004 Jeremy Barbay, 17 pages

Considering indexes and algorithms to answer XPath queries over XML data, we propose an index structure and a related algorithm, both adapted to the comparison model, where elements can be accessed non-sequentially. The indexing scheme uses classical labelling techniques, but structurally represents the ancestor-descendant relationships of nodes of each type, in order to allow exponential searches. The algorithm performs XPath location steps along the descendant axis, and it generates few intermediate results. The complexity of the algorithm is proved worst-case optimal in an adaptive comparison model where the index is given, and where the instances are grouped by the number of comparisons needed to check their answer.
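
The exponential searches mentioned here are the standard doubling-then-binary-search primitive of the comparison model; the sketch below (a textbook version, not the paper's index) finds the first position at or after `start` whose value is at least a given key, using a number of comparisons logarithmic in the distance to that position:

    from bisect import bisect_left

    def exponential_search(sorted_list, key, start=0):
        """Index of the first element >= key at or after `start`, found by a
        doubling probe followed by binary search over the bracketed range."""
        if start >= len(sorted_list) or sorted_list[start] >= key:
            return start
        step = 1
        while start + step < len(sorted_list) and sorted_list[start + step] < key:
            step *= 2
        lo = start + step // 2 + 1
        hi = min(start + step, len(sorted_list) - 1) + 1
        return bisect_left(sorted_list, key, lo, hi)

    labels = [2, 3, 5, 8, 13, 21, 34, 55, 89]
    print(exponential_search(labels, 20))   # -> 5 (the position of 21)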

TR-2004-12 An Analysis of the Hotspot Diffusion Paradox, July 27, 2004 Jeremy Barbay, 5 pages

Crossover is believed to initiate at specific sites called hotspots, by a combinational-repair mechanism in which the initiating hotspot is replaced by a copy of its homologue. Boulton et al. studied through simulation the effect of this mechanism, and observed in their model that active hotspot alleles are rapidly replaced by inactive alleles. This is paradoxical because active hotspot alleles do not disappear in natural systems. We give a theoretical analysis of this model, which confirms their experimental result, and we argue that they failed to take the benefits of recombination properly into account, because of the optimality of their initial population. On the other hand, we show that even with an initial population of low fitness the model does not sustain the active hotspot alleles. These results suggest that at least one model is wrong, either the one for the recombination of chromosomes, or the one for the diffusion of the hotspot alleles: we suggest another model for the diffusion of hotspot alleles.

TR-2004-13 SQPATH-A Combined SQL-XPATH Query System for RNAML Data, August 16, 2004 Chita C, Patel R and Yang J, 23 pages

RNA secondary structure prediction has become a major bioinformatics research area, since it could be inferred that all functions of a single-stranded RNA are influenced by its secondary structure [29]. Progress in this field has been hindered, among other things, by the lack of a unified repository for RNA informatics data exchange, and by the lack of a standardized file format. We propose to advance the cause for such a centralized RNA database, and to look at what would be the fastest query approach, should one exist: to store the indexes in a relational table, and use SQL to narrow the set of potential answers to only the matching files, prior to performing XPATH on the RNAML file itself, or to store the indexes at the highest (i.e. first) level of an XML file, and use XPATH exclusively. We have found that storing the indexes in a relational table and using both SQL and XPATH is faster by at least one order of magnitude than storing the indexes at the 1st level of an XML file and using XPATH only. Furthermore, the discrepancy between the speeds of the two query methods increases with the number of files. We describe the system we have built to test our hypothesis, our testing procedure and results, and explore avenues that will allow us to generalize our results to other XML databases.

TR-2004-14 Haptic Support for Urgency-Based Turn-Taking, April 25, 2005 Andrew Chan, Joanna McGrenere and Karon MacLean, 10 pages

Real-time collaboration systems that enable distributed access to a shared application often require a turn-taking protocol. Current protocols rely on the visual channel using GUI widgets, and do not support expressions of urgency. We describe a novel urgency-based turn-taking protocol that is mediated through haptics: vibrotactile signals inform users of their current role in the collaboration. For example, a control holder receives different signals according to the urgency with which collaborators request control. In an observational user study we compare three implementations of the protocol: one dominated by haptic signals, one with visual cues alone, and one balancing both modalities. Our results suggest that a modestly-sized set of well-designed haptic stimuli can be learned to a high degree of accuracy in a short time, that the inclusion of haptic stimuli can make turn-taking behavior more equitable, and that the ability to communicate urgency positively impacts collaboration dynamics.

TR-2004-15 Learning and Identifying Haptic Icons under Workload, April 25, 2005 Andrew Chan, Karon MacLean and Joanna McGrenere, 10 pages

This work addresses the use of vibrotactile haptic feedback to transmit background information with variable intrusiveness, when recipients are engrossed in a primary visual and/or auditory task. We describe two studies designed to (a) perceptually optimize a set of vibrotactile "icons" and (b) evaluate users' ability to identify them in the presence of varying degrees of workload. Seven icons learned in approximately 3 minutes were each typically identified within 2.5 s and at 95% accuracy in the absence of workload.

TR-2004-16 GLStereo: Stereo Vision Implemented in Graphics Hardware, October 14, 2004 Dustin Lang and James J. Little, 14 pages

We present an implementation of the standard sum of absolute differences (SAD) stereo disparity algorithm, performing all computation in graphics hardware. To our knowledge, this is the fastest published stereo disparity implementation on commodity hardware. With an inexpensive graphics card, we achieve `raw' SAD performance above 170 MPDS (mega-pixel disparities per second), corresponding to 5x5 neighbourhoods, 640x480 pixel images, 54 disparities, 10 frames per second (fps) (or 320x240 pixels, 96 disparities, 25 fps). The CPU is approximately 90% idle while this computation is being performed. Other authors have presented stereo disparity implementations for graphics hardware. However, we focus on filtering the raw results in order to eliminate unreliable pixels, thereby decreasing the error in the final disparity maps. Since the standard SAD algorithm produces disparity maps with relatively high error rates, such filtering is essential for many applications. We implement shiftable windows, left-right consistency, texture, and disparity smoothness filters, all using graphics hardware. We investigate the accuracy/density tradeoff of the latter three filters using a novel analysis. We find that the left-right consistency and smoothness filters are particularly effective, and using these filters we achieve performance above 110 MPDS: 640x480 pixel images, 36 disparities, 10 frames per second (or 320x240 pixels, 66 disparities, 25 fps). This level of performance demonstrates that graphics cards are powerful co-processors for low-level computer vision tasks.
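
For reference, the computation being accelerated is plain winner-take-all SAD block matching; a CPU baseline can be written in a few lines of numpy (illustrative only, without the shiftable-window, left-right consistency, texture and smoothness filters discussed above):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def sad_disparity(left, right, max_disp, radius=2):
        """Winner-take-all SAD stereo: for every pixel of `left`, choose the
        disparity d in [0, max_disp) minimizing the windowed sum of absolute
        differences against `right` shifted by d pixels."""
        left = np.asarray(left, dtype=float)
        right = np.asarray(right, dtype=float)
        h, w = left.shape
        costs = np.full((max_disp, h, w), np.inf)
        for d in range(max_disp):
            diff = np.abs(left[:, d:] - right[:, :w - d])
            # uniform_filter averages over the window; the argmin is unchanged
            # by the constant scale factor, so averaging stands in for summing.
            costs[d, :, d:] = uniform_filter(diff, size=2 * radius + 1)
        return np.argmin(costs, axis=0)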

TR-2005-01 Fast Implementation of Lemke's Algorithm for Rigid Body Contact Simulation, January 18, 2005 John E. Lloyd, 30 pages

We present a fast method for solving rigid body contact problems with friction, based on optimizations incorporated into Lemke's algorithm for solving linear complementarity problems. These optimizations reduce the expected solution complexity (in the number of contacts $n$) from $O(n^3)$ to nearly $O(nm + m^3)$, where $m$ is the number of bodies in the system. For a fixed $m$ the expected complexity is then close to $O(n)$. By simplifying internal computations our method also improves numerical robustness, and removes the need to explicitly compute the large matrices associated with rigid body contact problems.

TR-2005-02 Probability and Equality: A Probabilistic Model of Identity Uncertainty, April 03, 2006 R. Sharma and David Poole, 12 pages

Identity uncertainty is the task of deciding whether two descriptions correspond to the same object. It is a difficult and important problem in real world data analysis. It occurs whenever objects are not assigned unique identifiers or when those identifiers may not be observed perfectly. Traditional approaches to identity uncertainty assume that the attributes in the descriptions are independent of each other given whether or not the descriptions refer to the same object. However, this assumption is often faulty. For example, in the person identity uncertainty problem -- the problem of deciding whether two descriptions refer to the same person -- the attributes ``date of birth'' and ``last name'' have the same values for twins. In this paper we discuss the identity uncertainty problem in the context of person identity uncertainty. We model the inter-dependence of the attributes and the probabilistic relations between the observed values of attributes and their actual values using a similarity network representation. Our approach allows queries such as ``what is the distribution over the actual names of a person given the names that appear in the description of the person'' or ``what is the probability that two descriptions refer to the same person''. We present results that show that our method outperforms the traditional approach to person identity uncertainty, which considers the attributes as independent of each other.

TR-2005-03 Empirical Testing of Fast Kernel Density Estimation Algorithms, May 19, 2005 Dustin Lang, Mike Klaas and Nando de Freitas, 6 pages

We present results of experiments testing the Fast Gauss Transform, Improved Fast Gauss Transform, and Dual-Tree methods (using $kd$-tree and Anchors Hierarchy data structures) for fast Kernel Density Estimation (KDE). We examine the performance of these methods with respect to data set size, dimension, allowable error, and data set structure (``clumpiness''), measured in terms of CPU time and memory usage. This is the first multi-method comparison in the literature. The results are striking, challenging several claims that are commonly made about these methods. The results are useful for researchers considering fast methods for KDE problems. Along the way, we provide a corrected error bound and a parameter-selection regime for the IFGT algorithm.
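
For reference, the quantity all of these fast methods approximate is the naive $O(NM)$ Gaussian kernel sum. The short NumPy baseline below is an editorial illustration (not the benchmark code used in the report) that makes the target computation concrete.

    import numpy as np

    def naive_gaussian_kde(sources, targets, bandwidth):
        """Exact kernel density estimate: one Gaussian per source, evaluated at every target."""
        # pairwise squared distances, shape (num_targets, num_sources)
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
        weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
        return weights.sum(axis=1) / len(sources)

    # Example: densities = naive_gaussian_kde(data, grid, bandwidth=0.5)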

TR-2005-04 Toward indicative discussion fora summarization, March 07, 2005 Mike Klaas, 10 pages

Summarization of electronic discussion fora is a unique challenge; techniques that work startlingly well on monolithic documents tend to fare poorly in this informal setting. Additionally, conventional techniques ignore much of the structure that could provide valuable features for the summarization task. We present several novel examples of such features, including the catalyst score, which is effective at identifying salient messages without looking at their content. We also describe and evaluate NewsSum, a prototype summarization system that can efficiently generate variable-length summaries of Usenet threads.

TR-2005-06 Fostering Student Learning and Motivation: an Interactive Educational Tool for AI, March 18, 2005 Saleema Amershi, Nicole Arksey, Giuseppe Carenini, Cristina Conati, Alan Mackworth, Heather Maclaren and David Poole, 5 pages

There are inherent challenges in teaching and learning Artificial Intelligence (AI) due to the complex dynamics of the many fundamental AI concepts and algorithms. Interactive visualization tools have the potential to overcome these challenges. However, there are reservations about adopting interactive visualizations, due to mixed results on their pedagogical effectiveness. Previous work has also often failed to directly assess student preferences and motivation. CIspace is a set of nine interactive visualization tools demonstrating fundamental principles in AI. The CIspace tools are currently in use in undergraduate and graduate classrooms at the University of British Columbia and around the world. In this paper, we present two experiments aimed at assessing the effectiveness of one of the tools in terms of knowledge gain and user preference. Our results provide evidence that the tool is as effective as a traditionally accepted form of learning in terms of knowledge gain, and that students significantly prefer to use the tools over traditional forms of study. These results strengthen the case for incorporating CIspace, and other interactive visualizations, into courses.

TR-2005-07 Empirically Efficient Verification for a Class of Infinite-State Systems, March 23, 2005 Jesse Bingham and Alan J. Hu, 19 pages

Well-structured transition systems (WSTS) are a broad and well-studied class of infinite-state systems, for which the problem of verifying the reachability of an upward-closed set of error states is decidable (subject to some technicalities). Recently, Bingham proposed a new algorithm for this problem, but applicable only to the special cases of broadcast protocols and Petri nets. The algorithm exploits finite-state symbolic model checking and was shown to outperform the classical WSTS verification algorithm on a contrived example family of Petri nets. In this work, we generalize the earlier results to handle a larger class of WSTS, which we dub "nicely sliceable", that includes broadcast protocols, Petri nets, context-free grammars, and lossy channel systems. We also add an optimization to the algorithm that accelerates convergence. In addition, we introduce a new reduction that soundly converts the verification of parameterized systems with unbounded conjunctive guards into a verification problem on nicely sliceable WSTS. The reduction is complete if a certain decidable side condition holds. This allows us to access industrially relevant challenge problems from parameterized memory system verification. Our empirical results show that, although our new method performs worse than the classical approach on small Petri net examples, it performs substantially better on the larger examples based on real, parameterized protocols (e.g., German's cache coherence protocol, with data paths).

TR-2005-08 Nonparametric BLOG, April 06, 2005 Peter Carbonetto, Jacek Kisynski, Nando de Freitas and David Poole, 8 pages

The BLOG language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends the expressiveness of the BLOG language by adopting generative processes over function spaces --- known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By exploiting exchangeability, distributions over unknown objects and their attributes are cast as Dirichlet processes, which resolve difficulties in model selection and inference caused by varying numbers of objects. We demonstrate these concepts with applications to air traffic control and citation matching.

TR-2005-09 The Twiddler: A Haptic Teaching Tool: Low-Cost Communication and Mechanical Design, February 18, 2004 Michael Shaver and Karon E. MacLean, 42 pages

The previous haptic interface in the research lab was prohibitively expensive for distribution to a class of students and required a specialized input/output (I/O) board. To solve these problems, a new device was designed with the stipulations that its interface not require an I/O board, that it have half the power of the previous device, and that it cost less than $400 CDN. The resulting design is called the Twiddler. The Twiddler is a single degree of freedom rotary haptic device, made up of an electronic box and an electric DC motor. The electronic box reads the current rotational position of the motor and sends it to the host PC through the parallel port. The algorithm for the output force command as a function of the position is easily accessed and changed on the host PC, making prototyping and development simple. The host sends the command through the parallel port back to the electronic box, where it is converted to a motor driving signal and sent to the motor. This loop runs every millisecond so that reliable haptic forces can be simulated on the rotational axis of the motor. The parts for the Twiddler cost approximately $400 and the peak torque output is approximately 0.04 Nm (or six oz-in). The software, mechanical and electrical designs are freely available for reproduction.
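
The sketch below gives a rough Python rendering of the 1 kHz read-position/compute/write-torque loop described above. Here read_position() and write_torque() are hypothetical stand-ins for the parallel-port I/O of the electronics box, and the sinusoidal detent law is only an example of the kind of force algorithm the host can run.

    import math
    import time

    DETENTS = 12     # detents per revolution (illustrative, not a Twiddler spec)
    GAIN = 0.02      # peak torque in Nm, kept below the ~0.04 Nm limit noted above

    def detent_torque(theta):
        """Sinusoidal torque-versus-angle law that pulls the knob toward the nearest detent."""
        return -GAIN * math.sin(DETENTS * theta)

    def control_loop(read_position, write_torque, period_s=0.001):
        next_tick = time.perf_counter()
        while True:
            theta = read_position()               # knob angle in radians, from the encoder
            write_torque(detent_torque(theta))    # force command sent back to the motor
            next_tick += period_s
            time.sleep(max(0.0, next_tick - time.perf_counter()))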

TR-2005-10 Generalized Constraint-Based Inference, May 11, 2005 Le Chang and Alan K. Mackworth, 21 pages

Constraint-Based Inference (CBI) is a unified framework that subsumes many practical problems in different research communities. These problems include probabilistic inference, decision-making under uncertainty, constraint satisfaction, propositional satisfiability, decoding problems, and possibility inference. Recently, researchers have presented various unified representation and algorithmic frameworks for CBI problems in their fields, based on the increasing awareness that these problems share common features in representation and essentially identical inference approaches. As the first contribution of this paper, we explicitly use the semiring concept to generalize various CBI problems into a single formal representation framework that provides a broader coverage of the problem space based on the synthesis of existing generalized frameworks. Second, the proposed semiring-based unified framework is also a single formal algorithmic framework that provides a broader coverage of both exact and approximate inference algorithms, including variable elimination, junction tree, and loopy message propagation methods. Third, we discuss inference algorithm design and complexity issues. Finally, we present a software toolkit named the Generalized Constraint-Based Inference Toolkit in Java (GCBIJ) as the last contribution of this paper. GCBIJ is the first concrete software toolkit that implements the abstract semiring approach to unify the CBI problem representations and the inference algorithms. The discussion and the experimental results based on GCBIJ show that the generalized CBI framework is a useful tool for both research and applications.
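
To make the semiring abstraction concrete, the toy routine below (an illustration in the spirit of the framework, not GCBIJ code) eliminates one variable from a set of tabular factors; the same routine performs different inferences depending on which "addition" and "multiplication" operators are supplied.

    import itertools

    def eliminate(factors, var, domains, add, mul, zero):
        """Combine every factor mentioning `var` and reduce `var` away under (add, mul)."""
        touching = [f for f in factors if var in f["vars"]]
        if not touching:
            return factors
        rest = [f for f in factors if var not in f["vars"]]
        new_vars = sorted({v for f in touching for v in f["vars"]} - {var})
        table = {}
        for assign in itertools.product(*(domains[v] for v in new_vars)):
            ctx = dict(zip(new_vars, assign))
            total = zero
            for x in domains[var]:
                ctx[var] = x
                combined = touching[0]["table"][tuple(ctx[v] for v in touching[0]["vars"])]
                for f in touching[1:]:
                    combined = mul(combined, f["table"][tuple(ctx[v] for v in f["vars"])])
                total = add(total, combined)   # semiring "summation" over var
            table[assign] = total
        return rest + [{"vars": new_vars, "table": table}]

For instance, calling eliminate with Python's operator.add and operator.mul (and zero 0.0) marginalizes a variable out of a product of probability tables, while add=max, mul=operator.add and zero=-inf performs max-sum optimization; this operator swap is exactly the kind of generalization the semiring framework captures.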

TR-2005-11 A Framework for Multiparty Communication Types, April 28, 2005 Chamath Keppitiyagama and Norman C. Hutchinson, 22 pages

Several multiparty communication paradigms, such as multicast and anycast, have been discussed in the literature and some of them have been used to build applications. There is a vast design space to be explored in implementing these communication paradigms over the wide area Internet. Ideally, application programmers should be able to use these paradigms independent of their implementation details and implementors should be able to explore the design space. However, this is hindered by the lack of three components: a naming system to identify the paradigms, a standard API, and a system to deploy the implementations. We provide a framework to address these problems. The framework includes a model to name the communication paradigms through the notion of communication types. It also provides an API suitable for all communication types. The framework also includes a middleware that facilitates the implementation and deployment of communication types. We have implemented a wide assortment of communication types and we demonstrate their utility and the effectiveness of the framework through some simple example applications. We also show that the overhead imposed by the middleware is minimal and that the framework facilitates the concise implementation of communication types.

TR-2005-12 Role-Based Policies to Control Shared Application Views, May 02, 2005 L. Berry, L. Bartram and K.S. Booth, 24 pages

Collaboration often relies on all group members having a shared view of a single-user application. A common situation is a single active presenter sharing a live view of her workstation screen with a passive audience, using simple hardware-based video signal projection onto a large screen or simple bitmap-based sharing protocols. This offers simplicity and some advantages over more sophisticated software-based replication solutions, but everyone has the exact same view of the application. This conflicts with the presenter's need to keep some information and interaction details private. It also fails to recognize the needs of the passive audience, who may struggle to follow the presentation because of verbosity, display clutter or insufficient familiarity with the application. Views that cater to the different roles of the presenter and the audience can be provided by custom solutions, but these tend to be bound to a particular application. In this paper we describe a general technique and implementation details of a prototype system that allows standardized role-specific views of existing single-user applications and permits additional customization that is application-specific with no change to the application source code. Role-based policies control manipulation and display of shared windows and image buffers produced by the application, providing proactive privacy protection and relaxed verbosity to meet both presenter and audience needs.

TR-2005-13 Perceiving Ordinal Data Haptically Under Workload, May 08, 2005 Anthony Tang, Peter McLachlan, Karen Lowe, Chalapati Rao Saka and Karon MacLean, 8 pages

Visual information overload is a threat to the interpretation of displays presenting large data sets or complex application environments. To combat this problem, researchers have begun to explore how haptic feedback can be used as another means for information transmission. In this paper, we show that people can perceive and accurately process haptically rendered ordinal data while under cognitive workload. We evaluated three haptic models for rendering ordinal data with participants who were performing a taxing visual tracking task. The evaluation demonstrates that information rendered by these models is perceptually available even when users are visually busy. This preliminary research has promising implications for haptic augmentation of visual displays for information visualization.

TR-2005-14 A Generalization of Generalized Arc Consistency: From Constraint Satisfaction to Constraint-Based Inference, May 11, 2005 Le Chang and Alan K. Mackworth, 15 pages

Arc consistency and generalized arc consistency are two of the most important local consistency techniques for binary and non-binary classic constraint satisfaction problems (CSPs). Based on the Semiring CSP and Valued CSP frameworks, arc consistency has also been extended to handle soft constraint satisfaction problems such as fuzzy CSP, probabilistic CSP, max CSP, and weighted CSP. This extension is based on an idempotent or strictly monotonic constraint combination operator. In this paper, we present a weaker condition for applying the generalized arc consistency approach to constraint-based inference problems other than classic and soft CSPs. These problems, including probability inference and maximal likelihood decoding, can be processed using generalized arc consistency enforcing approaches. We also show that, given an additional monotonic condition on the corresponding semiring structure, some constraint-based inference problems can be approximately preprocessed using generalized arc consistency algorithms.
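
For readers who have not seen generalized arc consistency, the routine below shows the classic hard-constraint version of the pruning step (support checking over a non-binary constraint). It is standard textbook material included only to ground the terminology; the report's contribution concerns the weaker semiring conditions under which this style of propagation remains sound beyond classic and soft CSPs.

    import itertools

    def revise(domains, scope, constraint, var):
        """Remove values of `var` that have no supporting tuple of `constraint` over `scope`."""
        others = [v for v in scope if v != var]
        pruned = False
        for val in list(domains[var]):
            has_support = False
            for combo in itertools.product(*(domains[o] for o in others)):
                assignment = dict(zip(others, combo))
                assignment[var] = val
                if constraint(assignment):   # constraint is a predicate over a full assignment
                    has_support = True
                    break
            if not has_support:
                domains[var].remove(val)
                pruned = True
        return pruned   # True if the domain changed, so the caller can re-enqueue affected arcs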

TR-2005-16 A Trust-based Model for Collaborative Intrusion Response, June 02, 2005 Kapil Singh and Norman C. Hutchinson, 6 pages

Intrusion detection systems (IDS) are quickly becoming a standard component of a network security infrastructure. Most IDS developed to date emphasize detection; response is mainly concentrated on blocking a part of the network after an intrusion has been detected. This mechanism can help in temporarily stopping the intrusion, but such a limited response means that the attack costs the attacker nothing. The idea behind our approach is to frustrate the intruder by attacking back. This requires developing a sense of trust in the network for the attacked host and establishing proof of the attack so the attack-back action can be justified. To develop this trust model, we propose a protocol that allows the attacked host to prove to the attacker’s edge router that it has been attacked. The model is quite flexible, and based on the level of trust developed for the host, an appropriate countermeasure is taken. Besides attack-back, other possible responses include blocking part of the network and using network puzzles to limit the attacker’s access to network resources. We believe that the attack-back approach would demoralize novice attackers, and even expert attackers will think twice before attacking again. In addition, the protocol prevents a host from pretending that it has been attacked. We are building a system that can handle a majority of known attacks (signature-based). We are also exploring the idea of adding a third trusted party into the system in order to provide countermeasure action for novel attacks (anomaly-based).

TR-2005-17 Improving Backbone Routing for Transient Communication in Mobile Ad Hoc Networks, July 04, 2005 Kan Cai, Michael J. Feeley and Norman C. Hutchinson, 12 pages

(Abstract not available on-line)

TR-2005-18 Hot Coupling: A Particle Approach to Inference and Normalization on Pairwise Undirected Graphs of Arbitrary Topology, July 12, 2005 Firas Hamze and Nando de Freitas, 8 pages

This paper presents a new sampling algorithm for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions which get closer to the desired one. While the idea of using ``tempered'' proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs.

TR-2005-19 A Logic and Decision Procedure for Predicate Abstraction of Heap-Manipulating Programs, September 19, 2005 Jesse Bingham and Zvonimir Rakamaric, 28 pages

An important and ubiquitous class of programs are heap-manipulating programs (HMP), which manipulate unbounded linked data structures by following pointers and updating links. Predicate abstraction has proved to be an invaluable technique in the field of software model checking; this technique relies on an efficient decision procedure for the underlying logic. The expression and proof of many interesting HMP safety properties require transitive closure predicates; such predicates express that some node can be reached from another node by following a sequence of (zero or more) links in the data structure. Unfortunately, adding support for transitive closure often yields undecidability, so one must be careful in defining such a logic. Our primary contributions are the definition of a simple transitive closure logic for use in predicate abstraction of HMPs, and a decision procedure for this logic. Through several experimental examples, we demonstrate that our logic is expressive enough to prove interesting properties with predicate abstraction, and that our decision procedure provides us with both a time and space advantage over previous approaches.

TR-2005-20 Coping With an Open Bug Repository, August 26, 2005 John Anvik, Lyndon Hiew and Gail C. Murphy, 5 pages

Most open source software development projects include an open bug repository---one to which users of the software can gain full access---that is used to report and track problems with, and potential enhancements to, the software system. There are several potential advantages to the use of an open bug repository: more problems with the system might be identified because of the relative ease of reporting bugs, more problems might be fixed because more developers might engage in problem solving, and developers and users can engage in focused conversations about the bugs, allowing users input into the direction of the system. However, there are also some potential disadvantages such as the possibility that developers must process irrelevant bugs that reduce their productivity. Despite the rise in use of open bug repositories, there is little data about what is stored inside these repositories and how they are used. In this paper, we provide an initial characterization of two open bug repositories from the Eclipse and Firefox projects, describe the duplicate bug and bug triage problems that arise with these open bug repositories, and discuss how we are applying machine learning technology to help automate these processes.

TR-2005-21 Theory, Software, and Psychophysical Studies for the Tactile Handheld Miniature Bimodal Device, October 23, 2005 Shannon H. Little, 35 pages

The work described in this report began as an exploratory effort to determine the basic principles and properties of the THMB device. The result of this experimentation is a description of the basic configuration and perceptual limits of the THMB device, terminology and theory to describe the output of the device, libraries that implement this theory, software that uses these libraries to interact with the THMB device, and two user studies, a speed study and a multi-dimensional scaling (MDS) study, that attempt to relate the terminology and theories developed to human perceptual limitations and capabilities.

TR-2005-22 Go with the Flow: How Users Monitor Incoming Email, September 20, 2005 Anthony Tang, Nelson Siu, Lee Iverson and Sidney Fels, 4 pages

We have only a limited understanding of how users continuously monitor and manage their incoming email flow. A series of day-long field observations uncovered three distinct strategies people use to handle their incoming email flow: glance, scan, and defer. Consequently, supporting email flow involves providing simplified views of the email inbox and mechanisms to support the revisitation of overflow messages.

TR-2005-23 Remaining Oriented During Software Development Tasks: An Exploratory Field Study, July 17, 2005 Brian S. de Alwis and Gail C. Murphy, 24 pages

Humans have been observed to become \emph{disoriented} when using menu or hypertext systems. Similar phenomena have been reported by software developers, often manifesting as a feeling of \emph{lostness} while exploring a software system. To investigate this phenomenon in the context of software development, we undertook a field study, observing eight developers of the open-source Eclipse project for two hours each as they conducted their normal development work. We also interviewed two other developers using the same tools but who were working on a closed-source system. The developers did report some instances of disorientation, but it was a rare occurrence; rather, we observed strategies the developers used to remain \emph{oriented}. Based on the study results, we hypothesize factors that contribute to disorientation during programming tasks as well as factors that contribute to remaining oriented. Our results can help encode best practices for code navigation, can help inform the development of tools, and can help in the further study of orientation and disorientation in software development.

TR-2005-24 A Formal Mathematical Framework for Modeling Probabilistic Hybrid Systems, October 12, 2005 Robert St-Aubin and Alan K. Mackworth, 22 pages

The development of autonomous agents, such as mobile robots and software agents, has generated considerable research in recent years. Robotic systems, which are usually built from a mixture of continuous (analog) and discrete (digital) components, are often referred to as hybrid dynamical systems. Traditional approaches to real-time hybrid systems usually define behaviors purely in terms of determinism or sometimes non-determinism. However, this is insufficient as real-time dynamical systems very often exhibit uncertain behaviour. To address this issue, we develop a semantic model, Probabilistic Constraint Nets (PCN), for probabilistic hybrid systems. PCN captures the most general structure of dynamic systems, allowing systems with discrete and continuous time/variables, synchronous as well as asynchronous event structures and uncertain dynamics to be modeled in a unitary framework. Based on a formal mathematical paradigm uniting abstract algebra, topology and measure theory, PCN provides a rigorous formal programming semantics for the design of hybrid real-time embedded systems exhibiting uncertainty.

TR-2005-25 Visual Mining of Power Sets with Large Alphabets, December 25, 2005 Tamara Munzner, Qiang Kong, Raymond T. Ng, Jordan Lee, Janek Klawe, Dragana Radulovic and Carson K. Leung, 10 pages

We present the PowerSetViewer visualization system for the lattice-based mining of power sets. Searching for itemsets within the power set of a universe occurs in many large dataset knowledge discovery contexts. Using a spatial layout based on a power set provides a unified visual framework at three different levels: data mining on the filtered dataset, browsing the entire dataset, and comparing multiple datasets sharing the same alphabet. The features of our system allow users to find appropriate parameter settings for data mining algorithms through lightweight visual experimentation showing partial results. We use dynamic constrained frequent set mining as a concrete case study to showcase the utility of the system. The key challenge for spatial layouts based on power set structure is handling large alphabets, because the size of the power set grows exponentially with the size of the alphabet. We present scalable algorithms for enumerating and displaying datasets containing between 1.5 and 7 million itemsets, and alphabet sizes of over 40,000.

TR-2005-26 Material Aware Mesh Deformations, November 07, 2005 Tiberiu Popa, Dan Julius and Alla Sheffer, 11 pages

We propose a novel approach to mesh deformation centered around material properties. Using these, we allow the user to achieve meaningful deformations easily with a small set of anchor triangles. Material properties, as defined here, are stiffness coefficients assigned to the mesh triangles and edges. We use these to describe the bending and shearing flexibility of the surface. By adjusting these coefficients, we provide fine continuous control over the resulting deformations. These material properties can be user-driven, using a simple paint-like interface to define them, or data-driven, inferred from a sequence of sample poses. Also, a combination of the two, in which the user refines the resulting data-driven materials, may be used to achieve more controlled results. As an alternative to skeleton-based deformation methods, our method is both simpler and more powerful, allowing various degrees of stiffness along the surface without requiring a skeleton. Moreover, our method naturally handles non-articulated models, which are not suitable for skeleton deformations. The formulation is simple, requiring only two linear systems whose coefficient matrices can be pre-inverted, thus allowing the algorithm to work at interactive rates on large models. The resulting deformations are as-rigid-as-possible, subject to the material properties, so mesh details are well preserved, as seen in our results.

TR-2005-27 The Role of Prototyping Tools for Haptic Behavior Design, November 14, 2005 Colin Swindells, Evgeny Maksakov, Karon E. MacLean and Victor Chung, 8 pages

We describe key affordances required by tools for developing haptic behaviors. Haptic icon design involves the envisioning, expression and iterative modification of haptic behavior representations. These behaviors are then rendered on a haptic device. For example, a sinusoidal force vs. position representation rendered on a haptic knob would produce the feeling of detents. Our contribution is twofold. We introduce a custom haptic icon prototyper that includes novel interaction features. We then use the lessons learnt from its development plus our experiences with many haptic devices to present and argue high-level design choices for such prototyping tools in general.

TR-2005-28 Co-locating Haptic and Graphic Feedback in Manual Controls, November 16, 2005 Colin Swindells, Mario J. Enriquez, Karon E. MacLean and Kellogg S. Booth, 4 pages

Based on data showing performance benefits separately for programmable versus static feedback in manual controls and for co-location of dynamic haptic/graphic controls, we hypothesized that combining these benefits would further improve context- and task-specific feedback. Two application prototypes created to explore this premise for (a) streaming media navigation and (b) information flow demonstrate the potential affordances and performance benefits for integration in manual controls. Finally, we describe two practical fabrication methods: embedding a haptic controller into an active graphic display panel, and rear projection onto a passive surface with an embedded haptic control.

TR-2005-29 Building a Haptic Language: Communication Through Touch, November 22, 2005 K. Maclean, J. Pasquero and J. Smith, 16 pages

Designing haptic signals to enrich technology interactions requires a clear understanding of the task, the user and the intricate affordances of touch. This is especially true when the haptics are not implemented as direct renderings of real world forces and textures, but as new interactions designed to convey meaning in new physical ways and support communication. The overall goal of our group's research is to provide the foundations for haptic interactions that are simple, usable and intuitive and that fit within the context of the user's life. In this paper, we describe three avenues through which our group is exploring and building a haptic language that will effectively support communication: signaling and monitoring, expressive communication and shared control. We use scenarios to illustrate where this approach could take us, and emphasize the importance of process and appropriate tools and representations.

TR-2005-30 TopoLayout: Graph Layout by Topological Features, December 02, 2005 D. Archambault, T. Munzner and D. Auber, 9 pages

We describe TopoLayout, a novel framework to draw undirected graphs based on the topological features they contain. Topological features are detected recursively, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. The final layout is drawn using an appropriate algorithm for each topological feature. A more general goal is to partition the graph into features for which there exist good layout algorithms, so in addition to strictly topological features such as trees, connected components, biconnected components, and clusters, we have a detector function to determine when High-Dimensional Embedder is an appropriate choice for subgraph layout. Our framework is the first multi-level approach to provide a phase for reducing the number of node-edge and edge-edge crossings and a phase to eliminate all node-node overlaps. The runtime and layout visual quality of TopoLayout depend on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multi-level algorithms on ten datasets with a range of connectivities and sizes, including real-world graphs of web sites, social networks, and Internet routers. TopoLayout is frequently faster or produces results of higher visual quality, and sometimes both. For example, the router dataset of about 140,000 nodes, which contains many large tree subgraphs, is drawn an order of magnitude faster with improved visual quality.
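
The short NetworkX fragment below is an editorial illustration (not TopoLayout code) of the first decomposition pass such a framework performs: split the graph into connected components, label tree-shaped components, and break the remainder into biconnected pieces. The real system detects further feature types (clusters, HDE-suitable subgraphs) and recurses, collapsing each feature into a metanode.

    import networkx as nx

    def detect_basic_features(g):
        """One level of feature detection: connected components, trees, biconnected components."""
        features = []
        for nodes in nx.connected_components(g):
            component = g.subgraph(nodes)
            if component.number_of_edges() == component.number_of_nodes() - 1:
                features.append(("tree", set(nodes)))        # connected and |E| = |V| - 1
            else:
                for block in nx.biconnected_components(component):
                    features.append(("biconnected", set(block)))
        return features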

TR-2005-31 Exact regularization of linear programs, December 26, 2005 Michael P. Friedlander, 10 pages

We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed---differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. 745-752, 1979). We show that there always exist positive values of the regularization parameter such that a solution of the regularized problem simultaneously minimizes the original LP and minimizes the regularization function over the original solution set. We illustrate the main result using the nondifferentiable L1 regularization function on a set of degenerate LPs. Numerical results demonstrate how such an approach yields sparse solutions from the application of an interior-point method.
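
In the notation commonly used for this problem (paraphrased here, not quoted from the report), the regularized program is $\min_x \; c^{T}x + \delta\,\phi(x)$ subject to $Ax = b$, $x \geq 0$, where $\phi$ is the convex regularization function. Exactness means that for all sufficiently small $\delta > 0$ any solution of this problem also solves the original LP $\min_x \; c^{T}x$ over the same constraints, and in addition minimizes $\phi$ over the LP's solution set; with $\phi(x) = \|x\|_1$ this is the setting of the numerical experiments mentioned above.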

TR-2006-01 Preconditioners for the discretized time-harmonic Maxwell equations in mixed form, June 9, 2005 Chen Greif and Dominik Sch\"otzau, 17 pages

We introduce a new preconditioning technique for iteratively solving linear systems arising from finite element discretizations of the mixed formulation of the time-harmonic Maxwell equations. The preconditioners are based on discrete regularization using the scalar Laplacian, and are motivated by spectral equivalence properties of the discrete operators. The analytical observations are accompanied by numerical results that demonstrate the scalability of the proposed approach.

TR-2006-02 A Better Logic and Decision Procedure for Predicate Abstraction of Heap-Manipulating Programs, January 30, 2006 Zvonimir Rakamaric, Jesse Bingham and Alan J. Hu, 43 pages

Heap-manipulating programs (HMP), which manipulate unbounded linked data structures via pointers, are a major frontier for software model checking. In recent work, we proposed a small logic and inference-rule-based decision procedure and demonstrated their potential by verifying, via predicate abstraction, some simple HMPs. In this work, we generalize and improve our previous results to be practically useful: we allow more than a single pointer field, we permit updating the data stored in heap nodes, we add new primitives and inference rules for cyclic structures, and we greatly improve the performance of our implementation. Experimentally, we are able to verify many more HMP examples, including three small container functions from the Linux kernel. On the theoretical front, we prove NP-hardness for a small fragment of our logic, completeness of our inference rules for a large fragment, and soundness for the full logic.

TR-2006-03 Finding a Hamiltonian cycle in the dual graph of Right-Triangulations, January 27, 2006 Viann W. Chan and William S. Evans, 16 pages

In this paper, we describe a method for refining a class of balanced bintree triangulations which maintains a Hamiltonian cycle in the dual graph. We also introduce a method for building refinable balanced bintree triangulations using two types of tiles, a diamond tile and a triangular tile.

TR-2006-04 Presenter-on-Paper: the Camera Phone as an In-Class Educational Technology Tool, March 14, 2006 W. Tian Lim and Steven A. Wolfman, 28 pages

This report documents development work on the "Presenter-on-Paper" project for integrating cell phone cameras as a new channel for communication in the classroom. We present key project design decisions and their rationales. Our target audience includes those who are interested in replicating or extending our work, and therefore would benefit from lessons learned.

TR-2006-05 “What I Want, Where I Want:” Reference Material Use in Tabletop Work, March 19, 2006 A. Tang and S. Fels, 4 pages

Usable digital tabletop design hinges on a deep understanding of people’s natural work practices over traditional tables. We present an ethnographic study of engineering project teams that highlights the use of reference material—artifacts not the primary product or focus of work activity, but referred to or inspected while the work activity is carried out—in tabletop work. We show how the variety of reference material forms and their role in tabletop work suggest that digital tabletop systems must recognize external artifacts and should allow reconfiguration of external work surfaces and information.

TR-2006-06 Finding local RNA motifs using covariance models, April 03, 2006 Sohrab P. Shah and Anne E. Condon, 29 pages

We present DISCO, an algorithm to detect conserved motifs in sets of unaligned RNA sequences. Our algorithm uses covariance models (CM) to represent motifs. We introduce a novel approach to initialise a CM using pairwise and multiple sequence alignment. The CM is then iteratively refined. We tested our algorithm on 26 data sets derived from Rfam seed alignments of microRNA (miRNA) precursors and conserved elements in the untranslated regions of mRNAs (UTR elements). Our algorithm outperformed RNAProfile and FOLDALIGN in measures of sensitivity and positive predictive value, although the running time of RNAProfile was considerably faster. Our algorithm's accuracy was unaffected by properties of the input data, and the algorithm performed consistently under different settings of key parameters. The running time of DISCO is $O(N^2L^2W^2 + NL^3)$ where $W$ is the approximate width of the motif, $L$ is the length of the longest sequence in the input data, and $N$ is the number of sequences. Supplemental material is available at: http://www.cs.ubc.ca/\~{}sshah/disco.

TR-2006-08 Local Consistency in Junction Graphs for Constraint-Based Inference, March 28, 2006 L. Chang and A. K. Mackworth, 11 pages

The concept of local consistency plays a central role in constraint satisfaction and has been extended to handle general constraint-based inference (CBI) problems. We propose a family of novel generalized local consistency concepts for the junction graph representation of CBI problems. These concepts are based on a general condition that depends only on the existence and property of the multiplicative absorbing element and does not depend on the other semiring properties of CBI problems. We present several local consistency enforcing algorithms and their approximation variants. Theoretical complexity analyses and empirical experimental results for the application of these algorithms to both MaxCSP and probability inference are given. We also discuss the relationship between these local consistency concepts and message passing schemes such as junction tree algorithms and loopy message propagation.

TR-2006-09 Shuffler: Modeling with Interchangeable Parts, March 31, 2006 V. Kraevoy, D. Julius and A. Sheffer, 9 pages

Many man made and natural objects are easily classified into families of models with a similar part-based structure. Example families include quadrupeds, humans, chairs, and airplanes. In this paper we present Shuffler – a modeling system that automates the process of creating new models by composing interchangeable parts from different existing models within each family. Our system does not require the users to perform any geometric operations; they simply select which parts should come from which input model, and the system composes the parts together. To enable this modeling paradigm, Shuffler precomputes the interchangeable parts across each input family of models by first segmenting the models into meaningful components and then computing correspondences between them. We introduce two new algorithms to perform the segmentation and to establish part correspondences that can also be used for many other applications in computer graphics.

TR-2006-10 ArtiSynth: A Biomechanical Simulation Platform for the Vocal Tract and Upper Airway, March 01, 2006 Sidney Fels, Florian Vogt, Kees van den Doel, John E. Lloyd, Ian Stavness and Eric Vatikiotis-Bateson, 7 pages

We describe ArtiSynth, a 3D biomechanical simulation platform directed toward modeling the vocal tract and upper airway. It provides an open-source environment in which researchers can create and interconnect various kinds of dynamic and parametric models to form a complete integrated biomechanical system which is capable of articulatory speech synthesis. An interactive graphical Timeline runs the simulation and allows the temporal arrangement of input/output channels to control or observe properties of the model's components. Library support is available for particle-spring and rigid body systems, finite element models, and spline-based curves and surfaces. To date, these have been used to create a dynamic muscle-based model of the jaw, a deformable tongue model, a deformable airway, and a linear acoustics model, which have been connected together to form a complete vocal tract that produces speech and is drivable both by data and by dynamics.

TR-2006-11 Omnidirectional Humanoid Balance Control: Multiple Strategies for Reacting to a Push, June 07, 2006 KangKang Yin and Michiel van de Panne, 6 pages

We develop and evaluate humanoid balance controllers that can recover from unexpected external perturbations of varying magnitudes in arbitrary directions. Balance strategies explored include ankle and hip strategies for in-place balance, as well as single-step, double-step, and multi-step balance recovery. Simulation results are provided for a 30 DOF humanoid.

TR-2006-14 Captured Dynamics Data of 5 Mechanical Knobs, August 11, 2006 Colin Swindells and Karon E. MacLean, 36 pages

Torque models are presented for five mechanical knobs that were characterized using a custom built rotary haptic camera. Non-linear least squares fitting was used to estimate model parameters for position, velocity, and acceleration model parts. Additionally, two simulated knobs were modeled to test the accuracy of the characterization algorithm.

TR-2006-15 Integrating Gaussian Processes with Word-Sequence Kernels for Bayesian Text Categorization, August 12, 2006 Maryam Mahdaviani, Sara Forghanizadeh and Giuseppe Carenini, 8 pages

We address the problem of multi-labelled text classification using word-sequence kernels. However, rather than applying them with a Support Vector Machine as in previous work, we choose a classifier based on Gaussian Processes. This is a probabilistic non-parametric method that retains sound probabilistic semantics while overcoming the limitations of parametric methods. We present an empirical evaluation of our approach on the standard Reuters-21578 dataset.

TR-2006-16 A Preconditioner For Linear Systems Arising From Interior-Point Optimization Methods, August 29, 2006 Tim Rees and Chen Greif, 25 pages

We investigate a preconditioning technique applied to the problem of solving linear systems arising from primal-dual interior point algorithms in linear and quadratic programming. The preconditioner has the attractive property of improved eigenvalue clustering with increased ill-conditioning of the (1,1) block of the saddle point matrix. We analyze its spectral characteristics, utilizing projections onto the null space of the constraint matrix, and demonstrate performance of the preconditioner on problems from the NETLIB and CUTEr test suites. The numerical experiments include results based on inexact inner iterations, and comparisons of the proposed techniques with constraint preconditioners.

TR-2006-18 Gradient Projection for General Quadratic Programs (Replaced by TR-2007-16), September 13, 2006 Michael P. Friedlander and Sven Leyffer, 28 pages

We present a hybrid algorithm for solving large-scale quadratic programs (QPs) that is based on a combination of techniques from gradient projection, augmented Lagrangian, and filter methods. The resulting algorithm is globally and finitely convergent. The method efficiently accommodates a matrix-free implementation and is suitable for large-scale problems with many degrees of freedom. The algorithm is based on two main phases. First, gradient projection iterations approximately minimize the augmented Lagrangian function and provide an estimate of the optimal active set. Second, an equality-constrained QP is approximately minimized on this subspace in order to generate a second-order search direction. Numerical experiments on a subset of the CUTEr QP test problems demonstrate the effectiveness of the proposed approach.
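
As a minimal illustration of the first phase only, the sketch below runs projected gradient steps on a bound-constrained quadratic $\min_x \frac{1}{2}x^{T}Hx + c^{T}x$, $l \le x \le u$, and reads off an active-set estimate. It is an editorial sketch with arbitrary step-size and iteration choices; it omits the augmented-Lagrangian treatment of general constraints and the second-order subspace phase of the actual algorithm.

    import numpy as np

    def projected_gradient(H, c, l, u, x0, steps=200):
        """Gradient projection for min 0.5*x'Hx + c'x subject to l <= x <= u."""
        x = np.clip(x0, l, u)
        step = 1.0 / max(np.linalg.norm(H, 2), 1e-12)   # 1 / Lipschitz constant of the gradient
        for _ in range(steps):
            grad = H @ x + c
            x = np.clip(x - step * grad, l, u)           # gradient step followed by projection
        active = np.isclose(x, l) | np.isclose(x, u)     # estimate of the optimal active set
        return x, active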

TR-2006-19 Routing Transient Traffic in Mobile Ad Hoc Networks, September 25, 2006 Kan Cai, Michael J. Feeley and Norman C. Hutchinson, 14 pages

Recent research shows that the traffic in public wireless networks is mostly transient and bursty. There is good reason to believe that ad-hoc traffic will follow the same pattern as its popularity grows. Unfortunately transient traffic generates route discoveries much more frequently than the well-studied long-term, constant-bit-rate traffic, causing network congestion problems for existing routing protocols. This paper describes the design of a new routing algorithm, called {\small ECBR}, that uses hybrid backbone routing in a manner that is well suited to workloads that include transient traffic. Our simulation results show that {\small ECBR} outperforms one of the main reactive algorithms (i.e., {\small DSR}). We also explain three key features of our algorithm and demonstrate their roles in substantially improving the performance compared to existing backbone routing techniques.

TR-2006-20 Understanding 802.11 Performance for Two Competing Flows (Replaced by TR-2007-09), October 12, 2006 Kan Cai, Michael J. Feeley and George Sharath J., 9 pages

TR-2006-21 Computing nonnegative tensor factorizations, October 19, 2006 (Revised October 7, 2007) Michael P. Friedlander and Kathrin Hatz, 16 pages

Nonnegative tensor factorization (NTF) is a technique for computing a parts-based representation of high-dimensional data. NTF excels at exposing latent structures in datasets, and at finding good low-rank approximations to the data. We describe an approach for computing the NTF of a dataset that relies only on iterative linear-algebra techniques and that is comparable in cost to the nonnegative matrix factorization. (The better-known nonnegative matrix factorization is a special case of NTF and is also handled by our implementation.) Some important features of our implementation include mechanisms for encouraging sparse factors and for ensuring that they are equilibrated in norm. The complete Matlab software package is available under the GPL license.
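
As a point of reference for the matrix special case mentioned above, the snippet below implements the standard Lee-Seung multiplicative updates for nonnegative matrix factorization. It is generic textbook code, not the report's implementation, which additionally handles tensors, sparsity-encouraging terms and norm equilibration of the factors.

    import numpy as np

    def nmf(V, rank, iters=200, eps=1e-9):
        """Approximate a nonnegative matrix V by W @ H with W, H >= 0 (Frobenius objective)."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, rank)) + eps
        H = rng.random((rank, n)) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update keeps H nonnegative
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H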

TR-2006-23 Comparing Forward and Backward Reachability as Tools for Safety Analysis, December 19, 2006 Ian M. Mitchell, 24 pages

Using only the existence and uniqueness of trajectories for a generic dynamic system with inputs, we define and examine eight types of forward and backward reachability constructs. If the input is treated in a worst-case fashion, any forward or backward reach set or tube can be used for safety analysis, but if the input is treated in a best-case fashion only the backward reach tube always provides the correct results. Fortunately, forward and backward algorithms can be exchanged if well-posed reverse time trajectories can be defined. Unfortunately, backward reachability constructs are more likely to suffer from numerical stability issues, especially in systems with significant contraction---the very systems where forward simulation and reachability are most effective.

TR-2006-26 Exact regularization of convex programs, November 18, 2006 Michael P. Friedlander and Paul Tseng, 25 pages

The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian (1991) involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas (1975) and by Bertsekas, Nedi\'c, and Ozdaglar (2003). When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a ``weak sharp minimum'' is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the L1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of L1 regularization in finding sparse solutions.

TR-2006-27 Fast Marching Methods for a Class of Anisotropic Stationary Hamilton-Jacobi Equations, January 16, 2007 Ken Alton and Ian M. Mitchell, 38 pages

The Fast Marching Method (FMM) has proved to be a very efficient algorithm for solving the isotropic Eikonal equation. Because it is a minor modification of Dijkstra’s algorithm for finding the shortest path through a discrete graph, FMM is also easy to implement. In this paper we describe a new class of Hamilton-Jacobi (HJ) PDEs with axis-aligned anisotropy which satisfy a causality condition for standard finite difference schemes on orthogonal grids and can hence be solved using the FMM; the only modification required to the algorithm is in the local update equation for a node. Since our class of HJ PDEs and grids permit asymmetries, we also examine some methods of improving the efficiency of the local update that do not require symmetric grids and PDEs. This class of HJ PDEs has applications in robotic path planning, and a brief example is included. In support of this and similar applications, we also include explicit update formulas for variations of the Eikonal equation that use the Manhattan, Euclidean and infinity norms on orthogonal grids of arbitrary dimension and with variable node spacing.
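
The sketch below shows the Dijkstra-like skeleton of the Fast Marching Method for the isotropic Eikonal equation $|\nabla u| = 1/f$ on a uniform 2D grid, using the standard first-order upwind local update. It is included only to make the structure of the algorithm concrete; the report's contribution is precisely the class of anisotropic local updates that can replace the one used here.

    import heapq
    import numpy as np

    def fast_march(speed, sources, h=1.0):
        """Solve |grad u| = 1/speed with u = 0 on the source cells (first-order upwind FMM)."""
        u = np.full(speed.shape, np.inf)
        accepted = np.zeros(speed.shape, dtype=bool)
        heap = []
        for s in sources:
            u[s] = 0.0
            heapq.heappush(heap, (0.0, s))
        while heap:
            _, (i, j) = heapq.heappop(heap)
            if accepted[i, j]:
                continue
            accepted[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < u.shape[0] and 0 <= nj < u.shape[1]) or accepted[ni, nj]:
                    continue
                # upwind neighbour values along each axis
                ux = min(u[ni - 1, nj] if ni > 0 else np.inf,
                         u[ni + 1, nj] if ni + 1 < u.shape[0] else np.inf)
                uy = min(u[ni, nj - 1] if nj > 0 else np.inf,
                         u[ni, nj + 1] if nj + 1 < u.shape[1] else np.inf)
                a, b = sorted((ux, uy))
                hf = h / speed[ni, nj]
                if b - a >= hf:                       # only the smaller neighbour contributes
                    new = a + hf
                else:                                 # two-sided quadratic update
                    new = 0.5 * (a + b + np.sqrt(2.0 * hf * hf - (b - a) ** 2))
                if new < u[ni, nj]:
                    u[ni, nj] = new
                    heapq.heappush(heap, (new, (ni, nj)))
        return u

    # Example: arrival = fast_march(np.ones((64, 64)), sources=[(0, 0)])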

TR-2006-28 Highly Efficient Flooding in Mobile Ad Hoc Networks, December 20, 2006 Majid Khabbazian and Vijay K. Bhargava, 12 pages

This paper presents two efficient flooding algorithms based on 1-hop information. In the first part of the paper, we consider sender-based flooding algorithms, specifically the algorithm proposed by Liu et al. In their paper, Liu et al. propose a sender-based flooding algorithm that can achieve local optimality by selecting the minimum number of forwarding nodes in the lowest computational time complexity O(n log n), where $n$ is the number of neighbors. We show that this optimality only holds for a subclass of sender-based algorithms. We propose an efficient sender-based flooding algorithm based on 1-hop information that reduces the time complexity of computing forwarding nodes to O(n). In Liu's algorithm, n nodes are selected to forward the message in the worst case, whereas in our proposed algorithm, the number of forwarding nodes in the worst case is 11. In the second part of the paper we propose a simple and highly efficient receiver-based flooding algorithm. When nodes are randomly distributed, we prove that the probability of two neighbor nodes broadcasting the same message exponentially decreases when the distance between them decreases or when the node density increases. Using simulation, we confirm these results and show that the number of broadcasts in our proposed receiver-based flooding algorithm can be even less than one of the best-known approximations for the minimum number of required broadcasts.

TR-2006-29 Cognitive Principles for Information Management: The Principles of Mnemonic Associative Knowledge, August 31, 2006 Holger Hoos, Michael Huggett and Ron Rensink, 54 pages

Information management systems improve the retention of information in large collections. As such they act as memory prostheses, implying an ideal basis in human memory models. Since humans process information by association, and situate it in the context of space and time, systems should maximize their effectiveness by mimicking these functions. Since human attentional capacity is limited, systems should scaffold cognitive efforts in a comprehensible manner. We propose the Principles of Mnemonic Associative Knowledge (P-MAK), which describes a framework for semantically identifying, organizing, and retrieving information, and for encoding episodic events by time and stimuli. Inspired by prominent human memory models, we propose associative networks as a preferred representation. Networks are ideal for their parsimony, flexibility, and ease of inspection. Networks also possess topological properties—such as clusters, hubs, and the small world—that aid analysis and navigation in an information space. Our cognitive perspective addresses fundamental problems faced by information management systems, in particular the retrieval of related items and the representation of context. We present evidence from neuroscience and memory research in support of this approach, and discuss the implications of systems design within the constraints of P-MAK’s principles, using text documents as an illustrative semantic domain.

TR-2007-02 Discussion of "The Dantzig Selector" by Candes and Tao, January 30, 2007 Michael P. Friedlander and Michael A. Saunders, 7 pages

(Abstract not available on-line)

TR-2007-03 Relationships Between Human and Automated System Identification of Physical Controls, March 01, 2007 Colin Swindells, Karon E. MacLean and Kellogg S. Booth, 20 pages

Active haptic renderings need not only mimic the physical characteristics of a mechanical control such as a knob or slider. They must also elicit a comparable subjective experience from a human who uses the physical control. We compare the results of automated and human captures of 4 salient static and dynamic physical parameters for 5 mechanical test knobs. The automated captures were accomplished using our custom rotary Haptic Camera tool. Human captures were accomplished through user studies in which users matched active haptic renderings to passive mechanical test knobs by adjusting the parameters of the model for the active knob. Both quantitative and qualitative results from the experiments support the hypothesis that the renderings evoke a subjective experience similar to the mechanical knob.

TR-2007-04 On Solving General State-Space Sequential Decision Problems using Inference Algorithms, March 08, 2007 M. Hoffman, A. Doucet, N. de Freitas and A. Jasra, 8 pages

A recently proposed formulation of the stochastic planning and control problem as one of parameter estimation for suitable artificial statistical models has led to the adoption of inference algorithms for this notoriously hard problem. At the algorithmic level, the focus has been on developing Expectation-Maximization (EM) algorithms. For example, Toussaint et al (2006) uses EM with optimal smoothing in the E step to solve finite state-space Markov Decision Processes. In this paper, we extend this EM approach in two directions. First, we derive a non-trivial EM algorithm for linear Gaussian models where the reward function is represented by a mixture of Gaussians, as opposed to the less flexible classical single quadratic function. Second, in order to treat arbitrary continuous state-space models, we present an EM algorithm with particle smoothing. However, by making the crucial observation that the stochastic control problem can be reinterpreted as one of trans-dimensional inference, we are able to propose a novel reversible jump Markov chain Monte Carlo (MCMC) algorithm that is more efficient than its smoothing counterparts. Moreover, this observation also enables us to design an alternative full Bayesian approach for policy search, which can be implemented using a single MCMC run.

TR-2007-06 Imaging and 3D Tomographic Reconstruction of Time-varying, Inhomogeneous Refractive Index Fields, April 03, 2007 B. Atcheson, I. Ihrke, D. Bradley, W. Heidrich, M. Magnor and HP. Seidel, 9 pages

We present a technique for 2D imaging and 3D tomographic reconstruction of time-varying, inhomogeneous refractive index fields. Our method can be used to perform three-dimensional reconstruction of phenomena such as gas plumes or liquid mixing. We can also use the 2D imaging results of such time-varying phenomena to render environment mattes and caustics. To achieve these results, we improve a recent fluid imaging technique called Background Oriented Schlieren imaging, and develop a novel theory for tomographic reconstructions from Schlieren images based on first principles of optics. We demonstrate our approach with two different measurement setups, and discuss example applications such as measuring the heat and density distribution in gas flows.

TR-2007-07 Interplay of Tactile and Visual Guidance Cues under Multimodal Workload, April 04, 2007 M. J. Enriquez, K. E. MacLean and H. Neilson, 19 pages

Modern user interfaces – computerized, complex and time-critical – increasingly support users who multi-task; yet to do this well, we need a better understanding of how computer-user communication degrades with demand on user attention, and the benefits and risks of introducing new display modalities into high-demand environments. Touch can be a natural and intuitive locus of information exchange and is an obvious candidate for offloading visual and/or auditory channels. In this study we compared salience-calibrated tactile, visual and multimodal navigation cues during a driving-like task, and examined the effectiveness and intrusiveness of the navigation signals while varying cognitive workload and masking of task cues. We found that participants continued to utilize haptic navigation signals under high workload, but their usage of visual and reinforced multimodal navigation cues degraded; further, the reinforced cues under high cognitive workload disrupted the visual primary task. While multimodal cue reinforcement is generally considered a positive interface design practice, these results demonstrate a different view: dual-modality cues can cross a distraction threshold in high-workload environments and lead to overall performance degradation. Conversely, our results indicate that tactile signals can be a robust, intuitive and non-intrusive way to communicate information to a user performing a visual primary task.

TR-2007-08 Localized Broadcasting with Guaranteed Delivery and Bounded, April 04, 2007 Hosna Jabbari, Majid Khabbazian and Vijay K. Bhargava, 14 pages

The common belief is that localized broadcast algorithms are not able to guarantee both full delivery and a good bound on the number of transmissions. In this paper, we propose the first localized broadcast algorithm that guarantees full delivery and a constant approximation ratio to the minimum number of required transmissions in the worst case. The proposed broadcast algorithm is a self-pruning algorithm based on 1-hop neighbor information. Using the proposed algorithm, each node determines its status (forwarding/non-forwarding) in O(d log(d)), where d is the maximum node degree of the network. By extending the proposed algorithm, we show that localized broadcast algorithms can achieve both full delivery and a constant approximation ratio to the optimum solution with message complexity O(N), where N is the total number of nodes in the network and each message contains a constant number of bits. We also show how to save bandwidth by reducing the size of piggybacked information. Finally, we relax several system-model assumptions, or replace them with practical ones, in order to improve the practicality of the proposed broadcast algorithm.

TR-2007-09 Understanding Performance for Two 802.11b Competing Flows, April 07, 2007 Kan Cai, Michael J. Feeley and Sharath J. George, 8 pages

It is well known that 802.11 suffers from both inefficiency and unfairness in the face of competition and interference. This paper provides a detailed analysis of the impact of topology and traffic type on network performance when two flows compete with each other for airspace. We consider both TCP and UDP flows and a comprehensive set of node topologies. We vary these topologies to consider all combinations of the following four node-to-node interactions: (1) nodes unable to read or sense each other, (2) nodes able to sense each other but not able to read each other’s packets and nodes able to communicate with (3) weak and with (4) strong signal. We evaluate all possible cases through simulation and show that, for 802.11b competing flows, the cases can be reduced to 11 UDP and 10 TCP models with similar efficiency/fairness characteristics. We also validate our simulation results with extensive experiments conducted in a laboratory testbed.

TR-2007-10 BRDF Acquisition with Basis Illumination, April 16, 2007 Abhijeet Ghosh, Shruthi Achutha, Wolfgang Heidrich and Matthew O'Toole, 8 pages

Realistic descriptions of surface reflectance have long been a topic of interest in both computer vision and computer graphics research. In this paper, we describe a novel and fast approach for the acquisition of bidirectional reflectance distribution functions (BRDFs). We develop a novel theory for directly measuring BRDFs in a basis representation by projecting incident light as a sequence of basis functions from a spherical zone of directions. We derive an orthonormal basis over spherical zones that is ideally suited for this task. BRDF values outside the zonal directions are extrapolated by re-projecting the zonal measurements into a spherical harmonics basis, or by fitting analytical reflection models to the data. We verify this approach with a compact optical setup that requires no moving parts and only a small number of image measurements. Using this approach, a BRDF can be measured in just a few minutes.

TR-2007-11 A Toolbox of Level Set Methods (version 1.1), June 1, 2007 Ian M Mitchell, 94 pages

This document describes a toolbox of level set methods for solving time-dependent Hamilton-Jacobi partial differential equations (PDEs) in the MATLAB programming environment. Level set methods are often used for simulation of dynamic implicit surfaces in graphics, fluid and combustion simulation, image processing, and computer vision. Hamilton-Jacobi and related PDEs arise in fields such as control, robotics, differential games, dynamic programming, mesh generation, stochastic differential equations, financial mathematics, and verification. The algorithms in the toolbox can be used in any number of dimensions, although computational cost and visualization difficulty make dimensions four and higher a challenge. All source code for the toolbox is provided as plain text in the MATLAB m-file programming language. The toolbox is designed to allow quick and easy experimentation with level set methods, although it is not by itself a level set tutorial and so should be used in combination with the existing literature.
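
For readers unfamiliar with level set methods, the fragment below is a hedged illustration (in Python, not the toolbox's MATLAB API) of the kind of computation involved: a first-order upwind advection of a one-dimensional level set function at constant speed. All function names and parameters are illustrative and are not part of the toolbox.

    # Hedged illustration only; the toolbox itself is written in MATLAB.
    # Advect a 1D level set function phi under phi_t + v * phi_x = 0 using
    # first-order upwind differences on a periodic grid.
    import numpy as np

    def advect_upwind(phi, v, dx, dt, steps):
        for _ in range(steps):
            if v > 0:
                dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
            else:
                dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
            phi = phi - dt * v * dphi
        return phi

    # Example: a signed distance function whose zero level set is the
    # interval [0.4, 0.6], transported to the right at unit speed.
    x = np.linspace(0.0, 1.0, 101)
    phi0 = np.abs(x - 0.5) - 0.1
    phi1 = advect_upwind(phi0, v=1.0, dx=x[1] - x[0], dt=0.005, steps=40)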

TR-2007-12 Information Mobility Between Virtual and Physical Domains, May 31, 2007 Garth Shoemaker, 11 pages

Humans are ideally suited to interaction with physical information artifacts. These capabilities should be supported by computing systems, and users should be able to move seamlessly between interacting with virtual artifacts and equivalent physical artifacts. Unfortunately, existing computing systems are centered almost solely around virtual information interaction, with interaction metaphors developed specifically for computing systems. It is important that future computing systems evolve to properly support data mobility between physical and virtual domains, and as a result, natural human interactions. We perform a survey of existing research relevant to data mobility, identify key problems which need addressing, and speculate on future research that may help alleviate the problems identified.

TR-2007-13 Contour-based Modeling Using Deformable 3D Templates, June 06, 2007 Vladislav Kraevoy, Alla Sheffer and Michiel van de Panne, 9 pages

We present a new technique for image-based modeling using as input image contours and a deformable 3D template. The technique gradually deforms the template to fit the contours. At the heart of this process is the need to provide good correspondences between points on image contours and vertices on the model. We propose the use of a hidden Markov model for efficiently computing an optimal set of correspondences. An iterative match-and-deform process then progressively deforms the 3D template to match the image contours. The technique can successfully deform the template to match contours that represent significant changes in shape. The template models can be augmented to include properties such as bending stiffness and symmetry constraints. We demonstrate the results on a variety of objects.

TR-2007-14 Interfaces for Web Service Intermediaries, June 05, 2007 S. Forghanizadeh, I. Minevskiy and E. Wohlstadter, 10 pages

The use of XML as a format for message exchange makes Web services well suited for composition of heterogeneous components. However, since the schema of these messages must be understood by all cooperating services, interoperability is still a significant problem. There is often some level of semantic overlap between schemas even when there is no syntactic match. We are interested in supporting interoperability through the use of partial interface adaptation. Using Web services, it is becoming common for clients to share adaptations provided by a Web service intermediary. Given a Web service and an intermediary, we generate the interface that can be used by a client taking into account what can happen at the intermediary. We provide examples using publicly available service schemas and a performance evaluation for the purpose of validating the usefulness of our approach.

TR-2007-15 Glimmer: Multilevel MDS on the GPU, June 11, 2007 Stephen Ingram, Tamara Munzner and Marc Olano, 8 pages

We present Glimmer, a new multilevel algorithm for multidimensional scaling that accurately reflects the high-dimensional structure of the original data in the low-dimensional embedding, and converges well. It is designed to exploit modern graphics processing unit (GPU) hardware for a dramatic speedup compared to previous work. We also present GPU-SF, an efficient GPU version of a stochastic force algorithm that we use as a subsystem in Glimmer. We propose robust termination conditions for the iterative GPU-SF computation based on the filtered sum of point velocities. Our algorithms can either compute high-dimensional Euclidean distance on the fly from a set of high-dimensional points as input, or handle precomputed distance matrices. The O(N^2) size of these matrices would quickly overflow texture memory, so we propose distance paging and distance feeding to remove this scalability restriction. We demonstrate Glimmer’s benefits in terms of speed, convergence and correctness against several previous algorithms for a range of synthetic and real benchmark datasets.
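
As a rough, CPU-side sketch of the stochastic force idea behind GPU-SF (not the authors' GPU implementation, and without the multilevel machinery or the velocity-filter termination test), each point is repeatedly nudged so that a few sampled low-dimensional distances move toward the corresponding high-dimensional ones; all names and constants below are illustrative.

    import numpy as np

    def stochastic_force_mds(X, dim=2, iters=200, sample=10, step=0.05, seed=0):
        """Simplified stochastic-force layout. X is an (n, d) array of high-dimensional points."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        Y = rng.standard_normal((n, dim)) * 1e-2      # low-dimensional embedding
        for _ in range(iters):
            for i in range(n):
                for j in rng.choice(n, size=min(sample, n), replace=False):
                    if i == j:
                        continue
                    d_hi = np.linalg.norm(X[i] - X[j])        # target distance
                    diff = Y[i] - Y[j]
                    d_lo = np.linalg.norm(diff) + 1e-12       # current distance
                    # Move point i a fraction of the residual along the connecting direction.
                    Y[i] -= step * (d_lo - d_hi) * diff / d_lo
        return Y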

TR-2007-16 Global and finite termination of a two-phase augmented Lagrangian filter method for general quadratic programs, June 12, 2007 Michael P. Friedlander and Sven Leyffer, 24 pages

We present a two-phase algorithm for solving large-scale quadratic programs (QPs). In the first phase, gradient-projection iterations approximately minimize an augmented Lagrangian function and provide an estimate of the optimal active set. In the second phase, an equality-constrained QP defined by the current inactive variables is approximately minimized in order to generate a second-order search direction. A filter determines the required accuracy of the subproblem solutions and provides an acceptance criterion for the search directions. The resulting algorithm is globally and finitely convergent. The algorithm is suitable for large-scale problems with many degrees of freedom, and provides an alternative to interior-point methods when iterative methods must be used to solve the underlying linear systems. Numerical experiments on a subset of the CUTEr QP test problems demonstrate the effectiveness of the approach.
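
For orientation, a generic convex quadratic program of the kind treated here can be written as minimizing $\frac{1}{2}x^{T}Hx + c^{T}x$ subject to $Ax = b$ and $\ell \le x \le u$; the corresponding augmented Lagrangian, minimized over the bounds by gradient projection in the first phase, has the standard form $L_{A}(x; y, \rho) = \frac{1}{2}x^{T}Hx + c^{T}x - y^{T}(Ax - b) + \frac{\rho}{2}\|Ax - b\|_{2}^{2}$. This notation is conventional and is offered only as a reading aid; the report itself defines the precise problem class and the filter mechanism.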

TR-2007-17 Efficient Optimal Multi-Location Robot Rendezvous, October 05, 2007 Ken Alton and Ian M. Mitchell, 21 pages

We present an efficient algorithm to solve the problem of optimal multi-location robot rendezvous. The rendezvous problem considered can be structured as a tree, with each node representing a meeting of robots, and the algorithm computes optimal meeting locations and connecting robot trajectories. The tree structure is exploited by using dynamic programming to compute solutions in two passes through the tree: an upwards pass computing the cost of all potential solutions, and a downwards pass computing optimal trajectories and meeting locations. The correctness and efficiency of the algorithm are analyzed theoretically, while a discrete robotic clinic problem and a continuous robot arm problem demonstrate the algorithm's practicality.
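
The two-pass structure described above can be sketched as follows for a finite set of candidate meeting locations; the data layout and cost terms here are assumptions made for illustration and are not the authors' formulation.

    import numpy as np

    def tree_rendezvous(children, root, travel, num_locations):
        """Illustrative two-pass DP over a meeting tree.

        children[v]    -> child meetings of meeting v (root is the final meeting).
        travel[(c, v)] -> (L x L) array; entry [i, j] is the travel cost from
                          candidate location i at meeting c to location j at meeting v.
        Returns a dict mapping each meeting to its chosen location index.
        """
        L = num_locations
        cost = {}

        def up(v):                         # upwards pass: cost of every subtree
            cost[v] = np.zeros(L)
            for c in children.get(v, []):
                up(c)
                cost[v] += np.min(cost[c][:, None] + travel[(c, v)], axis=0)

        def down(v, loc, choice):          # downwards pass: recover optimal locations
            choice[v] = loc
            for c in children.get(v, []):
                best = int(np.argmin(cost[c] + travel[(c, v)][:, loc]))
                down(c, best, choice)

        up(root)
        choice = {}
        down(root, int(np.argmin(cost[root])), choice)
        return choice

    # Tiny example: meetings "m1" and "m2" feed into a final meeting "root",
    # with 3 candidate locations and random travel costs.
    rng = np.random.default_rng(0)
    children = {"root": ["m1", "m2"], "m1": [], "m2": []}
    travel = {("m1", "root"): rng.random((3, 3)), ("m2", "root"): rng.random((3, 3))}
    print(tree_rendezvous(children, "root", travel, 3))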

TR-2007-18 Optimizing Acquaintance Selection in a PDMS, September 28, 2007 Rachel Pottinger Jian Xu, 16 pages

In a Peer Data Management System (PDMS), autonomous peers share semantically rich data. For queries to be translated across peers, a peer must provide a mapping to other peers in the PDMS; peers connected by such mappings are called acquaintances. To maximize query answering ability, a peer needs to optimize its choice of acquaintances. This paper introduces a novel framework for performing acquaintance selection. Our framework includes two selection schemes that effectively and efficiently estimate mapping quality. The "one-shot" scheme clusters peers and estimates the improvement in query answering based on cluster properties. The "two-hop" scheme estimates mapping quality using locally available information over multiple rounds. Our empirical study shows that both schemes effectively help acquaintance selection and scale to PDMSs with a large number of peers.

TR-2007-21 A Study-Based Guide to Multiple Visual Information Resolution Interface Designs, September 22, 2007 H. Lam and T. Munzner, 28 pages

Displaying multiple visual information resolutions (VIRs) of data has been proposed for the challenge of limited screen space. We review 19 existing multiple-VIR interface studies and cast our findings into a four-point decision tree: (1) When is multiple VIR useful? (2) How to create the low-VIR display? (3) Should the VIRs be displayed simultaneously? (4) Should the VIRs be embedded, or separated? We recommend that VIR and data levels should match, and low VIRs should only display task-relevant information. Simultaneous display, rather than temporal switching, is suitable for tasks with multi-level answers.

TR-2007-22 MAGIC Broker: A Middleware Toolkit for Interactive Public Displays, October 02, 2007 Aiman Erbad, Michael Blackstock, Adrian Friday, Rodger Lea and Jalal Al-Muhtadi, 6 pages

Large screen displays are being increasingly deployed in public areas for advertising, entertainment, and information display. Recently we have witnessed increasing interest in supporting interaction with such displays using personal mobile devices. To enable the rapid development of public large screen interactive applications, we have designed and developed the MAGIC Broker. The MAGIC Broker provides a set of abstractions and a simple RESTful web services protocol to easily program interactive public large screen display applications with a focus on mobile device interactions. We have carried out a preliminary evaluation of the MAGIC Broker via the development of a number of prototypes and believe our toolkit is a valid first step in developing a generic support infrastructure to empower developers of interactive large screen display applications.

TR-2007-23 GLUG: GPU Layout of Undirected Graphs, October 15, 2007 Stephen Ingram, Tamara Munzner and Marc Olano, 13 pages

We present a fast parallel algorithm for layout of undirected graphs, using commodity graphics processing unit (GPU) hardware. The GLUG algorithm creates a force-based layout minimizing the Kamada-Kawai energy of the graph embedding. Two parameters control the graph layout: the number of landmarks used in the force simulation determines the influence of the global structure, while the number of near neighbors affects local structure. We provide examples and guidelines for their use in controlling the visualization. Our layouts are of quality comparable to or better than those of existing fast, large-graph algorithms. GLUG is an order of magnitude faster than the previous CPU-based FM3 algorithm. It is considerably faster than the only previous GPU-based approach to force-directed placement, a multi-stage algorithm that uses a mix of CPU partitioning and GPU force simulation at each step. While GLUG has a preprocessing stage that runs on the CPU, the core algorithm operates entirely on the GPU.

TR-2007-25 A Tutorial on the Proof of the Existence of Nash Equilibria, November 09, 2007 Albert Xin Jiang and Kevin Leyton-Brown, 10 pages

In this tutorial we detail a proof of Nash's famous theorem on the existence of Nash equilibria in finite games, first proving Sperner's lemma and Brouwer's fixed-point theorem.

TR-2007-27 An Inner/Outer Stationary Iteration for Computing PageRank, December 20, 2007 Andrew P. Gray, Chen Greif and Tracy Lau, 12 pages

We present a stationary iterative scheme for PageRank computation. The algorithm is based on a linear system formulation of the problem, uses inner/outer iterations, and amounts to a simple preconditioning technique. It is simple, can be easily implemented and parallelized, and requires minimal storage overhead. Convergence analysis shows that the algorithm is effective for a crude inner tolerance and is not particularly sensitive to the choice of the parameters involved. Numerical examples featuring matrices of dimensions up to approximately $10^7$ confirm the analytical results and demonstrate the accelerated convergence of the algorithm compared to the power method.
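
A minimal sketch of an iteration in this spirit is given below, assuming the usual linear-system formulation $(I - \alpha P)x = (1 - \alpha)v$ with a column-stochastic matrix $P$; the inner damping parameter, the crude inner tolerance and all names are illustrative choices, and the report should be consulted for the actual scheme and its analysis.

    import numpy as np

    def pagerank_inner_outer(P, alpha=0.85, beta=0.5, v=None,
                             tol=1e-8, inner_tol=1e-2, max_outer=200):
        """Sketch of an inner/outer stationary iteration for (I - alpha*P) x = (1 - alpha) v.

        P is column-stochastic (dense here for simplicity); beta < alpha damps the
        inner iteration; inner_tol is deliberately crude.
        """
        n = P.shape[0]
        v = np.full(n, 1.0 / n) if v is None else v
        x = v.copy()
        for _ in range(max_outer):
            f = (alpha - beta) * (P @ x) + (1.0 - alpha) * v   # outer right-hand side
            y = x.copy()
            while True:                                        # inner, power-type iterations
                y_new = beta * (P @ y) + f
                done = np.linalg.norm(y_new - y, 1) < inner_tol
                y = y_new
                if done:
                    break
            if np.linalg.norm(y - x, 1) < tol:
                return y
            x = y
        return x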

TR-2008-01 Probing the Pareto frontier for basis pursuit solutions, January 28, 2008 Ewout van den Berg and Michael P. Friedlander, 23 pages

The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
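
In the notation commonly used for this problem (offered here only as a reading aid), the Pareto curve is $\phi(\tau) = \min_x \{\|Ax - b\|_2 : \|x\|_1 \le \tau\}$, and the basis pursuit denoise problem $\min_x \{\|x\|_1 : \|Ax - b\|_2 \le \sigma\}$ is solved by locating the root of $\phi(\tau) = \sigma$, for example with Newton-type updates $\tau_{k+1} = \tau_k + (\sigma - \phi(\tau_k))/\phi'(\tau_k)$, where the value and derivative of $\phi$ at $\tau_k$ come from the primal-dual solution of the one-norm-constrained least-squares subproblem mentioned in the abstract.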

TR-2008-02 Fast Marching Methods for Stationary Hamilton-Jacobi Equations with Axis-Aligned Anisotropy, August 08, 2008 Ken Alton and Ian M. Mitchell, 33 pages

The Fast Marching Method (FMM) has proved to be a very efficient algorithm for solving the isotropic Eikonal equation. Because it is a minor modification of Dijkstra's algorithm for finding the shortest path through a discrete graph, FMM is also easy to implement. In this paper we describe a new class of Hamilton-Jacobi (HJ) PDEs with axis-aligned anisotropy which satisfy a causality condition for standard finite difference schemes on orthogonal grids and can hence be solved using the FMM; the only modification required to the algorithm is in the local update equation for a node. This class of HJ PDEs has applications in anelliptic wave propagation and robotic path planning, and brief examples are included. Since our class of HJ PDEs and grids permit asymmetries, we also examine some methods of improving the efficiency of the local update that do not require symmetric grids and PDEs. Finally, we include explicit update formulas for variations of the Eikonal equation that use the Manhattan, Euclidean and infinity norms on orthogonal grids of arbitrary dimension and with variable node spacing.
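
Because the abstract describes FMM as a minor modification of Dijkstra's algorithm, the shared skeleton is easy to sketch. The fragment below solves the isotropic Eikonal equation $|\nabla u| = 1/f$ on a uniform 2D grid with a standard first-order local update, which is the piece that the paper's axis-aligned anisotropic HJ PDEs would replace; it is an illustration, not the paper's algorithm.

    import heapq
    import numpy as np

    def fast_marching(speed, source, h=1.0):
        """First-order Fast Marching on a 2D grid: solve |grad u| * speed = 1 with u(source) = 0."""
        ny, nx = speed.shape
        U = np.full((ny, nx), np.inf)
        accepted = np.zeros((ny, nx), dtype=bool)
        U[source] = 0.0
        heap = [(0.0, source)]

        def local_update(i, j):
            # Smallest accepted-or-tentative neighbor value along each axis.
            a = min(U[i - 1, j] if i > 0 else np.inf, U[i + 1, j] if i < ny - 1 else np.inf)
            b = min(U[i, j - 1] if j > 0 else np.inf, U[i, j + 1] if j < nx - 1 else np.inf)
            a, b = min(a, b), max(a, b)
            hf = h / speed[i, j]
            if b - a >= hf:                  # only the smaller direction contributes
                return a + hf
            return 0.5 * (a + b + np.sqrt(2.0 * hf * hf - (a - b) ** 2))

        while heap:                          # Dijkstra-like loop over a priority queue
            u, (i, j) = heapq.heappop(heap)
            if accepted[i, j]:
                continue
            accepted[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                    new_u = local_update(ni, nj)
                    if new_u < U[ni, nj]:
                        U[ni, nj] = new_u
                        heapq.heappush(heap, (new_u, (ni, nj)))
        return U

    # Example: unit speed everywhere gives (approximate) distance from the grid center.
    U = fast_marching(np.ones((50, 50)), (25, 25))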

TR-2008-03 Computation with Energy-Time Trade-Offs: Models, Algorithms and Lower-Bounds, April 10, 2008 Bradley D. Bingham and Mark R. Greenstreet, 11 pages

Power consumption has become one of the most critical concerns for processor design. This motivates designing algorithms for minimum execution time subject to energy constraints. We propose simple models for analysing algorithms that reflect the energy-time trade-offs of CMOS circuits. Using these models, we derive lower bounds for the energy-constrained execution time of sorting, addition and multiplication, and we present algorithms that meet these bounds. We show that minimizing time under energy constraints is not the same as minimizing operation count or computation depth.

TR-2008-04 Supporting Transitions in Work: Informing Groupware Design by Understanding Whiteboard Use, April 24, 2008 Anthony Tang, Joel Lanir, Saul Greenberg and S. Sidney Fels, 10 pages

Many groupware tools focus on supporting collaborative real-time work; yet in practice, work spans many different modes: from collaborative to independent activity, and from synchronous, real-time activity to asynchronous activity. How can we design tools that allow users to transition between these modes of activity smoothly in their work? We consider how common office and domestic whiteboards are used for both independent and asynchronous activity, showing how users employ the whiteboard to transition between these and other modes of activity. Our findings suggest that the whiteboard does so by being a contextually located display with visually persistent content, facilitating transitions because it is a flexible, common tool enabling the creation of representations that are useful across modes. We explore the design implications of these findings with respect to interactive whiteboard tools, and discuss how they can be applied more generally to inform the design of groupware tools.

TR-2008-05 Coupled CRFs for estimating the underlying ground surface from airborne LiDAR data, May 21, 2008 Wei-Lwun Lu, Kevin P. Murphy, James J. Little, Alla Sheffer and Hongbo Fu, 7 pages

Airborne laser scanners (LiDAR) return point clouds of millions of points imaging large regions. It is very challenging to recover the bare earth, i.e., the surface remaining after the buildings and vegetative cover have been identified and removed; manual correction of the recovered surface is very costly. Our solution combines classification into ground and non-ground with reconstruction of the continuous underlying surface. We define a joint model on the class labels and estimated surface, $p(\mathbf{c},\mathbf{z}\mid\mathbf{x})$, where $c_i \in \{0,1\}$ is the label of point $i$ (ground or non-ground), $z_i$ is the estimated bare-earth surface at point $i$, and $x_i$ is the observed height of point $i$. We learn the parameters of this CRF using supervised learning. The graph structure is obtained by triangulating the point clouds. Given the model, we compute a MAP estimate of the surface, $\arg\max p(\mathbf{z}\mid\mathbf{x})$, using the EM algorithm, treating the labels $\mathbf{c}$ as missing data. Extensive testing shows that the recovered surfaces agree very well with those reconstructed from manually corrected data. Moreover, the resulting classification of points is competitive with the best in the literature.

TR-2008-06 An Exploratory Study on How Can Diagramming Tools Help Support Programming Activities?, May 28, 2008 Seonah Lee, Gail C. Murphy, Thomas Fritz and Meghan Allen, 8 pages

Programmers often draw diagrams on whiteboards or on paper. To enable programmers to use such diagrams in the context of their programming environment, many tools have been built. Despite the existence and availability of such tools, many programmers continue to work predominantly with textual descriptions of source code. In this paper, we report on an exploratory study we conducted to investigate what kind of diagrammatic tool support is desired by programmers, if any. The study involved 19 professional programmers working at three different companies. The study participants desired a wide range of information content in diagrams and wanted the content to be sensitive to particular contexts of use. Meeting these needs may require flexible, adaptive and responsive diagrammatic tool support.

TR-2008-08 Uncovering Activity and Patterns in Video using Slit-Tear Visualizations, July 31, 2008 Anthony Tang, Joel Lanir, Saul Greenberg and S. Sidney Fels, 8 pages

In prior work, we introduced a visualization technique for analyzing fixed position video streams called slit-tear visualizations. This technique supports exploratory data analysis by interactively generating views about the video stream that can provide insight into the spatial/temporal relationships of the entities contained within. These insights are necessarily grounded in context of the specific video being analyzed, and in this paper, we provide a general typology of the kinds of slit-tears an analyst may use. Further, we discuss the kinds of analytic primitives that often signal relevant events given these slit-tear types. The work is relevant to human-centered computing because the technique provides the most insight in the presence of human interpretation.

TR-2008-09 Group sparsity via linear-time projection, June 06, 2008 Ewout van den Berg, Mark Schmidt, Michael P. Friedlander and Kevin Murphy, 11 pages

We present an efficient spectral projected-gradient algorithm for optimization subject to a group one-norm constraint. Our approach is based on a novel linear-time algorithm for Euclidean projection onto the one- and group one-norm constraints. Numerical experiments on large data sets suggest that the proposed method is substantially more efficient and scalable than existing methods.
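
For orientation, the Euclidean projection referred to above is, in standard notation, $\mathrm{proj}_\tau(w) = \arg\min_x \{\frac{1}{2}\|x - w\|_2^2 : \sum_{g \in \mathcal{G}} \|x_g\|_2 \le \tau\}$, where the groups $g \in \mathcal{G}$ partition the variables; with singleton groups the constraint reduces to the ordinary one-norm ball $\|x\|_1 \le \tau$. This restates the problem only; the linear-time algorithm itself is described in the report.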

TR-2008-10 Collusion in Unrepeated, First-Price Auctions with an Uncertain Number of Participants, August 04, 2008 Kevin Leyton-Brown, Moshe Tennenholtz, Navin A. R. Bhat and Yoav Shoham, 30 pages

We identify a self-enforcing collusion protocol (a ``bidding ring'') for non-repeated first-price auctions. Unlike previous work on the topic such as that by McAfee & McMillan [1992] and Marshall & Marx [2007], we allow for the existence of multiple cartels in the auction and do not assume that non-colluding agents have perfect knowledge about the number of colluding agents whose bids are suppressed by the bidding ring. We show that it is an equilibrium for agents to choose to join bidding rings when invited and to truthfully declare their valuations to a ring center, and for non-colluding agents to bid straightforwardly. Furthermore, even though our protocol is efficient, we show that the existence of bidding rings benefits ring centers and all agents, both members and non-members of bidding rings, at the auctioneer's expense.

TR-2008-11 Efficient Dynamic Programming for Optimal Multi-Location Robot Rendezvous with Proofs, August 06, 2008 Ken Alton and Ian M. Mitchell, 8 pages

We present an efficient dynamic programming algorithm to solve the problem of optimal multi-location robot rendezvous. The rendezvous problem considered can be structured as a tree, with each node representing a meeting of robots, and the algorithm computes optimal meeting locations and connecting robot trajectories. The tree structure is exploited by using dynamic programming to compute solutions in two passes through the tree: an upwards pass computing the cost of all potential solutions, and a downwards pass computing optimal trajectories and meeting locations. The correctness and efficiency of the algorithm are analyzed theoretically, while a continuous robot arm problem demonstrates the algorithm's practicality.

TR-2008-13 Action-Graph Games, September 08, 2008 Albert Xin Jiang, Kevin Leyton-Brown and Navin A. R. Bhat, 54 pages

Representing and reasoning with games becomes difficult once they involve large numbers of actions and players, because utility functions can grow unmanageably. Action-Graph Games (AGGs) are a fully-expressive game representation that can compactly express utility functions with structure such as context-specific (or strict) independence, anonymity, and additivity. We show that AGGs can be used to compactly represent all games that are compact when represented as graphical games, symmetric games, anonymous games, congestion games, and polymatrix games. We further show that AGGs can compactly represent additional, realistic games that require exponential space under all of these existing representations. We give a dynamic programming algorithm for computing a player's expected utility under an arbitrary mixed-strategy profile, which can achieve running times polynomial in the size of an AGG representation. We show how to use this algorithm to achieve exponential speedups of existing methods for computing sample Nash and correlated equilibria. Finally, we present the results of extensive experiments, showing that using AGGs leads to a dramatic increase in the size of games accessible to computational analysis.

TR-2008-14 Reducing Code Navigation Effort with Differential Code Coverage, September 08, 2008 Kaitlin Duck Sherwood and Gail C Murphy, 11 pages

Programmers spend a significant amount of time navigating code. However, few details are known about how this time is spent. To investigate this time, we performed a study of professional programmers performing programming tasks. We found that these professionals frequently needed to follow execution paths in the code, but that they often made faulty assumptions about which code had executed, impeding their progress. Earlier work on software reconnaissance has addressed this problem, but has focused on whether the technique could provide the correct information to a programmer, not on whether the technique reduces navigation effort or improves navigation. We built a tool, called Tripoli, that provides an approximation to software reconnaissance via differential code coverage and reran a subset of the initial study. We found that Tripoli had a positive effect on code navigation: less experienced programmers with Tripoli were often more successful in less time than experienced programmers without it.

TR-2008-15 Non-von Neumann-Morgenstern expected utility maximization models of choice from behavioural game theory, September 23, 2008 James Wright, 6 pages

I survey a number of papers that describe models of choice from behavioural game theory. These models are alternatives to expected utility maximization, the standard game-theoretic model of choice [Von Neumann and Morgenstern, 1944].

TR-2008-17 Guaranteed Voronoi Diagrams of Uncertain Sites, December 31, 2008 William Evans and Jeff Sember, 13 pages

In this paper we investigate the Voronoi diagram that is induced by a set of sites in the plane, where each site's precise location is uncertain but is known to be within a particular region, and the cells of this diagram contain those points guaranteed to be closest to a particular site. We examine the diagram for sites with disc-shaped regions of uncertainty, prove that it has linear complexity, and provide an optimal algorithm for its construction. We also show that the diagram for uncertain polygons has linear complexity. We then describe two generalizations of these diagrams for uncertain discs. In the first, which is related to a standard order-k Voronoi diagram, each cell is associated with a subset of k sites, and each point within the cell is guaranteed closer to any of the sites within the subset than to any site not in the subset. In the second, each cell is associated with the smallest subset guaranteed to contain the nearest site to each point in the cell. For both generalizations, we provide tight complexity bounds and efficient construction algorithms. Finally, we examine the Delaunay triangulations that can exist for sites within uncertain discs, and provide an optimal algorithm for generating those edges that are guaranteed to exist in every such triangulation.

TR-2009-02 Towards an experimental model for exploring the role of touch, February 03, 2009 Joseph P. Hall, Jeswin Jeyasurya, Aurora Phillips, Chirag Vesuvala, Steve Yohanan and K.E. MacLean, 8 pages

In this paper we investigate the ability of a haptic device to reduce anxiety in users exposed to disturbing images, as we begin to explore the utility of haptic display in anxiety therapy. We conducted a within-subjects experimental design in which subjects were shown two sets of disturbing images, once with the haptic creature and once without, as well as a control condition with calming images. Subjects were connected to bio-sensors which monitored their skin conductance, heart rate and forehead corrugator muscle changes; we then used these signals to estimate the subject's arousal, which has been correlated with anxiety level. We observed a significant interaction effect on arousal when subjects held the creature in the presence of disturbing (versus calm) images. The results of this exploratory study suggest that the creature was able to reduce the level of anxiety induced in the subjects by the images. Qualitative feedback also indicated that a majority of subjects found the haptic creature comforting, supporting the results from the bio-sensor readings.

TR-2009-03 Numerically Robust Continuous Collision Detection for Dynamic Explicit Surfaces, February 20, 2009 Tyson Brochu and Robert Bridson, 5 pages

We present a new, provably robust method for continuous collision detection for moving triangle meshes. Our method augments the spatial coordinate system by one dimension, representing time. We can then apply numerically robust predicates from computational geometry to detect intersections in space-time. These predicates use only multiplication and addition, so we can determine the maximum numerical error accumulated in their computation. From this forward error analysis, we can identify and handle degenerate geometric configurations without resorting to user-tuned error tolerances.

TR-2009-04 Efficient Snap Rounding in Square and Hexagonal Grids using Integer Arithmetic, February 12, 2009 Boaz Ben-Moshe, Binay K. Bhattacharya and Jeff Sember, 20 pages

In this paper we present two efficient algorithms for snap rounding a set of segments to both square and hexagonal grids. The first algorithm takes n line segments as input and generates the set of snapped segments in O((n + k) log n + |I| + |I*|) time, where k is never more than the number of hot pixels (and may be substantially less), |I| is the complexity of the unrounded arrangement I, and |I*| is the size of the multiset of snapped segment fragments. The second algorithm generates the rounded arrangement of segments in O(|I| + (|I*| + sum_c is(c)) log n) time, where |I*| is the complexity of the rounded arrangement I* and is(c) is the number of segments that have an intersection or endpoint in pixel row or column c. Both use simple integer arithmetic to compute the rounded arrangement by sweeping a strip of unit width through the arrangement, are robust, and are practical to implement. They improve upon existing algorithms, since existing running times either include an |I| log n term, or depend upon the number of segments interacting within a particular hot pixel h (is(h) and ed(h), or |h|), whereas ours depend on |I| without the log n factor and are either independent of the number of segments interacting within a hot pixel (algorithm 1) or depend upon the number of segments interacting in an entire hot row or column (is(c)), which is a much coarser partition of the plane (algorithm 2).

TR-2009-06 An Analysis of Spatial- and Fourier-Multiplexed Imaging, March 24, 2009 Gordon Wetzstein, Ivo Ihrke and Wolfgang Heidrich, 11 pages

Multiplexing is a common technique for encoding high-dimensional image data into a single two-dimensional image. Examples of spatial multiplexing include Bayer patterns to encode color channels, and integral images to encode light fields. In the Fourier domain, optical heterodyning has been used to encode light fields. In this paper, we analyze the relationship between spatial and Fourier multiplexing techniques. We develop this analysis on the example of multi-spectral imaging, and then generalize it to light fields and other properties. We also analyze the effects of sensor saturation on Fourier multiplexing techniques such as optical heterodyning, and devise a new optimization approach for recovering saturated detail.

TR-2009-07 Joint-sparse recovery from multiple measurements, April 14, 2009 Ewout van den Berg and Michael P. Friedlander, 19 pages

The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties for two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using L1 minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with L1 minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than the number of nonzero rows does not improve the potential theoretical recovery rate.
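
In the usual multiple-measurement-vector notation (used here only for orientation), sum-of-norm recovery solves $\min_X \sum_i \|X_{i\cdot}\|_2$ subject to $AX = B$, where the columns of $B$ are the measurement vectors and the unknown $X$ has a common set of nonzero rows, whereas sequential SMV recovery solves $\min_x \|x\|_1$ subject to $Ax = b_j$ for each column $b_j$ separately; the first result above compares the uniform recovery rates of these two formulations.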

TR-2009-08 Learning a contingently acyclic, probabilistic relational model of a social network, April 06, 2009 Peter Carbonetto, Jacek Kisynski, Michael Chiang and David Poole, 14 pages

We demonstrate through experimental comparisons that modeling relations in a social network with a directed probabilistic model provides a viable alternative to the standard undirected graphical model approach. Our model incorporates special latent variables to guarantee acyclicity. We investigate the inference and learning challenges entailed by our approach.

TR-2009-09 Revenue Monotonicity in Deterministic, Dominant-Strategy Combinatorial Auctions, April 11, 2009 Baharak Rastegari, Anne Condon and Kevin Leyton-Brown, 28 pages

In combinatorial auctions using VCG, a seller can sometimes increase revenue by dropping bidders. In this paper we investigate the extent to which this counter-intuitive phenomenon can also occur under other deterministic dominant-strategy combinatorial auction mechanisms. Our main result is that such failures of “revenue monotonicity” can occur under any such mechanism that is weakly maximal—meaning roughly that it chooses allocations that cannot be augmented to cause a losing bidder to win without hurting winning bidders—and that allows bidders to express arbitrary single-minded preferences. We also give a set of other impossibility results as corollaries, concerning revenue when the set of goods changes, false-name-proofness, and the core.

TR-2009-10 Promoting Collaborative Learning in Lecture Halls using Multiple Projected Screens with Persistent and Dynamic Content, April 20, 2009 Joel Lanir, Kellogg S. Booth and Steven Wolfman, 10 pages

A necessary condition for collaborative learning is shared access and control of the representations of information under discussion. Much of the teaching in higher education today is done in classroom lectures with a largely one-way flow of information from the instructor to students, often using computer slides. Persistence of the information is not under student control and there is little student-initiated interaction or dynamic use of the visual aids. We evaluated the use of MultiPresenter, a presentation system that utilizes multiple screens to afford more flexibility in the delivery of the lecture and persistence of information so students can selectively attend to information on their own terms. Data about the use of multiple screens provides insights into how MultiPresenter affects classroom interactions and students’ learning. We discuss these findings and make recommendations for extending MultiPresenter to better support symmetric collaborative learning within the context of large lecture presentations.

TR-2009-11 On the Power of Local Broadcast Algorithms, April 24, 2009 Hosna Jabbari, Majid Khabbazian, Ian Blake and Vijay Bhargava, 9 pages

There are two main approaches, static and dynamic, to broadcasting in wireless ad hoc networks. In the static approach, local algorithms determine the status (forwarding/non-forwarding) of each node proactively based on local topology information and a globally known priority function. In this paper, we first show that local broadcast algorithms based on the static approach cannot achieve a good approximation factor to the optimum solution (an NP-hard problem). However, we show that a constant approximation factor is achievable if (relative) position information is available. In the dynamic approach, local algorithms determine the status of each node ``on-the-fly'' based on local topology information and broadcast state information. Using the dynamic approach, it was recently shown that local broadcast algorithms can achieve a constant approximation factor to the optimum solution when (approximate) position information is available. However, using position information can simplify the problem. Also, in some applications it may not be practical to have position information. Therefore, we wish to know whether local broadcast algorithms based on the dynamic approach can achieve a constant approximation factor without using position information. We answer this question in the positive - we design a local broadcast algorithm in which the status of each node is decided ``on-the-fly'' and prove that the algorithm can achieve both full delivery and a constant approximation to the optimum solution.

TR-2009-12 Body-Centric Interactions With Very Large Wall Displays, April 28, 2009 Garth Shoemaker, Takayuki Tsukitani, Yoshifumi Kitamura and Kellogg S. Booth, 10 pages

We explore a set of body-centric interaction techniques for very large wall displays. The techniques described include: virtual tools that are stored on a user’s own body, protocols for sharing personal information between co-located collaborators, a shadow representation of users’ bodies, and methods for positioning virtual light sources in the work environment. These techniques are important as a group because they serve to unify the virtual world and the physical world, breaking down the barriers between display space, personal body space, and shared room space. We describe an implementation of these techniques as integrated into a collaborative map viewing and editing application.

TR-2009-13 Degree-of-Knowledge: Investigating an Indicator for Source Code Authority, May 07, 2009 Thomas Fritz, Jingwen Ou and Gail C. Murphy, 10 pages

Working on the source code as part of a large team productively requires a delicate balance. Optimally, a developer might like to thoroughly assess each change to the source code entering their development environment lest the change introduce a fault. In reality, a developer is faced with thousands of changes to source code elements entering their environment each day, forcing the developer to make choices about how often and to what depth to assess changes. In this paper, we investigate an approach to help a developer make these choices by providing an indicator of the authority with which a change has been made. We refer to our indicator of source code authority as a degree-of-knowledge (DOK), a real value that can be computed automatically for each source code element and each developer. The computation of DOK is based on authorship data from the source revision history of the project and on interaction data collected as a developer works. We present data collected from eight professional software developers to demonstrate the rate of information flow faced by developers. We also report on two experiments we conducted involving nine professional software developers to set the weightings of authorship and interaction for the DOK computation. To show the potential usefulness of the indicator, we report on three case studies. These studies considered the use of the indicator to help find experts, to help with onboarding and to help with assessing information in changesets.

TR-2009-15 An Automatically Configured Modular Algorithm for Post Enrollment Course Timetabling, June 15, 2009 Chris Fawcett, Holger H. Hoos and Marco Chiarandini, 14 pages

Timetabling tasks form a widely studied type of resource scheduling problem, with important real-world applications in schools, universities and other educational settings. In this work, we focus on post-enrollment course timetabling, the problem that was covered by Track 2 of the recent 2nd International Timetabling Competition (ITC2007). Following an approach that makes strong use of automated exploration of a large design space of modular and highly parameterised stochastic local search algorithms for this problem, we produced a solver that placed third in Track 2 of ITC2007. In subsequent work, we further improved both the solver framework and the automated algorithm design procedure, and obtained a solver that achieves consistently better performance than the top-ranked solver from the competition and represents a substantial improvement in the state of the art for post-enrollment course timetabling.

TR-2009-18 On Improving Key Pre-distribution Schemes for Sensor Networks, July 24, 2009 Majid Khabbazian, Ian Blake, Vijay Bhargava and Hosna Jabbari, 9 pages

In this work, we show how to improve the resilience or computational cost of two primary key pre-distribution schemes. First, we consider the primary key pre-distribution scheme proposed by Eschenauer and Gligor and its extension by Chan, Perrig and Song. We propose a modified version of their schemes and prove that it provides significantly higher resilience than the original schemes at almost no extra cost. The second part of this work deals with the primary key pre-distribution scheme proposed by Blom and its extension by Du, Deng, Han and Varshney. The key pre-distribution scheme by Blom and its extension offer much higher resilience than random key pre-distribution schemes at the cost of higher computational cost. We show that the computational cost of the Blom scheme can be significantly reduced at the cost of slight reduction in resilience or a small increase in memory requirement. It is expected that aspects of the techniques introduced here, suitably adapted, can be applied to other key distribution schemes to improve efficiency.

TR-2009-19 Optimization Methods for L1-Regularization, August 04, 2009 Mark Schmidt, Glenn Fung and Romer Rosales, 20 pages

In this paper we review and compare state-of-the-art optimization techniques for solving the problem of minimizing a twice-differentiable loss function subject to L1-regularization. The first part of this work outlines a variety of the approaches that are available to solve this type of problem, highlighting some of their strengths and weaknesses. In the second part, we present numerical results comparing 14 optimization strategies under various scenarios.
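
The problem class is $\min_x f(x) + \lambda\|x\|_1$ with $f$ twice differentiable and $\lambda > 0$ controlling sparsity. One of the simplest members of the family of methods such a comparison covers is iterative soft-thresholding (proximal gradient); the sketch below is illustrative and is not claimed to be one of the 14 strategies compared in the report.

    import numpy as np

    def ista(grad_f, lipschitz, lam, x0, iters=500):
        """Iterative soft-thresholding for min_x f(x) + lam * ||x||_1.

        grad_f: gradient of the smooth loss f; lipschitz: a Lipschitz constant of grad_f.
        """
        x = x0.copy()
        t = 1.0 / lipschitz
        for _ in range(iters):
            z = x - t * grad_f(x)                                   # gradient step on f
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # prox of the L1 term
        return x

    # Example: sparse least squares, f(x) = 0.5 * ||Ax - b||^2.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = A @ np.concatenate([rng.standard_normal(5), np.zeros(95)])
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of x -> A^T (Ax - b)
    x_hat = ista(lambda x: A.T @ (A @ x - b), L, lam=0.1, x0=np.zeros(100))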

TR-2009-20 On Efficient Replacement Policies for Cache Objects with Non-uniform Sizes and Costs, July 31, 2009 Ying Su and Laks Lakshmanan, 12 pages

Replacement policies for cache management have been studied extensively. Most policies proposed in the literature tend to be ad hoc and typically exploit the retrieval cost or latency of the objects, object sizes, popularity of requests and temporal correlations between requests. A recent paper [1] studied the caching problem and developed an optimal replacement policy $C^*_0$ under the independent reference model (IRM), assuming nonuniform object retrieval costs. In this paper, we consider the more general setting where both object sizes and their retrieval costs are non-uniform. This setting arises in a number of applications such as web caching and view management in databases and in data warehouses. We consider static selection as a benchmark when evaluating the performance of replacement policies. Our first result is negative: no dynamic policy can achieve a better performance than the optimal static selection in terms of the long-run average metric. We also prove that a (dynamic) replacement policy attains this optimum iff the stochastic chain induced by it is irreducible. We show that previously studied optimal policies such as $A_0$ and $C^*_0$ are special cases of our optimal policy. This motivates the study of static selection. For the general case we are considering, static selection is NP-complete. Let $K$ denote the maximum cache capacity and let $K'$ be the sum of sizes of all objects minus the cache capacity $K$. We propose a polynomial time algorithm that is both $K$-approximate w.r.t. the fractional optimum solution and $H_{K'}$-approximate w.r.t. the integral optimum solution, where $H_{K'}$ is the $K'$-th harmonic number. In addition, we develop a $K$-competitive dynamic policy and show that $K$ is the best possible approximation ratio for both static algorithms and dynamic policies.

TR-2009-21 Tradeoffs in the Empirical Evaluation of Competing Algorithm Designs, October 20, 2009 Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown, 27 pages

We propose an empirical analysis approach for characterizing tradeoffs between different methods for comparing a set of competing algorithm designs. Our approach can provide insight into performance variation both across candidate algorithms and across instances. It can also identify the best tradeoff between evaluating a larger number of candidate algorithm designs, performing these evaluations on a larger number of problem instances, and allocating more time to each algorithm run. We applied our approach to a study of the rich algorithm design spaces offered by three highly-parameterized, state-of-the-art algorithms for satisfiability and mixed integer programming, considering six different distributions of problem instances. We demonstrate that the resulting algorithm design scenarios differ in many ways, with important consequences for both automatic and manual algorithm design. We expect that both our methods and our findings will lead to tangible improvements in algorithm design methods.

TR-2009-22 A Box Shaped Cyclically Reduced Operator, October 23, 2009 Chen Greif and L. Robert Hocking, 25 pages

A new procedure of cyclic reduction is proposed, whereby instead of performing a step of elimination on the original Cartesian mesh using a two-color ordering and a standard 5-point or 7-point operator, we perform the decoupling step on the reduced mesh associated with one color, using non-standard operators that are better aligned with that mesh. This yields a Cartesian mesh and box-shaped 9-point (in 2D) or 27-point (in 3D) operators that are easy to deal with. Convergence analysis for multi-line and multi-plane orderings is carried out. Numerical experiments demonstrate the merits of the approach taken.

TR-2009-23 A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning, November 16, 2009 Eric Brochu, Mike Cora and Nando de Freitas, 50 pages

We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments -- active user modelling with preferences, and hierarchical reinforcement learning. While the most common prior for Bayesian optimization is a Gaussian process, we also present random forests as an example of an alternative prior.
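
A compact sketch of the loop described above, with a Gaussian process prior and the expected-improvement criterion, is given below; the kernel, its fixed hyperparameters, the random candidate set used to optimize the acquisition function, and all names are illustrative simplifications rather than the tutorial's recommendations.

    import numpy as np
    from scipy.stats import norm

    def rbf(A, B, ell=0.2):
        """Squared-exponential kernel between rows of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        """Posterior mean and standard deviation of a zero-mean GP at the points Xs."""
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, Xs)
        sol = np.linalg.solve(K, Ks)
        mu = sol.T @ y
        var = np.clip(np.diag(rbf(Xs, Xs)) - np.sum(Ks * sol, axis=0), 1e-12, None)
        return mu, np.sqrt(var)

    def expected_improvement(mu, sigma, best):
        z = (mu - best) / sigma
        return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    def bayes_opt(f, bounds, n_init=3, iters=20, seed=0):
        """Maximize an expensive scalar function f on an interval by Bayesian optimization."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, size=(n_init, 1))
        y = np.array([f(x[0]) for x in X])
        for _ in range(iters):
            cand = rng.uniform(lo, hi, size=(256, 1))      # candidate points for the acquisition
            mu, sigma = gp_posterior(X, y, cand)
            x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
            X = np.vstack([X, x_next])
            y = np.append(y, f(x_next[0]))
        return X[np.argmax(y)], y.max()

    # Example: maximize a cheap stand-in for an "expensive" function on [0, 1].
    x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x), (0.0, 1.0))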

TR-2009-24 Reflections on QuestVis: A Visualization System for an Environmental Sustainability Model, November 19, 2009 Tamara Munzner, Aaron Barsky and Matt Williams, 10 pages

We present lessons learned from the iterative design of QuestVis, a visualization interface for the QUEST environmental sustainability model. The QUEST model predicts the effects of policy choices in the present using scenarios of future outcomes that consist of several hundred indicators. QuestVis treats this information as a high-dimensional dataset, and shows the relationship between input choices and output indicators using linked views and a compact multilevel browser for indicator values. A first prototype also featured an overview of the space of all possible scenarios based on dimensionality reduction, but this representation was deemed to be inappropriate for a target audience of people unfamiliar with data analysis. A second prototype with a considerably simplified and streamlined interface was created that supported comparison between multiple scenarios using a flexible approach to aggregation. However, QuestVis was not deployed because of a mismatch between the design goals of the project and the true needs of the target user community, who did not need to carry out detailed analysis of the high-dimensional dataset. We discuss this breakdown in the context of a nested model for visualization design and evaluation.

TR-2009-25 Visualizing Interactions on Distributed Tabletops, December 11, 2009 Anthony Tang, Michel Pahud and Bill Buxton, 6 pages

We describe our experiences with a tool we created to interactively visualize the collaborative interactions of groups making use of a distributed tabletop collaboration system. The tool allows us to ask and explore several questions about how users are actually interacting and making use of the space. We briefly describe the tool, discussing both our experience in building and using the tool. Finally, we describe our goals for attending this workshop.

TR-2010-01 Interpreter Implementation of Advice Weaving, January 22, 2010 Immad Naseer, Ryan M. Golbeck, Peter Selby and Gregor Kiczales, 12 pages

When late-binding of advice is used for incremental development or configuration, implementing advice weaving using code rewriting external to the VM can cause performance problems during application startup. We present an interpreter-based (non-rewriting) weaver that uses a simple table and cache structure for matching pointcuts against dynamic join points together with a simple mechanism for calling the matched advice. An implementation of our approach in the Jikes RVM shows its feasibility. Internal micro-benchmarks show dynamic join point execution overhead of approximately 28% in the common case where no advice is applicable and that start-up performance is improved over VM-external weavers. The cache and table structures could be used during later (i.e. JIT time) per-method rewrite based weaving to reduce pointcut matching overhead. We conclude that it is worthwhile to develop and evaluate a complete in-VM hybrid implementation, comprising both non-rewriting and rewriting based advice weaving.

TR-2010-03 Evaluating Two Window Manipulation Techniques on a Large Screen Display, May 08, 2010 Russell Mackenzie, Kirstie Hawkey, Presley Perswain and Kellogg S. Booth, 9 pages

Large screen displays are a common feature of modern meeting rooms, conference halls, and classrooms. The large size and often high resolution of these displays make them inherently suitable for collaborative work, but these attributes cause traditional windowing systems to become difficult to use because the interaction handles become smaller in visual space and in motor space. This may be exacerbated when a user faces the additional cognitive load of active, real-time collaboration. We describe a new window manipulation technique for such a collaborative meeting environment. Its design was inspired by recent collaborative systems in which a user must explicitly take control of a window in order to interact with its contents; actions are otherwise interpreted as navigational. Our Large Screen Optimized (LSO) window manipulation technique utilizes the entire window for manipulations instead of only the title-bar and borders. In addition, LSO includes "snapping regions" that automatically move the cursor to the boundary of the window, allowing quick, accurate manipulations involving the edges and corners of the screen. We experimentally validated that our new technique allows users to move and resize windows more quickly than with a traditional window manipulation technique.

TR-2010-04 Sparsity priors and boosting for learning localized distributed feature representations, March 29, 2010 K. Swersky, B. Marlin, B. Chen and N. de Freitas, 18 pages

This technical report presents a study of methods for learning sparse codes and localized features from data. In the context of this study, we propose a new prior for generating sparse image codes with low-energy, localized features. The experiments show that with this prior, it is possible to encode the model with significantly fewer bits without affecting accuracy. The report also introduces a boosting method for learning the structure and parameters of sparse coding models. The new methods are compared to several existing sparse coding techniques on two tasks: reconstruction of natural image patches and self taught learning. The experiments examine the effect of structural choices, priors and dataset size on model size and performance. Interestingly, we discover that, for sparse coding, it is possible to obtain more compact models without incurring reconstruction errors by simply increasing the dataset size.

TR-2010-05 PReach: A Distributed Explicit State Model Checker, April 06, 2010 Flavio M. de Paula, Brad Bingham, Jesse Bingham, John Erickson, Mark Reitblatt and Gaurav Singh, 4 pages

We present PReach, a distributed explicit state model checker based on Murphi. PReach is implemented in the concurrent functional language Erlang. This allowed a clean and simple implementation, with the core algorithms under 1000 lines of code. Additionally, the PReach implementation is targeted to deal with very large models. PReach is able to check an industrial cache coherence protocol with approximately 30 billion states. To our knowledge, this is the largest number published to date for a distributed explicit state model checker.
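
A minimal single-machine sketch of the explicit-state reachability computation that PReach distributes (the toy transition system and invariant are illustrative assumptions, not a Murphi model): breadth-first exploration of the state graph with a visited set, checking an invariant at every reachable state.

    from collections import deque

    def successors(state):
        # Toy model: two counters bounded by 3, with increment and reset actions.
        x, y = state
        return [(min(x + 1, 3), y), (x, min(y + 1, 3)), (0, y), (x, 0)]

    def invariant(state):
        x, y = state
        return x + y <= 6          # trivially true here; a violation would be reported

    def check(initial):
        visited, frontier = {initial}, deque([initial])
        while frontier:
            s = frontier.popleft()
            if not invariant(s):
                return f"violation at {s}"
            for t in successors(s):
                if t not in visited:
                    visited.add(t)
                    frontier.append(t)
        return f"invariant holds over {len(visited)} reachable states"

    print(check((0, 0)))           # -> invariant holds over 16 reachable states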

TR-2010-06 Where do priors and causal models come from? An experimental design perspective, April 07, 2010 H. Kueck and N. de Freitas, 12 pages

(Abstract not available on-line)

TR-2010-07 A Mixed Finite Element Method with Exactly Divergence-free Velocities for Incompressible Magnetohydrodynamics, May 21, 2010 Chen Greif, Dan Li, Dominik Schoetzau and Xiaoxi Wei, 33 pages

We introduce and analyze a mixed finite element method for the numerical discretization of a stationary incompressible magnetohydrodynamics problem, in two and three dimensions. The velocity field is discretized using divergence-conforming Brezzi-Douglas-Marini (BDM) elements and the magnetic field is approximated by curl-conforming N\'{e}d\'{e}lec elements. The $H^1$-continuity of the velocity field is enforced by a DG approach. A central feature of the method is that it produces exactly divergence-free velocity approximations, and captures the strongest magnetic singularities. We prove that the energy norm error is convergent in the mesh size in general Lipschitz polyhedra under minimal regularity assumptions, and derive nearly optimal a-priori error estimates for the two-dimensional case. We present a comprehensive set of numerical experiments, which indicate optimal convergence of the proposed method for two-dimensional as well as three-dimensional problems.

TR-2010-08 Preconditioning Iterative Methods for the Optimal Control of the Stokes Equations, June 22, 2010 Tyrone Rees and Andrew J. Wathen, 22 pages

Solving problems regarding the optimal control of partial differential equations (PDEs) -- also known as PDE-constrained optimization -- is a frontier area of numerical analysis. Of particular interest is the problem of flow control, where one would like to effect some desired flow by exerting, for example, an external force. The bottleneck in many current algorithms is the solution of the optimality system -- a system of equations in saddle point form that is usually very large and ill-conditioned. In this paper we describe two preconditioners -- a block-diagonal preconditioner for the minimal residual method and a block-lower triangular preconditioner for a non-standard conjugate gradient method -- which can be effective when applied to such problems where the PDEs are the Stokes equations. We consider only distributed control here, although other problems -- for example boundary control -- could be treated in the same way. We give numerical results, and compare these with those obtained by solving the equivalent forward problem using similar techniques.
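
For context, and with notation assumed here rather than taken from the paper, the discretized optimality system has the generic saddle point block structure
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]
and the two preconditioners described above can be sketched as
\[
\mathcal{P}_{\mathrm{diag}} = \begin{pmatrix} \widehat{A} & 0 \\ 0 & \widehat{S} \end{pmatrix},
\qquad
\mathcal{P}_{\mathrm{tri}} = \begin{pmatrix} \widehat{A} & 0 \\ B & -\widehat{S} \end{pmatrix},
\]
where $\widehat{A}$ and $\widehat{S}$ are inexpensive approximations of $A$ and of the Schur complement $S = B A^{-1} B^{T}$.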

TR-2010-10 Sequential Model-Based Optimization for General Algorithm Configuration (extended version), October 13, 2010 Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown, 24 pages

State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.
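
A minimal sketch of the sequential model-based optimization loop described above (not the authors' SMAC code; the target function, its two numerical parameters, and the greedy candidate selection are illustrative simplifications of the expected-improvement criterion used in practice): fit a regression model of performance over parameter settings, use it to pick a promising configuration, evaluate it, and repeat.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def run_target(cfg):                      # stand-in for running the real solver
        x, y = cfg
        return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.01 * rng.standard_normal()

    def random_configs(n):
        return rng.random((n, 2))             # two numerical parameters in [0, 1]

    # Initial design.
    X = random_configs(5)
    costs = np.array([run_target(c) for c in X])

    for _ in range(20):
        model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, costs)
        candidates = random_configs(500)
        best = candidates[np.argmin(model.predict(candidates))]   # greedy pick; SMAC uses
        X = np.vstack([X, best])                                   # an acquisition criterion
        costs = np.append(costs, run_target(best))

    print("best configuration found:", X[np.argmin(costs)])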

TR-2010-11 A Guide to Visual Multi-Level Interface Design From Synthesis of Empirical Study Evidence, October 21, 2010 Heidi Lam and Tamara Munzner, 57 pages

Displaying multiple levels of data visually has been proposed to address the challenge of limited screen space. We review 22 existing multi-level interface studies and cast findings into a four-point decision tree: (1) When are multi-level displays useful? (2) What should the higher visual levels display? (3) Should the visual levels be displayed simultaneously? (4) Should the visual levels be embedded, or separated? Our analysis resulted in three design guidelines: (1) display and data levels should match; (2) high visual levels should only display task-relevant information; (3) simultaneous display, rather than temporal switching, is suitable for tasks with multi-level answers.

TR-2010-12 Determining Relevancy: How Software Developers Determine Relevant Information in Feeds, December 06, 2010 Thomas Fritz and Gail C. Murphy, 4 pages

Finding relevant information within the vast amount of information exchanged via streams, such as provided by Twitter or Facebook, is difficult. Previous research into this problem has largely focused on recommending relevant information based on topicality. By not considering individual and situational factors that can affect whether information is relevant, these approaches fall short. Through a formative, interview-based study, we explored how five software developers from a team determined relevancy of items in two kinds of project news feeds. We identified four factors that the developers used to help determine relevancy and found that placement of items in source code and team contexts can ease the determination of relevancy.

TR-2010-13 Exploring older adults' needs and preferences in learning to use mobile computer devices, December 21, 2010 Rock Leung, Joanna McGrenere, Peter Graf and Vilia Ingriany, 31 pages

Older adults have difficulty using and learning to use mobile phones, in part because the displays are too small for providing effective interactive help. We were interested in augmenting the small phone display with a larger display to support older adults’ learning process, but it was not clear how to apply existing guidelines to design such an augmented display system. In this technical report, we present a comprehensive survey study of 131 respondents we conducted to better understand the learning needs and preferences that are unique to older adults. The results showed, among other things, that when learning, older adults want to learn to perform task steps and prefer using manuals.

TR-2011-01 Hydra-MIP: Automated Algorithm Configuration and Selection for Mixed Integer Programming, April 04, 2011 Lin Xu, Frank Hutter, Holger Hoos and Kevin Leyton-Brown, 15 pages

State-of-the-art mixed integer programming (MIP) solvers are highly parameterized. For heterogeneous and a priori unknown instance distributions, no single parameter configuration generally achieves consistently strong performance, and hence it is useful to select from a portfolio of different solvers. HYDRA is a recent method for using automated algorithm configuration to derive multiple configurations of a single parameterized algorithm for use with portfolio-based selection. This paper shows that, leveraging two key innovations, HYDRA can achieve strong performance for MIP. First, we describe a new algorithm selection approach based on classification with a non-uniform loss function, which significantly improves the performance of algorithm selection for MIP (and SAT). Second, by modifying HYDRA’s method for selecting candidate configurations, we obtain better performance as a function of training time.
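
A minimal sketch of algorithm selection by classification with a non-uniform loss, in the spirit of the first innovation described above (the instance features, runtimes, and decision-tree model are illustrative assumptions, not the authors' implementation): each training example is weighted by how much runtime is at stake if the selector picks the wrong solver.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    features = rng.random((200, 4))                    # per-instance features
    runtime_a = 1.0 + 5.0 * features[:, 0] + rng.random(200)
    runtime_b = 1.0 + 5.0 * features[:, 1] + rng.random(200)

    labels = (runtime_b < runtime_a).astype(int)       # 1 -> pick solver B
    weights = np.abs(runtime_a - runtime_b)            # non-uniform loss: big gaps matter more

    selector = DecisionTreeClassifier(max_depth=4, random_state=0)
    selector.fit(features, labels, sample_weight=weights)

    pick_b = selector.predict(features).astype(bool)
    chosen = np.where(pick_b, runtime_b, runtime_a)
    print("oracle:", np.minimum(runtime_a, runtime_b).mean(),
          "selector:", chosen.mean(),
          "single best:", min(runtime_a.mean(), runtime_b.mean()))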

TR-2011-02 Displacement Interpolation Using Lagrangian Mass Transport, April 08, 2011 Nicolas Bonneel, Michiel van de Panne, Sylvain Paris and Wolfgang Heidrich, 10 pages

Interpolation between pairs of values, typically vectors, is a fundamental operation in many computer graphics applications. In some cases simple linear interpolation yields meaningful results without requiring domain knowledge. However, interpolation between pairs of distributions or pairs of functions often demands more care because features may exhibit translational motion between exemplars. This property is not captured by linear interpolation. This paper develops the use of displacement interpolation for this class of problem, which provides a generic method for interpolating between distributions or functions based on advection instead of blending. The functions can be non-uniformly sampled, high-dimensional, and defined on non-Euclidean manifolds, e.g., spheres and tori. Our method decomposes distributions or functions into sums of radial basis functions (RBFs). We solve a mass transport problem to pair the RBFs and apply partial transport to obtain the interpolated function. We describe practical methods for computing the RBF decomposition and solving the transport problem. We demonstrate the interpolation approach on synthetic examples, BRDFs, color distributions, environment maps, stipple patterns, and value functions.
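
A minimal 1-D sketch of the pipeline, under strong simplifying assumptions (equal numbers of equal-mass Gaussian RBFs, so the partial-transport step reduces to an optimal assignment of centers; not the paper's general method): pair the RBFs by solving a transport problem on their centers, then advect the paired centers to obtain an intermediate distribution.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def gaussian(x, c, s=0.05):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    # Two distributions, each already decomposed into equal-weight RBF centers.
    centers_a = np.array([0.10, 0.15, 0.20])
    centers_b = np.array([0.70, 0.80, 0.95])

    # Transport step: pair RBFs so the total squared center displacement is minimal.
    cost = (centers_a[:, None] - centers_b[None, :]) ** 2
    rows, cols = linear_sum_assignment(cost)

    def interpolate(t, x):
        """Displacement interpolation at time t in [0, 1]: advect paired centers."""
        moved = (1 - t) * centers_a[rows] + t * centers_b[cols]
        return sum(gaussian(x, c) for c in moved)

    x = np.linspace(0, 1, 200)
    mid = interpolate(0.5, x)   # features travel between exemplars instead of cross-fading
    print("peak of the halfway interpolant near x =", x[np.argmax(mid)])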

TR-2011-03 SinfoniaEx : Fault-Tolerant Distributed Transactional Memory, April 10, 2011 Mahdi Tayarani Najaran and Charles Krasic, 4 pages

We present SinfoniaEx, a powerful paradigm for designing distributed applications. SinfoniaEx is an extension to Sinfonia, a service that provides fault-tolerant atomic access to distributed memory, and is suitable for cloud environments. SinfoniaEx is built on the same design principles as Sinfonia, while extending the interface to allow applications to share system resources, i.e. memory nodes.

TR-2011-04 Applying Interruption Techniques from the HCI Literature to Portable Music Players, April 21, 2011 Amirhossein Mehrabian and Joanna McGrenere, 40 pages

(Abstract not available on-line)

TR-2011-05 Closed-Form Multigrid Smoothing Factors for Lexicographic Gauss-Seidel, May 3, 2011 L. Robert Hocking and Chen Greif, 15 pages

This paper presents a unified framework for deriving analytical formulas for smoothing factors in arbitrary dimensions, under certain simplifying assumptions. To derive these expressions we rely on complex analysis and geometric considerations, using the maximum modulus principle and M\"obius transformations. We restrict our attention to pointwise and block lexicographic Gauss-Seidel smoothers on a $d$-dimensional uniform mesh, where the computational molecule of the associated discrete operator forms a $2d+1$ point star. Our results apply to any number of spatial dimensions, and are applicable to high-dimensional versions of a few common model problems with constant coefficients, including the Poisson and anisotropic diffusion equations and a special case of the convection-diffusion equation. We show that our formulas, exact under the simplifying assumptions of Local Fourier Analysis, form tight upper bounds for the asymptotic convergence of geometric multigrid in practice. We also show that there are asymmetric cases where lexicographic Gauss-Seidel smoothing outperforms red-black Gauss-Seidel smoothing; this occurs in particular for certain model convection-diffusion equations with high mesh Reynolds numbers.
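
For context (a standard Local Fourier Analysis definition, not one of the paper's closed-form expressions): writing $\hat S(\boldsymbol\theta)$ for the Fourier symbol of the smoother at frequency $\boldsymbol\theta \in (-\pi,\pi]^{d}$, the smoothing factor is the worst-case damping of the high-frequency error modes,
\[
\mu \;=\; \max\Bigl\{\, \bigl|\hat S(\boldsymbol\theta)\bigr| \;:\; \boldsymbol\theta \in (-\pi,\pi]^{d},\ \max_{1\le i\le d} |\theta_i| \ge \tfrac{\pi}{2} \Bigr\},
\]
and the paper's contribution is closed-form expressions for this quantity for lexicographic Gauss-Seidel under the stated assumptions.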

TR-2011-06 Audio Stream Bookmarking with a Wristband Controller: Exploring the Role of Explicit Commands in an Implicit Control Loop, May 13, 2011 Jih-Shiang Chang, Joanna McGrenere and Karon E. MacLean, 14 pages

This project is a preliminary and informal evaluation of explicit commands and a proposed implicit control loop. It focuses on the design methods for explicit commands and the relationship between implicit and explicit control channels. Attentional bookmarking (interruption driven) for spoken audio streams (e.g., audio books, podcasts) is chosen as a sample use case. The full project comprises information gathering about the use case, brainstorming and designing a controller prototype, and a preliminary field evaluation of the controller. Several design implications for explicit commands and implicit control loops are suggested.

TR-2011-08 Semi-supervised Learning for Identifying Players from Broadcast Sport Videos with Play-by-Play Information, July 22, 2011 Wei-Lwun Lu, Jo-Anne Ting, James J. Little and Kevin P. Murphy, 8 pages

Tracking and identifying players in sports videos filmed with a single moving pan-tilt-zoom camera has many applications, but it is also a challenging problem due to fast camera motions, unpredictable player movements, and unreliable visual features. Recently, [26] introduced a system to tackle this problem based on conditional random fields. However, their system requires a large number of labeled images for training. In this paper, we take advantage of weakly labeled data in the form of publicly available play-by-play information. This, together with semi-supervised learning, allows us to train an identification system with very little supervision. Experiments show that by using only 1500 labels with the play-by-play information in a dataset of 75000 images, we can train a system whose accuracy is comparable to that of a fully supervised model trained using all 75000 labels.

TR-2011-09 SinExTree : Scalable Multi-Attribute Queries through Distributed Spatial Partitioning, July 22, 2011 Mahdi Tayarani Najaran, Charles Krasic and Norman C. Hutchinson, 7 pages

In this paper we present SinExTree, a spatial partitioning tree designed for scalable low-latency information storage and retrieval. SinExTree is built over a Sinfonia-like service that provides atomic access to distributed memory suitable for a cloud environment. An n-dimensional SinExTree provides key/value storage, where each key has n attributes, and supports general application-defined queries over multiple attributes.

TR-2011-10 Ephemeral Paths: Gradual Fade-In as a Visual Cue for Subgraph Highlighting, July 28, 2011 Jessica Dawson, Joanna McGrenere, Tamara Munzner, Karyn Moffatt and Leah Findlater, 11 pages

Ephemeral highlighting uses the temporal dimension to draw the user's attention to specific interface elements through a combination of abrupt onset and gradual fade-in. This technique has shown promise in adaptive interfaces, but has not been tested as a dynamic visual encoding to support information visualization. We conducted a study with 32 participants using subgraph highlighting to support path tracing in node-link graphs, a task abstracting a large class of visual queries. The study compared multiple highlighting techniques, including traditional static highlighting (using color and size), ephemeral highlighting (where the subgraph is emphasized by appearing first, and the rest of the graph fades in gradually), and a combination of static and ephemeral. The combination was the most effective visual cue: it always performed at least as well or better than static highlighting. Ephemeral on its own was sometimes faster than the combined technique, but it was also more error prone. Self-reported workload and preference followed these performance results.

TR-2011-11 Local Naive Bayes Nearest Neighbor for Image Classification, November 30, 2011 Sancho McCann and David G. Lowe, 10 pages

We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN image classification algorithm that increases classification accuracy and improves its ability to scale to large numbers of object classes. The key observation is that only the classes represented in the local neighborhood of a descriptor contribute significantly and reliably to their posterior probability estimates. Instead of maintaining a separate search structure for each class, we merge all of the reference data together into one search structure, allowing quick identification of a descriptor's local neighborhood. We show an increase in classification accuracy when we ignore adjustments to the more distant classes, and show that the run time grows with the log of the number of classes rather than linearly, as in the original method. This gives a 100 times speed-up over the original method on the Caltech 256 dataset. We also provide the first head-to-head comparison of NBNN against spatial pyramid methods using a common set of input features. We show that local NBNN outperforms all previous NBNN based methods and the original spatial pyramid model. However, we find that local NBNN, while competitive with state-of-the-art spatial pyramid methods that use local soft assignment and max-pooling, does not beat them.
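
A minimal sketch of the local adjustment step (an illustration built on a k-d tree, not the authors' implementation): all reference descriptors are merged into one search structure, and for each query descriptor only the classes present among its k nearest neighbours are updated, with the (k+1)-th distance serving as a shared background distance.

    import numpy as np
    from scipy.spatial import cKDTree

    def build_index(descriptors_by_class):
        labels, points = [], []
        for cls, descs in descriptors_by_class.items():
            labels += [cls] * len(descs)
            points.append(descs)
        return cKDTree(np.vstack(points)), np.array(labels)

    def classify(query_descriptors, tree, labels, k=10):
        totals = {c: 0.0 for c in np.unique(labels)}
        for d in query_descriptors:
            dists, idx = tree.query(d, k=k + 1)
            background = dists[-1] ** 2                   # distance to the (k+1)-th neighbor
            local_labels = labels[idx[:-1]]
            for c in np.unique(local_labels):             # only locally present classes
                nearest_c = np.min(dists[:-1][local_labels == c]) ** 2
                totals[c] += nearest_c - background       # negative credit toward class c
        return min(totals, key=totals.get)

    # Tiny synthetic usage: two classes of 64-D descriptors around different means.
    rng = np.random.default_rng(0)
    refs = {"cat": rng.normal(0.0, 1.0, (200, 64)), "dog": rng.normal(0.5, 1.0, (200, 64))}
    tree, labels = build_index(refs)
    print(classify(rng.normal(0.0, 1.0, (50, 64)), tree, labels))   # expected: "cat"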

TR-2011-12 Multi-preconditioned GMRES, December 23, 2011 Chen Greif, Tyrone Rees and Daniel Szyld, 24 pages

Standard Krylov subspace methods only allow the user to choose a single preconditioner, although in many situations there may be a number of possibilities. Here we describe an extension of GMRES, multi-preconditioned GMRES, which allows the use of more than one preconditioner. We give some theoretical results, propose a practical algorithm, and present numerical results from problems in domain decomposition and PDE-constrained optimization. These numerical experiments illustrate the applicability and potential of the multi-preconditioned approach.
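
Schematically, and with notation assumed here rather than quoted from the paper: given preconditioners $P_1,\dots,P_t$ and initial residual $r_0$, the fully multi-preconditioned variant builds its search space by applying every preconditioner at every step,
\[
Z_{1} = \bigl[\,P_1^{-1} r_0,\ \dots,\ P_t^{-1} r_0\,\bigr],
\qquad
Z_{k+1} = \bigl[\,P_1^{-1} A Z_{k},\ \dots,\ P_t^{-1} A Z_{k}\,\bigr],
\]
with the $k$-th iterate sought in $x_0 + \operatorname{span}\{Z_1,\dots,Z_k\}$. Because this space grows quickly with the number of preconditioners, a selective variant that retains only part of each $Z_k$ is the practical choice.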

TR-2012-01 Hierarchical Clustering and Tagging of Mostly Disconnected Data, April 30, 2012 Stephen Ingram, Tamara Munzner and Jonathan Stray, 10 pages

We define the document set exploration task as the production of an application-specific categorization. Computers can help by producing visualizations of the semantic relationships in the corpus, but the approach of directly visualizing the vector space representation of the document set via multidimensional scaling (MDS) algorithms fails to reveal most of the structure because such datasets are mostly disconnected, that is, the preponderance of inter-item distances are large and roughly equal. Interpreting these large distances as disconnection between items yields a decomposition of the dataset into distinct components with small inter-item distances, that is, clusters. We propose the Disconnected Component Tree (DiscoTree) as a data structure that encapsulates the hierarchical relationship between components as the disconnection threshold changes, and present a sampling-based algorithm for efficiently computing the DiscoTree in O(N) time where N is the number of items. We present the MoDiscoTag application which combines the DiscoTree with an MDS view and a tagging system, and show that it succeeds in resolving structure that is not visible using previous dimensionality reduction methods. We validate our approach with real-world datasets of WikiLeaks Cables and War Logs from the journalism domain.
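
A minimal brute-force sketch of the underlying idea (quadratic in the number of items, unlike the paper's O(N) sampling-based algorithm; the synthetic data are an illustrative assumption): item pairs closer than the disconnection threshold are treated as connected, and the components are recomputed as the threshold decreases, which is the splitting behaviour the DiscoTree encodes hierarchically.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components
    from scipy.spatial.distance import pdist, squareform

    def components_at(points, threshold):
        d = squareform(pdist(points))
        adjacency = csr_matrix(d <= threshold)
        n, comp_labels = connected_components(adjacency, directed=False)
        return n, comp_labels

    rng = np.random.default_rng(0)
    points = np.vstack([rng.normal(0, 0.1, (30, 5)),      # three tight clusters,
                        rng.normal(3, 0.1, (30, 5)),      # mutually far apart: a
                        rng.normal(10, 0.1, (30, 5))])    # "mostly disconnected" set

    for t in [20.0, 10.0, 5.0]:                           # decreasing disconnection threshold
        n, _ = components_at(points, t)
        print(f"threshold {t:>5}: {n} component(s)")      # 1, then 2, then 3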

TR-2012-02 Ensuring Safety of Nonlinear Sampled Data Systems through Reachability (extended version), March 01, 2012 Ian M. Mitchell, Mo Chen and Meeko Oishi, 20 pages

In sampled data systems the controller receives periodically sampled state feedback about the evolution of a continuous time plant, and must choose a constant control signal to apply between these updates; however, unlike purely discrete time models, the evolution of the plant between updates is important. In contrast, existing reachability algorithms for systems with nonlinear dynamics---based on Hamilton-Jacobi equations or viability theory---assume continuous time state feedback and the ability to instantaneously adjust the input signal. In this paper we describe an algorithm for determining an implicit surface representation of minimal backwards reach tubes for nonlinear sampled data systems, and then construct switched, set-valued feedback controllers which are permissive but ensure safety for such systems. The reachability algorithm is adapted from the Hamilton-Jacobi formulation proposed in Ding & Tomlin [2010]. We show that this formulation is conservative for sampled data systems. We implement the algorithm using approximation schemes from level set methods, and demonstrate it on a modified double integrator.

TR-2012-03 Dimensionality Reduction in the Wild: Gaps and Guidance, June 26, 2012 Michael Sedlmair, Matthew Brehmer, Stephen Ingram and Tamara Munzner, 10 pages

Despite an abundance of technical literature on dimension reduction (DR), our understanding of how real data analysts are using DR techniques and what problems they face remains largely incomplete. In this paper, we contribute the first systematic and broad analysis of DR usage by a sample of real data analysts, along with their needs and problems. We present the results of a two-year qualitative research endeavor, in which we iteratively collected and analyzed a rich corpus of data in the spirit of grounded theory. We interviewed 24 data analysts from different domains and surveyed papers depicting applications of DR. The result is a descriptive taxonomy of DR usage, and concrete real-world usage examples summarized in terms of this taxonomy. We also identify seven gaps where user DR needs are unfulfilled by currently available techniques, and three mismatches where the users do not need offered techniques. At the heart of our taxonomy is a task classification that differentiates between abstract tasks related to point clusters and those related to dimensions. The taxonomy and usage examples are intended to provide a better descriptive understanding of real data analysts’ practices and needs with regards to DR. The gaps are intended as prescriptive pointers to future research directions, with the most important gaps being a lack of support for users without expertise in the mathematics of DR, and an absence of DR techniques for comparing explicit groups of dimensions or for relating non-linear embeddings to original dimensions.

TR-2012-04 Efficient Extraction of Ontologies from Domain Specific Text Corpora, July 26, 2012 Tianyu Li, Pirooz Chubak, Laks V.S. Lakshmanan and Rachel Pottinger, 12 pages

Extracting ontological relationships (e.g., isa and hasa) from free-text repositories (e.g., engineering documents and instruction manuals) can improve users’ queries, as well as benefit applications built for these domains. Current methods to extract ontologies from text usually miss many meaningful relationships because they either concentrate on single-word terms and short phrases or neglect syntactic relationships between concepts in sentences. We propose a novel pattern-based algorithm to find ontological relationships between complex concepts by exploiting parsing information to extract multi-word concepts and nested concepts. Our procedure is iterative: we tailor the constrained sequential pattern mining framework to discover new patterns. Our experiments on three real data sets show that our algorithm consistently and significantly outperforms previous representative ontology extraction algorithms.

TR-2012-06 Learning Reduced-Order Feedback Policies for Motion Skills, September 11, 2012 Kai Ding, Libin Liu, Michiel van de Panne and KangKang Yin, 10 pages

We introduce a method for learning low-dimensional linear feedback strategies for the control of physics-based animated characters. Once learned, these allow simulated characters to respond to changes in the environment and changes in goals. The approach is based on policy search in the space of reduced-order linear output feedback matrices. We show that these can be used to replace or further reduce manually-designed state and action abstractions. The approach is sufficiently general to allow for the development of unconventional feedback loops, such as feedback based on ground reaction forces to achieve robust in-place balancing and robust walking. Results are demonstrated for a mix of 2D and 3D systems, including tilting-platform balancing, walking, running, rolling, targeted kicks, and several types of ball-hitting tasks.
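
A minimal sketch of policy search over a reduced-order linear feedback matrix (the double-integrator task, cost terms, and random-search procedure are illustrative assumptions, not the paper's characters or optimizer): the policy has only two gains, and search simply keeps the gains with the lowest rollout cost.

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.02

    def rollout_cost(K, steps=300):
        s = np.array([1.0, 0.0])                       # initial offset, zero velocity
        cost = 0.0
        for _ in range(steps):
            u = float(np.clip(-K @ s, -10.0, 10.0))    # reduced-order linear feedback
            s = s + dt * np.array([s[1], u])           # double-integrator dynamics
            cost += s @ s + 0.01 * u * u
        return cost

    best_K, best_cost = None, np.inf
    for _ in range(500):                               # simple random policy search
        K = rng.uniform(0.0, 20.0, size=2)
        c = rollout_cost(K)
        if c < best_cost:
            best_K, best_cost = K, c

    print("best gains:", best_K, "cost:", best_cost)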

TR-2013-02 The Beta Mesh: a New Approach for Temporally Coherent Particle Skinning, May 20, 2013 Hagit Schechter and Robert Bridson, 8 pages

We present a novel surface reconstruction approach for generating surfaces from animated particle data, targeting temporally coherent surface reconstruction that can also approximate smooth surfaces and capture fine details. Our beta mesh algorithm uses the union of balls as a building block to reach temporal coherence. First we construct mesh vertices from sphere intersection points, and declare faces on the sphere surfaces guided by connectivity information derived from the alpha mesh. Then we smooth the beta vertex positions to reflect smooth surfaces, and subdivide the mesh using weighted centroids. We also highlight the strengths and weaknesses of the related alpha mesh for animation purposes, and discuss ways of leveraging its qualities. Open issues are discussed to outline what is still lacking in order to make our algorithm a ready-to-use surfacing technique. Nevertheless, we advocate using the beta mesh approach in future surface reconstruction research to benefit from its unique properties.


If you have any questions or comments regarding this page please send mail to help@cs.ubc.ca.