1989 UBC CS Technical Report Abstracts

TR-89-01 Organization of Smooth Image Curves at Multiple Scales, January 1989 David G. Lowe

While edge detection is an important first step for many vision systems, the linked lists of edge points produced by most existing edge detectors lack the higher level of curve description needed for many visual tasks. For example, they do not specify the tangent direction or curvature of an edge or the locations of tangent discontinuities. In this paper, a method is presented for describing linked edge points at a range of scales by selecting intervals of the curve and scales of smoothing that are most likely to represent the underlying structure of the scene. This multi-scale analysis of curves is complementary to any multi-scale detection of the original edge points. A solution is presented for the problem of shrinkage of curves during Gaussian smoothing, which has been a significant impediment to the use of smoothing for practical curve description. The curve segmentation method is based on a measure of smoothness minimizing the third derivative of Gaussian convolution. The smoothness measure is used to identify discontinuities of curve tangents simultaneously with selecting the appropriate scale of smoothing. The averaging of point locations during smoothing provides for accurate subpixel curve localization. This curve description method can be implemented efficiently and should prove practical for a wide range of applications including correspondence matching, perceptual grouping, and model-based recognition.
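
As a rough illustration of the smoothing and smoothness-measure idea (a minimal sketch, not the report's algorithm; the kernel construction and the third-difference score below are simplifications of our own):

```python
# Minimal sketch: smooth a linked edge curve with a Gaussian and score it by
# the magnitude of the third derivative of the smoothed coordinates.
import numpy as np

def gaussian_kernel(sigma, radius=None):
    radius = radius or int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_curve(points, sigma):
    """points: (N, 2) array of linked edge point coordinates."""
    k = gaussian_kernel(sigma)
    x = np.convolve(points[:, 0], k, mode="same")
    y = np.convolve(points[:, 1], k, mode="same")
    return np.stack([x, y], axis=1)

def third_derivative_energy(curve):
    """Sum of squared third differences -- a stand-in smoothness measure."""
    d3 = np.diff(curve, n=3, axis=0)
    return float((d3**2).sum())

if __name__ == "__main__":
    t = np.linspace(0, np.pi, 200)
    noisy = np.stack([t, np.sin(t) + 0.01 * np.random.randn(200)], axis=1)
    for sigma in (1.0, 2.0, 4.0):
        print(sigma, third_derivative_energy(smooth_curve(noisy, sigma)))
```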

TR-89-02 Using Deficiency Measure For Tiebreaking the Minimum Degree Algorithm, January 1989 Ian A. Cavers

The minimum degree algorithm is known as an effective scheme for identifying a fill reduced ordering for symmetric, positive definite, sparse linear systems. Although the original algorithm has been enhanced to improve the efficiency of its implementation, ties between minimum degree elimination candidates are still arbitrarily broken. For many systems, the fill levels of orderings produced by the minimum degree algorithm are very sensitive to the precise manner in which these ties are resolved. This paper introduces several tiebreaking schemes for the minimum degree algorithm. Emphasis is placed upon a tiebreaking strategy based on the deficiency of minimum degree elimination candidates, which can consistently identify low fill orderings for a wide spectrum of test problems. The tiebreaking strategies are integrated into a quotient graph form of the minimum degree algorithm with uneliminated supernodes. Implementations of the enhanced forms of the algorithm are tested on a wide variety of sparse systems to investigate the potential of the tiebreaking strategies.
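
A naive sketch of the tiebreaking idea follows; it uses explicit neighbour sets rather than the quotient-graph representation with supernodes used in the report, and all names are illustrative:

```python
# Greedy symbolic elimination: pick a minimum-degree vertex, breaking ties by
# deficiency, i.e. the number of fill edges its elimination would introduce.
from itertools import combinations

def deficiency(adj, v):
    nbrs = adj[v]
    return sum(1 for a, b in combinations(nbrs, 2) if b not in adj[a])

def min_degree_order(adj):
    """adj: dict vertex -> set of neighbours (modified in place)."""
    order = []
    while adj:
        # minimum degree first, deficiency as the tiebreaker
        v = min(adj, key=lambda u: (len(adj[u]), deficiency(adj, u)))
        nbrs = adj.pop(v)
        for a, b in combinations(nbrs, 2):   # add fill edges
            adj[a].add(b); adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
        order.append(v)
    return order

graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}
print(min_degree_order({v: set(n) for v, n in graph.items()}))
```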

TR-89-03 A New Approach To Test Sequence Derivation Based on External Behavior Expression (EBE), January 1989 Jianping Wu and Samuel T. Chanson

This paper presents a new approach to test sequence derivation from formal protocol specifications for protocol conformance testing. The approach is based on a model of External Behaviour Expression (EBE) which specifies only the external behaviour of a protocol in terms of the input/output sequences and their logical (function and predicate) relations, and can be obtained from formal protocol specifications in either Estelle or LOTOS. A basic test derivation theory is defined for the purpose of formalizing test sequence derivation strategies. Based on the EBE of a protocol, a test sequence derivation method is proposed to identify associations between inputs and outputs through the interaction paths and their I/O subpaths. Generic Test Cases generated from these I/O subpaths are based on specific testing purposes. Abstract Test Cases are selected in terms of particular test methods and additional requirements. Comparison to other existing schemes shows the method proposed here is simple and concise, and the resulting set of test sequences is complete and effective. It is our belief that this approach to test sequence derivation can provide the basis of a formalized framework for protocol conformance testing.

TR-89-04 Explanation and Prediction: An Architecture for Default and Abductive Reasoning, March 1989 David Poole

Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.
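
A toy sketch of the explanation half of the framework, under an assumed encoding of rules, facts, and assumable hypotheses (not the paper's architecture or its algorithms):

```python
# Back-chaining abduction sketch: an explanation of an observation is a set of
# assumables from which, together with the facts and rules, it follows.
def explanations(goal, rules, facts, assumables):
    """Yield sets of assumables sufficient to derive goal by back-chaining."""
    if goal in facts:
        yield set()
        return
    if goal in assumables:
        yield {goal}
    for body in rules.get(goal, []):
        def expand(i, used):
            if i == len(body):
                yield used
                return
            for sub in explanations(body[i], rules, facts, assumables):
                yield from expand(i + 1, used | sub)
        yield from expand(0, set())

rules = {"wet_grass": [["rained"], ["sprinkler_on"]],
         "rained": [["cloudy_last_night"]]}
facts = {"cloudy_last_night"}
assumables = {"sprinkler_on", "rained"}
print(list(explanations("wet_grass", rules, facts, assumables)))
```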

TR-89-05 Randomized Distributed Computing on Rings, January 1989 Lisa Higham

The communication complexity of fundamental problems in distributed computing on an asynchronous ring is examined from both the algorithmic and the lower bound perspective. A detailed study is made of the effect on complexity of a number of assumptions about the algorithms. Randomization is shown to influence both the computability and complexity of several problems. Communication complexity is also shown to exhibit varying degrees of sensitivity to additional parameters including admissibility of error, kind of error, knowledge of ring size, termination requirements, and the existence of identifiers.

A unified collection of formal models of distributed computation on asynchronous rings is developed which captures the essential characteristics of a spectrum of distributed algorithms --- those that are error free (deterministic, Las Vegas, and nondeterministic) and those that err with small probability (Monte Carlo and nondeterministic/probabilistic). The nondeterministic and nondeterministic/probabilistic models are introduced as natural generalizations of the Las Vegas and Monte Carlo models respectively, and prove useful in deriving lower bounds. The unification helps to clarify the essential differences between the progressively more general notions of a distributed algorithm. In addition, the models reveal the sensitivity of various problems to the parameters listed above.

Complexity bounds derived using these models typically vary depending on the type of algorithm being investigated. The lower bounds are complemented by algorithms with matching complexity, while frequently the lower bounds hold on even more powerful models than those required by the algorithms.

Among the algorithms and lower bounds presented are two specific results which stand out because of their relative significance.

1. If $g$ is any nonconstant cyclic function of $n$ variables, then any nondeterministic algorithm for computing $g$ on an anonymous ring of size $n$ has complexity $\Omega (n \sqrt{\log n})$ bits of communication; and there is a nonconstant cyclic boolean function $f$ such that $f$ can be computed by a Las Vegas algorithm in $O (n \sqrt{\log n})$ expected bits of communication on a ring of size $n$.

2. The expected complexity of computing AND (and a number of other natural functions) on a ring of fixed size $n$ in the Monte Carlo model is $\Theta (n \min \{ \log n, \log \log ( 1 / \epsilon ) \} )$ messages and bits, where $\epsilon$ is the allowable probability of error.

TR-89-06 A Completeness Theorem for NaD Set, January 1989 Paul C. Gilmore

(Abstract not available on-line)

TR-89-07 How Many Real Numbers Are There?, August 1989 Paul C. Gilmore

The question posed in the title of this paper is raised by a reexamination of Cantor's diagonal argument. Cantor used the argument in its most general form to prove that no mapping of the natural numbers into the reals could have all the reals as its range. It has subsequently been used in a more specific form to prove, for example, that the computable reals cannot be enumerated by a Turing machine. The distinction between these two forms of the argument can be expressed within a formal logic as a distinction between using the argument with a parameter F, denoting an arbitrary map from the natural numbers to the reals, and with a defined term F, representing a particular defined map.

The setting for the reexamination is a natural deduction based set theory, NaDSet, presented within a Gentzen sequent calculus. The elementary and logical syntax of NaDSet, as well as its semantics, is described in the paper. The logic extends an earlier form by removing a restriction on abstraction, and by replacing first and second order quantifiers by a single quantifier. The logic remains second order, however; this is necessary to avoid an abuse of use and mention that would otherwise arise from the interpretation of atomic sentences.

That there can be doubt about the number of reals is suggested by the failure of the general form of Cantor's diagonal argument in NaDSet. To provide a basis for discussion, a formalization of Godel-Bernays set theory is provided within NaDSet. The subsequent discussion reinforces Skolem's relativistic answer to the question posed.

Although the general form of Cantor's argument fails in NaDSet, a rule of deduction which formalizes the argument for defined maps is derived. A simple application of the rule is provided.

Feferman has argued that a type-free logic is required for the formalization of category theory since no existing logic or set theory permits self-reference of the kind required by the theory. A demonstration is provided elsewhere that category theory can be formalized within NaDSet. An elementary form of the demonstration is provided in the paper by proving a theorem suggested by Feferman: the collection of structures $<A, \oplus_{A}>$, for which $\oplus_{A}$ is a binary, commutative, and associative operation on $A$, is itself such a structure under cartesian product and isomorphism.

TR-89-08 A Logic-Based Analysis of Dempster Shafer Theory, December 1989 Gregory M. Provan

We formulate Dempster Shafer Theory in terms of Propositional Logic, using the implicit notion of probability underlying Dempster Shafer Theory. Dempster Shafer theory can be modeled in terms of propositional logic by the tuple $(\Sigma , \varrho )$, where $\Sigma $ is a set of propositional clauses and $\varrho $ is an assignment of measure to each clause $\Sigma_i \in \Sigma $. We show that the disjunction of minimal support clauses for a clause $\Sigma_{i}$ with respect to a set $\Sigma $ of propositional clauses, $ \xi ( \Sigma_{i}, \Sigma )$, is a symbolic representation of the Dempster Shafer Belief function for $\Sigma_{i}$. The combination of Belief functions using Dempster's Rule of Combination corresponds to a combination of the corresponding support clauses. The disjointness of the Boolean formulae representing DS Belief functions is shown to be necessary. Methods of computing disjoint formulae using Network Reliability techniques are discussed.

In addition, we explore the computational complexity of deriving Dempster Shafer Belief functions, including that of the logic-based methods which are the focus of this paper. Because of intractability even for moderately sized problem instances, we propose the use of efficient approximation methods for such computations. Finally, we examine implementations of Dempster Shafer theory, based on domain restrictions of DS theory, hypertree embeddings, and the ATMS.
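
For reference, a small generic sketch of Dempster's Rule of Combination over focal elements represented as sets; this is the standard set-based formulation, not the paper's clause-based symbolic representation:

```python
# Combine two mass assignments and read off a Belief value for a hypothesis.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset (focal element) -> mass."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    norm = 1.0 - conflict
    return {s: w / norm for s, w in raw.items()}

def belief(m, hypothesis):
    return sum(w for s, w in m.items() if s <= hypothesis)

frame = frozenset({"a", "b", "c"})
m1 = {frozenset({"a"}): 0.6, frame: 0.4}
m2 = {frozenset({"a", "b"}): 0.7, frame: 0.3}
print(belief(combine(m1, m2), frozenset({"a"})))
```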

TR-89-10 Cooperative Systems for Perceptual Tasks in a Remote Sensing Environment, January 1989 Alan K. Mackworth

To design and implement knowledge-based systems for perceptual tasks, such as interpreting remotely-sensed data, we must first evaluate the appropriateness of current expert system methodology for these tasks. That evaluation leads to four conclusions which form the basis for the theoretical and practical work described in this paper. The first conclusion is that we should build `cooperative systems' that advise and cooperate with a human interpreter rather than `expert systems' that replace her. The second conclusion is that cooperative systems should place the user and the system in symmetrical roles where each can query the other for facts, rules, explanations and interpretations. The third conclusion is that most current expert system technology is {\em ad hoc}; formal methods based on logic lead to more powerful and better understood systems that are just as efficient when implemented using modern Prolog technology. The fourth conclusion is that, although the first three conclusions can arguably be accepted for high-level rule-based symbol-manipulation tasks, there are difficulties in accepting them for perceptual tasks that rely on visual expertise. The rest of the paper describes work on overcoming those difficulties in the remote sensing environment. In particular, the issues of representing and reasoning about image formation, map-based constraints, shape descriptions and the semantics of depiction are discussed with references to theories and prototype systems that address them.

TR-89-11 Tool Box-Based Routines for Macintosh Timing and Display, June 1989 R. Rensink

Pascal routines are described for performing and testing various timing and display operations on Macintosh computers. Millisecond timing of internal operations is described, as is a method to time inputs more accurately than tick timing. Techniques are also presented for placing arbitrary bit-image displays on the screen within one screen refresh. All routines are based on Toolbox procedures applicable to the entire range of Macintosh computers.

TR-89-12 Computer-Vision Update, June 1989 R. M. Haralick, Alan K. Mackworth and S. L. Tanimoto

(Abstract not available on-line)

TR-89-13 A Model-Based Vision System for Manipulator Position Sensing, June 1989 I. Jane Mulligan, Alan K. Mackworth and Lawrence

The task and design requirements for a vision system for manipulator position sensing in a telerobotic system are described. Model-based analysis-by-synthesis techniques offer generally applicable methods with the potential to meet the system's requirement for accurate, fast and reliable results. Edge-based chamfer matching allows efficient computation of a measure, E, of the local difference between the real image and a synthetic image generated from arm and camera models. Gradient descent techniques are used to minimize E by adjusting joint angles. The dependence of each link position on the position of the link preceding it allows the search to be broken down into lower dimensional problems. Intensive exploitation of geometric constraints on the possible position and orientation of manipulator components results in a correct and efficient solution to the problem. Experimental results demonstrate the use of the implemented prototype system to locate the boom, stick and bucket of an excavator, given a single video image.
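
A rough sketch of the error-minimization idea, with a toy one-link model in place of the arm and camera models (all names and constants below are illustrative assumptions, not the prototype system):

```python
# E is a chamfer-style distance between synthetic edge points, generated from a
# candidate joint angle, and real image edge points; E is minimised by a simple
# numerical gradient descent on the angle.
import numpy as np

def chamfer_error(synthetic_pts, real_pts):
    """Mean distance from each synthetic edge point to the nearest real edge point."""
    d = np.linalg.norm(synthetic_pts[:, None, :] - real_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def project_link(base, angle, length=50.0, n=20):
    """Toy 'synthetic image': sample points along a single straight link."""
    t = np.linspace(0.0, length, n)
    return base + np.stack([t * np.cos(angle), t * np.sin(angle)], axis=1)

def fit_angle(real_pts, base, angle0, steps=200, lr=1e-3, h=1e-4):
    angle = angle0
    for _ in range(steps):
        e_plus = chamfer_error(project_link(base, angle + h), real_pts)
        e_minus = chamfer_error(project_link(base, angle - h), real_pts)
        angle -= lr * (e_plus - e_minus) / (2 * h)
    return angle

true_angle = 0.6
real_pts = project_link(np.zeros(2), true_angle) + 0.5 * np.random.randn(20, 2)
print(fit_angle(real_pts, np.zeros(2), angle0=0.2))   # should approach 0.6
```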

TR-89-14 A Theory of Multi-Scale Curvature-Based Shape Representation for Planar Curves, August 1989 Farzin Mokhtarian and Alan K. Mackworth

This paper presents a multi-scale, curvature-based shape representation technique for planar curves which satisfies several criteria considered necessary for any shape representation method better than other shape representation techniques do. As a result, the representation is suitable for tasks which call for recognition of a noisy curve of arbitrary shape at an arbitrary scale or orientation.

The method rests on the concept of describing a curve at varying levels of detail using features that are invariant with respect to transformations which do not change the shape of the curve. Three different ways of computing the representation are described in this paper. These three methods result in three different representations: the curvature scale space image, the renormalized curvature scale space image, and the resampled curvature scale space image.

The process of describing a curve at increasing levels of abstraction is referred to as the evolution of that curve. Several evolution properties of planar curves are described in this paper. Some of these properties show that evolution is a physically plausible operation and characterize possible behaviours of planar curves during evolution. Some show that the representations proposed in this paper in fact satisfy some of the required criteria. Others impose constraints on the location of a planar curve as it evolves. Together, these evolution properties provide a theoretical foundation for the representation methods introduced in this paper.
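
A schematic sketch of the first of these representations, the curvature scale space: smooth the coordinate functions at increasing scales and record the curvature zero crossings (the discretization choices here are our own, not the paper's):

```python
import numpy as np

def smooth(values, sigma):
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2)); k /= k.sum()
    padded = np.concatenate([values[-radius:], values, values[:radius]])  # wrap: closed curve
    return np.convolve(padded, k, mode="same")[radius:-radius]

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def zero_crossings(kappa):
    return np.where(np.sign(kappa[:-1]) != np.sign(kappa[1:]))[0]

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.cos(theta) + 0.2 * np.cos(5 * theta)   # a wavy closed curve
y = np.sin(theta) + 0.2 * np.sin(5 * theta)
for sigma in (2, 8, 32):
    # fewer zero crossings survive as the scale of smoothing increases
    print(sigma, len(zero_crossings(curvature(smooth(x, sigma), smooth(y, sigma)))))
```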

TR-89-15 The Asymptotic Optimality of Spider-Web Networks, January 1989 Nicholas Pippenger

We determine the limiting behavior of the linking probability for large spider web networks. The result confirms a conjecture made by Ikeno in 1959. We also show that no balanced crossbar network, in which the same components are interconnected according to a different pattern, can have an asymptotically larger linking probability.

TR-89-16 A Simple Linear Time Algorithm for Concave One-Dimensional Dynamic Programming, January 1989 Maria M. Klawe

Following [KK89] we will say that an algorithm for finding the column minima of a matrix is ordered if the algorithm never evaluates the $(i,j)$ entry of the matrix until the minima of columns $1, 2, \ldots , i$ are known. This note presents an extremely simple linear time ordered algorithm for finding column minima in triangular totally monotone matrices. Analogous to [KK89] this immediately yields a linear time algorithm for the concave one-dimensional dynamic programming problem. Wilber [W88] gave the first linear time algorithm for the concave one-dimensional dynamic programming problem, but his algorithm was not ordered and hence could not be applied in some situations. Examples of these situations are given in [GP89] and [L89]. Galil and Park [GP89] and Larmore [L89] independently found quite different ordered linear time algorithms. All of these algorithms, and ours as well, rely on the original linear-time algorithm known as SMAWK for finding column minima in totally monotone matrices [AKMSW87]. The constant in our algorithm is essentially the same as that of the Galil-Park algorithm, and since our algorithm is so simple to program, we expect it to be the algorithm of choice in implementations.
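
For reference, the underlying recurrence in a naive quadratic form; the note's contribution is a linear-time ordered algorithm for this problem, which this baseline does not attempt to reproduce:

```python
# Concave one-dimensional dynamic programming:
#     E[j] = min over i < j of ( E[i] + w(i, j) ),   E[0] = 0,
# where w satisfies the concave (inverse quadrangle) condition.
import math

def concave_dp_naive(n, w):
    E = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        E[j] = min(E[i] + w(i, j) for i in range(j))
    return E

# Example with a concave weight function w(i, j) = sqrt(j - i)
print(concave_dp_naive(6, lambda i, j: math.sqrt(j - i)))
```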

TR-89-17 Exactly Solvable Telephone Switching Problems, January 1989 Nicholas Pippenger

For a certain class of telephone switching problems, much of our understanding arises from an analogy with statistical mechanics that was proposed by Benes in 1963. This analogy has led to the exact solution of a number of idealized problems, which we survey in this paper.

TR-89-18 The Expected Capacity of Concentrators, January 1989 Nicholas Pippenger

We determine the ``expected capacity'' of a class of sparse concentrators called ``modular concentrators''. In these concentrators, each input is connected to exactly two outputs, each output is connected to exactly three inputs, and the ``girth'' (the length of the shortest cycle in the connexion graph) is large. We consider two definitions of expected capacity. For the first (which is due to Masson and Morris), we assume that a batch of customers arrives at a random set of inputs and that a maximum matching of these customers to servers at the outputs is found. The number of unsatisfied requests is negligible if customers arrive at fewer than one-half of the inputs, and it grows quite gracefully even beyond this threshold. We also consider the situation in which customers arrive sequentially, and the decision as to how to serve each is made randomly, without knowledge of future arrivals. In this case, the number of unsatisfied requests is larger, but still quite modest.
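
A small simulation sketch of the first definition of expected capacity (maximum matching of a random batch of arrivals); the 2-regular/3-regular wiring below is random rather than high-girth, so it only approximates the setting of the paper:

```python
# Customers arrive at a random subset of inputs (degree 2); outputs have
# degree 3; a maximum bipartite matching determines how many go unserved.
import random

def max_matching(adj, n_inputs):
    match = {}                       # output -> input
    def try_assign(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_assign(match[v], seen):
                match[v] = u
                return True
        return False
    return sum(try_assign(u, set()) for u in range(n_inputs))

n_out = 200
n_in = 3 * n_out // 2                # inputs have degree 2, outputs degree 3
stubs = [o for o in range(n_out) for _ in range(3)]
random.shuffle(stubs)
adj = [stubs[2 * i:2 * i + 2] for i in range(n_in)]

for load in (0.4, 0.5, 0.6):
    arrivals = random.sample(range(n_in), int(load * n_in))
    served = max_matching([adj[i] for i in arrivals], len(arrivals))
    print(load, len(arrivals) - served, "unserved")
```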

TR-89-19 On Parallel Methods for Boundary Value Odes, September 1989 Uri Ascher and S. Y. Pat Chan

Some of the traditional methods for boundary value ODEs, such as standard multiple shooting, finite difference and collocation methods, lend themselves well to parallelization in the independent variable: the first stage of the construction of a solution approximation is performed independently on each subinterval of a mesh. However, the underlying possibly fast bidirectional propagation of information by fundamental modes brings about stability difficulties when information from the different subintervals is combined to form a global solution. Additional difficulties occur when a very stiff problem is to be efficiently and stably solved on a parallel architecture.

In this paper parallel shooting and difference methods are examined, a parallel algorithm for the stable solution of the resulting algebraic system is proposed and evaluated, and a parallel algorithm for stiff boundary value problems is proposed.

TR-89-20 A Methodology for Using a Default and Abductive Reasoning System, September 1989 David Poole

This paper investigates two different activities that involve making assumptions: predicting what one expects to be true and explaining observations. In a companion paper, a logic-based architecture for both prediction and explanation is proposed and an implementation is outlined. In this paper, we show how such a hypothetical reasoning system can be used to solve recognition, diagnostic and prediction problems. Part of this is the assumption that the default reasoner must be ``programmed'' to get the right answer; it is not just a matter of ``stating what is true'' and hoping the system will magically find the right answer. A number of distinctions have been found in practice to be important: between predicting whether something is expected to be true versus explaining why it is true; and between conventional defaults (assumptions as a communication convention), normality defaults (assumed for expediency) and conjectures (assumed only if there is evidence). The effects of these distinctions on recognition and prediction problems are presented. Examples from a running system are given.

TR-89-21 Optimal Parallel Algorithms for Convex Polygon Separation, September 1989 Norm Dadoun and David G. Kirkpatrick

Cooperative parallel algorithms are presented for determining convex polygon separation and constructing convex polygon mutual tangents. Given two $n$-vertex convex polygons, using $k$ CREW processors $(1 \leq k \leq n)$, each of these algorithms has a $\Theta (\log n/(1 + \log k))$ time bound. This provides algorithms for these problems which run in $O(\log n)$ time sequentially or in constant time using a quasi-linear ($n^{\alpha}$ for some $\alpha > 0$) number of processors.

These algorithms make use of hierarchical data structures to solve their respective problems. The polygonal hierarchy used by our algorithms is available implicitly (with no additional preprocessing) within standard representations of polygons.

TR-89-22 A New Proof of the NP Completeness of Visual Match, September 1989 R. Rensink

A new proof is presented of Tsotsos' result [1] that the VISUAL MATCH problem is NP-complete when no (high-level) constraints are imposed on the search space. Like the proof given by Tsotsos, it is based on the polynomial reduction of the NP-complete problem KNAPSACK [2] to VISUAL MATCH. Tsotsos' proof, however, involves limited-precision real numbers, which introduces an extra degree of complexity to his treatment. The reduction of KNAPSACK to VISUAL MATCH presented here makes no use of limited-precision numbers, leading to a simpler and more direct proof of the result.

TR-89-23 A Data Management Strategy for Transportable Natural Language Interfaces, January 1989 J. Johnson

This thesis focuses on the problem of designing a highly portable domain independent natural language interface for standard relational database systems. It is argued that a careful strategy for providing the natural language interface (NLI) with morphological, syntactic, and semantic knowledge about the subject of discourse and the database is needed to make the NLI portable from one subject area and database to another. There has been a great deal of interest recently in utilizing the database system to provide that knowledge. Previous approaches attempted to solve this challenging problem by capturing knowledge from the relational database (RDB) schema, but were unsatisfactory for the following reasons: 1.) RDB schemas contain referential ambiguities which seriously limit their usefulness as a knowledge representation strategy for NL understanding. 2.) Knowledge captured from the RDB schema is sensitive to arbitrary decisions made by the designer of the schema. In our work we provide a new solution by applying a conceptual model for database schema design to the design of a portable natural language interface. It has been our observation that the process used for adapting the natural language interface to a new subject area and database overlaps considerably with the process of designing the database schema. Based on this important observation, we design an enhanced natural language interface with the following significant features: complete independence of the linguistic component from the database component, economies in attaching the natural language and DB components, and sharing of knowledge about the relationships in the subject of discourse for database schema design and NL understanding.

TR-89-24 Bar-Representable Visibility Graphs and a Related Network Flow Problem, August 1989 Stephen Kenneth Wismath

A bar layout is a set of vertically oriented non-intersecting line segments in the plane called bars. The visibility graph associated with a layout is defined as the graph whose vertices correspond to the bars and whose edges represent the horizontal visibilities between pairs of bars.

This dissertation is concerned with the characterization of bar-representable graphs: those graphs which are the visibility graphs of some bar layout. A polynomial time algorithm for determining if a given graph is bar-representable, and the subsequent construction of an associated layout, are provided. Weighted and directed versions of the problem are also formulated and solved; in particular, polynomial time algorithms for the layout of such graphs are developed.

The Planar Full Flow problem is to determine a plane embedding and an (acyclic) orientation of an undirected planar network that admits a feasible flow which uses all arcs (except those incident upon the source or sink) to full capacity while maintaining planarity. The connection of this flow problem to bar-representable graphs is exploited to solve the weighted case of the latter. As evidence that both the acyclicity and planarity constraints are necessary to obtain a polynomial algorithm for this problem, two natural variants of the Full Flow problem are shown to be strongly NP-complete.
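
A brute-force sketch of the forward direction only, computing the visibility graph of a given layout; the dissertation addresses the harder inverse problems of recognition and layout, and the visibility convention used here (positive-length horizontal lines of sight) is an assumption:

```python
# Two bars see each other if some horizontal segment between them meets no
# intervening bar. Bars are given as (x, y_low, y_high).
def visible(b1, b2, bars):
    (x1, l1, h1), (x2, l2, h2) = sorted([b1, b2])
    lo, hi = max(l1, l2), min(h1, h2)
    if lo >= hi:
        return False
    gaps = [(lo, hi)]                      # y-intervals still unobstructed
    for (x, l, h) in bars:
        if x1 < x < x2:                    # bar strictly between the two
            gaps = ([(a, min(b, l)) for a, b in gaps if a < min(b, l)] +
                    [(max(a, h), b) for a, b in gaps if max(a, h) < b])
    return bool(gaps)

def visibility_graph(bars):
    return {(i, j) for i in range(len(bars)) for j in range(i + 1, len(bars))
            if visible(bars[i], bars[j], bars)}

layout = [(0, 0, 4), (1, 1, 2), (2, 0, 4)]
print(visibility_graph(layout))           # bar 1 blocks only part of 0 <-> 2
```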

TR-89-25 Efficient Construction of Binary Trees with Almost Optimal Weighted Path Length, January 1989 David G. Kirkpatrick and Teresa Maria Przytycka

We present sequential and parallel algorithms to construct binary trees with almost optimal weighted path length. Specifically, assuming that weights are normalized (to sum up to one) and error refers to the (absolute) difference between the weighted path length of a given tree and the optimal tree with the same weights, we present: an $O(\log n)$ time and $n \frac{\log \log n}{\log n}$ EREW processor algorithm which constructs a tree with error less than 0.172; an $O(k \log n \log^{*} n)$ time and $n^{2}$ CREW processor algorithm which produces a tree with error at most $\frac{1}{n^{k}}$; and an $O(k^{2} \log n)$ time and $n^{2}$ CREW processor algorithm which produces a tree with error at most $\frac{1}{n^{k}}$. As well, we present two sequential algorithms: an $O(kn)$ time algorithm which produces a tree with error at most $\frac{1}{n^{2^{k}}}$, and an $O(kn)$ time algorithm which produces a tree with error at most $\frac{1}{2^{n^{2^{k}}}}$. The last two algorithms use different computation models.
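
For orientation, a sketch of the quantity being optimized together with a simple weight-balanced construction (a textbook-style heuristic, not one of the paper's parallel or sequential algorithms):

```python
# Weighted path length of a binary tree over normalised leaf weights, and a
# construction that splits near the midpoint of the cumulative weights.
from bisect import bisect_left
from itertools import accumulate

def build(weights, lo, hi, prefix):
    """Return nested (left, right) tuples; a leaf is its index."""
    if hi - lo == 1:
        return lo
    target = (prefix[lo] + prefix[hi]) / 2.0
    mid = min(max(bisect_left(prefix, target, lo + 1, hi), lo + 1), hi - 1)
    return (build(weights, lo, mid, prefix), build(weights, mid, hi, prefix))

def weighted_path_length(tree, weights, depth=0):
    if isinstance(tree, int):
        return depth * weights[tree]
    left, right = tree
    return (weighted_path_length(left, weights, depth + 1) +
            weighted_path_length(right, weights, depth + 1))

w = [0.05, 0.3, 0.08, 0.4, 0.02, 0.15]        # already normalised
prefix = [0.0] + list(accumulate(w))
tree = build(w, 0, len(w), prefix)
print(tree, weighted_path_length(tree, w))
```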

TR-89-26 Fitting Parameterized 3-D Models to Images, December 1989 David G. Lowe

Model-based recognition and tracking from 2-D images depends upon the ability to solve for projection and model parameters that will best fit a 3-D model to matching image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulations, variable dimensions, or surface deformations. Numerical stabilization methods are developed that take account of inherent inaccuracies in the image measurements and allow useful solutions to be determined even when there are fewer matches than unknown parameters. A standardized modeling language has been developed that can be used to define models and their internal parameters for efficient application to model-based vision. These new techniques allow model-based vision to be used for a much wider class of problems than was possible with earlier methods.
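
A hedged sketch of the stabilized parameter-solving idea: each iteration solves a least-squares system augmented with a prior on the parameter updates, so a step remains well defined even with few matches. The one-dimensional camera model below is a toy stand-in, not the paper's modeling language or solver:

```python
import numpy as np

def stabilized_step(J, r, sigma_prior):
    """Minimise ||J d + r||^2 + ||d / sigma_prior||^2 for the update d."""
    n = J.shape[1]
    A = np.vstack([J, np.diag(1.0 / sigma_prior)])
    b = np.concatenate([-r, np.zeros(n)])
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

# Toy model: predicted image coordinate u = f * (x + tx), unknowns p = (f, tx).
def residuals_and_jacobian(p, xs, us):
    f, tx = p
    r = f * (xs + tx) - us
    J = np.stack([xs + tx, np.full_like(xs, f)], axis=1)
    return r, J

xs = np.array([1.0, 2.0])                    # only two matches, two unknowns
us = 1.5 * (xs + 0.3)
p = np.array([1.0, 0.0])
for _ in range(20):
    r, J = residuals_and_jacobian(p, xs, us)
    p = p + stabilized_step(J, r, sigma_prior=np.array([1.0, 1.0]))
print(p)                                     # should approach (1.5, 0.3)
```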

TR-89-27 Towards Structured Parallel Computing --- Part 1 --- A Theory of Algorithm Design and Analysis for Distributed-Memory Architectures, December 1989 Feng Gao

This paper advocates an architecture-independent, hierarchical approach to algorithm design and analysis for distributed-memory architectures, in contrast to the current trend of tailoring algorithms towards specific architectures. We show that, rather surprisingly, this new approach can achieve uniformity without sacrificing optimality. In our particular framework there are three levels of algorithm design: design of a network-independent algorithm in a network-independent programming environment, design of virtual architectures for the algorithm, and design of emulations of the virtual architectures on physical architectures. We propose, and substantiate through a complete complexity analysis of the example of ordinary matrix multiplication, the following thesis: architecture-independent optimality can lead to portable optimality. Namely, a single network-independent algorithm, when optimized network-independently, with the support of properly chosen virtual architectures, can be implemented on a wide spectrum of networks to achieve optimality on each of them with respect to both computation and communication. Besides its implications for the methodology of parallel algorithm design, our theory also suggests new questions for theoretical research in parallel computation on interconnection networks.
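
An illustrative sketch of the thesis at the algorithm level: matrix multiplication stated as block accumulation over a virtual $q \times q$ processor grid, independent of any physical network (an emulation would then map the blocks onto the target architecture). The code is our own schematic, not the paper's framework:

```python
import numpy as np

def blocked_matmul(A, B, q):
    n = A.shape[0]
    assert n % q == 0
    s = n // q
    C = np.zeros_like(A)
    for i in range(q):            # block-row held by virtual processor row i
        for j in range(q):        # block-column held by virtual processor column j
            for k in range(q):    # data that would be communicated along the grid
                C[i*s:(i+1)*s, j*s:(j+1)*s] += (
                    A[i*s:(i+1)*s, k*s:(k+1)*s] @ B[k*s:(k+1)*s, j*s:(j+1)*s])
    return C

A = np.random.rand(8, 8); B = np.random.rand(8, 8)
print(np.allclose(blocked_matmul(A, B, 4), A @ B))
```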

