- TR-88-01 Parallel Recognition of Complement Reducible Graphs and Cotree Construction, January 1988 David G. Kirkpatrick and Teresa Maria Przytycka
A simple parallel algorithm is presented for constructing parse tree representations of graphs in a rich family known as cographs. From the parse tree representation of a cograph, many properties that are difficult to compute for general graphs can be computed efficiently. The presented algorithm runs in $O(\log^{2} n)$ parallel time using $O(n^{3} / \log^{2} n)$ processors on a CREW PRAM.
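To make the parse-tree structure concrete: a cograph is either a single vertex, a disjoint union of cographs, or a join of cographs, so a cotree can be built by recursively splitting on the connected components of the graph or of its complement. The sketch below is a sequential, brute-force illustration of this structure only; the report's contribution is a parallel algorithm, and the function names and adjacency-set representation here are our own.

```python
def components(graph):
    """Connected components of a graph given as {vertex: set-of-neighbours}."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(graph[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def complement(graph):
    verts = set(graph)
    return {v: verts - {v} - graph[v] for v in graph}

def induced(graph, verts):
    return {v: graph[v] & verts for v in verts}

def cotree(graph):
    """Parse tree of a cograph as nested tuples ('union'|'join', subtrees...),
    with single vertices as leaves.  Raises ValueError on a non-cograph."""
    if len(graph) == 1:
        return next(iter(graph))
    comps = components(graph)
    if len(comps) > 1:                 # disconnected: union node
        return ('union',) + tuple(cotree(induced(graph, c)) for c in comps)
    co = components(complement(graph))
    if len(co) > 1:                    # complement disconnected: join node
        return ('join',) + tuple(cotree(induced(graph, c)) for c in co)
    raise ValueError("not a cograph (contains an induced P4)")

# P4, the canonical non-cograph:
p4 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
```

For the disjoint union of two edges, `cotree` returns a union node over two join nodes; for the path P4 it correctly reports a non-cograph.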

- TR-88-02 On Lower Bound for Short Noncontractible Cycles in Embedded Graphs, January 1988 Teresa Maria Przytycka and J. H. Przytycki
Let $C_{g,n}$ be a constant such that for each triangulation of a surface of genus $g$ with a graph of $n$ vertices there exists a noncontractible cycle of length at most $C_{g,n}$. Hutchinson in [H87] conjectures that $C_{g,n} = O(\sqrt{n/g})$ for $g > 0$. In this paper, we present a construction of a triangulation which disproves this conjecture.

- TR-88-03 Protocol Specification and Verification using the Significant Event Temporal Logic, January 1988 George K. Tsiknis and Son T. Vuong
In this report we discuss the Significant Event Temporal Logic specification technique (SIGETL), a method for protocol specification and verification using a temporal logic axiomatic system. This technique is based on the idea that the state and the behaviour of a module can be completely described by the sequence of significant events in which the module has been involved in communicating with its environment up to the present time. The behaviour of a module at any time is specified by simple temporal logic formulas, called transition axioms or properties of the module. Both the safety and liveness properties of a module, as well as the global properties of a system, can be proven from its axioms using the axiomatic temporal logic system. As an example, we apply SIGETL to specify and verify a simple data transfer protocol. The general correspondence between SIGETL and the ESTELLE FDT is also discussed.

- TR-88-04 The Inconsistency of Belief Revision System, January 1988 George K. Tsiknis
In 1987 Chern Seet developed a belief revision algorithm and a deduction system by which, he claims, default reasoning can be accomplished. We show that his deduction system is inconsistent. Some obvious corrections are suggested, but the resulting system is still inconsistent. Its behaviour is similar to that of a closed-world-assumption reasoner. We examine a case in which the modified system behaves like predicate circumscription and also performs reasonably. Finally, we discuss some problems pertaining to Seet's revision strategy. A similar revision algorithm for normal default logic is outlined, and the use of the SET model for handling exceptions --- and default reasoning --- is briefly discussed.

- TR-88-05 The Connection Method for Non-Monotonic \& Autoepistemic Logic, January 1988 George K. Tsiknis
In this paper we first present a connection method for non-monotonic logic, together with its soundness and completeness proofs. We then extend this to a proof procedure for autoepistemic logic. In the last section, we also discuss some improvements to the method through structure sharing techniques.

- TR-88-06 On the Comparative Complexity of Resolution and the Connection Method, February 1988 Wolfgang Bibel
Quadratic proofs of the pigeonhole formulas are presented using connection method proof techniques. For this class of formulas, exponential lower bounds are known for the length of resolution refutations. This indicates a significant difference in the power of these two proof techniques. While short proofs of these formulas are known using extended resolution, that particular proof technique, in contrast to both the connection method and resolution, seems unsuitable for actual proof search.

- TR-88-07 The Technological Change of Reality Opportunities and Dangers, March 1988 Wolfgang Bibel
This essay discusses the trade-off between the opportunities and the dangers involved in technological change mainly from the perspective of Artificial Intelligence technology. In order to lay the foundation for the discussion, the symptoms of general unease which are associated with current technological progress, the concept of reality, and the field of Artificial Intelligence are very briefly discussed. In the main body of the essay, the dangers are contrasted with the potential benefits of such high technology. Besides discussing more well known negative and positive aspects we elaborate on the disadvantages of executive systems and the advantages of legislative systems. It is argued that only the latter might enable the re-establishment of the feedback-mechanism which proved so successful in earlier phases of evolution.

- TR-88-08 Evolution Properties of Space Curves, February 1988 Farzin Mokhtarian
The curvature scale space and torsion scale space images of a space curve form a multi-scale representation of that curve which satisfies several criteria for shape representation and is therefore a preferred representation for space curves.
The torsion scale space image of a space curve is computed by convolving a path-based parametric representation of the curve with Gaussian functions of varying widths, extracting the torsion zero-crossings of the convolved curves and combining them in a torsion scale space image of the curve. The curvature scale space image of the curve is computed similarly, but curvature level-crossings are extracted instead. An evolved version of a space curve $\Gamma$ is obtained by convolving a parametric representation of that curve with a Gaussian function of variance $\sigma^{2}$ and is denoted by $\Gamma_{\sigma}$. The process of generating the ordered sequence of curves $\{ \Gamma_{\sigma} \mid \sigma \geq 0 \}$ is referred to as the {\it evolution} of $\Gamma$.
A number of evolution properties of space curves are investigated in this paper. It is shown that the evolution of space curves is invariant under rotation, uniform scaling and translation of those curves. This property makes the representation suitable for recognition purposes. It is also shown that properties such as connectedness and closedness of a space curve are preserved during evolution and that the center of mass of a space curve remains the same as the curve evolves. Among other results is the fact that a space curve contained inside a simple, convex object remains inside that object during evolution.
The two main theorems of the paper examine a space curve during its evolution just before and just after the formation of a cusp point. It is shown that strong constraints exist on the shape of the curve in the neighborhood of the cusp point just before and just after its formation.
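The evolution process can be sketched in a few lines: sample a closed curve at $n$ points and convolve each coordinate circularly with a sampled, normalised Gaussian. Because the kernel weights sum to one, the discrete centre of mass is preserved exactly, matching one of the properties stated above. A hedged sketch (pure Python, names our own), not the paper's implementation:

```python
import math

def gaussian_kernel(sigma, n):
    """Sampled, periodised, normalised Gaussian of std dev sigma on n points."""
    k = [math.exp(-min(i, n - i) ** 2 / (2.0 * sigma ** 2)) for i in range(n)]
    s = sum(k)
    return [v / s for v in k]

def evolve(xs, ys, sigma):
    """Evolved version Gamma_sigma of a closed curve sampled at n points:
    circular convolution of each coordinate with the Gaussian kernel."""
    n = len(xs)
    g = gaussian_kernel(sigma, n)
    ex = [sum(xs[(i - j) % n] * g[j] for j in range(n)) for i in range(n)]
    ey = [sum(ys[(i - j) % n] * g[j] for j in range(n)) for i in range(n)]
    return ex, ey
```

Since the kernel is normalised, summing `ex` over all samples reproduces the sum of `xs` exactly, i.e. the discrete centre of mass is invariant under evolution.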

- TR-88-09 Fingerprint Theorems for Curvature and Torsion Zero-Crossings, April 1988 Farzin Mokhtarian
The {\em scale space image} of a signal $f(x)$ is constructed by extracting the zero-crossings of the second derivative of a Gaussian of variable width $\sigma$ convolved with the signal, and recording them in the $x$-$\sigma$ map.
Likewise, the {\em curvature scale space image} of a planar curve is computed by extracting the curvature zero-crossings of a parametric representation of the curve convolved with a Gaussian of variable width. The curvature level-crossings and torsion zero-crossings are used to compute the {\em curvature} and {\em torsion scale space images} of a space curve, respectively.
It has been shown [Yuille and Poggio 1983] that the scale space image of a signal determines that signal uniquely up to constant scaling and a harmonic function. This paper presents a generalization of the proof given in [Yuille and Poggio 1983]. It is shown that the curvature scale space image of a planar curve determines the curve uniquely, up to constant scaling and a rigid motion. Furthermore, it is shown that the torsion scale space of a space curve determines the function $\tau (u) \kappa ^{2} (u)$ modulo a scale factor, where $\tau (u)$ and $\kappa (u)$ represent the torsion and curvature functions of the curve respectively. Our results show that a 1-D signal can be reconstructed using only one point from its scale space image. This is an improvement of the result obtained by Yuille and Poggio.
The proofs are constructive and assume that the parametrizations of the curves can be represented by polynomials of finite order. The scale maps of planar and space curves have been proposed as representations for those curves [Mokhtarian and Mackworth 1986, Mokhtarian 1988]. The result that such maps uniquely determine the curves they are computed from shows that they satisfy an important criterion for any shape representation technique.

- TR-88-10 Solving Diagnostic Problems using Extended Truth Maintenance Systems, July 1988 Gregory M. Provan
We describe the use of efficient, extended Truth Maintenance Systems (TMSs) for diagnosis. We show that for complicated diagnostic problems, existing ATMSs need some method of ranking competing explanations and pruning the search space in order to maintain computational efficiency. We describe a specific implementation of an ATMS for efficient problem solving that incorporates the full Dempster-Shafer theory in a semantically clear and efficient manner. Such an extension allows the Problem Solver to rank competing solutions and explore only the ``most likely'' ones. We also describe several efficient algorithms for computing both exact and approximate values for Dempster-Shafer belief functions.

- TR-88-11 The Computational Complexity of Truth Maintenance Systems, July 1988 Gregory M. Provan
We define the complexity of the problems that the Assumption-Based TMS (ATMS) solves. Defining the conjunction of the set of input clauses as a Boolean expression, it is shown that an ATMS solves two distinct problems: (1) generating a set of minimal supports (or label) for each database literal; and (2) computing a minimal expression (or set of maximal contexts) from the set of minimal supports. The complexity of determining the set of minimal supports for a set $x$ of literals with respect to a set $X$ of clauses is exponential in the number of assumptions for almost all Boolean expressions, even though a satisfying assignment for the literals occurring in $X$ can be found in linear time. Generating a minimal expression is an NP-hard problem. The ATMS algorithms can be used with many control mechanisms to improve their performance on both problems; however, we argue that manipulating the label set (which is exponential in the number of assumptions) requires considerable computational overhead in both space and time, and that it will be infeasible to solve moderately large problems without problem restructuring.

- TR-88-12 On Symmetric Schemes and Differential-Algebraic Equations., June 1988 Uri Ascher
An example is given which demonstrates a potential risk in using symmetric difference schemes for initial value differential-algebraic equations (DAEs) or for very stiff ODEs. The basic difficulty is that the stability of the scheme is controlled by the stability of an auxiliary (ghost) ODE problem which is not necessarily stable even when the given problem is.
The stability of symmetric schemes is better understood in the context of boundary value problems, where such schemes are also more naturally applied. For initial value problems, better alternatives may exist. A computational algorithm is proposed for boundary value index-1 DAEs.

- TR-88-13 Using Multigrid for Semiconductor Device Simulation in 1-D, September 1988 Uri Ascher and Stephen E. Adams
This paper examines the application of the multigrid method to the steady state semiconductor equations in one dimension. A number of attempts reported in the literature have yielded only limited success in applying multigrid algorithms to this sensitive problem, suggesting that a more careful look in relatively simple circumstances is worthwhile.
Several modifications to the basic multigrid algorithm are evaluated based on their performance for a one-dimensional model problem. It was found that the use of a symmetric Gauss-Seidel relaxation scheme, a special prolongation based on the difference operator, and local relaxation sweeps near junctions produced a robust and efficient code. The modified algorithm is also successful for a wide variety of cases, and its performance compares favourably with other multigrid algorithms that have been applied to the semiconductor equations.

- TR-88-14 Spatial and Spectral Description of Stationary Gaussian Fractals, July 1988 R. Rensink
A general treatment of stationary Gaussian fractals is presented. Relations are established between the fractal properties of an $n$-dimensional random field and the form of its correlation function and power spectrum. These relations are used to show that the second-order parameter $H$ commonly used to describe fractal texture (e.g., in [4][5]) is insufficient to characterize all fractal aspects of the field. A larger set of measures --- based on the power spectrum --- is shown to provide a more complete description of fractal texture.
Several interesting types of ``non-fractal'' self-similarity are also developed. These include a generalization of the fractional Gaussian noises of Mandelbrot and van Ness [6], as well as a form of ``locally'' self-similar behaviour. It is shown that these have close relations to the Gaussian fractals, and consequently, that textures containing these types of self-similarity can be described by the same set of measures as used for fractal texture.

- TR-88-15 Probabilistic Evaluation of Common Functions On Rings of Known Size, June 1988 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick
In [5], Duris and Galil prove an $\Omega (n \log n)$ lower bound for the average number of messages required by any deterministic algorithm which elects a leader on an asynchronous ring with distinct identifiers, where the ring size $n$ is known and is a power of 2. Their results imply the same lower bound for the expected complexity of any randomized leader election algorithm for an anonymous ring of known size $2^{k}$. If their new techniques are used to achieve the randomized result directly, the resulting proof is significantly simpler than the original deterministic one. This simplicity facilitates extension of the result in two directions: to arbitrary known ring size, and to algorithms that permit error with probability at most $\epsilon$. Specifically, we prove that the expected message complexity of any probabilistic algorithm that selects a leader with probability at least $1 - \epsilon$ on an anonymous ring of known size $n$ is $\Omega (n \min(\log n, \log \log(1 / \epsilon)))$. A number of common function evaluation problems (including AND, OR, PARITY, and SUM) on rings of known size are shown to inherit this complexity bound, and the bound is shown to be tight to within a constant factor.
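For intuition about the setting, here is a small simulation of a simple probabilistic leader-election scheme on an anonymous ring of known size, in the spirit of Itai and Rodeh (not an algorithm from this report): each round, every active node draws a random identifier, the identifiers travel once around the ring, and only nodes holding the unique maximum stay active. The message accounting (one circulation of $n$ messages per round) is a simplifying assumption:

```python
import random

def elect(n, rng):
    """Probabilistic leader election on an anonymous ring of known size n
    (a hypothetical textbook-style scheme, not taken from the report).
    Each round, every active node draws a random identifier; the identifiers
    are assumed to circulate once around the ring (n messages), after which
    only the nodes holding the maximum remain active."""
    active = list(range(n))
    rounds = messages = 0
    while len(active) > 1:
        rounds += 1
        messages += n                 # one circulation of the ring per round
        ids = {v: rng.randrange(n) for v in active}
        top = max(ids.values())
        active = [v for v in active if ids[v] == top]
    return active[0], rounds, messages
```

The expected number of rounds is constant, so the expected message count is $O(n)$; the report's lower bound concerns what no algorithm can beat, not this particular scheme.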

- TR-88-16 An Incremental Method for Generating Prime Implicants/Implicates, July 1988 Alex Kean and George K. Tsiknis
Given the recent investigation of clause management systems (CMSs) for Artificial Intelligence applications, there is an urgent need for an efficient incremental method for generating prime implicants. Given a set of clauses $\cal F$, a set of prime implicants $\Pi$ of $\cal F$ and a clause $C$, the problem can be formulated as finding the set of prime implicants for $\Pi \bigcup \{ C \} $. Intuitively, the property of being prime implies that any effort to generate prime implicants from a set of prime implicants will yield no prime implicants other than themselves. In this paper, we exploit the properties of prime implicants and propose an incremental method for generating prime implicants from a set of existing prime implicants plus a new clause. The correctness proof and complexity analysis of the incremental method are presented, and the intricacy of subsumption in the incremental method is also examined. Additionally, the role of prime implicants in the CMS is discussed.
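The clausal dual of the problem (prime implicates of a clause set) can be illustrated by brute force: saturate the clauses under resolution while deleting subsumed clauses, then add a new clause and re-saturate. This sketch conveys the setting only; the paper's incremental method avoids re-resolving among the existing primes, which the brute-force version below does not:

```python
def resolve(c1, c2):
    """All non-tautological resolvents of two clauses.  Clauses are
    frozensets of non-zero integers; -l denotes the negation of l."""
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-x in r for x in r):    # discard tautologies
                out.append(frozenset(r))
    return out

def prime_implicates(clauses):
    """Saturate a clause set under resolution, keeping only non-subsumed
    clauses; the fixpoint is the set of prime implicates (brute force)."""
    cl = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        for a in list(cl):
            for b in list(cl):
                for r in resolve(a, b):
                    if any(d <= r for d in cl):
                        continue               # r subsumed (or already there)
                    cl = {d for d in cl if not r < d} | {r}
                    changed = True
    return cl

def add_clause(pi, c):
    """Incremental step: prime implicates of pi's theory conjoined with c.
    Unlike the paper's method, this naively re-saturates everything."""
    return prime_implicates(set(pi) | {frozenset(c)})
```

For example, with literals $a=1$, $b=2$, $c=3$, the set $\{a \vee b,\ \neg a \vee c\}$ has the three prime implicates $a \vee b$, $\neg a \vee c$ and $b \vee c$; adding $\neg c$ collapses them to $\neg c$, $\neg a$ and $b$.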

- TR-88-17 A Logical Framework for Depiction and Image Interpretation, August 1988 Raymond Reiter and Alan K. Mackworth
We propose a logical framework for depiction and interpretation that formalizes image domain knowledge, scene domain knowledge and the depiction mapping between the image and scene domains. This framework requires three sets of axioms: image axioms, scene axioms and depiction axioms. An interpretation of an image is defined to be a logical model of these axioms.
The approach is illustrated by a case study, a reconstruction in first order logic of a simplified map understanding program, Mapsee. The reconstruction starts with a description of the map and a specification of general knowledge of maps, geographic objects and their depiction relationships. For the simple map world we show how the task level specification may be refined to a provably correct implementation by applying model-preserving transformations to the initial logical representation to produce a set of propositional formulas. The implementation may use known constraint satisfaction techniques to find the set of models of these propositional formulas. In addition, we sketch preliminary logical treatments for image queries, contingent scene knowledge, ambiguity in image description, occlusion, complex objects, preferred interpretations and image synthesis.
This approach provides a formal framework for analyzing and going beyond existing systems such as Mapsee, and for understanding the use of constraint satisfaction techniques. It can be used as a foundation for the specification, design and implementation of vision and graphics systems that are correct with respect to the task and algorithm levels.

- TR-88-18 A Principle-Based System for Natural Language Analysis and Translation, August 1988 Matthew Walter Crocker
Traditional views of grammatical theory hold that languages are characterized by sets of constructions. This approach entails the enumeration of all possible constructions for each language being described. Current theories of transformational generative grammar have established an alternative position. Specifically, Chomsky's Government-Binding theory proposes a system of principles which are common to human language. Such a theory is referred to as a ``Universal Grammar'' (UG). Associated with the principles of grammar are parameters of variation which account for the diversity of human languages. The grammar for a particular language is known as a ``Core Grammar'', and is characterized by an appropriately parametrized instance of UG. Despite these advances in linguistic theory, construction-based approaches have remained the status quo within the field of natural language processing. This thesis investigates the possibility of developing a principle-based system which reflects the modular nature of the linguistic theory. That is, rather than stipulating the possible constructions of a language, a system is developed which uses the principles of grammar and language specific parameters to parse language. Specifically, a system is presented which performs syntactic analysis and translation for a subset of English and German. The cross-linguistic nature of the theory is reflected by the system which can be considered a procedural model of UG.

- TR-88-19 Valira/Valisyn-Protocol Validator/Synthesizer User's Manual (Version 1.2), January 1988 Son T. Vuong and T. Lau
(Abstract not available on-line)

- TR-88-20 The Impact of Artificial Intelligence on Society, September 1988 Richard S. Rosenberg
This paper presents an introduction to a number of social issues which may arise as a result of the diffusion of Artificial Intelligence (AI) applications from the laboratory to the workplace and marketplace. Four such applications are chosen for discussion: expert systems, image processing, robotics, and natural language understanding. These are briefly characterized and possible areas of misuse are explored. Of the many social issues of concern, four are selected for treatment here as representative of other potential problems likely to follow such a powerful technology as AI. These four are work (how much and of what kind), privacy (on which the assault continues), decision-making (by whom and for whose benefit), and social organization (how in a society in which intelligent systems perform so many functions). Finally it is argued that both a major programme of study in this field be launched and that practitioners assume the responsibility to inform the public about their work.

- TR-88-21 Clause Management System, October 1988 George K. Tsiknis and Alex Kean
In this paper, we study the full extent of the Clause Management System (CMS) proposed by Reiter and de Kleer. The CMS is adapted specifically for aiding a reasoning system (Reasoner) in generating explanations. The Reasoner transmits propositional formulae representing its knowledge to the CMS; in return, the Reasoner can query the CMS for concise explanations with respect to the CMS knowledge base. We argue that, based on the type of tasks the CMS performs, it should represent its knowledge base $\Sigma $ using the set of prime implicates $PI(\Sigma )$. The classification of implicates as prime, minimal, trivial and minimal trivial is carefully examined. Similarly, the notion of a support (roughly, an explanation) for a clause, again classified as prime, minimal, trivial and minimal trivial, is also elaborated. Methods to compute these supports from implicates and a preference ordering on the set of supports for a given clause are also presented. The generalization of the notion of a minimal support to a conjunction of clauses is also shown. Finally, two logic-based diagnostic reasoning paradigms aided by the CMS are presented to exemplify the functionality of the CMS.

- TR-88-22 Invariants of Chromatic Graphs, November 1988 Teresa Maria Przytycka and J. H. Przytycki
In the paper we construct abstract algebras which yield invariants of graphs, including graphs with colored edges (chromatic graphs). We analyse properties of those algebras and show that various graph polynomials, including the Tutte and matching polynomials, arise as models of the algebras. In particular, we consider a generalization of Tutte's polynomial to a polynomial of chromatic graphs, and we analyse the relation of graph polynomials to recently discovered link polynomials.
It is known that computing the Tutte polynomial is NP-hard. We show that a part of the Tutte polynomial (and its generalization) can be computed faster than in exponential time.
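For reference, the Tutte polynomial mentioned above satisfies the classical deletion-contraction recurrence $T(G) = T(G-e) + T(G/e)$ for an edge $e$ that is neither a loop nor a bridge, with a factor $y$ for loops and $x$ for bridges. A direct implementation is exponential in general, which is consistent with the NP-hardness remark above; this sketch (representation our own) is for tiny graphs only:

```python
def times_x(p):
    return {(i + 1, j): c for (i, j), c in p.items()}

def times_y(p):
    return {(i, j + 1): c for (i, j), c in p.items()}

def poly_add(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return r

def connected(u, v, edges):
    """Is v reachable from u using only the given edges?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = set(), [u]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(adj.get(w, ()))
    return v in seen

def tutte(edges):
    """Tutte polynomial of a multigraph given as an edge list, returned
    as a dict {(i, j): coeff} meaning coeff * x^i * y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                    # loop: factor y
        return times_y(tutte(rest))
    # contract e = (u, v): merge v into u in the remaining edges
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if not connected(u, v, rest):                 # bridge: factor x
        return times_x(tutte(contracted))
    return poly_add(tutte(rest), tutte(contracted))
```

For the triangle, `tutte([(1, 2), (2, 3), (1, 3)])` gives $x^{2} + x + y$, and evaluating at $x = y = 1$ counts its three spanning trees.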

- TR-88-23 Resampled Curvature and Torsion Scale Space Representation of Planar and Space Curves, December 1988 Farzin Mokhtarian
The curvature scale space representations of planar curves are computed by combining information about the curvature of those curves at multiple levels of detail. Similarly, curvature and torsion scale space representations of space curves are computed by combining information about the curvature and torsion of those curves at varying levels of detail.
Curvature and torsion scale space representations satisfy a number of criteria such as efficiency, invariance, detail, sensitivity, robustness and uniqueness [Mokhtarian \& Mackworth 1986] which make them suitable for recognizing a noisy curve at any scale or orientation.
The renormalized curvature and torsion scale space representations [Mackworth \& Mokhtarian 1988] are more suitable for recognition of curves with non-uniform noise added to them, but can only be computed for closed curves.
The resampled curvature and torsion scale space representations introduced in this paper are shown to be more suitable than the renormalized representations for recognition of curves with non-uniform noise added to them. Furthermore, these representations can also be computed for open curves.
A number of properties of the representation are also investigated and described. An important new property presented in this paper is that no new curvature zero-crossing points can be created in the resampled curvature scale space representation of a simple planar curve.

- TR-88-24 Design and Implementation of a Ferry-based Protocol Test System, December 1988 Samuel T. Chanson, B. P. Lee and N. J. Parakh
The Ferry Clip concept can be used to build a Test System for protocol testing. By structuring the system into a set of modules, it is possible to minimize the effort required in using such a system to test different protocol implementations. In this paper we describe a method for structuring and implementing a Ferry Clip based Test System. Implementation issues encountered in building such a system under different environments are also discussed.
