1986 UBC CS Technical Report Abstracts

TR-86-01 Retracts of Numerations --- out of print, January 1986 Akira Kanda

In this paper we study some important properties of numerations which can be passed on to their retracts. Furthermore, we show a sufficient condition, in terms of $\alpha$, for a category $Ret(\alpha)$ of retracts of a numeration $\alpha$ and morphisms to be Cartesian closed.

TR-86-02 Choices in, \& Limitations of, Logic Programming, January 1986 Paul J. Voda

(Abstract not available on-line)

TR-86-03 The Bit Complexity of Randomized Leader Election on a Ring, February 1986 Karl Abrahamson, Andrew Adler, Rachel Gelbart, Lisa Higham and David G. Kirkpatrick

The inherent bit complexity of leader election on asynchronous unidirectional rings of processors is examined under various assumptions about global knowledge of the ring. If processors have unique identities with a maximum of $m$ bits, then the expected number of communication bits sufficient to elect a leader with probability 1, on a ring of (unknown) size $n$ is $O(nm)$. If the ring size is known to within a multiple of 2, then the expected number of communication bits sufficient to elect a leader with probability 1 is $O(n \log n)$.

These upper bounds are complemented by lower bounds on the communication complexity of a related problem called solitude verification that reduces to leader election in $O(n)$ bits. If processors have unique identities chosen from a sufficiently large universe of size $s$, then the average, over all choices of identities, of the communication complexity of verifying solitude is $\Omega (n \log s)$ bits. When the ring size is known only approximately, then $\Omega (n \log n)$ bits are required for solitude verification. The lower bounds address the complexity of certifying solitude. This is modelled by the best case behaviour of non-deterministic solitude verification algorithms.
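
For readers unfamiliar with the setting, the following minimal Python sketch simulates an Itai-Rodeh-style attrition scheme on a ring of known size: each remaining contender repeatedly draws a random label and only holders of the unique maximum survive. The round abstraction and function names here are ours; it illustrates the problem, not the algorithm or the bit-level analysis of TR-86-03.

\begin{verbatim}
import random

def attrition_election(n, seed=None):
    # Itai-Rodeh-style attrition on an anonymous ring of known size n
    # (round-based abstraction; message passing and bit counts are not
    # modelled).  Each round, every remaining contender draws a label in
    # {1..n}; only holders of the unique maximum stay in contention.
    rng = random.Random(seed)
    contenders = list(range(n))
    rounds = 0
    while len(contenders) > 1:
        rounds += 1
        labels = {p: rng.randint(1, n) for p in contenders}
        top = max(labels.values())
        contenders = [p for p in contenders if labels[p] == top]
    return contenders[0], rounds

leader, rounds = attrition_election(16, seed=1)
\end{verbatim}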

TR-86-04 A Distributed Kernel for Reliable Group Communication, February 1986 Samuel T. Chanson and K. Ravindran

Multicasting provides a convenient and efficient way to perform one-to-many process communication. This paper presents a kernel model which supports reliable group communication in a distributed computing environment. We introduce new semantic tools which capture the nondeterminism of the underlying low level events concisely and describe a process alias-based structuring technique for the kernel to handle the reliability problems that may arise during group communication. The scheme works by maintaining a close association between group messages and their corresponding reply messages. We also introduce a dynamic binding scheme which maps group id's to multicast addresses. The scheme allows the detection and subsequent recovery from inconsistencies in the binding information. Sample programs illustrating how the semantic tools may be used are also included.
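
As a purely hypothetical illustration of what a versioned group-id binding might look like (the names and structure below are invented and are not the kernel interface of TR-86-04), a short Python sketch:

\begin{verbatim}
class GroupBindingTable:
    # Hypothetical sketch: map group ids to multicast addresses together
    # with a version number, so that a stale cached binding can be detected
    # and then refreshed.  Not the scheme described in the report.
    def __init__(self):
        self._bindings = {}                # group_id -> (mcast_addr, version)

    def bind(self, group_id, mcast_addr):
        _, version = self._bindings.get(group_id, (None, 0))
        self._bindings[group_id] = (mcast_addr, version + 1)
        return version + 1                 # callers cache (addr, version)

    def lookup(self, group_id):
        return self._bindings.get(group_id)

    def is_current(self, group_id, cached_version):
        # Inconsistency detection: a sender whose cached version differs
        # from the table's must re-resolve the group before multicasting.
        entry = self._bindings.get(group_id)
        return entry is not None and entry[1] == cached_version

table = GroupBindingTable()
v = table.bind("grp-42", "239.0.0.7")
assert table.is_current("grp-42", v)
table.bind("grp-42", "239.0.0.8")          # rebinding invalidates old caches
assert not table.is_current("grp-42", v)
\end{verbatim}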

TR-86-05 Host Identification in Reliable Distributed Kernels, February 1986 Samuel T. Chanson and K. Ravindran

Acquisition of a host identifier (id) is the first and foremost network level activity initiated by a machine joining a network. It allows the machine to assume an identity in the network and build higher levels of abstraction that may be integrated with those on other machines. In order that the system may work properly, host id's must satisfy certain properties such as uniqueness.

In recent years, distributed systems consisting of workstations connected by a high speed local area network have become popular. Hosts in such systems interact with one another more frequently and in a manner much more complex than is possible in the long haul network environment. The kernels in such systems are called upon to support various inter-host operations, requiring additional properties for host id's. This paper discusses the implications of these properties with respect to distributed kernel reliability. A new scheme to generate and manage host id's satisfying the various properties is also presented. The scheme is distributed and robust, and works even under network partitioning.

TR-86-06 Semi-Automatic Implementation of Network Protocols, February 1986 Daniel A. Ford

A compiler which achieves automatic implementation of network protocols by transforming specifications written in {\em FDT} into {\em C} programs is presented. A brief introduction to the fundamentals of {\em FDT}, a standard language developed by ISO/TC97/SC 16/WG 1 Subgroup B for specifying network protocols, is given. We then present an overview of the compiler and discuss the problem of {\em PASCAL} to {\em C} translation. Transformation of an {\em FDT} specification into code is explained and illustrated by two implementation examples. The first example illustrates the implementation strategy by tracing the processing of a simple protocol. The second example demonstrates the validity of using automatically generated implementations by showing how a communication path was established between two hosts using code generated for the alternating bit protocol.
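
Since the second example concerns the alternating bit protocol, a minimal hand-written simulation of that protocol over a randomly lossy channel may help fix ideas. It is a Python illustration of the protocol itself, not of the code produced by the compiler; the function names and loss model are ours.

\begin{verbatim}
import random

def alternating_bit_transfer(messages, loss_rate=0.3, seed=0):
    # Minimal simulation of the alternating bit protocol over a lossy
    # channel.  The sender retransmits frame (bit, msg) until an ACK
    # carrying the same bit returns, then flips the bit; the receiver
    # delivers a frame only when its bit matches the expected one and
    # acknowledges the frame's bit.
    rng = random.Random(seed)
    arrives = lambda: rng.random() >= loss_rate
    delivered, expected = [], 0            # receiver state
    bit = 0                                # sender state
    for msg in messages:
        acked = False
        while not acked:                   # timeout-and-retransmit loop
            if arrives():                  # data frame reaches the receiver
                if expected == bit:        # new frame: deliver and flip
                    delivered.append(msg)
                    expected ^= 1
                if arrives():              # ACK for this bit reaches the sender
                    bit ^= 1
                    acked = True
    return delivered

assert alternating_bit_transfer(list("hello")) == list("hello")
\end{verbatim}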

TR-86-07 Implementation of Microcomputers in Elementary Schools: A Survey and Evaluation --- don't reprint, May 1986 Christine Chan

The objective of this thesis is to investigate the uses of computer aided learning (CAL) at the elementary level. Some recent publications on CAL are summarized and discussed. A questionnaire was used and interviews were conducted with elementary teachers in four chosen school districts in Vancouver and Toronto. From this field research, information was collected on teachers' perceptions of the use of CAL in the elementary classroom. This data is compared with observations presented in the relevant literature, and the comparison is discussed within Robert Taylor's framework of using the computer as tutor, tool, and tutee. Included are the results from the questionnaire. The thesis concludes with a discussion of the role of the teacher in the use of computers in the classroom, a flexible approach to adopting CAL, and possible areas for future research.

TR-86-08 Implementation of Team Shoshin: An Exercise in Porting and Multiprocess Structuring of the Kernel, March 1986 Huay-Yong Wang

Team Shoshin is an extension of Shoshin, a testbed for distributed software originally developed on the LSI 11/23s at the University of Waterloo. This thesis describes the implementation of Team Shoshin on the Sun Workstation. Because of the wide disparity in the underlying hardware, a major part of our initial development effort was to port Shoshin to the new hardware. The problems and design decisions faced during the porting effort, and how they were resolved, are discussed. The development of Team Shoshin has provided us with the opportunity to investigate the use of multiprocess structuring techniques at the kernel level. We describe the design and implementation of the proposed kernel multiprocess structure and the rationale behind it. The applicability of the proposed kernel multiprocess structure and its impact on operating system design are discussed, drawing on experience gained through the actual implementation.

TR-86-09 Precomplete Negation \& Universal Quantification, April 1986 Paul J. Voda

This paper is concerned with negation in logic programs. We propose to extend negation as failure by a stronger form of negation called precomplete negation. In contrast to negation as failure, precomplete negation has a simple semantic characterization given in terms of computational theories which deliberately abandon the law of the excluded middle (and thus classical negation) in order to attain computational efficiency. The computation with precomplete negation proceeds with the direct computation of negated formulas even in the presence of free variables. Negated formulas are computed in a mode which is dual to the standard positive mode of logic computations. With negation as failure, formulas with free variables must be delayed until those variables obtain values. Consequently, in situations where delayed formulas are never sufficiently instantiated, precomplete negation can find solutions unattainable with negation as failure. As a consequence of delaying, negation as failure cannot compute unbounded universal quantifiers whereas precomplete negation can. Instead of concentrating on the model-theoretical side of precomplete negation, this paper deals with questions of complete computations and efficient implementations.

TR-86-11 Model \& Solution Strategy for Placement of Rectangular Blocks in the Euclidean Plane, May 1986 Amir Alon and Uri Ascher

This paper describes a nonlinear optimization model for the placement of rectangular blocks with some wire connections among them in the Euclidean plane, such that the total wire length is minimized. Such a placement algorithm is useful as a CAD tool for VLSI and PCB layout designs.

The mathematical model presented here ensures that the blocks will not overlap and minimizes the sum of the distances of the interconnections of the blocks with respect to their orientation as well as their position. We also present mechanisms for solving more restrictive placement problems, including one in which there is a set of equally spaced, discrete angles to be used in the placement. The mathematical model is based on the Lennard-Jones 6-12 potential equation, on a sine wave shaped penalty function, and on minimizing the sum of the squares of the Euclidean distances of the block interconnections. We also present some experimental results which show that good placements are achieved with our techniques.
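
To make the shape of such an objective concrete, here is a small Python sketch combining a 6-12 potential between block centres with a squared wire-length term. It treats blocks as circles of a given radius and ignores orientation and the sine-shaped angle penalty, so it is only an illustration of the idea, not the model of the report.

\begin{verbatim}
import numpy as np

def placement_objective(centers, radii, nets, eps=1.0):
    # Toy objective: a Lennard-Jones 6-12 potential between every pair of
    # block centres discourages overlap (sigma is the "touching" distance),
    # and the squared Euclidean length of each connection is added.
    # Illustration only; the model in TR-86-11 also optimizes orientation.
    centers = np.asarray(centers, dtype=float)
    total = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d = np.linalg.norm(centers[i] - centers[j])
            sr6 = ((radii[i] + radii[j]) / d) ** 6
            total += 4.0 * eps * (sr6 ** 2 - sr6)        # 6-12 potential
    for i, j in nets:
        total += np.sum((centers[i] - centers[j]) ** 2)  # wire-length term
    return total

# Three blocks, one wire; feed this to any general nonlinear minimizer.
print(placement_objective([(0, 0), (3, 0), (0, 3)], [1.0, 1.0, 1.5], [(0, 1)]))
\end{verbatim}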

TR-86-12 Shape Analysis, May 1986 Robert J. Woodham

(Abstract not available on-line)

TR-86-13 Structuring Reliable Interactions in Distributed Server Architectures, January 1986 K. Ravindran and Samuel T. Chanson

(Abstract not available on-line)

TR-86-14 Reasoning with Incomplete Information: Investigations of Non-Monotonic Reasoning, January 1986 David W. Etherington

(Abstract not available on-line)

TR-86-15 Productive Sets and Constructively Nonpartial-Recursive Functions, August 1986 Akira Kanda

(Abstract not available on-line)

TR-86-16 On the Visual Discrimination of Self-Similar Random Textures, September 1986 R. Rensink

This work investigates the ability of the human visual system to discriminate self-similar Gaussian random textures. The power spectra of such textures are similar to themselves when rescaled by some factor $h > 1$. As such, these textures provide a natural domain for testing the hypothesis that texture perception is based on a set of spatial-frequency channels characterized by filters of similar shape.

Some general properties of self-similar random textures are developed. In particular, the relations between their covariance functions and power spectra are established, and are used to show that many self-similar random textures are stochastic fractals. These relations also lead to a simple texture-generation algorithm that allows independent and orthogonal variation of several properties of interest.

Several sets of psychophysical experiments are carried out to determine the statistical properties governing the discrimination of self-similar line textures. Results show that both the similarity parameter $H$ and the scaling ratio $h$ influence discriminability. These two quantities, however, are insufficient to completely characterize perceived texture.

The ability of the visual system to discriminate between various classes of self-similar random texture is analyzed using a simple multichannel model of texture perception. The empirical results are found to be compatible with the hypothesis that texture perception is mediated by the set of spatial-frequency channels putatively involved in form vision.
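
A standard way to generate textures of this kind (not necessarily the texture-generation algorithm mentioned above) is spectral synthesis: impose a power-law spectrum and randomize the phases. The following Python sketch does this for a one-dimensional line texture; the parameter choices are ours.

\begin{verbatim}
import numpy as np

def power_law_texture(n, beta, seed=0):
    # Generate an (approximately Gaussian) random line texture whose power
    # spectrum falls off as f**(-beta); such a spectrum keeps its shape when
    # the frequency axis is rescaled, i.e. the texture is self-similar.
    # For fBm-like lines, beta = 2H + 1.  Illustrative sketch only.
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)                    # 0 .. 0.5 cycles/sample
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # amplitude = sqrt(power)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    texture = np.fft.irfft(amp * np.exp(1j * phases), n)
    return texture / texture.std()                # fix the contrast

line = power_law_texture(1024, beta=2.0)
\end{verbatim}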

TR-86-17 Addition Requirements for Matrix \& Transposed Matrix Products, October 1986 M. Kaminski, David G. Kirkpatrick and N. H. Bshouty

Let $M$ be an $s \times t$ matrix and let $M^{T}$ be the transpose of $M$. Let {\bf x} and {\bf y} be $t$- and $s$-dimensional indeterminate column vectors, respectively. We show that any linear algorithm $A$ that computes $M${\bf x} has associated with it a natural dual linear algorithm, denoted $A^{T}$, that computes $M^{T}${\bf y}. Furthermore, if $M$ has no zero rows or columns then the number of additions used by $A^{T}$ exceeds the number of additions used by $A$ by exactly $s-t$. In addition, a strong correspondence is established between linear algorithms that compute the product $M{\bf x}$ and bilinear algorithms that compute the bilinear form ${\bf y}^{T}M{\bf x}$.
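
As a small worked example of the addition count (constructed here for illustration; it is not taken from the report), let $s=3$, $t=2$ and
\[
M = \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{array} \right), \qquad
M{\bf x} = \left( \begin{array}{c} x_{1}+x_{2} \\ x_{1} \\ x_{2} \end{array} \right), \qquad
M^{T}{\bf y} = \left( \begin{array}{c} y_{1}+y_{2} \\ y_{1}+y_{3} \end{array} \right).
\]
The obvious algorithm $A$ for $M{\bf x}$ uses one addition, while its dual $A^{T}$ for $M^{T}{\bf y}$ uses two; since $M$ has no zero rows or columns, the difference is exactly $s-t = 1$, as stated.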

TR-86-18 Conditioning of the Steady State Semiconductor Device Problem, January 1986 Uri Ascher, P. A. Markowich, C. Schmeiser, H. Steinruck and R. Weiss

When solving numerically the steady state semiconductor device problem using appropriate discretizations, extremely large condition numbers are often encountered for the linearized discrete device problem. These condition numbers are so large that, if they represented a sharp bound on the amplification of input errors, or even of roundoff errors, then the obtained numerical solution would be meaningless.

As it turns out, one major reason for these large numbers is poor row and column scaling, which is essentially harmless and/or can be fixed. But another reason could be an ill-conditioned device, which yields a true loss of significant digits in the numerical calculation.

In this paper we carry out a conditioning analysis for the steady state device problem. We consider various quasilinearizations as well as Gummel-type iterations and obtain stability bounds which may indeed allow ill-conditioning in general. These bounds are exponential in the potential variation, and are sharp e.g. for a thyristor. But for devices where each smooth subdomain has an Ohmic contact, e.g. a pn-diode, moderate bounds guaranteeing well-conditioning are obtained. Moreover, the analysis suggests how various row and column scalings should be applied in order for the measured condition numbers to correspond more realistically to the true loss of significant digits in the calculations.
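
The effect of row and column scaling on measured condition numbers can be seen on a generic example unrelated to the device equations; the Python sketch below, with arbitrarily chosen sizes and scaling ranges, hides a modestly conditioned matrix behind wild diagonal scalings and then largely recovers it by simple equilibration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)   # modestly conditioned
Dr = np.diag(10.0 ** rng.uniform(-8, 8, 50))          # wild row scaling
Dc = np.diag(10.0 ** rng.uniform(-8, 8, 50))          # wild column scaling
B = Dr @ A @ Dc                        # condition number blows up from scaling alone

def equilibrate(B):
    # Divide each row, then each column, by its largest absolute entry.
    B = B / np.abs(B).max(axis=1, keepdims=True)
    return B / np.abs(B).max(axis=0, keepdims=True)

print("cond(A)            = %.1e" % np.linalg.cond(A))
print("cond(Dr A Dc)      = %.1e" % np.linalg.cond(B))
print("cond(equilibrated) = %.1e" % np.linalg.cond(equilibrate(B)))
\end{verbatim}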

TR-86-19 On Collocation Implementation for Singularly Perturbed Two-Point Problems, November 1986 Uri Ascher and Simon Jacobs

We consider the numerical solution of singularly perturbed two-point boundary value problems in ordinary differential equations. Implementation methods for general purpose solvers of first order linear systems are examined, with the basic difference scheme being collocation at Gaussian points. Adaptive mesh selection is based on localized error estimates at the collocation points. These methods are implemented as modifications to the successful collocation code COLSYS, which was originally designed for mildly stiff problems only. Efficient high order approximations to extremely stiff problems are obtained, and comparisons show that the modified code performs increasingly better than the original COLSYS as the singular perturbation parameter gets small (i.e. the problem gets stiff), for both boundary layer and turning point problems.
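
The flavour of such problems can be reproduced with any modern collocation-based BVP code. The Python sketch below uses scipy's solve_bvp (a collocation method with adaptive mesh refinement, not COLSYS) on the model problem $\epsilon u'' + u' = 0$, $u(0)=0$, $u(1)=1$, which has a boundary layer of width $O(\epsilon)$ at $x=0$; the parameter values are chosen here only for illustration.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-3                                  # perturbation parameter (test value)

# eps*u'' + u' = 0, u(0)=0, u(1)=1, written as the first-order system
# y0 = u, y1 = u'; the exact solution has a layer of width eps at x = 0.
def fun(x, y):
    return np.vstack([y[1], -y[1] / eps])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(fun, bc, x, np.zeros((2, x.size)), tol=1e-6, max_nodes=100000)
print(sol.status, sol.x.size)               # mesh points crowd into the layer
\end{verbatim}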

TR-86-20 A Semi-Automatic Approach to Protocol Implementation --- The ISO Class 2 Transport Protocol as an Example, November 1986 Allen C. Lau

Formal Description Techniques (FDTs) for specifying communication protocols, and the adopted FDT standards such as Estelle, have opened the door to automating the implementation of a complex communication protocol directly from its specification. After a brief overview of the Estelle FDT, we present the basic ideas behind, and the problems encountered in, developing an Estelle compiler written in C, which accepts an Estelle specification of a protocol and produces a protocol implementation in C. The practicality of this tool --- the Estelle compiler --- has been examined via a semi-automatic implementation of the ISO class 2 Transport Protocol using the tool. A manual implementation of this protocol in C under UNIX 4.2bsd was also performed and compared with the semi-automatic implementation. We find that the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one. These advantages include correctness and modularity in the protocol implementation code and a reduction in implementation development time. In this thesis, we discuss our experience in using the semi-automatic approach to implement the ISO class 2 Transport Protocol.

TR-86-21 Handling Call Idempotency Issues in Replicated Distributed Programs, January 1986 K. Ravindran and Samuel T. Chanson

(Abstract not available on-line)

TR-86-22 Factors and Flows, November 1986 P. Hell and David G. Kirkpatrick

(Abstract not available on-line)

TR-86-23 An Environment Theory with Precomplete Negation over Pairs, November 1986 James H. Andrews

A formal semantics of Voda's Theory of Pairs is given which takes the natural-deduction form of Gilmore's first-order set theory. The complete proof theory corresponding to this semantics is given. Then, a logic programming system is described in the form of a computational proof theory for the Gilmore semantics. This system uses parallel disjunction and the technique of precomplete negation; these features are shown to make it more complete than conventional logic programming languages.

Finally, some alternative formulations are explored which would bring the logic programming system described closer to conventional systems. The semantic problems arising from these alternatives are explored.

Included in appendices are the proof of completeness of the complete proof theory, and the environment solution algorithm which is at the heart of precomplete negation over pairs.

TR-86-24 Compiling Functional Programming Constructs to a Logic Engine, November 1986 Harvey Abramson and Peter Ludemann

In this paper we consider how various constructs used in functional programming can be efficiently translated to code for a Prolog engine (designed by L\"{u}demann) similar in spirit, but not identical, to the Warren machine. It turns out that this Prolog engine, which contains a delay mechanism designed to permit coroutining and a sound implementation of negation, is sufficient to handle all common functional programming constructs; indeed, such a machine has about the same efficiency as a machine designed specifically for a functional programming language. This machine has been fully implemented.

TR-86-25 Efficiently Implementing Pure Prolog or: Not ``YAWAM'', November 1986 Peter Ludemann

High performance hardware and software implementations of Prolog are now being developed by many people, using the Warren Abstract Machine (or ``WAM'') [Warr83]. We have designed a somewhat different machine which supports a more powerful language than Prolog, featuring:
\begin{itemize}
\item efficiency similar to the WAM for sequential programs,
\item tail recursion optimization (TRO) [Warr86],
\item sound negation,
\item pseudo-parallelism (co-routining) with full backtracking,
\item dynamic optimization of clause order,
\item efficient {\em if-then-else} (``shallow'' backtracking),
\item a simple, regular instruction set designed for easily optimized compilation,
\item efficient memory utilization,
\item integrated object-oriented virtual memory,
\item predicates as first class objects.
\end{itemize}

Our design gives the programmer more flexibility in designing programs than is provided by standard Prolog, yet it retains the efficiency of more limited designs.

TR-86-26 Probabilistic Solitude Detection on Rings of Known Size, December 1986 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Upper and lower bounds that match to within a constant factor are found for the expected bit complexity of a problem on asynchronous unidirectional rings of known size $n$, for algorithms that must reach a correct conclusion with probability at least $1 - \epsilon$ for some small preassigned $\epsilon \geq 0$. The problem is for a nonempty set of contenders to determine whether there is precisely one contender. If distributive termination is required, the expected bit complexity is \( \Theta (n \min ( \log \nu (n) + \sqrt{ \log \log (1 / \epsilon)}, \sqrt{ \log n}, \log \log (1 / \epsilon))) \), where $ \nu (n) $ is the least nondivisor of $n$. For nondistributive termination, $ \sqrt{\log \log (1 / \epsilon)} $ and $ \sqrt{\log n}$ are replaced by $\log \log \log (1 / \epsilon)$ and $\log \log n$ respectively. The lower bounds hold even for probabilistic algorithms that exhibit some nondeterministic features.
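
The quantity $\nu(n)$ in the bound is the least positive integer that does not divide $n$; it grows extremely slowly with $n$, which is what keeps the $\log \nu(n)$ term small. A trivial Python check (the sample values of $n$ are ours):

\begin{verbatim}
def least_nondivisor(n):
    # nu(n): the smallest integer k >= 2 that does not divide n.
    k = 2
    while n % k == 0:
        k += 1
    return k

for n in (12, 360, 2520, 720720, 10**6):
    print(n, least_nondivisor(n))   # 5, 7, 11, 17, 3
\end{verbatim}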


If you have any questions or comments regarding this page please send mail to help@cs.ubc.ca.