Technical Reports

The ICICS/CS Reading Room

1987 UBC CS Technical Report Abstracts

TR-87-01 Analytic Method for Radiometric Correction of Satellite Multispectral Scanner Data, January 1987 R. J. Woodham and M. H. Gray

The problem of radiometric correction of multispectral scanner data is posed as the problem of determining an intrinsic reflectance factor characteristic of the surface material being imaged and invariant to topography, position of the sun, atmosphere and position of the viewer. A scene radiance equation for remote sensing is derived based on an idealized physical model of image formation. The scene radiance equation is more complex for rugged terrain than for flat terrain since it must model slope, aspect and elevation dependent effects. Scene radiance is determined by the bidirectional reflectance distribution function (BRDF) of the surface material and the distribution of light sources. The sun is treated as a collimated source and the sky is treated as a uniform hemispherical source. The atmosphere is treated as an optically thin, horizontally uniform layer. The limits of this approach are reviewed using results obtained with Landsat MSS images and a digital terrain model (DTM) of a test site near St. Mary Lake, British Columbia, Canada.

New results, based on regression analysis, are described for the St. Mary Lake site. Previous work is extended to take advantage of explicit forest cover data and to consider numeric models of sky radiance. The calculation of sky irradiance now takes occlusion by adjacent terrain into account. The results for St. Mary Lake suggest that the cosine of the incident solar angle and elevation are the two most important correction terms. Skylight and inter-reflection from adjacent terrain, however, also are significant.
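
The dominant correction term identified above, the cosine of the incident solar angle, is computed from terrain slope, aspect, and sun position by a standard formula from the topographic-correction literature. The sketch below is illustrative; the variable names and example are not taken from the report's implementation.

```python
import math

def cos_incidence(slope_deg, aspect_deg, sun_zenith_deg, sun_azimuth_deg):
    """Cosine of the solar incidence angle on a tilted surface element.

    Standard terrain-illumination formula:
        cos(i) = cos(z)cos(s) + sin(z)sin(s)cos(phi_sun - phi_surface)
    where z is the solar zenith angle, s the terrain slope, and the
    phi's are the solar and surface (aspect) azimuths.
    """
    s = math.radians(slope_deg)
    z = math.radians(sun_zenith_deg)
    da = math.radians(sun_azimuth_deg - aspect_deg)
    return math.cos(z) * math.cos(s) + math.sin(z) * math.sin(s) * math.cos(da)

# Flat terrain: the incidence angle is just the solar zenith angle.
print(round(cos_incidence(0.0, 0.0, 30.0, 180.0), 3))  # 0.866
```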

TR-87-02 A Schema \& Constraint-Based Representation to Understanding Natural Language, January 1987 Eliza Wing-Mun Kuttner

This thesis attempts to represent the syntax and semantics of English sentences using a schema and constraint-based approach. In this approach, syntactic and semantic knowledge, represented by schemata, are processed in parallel using network consistency techniques and an augmented version of Earley's context-free parsing algorithm. A sentence's syntax and semantics are disambiguated incrementally as the interpretation proceeds left to right, word by word. Each word and each recognized grammatical constituent provides additional information that helps guide the interpretation process.

It is desirable to apply network consistency techniques and schema-based knowledge representations to understanding natural language, since the former have been proven to be quite efficient and the latter provide modularity in representing knowledge. In addition, this approach is appealing because it can cope with ambiguities in an efficient manner. Multiple interpretations are retained if ambiguity exists as indicated by the words processed so far. However, incorrect interpretations are eliminated as soon as their inappropriateness is discovered. Thus backtracking search, which is known to be inefficient, is avoided.
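
The network consistency techniques the thesis builds on can be illustrated by the canonical arc consistency procedure AC-3, sketched below on a toy numeric constraint network. The data structures and example are assumptions chosen for illustration, not the thesis's actual representation.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 arc consistency: delete values with no supporting partner.

    domains: var -> set of values (mutated in place).
    constraints: (x, y) -> predicate(vx, vy); include both directed
    arcs of each symmetric constraint.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        revised = False
        for vx in set(domains[x]):
            if not any(pred(vx, vy) for vy in domains[y]):
                domains[x].discard(vx)   # vx has no support in y's domain
                revised = True
        if revised:                      # re-examine arcs pointing into x
            queue.extend((u, w) for (u, w) in constraints
                         if w == x and u != y)
    return domains

# Toy network: x < y over small integer domains.
d = {"x": {1, 2, 3}, "y": {1, 2, 3}}
c = {("x", "y"): lambda a, b: a < b,
     ("y", "x"): lambda a, b: b < a}
print(ac3(d, c))  # x can no longer be 3; y can no longer be 1
```

As in the thesis's incremental interpretation, values (interpretations) are discarded as soon as their inappropriateness is discovered, without backtracking.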

TR-87-03A Application-Driven Failure Semantics of Interprocess Communication in Distributed Programs, June 1988 K. Ravindran, Samuel T. Chanson and Ramakrishnam

Distributed systems are often modelled after the client-server paradigm where resources are managed by servers, and clients communicate with servers for operations on the resources. These client-server communications fall into two categories --- connection-oriented and connection-less, depending on whether the servers maintain state information about the clients or not. Additionally, each of the servers may itself be distributed, i.e., structured as a group of identical processes; these processes communicate with one another to manage shared resources (intra-server communications). Thus, the activities of a distributed program may be viewed as a sequence of client-server communications interspersed with intra-server communications.

In this paper, we identify suitable interprocess communication (IPC) abstractions for such communications --- remote procedure calls for client-server communications and {\em application-driven shared variables} (a shared memory-like abstraction) for intra-server group communication. We specify the properties of these abstractions to handle partial failures that may occur during program execution.

The issues of orphans and consistency arising from partial failures are examined, along with solution techniques to be incorporated in the run-time system. Examples are given to illustrate the use of these abstractions as primitives for constructing distributed programs.

TR-87-03 Failure Transparency in Remote Procedure Calls, 1987 K. Ravindran and Samuel T. Chanson

Remote procedure call (RPC) is a communication abstraction widely used in distributed programs. The general premise embedded in existing approaches to handling machine and communication failures during RPC is that the applications which interface to the RPC layer cannot tolerate the failures. This premise manifests as a top-level constraint on the failure recovery algorithms used in the RPC layer in these approaches. Our premise, however, is that applications can tolerate certain types of failures under certain situations. This may in turn relax the top-level constraint on failure recovery algorithms and allow the inherent tolerance of applications to failures to be exploited in a systematic way to simplify failure recovery. Motivated by this premise, the paper presents a model of RPC. The model reflects certain generic properties of the application layer that may be exploited by the RPC layer during failure recovery. Based on the model, a new technique of adopting orphans caused by failures is described. The technique minimizes the rollback which may be required in orphan-killing techniques. Algorithmic details of the adoption technique are described, followed by a quantitative analysis. The model has been implemented as a prototype on a local area network. The simplicity and generality of the failure recovery render the RPC model useful in distributed systems, particularly those that are large and heterogeneous and hence have complex failure modes.

TR-87-04 Adequacy Criteria for Visual Knowledge Representation, January 1987 Alan K. Mackworth

(Abstract not available on-line)

TR-87-05 Stable Representation of Shape, February 1987 R. J. Woodham

(Abstract not available on-line)

TR-87-06 Semi-Automatic Implementation of Protocols Using an Estelle-C Compiler, March 1987 Son T. Vuong, Allen Chakming Lau and Robin Isaac Man-Hang Chan

In this paper, we present the basic ideas underlying an {\em Estelle-C} compiler, which accepts an {\em Estelle} protocol specification and produces a protocol implementation in C. We discuss our experience gained from using the semi-automatic approach to implement the ISO class 2 transport protocol. A manual implementation of the protocol is performed and compared with the semi-automatic implementation. We find the semi-automatic approach to protocol implementation offers several advantages over the conventional manual one, including correctness and modularity in protocol implementation code, conformance to the specification and reduction in implementation time. Finally, we present our ongoing development of a new {\em Estelle-C} compiler.

TR-87-07 The Set Conceptual Model and the Domain Graph Method of Table Design, March 1987 Paul C. Gilmore

A purely set-based conceptual model SET is described along with a specification/query language DEFINE. SET is intended for modelling all phases of database design and data processing. The model for an enterprise is its set schema, consisting of all the sets that are declared for it.

The domain graph method of table design translates the set schema for an enterprise into a table schema in which each table is a defined user view declared as a set in DEFINE. But for one initial step, the method can be fully automated. The method makes no use of normalization.

Two kinds of integrity constraints are supported in the SET model. Although these constraints take a simple form in the set schema for an enterprise, they can translate into referential integrity constraints for the table schema of a complexity not previously considered.

The simplicity of the constraints supported in the SET model, together with the simplicity of the domain graph table design method, suggests that a conceptual view of an enterprise provided by the SET model is superior to the lower level data presentation view provided by the relational model.

TR-87-08 Probabilistic Solitude Detection I: Ring Size Known Approximately, March 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Matching upper and lower bounds for the bit complexity of a problem on asynchronous unidirectional rings are established, assuming that algorithms must reach a correct conclusion with probability $1 - \epsilon$, for some $\epsilon > 0$. Processors can have identities, but the identities are not necessarily distinct. The problem is that of a distinguished processor determining whether it is the only distinguished processor. The complexity depends on the processors' knowledge of the size $n$ of the ring. When no upper bound is known, only nondistributive termination is possible, and $\Theta (n \log (1 / \epsilon)) $ bits are necessary and sufficient. When only an upper bound $N$ is known, distributive termination is possible, but the complexity of achieving distributive termination is $ \Theta (n \sqrt{\log ( \frac{N}{n})} + n \log (\frac{1}{\epsilon}))$. When processors know that $(\frac{1}{2} + \rho)N \leq n \leq N$ for $\rho > 0$, then the bound drops to $\Theta (n \log \log (\frac{1}{\epsilon}) + n \log (\frac{1}{\rho}))$, for both distributive and nondistributive termination, for sufficiently large $N$.

TR-87-09 Justification and Applications of the Set Conceptual Model, April 1987 Paul C. Gilmore

In an earlier paper, the SET conceptual model was described, along with the domain graph method of table design. In this paper a justification for the method is provided, and a simple condition is shown to be sufficient for the satisfaction of the degree constraints of a set schema. The basis for the consistency of the model is also described. Applications of the SET model to the full range of data processing are suggested, as well as to the problems raised by incomplete information.

TR-87-10 A Foundation for the Entity Relationship Model: Why \& How, April 1987 Paul C. Gilmore

(Abstract not available on-line)

TR-87-11 Probabilistic Solitude Detection II: Ring Size Known Exactly, April 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Upper and lower bounds that match to within a constant factor are found for the expected bit complexity of a problem on asynchronous unidirectional rings of known size $n$, for algorithms that must reach a correct conclusion with probability at least $1 - \epsilon$ for some small preassigned $\epsilon \geq 0$. The problem is for a nonempty set of contenders to determine whether there is precisely one contender. If distributive termination is required, the expected bit complexity is \( \Theta (n \min ( \log \nu (n) + \sqrt{\log \log (\frac{1}{\epsilon})}, \sqrt{\log n}, \log \log (\frac{1}{\epsilon}))) \), where $\nu (n)$ is the least nondivisor of $n$. For nondistributive termination, $ \sqrt{\log \log (\frac{1}{\epsilon})}$ and $\sqrt{\log n}$ are replaced by $\log \log \log(\frac{1}{\epsilon})$ and $\log \log n$ respectively. The lower bounds hold even for probabilistic algorithms that exhibit some nondeterministic features.

TR-87-12 Establishing Order in Planar Subdivisions, May 1987 David G. Kirkpatrick

A planar subdivision is the partition of the plane induced by an embedded planar graph. A representation of such a subdivision is {\em ordered} if, for each vertex $\upsilon$ of the associated graph $G$, the (say) clockwise sequence of edges in the embedding of $G$ incident with $\upsilon$ appears explicitly.

The worst-case complexity of establishing order in a planar subdivision --- i.e. converting an unordered representation into an ordered one --- is shown to be \( \Theta (n + \log \lambda(G)) \), where $n$ is the size (number of vertices) of the underlying graph $G$ and $\lambda (G)$ is the number of topologically distinct embeddings of $G$ in the plane.
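
When vertex coordinates are available, establishing order reduces to an angular sort around each vertex, as the sketch below shows; the $\Theta (n + \log \lambda(G))$ bound above concerns the harder, purely combinatorial setting in which no coordinates are given. All names in the sketch are illustrative.

```python
import math

def order_edges(coords, adj):
    """Sort each vertex's incident edges into clockwise angular order.

    coords: vertex -> (x, y);  adj: vertex -> neighbours in arbitrary
    order.  Sorting by descending atan2 angle yields a clockwise
    sequence, starting from the negative x-axis direction.
    """
    ordered = {}
    for v, nbrs in adj.items():
        xv, yv = coords[v]
        ordered[v] = sorted(
            nbrs,
            key=lambda u: -math.atan2(coords[u][1] - yv, coords[u][0] - xv))
    return ordered

coords = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0)}
print(order_edges(coords, {0: [1, 2, 3]}))  # {0: [3, 2, 1]}
```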

TR-87-13 Parallel Construction of Subdivision Hierarchies, May 1987 Norm Dadoun and David G. Kirkpatrick

A direct, simple and general parallel algorithm is described for the preprocessing of a planar subdivision for fast (sequential) search. In essence, the hierarchical subdivision search structure described by Kirkpatrick [K] is constructed in parallel. The method relies on an efficient parallel algorithm for constructing large independent sets in planar graphs. This is accomplished by a simple reduction to the same problem for lists.

Applications to the manipulation of convex polyhedra are described including an \( O(\log^{2}n \log^{*}n) \) parallel time algorithm for constructing the convex hull of $n$ points in $R^{3}$ and an \( O( \log n \log^{*}n) \) parallel time algorithm for detecting the separation of convex polyhedra.

TR-87-14 A Simple Optimal Parallel List Ranking Algorithm, May 1987 Karl Abrahamson, Norm Dadoun, David G. Kirkpatrick and Teresa Maria Przytycka

We describe a randomized parallel algorithm to solve list ranking in $O(\log n)$ expected time using $n/ \log n$ processors, where $n$ is the length of the list. The algorithm requires considerably less load rebalancing than previous algorithms.
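
The abstract does not spell out the randomized algorithm itself. As background, the classic deterministic pointer-jumping scheme ($n$ processors, $O(\log n)$ synchronous rounds) can be simulated sequentially as follows; this is standard material, not the authors' processor-efficient algorithm.

```python
def list_rank(succ):
    """Rank list nodes by distance to the tail via pointer jumping.

    succ[i] is the successor of node i; the tail points to itself.
    Each pass simulates one synchronous parallel round, doubling the
    distance covered by every pointer, so O(log n) rounds suffice.
    """
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    succ = list(succ)
    for _ in range(n.bit_length()):  # ceil(log2 n) rounds are enough
        rank = [rank[i] + rank[succ[i]] for i in range(n)]
        succ = [succ[succ[i]] for i in range(n)]
    return rank

print(list_rank([1, 2, 3, 3]))  # [3, 2, 1, 0]
```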

TR-87-15 A Parallel Algorithm for Finding Maximal Independent Sets in Planar Graphs, June 1987 Norm Dadoun and David G. Kirkpatrick

The problem of constructing parallel algorithms for finding Maximal Independent Sets in graphs has received considerable attention. In the case that the given graph is planar, the simple efficient parallel algorithm described here may be employed. The method relies on an efficient parallel algorithm for constructing large independent sets in bounded degree graphs. This is accomplished by a simple reduction to the same problem for lists.

Using a linear number of EREW processors, the algorithm identifies a maximal independent set in an arbitrary planar graph in O$(\log n \log^{*} n)$ parallel time. A randomized version of the algorithm runs in O$(\log n)$ expected parallel time.
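
For contrast, the straightforward sequential method is a greedy scan, sketched below; it is inherently sequential (each decision depends on all earlier ones), which is precisely why parallel MIS algorithms such as the one above are nontrivial. The example graph is illustrative.

```python
def maximal_independent_set(adj):
    """Sequential greedy maximal independent set.

    Scans vertices once, adding any vertex whose neighbourhood is not
    yet blocked.  The result is maximal: every vertex is either in the
    set or adjacent to a member.
    """
    mis, blocked = set(), set()
    for v in adj:
        if v not in blocked:
            mis.add(v)               # v joins the independent set
            blocked.add(v)
            blocked.update(adj[v])   # its neighbours can never join
    return mis

# Path 0 - 1 - 2 - 3: greedy picks the even vertices.
print(maximal_independent_set({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
```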

TR-87-19 Time-Space Tradeoffs for Branching Programs Contrasted With Those for Straight-Line Programs, June 1987 Karl Abrahamson

This paper establishes time-space tradeoffs for some algebraic problems in the branching program model, including convolution of vectors, matrix multiplication, matrix inversion, computing the product of three matrices and computing $PAQ$ where $P$ and $Q$ are permutation matrices. While some of the results agree well with known results for straight-line programs, one of them (for matrix multiplication) is surprisingly stronger, and one (for computing $PAQ$) is necessarily weaker. Some of the tradeoffs are proved for expected time and space, where all inputs are equally likely.

TR-87-20 Randomized Function Evaluation on a Ring, May 1987 Karl Abrahamson, Andrew Adler, Lisa Higham and David G. Kirkpatrick

Let $R$ be a unidirectional asynchronous ring of $n$ processors each with a single input bit. Let $f$ be any cyclic non-constant function of $n$ boolean variables. Moran and Warmuth [8] prove that any deterministic algorithm for $R$ that evaluates $f$ has communication complexity $\Omega (n \log n)$ bits. They also construct a cyclic non-constant boolean function that can be evaluated in $O(n \log n)$ bits by a deterministic algorithm.

This contrasts with the following new results: \begin{enumerate} \item There exists a cyclic non-constant boolean function which can be evaluated with expected complexity $O (n \sqrt{\log n})$ bits by a randomized algorithm for $R$. \item Any nondeterministic algorithm for $R$ which evaluates any cyclic non-constant function has communication complexity $\Omega (n \sqrt{\log n})$ bits. \end{enumerate}

TR-87-21 Knowledge Structuring \& Constraint Satisfaction: the Mapsee Approach, June 1987 Jan A. Mulder, Alan K. Mackworth and William S. Havens

This paper shows how to integrate constraint satisfaction techniques with schema-based representations for visual knowledge. This integration is discussed in a progression of three sketch map interpretation programs: Mapsee-l, Mapsee-2, and Mapsee-3. The programs are evaluated by the criteria of descriptive and procedural adequacy. The evaluation indicates that a schema-based representation used in combination with a hierarchical arc consistency algorithm constitutes a modular, efficient, and effective approach to the structured representation of visual knowledge. The schemata used in this representation are embedded in composition and specialization hierarchies. Specialization hierarchies are further expanded into discrimination graphs.

TR-87-24 The Logic of Depiction, June 1987 Raymond Reiter and Alan K. Mackworth

We propose a theory of depiction and interpretation that formalizes image domain knowledge, scene domain knowledge and the depiction mapping between the image and scene domains. This theory is illustrated by specifying some general knowledge about maps, geographic objects and their depiction relationships in first order logic with equality.

An interpretation of an image is defined to be a logical model of the general knowledge and a description of that image. For the simple map world we show how the task level specification may be refined to a provably correct implementation by invoking model preserving transformations on the logical representation. In addition, we sketch logical treatments for querying an image, incorporating contingent scene knowledge into the interpretation process, occlusion, ambiguous image descriptions, and composition.

This approach provides a formal framework for analyzing existing systems such as Mapsee, and for understanding the use of constraint satisfaction techniques. It also can be used to design and implement vision and graphics systems that are correct with respect to the task and algorithm levels.

TR-87-25 On the Modality of Convex Polygons, June 1987 Karl Abrahamson

Under two reasonable definitions of random convex polygons, the expected modality of a random convex polygon grows without bound as the number of vertices grows. This refutes a conjecture of Aggarwal and Melville.

TR-87-26 Formalizing Attribution by Default, July 1987 Paul C. Gilmore

Attribution by default occurs when, in the absence of information to the contrary, an entity is assumed to have a property. The provision of information to the contrary results in the withdrawal of the attribution. It has been argued that classical logic cannot formalize such commonsense reasoning and that the development of a nonmonotonic logic is necessary. Evidence is offered in this note that this is not the case for some important defaults.

TR-87-27 On Numerical Differential Algebraic Problems with Application to Semiconductor Device Simulation, July 1987 Uri Ascher

This paper considers questions of conditioning of and numerical methods for certain differential algebraic equations subject to initial and boundary conditions. The approach taken is that of separating ``differential'' and ``algebraic'' solution components, at least theoretically.

This yields conditioning results for differential algebraic boundary value problems in terms of ``pure'' differential problems, for which existing theory is well-developed. We carry the process out for problems with (global) index 1 or 2.

For semi-explicit boundary value problems of index 1 (where solution components are separated) we give a convergence theorem for a special class of collocation methods. For general index 1 problems we discuss advantages and disadvantages of certain symmetric difference schemes. For initial value problems with index 2 we discuss the use of BDF schemes, summarizing conditions for their successful and stable utilization.
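
As a concrete illustration of a BDF scheme on a semi-explicit index-1 problem, here is BDF1 (backward Euler) on a toy DAE of the paper's general form; the problem and names are chosen for illustration and are not taken from the report.

```python
import math

def backward_euler_dae(y0, h, steps):
    """BDF1 (backward Euler) applied to the semi-explicit index-1 DAE

        y' = z,        0 = y + z,

    a toy problem whose exact solution is y(t) = y0*exp(-t).  Each step
    solves the coupled system for the new (y, z); eliminating z through
    the algebraic constraint gives y_{n+1} = y_n / (1 + h).
    """
    y, z = y0, -y0
    for _ in range(steps):
        y = y / (1.0 + h)   # implicit step, using z_{n+1} = -y_{n+1}
        z = -y              # the constraint holds exactly at each step
    return y, z

y, z = backward_euler_dae(1.0, 0.01, 100)
print(round(y, 3))  # close to exp(-1) ~= 0.368; the method is first-order
```

Note that the algebraic component $z$ is recovered exactly at every step from the constraint, which is the sense in which "differential" and "algebraic" components can be treated separately.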

Finally, the present considerations and analysis are applied to two problems involving differential algebraic equations which arise in semiconductor device simulation.

TR-87-28 General Framework, Stability and Error Analysis for Numerical Stiff Boundary Value Methods, July 1987 Uri Ascher and R. M. Mattheij

This paper provides a general framework, called ``theoretical multiple shooting'', within which various numerical methods for stiff boundary value ordinary differential problems can be analyzed. A global stability and error analysis is given, allowing (as much as possible) the specificities of an actual numerical method to come in only locally. We demonstrate the use of our results for both one-sided and symmetric difference schemes. The class of problems treated includes some with internal (e.g. ``turning point'') layers.

TR-87-29 Update on Computational Vision: Shape Representation, Object Recognition \& Constraint Satisfaction --- replaced, see 89-12, July 1987 Alan K. Mackworth

(Abstract not available on-line)

TR-87-30 A Parallel Tree Contraction Algorithm, August 1987 Karl Abrahamson, Norm Dadoun, David G. Kirkpatrick and Teresa Maria Przytycka

A simple reduction from the tree contraction problem to the list ranking problem is presented. The reduction takes O$(\log n)$ time for a tree with $n$ nodes, using O$(n / \log n)$ EREW processors. Thus tree contraction can be done as efficiently as list ranking.

A broad class of parallel tree computations to which the tree contraction techniques apply is described. This subsumes earlier characterizations. Applications to the computation of certain properties of cographs are presented in some detail.

TR-87-31 Concepts \& Methods for Database Design, August 1987 Paul C. Gilmore

This report consists of drafts of chapters of a book prepared as course material for CSCI 404 at the University of British Columbia.

TR-87-32 Generalized LL(K) grammars for Concurrent Logic Programming Languages, October 1987 Harvey Abramson

We examine the compilation of LL(k) deterministic context-free grammars to Horn clause logic programs, and the sequential and concurrent execution of these programs. In the sequential case, one is able to take advantage of the determinism to eliminate the generation of unnecessary backtracking information during execution of the compiled logic program. In the concurrent case, grammar rules are simply and directly translated to clauses of Concurrent Prolog, Parlog, or Guarded Horn Clause programs, allowing grammatical processing directly in the setting of committed or ``don't care'' nondeterminism. LL(k) grammar rules are generalized so that grammatical processing of streams involving derivations of infinite length is possible. A top-down analogue of Marcus's deterministic parser is a possible application of these generalized LL(k) grammars.
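
The determinism being exploited can be seen in a toy LL(1) grammar compiled to one procedure per nonterminal: a single token of lookahead selects the production, so no backtracking information is ever generated. The sketch below is a hand translation for illustration (in Python rather than a logic program), not the compiler's output.

```python
def parse_S(s, i=0):
    """Nonterminal S of the LL(1) grammar   S -> 'a' S 'b' | epsilon.

    One procedure per nonterminal; the lookahead token alone picks the
    production, mirroring the committed-choice execution of the
    compiled clauses.
    """
    if i < len(s) and s[i] == "a":   # lookahead 'a' selects S -> a S b
        i = parse_S(s, i + 1)
        if i >= len(s) or s[i] != "b":
            raise SyntaxError("expected 'b'")
        return i + 1
    return i                         # any other lookahead selects S -> epsilon

def accepts(s):
    try:
        return parse_S(s) == len(s)
    except SyntaxError:
        return False

print(accepts("aabb"), accepts("aab"))  # True False
```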

TR-87-33 Towards an Expert System for Compiler Development, October 1987 Harvey Abramson

(Abstract not available on-line)

TR-87-34 The Design \& Control of Visual Routines for the Computation of Simple Geometric Properties \& Relations, October 1987 Marc H. J. Romanycia

The present work is based on the Visual Routine theory of Shimon Ullman. This theory holds that efficient visual perception is managed by first applying spatially parallel methods to an initial input image in order to construct the basic representation-maps of features within the image. Then, this phase is followed by the application of serial methods --- visual routines --- which are applied to the most salient items in these and other subsequently created maps.

Recent work in the visual routine tradition is reviewed, as well as relevant psychological work on preattentive and attentive vision. An analysis is made of the problem of devising a visual routine language for computing geometric properties and relations. The most useful basic representations to compute directly from a world of 2-D geometric shapes are determined. An argument is made for the case that an experimental program is required to establish which basic operations and which methods for controlling them will lead to the efficient computation of geometric properties and relations.

A description is given of an implemented computer system which can correctly compute, in images of simple 2-D geometric shapes, the properties {\em vertical}, {\em horizontal}, {\em closed}, and {\em convex}, and the relations {\em inside}, {\em outside}, {\em touching}, {\em centred-in}, {\em connected}, {\em parallel}, and {\em being-part-of}. The visual routines which compute these, the basic operations out of which the visual routines are composed, and the important logic which controls the goal-directed application of the routines to the image are all described in detail. The entire system is embedded in a Question-and-Answer system which is capable of answering questions of an image, such as ``Find all the squares inside triangles'' or ``Find all the vertical bars outside of closed convex shapes.'' By asking many such questions about various test images, the effectiveness of the visual routines and their controlling logic is demonstrated.

TR-87-35 A Default Logic Approach to the Derivation of Natural Language Presuppositions, October 1987 Robert E. Mercer

(Abstract not available on-line)

TR-87-36 An Estelle-C Compiler for Automatic Protocol Implementation, November 1987 Robin Isaac Man-Hang Chan

Over the past few years, much experience has been gained in semi-automatic protocol implementation using an existing Estelle-C compiler developed at the University of British Columbia. However, with the continual evolution of the Estelle language, that compiler is now obsolete. The present study found syntactic and semantic differences between the Estelle language as implemented by the existing compiler and that specified in the latest ISO document substantial enough to warrant the construction of a new Estelle-C compiler. The result is a new compiler which translates Estelle as defined in the second version of the ISO Draft Proposal 9074 into the programming language C. The new Estelle-C compiler addresses issues such as dynamic reconfiguration of modules and maintenance of priority relationships among nested modules. A run-time environment capable of supporting the new Estelle features is also presented. The implementation strategy used in the new Estelle-C compiler is illustrated by using the alternating bit protocol found in the ISO Draft Proposal 9074 document.

TR-87-37 The Renormalized Curvature Scale Space and the Evolution Properties of Planar Curves, November 1987 Alan K. Mackworth and Farzin Mokhtarian

The Curvature Scale Space Image of a planar curve is computed by convolving a path-based parametric representation of the curve with a Gaussian function of variance $\sigma^{2}$, extracting the zeroes of curvature of the convolved curves and combining them in a scale space representation of the curve. For any given curve $\Gamma$, the process of generating the ordered sequence of curves $\{ \Gamma_{\sigma} \mid \sigma \geq 0 \}$ is known as the evolution of $\Gamma$.

It is shown that the normalized arc length parameter of a curve is, in general, not the normalized arc length parameter of a convolved version of that curve. A new method of computing the curvature scale space image reparametrizes each convolved curve by its normalized arc length parameter. Zeroes of curvature are then expressed in that new parametrization. The result is the Renormalized Curvature Scale Space Image and is more suitable for matching curves similar in shape.
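
The evolution process can be sketched numerically: smooth the coordinate functions with a sampled Gaussian and track the sign of curvature along the convolved curve. The sketch below omits the renormalization step introduced above, and all names and the test curve are illustrative.

```python
import math

def gaussian_smooth(vals, sigma):
    """Circular convolution of a sampled signal with a Gaussian kernel."""
    n, w = len(vals), int(3 * sigma)
    kernel = [math.exp(-k * k / (2.0 * sigma * sigma)) for k in range(-w, w + 1)]
    z = sum(kernel)
    return [sum(kernel[k + w] * vals[(i + k) % n] for k in range(-w, w + 1)) / z
            for i in range(n)]

def curvature_signs(xs, ys):
    """Sign of curvature at each sample of a closed curve (central differences)."""
    n, out = len(xs), []
    for i in range(n):
        xm, xp, ym, yp = xs[i - 1], xs[(i + 1) % n], ys[i - 1], ys[(i + 1) % n]
        dx, dy = (xp - xm) / 2.0, (yp - ym) / 2.0        # first derivatives
        ddx, ddy = xp - 2 * xs[i] + xm, yp - 2 * ys[i] + ym  # second derivatives
        out.append(math.copysign(1.0, dx * ddy - dy * ddx))
    return out

# A circle stays convex under evolution: no curvature zero-crossings appear.
ts = [2 * math.pi * i / 200 for i in range(200)]
xs = gaussian_smooth([math.cos(t) for t in ts], 3.0)
ys = gaussian_smooth([math.sin(t) for t in ts], 3.0)
s = curvature_signs(xs, ys)
print(len(set(s)))  # 1: the sign never changes
```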

Scaling properties of planar curves and the curvature scale space image are also investigated. It is shown that no new curvature zero-crossings are created at the higher scales of the curvature scale space image of a planar curve in $C_{2}$ if the curve remains in $C_{2}$ during evolution. Several positive and negative results are presented on the preservation of various properties of planar curves under the evolution process. Among these results is the fact that every polynomially represented planar curve in $C_{2}$ intersects itself just before forming a cusp point during evolution.

TR-87-38 Multi-Scale Description of Space Curves and Three-Dimensional Objects, November 1987 Farzin Mokhtarian

This paper addresses the problem of representing the shape of three-dimensional or space curves. This problem is important since space curves can be used to model the shape of many three-dimensional objects effectively and economically. A number of shape representation methods that operate on two-dimensional objects and can be extended to apply to space curves are reviewed briefly and their shortcomings discussed.

Next, the concepts of curvature and torsion of a space curve are explained. The curvature and torsion functions of a space curve specify it uniquely up to rotation and translation. Arc-length parametrization followed by Gaussian convolution is used to compute curvature and torsion on a space curve at varying levels of detail. Larger values of the scale parameter of the Gaussian bring out more basic features of the curve. Information about the curvature and torsion of the curve over a continuum of scales is combined to produce the curvature and torsion scale space images of the curve. These images are essentially invariant under rotation, uniform scaling and translation of the curve and are used as a representation for it. Using this representation, a space curve can be successfully matched to another one of similar shape.
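
For a regular space curve $r(t)$, the curvature and torsion functions mentioned above are given by the standard differential-geometry formulas (stated here for reference; the notation is not taken from the paper):

```latex
\kappa(t) = \frac{\lVert r'(t) \times r''(t) \rVert}{\lVert r'(t) \rVert^{3}},
\qquad
\tau(t) = \frac{\bigl( r'(t) \times r''(t) \bigr) \cdot r'''(t)}
               {\lVert r'(t) \times r''(t) \rVert^{2}}
```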

The application of this technique to a common three-dimensional object is demonstrated. Finally, the proposed representation is evaluated according to several criteria that any shape representation method should ideally satisfy. It is shown that the curvature and torsion scale space representation satisfies those criteria better than other possible candidate methods.

TR-87-39 Advanced Topics in Automated Deduction, November 1987 Wolfgang Bibel

(Abstract not available on-line)

TR-87-40 Constraint Satisfaction from a Deductive Viewpoint, December 1987 Wolfgang Bibel

This paper reports the results of testing the author's proof techniques on the class of constraint satisfaction problems (CSPs). The experiment has been successful in the sense that a completely general proof technique turns out to behave well on this special class of problems, which has itself received considerable attention in the community. At the same time, the paper presents a new (deductive) mechanism for solving constraint satisfaction problems that is of interest in its own right. This mechanism may be characterized as a bottom-up, lazy-evaluation technique which reduces any such problem to the problem of evaluating a database expression, typically involving a number of joins. A way of computing such an expression is proposed.
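
The reduction of a CSP to a join expression can be made concrete: materialize each constraint as a relation of its satisfying tuples, then take the natural join. The toy network and names below are illustrative assumptions, not the paper's mechanism.

```python
def join(r1, r2):
    """Natural join of two relations, each a list of dicts (tuples)."""
    return [{**t1, **t2} for t1 in r1 for t2 in r2
            if all(t1[k] == t2[k] for k in t1.keys() & t2.keys())]

# CSP: x < y and y < z over the domain {1, 2, 3}.  Each constraint is
# materialized as the relation of its satisfying tuples; the solution
# set of the whole problem is the natural join of the relations.
dom = (1, 2, 3)
lt_xy = [{"x": a, "y": b} for a in dom for b in dom if a < b]
lt_yz = [{"y": b, "z": c} for b in dom for c in dom if b < c]
print(join(lt_xy, lt_yz))  # [{'x': 1, 'y': 2, 'z': 3}]
```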
