Technical Reports

The ICICS/CS Reading Room


1997 UBC CS Technical Report Abstracts

TR-97-01 Soundness and Cut-Elimination in NaDSyL, February 1997 Paul C. Gilmore, 27 pages

NaDSyL, a Natural Deduction based Symbolic Logic, like some earlier logics, is motivated by the belief that a confusion of use and mention is the source of the set theoretic paradoxes. However NaDSyL differs from the earlier logics in several important respects.

"Truth gaps", as they have been called by Kripke, are essential to the consistency of the earlier logics, but are absent from NaDSyL; the law of the excluded middle is derivable for all the sentences of NaDSyL. But the logic has an undecidable elementary syntax, a departure from tradition that is of little importance, since the semantic tree presentation of the proof theory can incorporate the decision process for the elementary syntax.

The use of the lambda calculus notation in NaDSyL, rather than the set theoretic notation of the earlier logics, reflects much more than a change of notation. A second motivation for NaDSyL is the provision of a higher order logic based on the original term models of the lambda calculus rather than on the Scott models. These term models are the "natural" interpretation of the lambda calculus for the naive nominalist view that justifies the belief in the source of the paradoxes. They provide the semantics for the first order domain of the second order logic NaDSyL.

The elementary and logical syntax, or proof theory, of NaDSyL is fully described, as well as its semantics. Semantic proofs of the soundness of NaDSyL with cut and of the completeness of NaDSyL without cut are given. That cut is a redundant rule follows from these results. Some applications of the logic are also described.

TR-97-02 On Digital Money and Card Technologies, January 1997 Edwin M. Knorr, 24 pages

We survey two related fields, digital money and card technologies (especially smart cards), for possible PhD research topics. We believe that digital money and card technologies will revolutionize life in the 21st century. It will be shown that privacy issues are of serious concern, but that well-designed implementations can have long-term strategic and economic benefits to society. We have been following these two fields for a number of years. It is only very recently that digital money and card technologies have captured the attention of the North American marketplace. We believe that there will be significant research opportunities in these areas for years to come, as evidenced by recent commercial interest in supporting financial transactions via the Internet. This paper examines various aspects of digital money and card technologies, and attempts to provide a comprehensive overview of these fields and their research prospects.

TR-97-03 Video and Audio Streams Over an IP/ATM Wide Area Network, July 28, 1997 Mark McCutcheon, Mabo R. Ito and Gerald W. Neufeld, 101 pages

This is a survey of the state of the art in delivering IP services over ATM networks, as it stands in the second quarter of 1997. It also includes a look at the alternatives to that set of technologies. The technology and the choices are changing "on the fly", and have evolved significantly during the course of this project. Moreover, the issues are not exclusively technical, but in many respects reflect the great schism in the data communications world: connection-oriented versus connectionless networks. We have tried to present the technical issues and solutions along with an unbiased overview of the more "philosophical" issues. We indicate how we think the technology and the installed base of equipment are going to develop over the next few years, in order to give a picture of the future of ATM in data networking.

TR-97-04 Random Interval Graphs, February 1997 Nicholas Pippenger, 22 pages

We consider models for random interval graphs that are based on stochastic service systems, with vertices corresponding to customers and edges corresponding to pairs of customers that are in the system simultaneously. The number N of vertices in a connected component thus corresponds to the number of customers arriving during a busy period, while the size K of the largest clique (which for interval graphs is equal to the chromatic number) corresponds to the maximum number of customers in the system during a busy period. We obtain the following results for both the M/D/Infinity and the M/M/Infinity models, with arrival rate lambda per mean service time. The expected number of vertices is e^lambda, and the distribution of N/e^lambda tends to an exponential distribution with mean 1 as lambda tends to infinity. This implies that log N is very strongly concentrated about lambda-gamma (where gamma is Euler's constant), with variance just pi^2/6. The size K of the largest clique is very strongly concentrated about e lambda. Thus the ratio K/log N is strongly concentrated about e, in contrast with the situation for random graphs generated by unbiased coin flips, where K/log N is very strongly concentrated about 2/log 2.
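
These asymptotic claims can be made concrete with a small Monte Carlo experiment. The following Python sketch, which is purely illustrative and not taken from the report, simulates busy periods of the M/M/Infinity model and records N (customers per busy period) and K (maximum number simultaneously in the system):

```python
import heapq
import math
import random

def busy_period(lam, rng=random):
    """Simulate one M/M/Infinity busy period.

    Time is measured in mean service times, so service times are
    Exponential(1) and interarrival times are Exponential(lam).
    Returns (N, K): customers served and maximum simultaneously present.
    """
    departures = [rng.expovariate(1.0)]   # the first customer opens the busy period
    next_arrival = rng.expovariate(lam)
    n_customers, max_in_system = 1, 1
    while departures:
        if next_arrival < departures[0]:
            heapq.heappush(departures, next_arrival + rng.expovariate(1.0))
            n_customers += 1
            max_in_system = max(max_in_system, len(departures))
            next_arrival += rng.expovariate(lam)
        else:
            heapq.heappop(departures)     # a customer leaves; an empty heap ends the period
    return n_customers, max_in_system

lam = 3.0
samples = [busy_period(lam) for _ in range(20000)]
mean_n = sum(n for n, _ in samples) / len(samples)
mean_k = sum(k for _, k in samples) / len(samples)
print(f"mean N = {mean_n:.1f} (e^lambda = {math.exp(lam):.1f}), "
      f"mean K = {mean_k:.1f} (e*lambda = {math.e * lam:.1f})")
```

For moderate lambda the sample averages only roughly approach e^lambda and e lambda, which is consistent with the statements above holding in the limit as lambda tends to infinity.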

TR-97-05 Surface Reflectance and Shape from Images Using Collinear Light Source, July 28, 1997 Jiping Lu and Jim Little, 219 pages

The purpose of computer vision is to extract useful information from images. Image features such as occluding contours, edges, flow, brightness, and shading provide geometric and photometric constraints on the surface shape and reflectance of physical objects in the scene. In this thesis, two novel techniques are proposed for surface reflectance extraction and surface recovery. They integrate geometric and photometric constraints in images of a rotating object illuminated under a collinear light source (where the illuminant direction of the light source lies on or near the viewing direction of the camera). The rotation of the object can be precisely controlled. The object surface is assumed to be $C^2$ and its surface reflectance function is uniform. The first technique, called the photogeometric technique, uses geometric and photometric constraints on surface points with surface normal perpendicular to the image plane to calculate 3D locations of surface points, then extracts the surface reflectance function by tracking these surface points in the images. Using the extracted surface reflectance function and two images of the surface, the technique recovers the depth and surface orientation of the object surface simultaneously. The second technique, named the wire-frame technique, further exploits geometric and photometric constraints on the surface points whose surface normals are coplanar with the viewing direction and the rotation axis to extract a set of 3D curves. The set of 3D curves comprises a wire frame on the surface. The depth and surface orientation between curves on the wire frame are interpolated by using geometric or photogeometric methods. The wire-frame technique is superior because it does not need the surface reflectance function to extract the wire frame. It also works on piecewise uniform surfaces and requires only that the light source be coplanar with the viewing direction and the rotation axis. In addition, by interpolating the depth and surface orientation from a dense wire frame, the surface recovered is more accurate. The two techniques have been tested on real images of surfaces with different reflectance properties and geometric structures. The experimental results and comprehensive analysis show that the proposed techniques are efficient and robust. As an attempt to extend our research to computer graphics, work on extracting the shading function from real images for graphics rendering shows some promising results. Key words: Physics based vision, Reflectance, Shape recovery, Integrating multiple cues, Integrating multiple views, surface modeling, surface rendering.

TR-97-06 An Object-Oriented Graphics Kernel, April 1997 Gene Lee, 22 pages

A graphics kernel serves as interface between an application program and an underlying graphics subsystem. Developers interact with kernel primitives while the primitives interact with a graphics subsystem. Although the two forms of interaction closely relate, their optimal designs conflict. The former interaction prefers a process that closely follows the mental (or object-based) model of application development while the latter prefers a process that parses display-lists.

This paper describes RDI, an object-oriented graphics kernel that resolves the differences between the two interactions with one design. Developers explicitly assign intuitive relationships between primitives while an underlying process interprets the primitives in an orderly manner. The kernel's extensible design decouples the processes of modeling and rendering. Primitives dynamically communicate with graphics subsystems to express their purpose and functions. The discussion of the kernel's design covers its optimizations, its benefits for simulation, and its application to parallel rendering.

TR-97-10 Surface and Shading Models from Real Images for Computer Graphics, August 01, 1997 Jiping Lu and Jim Little, 23 pages

In this technical report we present an object modeling and rendering technique from real images for computer graphics. The technique builds the surface geometric model and extracts the surface shading model from a real image sequence of a rotating object illuminated under a collinear light source (where the illuminant direction of the light source is the same as the viewing direction of the camera). In building the surface geometric model, the object surface reflectance function is extracted from the real images and used to recover the surface depth and orientation of the object. In building the surface shading model, the different shading components, the ambient component, the diffuse component and the specular component, are calculated from the surface reflectance function extracted from the real images. Then the obtained shading model is used to render the recovered geometric model of the surface in arbitrary viewing and illuminant directions. Experiments have been conducted on diffuse and specular surfaces. The synthetic images of the recovered object surface rendered with the extracted shading model are compared with the real images of the same objects. The results show that the technique is feasible and promising. Index terms: Computer vision, computer graphics, Surface recovery, Surface modeling, Surface reflectance, Shading function, Graphics rendering, Virtual reality.
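
The abstract does not specify the functional form of the shading model; a Phong-style combination of ambient, diffuse, and specular terms is one common choice, sketched below in Python purely as an illustration. The weights k_a, k_d, k_s and the shininess exponent are hypothetical placeholders for values that would be fitted from the extracted reflectance function.

```python
import numpy as np

def phong_shade(normal, light_dir, view_dir, k_a, k_d, k_s, shininess):
    """Evaluate a Phong-style shading model: ambient + diffuse + specular.

    All directions are 3-vectors (normalized inside); k_a, k_d, k_s weight the
    three components and `shininess` controls the width of the specular lobe.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(float(n @ l), 0.0)
    r = 2.0 * (n @ l) * n - l              # mirror reflection of the light direction
    specular = max(float(r @ v), 0.0) ** shininess if diffuse > 0.0 else 0.0
    return k_a + k_d * diffuse + k_s * specular

# Collinear light source: the light direction coincides with the viewing direction.
view = np.array([0.0, 0.0, 1.0])
print(phong_shade(np.array([0.2, 0.1, 1.0]), view, view,
                  k_a=0.1, k_d=0.6, k_s=0.3, shininess=20))
```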

TR-97-11 A Fast Heuristic For Finding The Minimum Weight Triangulation, July 10, 1997 Ronald Beirouti

No polynomial time algorithm is known to compute the minimum weight triangulation (MWT) of a point set. In this thesis we present an efficient implementation of the LMT-skeleton heuristic. This heuristic computes a subgraph of the MWT of a point set from which the MWT can usually be completed. For uniformly distributed sets of tens of thousands of points our algorithm constructs the exact MWT in expected linear time and space. A fast heuristic, besides being useful in areas such as stock cutting, finite element analysis, and terrain modeling, makes it possible to experiment with different point sets in order to explore the complexity of the MWT problem. We present point sets, constructed with this implementation, for which the LMT-skeleton heuristic does not produce a complete graph and cannot compute the MWT in polynomial time, or that can be used to prove the NP-hardness of the MWT problem.
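
As background, the weight being minimized is simply the total length of a triangulation's edges. The short Python sketch below, which is not the LMT-skeleton implementation described above, computes this weight and compares the two triangulations of a convex quadrilateral, which differ only in the choice of diagonal:

```python
from itertools import combinations
from math import dist

def triangulation_weight(points, triangles):
    """Total edge length of a triangulation given as triples of point indices."""
    edges = set()
    for tri in triangles:
        edges.update(combinations(sorted(tri), 2))   # deduplicate shared edges
    return sum(dist(points[i], points[j]) for i, j in edges)

# A convex quadrilateral has exactly two triangulations, one per diagonal.
pts = [(0.0, 0.0), (4.0, 0.0), (5.0, 3.0), (0.0, 2.0)]
t_a = [(0, 1, 2), (0, 2, 3)]   # uses diagonal 0-2
t_b = [(0, 1, 3), (1, 2, 3)]   # uses diagonal 1-3
print(triangulation_weight(pts, t_a), triangulation_weight(pts, t_b))
```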

TR-97-12 Formalization and Analysis of the Separation Minima for the North Atlantic Region: Complete Specification and Analysis Results, October 30, 1997 Nancy A. Day, Jeffrey J. Joyce and Gerry Pelletier, 74 pages

This report describes work to formalize and validate a specification of the separation minima for aircraft in the North Atlantic (NAT) region completed by researchers at the University of British Columbia in collaboration with Hughes International Airspace Management Systems. Our formal representation of these separation minima is given in a mixture of a tabular style of specification and textual predicate logic. We analyzed the tables for completeness, consistency and symmetry. This report includes the full specification and complete analysis results.
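
The specification itself is not reproduced in this abstract, but the flavour of the table analysis can be sketched in Python. The example below uses a hypothetical toy table, not the NAT separation minima: completeness means every combination of condition values is covered by some row, and consistency means no two applicable rows prescribe conflicting outcomes.

```python
from itertools import product

def check_table(rows, domains):
    """Brute-force completeness and consistency check over finite condition domains.

    rows: list of (predicate, outcome), where predicate maps a case dict to bool.
    domains: dict mapping each condition name to its list of possible values.
    """
    names = list(domains)
    missing, conflicts = [], []
    for values in product(*(domains[n] for n in names)):
        case = dict(zip(names, values))
        outcomes = {out for pred, out in rows if pred(case)}
        if not outcomes:
            missing.append(case)                 # incompleteness: no row applies
        elif len(outcomes) > 1:
            conflicts.append((case, outcomes))   # inconsistency: rows disagree
    return missing, conflicts

# Hypothetical toy table, not the actual separation minima.
rows = [
    (lambda c: c["same_track"] and c["level_diff"] == 0, "longitudinal minimum applies"),
    (lambda c: not c["same_track"], "lateral minimum applies"),
]
domains = {"same_track": [True, False], "level_diff": [0, 1000, 2000]}
print(check_table(rows, domains))   # reports the uncovered same-track cases
```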

TR-97-13 Average-Case Bounds on the Complexity of Path-Search, August 19, 1997 Nicholas Pippenger

A channel graph is the union of all paths between a given input and a given output in an interconnection network. At any moment in time, each vertex in such a graph is either idle or busy. The search problem we consider is to find a path (from the given input to the given output) consisting entirely of idle vertices, or to find a cut (separating the given input from the given output) consisting entirely of busy vertices. We shall also allow the search to fail to find either a path or a cut with some probability bounded by a parameter called the failure probability. This is to be accomplished by sequentially probing the idle-or-busy status of vertices, where the vertex chosen for each probe may depend on the outcome of previous probes. Thus a search algorithm may be modelled as a decision tree. For average-case analysis, we assume that each vertex is independently idle with some fixed probability, called the vacancy probability (and therefore busy with the complementary probability). For one commonly studied type of channel graph, the parallel graph, we show that the expected number of probes is at most proportional to the length of a path, irrespective of the vacancy probability, and even if the allowed failure probability is zero. Another type of channel graph we study is the spider-web graph, which is superior to the parallel graph as regards linking probability (the probability that an idle path, rather than a busy cut, exists). For this graph we give an algorithm for which, as the vacancy probability is varied while the positive failure probability is held fixed, the expected number of probes reaches its maximum near the critical vacancy probability (where the linking probability makes a rapid transition from a very small value to a substantial value). This maximum expected number of probes is about the cube root of the diversity (the number of paths between the input and output).
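
To make the search model concrete, the following Python sketch probes a parallel channel graph represented as vertex-disjoint paths of idle or busy vertices. It implements one natural probing strategy and is offered only as an illustration, not as the algorithm analysed in the report:

```python
import random

def search_parallel_graph(paths):
    """Sequentially probe a parallel channel graph.

    paths: list of paths, each a list of booleans (True = idle vertex).
    Returns ('path', i) if path i is entirely idle, or ('cut', witnesses) with
    one busy vertex per path, together with the number of probes used.
    """
    probes = 0
    cut = []
    for i, path in enumerate(paths):
        for j, idle in enumerate(path):
            probes += 1
            if not idle:
                cut.append((i, j))        # a busy vertex blocks this path
                break
        else:
            return ('path', i), probes    # every vertex on path i was idle
    return ('cut', cut), probes

# Random instance: 8 disjoint paths of 6 vertices, each idle with probability 0.6.
p_idle = 0.6
graph = [[random.random() < p_idle for _ in range(6)] for _ in range(8)]
print(search_parallel_graph(graph))
```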

TR-97-14 Conceptual Module Querying for Software Reengineering, September 2, 1997 E. Baniassad and G.C. Murphy, 10 pages

Many tools have been built to analyze source. Most of these tools do not adequately support reengineering activities because they do not allow a software engineer to simultaneously perform queries about both the existing and the desired source structure. This paper introduces the conceptual module approach that overcomes this limitation. A conceptual module is a set of lines of source that are treated as a logical unit. We show how the approach simplifies the gathering of source information for reengineering tasks, and describe how a tool to support the approach was built as a front-end to existing source analysis tools.
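
A minimal Python sketch of the idea follows; the line-level "uses" relation and the module contents are hypothetical, standing in for the information a source analysis tool would provide:

```python
class ConceptualModule:
    """A named set of source lines treated as a logical unit."""

    def __init__(self, name, lines):
        self.name = name
        self.lines = set(lines)

    def external_dependencies(self, uses):
        """Lines outside the module that the module's lines refer to."""
        deps = set()
        for line in self.lines:
            deps.update(t for t in uses.get(line, []) if t not in self.lines)
        return deps

# Hypothetical line-level dependency data keyed by (file, line number).
uses = {
    ("parser.c", 120): [("lexer.c", 45), ("parser.c", 121)],
    ("parser.c", 121): [("symtab.c", 10)],
    ("lexer.c", 45): [("io.c", 5)],
}

new_parser = ConceptualModule("NewParser", [("parser.c", 120), ("parser.c", 121)])
print(new_parser.external_dependencies(uses))   # -> {("lexer.c", 45), ("symtab.c", 10)}
```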

TR-97-15 Extending and Managing Software Reflexion Models, September 14, 1997 G.C. Murphy, D. Notkin and K. Sullivan, 16 pages

The artifacts comprising a software system often "drift" apart over time. Design documents and source code are a good example. The software reflexion model technique was developed to help engineers exploit, rather than remove, this drift in performing various software engineering tasks. More specifically, the technique helps an engineer compare artifacts by summarizing where one artifact (such as a design) is consistent with and inconsistent with another artifact (such as source). The use of the technique to support a variety of tasks, including the successful use of the technique to support an experimental reengineering of a system comprising a million lines of code, identified a number of shortcomings. In this paper, we present two categories of extensions to the technique. The first category concerns the typing of software reflexion models to allow different kinds of interactions to be distinguished. The second category concerns techniques to ease the investigation of reflexion models. These extensions are aimed at making the engineer more effective in performing various tasks by improving the management and understanding of the inconsistencies (the drift) between artifacts.
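
For readers unfamiliar with the underlying technique, a reflexion model can be viewed as a comparison between the interactions an engineer expects between high-level modules and the dependencies actually extracted from the source, lifted through a mapping from source entities to modules. The Python below is a simplified, unweighted illustration with made-up module and file names, not the tool described in the paper:

```python
def reflexion_model(expected_edges, mapping, source_edges):
    """Compare a high-level model against extracted source dependencies.

    expected_edges: set of (module, module) interactions in the high-level model.
    mapping: dict from source entity to its high-level module.
    source_edges: set of (entity, entity) dependencies extracted from source.
    Returns (convergences, divergences, absences).
    """
    lifted = {(mapping[a], mapping[b])
              for a, b in source_edges
              if a in mapping and b in mapping and mapping[a] != mapping[b]}
    convergences = expected_edges & lifted
    divergences = lifted - expected_edges   # present in source, absent from the model
    absences = expected_edges - lifted      # present in the model, absent from source
    return convergences, divergences, absences

expected = {("UI", "Core"), ("Core", "Storage")}
mapping = {"menu.c": "UI", "engine.c": "Core", "db.c": "Storage"}
extracted = {("menu.c", "engine.c"), ("menu.c", "db.c")}
print(reflexion_model(expected, mapping, extracted))
```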

TR-97-16 The Measured Access Characteristics of World-Wide-Web Client Proxy Caches, October 22, 1997 Bradley M. Duska, David Marwood and Michael J. Feeley

The growing popularity of the World Wide Web is placing tremendous demands on the Internet. A key strategy for scaling the Internet to meet these increasing demands is to cache data near clients and thus improve access latency and reduce network and server load. Unfortunately, research in this area has been hampered by a poor understanding of the locality and sharing characteristics of Web-client accesses. The recent popularity of Web proxy servers provides a unique opportunity to improve this understanding, because a small number of proxy servers see accesses from thousands of clients. This paper presents an analysis of access traces collected from seven proxy servers deployed in various locations throughout the Internet. The traces record a total of 47.4 million requests made by 23,700 clients over a twenty-one day period. We use a combination of static analysis and trace-driven cache simulation to characterize the locality and sharing properties of these accesses. Our analysis shows that a 2- to 10-GB second-level cache yields hit rates between 24% and 45% with 85% of these hits due to sharing among different clients. Caches with more clients exhibit more sharing and thus higher hit rates. Between 2% and 7% of accesses are consistency misses to unmodified objects, using the Squid and CERN proxy cache coherence protocols. Sharing is bimodal. Requests for shared objects are divided evenly between objects that are narrowly shared and those that are shared by many clients; widely shared objects also tend to be shared by clients from unrelated traces.
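
The trace-driven methodology can be illustrated with a small LRU cache simulator in Python. This sketch is not the simulator used in the study; it ignores coherence protocols, expirations, and cacheability rules, and simply reports the request hit rate for a given cache size:

```python
from collections import OrderedDict

def simulate_cache(trace, capacity_bytes):
    """Replay a request trace through a simple LRU cache and return the hit rate.

    trace: iterable of (url, size_in_bytes) requests.
    """
    cache = OrderedDict()   # url -> object size, maintained in LRU order
    used = 0
    hits = requests = 0
    for url, size in trace:
        requests += 1
        if url in cache:
            hits += 1
            cache.move_to_end(url)          # mark as most recently used
            continue
        while cache and used + size > capacity_bytes:
            _, evicted_size = cache.popitem(last=False)   # evict least recently used
            used -= evicted_size
        if size <= capacity_bytes:
            cache[url] = size
            used += size
    return hits / requests if requests else 0.0

trace = [("a", 10), ("b", 20), ("a", 10), ("c", 25), ("a", 10)]
print(simulate_cache(trace, capacity_bytes=40))   # 2 hits out of 5 requests -> 0.4
```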

TR-97-17 A Logic For Default Reasoning, July 30, 1979 Raymond Reiter

(Abstract not available on-line)

TR-97-19 Supporting Learners in a Remote Computer-Supported Collaborative Learning Environment: The Importance of Task and Communication, February 25, 1998 David Graves

This paper describes novel research in the area of remote Computer-Supported Collaborative Learning. A multimedia activity (Builder) was designed to allow a pair of players to build a house together, each working from his or her own computer. Features of the activity include: interactive graphical interface, two- and three-dimensional views, sound feedback, and real-time written and spoken communication. Mathematical concepts, including area, perimeter, volume, and tiling of surfaces, are embedded in the task. A field study with 134 elementary school children was undertaken to assess the learning and collaborative potential of the activity. Specifically, the study addressed how different modes of communication and different task directives affected learning, interpersonal attitudes, and the perceived value and enjoyment of the task. It was found that playing led to academic gains in the target math areas, and that the nature of how the task was specified had a significant impact on the size of the gains. The mode of communication was found to affect attitudes toward the game and toward the player's partner. Gender differences were found in attitude toward the game, perceived collaboration and attitude toward partner.

TR-97-21 A Network-Enhanced Volume Renderer, August 01, 1997 Jeff LaPorte, 14 pages

Volume rendering is superior in many respects to conventional methods of medical visualization. Extending this ability into the realm of telemedicine provides the opportunity for health professionals to offer expertise not normally available to smaller communities, via computer networking of health centers. This report describes such a software system for collaboration, a network-enhanced version of an existing program called Volren. The methods used to provide network functionality in Volren can serve as a prototype for future multiuser applications.


If you have any questions or comments regarding this page please send mail to help@cs.ubc.ca.