1999 UBC CS Technical Report Abstracts

TR-99-01 A Light-Weight Framework for Hardware Verification, February 24, 1999 Christoph Kern, Tarik Ono-Tesfaye and Mark R. Greenstreet, 15 pages

We present a deductive verification framework that combines deductive reasoning, general-purpose decision procedures, and domain-specific reasoning. We address the integration of formal as well as informal domain-specific reasoning, which is encapsulated in the form of user-defined inference rules. To demonstrate our approach, we describe the verification of an SRT divider, where a transistor-level implementation with timing is shown to be a refinement of its high-level specification.
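
To make the notion of user-defined inference rules concrete, here is a small Java sketch (ours, not from the report) of how such a rule might be expressed as an interface that reduces a proof obligation to sub-obligations; all names are illustrative assumptions.

```java
// Illustrative sketch only: a user-defined inference rule decomposes a proof obligation
// into premises that can then be discharged by other rules, decision procedures, or
// (informally) by the user. The Obligation type and method names are assumptions.
import java.util.List;

public interface InferenceRule {
    /** True if this rule knows how to decompose the given obligation. */
    boolean applies(Obligation goal);

    /** The premises whose validity suffices to establish the goal under this rule. */
    List<Obligation> premises(Obligation goal);

    /** Marker interface for proof obligations (formulas to be established). */
    interface Obligation { }
}
```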

TR-99-02 Analyzing Exception Flow in Java Programs, March 02, 1999 Martin R. Robillard and Gail C. Murphy, 15 pages

Exception handling mechanisms provided by programming languages are intended to ease the difficulty of developing robust software systems. Using these mechanisms, a software developer can describe the exceptional conditions a module might raise, and the response of the module to exceptional conditions that may occur as it is executing. Creating a robust system from such a localized view requires a developer to reason about the flow of exceptions across modules. The use of unchecked exceptions, and in object-oriented languages, subsumption, makes it difficult for a software developer to perform this reasoning manually. In this paper, we describe a tool called Jex that analyzes the flow of exceptions in Java code to produce views of the exception structure. We demonstrate how Jex can help a developer identify program points where exceptions are caught accidentally, where there is an opportunity to add finer-grained recovery code, and where error-handling policies are not being followed.
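
As an illustration of the kind of problem Jex targets (the example is ours, not the paper's), the following Java fragment shows how subsumption lets a broad handler accidentally catch exceptions the developer did not anticipate at that point:

```java
// Illustrative example of accidental catching via subsumption; not taken from the paper.
import java.io.FileReader;
import java.io.IOException;

public class SubsumptionExample {
    void loadConfig(String path) {
        try {
            parse(new FileReader(path));
        } catch (IOException e) {
            // Intended to catch only the FileNotFoundException from the constructor, but
            // subsumption means any other IOException raised inside parse() is also
            // caught here, accidentally masking unrelated failures.
            useDefaults();
        }
    }

    void parse(FileReader in) throws IOException { /* ... */ }

    void useDefaults() { }
}
```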

TR-99-03 Revised Characterizations of 1-Way Quantum Finite Automata, July 20, 2000 Alex Brodsky and Nicholas Pippenger, 23 pages

The 2-way quantum finite automaton introduced by Kondacs and Watrous can accept non-regular languages with bounded error in polynomial time. If we restrict the head of the automaton to moving classically and to moving only in one direction, the acceptance power of this 1-way quantum finite automaton is reduced to a proper subset of the regular languages. In this paper we study two different models of 1-way quantum finite automata. The first model, termed measure-once quantum finite automata, was introduced by Moore and Crutchfield, and the second model, termed measure-many quantum finite automata, was introduced by Kondacs and Watrous. We characterize the measure-once model when it is restricted to accepting with bounded error and show that, without that restriction, it can solve the word problem over the free group. We also show that it can be simulated by a probabilistic finite automaton and describe an algorithm that determines if two measure-once automata are equivalent. We prove several closure properties of the classes of languages accepted by measure-many automata, including inverse homomorphisms, and provide a new necessary condition for a language to be accepted by the measure-many model with bounded error. Finally, we show that piecewise testable languages can be accepted with bounded error by a measure-many quantum finite automaton, in the process introducing new construction techniques for quantum automata.
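
For readers unfamiliar with the measure-once model, here is a minimal sketch (ours, restricted to real amplitudes and orthogonal transition matrices) of how the acceptance probability is obtained by measuring only after the entire input has been read; the general model allows complex amplitudes, and the measure-many model measures after every symbol.

```java
// Minimal real-amplitude sketch of a measure-once quantum finite automaton; the class
// and field names are illustrative, not taken from the paper.
import java.util.Map;

public class MeasureOnceQfa {
    private final Map<Character, double[][]> transitions; // one orthogonal matrix per symbol
    private final double[] start;                          // initial amplitude vector
    private final boolean[] accepting;                     // accepting basis states

    public MeasureOnceQfa(Map<Character, double[][]> transitions,
                          double[] start, boolean[] accepting) {
        this.transitions = transitions;
        this.start = start;
        this.accepting = accepting;
    }

    /** Probability of acceptance: measure only once, after the whole word is read. */
    public double acceptanceProbability(String word) {
        double[] state = start.clone();
        for (char c : word.toCharArray()) {
            state = multiply(transitions.get(c), state);
        }
        double p = 0.0;
        for (int i = 0; i < state.length; i++) {
            if (accepting[i]) p += state[i] * state[i];
        }
        return p;
    }

    private static double[] multiply(double[][] m, double[] v) {
        double[] r = new double[v.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < v.length; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }
}
```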

TR-99-04 Atlas: A Case Study in Building a Web-Based Learning Environment Using Aspect-oriented Programming, March 31, 1999 Mik A. Kersten and Gail C. Murphy, 20 pages

The Advanced Teaching and Learning Academic Server (Atlas) is a software system that supports web-based learning. Students can register for courses, and can navigate through personalized views of course material. Atlas has been built according to Sun Microsystems' Java (TM) Servlet specification using Xerox PARC's aspect-oriented programming support called AspectJ (TM). Since aspect-oriented programming is still in its infancy, little experience with employing this paradigm is currently available. In this paper, we start filling this gap by describing the aspects we used in Atlas and by discussing the effect of aspects on our object-oriented development practices. We describe some rules and policies that we employed to achieve our goals of maintainability and modifiability, and introduce a straightforward notation to express the design of aspects. Although we faced some small hurdles along the way, this combination of technology helped us build a fast, well-structured system in a reasonable amount of time.
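
As an illustration of the kind of cross-cutting concern that can be factored out of servlet code, the sketch below uses AspectJ's later annotation style (Atlas itself used the original AspectJ language) with hypothetical package and class names:

```java
// Illustrative sketch, not Atlas code: a logging aspect advising servlet request handlers.
// The package name "atlas" and the pointcut pattern are assumptions.
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

@Aspect
public class RequestLoggingAspect {
    // Advise every request-handling method of every servlet in the hypothetical atlas package.
    @Before("execution(* atlas..*Servlet.do*(..))")
    public void logRequest(JoinPoint jp) {
        System.out.println("Handling request in " + jp.getSignature().toShortString());
    }
}
```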

TR-99-06 Systematic vs. Local Search for SAT, May 03, 1999 Holger H. Hoos, 14 pages

Due to its prominence in artificial intelligence and theoretical computer science, the propositional satisfiability problem (SAT) has received considerable attention in the past. Traditionally, this problem was attacked with systematic search algorithms, but more recently, local search methods were shown to be very effective for solving large and hard SAT instances. Especially in light of recent, significant improvements in both approaches, it is not well understood which type of algorithm performs best on which type of SAT instance. In this article, we present the results of a comprehensive empirical study comparing the performance of some of the best-known stochastic local search and systematic search algorithms for SAT on a wide range of problem instances, including Random-3-SAT and SAT-encoded problems from different domains. We show that while for Random-3-SAT local search is clearly superior, more structured instances are often, but not always, more efficiently solved by systematic search algorithms. This suggests that, considering the specific strengths and weaknesses of both approaches, hybrid algorithms or portfolio combinations might be most effective for solving SAT problems in practice.
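
For context, a minimal GSAT-style stochastic local search procedure is sketched below; it is our own illustration, not one of the algorithms evaluated in the study, and the parameter choices are arbitrary:

```java
// Minimal GSAT-style local search sketch for CNF formulas. The clause encoding and
// class name are illustrative, not taken from the report.
import java.util.Random;

public class GsatSketch {
    /** clauses[i] lists literals: +v means variable v true, -v means variable v false (1-based). */
    public static boolean[] solve(int[][] clauses, int numVars, int maxTries, int maxFlips) {
        Random rng = new Random();
        for (int t = 0; t < maxTries; t++) {
            boolean[] assignment = new boolean[numVars + 1];
            for (int v = 1; v <= numVars; v++) assignment[v] = rng.nextBoolean();
            for (int f = 0; f < maxFlips; f++) {
                if (countUnsat(clauses, assignment) == 0) return assignment;
                // Greedy step: flip the variable that leaves the fewest clauses unsatisfied.
                int bestVar = 1, bestScore = Integer.MAX_VALUE;
                for (int v = 1; v <= numVars; v++) {
                    assignment[v] = !assignment[v];
                    int score = countUnsat(clauses, assignment);
                    assignment[v] = !assignment[v];
                    if (score < bestScore) { bestScore = score; bestVar = v; }
                }
                assignment[bestVar] = !assignment[bestVar];
            }
        }
        return null; // no model found within the given cutoffs
    }

    private static int countUnsat(int[][] clauses, boolean[] a) {
        int unsat = 0;
        for (int[] clause : clauses) {
            boolean sat = false;
            for (int lit : clause)
                if (lit > 0 ? a[lit] : !a[-lit]) { sat = true; break; }
            if (!sat) unsat++;
        }
        return unsat;
    }
}
```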

TR-99-07 Using Embedded Network Processors to Implement Global Memory Management in a Workstation Cluster, June 28, 1999 Yvonne Coady, Joon Suan Ong and Michael J. Feeley, 10 pages

Advances in network technology continue to improve the communication performance of workstation and PC clusters, making high-performance workstation-cluster computing increasingly viable. These hardware advances, however, are taxing traditional host-software network protocols to the breaking point. A modern gigabit network can swamp a host's I/O bus and processor, limiting communication performance and slowing computation unacceptably. Fortunately, host-programmable network processors used by these networks present a potential solution. Offloading selected host processing to these embedded network processors lowers host overhead and improves latency. This paper examines the use of embedded network processors to improve the performance of workstation-cluster global memory management. We have implemented a revised version of the GMS global memory system that reduces host overhead by as much as 29% on active nodes and improves page fault latency by as much as 39%.

TR-99-08 Spirale Reversi: Reverse decoding of the Edgebreaker encoding, October 4, 1999 Martin Isenburg and Jack Snoeyink, 12 pages

We present a simple linear-time algorithm for decoding Edgebreaker-encoded triangle meshes in a single traversal. The Edgebreaker compression technique, introduced by Rossignac, encodes the topology of meshes homeomorphic to a sphere with a guaranteed 2 bits per triangle or less. The encoding algorithm visits every triangle of the mesh in a depth-first order. The original decoding algorithm recreates the triangles in the same order in which they were visited by the encoding algorithm and exhibits a worst-case time complexity of O(n^2). More recent work by Szymczak and Rossignac uses the same traversal order and improves the worst case to O(n). However, for meshes with handles, multiple traversals are needed during both encoding and decoding. We introduce here a simpler decompression technique that performs a single traversal and recreates the triangles in reverse order.

TR-99-09 Computing Contour Trees in All Dimensions, August 26, 1999 Hamish Carr, Jack Snoeyink and Ulrike Axen, 10 pages

We show that contour trees can be computed in all dimensions by a simple algorithm that merges two trees. Our algorithm extends, simplifies, and improves work of Tarasov and Vyalyi and of van Kreveld et al.
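
To give a flavour of the approach, the sketch below computes one half of the construction, a join tree of a scalar field on a graph, using a sweep in decreasing function value together with union-find; the split tree is computed symmetrically, and the paper's algorithm merges the two into the contour tree. The code is our own illustration, not the paper's implementation:

```java
// Illustrative join-tree sketch: sweep vertices from highest to lowest value and track
// connected components of the upper level set with union-find. Names are assumptions.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JoinTreeSketch {
    /** Returns join-tree edges as pairs {parent, child}, where the parent has the higher value. */
    public static List<int[]> joinTree(double[] value, List<List<Integer>> neighbours) {
        int n = value.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(value[b], value[a])); // descending value

        int[] parent = new int[n];    // union-find forest over processed vertices
        int[] lowest = new int[n];    // lowest-valued vertex seen so far in each component
        boolean[] done = new boolean[n];
        Arrays.fill(parent, -1);

        List<int[]> edges = new ArrayList<>();
        for (int v : order) {
            parent[v] = v; lowest[v] = v; done[v] = true;
            for (int u : neighbours.get(v)) {
                if (!done[u]) continue;               // only consider higher-valued vertices
                int ru = find(parent, u), rv = find(parent, v);
                if (ru == rv) continue;
                edges.add(new int[]{lowest[ru], v});  // that component joins the sweep at v
                parent[ru] = rv;
                lowest[rv] = v;
            }
        }
        return edges;
    }

    private static int find(int[] parent, int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }
}
```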

TR-99-10 Mesh Collapse Compression, November 15, 1999 Martin Isenburg and Jack Snoeyink, 10 pages

Efficiently encoding the topology of triangular meshes has recently been the subject of intense study, and many representations have been proposed. The sudden interest in this area is fueled by the emerging demand for transmitting 3D data sets over the Internet (e.g. VRML). Since transmission bandwidth is a scarce resource, compact encodings for 3D models are highly advantageous. In this report we present a novel algorithm for encoding the topology of triangular meshes. Our encoding algorithm is based on the edge contract operation, which has been used extensively in the area of mesh simplification, but not for efficient mesh topology compression. We perform a sequence of edge contract and edge divide operations that collapse the entire mesh into a single vertex. With each edge contraction we store a vertex degree, and with each edge division we store a start and an end symbol. This uniquely determines all inverse operations. For meshes that are homeomorphic to a sphere, the algorithm is especially simple. Surfaces of higher genus are encoded at the expense of a few extra bits per handle.

TR-99-11 Deciding When to Forget in the Elephant File System, December 12, 1999 Douglas S. Santry, Michael J. Feeley, Norman Hutchinson, Alistair C. Veitch, Ross W. Carton and Jacob Ofir, 12 pages

Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental deletion or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with the time at which the desired version of a file or directory existed. Storage in Elephant is managed by the system using file-grain, user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan 9, AFS, and WAFL, which periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel.
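
As a sketch of the naming idea (pathname plus time), the following Java fragment shows one way such a lookup could be organized; the in-memory map and the Version record are illustrative assumptions, not the FreeBSD implementation:

```java
// Illustrative sketch of resolving "pathname at time t" to the version that was current
// then. Data structures and names are assumptions, not Elephant internals.
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class VersionLookupSketch {
    record Version(long createdAt, long inode) {}

    // pathname -> versions keyed and ordered by creation time
    private final Map<String, NavigableMap<Long, Version>> history = new HashMap<>();

    public void recordVersion(String path, Version v) {
        history.computeIfAbsent(path, p -> new TreeMap<>()).put(v.createdAt(), v);
    }

    /** Returns the version of 'path' that was current at time 'when', or null if none existed. */
    public Version versionAt(String path, long when) {
        NavigableMap<Long, Version> versions = history.get(path);
        if (versions == null) return null;
        Map.Entry<Long, Version> e = versions.floorEntry(when);
        return e == null ? null : e.getValue();
    }
}
```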

TR-99-12 Practical Point-in-Polygon Tests Using CSG Representations of Polygons, November 12, 1999 Robert J. Walker and Jack Snoeyink, 22 pages

We investigate the use of a constructive solid geometry (CSG) representation of polygons in testing if points fall within them; this representation consists of a tree whose nodes are either Boolean operators or edges. By preprocessing the polygons, we seek (1) to construct a space-conserving data structure that supports point-in-polygon tests, (2) to prune as many edges as possible while maintaining the semantics of our tree, and (3) to obtain a tight inner loop to make testing the remaining edges as fast as possible. We utilize opportunities to optimize the pruning by permuting sibling nodes. We find that this process is less memory-intensive than the grid method and faster than existing one-shot methods.
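
As an illustration of the representation (ours, omitting the report's pruning and sibling-permutation optimizations), the sketch below models a CSG tree whose leaves are directed edges interpreted as half-plane tests and whose internal nodes are Boolean operators; for a counter-clockwise convex polygon, the tree is simply the AND of all edge half-planes:

```java
// Illustrative CSG point-in-polygon sketch; class names and the half-plane convention
// (point on or left of a directed edge counts as inside) are assumptions.
public abstract class CsgNode {
    public abstract boolean contains(double x, double y);

    /** Leaf: true iff the point lies on the inner side of the directed edge (ax,ay)->(bx,by). */
    public static final class Edge extends CsgNode {
        final double ax, ay, bx, by;
        public Edge(double ax, double ay, double bx, double by) {
            this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
        }
        public boolean contains(double x, double y) {
            // Cross product sign: non-negative means on or left of the directed edge.
            return (bx - ax) * (y - ay) - (by - ay) * (x - ax) >= 0;
        }
    }

    public static final class And extends CsgNode {
        final CsgNode left, right;
        public And(CsgNode l, CsgNode r) { left = l; right = r; }
        public boolean contains(double x, double y) {
            return left.contains(x, y) && right.contains(x, y);
        }
    }

    public static final class Or extends CsgNode {
        final CsgNode left, right;
        public Or(CsgNode l, CsgNode r) { left = l; right = r; }
        public boolean contains(double x, double y) {
            return left.contains(x, y) || right.contains(x, y);
        }
    }
}
```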

TR-99-13 Using Implicit Context to Ease Software Evolution and Reuse, November 11, 1999 Robert J. Walker and Gail C. Murphy, 11 pages

Software systems should consist of simple, conceptually clean components interacting along narrow, well-defined paths. All too often, this is not the reality: complex components end up interacting for reasons unrelated to the functionality they provide. We refer to knowledge within a component that is not conceptually required for the individual behaviour of that component as extraneous embedded knowledge (EEK). EEK creeps into a system in many forms, including dependences upon particular names and the passing of extraneous parameters. This paper proposes implicit context as a means for reducing EEK in systems. Implicit context combines a mechanism to reflect upon what has happened in a system, through queries on the call history, with a mechanism for altering calls to and from a component. We demonstrate the benefits of implicit context by describing its use to reduce EEK in the Java Swing library.
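
As a small example of one form of EEK (ours, not from the paper), the fragment below shows an extraneous parameter that a component must accept and forward even though its own behaviour does not require it:

```java
// Illustrative example of extraneous embedded knowledge; all names are assumptions.
public class EekExampleSketch {
    interface Renderer { void render(String report); }
    interface AuditLog { void record(String user, String report); }

    static class ReportPrinter {
        private final Renderer renderer;
        private final AuditLog auditLog;

        ReportPrinter(Renderer r, AuditLog a) { renderer = r; auditLog = a; }

        // 'requestingUser' is extraneous embedded knowledge: printing a report does not
        // conceptually require knowing who asked for it, yet the parameter must be threaded
        // through this interface solely so the audit log downstream can record it.
        void print(String report, String requestingUser) {
            auditLog.record(requestingUser, report);
            renderer.render(report);
        }
    }
}
```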

TR-99-14 Regaining Control of Exception Handling, December 01, 1999 Martin P. Robillard and Gail C. Murphy, 10 pages

Keywords: Error Handling, Design.

Just as the overall structure of a software system tends to degrade as the system evolves, the structure of exception handling also degrades. In this paper, we draw on our experience building and analyzing the exception structure of Java programs to describe why and how exception structure degrades. Fortunately, we need not let our exception structure languish. We also relate our experience at regaining control of exception structure in several existing programs using a technique based on software containment.

TR-99-15 The Virtual Hand Laboratory Architecture, December 15, 1999 Valerie A. Summers, 29 pages

The Virtual Hand Lab (VHL) is an augmented reality environment for conducting experiments in human perception and motor performance that involve grasping, manipulation, and other complex 3D tasks that people perform with their hands. The system supports a wide range of experiments, is used by (non-programmer) experimenters, and co-locates the hand and the manipulated objects (whether physical or virtual) in the same visual space. Spatial and temporal accuracy are maintained by using a high-precision tracker and an efficient implementation of the software architecture that carefully synchronizes the timing of equipment and software. Many issues influence architectural design; two of them are modularization and performance. We balance these concerns by creating a layered set of modules upon which the application is built, and an animation control loop that cuts across module boundaries to control the timing of equipment and application. Augmented objects are composed of both physical and graphical components. The graphical object inheritance hierarchy has several unusual features. First, we provide a mechanism to decouple the movement of the graphical component of an augmented object from its physical counterpart. This provides flexibility in the types of experiments supported by the testbed. Second, we create subclasses based on properties of the physical component of the augmented objects before creating subclasses based on the virtual components. Specifically, we categorize physical objects as either rigid or flexible based on their level of deformation. This allows us to efficiently implement many of the manipulation techniques. Third, after subclasses based on the physical objects have been created, the implementation of concrete virtual object classes is driven by the goal of creating an easy interface for the experimenters. This reflects our user-centered design approach.
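
A minimal Java sketch of the object organization described above follows; class and method names are illustrative assumptions, not the VHL source:

```java
// Illustrative sketch: an augmented object pairs a tracked physical component with a
// graphical component whose motion can be decoupled, and subclassing happens first on
// physical properties (rigid vs. flexible). All names are assumptions.
public abstract class AugmentedObject {
    protected final GraphicalComponent graphics = new GraphicalComponent();

    /** Called each frame with the tracked pose of the physical object. */
    public void trackerUpdate(double[] pose) {
        // Decoupling point: an experiment may transform or ignore the physical pose
        // before it drives the graphical component.
        graphics.setPose(mapPhysicalToGraphical(pose));
    }

    protected double[] mapPhysicalToGraphical(double[] pose) {
        return pose; // default: graphics follow the physical object exactly
    }

    public static class GraphicalComponent {
        private double[] pose = new double[6];
        public void setPose(double[] p) { pose = p.clone(); }
        public double[] getPose() { return pose.clone(); }
    }
}

// Physical-property subclasses come first ...
abstract class RigidObject extends AugmentedObject { }
abstract class FlexibleObject extends AugmentedObject { }

// ... and concrete virtual-object classes for experimenters are derived from them.
class RigidBlock extends RigidObject { }
```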

TR-99-16 Heavy-Tailed Behaviour in Randomised Systematic Search Algorithms for SAT?, November 30, 1999 Holger H. Hoos, 16 pages

Prompted by recent results reported by Carla Gomes, Bart Selman, and Henry Kautz, and in the context of my ongoing project with Thomas Stuetzle on characterising the behaviour of state-of-the-art algorithms for SAT, I measured some run-time distributions for Satz-Rand, the randomised version of the Satz algorithm, when applied to problem instances from various domains. I could not find truly heavy-tailed behaviour (in the sense of the definition used by Gomes et al.). Instead, I found evidence for multimodal distributions which might be characterisable using mixtures of the generalised exponential distributions introduced in (Hoos, 1999). However, the observed RTDs typically have long tails, and random restart, using suitable cutoff times, increases the efficiency of the algorithm, as claimed by Gomes et al. Furthermore, taking another look at the issue of heavy tails at the left-hand side of run-time distributions, I raise some questions regarding the arguments found in (Gomes et al., 2000).
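
As a sketch of the restart strategy under discussion, the fragment below wraps a randomized solver with a fixed cutoff and fresh random seeds; the interface and parameters are illustrative assumptions:

```java
// Illustrative sketch of random restarts with a fixed cutoff; the RandomizedSolver
// interface and parameter names are assumptions, not Satz-Rand internals.
import java.util.Random;

public class RestartSketch {
    public interface RandomizedSolver {
        /** Tries to solve the instance within 'cutoff' steps; returns true on success. */
        boolean run(long seed, long cutoff);
    }

    /** Restarts the solver with fresh seeds until it succeeds or maxRestarts is exhausted. */
    public static boolean solveWithRestarts(RandomizedSolver solver, long cutoff, int maxRestarts) {
        Random rng = new Random();
        for (int i = 0; i < maxRestarts; i++) {
            if (solver.run(rng.nextLong(), cutoff)) return true;
        }
        return false;
    }
}
```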

