1994 UBC CS Technical Report Abstracts

TR-94-01 Exploring Common Conceptions About Boys and Electronic Games, January 1994 Joan Lawry, Rena Upitis, Maria Klawe, Ann Anderson, Kori Inkpen, M. Ndunda, David Hsu, Stephen Leroux and Kamran Sedighian, 20 pages

Electronic games are an integral part of many boys' lives. Based on observations made over a two-month period at an electronic games exhibit in an interactive science museum in Vancouver, Canada, we examine three commonly held views about boys and electronic game culture: (a) electronic games and boys' behaviour while playing them contain elements of aggression, violence, competition, fast action, and speed; (b) electronic games encourage anti-social, "loner" behaviour; and (c) boys who play electronic games are susceptible to becoming so devoted to playing the games that they neglect other areas of their lives, such as school, physical activity, and family. Our findings indicate the following: (a) while violent games are popular, many boys prefer games that challenge them mentally; (b) there appears to be little connection between anti-social behaviour and electronic game playing; and (c) many boys who play electronic games also have interests in music, programming, reading, and school.

This paper depicts one facet of the first, exploratory phase of the Electronic Games for Education in Math and Science (E-GEMS) enterprise. E-GEMS is an ongoing research project with the ultimate goal of increasing the proportion of children who enjoy learning and using math and science---specifically by engaging children's interest in these subjects through the play of electronic games in the context of existing classroom educational methods. Hence, we also consider some of the implications for educational electronic game design in view of our findings about current commercial electronic games.

TR-94-02 Generalized Ternary Simulation of Sequential Circuits, January 1994 C. J. Seger and J. A. Brzozowski, 20 pages

(Abstract not available on-line)

TR-94-03 A Multigrid Solver for the Steady State Navier-Stokes Equations Using The Pressure-Poisson Formulation, January 1994 David Sidilkover and Uri M. Ascher, 13 pages

This paper presents an efficient multigrid solver for the steady-state Navier-Stokes equations in 2D on non-staggered grids. The pressure-Poisson equation formulation is used, together with a finite volume discretization. A discretization of the boundary conditions for pressure and velocities is presented. An efficient multigrid algorithm for solving the resulting discrete equations is then developed. The issue of the numerical treatment of advection is also addressed: a family of stable and accurate difference schemes for advection-dominated flows is presented. This family also includes second-order accurate schemes.
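
Not the paper's Navier-Stokes solver, but a minimal sketch of the multigrid idea it builds on, shown for the 1D Poisson problem -u'' = f with damped Jacobi smoothing; all parameter choices here are illustrative.

    import numpy as np

    def smooth(u, f, h, sweeps=3):
        """Damped Jacobi relaxation for -u'' = f at interior points."""
        for _ in range(sweeps):
            u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def restrict(r):
        """Full weighting onto the grid with half as many intervals."""
        rc = np.zeros((len(r) - 1) // 2 + 1)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        return rc

    def prolong(uc, n_fine):
        """Linear interpolation back to the fine grid."""
        u = np.zeros(n_fine + 1)
        u[::2] = uc
        u[1::2] = 0.5 * (uc[:-1] + uc[1:])
        return u

    def v_cycle(u, f, h):
        n = len(u) - 1
        if n == 2:                  # coarsest grid: one unknown, solve exactly
            u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
            return u
        u = smooth(u, f, h)         # pre-smooth
        ec = v_cycle(np.zeros(n // 2 + 1), restrict(residual(u, f, h)), 2 * h)
        return smooth(u + prolong(ec, n), f, h)   # correct, then post-smooth

    n, h = 64, 1.0 / 64
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)   # manufactured so that u = sin(pi x)
    u = np.zeros(n + 1)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())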

TR-94-04 Model-Based Object Recognition - A Survey of Recent Research, January 1994 Arthur R. Pope, 33 pages

We survey the main ideas behind recent research in model-based object recognition. The survey covers representations for models and images and the methods used to match them. Perceptual organization, the use of invariants, indexing schemes, and match verification are also reviewed. We conclude that there is still much room for improvement in the scope, robustness, and efficiency of object recognition methods. We identify what we believe are the ways improvements will be achieved.

TR-94-05 Cooperative Learning in the Classroom: The Importance of a Collaborative Environment for Computer-Based Education, February 1994 Kori Inkpen, Kellogg S. Booth, Maria Klawe and Rena Upitis, 11 pages

Cooperative behavior of students playing an educational computer game was investigated. The combination of gender and whether one or two computers were present significantly affected the level of achievement as measured by the number of puzzles completed in the game. Female/Female pairs playing on two computers completed, on average, fewer puzzles than pairs in any other condition. Differences were also observed between gender pairs sharing control of the mouse while playing on a single computer: Male/Male pairs had a higher number and percentage of refusals to give up control of the mouse.

TR-94-06 Constant Time Parallel Indexing of Points in a Triangle, February 1994 Simon Kahan and Pierre Kelsen, 9 pages

Consider a triangle whose three vertices are grid points. Let k denote the number of grid points in the triangle. We describe an indexing of the triangle: a bijective mapping from {0, ..., k-1} to the grid points in the triangle. Computing such a mapping is a fundamental subroutine in fine-grained parallel computation arising in graphics applications such as ray-tracing. We describe a very fast indexing algorithm: after a preprocessing phase requiring time proportional to the number of bits in the vertices of the triangle, a grid point in the triangle can be computed in constant time from its index. The method requires only constant space.
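
For illustration, a brute-force Python construction of such a bijection; the report's contribution is the constant-time, constant-space lookup, which this enumeration (linear in the area) does not attempt.

    # Enumerate the lattice points of a triangle with integer vertices and
    # pair them with indices 0..k-1. Purely illustrative: the report computes
    # the index-to-point map in O(1) after preprocessing.
    def grid_points(a, b, c):
        def cross(o, p, q):
            return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
        if cross(a, b, c) < 0:          # orient counter-clockwise
            b, c = c, b
        xs, ys = [a[0], b[0], c[0]], [a[1], b[1], c[1]]
        return [(x, y)
                for x in range(min(xs), max(xs) + 1)
                for y in range(min(ys), max(ys) + 1)
                if cross(a, b, (x, y)) >= 0
                and cross(b, c, (x, y)) >= 0
                and cross(c, a, (x, y)) >= 0]

    pts = grid_points((0, 0), (7, 2), (3, 9))
    k = len(pts)
    index_to_point = dict(enumerate(pts))             # {0, ..., k-1} -> points
    point_to_index = {p: i for i, p in index_to_point.items()}
    assert all(point_to_index[index_to_point[i]] == i for i in range(k))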

TR-94-07 A Novel Constraint-Based Data Fusion System for Limited-Angle Computed Tomography, March 1994 Jeffrey E. Boyd, 136 pages

(Abstract not available on-line)

TR-94-08 A Computational Theory of Decision Networking, March 1994 Nevin L. Zhang, 196 pages

(Abstract not available on-line)

TR-94-09 Abduction to Plausible Causes: An Event-Based Model of Belief Update, March 1994 Craig Boutilier, 29 pages

The Katsuno and Mendelzon theory of belief update has been proposed as a reasonable model for revising beliefs about a changing world. However, the semantics of update relies on information which is not readily available. We describe an alternative semantical view of update in which observations are incorporated into a belief set by: a) explaining the observation in terms of a set of plausible events that might have caused that observation; and b) predicting further consequences of those explanations. We also allow the possibility of conditional explanations. We show that this picture naturally induces an update operator under certain assumptions. However, we argue that these assumptions are not always reasonable, and they restrict our ability to integrate update with other forms of revision when reasoning about action.

TR-94-10 Probabilistic Conflicts in a Search Algorithm for Estimating Posterior Probabilities in Bayesian Networks, March 1994 David Poole, 40 pages

This paper presents a search algorithm for estimating prior and posterior probabilities in discrete Bayesian networks. It shows how conflicts (as used by the consistency-based diagnosis community) can be adapted to speed up the search. This is an "anytime" algorithm that at any stage can estimate probabilities and give an error bound. This algorithm is especially suited to the case where there are skewed distributions, although nothing about the algorithm or the definitions depends on skewness of conditional distributions. Empirical results with Bayesian networks having tens of thousands of nodes are presented.
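
A toy sketch of the anytime flavour of such estimates, on a hypothetical three-node chain with made-up conditional probabilities; the paper's algorithm searches partial assignments and uses conflicts to prune, rather than sorting the full joint distribution as done here.

    from itertools import product

    # Toy chain A -> B -> C with skewed CPTs (numbers are illustrative).
    p_a = 0.9
    p_b = {True: 0.8, False: 0.1}    # P(B = true | A)
    p_c = {True: 0.7, False: 0.2}    # P(C = true | B)

    def joint(a, b, c):
        pr = p_a if a else 1.0 - p_a
        pr *= p_b[a] if b else 1.0 - p_b[a]
        pr *= p_c[b] if c else 1.0 - p_c[b]
        return pr

    # Anytime bounds on P(A = true | C = true): explored mass gives the
    # estimate, unexplored mass gives the error bound.
    worlds = sorted(product([True, False], repeat=3), key=lambda w: -joint(*w))
    mass_e = mass_qe = 0.0           # evidence mass, query-and-evidence mass
    remaining = 1.0                  # probability mass not yet explored
    for a, b, c in worlds:
        remaining = max(0.0, remaining - joint(a, b, c))
        if c:                        # world consistent with evidence C = true
            mass_e += joint(a, b, c)
            if a:
                mass_qe += joint(a, b, c)
        lower = mass_qe / (mass_e + remaining)
        upper = (mass_qe + remaining) / (mass_e + remaining)
        print(f"P(A | C) in [{lower:.4f}, {upper:.4f}]")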

TR-94-11 Semantics, Consistency and Query Processing of Empirical Deductive Databases, April 1994 Raymond T. Ng, 35 pages

In recent years, there has been growing interest in reasoning with uncertainty in logic programming and deductive databases. However, most frameworks proposed thus far are either non-probabilistic in nature or based on subjective probabilities. In this paper, we address the problem of incorporating empirical probabilities -- that is, probabilities obtained from statistical findings -- in deductive databases. To this end, we develop a formal model-theoretic basis for such databases. We also present a sound and complete algorithm for checking the consistency of such databases. Moreover, we develop consistency-preserving ways to optimize the algorithm for practical usage. Finally, we show how query answering for empirical deductive databases can be carried out. Keywords: deductive databases, empirical probabilities, model semantics, constraint satisfaction, optimizations, query answering

TR-94-12 Topology Building and Random Polygon Generation, April 1994 Chong Zhu, 93 pages

(Abstract not available on-line)

TR-94-13 Efficient and Effective Clustering Methods for Spatial Data Mining, May 1994 Raymond T. Ng and Jiawei Han, 25 pages

Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLARANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLARANS. Our analysis and experiments show that with the assistance of CLARANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLARANS with that of existing clustering methods show that CLARANS is the most efficient.
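
A minimal sketch of the CLARANS search structure; the parameter names numlocal and maxneighbor follow the paper, while the data and distance function are illustrative.

    import random

    def cost(medoids, points, dist):
        return sum(min(dist(p, m) for m in medoids) for p in points)

    def clarans(points, k, numlocal=2, maxneighbor=100):
        dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
        best, best_cost = None, float("inf")
        for _ in range(numlocal):                 # restarts
            current = random.sample(points, k)
            current_cost = cost(current, points, dist)
            tries = 0
            while tries < maxneighbor:            # randomized local search
                neighbor = list(current)
                neighbor[random.randrange(k)] = random.choice(
                    [p for p in points if p not in current])
                neighbor_cost = cost(neighbor, points, dist)
                if neighbor_cost < current_cost:  # move to a better neighbor
                    current, current_cost, tries = neighbor, neighbor_cost, 0
                else:
                    tries += 1
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        return best, best_cost

    pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] \
        + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(50)]
    print(clarans(pts, k=2))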

TR-94-14 Topological Aspects of Regular Languages, May 1994 Nicholas Pippenger, 17 pages

We establish a number of new results (and rederive some old results) concerning regular languages, using essentially topological methods. Our development is based on the duality (established by Stone) between Boolean algebras and certain topological spaces (which are now called "Stone spaces"). (This duality does not seem to have been recognized in the literature on regular languages, even though it is well known that the regular languages over a fixed alphabet form a Boolean algebra and that the "implicit operations" with a fixed number of operands form a Stone space!) By exploiting this duality, we are able to obtain a much more accessible account of the Galois correspondence between varieties of regular languages (in the sense of Eilenberg) and certain sets of "implicit identities". The new results include an analogous Galois correspondence for a generalization of varieties, and an explicit characterization by means of closure conditions of the sets of implicit identities involved in these correspondences.

TR-94-15 Computing Common Tangents Without a Separating Line, May 1994 David Kirkpatrick and Jack Snoeyink, 10 pages

Given two disjoint convex polygons in standard representations, one can compute outer common tangents in logarithmic time without first obtaining a separating line. If the polygons are not disjoint, there is an additional factor of the logarithm of the intersection or union size, whichever is smaller.
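
A brute-force check that illustrates what an outer common tangent is (a line through one vertex of each polygon that keeps every vertex of both polygons on one side); this quadratic-time sketch is for intuition only and is not the paper's logarithmic-time algorithm.

    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

    def outer_common_tangents(P, Q):
        """All vertex pairs (p, q) spanning an outer common tangent line."""
        tangents = []
        for p in P:
            for q in Q:
                sides = [cross(p, q, r) for r in P + Q if r != p and r != q]
                if all(s >= 0 for s in sides) or all(s <= 0 for s in sides):
                    tangents.append((p, q))
        return tangents

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    quad = [(5, 1), (7, 0), (8, 2), (6, 3)]       # disjoint from the square
    print(outer_common_tangents(square, quad))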

TR-94-16 Transformations in High Level Synthesis: Axiomatic Specification and Efficient Mechanical Verification, May 1994 P. Sreeranga Rajan, 71 pages

In this work, we investigate the specification and mechanical verification of the correctness of transformations used during high level synthesis in hardware design. The high level synthesis system we address here is due, in part, to the SPRITE project at Philips Research Labs. The transformations in this system are used for refinement and optimization of descriptions specified in a signal flow graph language called SPRITE Input Language (SIL). SIL is an intermediate language used during the synthesis of hardware described using languages such as VHDL. Besides being an intermediate language, it also forms the backbone of the TRADES synthesis system from the University of Twente. SIL has been used in the design of hardware for audio and video applications.

We use the Prototype Verification System (PVS) from SRI International to specify and verify the correctness of the transformations. The PVS specification language allows us to investigate the correctness problem at a convenient level of representation, while the PVS verifier features automatic procedures and interactive verification rules to check properties of specifications. It has permitted us to examine not only the correctness, but also the generalization and composition of transformations of SIL.

TR-94-17 Link Strength in Bayesian Networks, May 1994 Brent Boerlage, 102 pages

This thesis introduces the concept of a connection strength (CS) between the nodes in a propositional Bayesian network (BN). Connection strength generalizes node independence from a binary property to a graded measure. The connection strength from node A to node B is a measure of the maximum amount that the belief in B will change when the truth value of A is learned. If the belief in B does not change, they are independent (zero CS), and if it changes a great deal, they are strongly connected (high CS).

Another concept introduced is the link strength (LS) between two adjacent nodes, which is an upper bound on that part of their connection strength which is due only to the link between them (and not other paths which may connect them). Calculating connection strengths is computationally expensive, while calculating link strengths is not. A linear complexity algorithm is provided which finds a bound on the connection strength between any two nodes by combining link strengths along the paths connecting them. Such an algorithm lends substance to notions of an "effect" or "influence" flowing along paths, and "effect" being attenuated by "weak" links, which is terminology that has appeared often in the literature, but only as an intuitive idea.

An algorithm for faster, approximate BN inference is presented, and connection strengths are used to provide bounds for its error. A system is proposed for BN diagrams to be drawn with strong links represented by heavy lines and weak links by fine lines, as a visualization aid for humans. Another visualization aid which is explored is the CS contour map, in which connection strengths from one node to the rest are represented as contour lines super-imposed on a regular BN diagram, allowing the viewer to quickly assess which nodes that node influences the most (or which nodes influence it the most). A non-trivial example BN is presented, some of its connection strengths are calculated, CS contour maps are constructed for it, and it is displayed with link strength indicated by line width.
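
A toy two-node illustration of the connection-strength idea; the measure used here (the largest change from the prior belief in B when A is observed) is a simplification, and the thesis's exact definition may differ.

    # Two-node network A -> B with illustrative numbers.
    p_a = 0.3
    p_b_given_a = {True: 0.9, False: 0.2}    # P(B = true | A)

    # Prior belief in B, before A is observed.
    prior_b = p_a * p_b_given_a[True] + (1 - p_a) * p_b_given_a[False]

    # Connection strength: how far observing A can move the belief in B.
    cs = max(abs(p_b_given_a[a] - prior_b) for a in (True, False))
    print(f"P(B) = {prior_b:.3f}, CS(A -> B) = {cs:.3f}")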

TR-94-18 Prescriptions: A Language for Describing Software Configurations, June 1994 Jim Thornton, 44 pages

Automation of software configuration management is an important practical problem. Any automated tool must work from some specifications of correct or desired configurations. This report introduces a language for describing acceptable configurations of common systems, so that automated management is possible. The proposed language is declarative, while at the same time there are efficient algorithms for modifying a system to conform to a specification in many cases of practical importance.

TR-94-19 Vision Servers and Their Clients, October 1994 James J. Little, 13 pages

Robotic applications impose hard real-time demands on their vision components. To accommodate the real-time constraints, the visual component of a robotic system is often simplified by narrowing the scope of the vision system to a particular task. Another option is to build a generalized vision (sensor) processor that provides multiple interfaces, of differing scales and content, to other modules in the robot. Both options can be implemented in many ways, depending on computational resources.

The tradeoffs among these alternatives become clear when we study the vision process as a server whose clients request information about the world. We model the interface on client-server relations in user interfaces and operating systems. We examine the relation of this model to robot and vision sensor architecture and explore its application to a variety of vision sensor implementations.

TR-94-20 An Analysis of Buffer Sharing and Prefetching Techniques for Multimedia Systems, September 1994 Raymond T. Ng and Jinhai Yang, 30 pages

In this paper, we study the problem of how to maximize the throughput of a continuous-media system, given a fixed amount of buffer space and disk bandwidth, both pre-determined at design time. Our approach is to maximize the utilization of disk and buffers. We propose doing so in two ways. First, we analyze a scheme that allows multiple streams to share buffers. Our analysis and preliminary simulation results indicate that buffer sharing could lead to as much as a 50% reduction in total buffer requirement. Second, we develop three prefetching strategies: SP, IP1 and IP2. As SP demonstrates, straightforward prefetching is not effective at all. In contrast, IP1 and IP2, which prefetch more intelligently than SP, could be valuable in maximizing the effective use of buffers and disk. Our preliminary simulation results show that IP1 and IP2 could lead to a 40% improvement in throughput.

TR-94-21 Incremental Algorithms for Optimizing Model Computation Based on Partial Instantiation, September 1994 Raymond T. Ng and Xiaomei Tian, 28 pages

It has been shown that mixed integer programming methods can effectively support minimal model, stable model and well-founded model semantics for ground deductive databases. Recently, a novel approach called partial instantiation has been developed which, when integrated with mixed integer programming methods, can handle non-ground logic programs. The goal of this paper is to explore how this integrated framework based on partial instantiation can be optimized. In particular, we develop an incremental algorithm that minimizes repetitive computations. We also develop several optimization techniques to further enhance the efficiency of our incremental algorithm. Experimental results indicate that our algorithm and optimization techniques can bring about very significant improvement in run-time performance.

TR-94-22 How Fast Will The Flip Flop?, September 1994 Mark R. Greenstreet and Peter Cahoon, 10 pages

This report describes an experimental investigation of the application of dynamical systems theory to the verification of digital VLSI circuits. We analyze the behavior of a nine-transistor toggle element using a simple, SPICE-like model. We show how such properties as minimum and maximum clock frequency can be identified from topological features of solutions to the corresponding system of differential equations. This dynamical systems perspective also gives clear, continuous-model interpretations of such phenomena as dynamic storage and timing hazards.

TR-94-23 Informal, Semi-Formal, and Formal Approaches to the Specification of Software Requirements., September 1994 Helene Marie Wong, 376 pages

(Abstract not available on-line)

TR-94-24 Defeasible Preferences and Goal Derivations, October 1994 Craig Boutilier, 47 pages

(Abstract not available on-line)

TR-94-25 RASP - Robotics and Animation Simulation Platform, October 1994 Gene S. Lee, 221 pages

(Abstract not available on-line)

TR-94-26 A Foundation for the Design and Analysis of Robotic Systems and Behaviors, October 1994 Zhang Ying, 245 pages

Robots are generally composed of electromechanical parts with multiple sensors and actuators. The overall behavior of a robot emerges from coordination among its various parts and interaction with its environment. Developing intelligent, reliable, robust and safe robots, or real-time embedded systems, has become a focus of interest in recent years. In this thesis, we establish a foundation for modeling, specifying and verifying discrete/continuous hybrid systems and take an integrated approach to the design and analysis of robotic systems and behaviors.

A robotic system in general is a hybrid dynamic system, consisting of continuous, discrete and event-driven components. We develop a semantic model for dynamic systems, which we call Constraint Nets (CN). CN introduces an abstraction and a unitary framework to model discrete/continuous hybrid systems. CN provides aggregation operators to model a complex system hierarchically. CN supports multiple levels of abstraction, based on abstract algebra and topology, to model and analyze a system at different levels of detail. CN, because of its rigorous foundation, can be used to define programming semantics of real-time languages for control systems.

While modeling focuses on the underlying structure of a system --- the organization and coordination of its components --- requirements specification imposes global constraints on a system's behavior, and behavior verification ensures the correctness of the behavior with respect to its requirements specification. We develop a timed linear temporal logic and timed ∀-automata to specify timed as well as sequential behaviors. We develop a formal verification method for timed ∀-automata specification, by combining a generalized model checking technique for automata with a generalized stability analysis method for dynamic systems.

A good design methodology can simplify the verification of a robotic system. We develop a systematic approach to control synthesis from requirements specification, by exploring a relation between constraint satisfaction and dynamic systems using constraint methods. With this approach, control synthesis and behavior verification are coupled through requirements specification.

To model, synthesize, simulate, and understand various robotic systems we have studied in this research, we develop a visual programming and simulation environment that we call ALERT: A Laboratory for Embedded Real-Time systems.

TR-94-27 DECISION GRAPHS: Algorithms and Applications to Influence Diagram Evaluation and High-Level Path Planning Under Uncertainty, October 1994 Runping Qi, 225 pages

(Abstract not available on-line)

TR-94-28 Computing the largest inscribed isothetic rectangle, December 1994 Helmut Alt, David Hsu and Jack Snoeyink, 6 pages

This paper describes an algorithm to compute, in Theta(log n) time, a rectangle that is contained in a convex n-gon, has sides parallel to the coordinate axes, and has maximum area. With a slight modification it will compute the rectangle of largest perimeter. The algorithm uses a tentative prune-and-search approach, even though this problem does not appear to fit into the functional framework of Kirkpatrick and Snoeyink.
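
For intuition, a grid-search sketch that rests on the same geometric fact as the problem itself: in a convex polygon an axis-parallel rectangle is contained iff its corners are, so a candidate slab (x1, x2) can be scored from the polygon's y-range at x1 and x2. This polynomial-time brute force is nowhere near the paper's Theta(log n) bound.

    def y_range(poly, x):
        """Y-interval of a convex polygon cut by the vertical line X = x."""
        ys = []
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if x1 != x2 and min(x1, x2) <= x <= max(x1, x2):
                ys.append(y1 + (x - x1) * (y2 - y1) / (x2 - x1))
            elif x1 == x2 == x:
                ys.extend([y1, y2])
        return (min(ys), max(ys)) if ys else None

    def largest_rectangle(poly, steps=200):
        xs = [p[0] for p in poly]
        grid = [min(xs) + (max(xs) - min(xs)) * i / steps
                for i in range(steps + 1)]
        ranges = [y_range(poly, x) for x in grid]
        best = (0.0, None)
        for i in range(len(grid)):
            for j in range(i + 1, len(grid)):
                if ranges[i] and ranges[j]:
                    lo = max(ranges[i][0], ranges[j][0])
                    hi = min(ranges[i][1], ranges[j][1])
                    if hi > lo and (grid[j] - grid[i]) * (hi - lo) > best[0]:
                        best = ((grid[j] - grid[i]) * (hi - lo),
                                (grid[i], lo, grid[j], hi))
        return best

    print(largest_rectangle([(0, 0), (4, 0), (6, 3), (3, 5), (-1, 2)]))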

TR-94-29 Conservative Approximations of Hybrid Systems, October 1994 Andrew K. Martin and Carl-Johan Seger, 29 pages

Systems that are modeled using both continuous and discrete mathematics are commonly called hybrid systems. Although much work has been done to develop frameworks in which both types of systems can be modeled at the same time, this is often a very difficult task. Verifying that desired properties hold in such hybrid models is even more daunting. In this paper we attack the problem from a different direction. First we make a distinction between two models of the system. A detailed model is developed as accurately as possible. Ultimately, one must trust in its correctness. An abstract model, which is typically less detailed, is actually used to verify properties of the system. The detailed model is typically defined in terms of both continuous and discrete mathematics, whereas the abstract one is typically discrete. We formally define the concept of conservative approximation, a relationship between models, that holds with respect to a translation between specification languages. We then progress by developing a theory that allows us to build a complicated detailed model by combining simple primitives. Simultaneously, we build a conservative approximation by similarly combining pre-defined parameterized approximations of those primitives.

TR-94-30 Weird-a-gons and Other Folded Objects: The Influence of Computer Animation, Paper Models and Cooperative Mediation on Spatial Understanding, October 1994 Rena Upitis, Ricard Dearden, Kori Inkpen, Joan Lawry, Maria Klawe, Kelly Davidson, Stephen Leroux, David Hsu, Nic Thorne, Kamran Sedighian, Robert Scharein and Ann Anderson, 20 pages

(Abstract not available on-line)

TR-94-31 Exploiting Structure in Policy Construction, November 1994 Craig Boutilier, Richard Dearden and M. Goldszmidt

(Abstract not available on-line)

TR-94-32 Modeling Positional Uncertainty in Object Recognition, November 1994 Arthur R. Pope and David G. Lowe, 22 pages

Iterative alignment is one method for feature-based matching of an image and a model for the purpose of object recognition. The method alternately hypothesizes feature pairings and estimates a viewpoint transformation from those pairings; at each stage a refined transformation estimate is used to suggest additional pairings.

This paper extends iterative alignment in the domain of 2D similarity transformations so that it represents the uncertainty in the position of each model and image feature, and that of the transformation estimate. A model describes probabilistically the significance, position, and intrinsic attributes of each feature, plus topological relations among features. A measure of the match between a model and an image integrates all four of these, and leads to an efficient matching procedure called probabilistic alignment. That procedure supports both recognition and a learning procedure for acquiring models from training images.

By explicitly representing uncertainty, one model can satisfactorily describe appearance over a wider range of viewing conditions. Thus, when models represent 2D characteristic views of a 3D object, fewer models are needed. Experiments demonstrating the effectiveness of this approach are reported.
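
The transformation-update step of alignment in the 2D similarity domain reduces to a small linear least-squares problem; the sketch below uses the standard (a, b, tx, ty) parameterization with x' = ax - by + tx and y' = bx + ay + ty, and omits the positional uncertainty that probabilistic alignment additionally tracks.

    import math
    import numpy as np

    def fit_similarity(model_pts, image_pts):
        """Least-squares 2D similarity transform from hypothesized pairings."""
        A, rhs = [], []
        for (x, y), (xp, yp) in zip(model_pts, image_pts):
            A.append([x, -y, 1, 0]); rhs.append(xp)
            A.append([y, x, 0, 1]); rhs.append(yp)
        sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                                  rcond=None)
        return sol                     # a, b, tx, ty; scale = hypot(a, b)

    model = [(0, 0), (1, 0), (1, 1), (0, 1)]
    s, th = 2.0, math.radians(30)      # scale 2, rotation 30 degrees
    image = [(s * math.cos(th) * x - s * math.sin(th) * y + 3,
              s * math.sin(th) * x + s * math.cos(th) * y + 1)
             for x, y in model]
    print(fit_similarity(model, image))   # recovers a, b, tx = 3, ty = 1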

TR-94-33 Multiresolution Rough Terrain Motion Planning, November 1994 Dinesh K. Pai and L. M. Reissell, 24 pages

(Abstract not available on-line)

TR-94-34 Langwidere: A New Facial Animation System, January 31, 1994 David R. Forsey and Carol L.-Y. Wang, 12 pages

This paper presents Langwidere, a facial animation system. Langwidere is the basis for a flexible system capable of imitating a wide range of characteristics and actions, such as speaking or expressing emotion. Langwidere integrates a hierarchical spline modeling system with simulated muscles based on local area surface deformation. The multi-level shape representation allows control over the extent of deformations while reducing the number of control vertices needed to define the surface. The head model is constructed from a single closed surface, allowing the modeling of internal structures such as the tongue and teeth, rather than just a mask. Simulated muscles are attached to various levels of the surface, with the more rudimentary levels substituting for bone such as the skull and jaw. The combination of a hierarchical model and simulated muscles provides precise, flexible surface control and supports easy generation of new characters with a minimum of recoding.

TR-94-35 A Multilevel Approach to Surface Response in Dynamically Deformable Models, January 31, 1994 Larry Palazzi and David R. Forsey, 12 pages

Discretized representations of deformable objects, based upon simple dynamic point-mass systems, rely upon the propagation of forces between neighbouring elements to produce a global change in the shape of the surface. Attempting to make such a surface rigid produces stiff equations that are costly to evaluate with any numerical stability. This paper introduces a new multilevel approach for controlling the response of a deformable object to external forces. The user specifies the amount of flexibility or stiffness of the surface by controlling how the applied forces propagate through the levels of a multi-resolution representation of the object. A wide range of surface behaviour is possible, and rigid motion is attained without resort to special numerical methods. This technique is applied to the displacement constraints method of Gascuel and Gascuel to provide explicit graduated control of the response of a deformable object to imposed forces.

TR-94-36 Chebyshev Polynomials for Boxing and Intersections of Parametric Curves and Surfaces, January 31, 1994 Alain Fournier and John Buchanan, 17 pages

Ray-tracing is a versatile and popular rendering technique. There is therefore a strong incentive to develop fast, accurate and reliable algorithms to intersect rays with parametric curves and surfaces. We propose and demonstrate the use of Chebyshev basis functions to speed up the computation of the intersections between rays and parametric curves or surfaces. The properties of Chebyshev polynomials result in the computation of better and tighter enclosing boxes. For surfaces they provide a better termination criterion to decide on the limits of subdivision, and allow the use of bilinear surfaces for the computation of the intersection when needed. The efficiency of the techniques used depends on the relative magnitude of the coefficients of the Chebyshev basis functions. We show, from a statistical analysis of the characteristics of several thousand surfaces of different origins, that these techniques will result most of the time in significant improvements in speed and accuracy over other boxing and subdivision techniques. Keywords: computer graphics, computer aided design
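
A small sketch of the enclosure property at work: once a coordinate polynomial of a parametric curve is written in the Chebyshev basis on [-1, 1], the bound |T_k(t)| <= 1 turns the coefficients directly into an enclosing interval that is usually much tighter than the power-basis bound. The cubic below is an arbitrary example.

    import numpy as np
    from numpy.polynomial import chebyshev

    def cheb_bounds(power_coeffs):
        """Enclosing interval over t in [-1, 1], via Chebyshev coefficients."""
        c = chebyshev.poly2cheb(power_coeffs)   # power basis -> Chebyshev basis
        spread = np.abs(c[1:]).sum()            # |T_k| <= 1 for k >= 1, T_0 = 1
        return c[0] - spread, c[0] + spread

    # A 2D parametric cubic: coefficients of x(t), y(t), constant term first.
    x_coeffs = [0.5, 1.0, -0.75, 0.25]
    y_coeffs = [-0.2, 0.3, 0.6, -0.4]
    print("x interval:", cheb_bounds(x_coeffs))
    print("y interval:", cheb_bounds(y_coeffs))
    # The naive power-basis bound is looser: |x(t)| <= sum |a_k| = 2.5 here.
    print("naive x bound:", np.abs(np.array(x_coeffs)).sum())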

TR-94-37 A Kinematic Model for Collision Response, July 13, 1994 Jason Harrison and David Forsey, 20 pages

One aspect of traditional 3D animation using clay or plasticine is the ease with which the object can be deformed. Animators take for granted the ability to interactively press complex objects together. In 3D computer animation, this ability is severely restricted and any improvement would drastically increase the range and style of animations that can be created within a production environment. This paper presents a simple, fast, geometric approach to controlling the nature, extent and timing of the surface deformations arising from the interpenetration of kinematically controlled animated objects. Rather than using dynamic simulations, which are difficult to configure, code, and control, the algorithm presented here formulates collision response kinematically by moving points on a multiresolution surface towards goal points at a certain rate. This new multi-resolution approach to deformation provides control over the response of the surface using a small number of parameters that determine how each level in the multi-resolution representation of the surface reacts to the interpenetration. The deformations are calculated in linear time and space proportional to the number of points used to define the surface.
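
In its simplest form, the response rule sketched above amounts to moving each interpenetrating surface point a fraction of the way toward its goal point every frame; the rate and the goal points below are illustrative stand-ins, not the paper's parameters.

    def respond(points, goals, rate=0.25):
        """One kinematic response step: move each point toward its goal."""
        return [tuple(pi + rate * (gi - pi) for pi, gi in zip(p, g))
                for p, g in zip(points, goals)]

    points = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.2)]   # penetrating surface points
    goals = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]    # projections onto the obstacle
    for frame in range(3):
        points = respond(points, goals)
        print(frame, points)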

TR-94-38 Making Shaders More Physically Plausible, March 4, 1993 Robert R. Lewis, 14 pages

There is a need to develop shaders that not only "look good", but are more physically plausible. From physical and geometric considerations, we review the derivation of a shading equation expressing reflected radiance in terms of incident radiance and the bidirectional reflectance distribution function (BRDF). We then examine the connection between this equation and conventional shaders used in computer graphics. Imposing the additional physical constraints of energy conservation and Helmholtz reciprocity allows us to create variations of the conventional shaders that are more physically plausible.
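
The shading equation referred to has the standard form below, where L_r is reflected radiance, L_i incident radiance, and f_r the BRDF; the paper's exact notation may differ.

    % Reflected radiance at surface point x in direction omega_r, integrating
    % incident radiance against the BRDF over the hemisphere Omega.
    L_r(x, \omega_r) = \int_{\Omega} f_r(x, \omega_i, \omega_r)\,
                       L_i(x, \omega_i)\, \cos\theta_i \, d\omega_i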

TR-94-39 Real-Time Multivariate Data Visualization Using Preattentive Processing, January 31, 1994 Christopher G. Healey, Kellogg S. Booth and James T. Enns, 35 pages

A new method is presented for visualizing data as they are generated from real-time applications. These techniques allow viewers to perform simple data analysis tasks such as detection of data groups and boundaries, target detection, and estimation. The goal is to do this rapidly and accurately on a dynamic sequence of data frames.

Our techniques take advantage of an ability of the human visual system called preattentive processing. Preattentive processing refers to an initial organization of the visual system based on operations believed to be rapid, automatic, and spatially parallel. Examples of visual features that can be detected in this way include hue, orientation, intensity, size, curvature, and line length. We believe that studies from preattentive processing should be used to assist in the design of visualization tools, especially those for which high-speed target, boundary, and region detection are important. Previous work has shown that results from research in preattentive processing can be used to build visualization tools which allow rapid and accurate analysis of individual, static data frames. We extend these techniques to a dynamic real-time environment, allowing users to perform similar tasks on dynamic sequences of frames, exactly like those generated by real-time systems such as visual interactive simulation.

We studied two known preattentive features, hue and curvature. The primary question investigated was whether rapid and accurate target and boundary detection in dynamic sequences is possible using these features. Behavioral experiments were run that simulated displays from our preattentive visualization tools. Analysis of the results showed that rapid and accurate target and boundary detection is possible with both hue and curvature. A second question, whether interactions occur between the two features in a real-time environment, was answered positively. This suggests that these and perhaps other visual features can be used to create visualization tools that allow high-speed multidimensional data analysis in real-time applications. It also shows that care must be taken in the assignment of data elements to preattentive features to avoid creating certain visual interference effects.

TR-94-40 Three-Dimensional Analysis of Scoliosis Surgery Using Stereophotogrammetry, January 31, 1994 Stanley B. Jang, Kellogg S. Booth, Chris W. Reily, Bonita J. Sawatzky and Stephen J. Tredwell, 8 pages

Scoliosis is a deformity characterized by coronal, sagittal and axial rotation of the spine. Surgical instrumentation (metal pins and rods) and eventual fusion are required in severe cases. Assessment of the correction requires enough accuracy to allow rational proactive planning of individual interventions or implant design. Conventional 2-D radiography and newer 3-D CT scanning do not provide this accuracy. A new stereophotogrammetric analysis and 3-D visualization allow accurate assessment of the scoliotic spine during instrumentation. Stereophoto pairs taken at each stage of the operation and robust statistical techniques are used to compute 3-D transformations of the vertebrae between stages. These determine rotation, translation, goodness of fit, and overall spinal contour. A polygonal model of the spine using a commercial 3-D modeling package is used to produce an animation sequence of the transformation. The visualizations have provided some important observations. Correction of the scoliosis is achieved largely through vertebral translation and coronal plane rotation, contrary to claims that large axial rotations are required. The animations provide valuable qualitative information for surgeons assessing the results of scoliotic correction. A detailed study of derotation provided by different instrumentation systems and the assessment of hook position patterns is underway.
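
The per-stage 3-D transformations can be estimated from matched vertebral landmarks with the standard SVD-based least-squares fit sketched below; whether the study used this particular estimator is not stated in the abstract, and the robust statistical techniques it mentions would build on such a fit.

    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares R, t such that Q ~ P @ R.T + t, for (n, 3) arrays."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cq - R @ cp

    P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    th = np.radians(10)                 # a known 10-degree axial rotation
    R_true = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th), np.cos(th), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_transform(P, Q)
    print(np.allclose(R, R_true), t)    # True, [0.5, -0.2, 1.0]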

