CS Theses & Dissertations 1994

For 1994 graduation dates (in alphabetical order by last name):

Unified Language and Operating System Support for Parallel Processing
Acton, Donald
URI : http://hdl.handle.net/2429/6881
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

The programming of parallel and distributed applications is difficult. The proliferation of networks of workstations, combined with the advent of shared-memory machines for desktop and office use, is making parallel and distributed environments more commonplace, and there is an increasing demand for general-purpose applications to be able to make use of these extra processing resources. This thesis explores adding language and system support for general-purpose parallel and distributed programming to the object-oriented language and system Raven. The system is targeted for operation in shared-memory and distributed-memory environments where the processors are general purpose and execute their own independent instruction streams. Support for parallelism forms a fundamental part of Raven. To simplify the creation of parallelism, Raven introduces the concepts of class-based and user-based parallelism. Class-based parallelism is parallelism created within a class, realized through early and delayed result. User-based parallelism results from the use of a class and is realized through companions and certificates. Raven also supplies automatic concurrency control, even across machine boundaries. Concurrency control is attached to an object through the more general property mechanism. Properties are a new way of supplying system services to objects on an individual-object basis. Raven also introduces the notion of invoke streams, which are a way for a third party to sequence invocation requests targeted at the same object. To demonstrate the viability of these ideas, an extensive implementation was performed. Raven runs on several different machines and operating systems, including a 20-processor shared-memory machine. To demonstrate the usability of the parallel constructs and the efficiency of the implementation, several parallel applications were written and performance measurements made. Included are implementations on shared-memory and distributed-memory machines that are identical except for a few lines of code specifying the system configuration. A distributed mail system is also presented to highlight the support for writing distributed applications.
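Raven's "early result" and "delayed result" are, roughly, what modern languages expose as futures: the caller keeps working while the invocation runs and blocks only when the value is needed. The sketch below illustrates the idea in Python with `concurrent.futures`; it is an analogy only, not Raven's actual syntax, and `render_page` is an invented stand-in for a long-running invocation.

```python
# Analogy for Raven's "delayed result" using a Python future:
# the caller continues past the invocation and synchronises later.
from concurrent.futures import ThreadPoolExecutor

def render_page(doc_id: int) -> str:
    # Stand-in for a long-running method invocation.
    return f"page-{doc_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Fire the invocation; the caller keeps working ("delayed result").
    future = pool.submit(render_page, 42)
    other_work = sum(range(10))      # overlap computation with the call
    result = future.result()         # block only when the value is needed

print(result, other_work)  # page-42 45
```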

Selection and Organization of Subjective Contours
Ando, Yoko
URI : http://hdl.handle.net/2429/5110
Degree : Master of Science – MSc
Graduation Date : 1994-05

2D Contour Shape Analysis for Automated Herring Roe Quality Grading by Computer Vision
Beatty, Andrew
URI : http://hdl.handle.net/2429/5189
Degree : Master of Science – MSc
Graduation Date : 1994-05

Enhancing Case Tool Support for Analysis through Formal Logic Utilities
Borm, Eric
URI : http://hdl.handle.net/2429/5457
Degree : Master of Science – MSc
Graduation Date : 1994-11

A Novel Constraint-Based Data Fusion System for Limited-Angle Computed Tomography
Boyd, Jeffrey E.
URI : http://hdl.handle.net/2429/6877
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

Computed tomography (CT) is a non-destructive evaluation technique that reconstructs the cross section of a specimen from x-ray raysum measurements. Whereas CT reconstruction is an ill-posed inverse problem that is easily solved, limited-angle CT, where raysum data are missing for a range of angles, is more severely ill-posed and more difficult to solve. In the limited-angle case, a priori assumptions are necessary to constrain the problem. Specimens wider than the x-ray source to sensor spacing require limited-angle CT. Furthermore, if the specimen is a sandwich structure, i.e., some core material surrounded by load-bearing face sheets, then the face sheets must lie in the null space. Components in the null space do not appear in the raysum data and thus confound CT reconstruction because there is no basis for interpolation. This thesis proposes a novel constraint-based data fusion method for limited-angle CT reconstruction of sandwich structures. The method reduces the reliance of limited-angle CT on assumptions by using range and ultrasound measurements to constrain the solution. Fusion of the data sources results in a problem with a much smaller null space that no longer includes the face sheets. The reduction of the null space in a manner consistent with the specimen yields a more accurate tomographic reconstruction. Synthetic and real data experiments show marked improvement in reconstruction accuracy achieved by using the fusion system.
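The null-space argument can be made concrete with a toy linear system. The sketch below is not Boyd's algorithm and the matrices are invented for illustration: a limited-angle system whose raysums cannot distinguish pixels within a row gains full rank once constraint rows from a second sensor are fused in.

```python
# Toy sketch: fusing extra constraints shrinks the null space of a
# rank-deficient limited-angle system A x = b.
import numpy as np

# 4-pixel "cross section"; raysums along one direction only see row
# sums, so two pixels in the same row are indistinguishable.
A_limited = np.array([[1.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(A_limited))  # 2: two degrees of freedom unresolved

# Fuse constraints from other sensors (e.g. ultrasound pinning two pixels).
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
A_fused = np.vstack([A_limited, C])
print(np.linalg.matrix_rank(A_fused))  # 4: the null space is eliminated
```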

Filtering Volumetric Data
Buchanan, John W.
URI : http://hdl.handle.net/2429/6882
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

The display of volumetric data is a problem of increasing importance: The display of this data is being studied in texture mapping and volume rendering applications. The goal of texture mapping is to add variation to the surfaces that is not caused by the geometric models of the objects. The goal of volume rendering is to display the data so that the study of this data is made easier. Three-dimensional texture mapping requires the use of filtering not only to reduce aliasing artifacts but also to compute the texture value which is to be used for the display. Study of two-dimensional texture map filtering techniques led to a number of techniques which were extended to three dimensions: namely clamping, elliptical weighted average (EWA) filters, and a pyramidal scheme known as NIL maps; (NIL stands for nodus in largo, the rough translation of which is knot large). The use of three-dimensional textures is not a straightforward extension of the use of two-dimensional textures. Where two-dimensional textures are usually discrete arrays of texture samples which are applied to the surface of objects, three-dimensional textures are usually procedural textures which can be applied on the surface of an object, throughout the object, or in volumes near the object. We studied the three-dimensional extensions of clamping, EWA filters, and NIL maps for filtering these textures. In addition to these three techniques a direct evaluation technique based on quadrature methods is presented. The performance of these four techniques is compared using a variety of criteria, and recommendations are made regarding their use. There are several techniques for volume rendering which can be formulated as filtering operations. By altering these display filters different views of the data can be generated. We modified the NIL map filtering technique for use as a filter-prototyping tool. This extension incorporated transfer functions into the NIL map technique. 
This allows the manipulation of the transfer functions without requiring the re-computation of the NIL maps. The use of NIL maps as a filter-prototyping tool is illustrated with a series of examples.

Determining the Complete Spectrum for Sparse Symmetric Matrices
Cavers, Ian
URI : http://hdl.handle.net/2429/6990
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-11

This dissertation studies a restricted form of the fundamental algebraic eigenvalue problem. From the broad spectrum of eigenvalue problems and solution methods, it focuses upon sequential direct methods for determining moderately large subsets of eigenvalues or the complete spectrum of large sparse symmetric matrices. The thesis uses a combination of theoretical analysis and experimentation with symbolic and numeric implementations to develop generally applicable, reliable, efficient and accurate algorithms that are easily applied by novice and expert practitioners alike. This dissertation's approach is to reexamine eigenvalue methods based on the similarity reduction of matrices to tridiagonal form, developing algorithms that more fully exploit matrix sparsity. Using specially developed sparse reduction tools, the thesis identifies the deficiencies and limitations of existing direct tridiagonalization methods, providing an improved understanding of the underlying fill characteristics of sparse reductions. The best previously published approach combines a bandwidth-reducing preordering with Rutishauser and Schwarz's O(bn²) band-preserving tridiagonalization algorithm. This approach places complete reliance upon the preordering to exploit sparsity, but it typically leaves the band of the matrix relatively sparse prior to reduction. The thesis presents several novel sparse reduction algorithms, including the hybrid tridiagonalization methods HYBBC and HYBSBC, that rearrange the elimination of nonzero entries to improve band sparsity utilization. HYBBC combines Bandwidth Contraction, a diagonally-oriented sparse reduction, with Rutishauser and Schwarz's column-oriented tridiagonalization. For a wide range of 70 practical sparse problems the new algorithm reduces CPU requirements by an average of 31%, with reductions as high as 63%.
HYBSBC improves upon HYBBC's successful techniques by substituting the novel Split Bandwidth Contraction algorithm for Bandwidth Contraction. The Split Bandwidth Contraction algorithm takes additional advantage of band sparsity to significantly improve the efficiency of partial bandwidth contractions. In addition, HYBSBC employs the Z-transition strategy to precisely regulate the transition between its two reduction stages, permitting tridiagonalization in as little as 1/5 the time of Rutishauser and Schwarz. Finally, to demonstrate the relative efficiency of sparse tridiagonalization based eigenvalue methods, the thesis compares variants of the Lanczos algorithm to HYBSBC using theoretical analysis and experimentation with leading Lanczos codes.
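For context, the sketch below shows the textbook dense Householder tridiagonalization that band-preserving methods such as Rutishauser-Schwarz and HYBSBC improve upon for sparse matrices. It is not the thesis's algorithm, but it illustrates the key invariant these methods rely on: the reduction is a similarity transform, so the spectrum is preserved.

```python
# Dense Householder tridiagonalization (textbook version): a similarity
# reduction of a symmetric matrix to tridiagonal form, preserving
# eigenvalues. Band-preserving sparse methods refine this idea.
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by Householder reflections."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k].copy()
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha                      # v = x - alpha * e1
        norm = np.linalg.norm(v)
        if norm < 1e-15:
            continue                       # column already reduced
        v /= norm
        # Apply H = I - 2 v v^T from both sides (similarity transform).
        T[k + 1:, :] -= 2.0 * np.outer(v, v @ T[k + 1:, :])
        T[:, k + 1:] -= 2.0 * np.outer(T[:, k + 1:] @ v, v)
    return T

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
S = M + M.T                                # symmetric test matrix
T = tridiagonalize(S)
# Eigenvalues are invariant under the similarity reduction.
print(np.allclose(np.sort(np.linalg.eigvalsh(T)),
                  np.sort(np.linalg.eigvalsh(S))))  # True
```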

Configuration Management Using Objects and Constraints
Coatta, Terry
URI : http://hdl.handle.net/2429/6888
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

Distributed programming techniques have transformed applications into federations of cooperating semi-autonomous components. Complex interactions between these components create interdependencies which are quickly outstripping the capacity of human systems managers. Adding configuration management features to an application's components reduces the flexibility and portability of those components. This thesis develops a new model of configuration management designed to address these issues. To respond to the wide variety of situations in which configuration management is required, the model is universal, addressing with one formalism many different types of configuration management tasks. In order to reduce reliance on human systems managers, the model describes configuration management as an active process carried out by the computer itself. Finally, as the need for automated configuration management is most acute in distributed systems, the model provides for the distribution of configuration management activities. This new model of configuration management is the basis for the design of a new configuration management tool, the Raven Configuration Management System (RCMS). RCMS provides an environment in which configuration management is handled orthogonally to the task of programming individual components. RCMS allows a programmer to group related components together and specify rules which govern their interactions. RCMS combines object-oriented structuring with declarative programming to produce a system which provides improved reliability and performance in the presence of an evolving environment. The core of RCMS is a new configuration programming language called the Raven Configuration Language (RCL). This thesis describes the implementation of RCMS/RCL.
This implementation demonstrates that the language constructs of RCL can be performed with acceptable efficiency, and it establishes the sort of services required to implement a universal, active, and distributed configuration management system. Finally, this thesis discusses several interesting implementation techniques that address particular problems which arose during the implementation of RCMS/RCL.

Making Predictions Directly from Past Experiences
Craddock, Arthur Julian
URI : http://hdl.handle.net/2429/7008
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

This thesis considers the problem of making predictions about new experiences based upon past experiences. The problem is of interest to artificial intelligence because past experiences are a kind of domain knowledge that is readily available to computational agents, and are at least one form of knowledge that humans use to make predictions. Instead of considering the problem in terms of first inducing a domain model from a set of past experiences, and then using some form of deduction to make predictions, this thesis develops a new technique called the reference class approach (RCA) that directly infers estimates of conditional probabilities from a knowledge base of past experiences. The resulting estimates can be readily used in a number of contexts such as non-monotonic reasoning, the characterisation of probability distribution functions, prediction and classification. Given a knowledge base (KB) of descriptions of past experiences, a description of a new experience, and a proposition representing a query about the new experience, the RCA estimates the conditional probability of the proposition being true of the new experience. The RCA starts by identifying a subset of the KB called the reference class that contains all those past experiences in the KB whose descriptions cover everything that is known about the new experience in addition to providing a truth value for the proposition. If there are no directly applicable past experiences, i.e., the reference class is empty, then the description of the new experience is modified until a non-empty reference class can be found. This thesis investigates two new approaches to modifying the description, namely syntactic generalisation and chaining. Previous research has proposed that logical implication can be used to semantically generalise an empty reference class to any non-empty reference class.
This thesis shows that semantic generalisation does not work in the context of making predictions from a KB of past experiences. This thesis argues that we should syntactically generalise the description of the new experience. Chaining is a novel extension of syntactic generalisation that allows us to systematically increase what we know about a new experience by elaborating its description while generalising. Once a non-empty reference class has been identified the RCA estimates the conditional probability of the proposition being true by measuring the frequency with which the proposition is true in the reference class. The RCA is an inductive technique in that it estimates probabilities directly from past experiences. One useful test of an inductive technique is to test whether or not it can be used to make accurate predictions from past experiences. This thesis argues that in order to implement the RCA we need a notion of irrelevance to pick the most appropriate generalised or chained reference class. This thesis shows that even with very simple notions of irrelevance, the RCA's estimates can be used to make predictions whose accuracy compares favourably with state of the art machine learning techniques on standard test data from the machine learning community.
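A minimal sketch of the reference class idea, with invented data: estimate the conditional probability as the frequency of the query among past experiences matching everything known, and syntactically generalise (drop attributes) when the reference class is empty. The thesis's irrelevance-based choice of which attribute to drop is replaced here by an arbitrary one, so this is an illustration only, not the thesis's method.

```python
# Simplified reference class approach: frequency in the matching subset
# of the KB, generalising the description syntactically when empty.
def rca_estimate(kb, new_experience, query):
    """kb: list of dicts describing past experiences (query key included)."""
    known = dict(new_experience)
    while True:
        ref_class = [e for e in kb
                     if all(e.get(k) == v for k, v in known.items())
                     and query in e]
        if ref_class:
            return sum(e[query] for e in ref_class) / len(ref_class)
        if not known:
            return None              # no basis for prediction
        known.popitem()              # syntactic generalisation: forget one attribute

kb = [
    {"sky": "cloudy", "wind": "high", "rain": True},
    {"sky": "cloudy", "wind": "low",  "rain": True},
    {"sky": "clear",  "wind": "low",  "rain": False},
]
print(rca_estimate(kb, {"sky": "cloudy"}, "rain"))                    # 1.0
print(round(rca_estimate(kb, {"sky": "foggy", "wind": "low"}, "rain"), 3))  # 0.667
```

The second query matches no past experience, so the description is generalised until (here) it falls back to the base rate of rain in the whole KB.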

Abstraction and Search for Decision-Theoretic Planning
Dearden, Richard William
URI : http://hdl.handle.net/2429/5461
Degree : Master of Science – MSc
Graduation Date : 1994-11

Many-Valued Generalizations of Two Finite Intervals in Post's Lattice
Denham, Graham Campbell
URI : http://hdl.handle.net/2429/5287
Degree : Master of Science – MSc
Graduation Date : 1994-11

Universal Access to Electronic Communications Systems in Canada
Ehrcke, Tara R.
URI : http://hdl.handle.net/2429/5265
Degree : Master of Science – MSc
Graduation Date : 1994-11

Object Properties: A Mechanism for Providing Runtime Services to Objects in a Distributed System
Finkelstein, David
URI : http://hdl.handle.net/2429/5401
Degree : Master of Science – MSc
Graduation Date : 1994-11

Reinforcement Learning Using the Game of Soccer
Ford, Roger David
URI : http://hdl.handle.net/2429/5465
Degree : Master of Science – MSc
Graduation Date : 1994-11

Interactive Flow Field Modeling for the Design and Control of Fluid Motion in Computer Animation
Gates, William F.
URI : http://hdl.handle.net/2429/4979
Degree : Master of Science – MSc
Graduation Date : 1994-05

Memory Reclamation in Emerald, an Object-Oriented Programming Language
Han, Xiaomei
URI : http://hdl.handle.net/2429/5329
Degree : Master of Science – MSc
Graduation Date : 1994-11

A Kinematic Model for Collision Response
Harrison, Jason
URI : http://hdl.handle.net/2429/5250
Degree : Master of Science – MSc
Graduation Date : 1994-05

Three-Dimensional Multispectral Stochastic Image Segmentation
Johnston, Brian Gerard
URI : http://hdl.handle.net/2429/4952
Degree : Master of Science – MSc
Graduation Date : 1994-05

A Simple Model of Ship Wakes
Khan, Raza S.
URI : http://hdl.handle.net/2429/5372
Degree : Master of Science – MSc
Graduation Date : 1994-11

A Programming Library for the Construction of 3-D Widgets
Lau, Tony Tat Chung
URI : http://hdl.handle.net/2429/5215
Degree : Master of Science – MSc
Graduation Date : 1994-05

RASP: Robotics and Animation Simulation Platform
Lee, Gene
URI : http://hdl.handle.net/2429/4947
Degree : Master of Science – MSc
Graduation Date : 1994-05

ST to FL Translation for Hardware Design Verification
Lee, Wing Sang (Trevor)
URI : http://hdl.handle.net/2429/5209
Degree : Master of Science – MSc
Graduation Date : 1994-05

A Distributed Directory Service
Li, Hongbing
URI : http://hdl.handle.net/2429/5221
Degree : Master of Science – MSc
Graduation Date : 1994-05

Shade from Shading
Liu, Lili
URI : http://hdl.handle.net/2429/5396
Degree : Master of Science – MSc
Graduation Date : 1994-11

Priming the Cognitive Pump: Implicit Memory and Navigating Multiple Window Interfaces
MacIsaac, Gary Lorne
URI : http://hdl.handle.net/2429/5901
Degree : Master of Science – MSc
Graduation Date : 1994-11

Protocol Verification Using Symbolic Model Checking
Mathieson, Charles G.
URI : http://hdl.handle.net/2429/4956
Degree : Master of Science – MSc
Graduation Date : 1994-05

Deformable Models Using Displacement Constraints
Palazzi, Larry
URI : http://hdl.handle.net/2429/1654
Degree : Master of Science – MSc
Graduation Date : 1994-05

Shading and Inverse Shading from Direct Illumination
Poulin, Pierre
URI : http://hdl.handle.net/2429/8735
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

An understanding of light and its interaction with matter is essential to produce images. As the modeling of light sources, light transport and light reflection improves, it becomes possible to render images with increasing realism. The central motivation behind this thesis is to improve realism in computer graphics images through more accurate local shading models, and to assist the user in obtaining the desired lighting effects with these more complex models. The first part of the thesis addresses the problem of rendering surfaces illuminated by extended (linear and area) light sources. To compute the light reflected by a surface element in a given direction, one needs to determine the unoccluded regions (shadowing) of each light source and then to compute the light reflection (shading) from each of these regions. Traditionally, point light sources are distributed on the lights to approximate both the shadowing and the shading. Instead, an efficient analytical solution is developed for the shading. Shadowing from extended light sources is a fairly expensive process. To give some insight into the complexity of computing shadows, some properties of shadows and algorithms are presented. To reduce the cost of computing shadows from linear light sources, two acceleration schemes, extended from ray tracing, are introduced and evaluated. The second part of this thesis addresses the problem of achieving a desired shading effect by building up systems of constraints. This approach is called inverse shading. It allows a user to obtain a desired lighting effect by interacting directly with the scene model. An interactive modeling system has been built to study the introduction of rendering issues into the modeling process itself. Two rendering aspects are considered: shading and shadows. By specifying a highlight position and size, the unique direction of a directional light source as well as the surface glossiness are determined.
Interactively moving this highlight on the surface redefines the light direction while changing its size redefines the surface glossiness. By manipulating shadow volumes (the volume within which a particular light cannot be completely seen), the light source definition and position can be modified. By assigning colours to various points in a 3D scene, information about the colour of a surface as well as other surface characteristics can be deduced. This relatively new concept of defining the causes by manipulating the effects is expected to have a major impact on the design of future computer graphics modeling systems. It begins a new family of tools for the design of more intuitive user interfaces.
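The highlight-to-light-direction step can be sketched for a perfect mirror: the inferred directional light is the viewer direction reflected about the surface normal at the highlight. This simplified version omits the glossiness recovery from highlight size described above, so it is an illustration of the geometry, not the thesis's full method.

```python
# Inverse shading sketch: given the surface normal n at a specular
# highlight and the (unit) direction v toward the viewer, the light
# direction is the mirror reflection of v about n: L = 2 (n . v) n - v.
import numpy as np

def light_from_highlight(n, v):
    n = n / np.linalg.norm(n)
    v = v / np.linalg.norm(v)
    return 2.0 * np.dot(n, v) * n - v

n = np.array([0.0, 0.0, 1.0])          # surface normal at the highlight
v = np.array([0.0, 0.0, 1.0])          # viewer straight above the point
print(light_from_highlight(n, v))      # [0. 0. 1.]: light directly overhead
```

Moving the highlight changes n at the highlight point, and re-running the formula yields the new light direction, which matches the interactive workflow described above.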

Decision Graphs: Algorithms and Applications to Influence Diagram Evaluation and High-Level Path Planning under Uncertainty
Qi, Runping
URI : http://hdl.handle.net/2429/6999
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

Decision making under uncertainty has been an active research topic in decision theory, operations research and Artificial Intelligence. The main objective of this thesis is to develop a uniform approach to the computational issues of decision making under uncertainty. Towards this objective, decision graphs have been proposed as an intermediate representation for decision making problems, and a number of search algorithms have been developed for evaluating decision graphs. These algorithms are readily applicable to decision problems given in the form of decision trees and in the form of finite stage Markov decision processes. In order to apply these algorithms to decision problems given in the form of influence diagrams, a stochastic dynamic programming formulation of influence diagram evaluation has been developed and a method to systematically transform a decision making problem from an influence diagram representation to a decision graph representation is presented. Through this transformation, a decision making problem represented as an influence diagram can be solved by applying the decision graph search algorithms. One of the advantages of our method for influence diagram evaluation is its ability to exploit asymmetry in decision problems, which can result in exponential savings in computation. Some problems that can be viewed as decision problems under uncertainty, but are given neither in the form of Markov decision processes, nor in the form of influence diagrams, can also be transformed into decision graphs, though this transformation is likely to be problem-specific. One problem of this kind, namely high level navigation in uncertain environments, has been studied in detail. As a result of this case study, a decision theoretic formulation and a class of off-line path planning algorithms for the problem have been developed. 
Since the problem of navigation in uncertain environments is of importance in its own right, an on-line path planning algorithm with polynomial time complexity for the problem has also been developed. Initial experiments show that the on-line algorithm can result in satisfactory navigation quality.
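The "fold back" evaluation underlying decision-tree and decision-graph search can be sketched as a max over decision nodes and an expectation over chance nodes. The node encoding below is invented for illustration and is not the thesis's representation.

```python
# Toy fold-back evaluation of a decision tree: maximise expected
# utility by taking a max at decision nodes and an expectation at
# chance nodes.
def evaluate(node):
    kind = node[0]
    if kind == "utility":
        return node[1]
    if kind == "decision":                      # choose the best action
        return max(evaluate(child) for child in node[1])
    if kind == "chance":                        # average over outcomes
        return sum(p * evaluate(child) for p, child in node[1])

# Decide whether to drill for oil: drilling pays 100 with probability
# 0.3 and -40 otherwise; not drilling pays 0.
tree = ("decision", [
    ("chance", [(0.3, ("utility", 100.0)), (0.7, ("utility", -40.0))]),
    ("utility", 0.0),
])
print(round(evaluate(tree), 10))  # 2.0: 0.3*100 - 0.7*40 > 0, so drill
```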

An Integrated Approach to Programming and Performance Modeling of Multicomputers
Sreekantaswamy, Halsur V.
URI : http://hdl.handle.net/2429/6951
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

The relative ease with which it is possible to build inexpensive, high-performance multicomputers using regular microprocessors has made them very popular in the last decade. The major problem with multicomputers is the difficulty in effectively programming them. Programmers are often faced with the choice of using high-level programming tools that are easy to use but provide poor performance, or low-level tools that take advantage of specific hardware characteristics to obtain better performance but are difficult to use. In general, existing parallel programming environments do not provide any guarantee of performance, and they provide little support for performance evaluation and tuning. This dissertation explores an approach in which users are provided with programming support based on parallel programming paradigms. We have studied two commonly used parallel programming paradigms: Processor Farm and Divide-and-Conquer. Two runtime systems, Pfarm and TrEK, were designed and developed for applications that fit these paradigms. These systems hide the underlying complexities of multicomputers from the users, and are easy to use and topology-independent. Performance models are derived for these systems, taking into account the computation and communication characteristics of the applications in addition to the characteristics of the hardware and software system. The models were experimentally validated on a large transputer-based system. The models are accurate and proved useful for performance prediction and tuning. Pfarm and TrEK were integrated into Parsec, a programming environment that supports program development and execution tools such as a graphical interface, mapper, loader and debugger. They have also been used to develop several image processing and numerical analysis applications.
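A processor farm, in the spirit of Pfarm but not its actual API, can be sketched as a manager that queues independent tasks and a pool of workers that drain the queue; the application supplies only the per-task function, and the worker count and topology stay hidden behind the farm.

```python
# Processor-farm sketch: a manager queues independent tasks, workers
# drain the queue, and results are collected into {task: result}.
from queue import Queue, Empty
from threading import Thread

def farm(tasks, work, n_workers=4):
    """Run independent tasks on a pool of workers; return {task: result}."""
    inbox, outbox = Queue(), Queue()
    for t in tasks:
        inbox.put(t)                 # all work is queued before workers start
    def worker():
        while True:
            try:
                t = inbox.get_nowait()
            except Empty:
                return               # farm drained; worker retires
            outbox.put((t, work(t)))
    workers = [Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    results = {}
    while not outbox.empty():
        t, r = outbox.get()
        results[t] = r
    return results

print(sorted(farm(range(6), lambda x: x * x).items()))
# [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```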

Practical Description of Configurations for Distributed Systems Management
Thornton, James Douglass (Jim)
URI : http://hdl.handle.net/2429/5553
Degree : Master of Science – MSc
Graduation Date : 1994-11

Optimizations for Model Computation Based on Partial Instantiation
Tian, Xiaomei
URI : http://hdl.handle.net/2429/5559
Degree : Master of Science – MSc
Graduation Date : 1994-11

The Performance of Robot Manipulation
van den Doel, Cornelis Pieter
URI : http://hdl.handle.net/2429/5240
Degree : Master of Science – MSc
Graduation Date : 1994-05

Informal, Semi-Formal, and Formal Approaches to the Specification of Software Requirements
Wong Cheng In, Marie Helene Lin Lee
URI : http://hdl.handle.net/2429/5607
Degree : Master of Science – MSc
Graduation Date : 1994-11

Maximizing Buffer and Disk Utilizations for News On-Demand
Yang, Jinhai
URI : http://hdl.handle.net/2429/5603
Degree : Master of Science – MSc
Graduation Date : 1994-11

A Computational Theory of Decision Networks
Zhang, Lianwen Nevin
URI : http://hdl.handle.net/2429/6932
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-05

This thesis is about how to represent and solve decision problems in Bayesian decision theory (e.g. Fishburn 1988). A general representation named decision networks is proposed, based on influence diagrams (Howard and Matheson 1984). This new representation incorporates the idea, from Markov decision processes (e.g. Puterman 1990, Denardo 1982), that a decision may be conditionally independent of certain pieces of available information. It also allows multiple cooperative agents and facilitates the exploitation of separability in the utility function. Decision networks inherit the advantages of both influence diagrams and Markov decision processes. Both influence diagrams and finite stage Markov decision processes are stepwise-solvable, in the sense that they can be evaluated by considering one decision at a time. However, the evaluation of a decision network requires, in general, simultaneous consideration of all the decisions. The theme of this thesis is to seek the weakest graph-theoretic condition under which decision networks are guaranteed to be stepwise-solvable, and to seek the best algorithms for evaluating stepwise-solvable decision networks. A concept of decomposability is introduced for decision networks, and it is shown that when a decision network is decomposable, a divide-and-conquer strategy can be utilized to aid its evaluation. In particular, when a decision network is stepwise-decomposable it can be evaluated not only by considering one decision at a time, but also by considering one portion of the network at a time. It is also shown that stepwise-decomposability is the weakest graphical condition that guarantees stepwise-solvability. Given a decision network, there are often nodes and arcs that can be harmlessly removed. An algorithm is presented that is able to find and prune all graphically identifiable removable nodes and arcs.
Finally, the relationship between stepwise-decomposable decision networks (SDDNs) and Markov decision processes is investigated, which results in a two-stage approach for evaluating SDDNs. This approach enables one to make use of the asymmetric nature of decision problems, facilitates parallel computation, and gives rise to an incremental way of computing the value of perfect information.

A Foundation for the Design and Analysis of Robotics Systems and Behaviours
Zhang, Ying (Zhenhai)
URI : http://hdl.handle.net/2429/7195
Degree : Doctor of Philosophy – PhD
Graduation Date : 1994-11

Robots are generally composed of electromechanical parts with multiple sensors and actuators. The overall behavior of a robot emerges from coordination among its various parts and interaction with its environment. Developing intelligent, reliable, robust and safe robots, or real-time embedded systems, has become a focus of interest in recent years. In this thesis, we establish a foundation for modeling, specifying and verifying discrete/continuous hybrid systems, and take an integrated approach to the design and analysis of robotic systems and behaviors. A robotic system in general is a hybrid dynamic system, consisting of continuous, discrete and event-driven components. We develop a semantic model for dynamic systems that we call Constraint Nets (CN). CN introduces an abstraction and a unitary framework to model discrete/continuous hybrid systems. CN provides aggregation operators to model a complex system hierarchically. CN supports multiple levels of abstraction, based on abstract algebra and topology, to model and analyze a system at different levels of detail. Because of its rigorous foundation, CN can be used to define the programming semantics of real-time languages for control systems. While modeling focuses on the underlying structure of a system, the organization and coordination of its components, requirements specification imposes global constraints on a system's behavior, and behavior verification ensures the correctness of the behavior with respect to its requirements specification. We develop a timed linear temporal logic and timed ∀-automata to specify timed as well as sequential behaviors. We develop a formal verification method for timed ∀-automata specifications by combining a generalized model-checking technique for automata with a generalized stability analysis method for dynamic systems. A good design methodology can simplify the verification of a robotic system.
We develop a systematic approach to control synthesis from requirements specification, by exploring a relation between constraint satisfaction and dynamic systems using constraint methods. With this approach, control synthesis and behavior verification are coupled through requirements specification. To model, synthesize, simulate, and understand various robotic systems we have studied in this research, we develop a visual programming and simulation environment that we call ALERT: A Laboratory for Embedded Real-Time systems.

On Testgen+, An Environment for Protocol Test Generation and Validation
Zhou, Jingsong
URI : http://hdl.handle.net/2429/1674
Degree : Master of Science – MSc
Graduation Date : 1994-05

Topology Building and Random Polygon Generation
Zhu, Chongjian
URI : http://hdl.handle.net/2429/5246
Degree : Master of Science – MSc
Graduation Date : 1994-05


