CS Theses & Dissertations 2010

For 2010 graduation dates (in alphabetical order by last name):

Dijkstra-like ordered upwind methods for solving static Hamilton-Jacobi equations
Alton, Ken
DOI : 10.14288/1.0051750
URI : http://hdl.handle.net/2429/25030
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

The solution of a static Hamilton-Jacobi Partial Differential Equation (HJ PDE) can be used to determine the change of shape in a surface for etching/deposition/lithography applications, to provide the first-arrival time of a wavefront emanating from a source for seismic applications, or to compute the minimal-time trajectory of a robot trying to reach a goal. HJ PDEs are nonlinear, so theory and methods for solving linear PDEs do not directly apply. An efficient way to approximate the solution is to emulate the causal property of this class of HJ PDE: the solution at a particular point depends only on values backwards along the characteristic that passes through that point, and solution values always increase along characteristics. In our discretization of the HJ PDE we enforce an analogous causal property: the solution value at a grid node may depend only on the values of nodes in its numerical stencil that are smaller. This causal property is related to, but not the same as, the upwinding property of schemes for time-dependent problems. The solution to such a discretized system of equations can be efficiently computed using a Dijkstra-like method in a single pass through the grid nodes in order of nondecreasing value. We develop two Dijkstra-like methods for solving two subclasses of static HJ PDEs. The first method is an extension of the Fast Marching Method for isotropic Eikonal equations and can be used to solve a class of axis-aligned anisotropic HJ PDEs on an orthogonal grid. The second method solves general convex static HJ PDEs on simplicial grids by computing stencils for a causal discretization in an initial pass through the grid nodes, and then solving the discretization in a second, Dijkstra-like pass through the nodes. This method is suitable for computing solutions on highly nonuniform grids, which may be useful for extending it to an error-control method based on adaptive grid refinement.
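
The single-pass, causal computation described above can be sketched for the simplest member of this class, the isotropic Eikonal equation |∇u| = 1, on a uniform orthogonal grid. This is a minimal Fast-Marching-style sketch, not the thesis's methods; the grid representation, unit speed, and source handling are assumptions:

```python
import heapq

def fast_marching(shape, sources, h=1.0):
    """Dijkstra-like solve of |grad u| = 1 on a uniform grid: each node is
    finalized exactly once, in nondecreasing order of its value."""
    INF = float("inf")
    u = {(i, j): INF for i in range(shape[0]) for j in range(shape[1])}
    for s in sources:
        u[s] = 0.0
    heap = [(0.0, s) for s in sources]
    done = set()
    while heap:
        val, node = heapq.heappop(heap)
        if node in done:
            continue
        done.add(node)
        i, j = node
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb not in u or nb in done:
                continue
            # causal upwind update: the new value depends only on the
            # smaller neighbor values along each axis
            a = min(u.get((nb[0] - 1, nb[1]), INF), u.get((nb[0] + 1, nb[1]), INF))
            b = min(u.get((nb[0], nb[1] - 1), INF), u.get((nb[0], nb[1] + 1), INF))
            a, b = min(a, b), max(a, b)
            if b - a >= h:           # one-sided update
                new = a + h
            else:                    # two-sided quadratic update
                new = 0.5 * (a + b + (2 * h * h - (a - b) ** 2) ** 0.5)
            if new < u[nb]:
                u[nb] = new
                heapq.heappush(heap, (new, nb))
    return u
```

A node's value is accepted when popped, exactly as in Dijkstra's algorithm, which is what makes the single pass possible.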

Simulating viscous incompressible fluids with embedded boundary finite difference methods
Batty, Christopher
DOI : 10.14288/1.0051954
URI : http://hdl.handle.net/2429/28642
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

The behaviour of liquids and gases ranks among the most familiar and yet complex physical phenomena commonly encountered in daily life. To create a seamless approximation of the real world, it is clear that we must be able to accurately simulate fluids. However, a crucial element of what makes fluid behaviour so complex and compelling is its interactions with its surroundings. To simulate the motion of a fluid we cannot consider the Navier-Stokes equations in isolation; we must also examine the boundary conditions at the point where the fluid meets the world. Enforcing these boundary conditions has traditionally been a source of tremendous difficulty. Cartesian grid-based methods typically approximate the world in an unrealistic, axis-aligned block representation, while conforming mesh methods frequently suffer from poor mesh quality and expensive mesh generation. This thesis examines the use of embedded boundary finite difference methods to alleviate these shortcomings by providing a degree of sub-grid information that enables more efficient, flexible, accurate, and realistic simulations. The first key contribution of the thesis is the use of a variational approach to derive novel embedded boundary finite difference methods for fluids, by exploiting the concept of natural boundary conditions. This idea is applied first to animate the interaction between incompressible fluids and irregularly shaped dynamic rigid bodies. I then apply a similar technique to properly handle viscous free surfaces, enabling realistic buckling and coiling in viscous flows. Lastly, I unify these ideas to simulate Stokes flows in the presence of both free surface and solid boundaries, and demonstrate the method's convergence on a range of examples. The second main contribution is a study of embedded boundary methods for pressure projection in the context of unstructured Delaunay and Voronoi meshes. 
By eliminating the need for boundary-conforming meshes, this work enables efficient high-quality adaptive mesh generation and improves simulation accuracy. Furthermore, it demonstrates that by placing simulation samples at Voronoi sites, and choosing these sites intelligently with respect to liquid geometry, one can eliminate surface noise, improve the realism and stability of surface tension, and plausibly simulate nearly arbitrarily thin sheets and droplets.

Markerless 3D reconstruction of temporally deforming surfaces from multiple cameras
Bradley, Derek
DOI : 10.14288/1.0051905
URI : http://hdl.handle.net/2429/25950
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

3D content generation is a major challenge in computer graphics, and generating realistic objects with time-varying shape and motion is a big part of creating realistic virtual environments. An attractive approach for generating content is to acquire the shape of real objects using computer vision techniques. Capturing the real world avoids the complexity of mathematical simulation as well as the time-consuming and difficult process of creating content manually. Traditional acquisition methods, such as range scanning, work well for static objects. However, temporally deformable objects like garments and human faces pose greater challenges. Here, the capture process must reconstruct both the time-varying shape and the motion of the object. Additionally, per-frame texture maps should be acquired for realistic rendering. Previous methods contain various limitations, such as the use of structured illumination patterns, or hand-placed markers on the surface to guide the reconstruction. Placing markers on garments limits the types of garments that can be captured. Both markers and structured light on human faces limit the ability to capture high-quality per-frame texture maps, which are essential for reconstructing performance-specific details such as sweating, wrinkles and blood flow. In this thesis we propose new, entirely passive techniques for advancing the state of the art in 3D reconstruction of temporally deformable surfaces. We reconstruct high-resolution, temporally compatible geometry using multiple video cameras, without the need for either markers or structured light patterns. This thesis contains several contributions, in both computer graphics and computer vision. We start with new methods for multi-camera calibration and synchronization, which allow us to employ inexpensive consumer video cameras for reconstruction.
We also present a novel technique for multi-view stereo reconstruction, capable of building high-resolution geometric meshes from a number of images. Finally, we combine these ideas and provide two inexpensive solutions for high-resolution 3D reconstruction of temporally deforming surfaces without the need for markers. The first is an application for capturing the shape and motion of garments, and the second is a new method for acquiring facial performances of actors. These new techniques have the potential for high impact in the film and video game industries.

Patterns and privacy preservation with prior knowledge for classification
Bu, Shaofeng
DOI : 10.14288/1.0051744
URI : http://hdl.handle.net/2429/28749
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

Privacy preservation is a key issue in the outsourcing of data mining. When we seek approaches to protect the sensitive information contained in the original data, it is also important to preserve the mining outcome. We study the problem of privacy preservation in outsourcing of classification, including decision tree classification, support vector machine (SVM), and linear classifications. We investigate the possibility of guaranteeing no-outcome-change (NOC) and consider attack models with prior knowledge. We first conduct our investigation in the context of building decision trees. We propose a piecewise transformation approach using two central ideas: breakpoints and monochromatic pieces. We show that the decision tree is preserved if the transformation functions used for pieces satisfy global (anti-)monotonicity. We empirically show that the proposed piecewise transformation approach can deliver a secure level of privacy and reduce disclosure risk substantially. We then propose two transformation approaches, (i) principled orthogonal transformation (POT) and (ii) true negative point (TNP) perturbation, for outsourcing SVM. We show that POT always guarantees no-outcome-change for both linear and non-linear SVM. The TNP approach gives the same guarantee when the data set is linearly separable. For linearly non-separable data sets, we show that no-outcome-change is not always possible and propose a variant of the TNP perturbation that aims to minimize the change to the SVM classifier. Experimental results show that the two approaches are effective in countering powerful attack models. In the last part, we extend the POT approach to linear classification models and propose combining POT with random perturbation. We conduct a detailed set of experiments and show that the proposed combination can reduce the change to the mining outcome while adding less noise, and thus still provide a high level of privacy protection. We further investigate the POT approach and propose a heuristic to break down the correlations between the original values and the corresponding transformed values of subsets. We show that the proposed approach can significantly improve privacy protection in the worst cases.
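
The no-outcome-change guarantee of an orthogonal transformation rests on the fact that it preserves all inner products, and hence any kernel built from them, so an SVM trained on the transformed data produces the same classifier outcome. A small numpy sketch of this idea (illustrative only; the thesis's actual POT construction is not reproduced here, and the data set is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(d, rng):
    """Draw a random orthogonal matrix via QR decomposition of a
    Gaussian matrix (sign-corrected for a uniform distribution)."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

X = rng.standard_normal((20, 5))   # toy data set (rows are records)
Q = random_orthogonal(5, rng)
Xt = X @ Q                         # transformed data handed to the outsourcer

# Inner products (and therefore distances and kernel values) are
# unchanged, so a linear or RBF-kernel SVM gives the same outcome.
assert np.allclose(X @ X.T, Xt @ Xt.T)
```

The individual attribute values in `Xt`, however, no longer match the originals, which is what provides the privacy protection.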

Nonlinear blood pattern reconstruction
Cecchetto, Benjamin Thomas
DOI : 10.14288/1.0051990
URI : http://hdl.handle.net/2429/29394
Degree : Master of Science - MSc
Graduation Date : 2010-11

Deep learning of invariant spatio-temporal features from video
Chen, Bo
DOI : 10.14288/1.0051916
URI : http://hdl.handle.net/2429/27651
Degree : Master of Science - MSc
Graduation Date : 2010-11

Eliminating the long-running process : separating code and state
Colp, Patrick
DOI : 10.14288/1.0051516
URI : http://hdl.handle.net/2429/18164
Degree : Master of Science - MSc
Graduation Date : 2010-05

Nonlinearly constrained optimization via sequential regularized linear programming
Crowe, Mitch
DOI : 10.14288/1.0051992
URI : http://hdl.handle.net/2429/29648
Degree : Master of Science - MSc
Graduation Date : 2010-11

Combining spatial hints with policy reuse in a reinforcement learning agent
da Silva, Bruno Norberto
DOI : 10.14288/1.0051918
URI : http://hdl.handle.net/2429/27708
Degree : Master of Science - MSc
Graduation Date : 2010-11

Using the structure and motion of stereo point clouds for the semantic segmentation of images
Dockrey, Matthew
DOI : 10.14288/1.0051639
URI : http://hdl.handle.net/2429/17415
Degree : Master of Science - MSc
Graduation Date : 2010-05

Multiscale conditional random fields for machine vision
Duvenaud, David
DOI : 10.14288/1.0051919
URI : http://hdl.handle.net/2429/27032
Degree : Master of Science - MSc
Graduation Date : 2010-11

A high-order accurate particle-in-cell method
Edwards, Essex
DOI : 10.14288/1.0051882
URI : http://hdl.handle.net/2429/26106
Degree : Master of Science - MSc
Graduation Date : 2010-11

Efficient approximation of social relatedness over large social networks and application to query enabled recommender systems
Esfandiar, Pooya
DOI : 10.14288/1.0051910
URI : http://hdl.handle.net/2429/27778
Degree : Master of Science - MSc
Graduation Date : 2010-11

A collaborative planning support system for a multi-touch tabletop : the effect of number of touch inputs on collaboration and output quality
Fernquist, Jennifer Ellen
DOI : 10.14288/1.0051956
URI : http://hdl.handle.net/2429/27908
Degree : Master of Science - MSc
Graduation Date : 2010-11

FireInsight : understanding JavaScript behaviors in web pages by visually exploring the browser
Gao, Steven Xinyue
DOI : 10.14288/1.0051171
URI : http://hdl.handle.net/2429/16754
Degree : Master of Science - MSc
Graduation Date : 2010-05

Empirical analysis of the MEA and MFE RNA secondary structure prediction
Hajiaghayi, Monir
DOI : 10.14288/1.0051976
URI : http://hdl.handle.net/2429/29473
Degree : Master of Science - MSc
Graduation Date : 2010-11

Investigating, designing, and validating a haptic-affect interaction loop using three experimental methods
Hazelton, Thomas William
DOI : 10.14288/1.0051934
URI : http://hdl.handle.net/2429/27813
Degree : Master of Science - MSc
Graduation Date : 2010-11

Graphically enhanced keyboard accelerators for GUIs
Hendy, Jeff
DOI : 10.14288/1.0051504
URI : http://hdl.handle.net/2429/17401
Degree : Master of Science - MSc
Graduation Date : 2010-05

Optimally sweeping convex polygons with two and three guards
Jabbari, Zohreh
DOI : 10.14288/1.0051866
URI : http://hdl.handle.net/2429/24248
Degree : Master of Science - MSc
Graduation Date : 2010-11

Aggregation and constraint processing in lifted probabilistic inference
Kisynski, Jacek Jerzy
DOI : 10.14288/1.0051829
URI : http://hdl.handle.net/2429/23170
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-05

Representations that mix graphical models and first-order logic, called first-order or relational probabilistic models, were proposed nearly twenty years ago, and many more have since emerged. In these models, random variables are parameterized by logical variables. One way to perform inference in first-order models is to propositionalize the model, that is, to explicitly consider every element from the domains of logical variables. This approach might be intractable even for simple first-order models. The idea behind lifted inference is to carry out as much inference as possible without propositionalizing. An exact lifted inference procedure for first-order probabilistic models was developed by Poole [2003] and later extended to a broader range of problems by de Salvo Braz et al. [2007]. The C-FOVE algorithm by Milch et al. [2008] expanded the scope of lifted inference and is currently the state of the art in exact lifted inference. In this thesis we address two problems related to lifted inference: aggregation in directed first-order probabilistic models and constraint processing during lifted inference. Recent work on exact lifted inference has focused on undirected models. Directed first-order probabilistic models require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend the C-FOVE algorithm to perform lifted inference in the presence of aggregation parfactors. There are cases where the polynomial time complexity (in the domain size of logical variables) of the C-FOVE algorithm can be reduced to logarithmic time complexity using aggregation parfactors. First-order models typically contain constraints on logical variables. Constraints are important for capturing knowledge regarding particular individuals.
However, the impact of constraint processing on computational efficiency of lifted inference has been largely overlooked. In this thesis we develop an efficient algorithm for counting the number of solutions to the constraint satisfaction problems encountered during lifted inference. We also compare, both theoretically and empirically, different ways of handling constraints during lifted inference.
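
One way to see how lifting can cut the cost of aggregation from polynomial to logarithmic in the domain size: when a child variable is a deterministic OR of n interchangeable parents, the grounded model has n parent variables, but the lifted computation collapses to a single power, computable with O(log n) multiplications. A hedged sketch (a simple OR aggregation chosen for illustration; this is not the aggregation-parfactor machinery itself):

```python
def or_aggregate(p, n):
    """P(child = true) when the child is the OR of n interchangeable
    parents, each independently true with probability p. The grounded
    computation touches n parents; the lifted one evaluates (1 - p)^n
    by square-and-multiply, i.e. O(log n) multiplications."""
    q, result = 1.0 - p, 1.0
    while n:
        if n & 1:
            result *= q
        q *= q
        n >>= 1
    return 1.0 - result
```

The interchangeability of the parents is exactly what the lifted representation exploits: only the count n matters, not the identity of each parent.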

Identification of gene modules using a generative model for relational data
Lacerda, Gustavo
DOI : 10.14288/1.0051953
URI : http://hdl.handle.net/2429/27952
Degree : Master of Science - MSc
Graduation Date : 2010-11

Numerical solution of skew-symmetric linear systems
Lau, Tracy
DOI : 10.14288/1.0051396
URI : http://hdl.handle.net/2429/17435
Degree : Master of Science - MSc
Graduation Date : 2010-05

Selectivity estimation of approximate predicates on text
Lee, Hongrae
DOI : 10.14288/1.0051964
URI : http://hdl.handle.net/2429/28645
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

This dissertation studies selectivity estimation of approximate predicates on text. Intuitively, we aim to count the number of strings that are similar to a given query string. This type of problem is crucial for handling text in RDBMSs in an error-tolerant way. A common difficulty in handling textual data is that it may contain typographical errors, or use similar but different textual representations for the same real-world entity. To handle such data in databases, approximate text processing has gained extensive interest, and commercial databases have begun to incorporate such functionality. One of the key components in the successful integration of approximate text processing in RDBMSs is the selectivity estimation module, which is central to optimizing queries involving such predicates. However, these developments are relatively new, and ad-hoc approaches, e.g., using a constant, have been employed. This dissertation studies reliable selectivity estimation techniques for approximate predicates on text. Among many possible predicates, we focus on two types that are fundamental building blocks of SQL queries: selections and joins. We study two different semantics for each type of operator. We propose a set of related summary structures and algorithms to estimate the selectivity of selection and join operators with approximate matching. A common challenge is that there can be a huge number of variants to consider. The proposed data structures enable efficient counting by considering a group of similar variants together rather than each one separately. A lattice-based framework is proposed to handle overlapping counts among the groups. We performed extensive evaluation of the proposed techniques using real-world and synthetic data sets. Our techniques support popular similarity measures, including edit distance, Jaccard similarity, and cosine similarity, and we show how to extend them to other measures. The proposed solutions are compared with state-of-the-art and baseline methods. Experimental results show that the proposed techniques deliver accurate estimates with small space overhead.
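
The quantity that such summary structures approximate can be stated directly: for an edit-distance selection predicate, it is the fraction of database strings within distance tau of the query. A brute-force baseline for reference (illustrative only; the function names and toy data are made up, and the point of the dissertation's estimators is precisely to avoid this linear scan):

```python
def edit_distance(s, t):
    """Classic dynamic-programming (Levenshtein) edit distance."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def selectivity(strings, query, tau):
    """Exact selectivity of an approximate selection predicate: the
    fraction of strings within edit distance tau of the query."""
    hits = sum(edit_distance(s, query) <= tau for s in strings)
    return hits / len(strings)
```

An estimator is judged by how closely, and with how little space, it approximates what this scan computes exactly.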

Extensibility, modularity, and quality-adaptive streaming towards collaborative video authoring
Légaré, Jean-Sébastien
DOI : 10.14288/1.0051805
URI : http://hdl.handle.net/2429/21741
Degree : Master of Science - MSc
Graduation Date : 2010-05

LACOME : early evaluation and further development of a multi-user collaboration system for shared large displays
MacKenzie, Russell
DOI : 10.14288/1.0051950
URI : http://hdl.handle.net/2429/28028
Degree : Master of Science - MSc
Graduation Date : 2010-11

Addressing age-related pen-based target acquisition difficulties
Moffatt, Karyn Anne
DOI : 10.14288/1.0051336
URI : http://hdl.handle.net/2429/19189
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-05

Technology is increasingly being promoted as a means of addressing age-related cognitive and sensory impairments and enabling seniors to live more independently. Pen-based devices such as Personal Digital Assistants and Tablet PCs are appealing platforms for these endeavors because they are small, mobile, and powerful. Relative to the mouse, pen-based devices have been shown to be particularly beneficial for older adults. However, in terms of garnering widespread adoption, the mouse has historically dominated, leading researchers to focus chiefly on identifying and addressing its age- and motor-related limitations. In contrast, pen-based limitations for older users have been relatively unexplored. This thesis begins to fill that gap in the literature. Our first experiment, an empirical evaluation of pen-based target acquisition across the adult lifespan, identified three main sources of pen-based target acquisition difficulty—missing-just-below, slipping, and drifting—and demonstrated how these difficulties vary across task situation and age. In addition, this work showed that including older adults as participants can help uncover general pen-interaction problems: the missing-just-below and drifting difficulties were evident in younger and older users alike. We next developed seven new target acquisition techniques to improve pen-based interaction, specifically addressing the three difficulties identified, and particularly targeting older adults. Our techniques built upon existing mouse-based techniques developed for older users and pen techniques for younger users. In total, we conducted three experiments to evaluate the seven new pen-based techniques: Reassigned and Deactivated (for missing-just-below), Tap and Glide (for drifting), and Steady, Bubble, and Steadied-Bubble (for slipping). Through these evaluations, we established where our proposed designs were successful at reducing errors, and where further refinement is needed.
Finally, we reflected on our findings across studies to identify age-related, contextual, and technological factors which contributed to our results. These factors help illuminate the underlying reasons for pen-based targeting difficulties and shed light onto areas still needing attention. Overall, the results of this research support our main thesis that the accessibility of pen-based interfaces can be improved for older adults by first examining the sources of age-related acquisition difficulty, and then using the results of this examination to develop improved techniques.

TECTAS : bridging the gap between collaborative tagging systems and structured data
Moosavi, Seyyed Ali
DOI : 10.14288/1.0051986
URI : http://hdl.handle.net/2429/29554
Degree : Master of Science - MSc
Graduation Date : 2010-11

Capturing and modeling of deformable objects
Popa, Tiberiu
DOI : 10.14288/1.0051172
URI : http://hdl.handle.net/2429/14527
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-05

Modeling the behavior of deformable virtual objects has important applications in computer graphics. There are two prevalent approaches for modeling deformable objects: an active one, deforming existing virtual models, and a passive one, capturing the geometry and motion of real objects. This thesis explores the problem of modeling and acquisition of objects undergoing deformations, and proposes a set of practical deformation and capturing tools. The first contribution is a new approach to model deformation that incorporates non-uniform materials into the geometric deformation framework. This technique provides a simple and intuitive method to control the deformation using material properties that can be specified by the user with an intuitive interface or learned from a sequence of sample deformations, facilitating realistic-looking results. Some deformable objects such as garments exhibit complex behavior under motion and thus are difficult to model or simulate, making them a suitable target for capture methods. Methods for capturing garments usually use special markers printed on the fabric to establish temporally coherent correspondences between frames. Unfortunately, this approach is tedious and prevents the capture of interesting, off-the-shelf fabrics. A marker-free approach to capturing garment motion that avoids these problems is presented in chapter three. The method establishes temporally coherent parameterizations between incomplete geometries that are extracted at each time step using a multiview stereo algorithm, and the missing geometry is filled in using a template. Garment motion is characterized by dynamic high-frequency folds. However, these folds tend to be shallow, making them difficult to capture. A new method for reintroducing folds into the sequence using data-driven dynamic wrinkling is presented in chapter four. The method first estimates the folds in the video footage and then wrinkles the surface using space-time deformation.
The validity of the method is demonstrated on several garments captured using several recent techniques. While this markerless reconstruction method is tailored specifically for garments, this thesis also proposes a more general method for reconstructing a consistent frame sequence from a sequence of point clouds captured using multiple video streams. The method uses optical flow to guide a local-parameterization based cross-parameterization method. This reconstruction method accumulates geometric information from all the frames using a novel correction and completion mechanism.

Graphical model structure learning using L₁-regularization
Schmidt, Mark
DOI : 10.14288/1.0051935
URI : http://hdl.handle.net/2429/27277
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

This work looks at fitting probabilistic graphical models to data when the structure is not known. The main tool to do this is L₁-regularization and the more general group L₁-regularization. We describe limited-memory quasi-Newton methods to solve optimization problems with these types of regularizers, and we examine learning directed acyclic graphical models with L₁-regularization, learning undirected graphical models with group L₁-regularization, and learning hierarchical log-linear models with overlapping group L₁-regularization.
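
The core mechanism in all three cases is the L₁ (or group-L₁) penalty, whose proximal operator sets parameters exactly to zero and thereby removes edges from the learned structure. A minimal proximal-gradient (ISTA) sketch on a lasso-style objective (illustrative of the penalty's effect; the thesis itself uses limited-memory quasi-Newton solvers rather than ISTA):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrinks toward zero and sets
    small entries exactly to zero, pruning parameters (edges)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=500):
    """Proximal-gradient (ISTA) solve of min_w 0.5*||Aw - b||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w = soft_threshold(w - (A.T @ (A @ w - b)) / L, lam / L)
    return w
```

The exact zeros produced by `soft_threshold` are what turn a continuous optimization into structure learning: a zero weight means the corresponding edge is absent.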

A first and second longitudinal study of haptic icon learnability : the impact of rhythm and melody
Swerdfeger, Bradley Adam
DOI : 10.14288/1.0051505
URI : http://hdl.handle.net/2429/17452
Degree : Master of Science - MSc
Graduation Date : 2010-05

Inductive principles for learning Restricted Boltzmann Machines
Swersky, Kevin
DOI : 10.14288/1.0051929
URI : http://hdl.handle.net/2429/27816
Degree : Master of Science - MSc
Graduation Date : 2010-11

Dynamic local search for SAT : design, insights and analysis
Tompkins, David Andrew Douglas
DOI : 10.14288/1.0051970
URI : http://hdl.handle.net/2429/29538
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-11

In Boolean logic, a formula is satisfiable if a variable assignment exists that will make the formula equivalent to true, and the propositional satisfiability problem (SAT) is to determine if a given formula is satisfiable. SAT is one of the most fundamental problems in computer science, and since many relevant combinatorial problems can be encoded into SAT, it is of substantial theoretical and practical interest. A popular and successful approach to solving combinatorial problems such as SAT is Stochastic Local Search (SLS). In this dissertation we focus on SLS algorithms for SAT, which can find satisfying variable assignments effectively, but cannot determine if no satisfying variable assignment exists. Our primary goal is to advance the state-of-the-art for SLS algorithms for SAT. We accomplish this goal explicitly by developing new SLS algorithms that outperform the existing algorithms on interesting benchmark problems, and implicitly by advancing the understanding of current algorithms and introducing tools for developing new algorithms. The prevalent theme of our work is Dynamic Local Search (DLS), where DLS algorithms use their search history to dynamically change their behaviour. The cornerstone of this dissertation is UBCSAT, a software framework we developed for efficiently implementing and empirically evaluating SLS algorithms for SAT. We present the Scaling and Probabilistic Smoothing (SAPS) algorithm, which is amongst the state-of-the-art SLS algorithms for SAT. We provide an in-depth study of a class of DLS algorithms, analyze their performance and significantly advance the understanding of their behaviour. We also advance the understanding of the role of random decisions in SLS algorithms, by providing an empirical analysis on both the quality and quantity of random decisions. The capstone of this dissertation is a new conceptual model for representing and designing SLS algorithms for SAT. 
We introduce a new software design architecture that implements our model and is specifically designed to leverage recent tools to automate many of the tedious aspects of algorithm design. We demonstrate that by following our new algorithm design approach, we have achieved significant improvements over previous state-of-the-art SLS algorithms for SAT on encodings of software verification benchmark instances.
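
As a point of reference for the SLS paradigm discussed above, here is a minimal WalkSAT-style local search: start from a random assignment and repeatedly flip a variable from an unsatisfied clause, mixing greedy and random ("noise") decisions. This is a generic textbook sketch, not UBCSAT or SAPS; the clause encoding (signed integers) and all parameters are assumptions:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, noise=0.5, seed=0):
    """Minimal WalkSAT-style stochastic local search for SAT. Clauses are
    lists of nonzero ints: v means variable v is true, -v means false."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def sat(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def num_unsat():
        return sum(not sat(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign              # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < noise:       # random-walk step
            v = abs(rng.choice(clause))
        else:                          # greedy step: flip the best variable
            def cost(lit):
                u = abs(lit)
                assign[u] = not assign[u]
                c = num_unsat()
                assign[u] = not assign[u]
                return c
            v = abs(min(clause, key=cost))
        assign[v] = not assign[v]
    return None  # gave up: SLS cannot prove unsatisfiability
```

The `None` return on timeout reflects the incompleteness noted in the abstract: an SLS solver can find satisfying assignments but can never certify that none exists.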

Convex optimization for generalized sparse recovery
van den Berg, Ewout
DOI : 10.14288/1.0051332
URI : http://hdl.handle.net/2429/16646
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-05

The past decade has witnessed the emergence of compressed sensing as a way of acquiring sparsely representable signals in a compressed form. These developments have greatly motivated research in sparse signal recovery, which lies at the heart of compressed sensing, and which has recently found its use in altogether new applications. In the first part of this thesis we study the theoretical aspects of joint-sparse recovery by means of sum-of-norms minimization, and the ReMBo-l₁ algorithm, which combines boosting techniques with l₁-minimization. For the sum-of-norms approach we derive necessary and sufficient conditions for recovery, by extending existing results to the joint-sparse setting. We focus in particular on minimization of the sum of l₁ and l₂ norms, and give concrete examples where recovery succeeds with one formulation but not with the other. We base our analysis of ReMBo-l₁ on its geometrical interpretation, which leads to a study of orthant intersections with randomly oriented subspaces. This work establishes a clear picture of the mechanics behind the method, and explains the different aspects of its performance. The second part and main contribution of this thesis is the development of a framework for solving a wide class of convex optimization problems for sparse recovery. We provide a detailed account of the application of the framework to several problems, but also consider its limitations. The framework has been implemented in the SPGL1 algorithm, which is already well established as an effective solver. Numerical results show that our algorithm is state-of-the-art, and compares favorably even with solvers for the easier, but less natural, Lagrangian formulations. The last part of this thesis discusses two supporting software packages: Sparco, which provides a suite of test problems for sparse recovery, and Spot, a Matlab toolbox for the creation and manipulation of linear operators.
Spot greatly facilitates rapid prototyping in sparse recovery and compressed sensing, where linear operators form the elementary building blocks. Following the practice of reproducible research, all code used for the experiments and generation of figures is available online at http://www.cs.ubc.ca/labs/scl/thesis/09vandenBerg/.
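
The sum-of-norms formulation for joint-sparse recovery hinges on a penalty whose proximal operator zeroes whole rows at once, so the recovered signals share a common support. A small numpy sketch of that group soft-thresholding step (illustrative of the penalty's behavior; SPGL1 itself solves the constrained formulations rather than applying this prox directly):

```python
import numpy as np

def group_soft_threshold(X, t):
    """Proximal operator of the sum-of-l2-norms penalty sum_i ||X[i, :]||_2.
    Each row is shrunk jointly, so rows (shared supports) are zeroed as a
    unit -- the mechanism behind joint-sparse recovery."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

Rows whose l₂ norm falls below the threshold vanish entirely, which is why the sum-of-norms penalty selects supports jointly across all measurement vectors rather than entry by entry.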

Semantic spatial interoperability framework : a case study in the architecture, engineering and construction (AEC) domain
Webster, April
DOI : 10.14288/1.0051947
URI : http://hdl.handle.net/2429/28461
Degree : Master of Science - MSc
Graduation Date : 2010-11

Beyond equilibrium : predicting human behaviour in normal form games
Wright, James Robert
DOI : 10.14288/1.0051831
URI : http://hdl.handle.net/2429/22506
Degree : Master of Science - MSc
Graduation Date : 2010-05

On-chip surfing interconnect
Yang, Suwen
DOI : 10.14288/1.0051635
URI : http://hdl.handle.net/2429/23597
Degree : Doctor of Philosophy - PhD
Graduation Date : 2010-05

With growing chip sizes and operating frequencies, on-chip global interconnect has become a critical bottleneck for CMOS technology. As processes scale into deep submicron dimensions, the gap between gate delay and global-interconnect delay increases with each technology generation. Bandwidth is also important for on-chip interconnect and is limited by skew and jitter. Due to temperature variation, crosstalk noise, power supply variation and parameter variation, timing variation increases with the length of global interconnect lines. Jitter and skew in the transmitter's and receiver's clocks add timing variation to on-chip interconnect communication. Repeater-based buffering amplifies clock jitter and drops pulses due to intersymbol interference. Latches can be inserted in place of some of the buffers to control the timing variation; however, these latches increase latency and power consumption. In 2002, a novel circuit technique called "surfing" was proposed to bound the timing uncertainty in wave pipelines [Winters02]. This thesis extends the application of surfing to on-chip interconnects and introduces surfing RC interconnect and surfing LC interconnect techniques. For RC interconnects, we present a jitter-attenuating buffer. This buffer uses inverters with variable output strength to implement a simple, low-gain DLL. Chains of these surfing buffers attenuate jitter, making them well suited for source-synchronous interconnect. Furthermore, our chains can be used to reliably transmit handshaking signals and support sliding-window protocols to improve the throughput of asynchronous communication. We use distributed varactors to dynamically vary the latency of LC interconnects and thus effect surfing. Unlike RC signaling, signals on LC interconnect propagate at nearly the speed of light. The varactors not only modulate the line latency but also sharpen the edges of signals. We present both full-swing and low-swing LC interconnect designs. In both interconnects, jitter and skew are attenuated along the line due to the surfing effect. In the low-swing interconnect, the surfing effect also helps reshape the pulses to increase the eye height. To demonstrate these techniques in real silicon, we designed, fabricated and tested a chip. The test results show that surfing LC interconnects are promising for deep submicron technology.
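
A rough intuition for why surfing bounds timing uncertainty: each stage cancels part of an edge's phase error relative to a timing reference, so jitter decays along the chain instead of accumulating as it does through plain repeaters. The following toy model is purely illustrative (a hypothetical first-order software model, not a circuit-level model from the thesis; all parameters are made up):

```python
import random

def propagate(edge_times, stages, period=1.0, surf_gain=0.0, rng=None):
    """Toy timing model of a buffer chain. Each stage adds a small random
    delay error; a surfing stage (surf_gain > 0) additionally cancels a
    fraction of the edge's phase error relative to an ideal periodic
    reference. Illustrative only -- not from the thesis."""
    rng = rng or random.Random(0)
    times = list(edge_times)
    for _ in range(stages):
        for k in range(len(times)):
            times[k] += rng.gauss(0.0, 0.01)            # per-stage jitter
            ideal = k * period                          # timing reference
            times[k] -= surf_gain * (times[k] - ideal)  # surfing correction
    return times
```

With `surf_gain = 0` the chain behaves like plain repeaters and the phase error random-walks with the number of stages; with a modest gain the error stays bounded, which is the surfing effect in miniature.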

An efficient interest management system for networked virtual environment
Zhang, Yunjing
DOI : 10.14288/1.0051926
URI : http://hdl.handle.net/2429/27712
Degree : Master of Science - MSc
Graduation Date : 2010-11


