CS Theses & Dissertations 2011

For 2011 graduation dates (in alphabetical order by last name):

A garbage collector & free space compactor for the parallax storage system
Abdul-Amir, Mohammad
DOI : 10.14288/1.0051194
URI : http://hdl.handle.net/2429/33669
Degree : Master of Science - MSc
Graduation Date : 2011-05

MultiFLIP for energetic two-phase fluid simulation
Boyd, William Landon
DOI : 10.14288/1.0052008
URI : http://hdl.handle.net/2429/32725
Degree : Master of Science - MSc
Graduation Date : 2011-05

Supporting software history exploration
Bradley, Alexander Wilfred John
DOI : 10.14288/1.0051352
URI : http://hdl.handle.net/2429/33722
Degree : Master of Science - MSc
Graduation Date : 2011-05

Usability and the effects of interruption in C-TOC : self-administered cognitive testing on a computer
Brehmer, Matthew Michael
DOI : 10.14288/1.0052109
URI : http://hdl.handle.net/2429/36917
Degree : Master of Science - MSc
Graduation Date : 2011-11

Interactive Bayesian optimization : learning user preferences for graphics and animation
Brochu, Eric
DOI : 10.14288/1.0051462
URI : http://hdl.handle.net/2429/30519
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Bayesian optimization with Gaussian processes has become increasingly popular in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it useful for optimizing expensive black box functions. We examine the case of using Bayesian optimization when the objective function requires feedback from a human. We call this class of problems interactive Bayesian optimization. Here, we assume a parameterized model, and a user whose task is to find an acceptable set of parameters according to some perceptual value function that cannot easily be articulated. This requires special attention to the qualities that make this a unique problem, and so, we introduce three novel extensions: the application of Bayesian optimization to "preference galleries", where human feedback is in the form of preferences over a set of instances; a particle-filter method for learning the distribution of model hyperparameters over heterogeneous users and tasks; and a bandit-based method of using a portfolio of utility functions to select sample points. Using a variety of test functions, we validate our extensions empirically on both low- and high-dimensional objective functions. We also present graphics and animation applications that use interactive Bayesian optimization techniques to help artists find parameters on difficult problems. We show that even with minimal domain knowledge, an interface using interactive Bayesian optimization is much more efficient and effective than traditional "parameter twiddling" techniques on the same problem.
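The core loop the abstract describes — fit a Gaussian process to the samples gathered so far, then query wherever an acquisition function is maximized — can be sketched in a few lines. This is a minimal 1-D illustration only; the kernel length-scale, the upper-confidence-bound acquisition, and the test function are illustrative choices, not the thesis's preference-based setup:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression equations with a zero prior mean.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def ucb(mu, var, kappa=2.0):
    # Upper-confidence-bound acquisition: favour high mean or high uncertainty.
    return mu + kappa * np.sqrt(var)

def bayes_opt(f, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, 3)              # initial random queries
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 201)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(ucb(mu, var))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))       # one expensive evaluation
    return X[np.argmax(y)], y.max()

# Expensive "black box": each call could instead be a human judgment.
f = lambda x: np.exp(-(x - 0.6) ** 2 / 0.05)
x_best, y_best = bayes_opt(f)
```

In the interactive setting the abstract targets, the call to `f` would be replaced by human feedback, such as a choice from a preference gallery.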

Minimizing resource access and management disparities between desktop and Web applications
Cannon, Brett Allen
DOI : 10.14288/1.0052045
URI : http://hdl.handle.net/2429/31025
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Client applications, including both traditional desktop applications and modern Web applications, typically access and manage resources in order to perform their intended work. Unfortunately, each approach lacks something the other has when it comes to resource access and management. Desktop applications typically do not provide enough security for resources to prevent their malicious abuse. While operating systems provide access-control lists, they do not facilitate enforcing the Principle of Least Privilege to minimize insecure resource access. Web applications, while they operate in a sandboxed environment that provides the necessary resource access restrictions, do not have a rich API for data storage management. While HTML5 provides two primitive APIs for a developer to use to manage stored data, neither allows for storing data in the object-oriented manner that a developer is used to. This thesis addresses the question of "can these two shortcomings in resource access and management be overcome in order to lessen the technological gap between desktop applications and Web applications?" For desktop applications, an approach using aspect-oriented software design has been created which adds enforcement of the Principle of Least Privilege by using I/O to dynamically choose what resource permissions to grant the application. I show that this approach can tie into Java applications with rich user interaction and I/O to control resource access while providing a way for third parties to provide the security code for auditing purposes. For Web applications, a library has been designed which introduces automatic object persistence to JavaScript without requiring any modifications of the browser. I show that my library is able to persist a thousand objects without a user-perceptible impact on application performance, all while placing minimal requirements upon the developer to use the library.
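The automatic object persistence the abstract describes for JavaScript can be illustrated with a Python analogue, in which a plain dictionary stands in for HTML5 key-value storage and attribute writes are mirrored into it transparently. The class and key names here are invented for illustration, not the thesis library's API:

```python
import json

class PersistentObject:
    """Transparently mirrors attribute writes into a key-value store,
    analogous to persisting JavaScript objects into HTML5 storage."""
    def __init__(self, store, key):
        object.__setattr__(self, "_store", store)
        object.__setattr__(self, "_key", key)
        # Restore any previously saved state for this key.
        for name, value in json.loads(store.get(key, "{}")).items():
            object.__setattr__(self, name, value)

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        # Auto-persist all public attributes on every write.
        state = {k: v for k, v in self.__dict__.items()
                 if not k.startswith("_")}
        self._store[self._key] = json.dumps(state)

store = {}                                # stands in for window.localStorage
obj = PersistentObject(store, "user:1")
obj.name, obj.visits = "Ada", 3           # each write is persisted
restored = PersistentObject(store, "user:1")
```

The point of the design, as in the abstract, is that the developer never calls a save method: persistence is a side effect of ordinary attribute assignment.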

Learning latent theories of relations and individuals
Chiang, Michael Chi-Hao
DOI : 10.14288/1.0051251
URI : http://hdl.handle.net/2429/33750
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Inductive learning of statistical models from relational data is a key problem in artificial intelligence. Two main approaches exist for learning with relational data, and this thesis shows how they can be combined in a uniform framework. The first approach aims to learn dependencies amongst features (relations and properties), e.g. how users' purchases of products depend on users' preferences for the products and associated properties of users and products. Such models abstract over individuals, and are compact and easy to interpret. The second approach learns latent properties of individuals that explain the observed features, without modelling interdependencies amongst features. Latent-property models have demonstrated good predictive accuracy in practice, and are especially useful when few properties and relations are observed; they can also discover interesting latent groupings of individuals. Our approach aims to learn a unified representation of dependency structures over both observed features and latent properties. We develop a simple approximate EM algorithm for learning the unified representation, and experiments demonstrate cases where our algorithm generates models that predict better than dependency-based models of observed features as well as a state-of-the-art latent-property model. We extend our approximate EM algorithm to handle uncertainty about the number of values for latent properties. We search over the number of values and return error bounds, as an alternative to existing proposals based on sampling in the posterior distribution over the number of values. We also solve a specific case where dependencies involve functional relations, which induces a verbose model with many parameters. In comparison, the standard solution of aggregating over all values of the function yields a simple model that predicts poorly. We show how to learn an optimal intermediate-size representation efficiently by clustering the values of the function. The proposed method generates models that capture interesting clusters of function values, dominates the simple model in prediction, and can surpass the verbose model using far fewer parameters.
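As a rough illustration of the EM machinery the abstract relies on, the sketch below runs EM on a toy model in which each individual's latent property is its component in a 1-D two-Gaussian mixture. The thesis's relational models are far richer; the model, names, and settings here are illustrative only:

```python
import numpy as np

def em_mixture(x, n_iter=50, seed=0):
    """EM for a two-component 1-D Gaussian mixture (unit variance):
    the latent 'property' is each point's unobserved component."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=2)               # component means (random init)
    pi = np.array([0.5, 0.5])             # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities = posterior over the latent property.
        lik = np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) * pi
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the expected assignments.
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        pi = r.mean(axis=0)
    return np.sort(mu)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])
mu = em_mixture(x)   # recovers the two latent group means
```

The alternation is the essence of the thesis's approximate EM: infer a distribution over latent properties given current parameters, then refit the parameters given those expectations.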

Real-time planning and control for simulated bipedal locomotion
Coros, Stelian
DOI : 10.14288/1.0051994
URI : http://hdl.handle.net/2429/30307
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Understanding and reproducing the processes that give rise to purposeful human and animal motions has long been of interest in the fields of character animation, robotics and biomechanics. However, despite the grace and agility with which many living creatures effortlessly perform skilled motions, modeling motor control has proven to be a difficult problem. Building on recent advances, this thesis presents several approaches to creating control policies that allow physically simulated characters to demonstrate skill and purpose as they interact with their virtual environments. We begin by introducing a synthesis-analysis-synthesis framework that enables physically simulated characters to navigate environments with significant stepping constraints. First, an offline optimization method is used to compute control solutions for randomly-generated example problems. Second, the example motions and their underlying control patterns are analyzed to build a low-dimensional step-to-step model of the dynamics. Third, the dynamics model is exploited by a planner to solve new instances of the task in real time. We then present a method for precomputing robust task-based control policies for physically simulated characters. This allows our characters to complete higher-level locomotion tasks, such as walking in a user-specified direction, while interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware locomotion controllers. A constrained state-exploration phase is first used to define a dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning. Lastly, we describe a control strategy for walking that generalizes well across gait parameters, motion styles, character proportions, and a variety of skills. The control requires no character-specific or motion-specific tuning, is robust to disturbances, and is simple to compute. The method integrates tracking using proportional-derivative control, foot placement adjustments using an inverted pendulum model, and Jacobian transpose control for gravity compensation and fine-level velocity tuning. We demonstrate a variety of walking-related skills such as picking up objects placed at any height; lifting, pulling, pushing and walking with heavy crates; stepping over and ducking under obstacles; and climbing stairs.
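Two of the ingredients named in the final walking strategy — proportional-derivative tracking and inverted-pendulum foot placement — can be sketched in a few lines. The gains, unit inertia, and time step below are illustrative, not the thesis's tuned values:

```python
def pd_step(theta, omega, target, kp=120.0, kd=20.0, dt=0.001):
    """One Euler step of a unit-inertia joint under PD control:
    torque = kp * (target - theta) - kd * omega."""
    torque = kp * (target - theta) - kd * omega
    omega += torque * dt       # unit inertia: acceleration == torque
    theta += omega * dt
    return theta, omega

def capture_point(x_com, v_com, h=1.0, g=9.8):
    """Inverted-pendulum foot-placement heuristic: step toward the
    point x + v * sqrt(h / g) to absorb the current velocity."""
    return x_com + v_com * (h / g) ** 0.5

# Drive a joint from rest to a target angle of 0.8 rad.
theta, omega, target = 0.0, 0.0, 0.8
for _ in range(5000):          # five simulated seconds
    theta, omega = pd_step(theta, omega, target)
```

In a full controller these pieces run per simulation step for every joint, with the inverted-pendulum prediction adjusting where the swing foot lands.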

LIVESGEO : a location-based multimedia knowledge sharing system
Chung, Yongik
DOI : 10.14288/1.0052106
URI : http://hdl.handle.net/2429/36762
Degree : Master of Science - MSc
Graduation Date : 2011-11

Learning reduced order linear feedback policies for motion skills
Ding, Kai
DOI : 10.14288/1.0052097
URI : http://hdl.handle.net/2429/38044
Degree : Master of Science - MSc
Graduation Date : 2011-11

Plausible cloth wrinkling for captured mesh sequences
Chen, Xi
DOI : 10.14288/1.0052095
URI : http://hdl.handle.net/2429/38243
Degree : Master of Science - MSc
Graduation Date : 2011-11

Automatic determination of puck possession and location in broadcast hockey video
Duan, Xin
DOI : 10.14288/1.0052111
URI : http://hdl.handle.net/2429/36531
Degree : Master of Science - MSc
Graduation Date : 2011-11

Solving correlation matrix completion problems using parallel differential evolution
Enaganti, Srujan Kumar
DOI : 10.14288/1.0051982
URI : http://hdl.handle.net/2429/30302
Degree : Master of Science - MSc
Graduation Date : 2011-05

Template-based sketch recognition using Hidden Markov Models
Flor, Roey
DOI : 10.14288/1.0051971
URI : http://hdl.handle.net/2429/30238
Degree : Master of Science - MSc
Graduation Date : 2011-05

Developer-centric models : easing access to relevant information in a software development environment
Fritz, Thomas
DOI : 10.14288/1.0052138
URI : http://hdl.handle.net/2429/33716
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

During the development of a software system, large amounts of new information, such as source code, work items and documentation, are produced continuously. As a developer works, one of his major activities is to consult the portions of this information pertinent to his work to answer the questions he has about the system and its development. Current development environments are centered around models of the artifacts used in development, rather than of the people who perform the work, making it difficult and sometimes infeasible for the developer to satisfy his information needs. We introduce two developer-centric models, the degree-of-knowledge (DOK) model and the information fragments model, which support developers in accessing the small portions of information needed to answer the questions they have. The degree-of-knowledge model automatically computes, for each source code element in the development environment, a real value that represents a developer's knowledge of that element based on the developer's authorship and interaction data. We present evidence that both authorship and interaction information are important in characterizing a developer's knowledge of code. We report on the usage of our model in case studies on expert finding, knowledge transfer and identifying changes of interest. We show that our model improves upon an existing expertise finding approach and can accurately identify changes of which a developer should likely be aware. Finally, we discuss the robustness of the model across multiple development sites and teams. The information fragments model automates the composition of different kinds of information and allows developers to easily choose how to display the composed information. We show that the model supports answering 78 questions, identified from interviews with developers, that involve the integration of information siloed by existing programming environments. We also describe how, in a case study, 18 professional developers were able to use a prototype tool based on our model to successfully and quickly answer eight of the 78 questions, with a 94% success rate. The separation of composition and presentation supported by the model allowed the developers to answer the questions according to their personal preferences.
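The degree-of-knowledge computation the abstract describes combines authorship and interaction evidence for each source code element into a single real value. A minimal sketch follows; the linear form, the weights, and the element names are illustrative (the thesis derives its weightings empirically):

```python
def degree_of_knowledge(authorship, interaction, w_a=0.7, w_i=0.3):
    """Combine a developer's authorship and interaction scores for one
    code element into a single degree-of-knowledge value.
    The weights here are illustrative placeholders."""
    return w_a * authorship + w_i * interaction

# Hypothetical per-element evidence for one developer.
evidence = {
    "Parser.parse": {"authored": 0.9, "interacted": 0.2},
    "Cache.evict":  {"authored": 0.0, "interacted": 0.8},
}
dok = {el: degree_of_knowledge(e["authored"], e["interacted"])
       for el, e in evidence.items()}
```

Ranking elements (or developers) by such values is what enables the expert-finding and change-awareness case studies the abstract reports.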

Automatic detection and tracking in underwater environments with marine snow
Gamroth, Catherine A
DOI : 10.14288/1.0052029
URI : http://hdl.handle.net/2429/30440
Degree : Master of Science - MSc
Graduation Date : 2011-05

VM Supported AspectJ
Golbeck, Ryan M.
DOI : 10.14288/1.0051995
URI : http://hdl.handle.net/2429/29910
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Traditional AspectJ implementations use a VM-external implementation approach based on Java bytecode (JBC) instrumentation. These approaches pose an inherent penalty in both steady-state performance and startup or recompilation performance. We present a map of the design space for AspectJ implementations, with potential advantages and disadvantages, use it to evaluate how well related work covers the space, and identify unexplored possibilities. We propose an architecture for AspectJ implementations that supports both VM-internal and VM-external implementations. VM-internal implementations of our architecture fit naturally into existing JVMs, are straightforward to implement, and support all known AspectJ optimizations as well as maintaining efficient startup performance. Because our architecture also supports VM-external implementations, it is both backward and forward compatible. Additionally, we present a suite of benchmarks designed to evaluate AspectJ implementations. This suite contains macro and micro benchmarks designed to measure the steady-state, startup, and overall performance of an implementation. To demonstrate the viability of our architecture, we present one primary implementation, ajvm, in the Jikes Research Virtual Machine (RVM), and several secondary partial implementations in the Jikes RVM and a second JVM, J9. We use our proposed benchmark suite to produce an analysis that confirms our claims: VM-integrated AspectJ implementations can support known and proposed steady-state and startup optimizations, and they overcome the inherent performance penalties associated with VM-external implementations.

A scalable video streaming approach using distributed b-tree
Gramsci, Shantanu Khan
DOI : 10.14288/1.0051349
URI : http://hdl.handle.net/2429/33848
Degree : Master of Science - MSc
Graduation Date : 2011-05

Using line and ellipse features for rectification of broadcast hockey video
Gupta, Ankur
DOI : 10.14288/1.0052023
URI : http://hdl.handle.net/2429/30490
Degree : Master of Science - MSc
Graduation Date : 2011-05

Epigenetic information in the cell : a potential avenue for biocompatible computing
Hentrich, Thomas
DOI : 10.14288/1.0052107
URI : http://hdl.handle.net/2429/36630
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

Living organisms sense, process and produce molecular signals to regulate their activity and thus process information and perform computations on biological substrates. Understanding the fundamental principles of information representation and manipulation in these algorithmic bioprocesses will advance our understanding of biology and inspire novel forms of computation. From one side of this connection, the research field of Bioinformatics uses computational tools to gain insight into the processes of life. On the other side, Biomolecular Computing attempts to utilize molecules and cellular machinery to operate engineered nano-scale biocomputers in live cells for diagnostics, therapeutics and many other applications in biotechnology, bioengineering and biomedicine. Addressing this twofold character of the information of life, this thesis contributes to the Computing Life side by elucidating aspects of chromatin, the nucleo-protein structure of DNA in the cell. As a computation and storage layer, chromatin offers a high degree of plasticity to integrate (environmental) signals into DNA-based regulation and allows information propagation according to epigenetic principles. In this work, I present results on the complexity of chromatin modifications and reveal their impact on gene activity and related cellular functions. I further discuss a bioinformatics pipeline I developed that was applied for genome-wide chromatin profiling and transcriptome analysis. From the perspective of Living Computers, I address the question of information encoding by biomolecules and draw on principles of epigenetics to devise a model for an RNA interference-based equivalent of the electronic flip-flop, one of the fundamental elements of digital circuits. In particular, I focus on the digital-switch abstraction of the flip-flop, as it is the pivotal idea in both the electronic and biomolecular worlds that connects, and at the same time decouples, an underlying physical process and the abstract representation of information. This work contributes to elucidating the computational principles of the RNA interference machinery and suggests novel ideas for universal memory units in biomolecular computing. By juxtaposing the natural and artificial perspectives, this thesis attempts to enhance our understanding of epigenetic information processing in the cell and its capacity for biocomputing applications.
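The digital-switch abstraction of the flip-flop that the abstract centers on can be stated as plain Boolean logic. The sketch below captures only the set-reset behaviour, not the RNA-interference biochemistry the thesis maps it onto:

```python
def sr_latch(state, s, r):
    """One update of a set-reset flip-flop: the digital-switch
    abstraction the thesis maps onto RNA-interference dynamics
    (plain Boolean logic here, not a biochemical model)."""
    if s and not r:
        return True      # set: drive the output high
    if r and not s:
        return False     # reset: silence the output
    return state         # hold: bistability preserves the stored bit

state = False
state = sr_latch(state, s=True, r=False)    # write a 1
held = sr_latch(state, s=False, r=False)    # no inputs: memory holds
```

The hold branch is the essential property: the stored bit persists in the absence of inputs, which is exactly what a biomolecular memory unit must reproduce.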

A complex analysis based derivation of multigrid smoothing factors of lexicographic Gauss-Seidel
Hocking, Laird Robert
DOI : 10.14288/1.0052104
URI : http://hdl.handle.net/2429/37012
Degree : Master of Science - MSc
Graduation Date : 2011-11

Rising motion controllers for physically simulated characters
Jones, Benjamin James
DOI : 10.14288/1.0052110
URI : http://hdl.handle.net/2429/36261
Degree : Master of Science - MSc
Graduation Date : 2011-11

Staged learning of agile motor skills
Karpathy, Andrej
DOI : 10.14288/1.0051435
URI : http://hdl.handle.net/2429/34643
Degree : Master of Science - MSc
Graduation Date : 2011-11

A machine learning perspective on predictive coding using PAQ8 and new applications
Knoll, Byron
DOI : 10.14288/1.0052088
URI : http://hdl.handle.net/2429/35846
Degree : Master of Science - MSc
Graduation Date : 2011-11

Biologically motivated controllers for robotic eyes
Lesmana, Martin
DOI : 10.14288/1.0052108
URI : http://hdl.handle.net/2429/37015
Degree : Master of Science - MSc
Graduation Date : 2011-11

Improving the learnability of mobile devices for older adults
Leung, Rock Anthony
DOI : 10.14288/1.0052115
URI : http://hdl.handle.net/2429/36391
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

Mobile computing devices, such as smart phones, offer benefits that may be especially valuable to older adults (age 65+). However, older adults have been shown to have difficulty learning to use these devices, which is a barrier to technology adoption. The main goal of the research reported in this dissertation was to investigate three promising design approaches – increasing the interpretability of graphical icons, incorporating a multi-layered interface, and augmenting the mobile device's display – to determine whether each can improve the learnability of mobile devices for older adults. We involved both older and younger adults in our studies to uncover benefits unique to older adults. In our investigation of graphical icons, we conducted an experiment to determine which icon characteristics affect initial icon interpretability for older adults. We found that icon interpretability can be improved for older adults by reducing the semantic distance between the objects depicted in the icon and the icon's function, and by labelling icons. In our investigation of multi-layered interfaces, we prototyped a two-layer smart phone address book and conducted an experiment to assess its learnability over four learning phases. We found that the multi-layered interface, compared to a non-layered full-functionality control interface, provided greater benefits for older participants than for younger participants in terms of faster task completion times during initial learning, lower perceived interface complexity, and greater interface preference for learning. In our investigation of augmenting a mobile device display with a larger display to help older adults learn new devices, we conducted a comprehensive survey of older adults' learning needs and preferences. Based on the survey findings, we designed and prototyped Help Kiosk, an augmented display system for helping older adults to learn to use a smart phone. An informal evaluation found preliminary evidence that Help Kiosk may be able to assist older adults in performing new mobile phone tasks. Through these three investigations, our research identified and validated design approaches that researchers and developers can use to improve the learnability of mobile devices for older adults, which should increase the chances of technology adoption.

Numerical solution of the time-harmonic Maxwell equations and incompressible magnetohydrodynamics problems
Li, Dan
DOI : 10.14288/1.0051989
URI : http://hdl.handle.net/2429/30314
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

The goal of this thesis is to develop efficient numerical solvers for the time-harmonic Maxwell equations and for incompressible magnetohydrodynamics problems. The thesis consists of three components. In the first part, we present a fully scalable parallel iterative solver for the time-harmonic Maxwell equations in mixed form with small wave numbers. We use the lowest order Nedelec elements of the first kind for the approximation of the vector field and standard nodal elements for the Lagrange multiplier associated with the divergence constraint. The corresponding linear system has a saddle point form, with inner iterations solved by preconditioned conjugate gradients. We demonstrate the performance of our parallel solver on problems with constant and variable coefficients with up to approximately 40 million degrees of freedom. Our numerical results indicate very good scalability with the mesh size, on uniform, unstructured and locally refined meshes. In the second part, we introduce and analyze a mixed finite element method for the numerical discretization of a stationary incompressible magnetohydrodynamics problem, in two and three dimensions. The velocity field is discretized using divergence-conforming Brezzi-Douglas-Marini (BDM) elements and the magnetic field is approximated by curl-conforming Nedelec elements. Key features of the method are that it produces exactly divergence-free velocity approximations, and that it correctly captures the strongest magnetic singularities in non-convex polyhedral domains. We prove that the energy norm of the error is convergent in the mesh size in general Lipschitz polyhedra under minimal regularity assumptions, and derive nearly optimal a-priori error estimates for the two-dimensional case. We present a comprehensive set of numerical experiments, which indicate optimal convergence of the proposed method for two-dimensional as well as three-dimensional problems. 
Finally, in the third part we investigate preconditioned Krylov iterations for the discretized stationary incompressible magnetohydrodynamics problems. We propose a preconditioner based on efficient preconditioners for the Maxwell and Navier-Stokes sub-systems. We show that many of the eigenvalues of the preconditioned system are tightly clustered, and hence rapid convergence is achieved. Our numerical results show that this approach performs quite well.
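The inner solves mentioned in the first part use preconditioned conjugate gradients. The PCG skeleton looks as follows; for illustration it is applied to a 1-D Laplacian with a simple Jacobi (diagonal) preconditioner, not the block preconditioners the thesis develops:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive
    definite A; M_inv applies the inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p # new search direction
        rz = rz_new
    return x

# 1-D Laplacian (SPD, tridiagonal) with a Jacobi preconditioner.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
```

The quality of `M_inv` is the whole game: the tighter the preconditioned eigenvalues cluster, the fewer iterations PCG needs, which is the property the thesis establishes for its Maxwell and Navier-Stokes sub-preconditioners.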

LePlaza : an event-centered social networking platform with location based personalized recommendation service
Lin, Cong
DOI : 10.14288/1.0052105
URI : http://hdl.handle.net/2429/36796
Degree : Master of Science - MSc
Graduation Date : 2011-11

Bayesian optimization for adaptive MCMC
Mahendran, Nimalan
DOI : 10.14288/1.0052032
URI : http://hdl.handle.net/2429/30636
Degree : Master of Science - MSc
Graduation Date : 2011-05

Scalability of communicators in MPI
Mir Taheri, Seyed M.
DOI : 10.14288/1.0051999
URI : http://hdl.handle.net/2429/33128
Degree : Master of Science - MSc
Graduation Date : 2011-05

Breaking up is hard to do : security and functionality in a commodity hypervisor
Nanavati, Mihir Sudarshan
DOI : 10.14288/1.0052016
URI : http://hdl.handle.net/2429/35591
Degree : Master of Science - MSc
Graduation Date : 2011-11

Automating meta-algorithmic analysis and design
Nell, Christopher Warren
DOI : 10.14288/1.0052099
URI : http://hdl.handle.net/2429/38247
Degree : Master of Science - MSc
Graduation Date : 2011-11

Transport level features for commodity clusters
Penoff, Bradley Thomas
DOI : 10.14288/1.0052018
URI : http://hdl.handle.net/2429/35395
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

There is a need for systems that provide additional processing to extract useful information from growing amounts of data. High Performance Computing (HPC) techniques use large clusters built from commodity hardware and software to provide the necessary computation when a single machine does not suffice. In addition to the increase in data, there have been other architectural changes, such as the advent of multicore and the presence of multiple networks on a single compute node, yet the commodity transport protocols in use have not adapted. It is therefore an opportune time to revisit the question of which transport features are necessary to best support today's applications. We use the Message Passing Interface (MPI), popular in HPC, to provide support for large-scale parallel applications. We propose transport-protocol features to overcome the reliability, performance, and design-simplicity problems existing in Ethernet-based commodity clusters. We use the Stream Control Transmission Protocol (SCTP) as a vehicle to implement tools having the proposed transport features for MPI. We develop several SCTP-based MPI implementations and a full-featured userspace SCTP stack, and enable the execution of unmodified MPI programs over a simulated network and SCTP implementation. The tools themselves provide the HPC and networking communities the means to utilize improved transport features for MPI by way of SCTP. The tools developed in this thesis are used to show that the proposed transport features improve the performance, reliability, and design simplicity of MPI applications running on Ethernet-based cluster systems constructed from commodity components.

Modular verification of shared-memory concurrent system software
Rakamarić, Zvonimir
DOI : 10.14288/1.0052017
URI : http://hdl.handle.net/2429/32572
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

Software is large, complex, and error-prone. According to the US National Institute of Standards and Technology, software bugs cost the US economy an estimated $60 billion each year. The trend in hardware design of switching to multi-core architectures makes software development even more complex. Cutting software development costs and ensuring higher reliability of software is of global interest and a grand challenge. This is especially true of the system software that is the foundation beneath all general-purpose application programs. The verification of system software poses particular challenges: system software is typically written in a low-level programming language with dynamic memory allocation and pointer manipulation, and it is also highly concurrent, with shared-memory communication being the main concurrent programming paradigm. Available verification tools usually perform poorly when dealing with these challenges. This thesis addresses these problems by enabling precise and scalable verification of low-level, shared-memory, concurrent programs. The main contributions concern the interrelated concepts of memory, modularity, and concurrency. First, because programs use huge amounts of memory, memory is usually modeled very imprecisely in order to scale to big programs. This imprecise modeling renders most tools almost useless in the memory-intensive parts of code. This thesis describes a scalable, yet precise, memory model that offers on-demand precision only when necessary. Second, modularity is the key to scalability, but it often comes at a price: a user must manually provide module specifications, making the verification process more tedious. This thesis proposes a lightweight technique for automatically inferring an important family of specifications to make the verification process more automatic. Third, the number of program behaviors explodes in the presence of concurrency, greatly increasing the complexity of the verification task; the explosion is especially severe with shared-memory concurrency. The thesis presents a static context-bounded analysis that combines a number of techniques to successfully solve this problem. We have implemented the above contributions in the verification tools developed as part of this thesis. We have applied the tools to real-life system software, and we are already finding critical, previously undiscovered bugs.

Domain adaptation for summarizing conversations
Sandu, Oana
DOI : 10.14288/1.0051250
URI : http://hdl.handle.net/2429/33932
Degree : Master of Science - MSc
Graduation Date : 2011-05

Guarantees concerning geometric objects with imprecise points
Sember, Jeffery
DOI : 10.14288/1.0052098
URI : http://hdl.handle.net/2429/38084
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

Traditional geometric algorithms are often presented as if input imprecision does not exist, even though it is often unavoidable. We argue that in some cases, it may be desirable for geometric algorithms to treat this imprecision as an explicit component of the input, and to reflect this imprecision in the output. Starting with three problems from computational geometry whose inputs are planar point sets (Voronoi diagrams, convex hulls, and smallest bounding discs), we recast these as problems where each input point's location is imprecise, but known to lie within a particular region of uncertainty. Where algorithms to solve each of the original problems produce a single geometric object as output, the algorithms that we present typically produce either guaranteed or possible output objects. A guaranteed object represents qualities that can be guaranteed for every point set that is consistent with the uncertain regions, and a possible object represents qualities that exist for at least one such point set. By dealing with input imprecision explicitly, these guaranteed and possible objects can represent a more accurate view of what can be reliably inferred from the input data.
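The distinction between guaranteed and possible outputs can be made concrete: a guaranteed enclosing disc must contain every point set consistent with the uncertainty regions. The sketch below produces such a disc; it is a simple feasible disc, not the optimal one the thesis's algorithms compute, and the representation of regions as discs is an illustrative choice:

```python
from math import hypot

def guaranteed_enclosing_disc(regions):
    """Given imprecise points as uncertainty discs (cx, cy, r), return a
    disc guaranteed to enclose EVERY point set consistent with the input.
    Center: centroid of region centers; radius: just large enough to
    cover the farthest region entirely."""
    cx = sum(x for x, _, _ in regions) / len(regions)
    cy = sum(y for _, y, _ in regions) / len(regions)
    rad = max(hypot(x - cx, y - cy) + r for x, y, r in regions)
    return cx, cy, rad

# Three imprecise points with different amounts of uncertainty.
regions = [(0, 0, 0.5), (4, 0, 0.25), (2, 3, 1.0)]
cx, cy, rad = guaranteed_enclosing_disc(regions)
# Any realization of the imprecise points lies within rad of (cx, cy).
```

A "possible" smallest disc would instead pick one favourable realization inside each region, which is why guaranteed and possible objects bracket what can be reliably inferred from imprecise input.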

The model manager: a multidimensional conceptual modeling tool in the CIM framework
Shakya, Dibesh
DOI : 10.14288/1.0052084
URI : http://hdl.handle.net/2429/31715
Degree : Master of Science - MSc
Graduation Date : 2011-05

Body-centric and shadow-based interaction for large wall displays
Shoemaker, Garth
DOI : 10.14288/1.0052025
URI : http://hdl.handle.net/2429/30533
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

We are entering an era in human-computer interaction where new display form factors, including large displays, promise to efficiently support an entire class of tasks that were not properly supported by traditional desktop computing interfaces. We develop a "body-centric" model of interaction appropriate for use with very large wall displays. We draw on knowledge of how the brain perceives and operates in the physical world, including the concepts of proprioception, interaction spaces, and social conventions, to drive the development of novel interaction techniques. The techniques we develop include an approach for embodying the user as a virtual shadow on the display, which is motivated by physical shadows and their affordances. Other techniques include methods for selecting and manipulating virtual tools, data, and numerical values by enlisting different parts of the user's body, methods for easing multi-user collaboration by exploiting social norms, and methods for mid-air text input. We then present a body-centric architecture for supporting the implementation of interaction techniques such as the ones we designed. The architecture maintains a computational geometric model of the entire scene, including users, displays, and any relevant physical objects, which a developer can query in order to develop novel interaction techniques or applications. Finally, we investigate aspects of low-level human performance relevant to a body-centric model. We conclude that traditional models of performance, particularly Fitts' law, are inadequate when applied to physical pointing on large displays where control-display gain can vary widely, and we show that an approach due to Welford is more suitable. Our investigations provide a foundation for a comprehensive body-centric model of interaction with large wall displays that will enable a number of future research directions.
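The contrast between Fitts' law and Welford's two-part model can be sketched as a least-squares fit (a hypothetical illustration; all data values are invented, not measurements from the thesis). Welford's formulation gives distance and width separate slopes, which is what lets it absorb the decoupling introduced by widely varying control-display gain:

```python
import numpy as np

# Hypothetical pointing data: target distances D, widths W (same units),
# and movement times MT in seconds.
D = np.array([128, 256, 512, 128, 256, 512], dtype=float)
W = np.array([16, 16, 16, 64, 64, 64], dtype=float)
MT = np.array([0.60, 0.75, 0.90, 0.38, 0.55, 0.72])

# Fitts' law: MT = a + b * log2(D/W + 1) -- one slope for the combined index.
X_fitts = np.column_stack([np.ones_like(D), np.log2(D / W + 1)])
(a_f, b_f), *_ = np.linalg.lstsq(X_fitts, MT, rcond=None)

# Welford's two-part model: MT = a + b1 * log2(D) - b2 * log2(W)
# -- separate slopes for distance and width.
X_wel = np.column_stack([np.ones_like(D), np.log2(D), -np.log2(W)])
(a_w, b1, b2), *_ = np.linalg.lstsq(X_wel, MT, rcond=None)
```

When `b1` and `b2` diverge substantially, the single-slope Fitts model cannot fit the data well, which is the situation the dissertation reports for physical pointing on large displays.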

LIVESMOBILE : extending the LIVES system to support SMS and MMS for mobile learning
Singh, Kuljeet
DOI : 10.14288/1.0052046
URI : http://hdl.handle.net/2429/30756
Degree : Master of Science - MSc
Graduation Date : 2011-05

Towards improved functionality and performance of intrusion detection systems
Singh, Sunjeet
DOI : 10.14288/1.0052037
URI : http://hdl.handle.net/2429/30978
Degree : Master of Science - MSc
Graduation Date : 2011-05

A multi-display collaborative urban planning system with a federated architecture
Su, Tao
DOI : 10.14288/1.0052096
URI : http://hdl.handle.net/2429/37747
Degree : Master of Science - MSc
Graduation Date : 2011-11

Strand-based musculotendon simulation of the hand
Sueda, Shinjiro
DOI : 10.14288/1.0052030
URI : http://hdl.handle.net/2429/30506
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-05

This dissertation develops a framework for modelling biomechanical systems, with special focus on the muscles, tendons, and bones of the human hand. Two complementary approaches for understanding the functions of the hand are developed: the strand simulator for computer modelling, and an imaging apparatus for acquiring a rich data set from cadaver hands. Previous biomechanical simulation approaches, based on either lines-of-force or solid mechanics models, are not well-suited for the hand, where multiple contact constraints make it difficult to route muscles and tendons effectively. In lines-of-force models, wrapping surfaces are used to approximate the curved paths of tendons and muscles near joints. These surfaces affect only the kinematics, and not the dynamics, of musculotendons. In solid mechanics models, the 3D deformation of muscles can be fully accounted for, but these models are difficult to create and expensive to simulate; moreover, the fibre-like properties of muscles are not directly represented and must be added on as auxiliary functions. Neither of these approaches properly handles both the dynamics of the musculotendons and the complex routing constraints. We present a new, strand-based approach, capable of handling the coupled dynamics of muscles, tendons, and bones through various types of routing constraints. The functions of the hand can also be studied from the analysis of data obtained from a cadaver hand. We present a hardware and software setup for scanning a cadaver hand that is capable of simultaneously obtaining the skeletal trajectory, tendon tension and excursion, and tendon marker motion. We finish with a preliminary qualitative comparison of a simulation model of the index finger with real-world data acquired from an ex vivo specimen, using the strands framework.

Resilience of wireless sensor networks
Tseng, Kuan-Chieh Robert
DOI : 10.14288/1.0051193
URI : http://hdl.handle.net/2429/33713
Degree : Master of Science - MSc
Graduation Date : 2011-05

Background subtraction with a pan/tilt camera
Tsinko, Egor
DOI : 10.14288/1.0052028
URI : http://hdl.handle.net/2429/30386
Degree : Master of Science - MSc
Graduation Date : 2011-05

Robust feature selection for large scale image retrieval
Turcot, Panu James
DOI : 10.14288/1.0051958
URI : http://hdl.handle.net/2429/28474
Degree : Master of Science - MSc
Graduation Date : 2011-05

Physics-based animation of primate locomotion
Wang, Suwen
DOI : 10.14288/1.0052101
URI : http://hdl.handle.net/2429/37098
Degree : Master of Science - MSc
Graduation Date : 2011-11

Computational plenoptic image acquisition and display
Wetzstein, Gordon
DOI : 10.14288/1.0052103
URI : http://hdl.handle.net/2429/37674
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

Recent advances in camera technology, computing hardware, and optical fabrication have led to the emergence of computational photography, a field exploring the joint design of optical light modulation and computational processing. While conventional cameras record two-dimensional images, the "ultimate" computational camera would capture all visual information with a single shot. The amount of control over such high-dimensional data is unprecedented: dynamic range, depth of field, focus, colour gamut, and time scale of a photograph can be interactively controlled in post-processing. Required visual properties include the colour spectrum as well as spatial, temporal, and directional light variation: the plenoptic function. Unfortunately, digital sensors optically integrate over the plenoptic dimensions; most of the desired information is irreversibly lost in the process. We explore the multiplexed acquisition of the plenoptic function in this thesis. For this purpose, we introduce a mathematical framework that models plenoptic light modulation and corresponding computational processing. This framework not only allows us to evaluate and optimize the optical components of computational cameras, but also subsequent reconstructions. The combined design of optical modulation and computational processing is not only useful for photography; displays benefit from similar ideas. Within this scope, we propose multi-layer architectures and corresponding optimization schemes for glasses-free 3D display. Compared to conventional automultiscopic displays, our devices optimize brightness, resolution, and depth of field while preserving thin form factors. In a different application, adaptive coded apertures are introduced to projection displays as next-generation auto-iris systems. Combined with computational processing that exploits limitations of human perception, these systems increase the depth of field and temporal contrast of conventional projectors.
With computational optics, integrated into sunglasses or car windshields, the capabilities of the human visual system can be extended. By optically modulating perceived intensities and colours, we demonstrate applications to contrast manipulation, preattentive object detection, and visual aids for the colour blind. Finally, we introduce computational probes as high-dimensional displays designed for computer vision applications, rather than for direct view. These probes optically encode refraction caused by transparent phenomena into observable changes in colour and intensity. Novel coding schemes enable single-shot reconstructions of transparent, refractive objects.
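The multiplexing-plus-reconstruction pattern the abstract describes can be sketched as a linear inverse problem (a toy illustration, not the thesis framework; the modulation matrix here is random, whereas a real system derives it from the optics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plenoptic acquisition: the sensor optically integrates (multiplexes) a
# higher-dimensional light signal x into fewer measurements y = Phi @ x + noise.
n, m = 32, 24                           # signal dim > measurement dim
x_true = rng.standard_normal(n)
Phi = rng.standard_normal((m, n))       # hypothetical modulation matrix
y = Phi @ x_true + 0.01 * rng.standard_normal(m)

# Tikhonov-regularized reconstruction: argmin_x ||Phi x - y||^2 + lam ||x||^2.
# Regularization supplies the prior needed because the problem is
# underdetermined (more unknowns than measurements).
lam = 0.1
x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)
```

Evaluating how well a given `Phi` conditions this inversion is the kind of question a joint optics-and-processing framework is meant to answer.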

An approach to shaping away wireless interference in 802.11 traffic at different transmission rates
Wong, Jimmy Chak Ming
DOI : 10.14288/1.0052100
URI : http://hdl.handle.net/2429/37746
Degree : Master of Science - MSc
Graduation Date : 2011-11

Supporting domain heterogeneous data sources for semantic integration
Xu, Jian
DOI : 10.14288/1.0052113
URI : http://hdl.handle.net/2429/36583
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

A SEMantic Integration System (SemIS) allows a query over one database to be answered using the knowledge managed in multiple databases in the system. It does so by translating a query across the collaborative databases in which data is autonomously managed in heterogeneous schemas. In this thesis, we investigate the challenges that arise in enabling domain heterogeneous (DH) databases to collaborate in a SemIS. In such a setting, distributed databases modeled as independent data sources are pairwise mapped to form the semantic overlay network (SON) of the SemIS. We study two problems we believe are foremost to allow a SemIS to integrate DH data sources. The first problem tackled in this thesis is to efficiently organize data sources so that query answering is efficient despite the increased level of source heterogeneity. This problem is modeled as an “Acquaintance Selection” problem, and our solution helps data sources choose appropriate acquaintances to create schema mappings with, therefore allowing a SemIS to have a single-layered and flexible SON. The second problem tackled in this thesis is to allow aggregate queries to be translated across DH data sources, where objects are usually represented and managed at different granularity. We focus our study on relational databases and propose novel techniques that allow a (non-aggregate) query to be answered by aggregations over objects at a finer granularity. The new query answering framework, named “decomposition aggregation query (DAQ)” processing, integrates data sources holding information in different domains and at different granularity. New challenges are identified and tackled in a systematic way. We study query optimizations for DAQ to provide efficient and scalable query processing. The solutions for both problems are evaluated empirically using real-life and synthetic data sets. The empirical studies verify our theoretical claims and show the feasibility, applicability (for real-life applications), and scalability of the techniques and solutions.
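The essence of a decomposition aggregation query can be shown with a minimal sketch (a hypothetical illustration, not the thesis's framework; all names and figures are invented): a non-aggregate query against a coarse-grained object is answered by aggregating finer-grained objects held by a domain-heterogeneous source.

```python
# Hypothetical fine-grained source: population managed per (city, district).
district_pop = {
    ("Springfield", "North"): 40_000,
    ("Springfield", "South"): 35_000,
    ("Shelbyville", "Centre"): 60_000,
}

def city_population(city):
    """Answer the non-aggregate query 'population of <city>' by translating
    it into SUM(population) over the districts mapped to that city."""
    return sum(pop for (c, _), pop in district_pop.items() if c == city)
```

The translation step — recognizing that one source's object corresponds to a group of another source's objects and choosing the aggregation — is the part DAQ processing must perform automatically from the schema mappings.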

Projectagon-based reachability analysis for circuit-level formal verification
Yan, Chao
DOI : 10.14288/1.0052102
URI : http://hdl.handle.net/2429/37135
Degree : Doctor of Philosophy - PhD
Graduation Date : 2011-11

This dissertation presents a novel verification technique for analog and mixed-signal circuits. Analog circuits are widely used in many applications, including consumer electronics, telecommunications, and medical electronics. Furthermore, in deep sub-micron design, physical effects might undermine common digital abstractions of circuit behavior. Therefore, it is necessary to develop systematic methodologies to formally verify hardware designs using circuit-level models. We present a formal method for circuit-level verification. Our approach is based on translating verification problems to reachability analysis problems. It applies nonlinear ODEs to model circuit dynamics using modified nodal analysis. Forward reachable regions are computed from given initial states to explore all possible circuit behaviors. Analog properties are checked on all circuit states to ensure full correctness or find a design flaw. Our specification language extends LTL with continuous time and values and applies Brockett’s annuli to specify analog signals. We also introduce probability into the specification to support practical analog properties such as metastable behavior. We developed and implemented a reachability analysis tool, COHO, for a simple class of moderate-dimensional hybrid systems with nonlinear ODE dynamics. COHO employs projectagons to represent and manipulate moderate-dimensional, non-convex reachable regions. COHO solves nonlinear ODEs by conservatively approximating them as linear differential inclusions. COHO is robust and efficient. It uses arbitrary-precision rational numbers to implement exact computation and trims projectagons to remove infeasible regions. To improve performance and reduce error, several techniques are developed, including a guess-verify strategy, hybrid computation, and approximate algorithms.
The correctness and efficiency of our methods have been demonstrated by the success of verifying several circuits, including a toggle circuit, a flip-flop circuit, an arbiter circuit, and a ring-oscillator circuit proposed by researchers from Rambus Inc. Several important properties of these circuits have been verified and a design flaw was spotted during the toggle verification. During the reachability computation, we recognized new problems (e.g., stiffness) and proposed our solutions to these problems. We also developed new methods to analyze complex properties such as metastable behaviors. The combination of these methods and reachability analysis is capable of verifying practical circuits.
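The idea of conservatively enclosing nonlinear dynamics can be shown in one dimension (a minimal sketch, far simpler than COHO's projectagon machinery; the dynamics and step count are invented for illustration): bound the derivative over the current reachable interval, then advance the interval's endpoints with the extreme bounds so the result over-approximates every true trajectory.

```python
def interval_step(lo, hi, f_bounds, dt):
    """Advance a 1-D reachable interval [lo, hi] by one Euler step of size dt,
    where f_bounds(lo, hi) returns (fmin, fmax), an enclosure of the
    derivative f over the interval. Moving lo with fmin and hi with fmax
    keeps the result a sound over-approximation."""
    fmin, fmax = f_bounds(lo, hi)
    return lo + dt * fmin, hi + dt * fmax

# Example nonlinear dynamics dx/dt = -x^3: monotone decreasing on positive
# intervals, so the enclosure over [lo, hi] is [-hi**3, -lo**3].
def cubic_bounds(lo, hi):
    return -hi**3, -lo**3

lo, hi = 0.5, 1.0
for _ in range(10):
    lo, hi = interval_step(lo, hi, cubic_bounds, 0.01)
```

Over many steps the enclosure widens (the wrapping effect), which is one reason COHO trims infeasible regions and uses exact rational arithmetic to keep the over-approximation as tight as soundness allows.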
