- TR-2006-01 Preconditioners for the discretized time-harmonic Maxwell equations in mixed form, June 9, 2005 Chen Greif and Dominik Schötzau, 17 pages
We introduce a new preconditioning technique for iteratively solving linear systems arising from finite element discretizations of the mixed formulation of the time-harmonic Maxwell equations. The preconditioners are based on discrete regularization using the scalar Laplacian, and are motivated by spectral equivalence properties of the discrete operators. The analytical observations are accompanied by numerical results that demonstrate the scalability of the proposed approach.

- TR-2006-02 A Better Logic and Decision Procedure for Predicate Abstraction of Heap-Manipulating Programs, January 30, 2006 Zvonimir Rakamaric, Jesse Bingham and Alan J. Hu, 43 pages
Heap-manipulating programs (HMP), which manipulate unbounded linked data structures via pointers, are a major frontier for software model checking. In recent work, we proposed a small logic and inference-rule-based decision procedure and demonstrated their potential by verifying, via predicate abstraction, some simple HMPs. In this work, we generalize and improve our previous results to be practically useful: we allow more than a single pointer field, we permit updating the data stored in heap nodes, we add new primitives and inference rules for cyclic structures, and we greatly improve the performance of our implementation. Experimentally, we are able to verify many more HMP examples, including three small container functions from the Linux kernel. On the theoretical front, we prove NP-hardness for a small fragment of our logic, completeness of our inference rules for a large fragment, and soundness for the full logic.

- TR-2006-03 Finding a Hamiltonian cycle in the dual graph of Right-Triangulations, January 27, 2006 Viann W. Chan and William S. Evans, 16 pages
In this paper, we describe a method for refining a class of balanced bintree triangulations which maintains a Hamiltonian cycle in the dual graph. We also introduce a method for building refinable balanced bintree triangulations using two types of tiles, a diamond tile and a triangular tile.

- TR-2006-04 Presenter-on-Paper: the Camera Phone as an In-Class Educational Technology Tool, March 14, 2006 W. Tian Lim and Steven A. Wolfman, 28 pages
This report documents development work on the "Presenter-on-Paper" project for integrating cell phone cameras as a new channel for communication in the classroom. We present key project design decisions and their rationales. Our target audience includes those who are interested in replicating or extending our work, and therefore would benefit from lessons learned.

- TR-2006-05 “What I Want, Where I Want:” Reference Material Use in Tabletop Work, March 19, 2006 A. Tang and S. Fels, 4 pages
Usable digital tabletop design hinges on a deep understanding of people’s natural work practices over traditional tables. We present an ethnographic study of engineering project teams that highlights the use of reference material—artifacts not the primary product or focus of work activity, but referred to or inspected while the work activity is carried out—in tabletop work. We show how the variety of reference material forms and their role in tabletop work suggest that digital tabletop systems must recognize external artifacts and should allow reconfiguration of external work surfaces and information.

- TR-2006-06 Finding local RNA motifs using covariance models, April 03, 2006 Sohrab P. Shah and Anne E. Condon, 29 pages
We present DISCO, an algorithm to detect conserved motifs in sets of unaligned RNA sequences. Our algorithm uses covariance models (CMs) to represent motifs. We introduce a novel approach to initialise a CM using pairwise and multiple sequence alignment. The CM is then iteratively refined. We tested our algorithm on 26 data sets derived from Rfam seed alignments of microRNA (miRNA) precursors and conserved elements in the untranslated regions of mRNAs (UTR elements). Our algorithm outperformed RNAProfile and FOLDALIGN in measures of sensitivity and positive predictive value, although RNAProfile ran considerably faster. The accuracy of our algorithm was unaffected by properties of the input data, and it performed consistently under different settings of key parameters. The running time of DISCO is $O(N^2L^2W^2 + NL^3)$, where $W$ is the approximate width of the motif, $L$ is the length of the longest sequence in the input data, and $N$ is the number of sequences. Supplemental material is available at: http://www.cs.ubc.ca/~sshah/disco.

- TR-2006-08 Local Consistency in Junction Graphs for Constraint-Based Inference, March 28, 2006 L. Chang and A. K. Mackworth, 11 pages
The concept of local consistency plays a central role in constraint satisfaction and has been extended to handle general constraint-based inference (CBI) problems. We propose a family of novel generalized local consistency concepts for the junction graph representation of CBI problems. These concepts are based on a general condition that depends only on the existence and property of the multiplicative absorbing element and does not depend on the other semiring properties of CBI problems. We present several local consistency enforcing algorithms and their approximation variants. Theoretical complexity analyses and empirical experimental results for the application of these algorithms to both MaxCSP and probability inference are given. We also discuss the relationship between these local consistency concepts and message passing schemes such as junction tree algorithms and loopy message propagation.

- TR-2006-09 Shuffler: Modeling with Interchangeable Parts, March 31, 2006 V. Kraevoy, D. Julius and A. Sheffer, 9 pages
Many man-made and natural objects are easily classified into families of models with a similar part-based structure. Example families include quadrupeds, humans, chairs, and airplanes. In this paper we present Shuffler, a modeling system that automates the process of creating new models by composing interchangeable parts from different existing models within each family. Our system does not require users to perform any geometric operations; they simply select which parts should come from which input model, and the system composes the parts together. To enable this modeling paradigm, Shuffler precomputes the interchangeable parts across each input family of models by first segmenting the models into meaningful components and then computing correspondences between them. We introduce two new algorithms to perform the segmentation and to establish part correspondences that can also be used for many other applications in computer graphics.

- TR-2006-10 ArtiSynth: A Biomechanical Simulation Platform for the Vocal Tract and Upper Airway, March 01, 2006 Sidney Fels, Florian Vogt, Kees van den Doel, John E. Lloyd, Ian Stavness and Eric Vatikiotis-Bateson, 7 pages
We describe ArtiSynth, a 3D biomechanical simulation platform directed toward modeling the vocal tract and upper airway. It provides an open-source environment in which researchers can create and interconnect various kinds of dynamic and parametric models to form a complete integrated biomechanical system which is capable of articulatory speech synthesis. An interactive graphical Timeline runs the simulation and allows the temporal arrangement of input/output channels to control or observe properties of the model's components. Library support is available for particle-spring and rigid body systems, finite element models, and spline-based curves and surfaces. To date, these have been used to create a dynamic muscle-based model of the jaw, a deformable tongue model, a deformable airway, and a linear acoustics model, which have been connected together to form a complete vocal tract that produces speech and is drivable both by data and by dynamics.

- TR-2006-11 Omnidirectional Humanoid Balance Control: Multiple Strategies for Reacting to a Push, June 07, 2006 KangKang Yin and Michiel van de Panne, 6 pages
We develop and evaluate humanoid balance controllers that can recover from unexpected external perturbations of varying magnitudes in arbitrary directions. Balance strategies explored include ankle and hip strategies for in-place balance, as well as single-step, double-step, and multi-step balance recovery. Simulation results are provided for a 30 DOF humanoid.

- TR-2006-14 Captured Dynamics Data of 5 Mechanical Knobs, August 11, 2006 Colin Swindells and Karon E. MacLean, 36 pages
Torque models are presented for five mechanical knobs that were characterized using a custom-built rotary haptic camera. Non-linear least-squares fitting was used to estimate model parameters for the position, velocity, and acceleration parts of each model. Additionally, two simulated knobs were modeled to test the accuracy of the characterization algorithm.

- TR-2006-15 Integrating Gaussian Processes with Word-Sequence Kernels for Bayesian Text Categorization, August 12, 2006 Maryam Mahdaviani, Sara Forghanizadeh and Giuseppe Carenini, 8 pages
We address the problem of multi-labelled text classification using word-sequence kernels. Rather than applying them with Support Vector Machines as in previous work, we choose a classifier based on Gaussian Processes: a probabilistic non-parametric method that retains sound probabilistic semantics while overcoming the limitations of parametric methods. We present an empirical evaluation of our approach on the standard Reuters-21578 datasets.

- TR-2006-16 A Preconditioner For Linear Systems Arising From Interior-Point Optimization Methods, August 29, 2006 Tim Rees and Chen Greif, 25 pages
We investigate a preconditioning technique applied to the problem of solving linear systems arising from primal-dual interior point algorithms in linear and quadratic programming. The preconditioner has the attractive property of improved eigenvalue clustering with increased ill-conditioning of the (1,1) block of the saddle point matrix. We analyze its spectral characteristics, utilizing projections onto the null space of the constraint matrix, and demonstrate performance of the preconditioner on problems from the NETLIB and CUTEr test suites. The numerical experiments include results based on inexact inner iterations, and comparisons of the proposed techniques with constraint preconditioners.
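In generic notation (the symbols below are chosen for illustration and are not taken from the report), the saddle-point systems in question have the familiar 2-by-2 block form that arises at each interior-point iteration:

```latex
% Generic saddle-point system at an interior-point step.  H denotes the
% (1,1) block, which becomes increasingly ill-conditioned as the iterates
% approach the boundary, and A is the constraint matrix.
\begin{equation*}
  \begin{bmatrix} H & A^{T} \\ A & 0 \end{bmatrix}
  \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
  =
  \begin{bmatrix} f \\ g \end{bmatrix}
\end{equation*}
```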

- TR-2006-18 Gradient Projection for General Quadratic Programs (Replaced by TR-2007-16), September 13, 2006 Michael P. Friedlander and Sven Leyffer, 28 pages
We present a hybrid algorithm for solving large-scale quadratic programs (QPs) that is based on a combination of techniques from gradient projection, augmented Lagrangian, and filter methods. The resulting algorithm is globally and finitely convergent. The method efficiently accommodates a matrix-free implementation and is suitable for large-scale problems with many degrees of freedom. The algorithm is based on two main phases. First, gradient projection iterations approximately minimize the augmented Lagrangian function and provide an estimate of the optimal active set. Second, an equality-constrained QP is approximately minimized on this subspace in order to generate a second-order search direction. Numerical experiments on a subset of the CUTEr QP test problems demonstrate the effectiveness of the proposed approach.
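The gradient-projection phase can be illustrated on the simplest case, a bound-constrained QP (this is a minimal sketch under assumed notation; the function name, matrices, and bounds are made up for the example, and the report's hybrid method additionally uses augmented-Lagrangian and filter techniques not shown here):

```python
import numpy as np

def projected_gradient_qp(Q, c, lo, hi, steps=500):
    """Gradient projection for min 0.5*x'Qx + c'x subject to lo <= x <= hi.
    Illustrative sketch only: take a gradient step, then project onto
    the box constraints."""
    x = np.clip(np.zeros_like(c), lo, hi)
    L = np.linalg.norm(Q, 2)              # Lipschitz constant of the gradient
    for _ in range(steps):
        g = Q @ x + c                     # gradient of the quadratic
        x = np.clip(x - g / L, lo, hi)    # step, then project onto the box
    return x

Q = np.eye(2)
c = np.array([-3.0, 2.0])                 # unconstrained minimizer is (3, -2)
x = projected_gradient_qp(Q, c, lo=np.zeros(2), hi=np.ones(2))
# the box [0,1]^2 clips the minimizer to (1, 0)
```

The active set identified by the projection (here, both bounds) is what the second, equality-constrained phase of a hybrid method would then refine.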

- TR-2006-19 Routing Transient Traffic in Mobile Ad Hoc Networks, September 25, 2006 Kan Cai, Michael J. Feeley and Norman C. Hutchinson, 14 pages
Recent research shows that the traffic in public wireless networks is mostly transient and bursty. There is good reason to believe that ad hoc traffic will follow the same pattern as its popularity grows. Unfortunately, transient traffic generates route discoveries much more frequently than the well-studied long-term, constant-bit-rate traffic, causing network congestion problems for existing routing protocols. This paper describes the design of a new routing algorithm, called ECBR, that uses hybrid backbone routing in a manner that is well suited to workloads that include transient traffic. Our simulation results show that ECBR outperforms one of the main reactive algorithms (DSR). We also explain three key features of our algorithm and demonstrate their roles in substantially improving performance compared to existing backbone routing techniques.

- TR-2006-20 Understanding 802.11 Performance for Two Competing Flows (Replaced by TR-2007-09), October 12, 2006 Kan Cai, Michael J. Feeley and George Sharath J., 9 pages
- TR-2006-21 Computing nonnegative tensor factorizations, October 19, 2006 (Revised October 7, 2007) Michael P. Friedlander and Kathrin Hatz, 16 pages
Nonnegative tensor factorization (NTF) is a technique for computing a parts-based representation of high-dimensional data. NTF excels at exposing latent structures in datasets, and at finding good low-rank approximations to the data. We describe an approach for computing the NTF of a dataset that relies only on iterative linear-algebra techniques and that is comparable in cost to the nonnegative matrix factorization. (The better-known nonnegative matrix factorization is a special case of NTF and is also handled by our implementation.) Some important features of our implementation include mechanisms for encouraging sparse factors and for ensuring that they are equilibrated in norm. The complete Matlab software package is available under the GPL license.
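As a minimal illustration of iterative, purely linear-algebraic updates for nonnegative factorizations, here is the classic Lee-Seung multiplicative update for nonnegative matrix factorization, the special case mentioned in the abstract (this sketch is not the authors' Matlab package and omits their sparsity and norm-equilibration mechanisms):

```python
import numpy as np

def nmf(V, rank, iters=1000, seed=0):
    """Approximate V ~= W @ H with W, H >= 0 using multiplicative updates.
    The updates only rescale entries, so nonnegativity is preserved."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12                            # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A nonnegative matrix of rank 2 is fit closely by a rank-2 factorization.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Tensor variants replace the matrix products with the corresponding unfolded-tensor products, which keeps the per-iteration cost comparable, as the abstract notes.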

- TR-2006-23 Comparing Forward and Backward Reachability as Tools for Safety Analysis, December 19, 2006 Ian M. Mitchell, 24 pages
Using only the existence and uniqueness of trajectories for a generic dynamic system with inputs, we define and examine eight types of forward and backward reachability constructs. If the input is treated in a worst-case fashion, any forward or backward reach set or tube can be used for safety analysis, but if the input is treated in a best-case fashion only the backward reach tube always provides the correct results. Fortunately, forward and backward algorithms can be exchanged if well-posed reverse time trajectories can be defined. Unfortunately, backward reachability constructs are more likely to suffer from numerical stability issues, especially in systems with significant contraction---the very systems where forward simulation and reachability are most effective.

- TR-2006-26 Exact regularization of convex programs, November 18, 2006 Michael P. Friedlander and Paul Tseng, 25 pages
The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian (1991) involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas (1975) and by Bertsekas, Nedić, and Ozdaglar (2003). When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a "weak sharp minimum" is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the L1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of L1 regularization in finding sparse solutions.
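The exactness notion can be stated compactly; in generic notation (the symbols here are illustrative, not the paper's):

```latex
% Original convex program and its regularization:
\begin{align*}
  (\mathrm{P})\quad &\min_{x}\; f(x) \quad\text{s.t.}\quad x \in C, \\
  (\mathrm{P}_{\epsilon})\quad &\min_{x}\; f(x) + \epsilon\,\phi(x)
     \quad\text{s.t.}\quad x \in C.
\end{align*}
% The regularization is exact if every solution of (P_eps) solves (P)
% for all 0 < eps <= eps_bar.  Exactness holds iff the selection problem
\begin{equation*}
  (\mathrm{S})\quad \min_{x}\; \phi(x) \quad\text{s.t.}\quad x \in S,
  \qquad S := \operatorname*{arg\,min}\,\{\, f(x) : x \in C \,\},
\end{equation*}
% has a Lagrange multiplier, and eps_bar is inversely related to it.
```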

- TR-2006-27 Fast Marching Methods for a Class of Anisotropic Stationary Hamilton-Jacobi Equations, January 16, 2007 Ken Alton and Ian M. Mitchell, 38 pages
The Fast Marching Method (FMM) has proved to be a very efficient algorithm for solving the isotropic Eikonal equation. Because it is a minor modification of Dijkstra’s algorithm for finding the shortest path through a discrete graph, FMM is also easy to implement. In this paper we describe a new class of Hamilton-Jacobi (HJ) PDEs with axis-aligned anisotropy which satisfy a causality condition for standard finite difference schemes on orthogonal grids and can hence be solved using the FMM; the only modification required to the algorithm is in the local update equation for a node. Since our class of HJ PDEs and grids permit asymmetries, we also examine some methods of improving the efficiency of the local update that do not require symmetric grids and PDEs. This class of HJ PDEs has applications in robotic path planning, and a brief example is included. In support of this and similar applications, we also include explicit update formulas for variations of the Eikonal equation that use the Manhattan, Euclidean and infinity norms on orthogonal grids of arbitrary dimension and with variable node spacing.
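The Dijkstra-like structure of FMM can be seen in the isotropic special case (a minimal sketch with unit speed on a uniform 2D grid; the report's contribution concerns more general axis-aligned anisotropic Hamiltonians, and the grid size and variable names here are made up for the example):

```python
import heapq, math

def fmm_eikonal(n, h=1.0, speed=1.0):
    """First-order Fast Marching solve of |grad u| = 1/speed on an n-by-n
    grid with source u(0,0) = 0.  Like Dijkstra's algorithm, nodes are
    finalized in increasing order of value; only the local update differs."""
    INF = float('inf')
    u = [[INF] * n for _ in range(n)]
    u[0][0] = 0.0
    done = [[False] * n for _ in range(n)]
    heap = [(0.0, 0, 0)]
    while heap:
        d, i, j = heapq.heappop(heap)
        if done[i][j]:
            continue                      # lazy deletion of stale entries
        done[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if 0 <= x < n and 0 <= y < n and not done[x][y]:
                # smallest neighbour value along each axis (upwind values)
                a = min(u[x - 1][y] if x > 0 else INF,
                        u[x + 1][y] if x < n - 1 else INF)
                b = min(u[x][y - 1] if y > 0 else INF,
                        u[x][y + 1] if y < n - 1 else INF)
                a, b = min(a, b), max(a, b)
                hf = h / speed
                if b - a >= hf:           # one-sided update
                    new = a + hf
                else:                     # two-sided quadratic update
                    new = 0.5 * (a + b + math.sqrt(2 * hf * hf - (a - b) ** 2))
                if new < u[x][y]:
                    u[x][y] = new
                    heapq.heappush(heap, (new, x, y))
    return u

u = fmm_eikonal(11)
# along a grid axis the distance is exact: u[5][0] == 5.0;
# along the diagonal the first-order scheme overestimates sqrt(50)
```

Changing only the local update equation (the `new = ...` lines) is what admits the anisotropic class described in the abstract, since the causality condition keeps the Dijkstra-style ordering valid.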

- TR-2006-28 Highly Efficient Flooding in Mobile Ad Hoc Networks, December 20, 2006 Majid Khabbazian and Vijay K. Bhargava, 12 pages
This paper presents two efficient flooding algorithms based on 1-hop information. In the first part of the paper, we consider sender-based flooding algorithms, specifically the algorithm proposed by Liu et al., which achieves local optimality by selecting the minimum number of forwarding nodes in the lowest computational time complexity, O(n log n), where n is the number of neighbors. We show that this optimality only holds for a subclass of sender-based algorithms. We propose an efficient sender-based flooding algorithm based on 1-hop information that reduces the time complexity of computing forwarding nodes to O(n). In Liu's algorithm, n nodes are selected to forward the message in the worst case, whereas in our proposed algorithm, the number of forwarding nodes in the worst case is 11. In the second part of the paper we propose a simple and highly efficient receiver-based flooding algorithm. When nodes are randomly distributed, we prove that the probability of two neighbor nodes broadcasting the same message decreases exponentially as the distance between them decreases or as the node density increases. Using simulation, we confirm these results and show that the number of broadcasts in our proposed receiver-based flooding algorithm can be even lower than in one of the best-known approximations for the minimum number of required broadcasts.

- TR-2006-29 Cognitive Principles for Information Management: The Principles of Mnemonic Associative Knowledge, August 31, 2006 Holger Hoos, Michael Huggett and Ron Rensink, 54 pages
Information management systems improve the retention of information in large collections. As such they act as memory prostheses, implying an ideal basis in human memory models. Since humans process information by association, and situate it in the context of space and time, systems should maximize their effectiveness by mimicking these functions. Since human attentional capacity is limited, systems should scaffold cognitive efforts in a comprehensible manner. We propose the Principles of Mnemonic Associative Knowledge (P-MAK), which describes a framework for semantically identifying, organizing, and retrieving information, and for encoding episodic events by time and stimuli. Inspired by prominent human memory models, we propose associative networks as a preferred representation. Networks are ideal for their parsimony, flexibility, and ease of inspection. Networks also possess topological properties—such as clusters, hubs, and the small world—that aid analysis and navigation in an information space. Our cognitive perspective addresses fundamental problems faced by information management systems, in particular the retrieval of related items and the representation of context. We present evidence from neuroscience and memory research in support of this approach, and discuss the implications of systems design within the constraints of P-MAK’s principles, using text documents as an illustrative semantic domain.
