
2009 UBC CS Technical Report Abstracts

TR-2009-02 Towards an experimental model for exploring the role of touch, February 03, 2009 Joseph P. Hall, Jeswin Jeyasurya, Aurora Phillips, Chirag Vesuvala, Steve Yohanan and K.E. MacLean, 8 pages

In this paper we investigate the ability of a haptic device to reduce anxiety in users exposed to disturbing images, as we begin to explore the utility of haptic display in anxiety therapy. We used a within-subjects experimental design in which subjects were shown two sets of disturbing images, once with the haptic creature and once without, as well as a control condition with calming images. Subjects were connected to bio-sensors that monitored their skin conductance, heart rate and forehead corrugator muscle changes; we then used these signals to estimate each subject's arousal, which has been correlated with anxiety level. We observed a significant interaction effect on arousal when subjects held the creature in the presence of disturbing (versus calm) images. The results of this exploratory study suggest that the creature was able to reduce the level of anxiety induced in the subjects by the images. Qualitative feedback also indicated that a majority of subjects found the haptic creature comforting, supporting the results from the bio-sensor readings.

TR-2009-03 Numerically Robust Continuous Collision Detection for Dynamic Explicit Surfaces, February 20, 2009 Tyson Brochu and Robert Bridson, 5 pages

We present a new, provably robust method for continuous collision detection for moving triangle meshes. Our method augments the spatial coordinate system by one dimension, representing time. We can then apply numerically robust predicates from computational geometry to detect intersections in space-time. These predicates use only multiplication and addition, so we can determine the maximum numerical error accumulated in their computation. From this forward error analysis, we can identify and handle degenerate geometric configurations without resorting to user-tuned error tolerances.
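
For a feel of how such forward error bounds work in practice, the sketch below shows a conservative, statically filtered 2D orientation predicate in Python. It is a generic illustration in the spirit of the report, not its space-time predicates; the error constant follows Shewchuk's analysis for IEEE doubles.

    import sys

    EPS = sys.float_info.epsilon / 2.0          # unit roundoff for IEEE double
    ERR_BOUND = (3.0 + 16.0 * EPS) * EPS        # Shewchuk-style static filter constant

    def orient2d_filtered(a, b, c):
        """Sign of the 2D orientation determinant, or None if the floating-point
        result is not trustworthy and exact/degenerate handling is required."""
        detleft  = (b[0] - a[0]) * (c[1] - a[1])
        detright = (b[1] - a[1]) * (c[0] - a[0])
        det = detleft - detright
        errbound = ERR_BOUND * (abs(detleft) + abs(detright))
        if det > errbound:
            return +1        # c lies to the left of the directed segment a->b
        if det < -errbound:
            return -1        # c lies to the right
        return None          # too close to zero: fall back to exact arithmetic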

TR-2009-04 Efficient Snap Rounding in Square and Hexagonal Grids using Integer Arithmetic, February 12, 2009 Boaz Ben-Moshe, Binay K. Bhattacharya and Jeff Sember, 20 pages

In this paper we present two efficient algorithms for snap rounding a set of segments to both square and hexagonal grids. The first algorithm takes n line segments as input and generates the set of snapped segments in O((n + k) log n + |I| + |I*|) time, where k is never more than the number of hot pixels (and may be substantially less), |I| is the complexity of the unrounded arrangement I, and |I*| is the size of the multiset of snapped segment fragments. The second algorithm generates the rounded arrangement of segments in O(|I| + (|I*| + Sigma_c is(c)) log n) time, where |I*| is the complexity of the rounded arrangement I* and is(c) is the number of segments that have an intersection or endpoint in pixel row or column c. Both algorithms use simple integer arithmetic to compute the rounded arrangement by sweeping a strip of unit width through the arrangement, are robust, and are practical to implement. They improve upon existing algorithms: existing running times either include an |I| log n term or depend upon the number of segments interacting within a particular hot pixel h (is(h) and ed(h), or |h|). In contrast, our running times depend on |I| without the log n factor and are either independent of the number of segments interacting within a hot pixel (algorithm 1) or depend only on the number of segments interacting in an entire hot row or column (is(c)), which is a much coarser partition of the plane (algorithm 2).
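
For intuition, snapping a single endpoint to a square grid of pixel-sized cells needs only integer division when coordinates are given at a finer integer resolution. The sketch below is a hypothetical helper illustrating that integer arithmetic; it is not the sweep algorithms from the report, which also snap segments through every hot pixel they cross.

    def snap_to_square_grid(x, y, pixel_size):
        """Snap a point with integer coordinates (at sub-pixel resolution) to the
        centre of the containing grid pixel, using only integer arithmetic."""
        # Integer division locates the pixel; the centre is half a pixel further.
        px = (x // pixel_size) * pixel_size + pixel_size // 2
        py = (y // pixel_size) * pixel_size + pixel_size // 2
        return px, py

    # Example: snap both endpoints of a segment to a grid of 10x10 pixels.
    segment = ((3, 27), (41, 5))
    snapped = tuple(snap_to_square_grid(x, y, 10) for x, y in segment)
    # snapped == ((5, 25), (45, 5))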

TR-2009-06 An Analysis of Spatial- and Fourier-Multiplexed Imaging, March 24, 2009 Gordon Wetzstein, Ivo Ihrke and Wolfgang Heidrich, 11 pages

Multiplexing is a common technique for encoding high-dimensional image data into a single two-dimensional image. Examples of spatial multiplexing include Bayer patterns, which encode color channels, and integral images, which encode light fields. In the Fourier domain, optical heterodyning has been used to encode light fields. In this paper, we analyze the relationship between spatial and Fourier multiplexing techniques. We develop this analysis using the example of multi-spectral imaging, and then generalize it to light fields and other properties. We also analyze the effects of sensor saturation on Fourier multiplexing techniques such as optical heterodyning, and devise a new optimization approach for recovering saturated detail.
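
As a concrete picture of spatial multiplexing, a Bayer pattern multiplies each colour channel by a periodic binary mask; in the Fourier domain that multiplication becomes a convolution with the mask's spectrum, so each channel appears as frequency-shifted copies. The small numpy sketch below only constructs the masks and is illustrative, not code from the paper.

    import numpy as np

    def bayer_masks(h, w):
        """Binary sampling masks for an RGGB Bayer pattern of size h x w."""
        rows = np.arange(h)[:, None]
        cols = np.arange(w)[None, :]
        red   = ((rows % 2 == 0) & (cols % 2 == 0)).astype(float)
        blue  = ((rows % 2 == 1) & (cols % 2 == 1)).astype(float)
        green = 1.0 - red - blue                 # the two green sites per 2x2 block
        return red, green, blue

    # A mosaicked sensor image is the sum of mask-weighted channels:
    #   mosaic = red_mask * R + green_mask * G + blue_mask * B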

TR-2009-07 Joint-sparse recovery from multiple measurements, April 14, 2009 Ewout van den Berg and Michael P. Friedlander, 19 pages

The joint-sparse recovery problem aims to recover, from sets of compressed measurements, unknown sparse matrices with nonzero entries restricted to a subset of rows. This is an extension of the single-measurement-vector (SMV) problem widely studied in compressed sensing. We analyze the recovery properties of two types of recovery algorithms. First, we show that recovery using sum-of-norm minimization cannot exceed the uniform recovery rate of sequential SMV using L1 minimization, and that there are problems that can be solved with one approach but not with the other. Second, we analyze the performance of the ReMBo algorithm [M. Mishali and Y. Eldar, IEEE Trans. Sig. Proc., 56 (2008)] in combination with L1 minimization, and show how recovery improves as more measurements are taken. From this analysis it follows that having more measurements than the number of nonzero rows does not improve the potential theoretical recovery rate.
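
For reference, the sum-of-norm recovery problem discussed above minimizes the sum of the row 2-norms of the unknown matrix subject to the measurement equations. The sketch below uses cvxpy; the choice of cvxpy is an assumption of this example, not the solver used in the report.

    import numpy as np
    import cvxpy as cp

    def sum_of_norms_recover(A, B):
        """Recover a row-sparse X from measurements B = A @ X by minimizing
        the sum of row 2-norms (an MMV analogue of L1 minimization)."""
        n = A.shape[1]
        k = B.shape[1]
        X = cp.Variable((n, k))
        objective = cp.Minimize(cp.sum(cp.norm(X, 2, axis=1)))
        problem = cp.Problem(objective, [A @ X == B])
        problem.solve()
        return X.value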

TR-2009-08 Learning a contingently acyclic, probabilistic relational model of a social network, April 06, 2009 Peter Carbonetto, Jacek Kisynski, Michael Chiang and David Poole, 14 pages

We demonstrate through experimental comparisons that modeling relations in a social network with a directed probabilistic model provides a viable alternative to the standard undirected graphical model approach. Our model incorporates special latent variables to guarantee acyclicity. We investigate the inference and learning challenges entailed by our approach.

TR-2009-09 Revenue Monotonicity in Deterministic, Dominant-Strategy Combinatorial Auctions, April 11, 2009 Baharak Rastegari, Anne Condon and Kevin Leyton-Brown, 28 pages

In combinatorial auctions using VCG, a seller can sometimes increase revenue by dropping bidders. In this paper we investigate the extent to which this counter-intuitive phenomenon can also occur under other deterministic dominant-strategy combinatorial auction mechanisms. Our main result is that such failures of “revenue monotonicity” can occur under any such mechanism that is weakly maximal—meaning roughly that it chooses allocations that cannot be augmented to cause a losing bidder to win without hurting winning bidders—and that allows bidders to express arbitrary single-minded preferences. We also give a set of other impossibility results as corollaries, concerning revenue when the set of goods changes, false-name-proofness, and the core.
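
A tiny brute-force VCG calculator for single-minded bidders makes the phenomenon easy to reproduce: with goods {a, b} and bidders valuing {a, b} at 2, {a} at 2 and {b} at 2, VCG revenue is 0, but dropping the third bidder raises it to 2. The sketch below is illustrative only and is not code from the paper.

    from itertools import combinations

    def max_welfare(bidders):
        """Best total value over allocations to single-minded bidders with
        pairwise-disjoint bundles, computed by brute force."""
        best, best_set = 0.0, ()
        ids = list(bidders)
        for r in range(len(ids) + 1):
            for subset in combinations(ids, r):
                bundles = [bidders[i][0] for i in subset]
                if all(b1.isdisjoint(b2) for j, b1 in enumerate(bundles)
                       for b2 in bundles[j + 1:]):
                    value = sum(bidders[i][1] for i in subset)
                    if value > best:
                        best, best_set = value, subset
        return best, best_set

    def vcg_revenue(bidders):
        """Total VCG payments: each winner pays the externality it imposes."""
        welfare, winners = max_welfare(bidders)
        revenue = 0.0
        for i in winners:
            others = {j: bidders[j] for j in bidders if j != i}
            welfare_without_i, _ = max_welfare(others)
            welfare_of_others = welfare - bidders[i][1]
            revenue += welfare_without_i - welfare_of_others
        return revenue

    bidders = {1: ({"a", "b"}, 2.0), 2: ({"a"}, 2.0), 3: ({"b"}, 2.0)}
    print(vcg_revenue(bidders))                          # 0.0 with all three bidders
    print(vcg_revenue({i: bidders[i] for i in (1, 2)}))  # 2.0 after dropping bidder 3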

TR-2009-10 Promoting Collaborative Learning in Lecture Halls using Multiple Projected Screens with Persistent and Dynamic Content, April 20, 2009 Joel Lanir, Kellogg S. Booth and Steven Wolfman, 10 pages

A necessary condition for collaborative learning is shared access and control of the representations of information under discussion. Much of the teaching in higher education today is done in classroom lectures with a largely one-way flow of information from the instructor to students, often using computer slides. Persistence of the information is not under student control and there is little student-initiated interaction or dynamic use of the visual aids. We evaluated the use of MultiPresenter, a presentation system that utilizes multiple screens to afford more flexibility in the delivery of the lecture and persistence of information so students can selectively attend to information on their own terms. Data about the use of multiple screens provides insights into how MultiPresenter affects classroom interactions and students’ learning. We discuss these findings and make recommendations for extending MultiPresenter to better support symmetric collaborative learning within the context of large lecture presentations.

TR-2009-11 On the Power of Local Broadcast Algorithms, April 24, 2009 Hosna Jabbari, Majid Khabbazian, Ian Blake and Vijay Bhargava, 9 pages

There are two main approaches, static and dynamic, to broadcasting in wireless ad hoc networks. In the static approach, local algorithms determine the status (forwarding/non-forwarding) of each node proactively, based on local topology information and a globally known priority function. In this paper, we first show that local broadcast algorithms based on the static approach cannot achieve a good approximation factor to the optimum solution (an NP-hard problem). However, we show that a constant approximation factor is achievable if (relative) position information is available. In the dynamic approach, local algorithms determine the status of each node "on-the-fly" based on local topology information and broadcast state information. Using the dynamic approach, it was recently shown that local broadcast algorithms can achieve a constant approximation factor to the optimum solution when (approximate) position information is available. However, position information can simplify the problem, and in some applications it may not be practical to obtain. We therefore ask whether local broadcast algorithms based on the dynamic approach can achieve a constant approximation factor without using position information. We answer this question in the affirmative: we design a local broadcast algorithm in which the status of each node is decided "on-the-fly", and prove that the algorithm achieves both full delivery and a constant approximation to the optimum solution.

TR-2009-12 Body-Centric Interactions With Very Large Wall Displays, April 28, 2009 Garth Shoemaker, Takayuki Tsukitani, Yoshifumi Kitamura and Kellogg S. Booth, 10 pages

We explore a set of body-centric interaction techniques for very large wall displays. The techniques described include: virtual tools that are stored on a user’s own body, protocols for sharing personal information between co-located collaborators, a shadow representation of users’ bodies, and methods for positioning virtual light sources in the work environment. These techniques are important as a group because they serve to unify the virtual world and the physical world, breaking down the barriers between display space, personal body space, and shared room space. We describe an implementation of these techniques as integrated into a collaborative map viewing and editing application.

TR-2009-13 Degree-of-Knowledge: Investigating an Indicator for Source Code Authority, May 07, 2009 Thomas Fritz, Jingwen Ou and Gail C. Murphy, 10 pages

Working productively on source code as part of a large team requires a delicate balance. Ideally, a developer would thoroughly assess each change to the source code entering their development environment lest the change introduce a fault. In reality, a developer is faced with thousands of changes to source code elements entering their environment each day, forcing the developer to make choices about how often and to what depth to assess changes. In this paper, we investigate an approach to help a developer make these choices by providing an indicator of the authority with which a change has been made. We refer to our indicator of source code authority as a degree-of-knowledge (DOK), a real value that can be computed automatically for each source code element and each developer. The computation of DOK is based on authorship data from the source revision history of the project and on interaction data collected as a developer works. We present data collected from eight professional software developers to demonstrate the rate of information flow faced by developers. We also report on two experiments we conducted involving nine professional software developers to set the weightings of authorship and interaction for the DOK computation. To show the potential usefulness of the indicator, we report on three case studies. These studies considered the use of the indicator to help find experts, to help with onboarding, and to help with assessing information in changesets.
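
Purely for illustration, a degree-of-knowledge indicator of this kind can be pictured as a weighted combination of per-element authorship and interaction measures. The toy sketch below uses hypothetical names and weights; it is not the model or the weighting derived in the paper.

    def degree_of_knowledge(authorship_events, interaction_events,
                            w_authorship=1.0, w_interaction=1.0):
        """Toy DOK: a weighted combination of how much a developer authored an
        element and how much they interacted with it. Weights are hypothetical;
        the paper sets them experimentally."""
        return w_authorship * authorship_events + w_interaction * interaction_events

    # Example: an element the developer changed twice and selected/edited five times.
    dok = degree_of_knowledge(authorship_events=2, interaction_events=5)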

TR-2009-15 An Automatically Configured Modular Algorithm for Post Enrollment Course Timetabling, June 15, 2009 Chris Fawcett, Holger H. Hoos and Marco Chiarandini, 14 pages

Timetabling tasks form a widely studied type of resource scheduling problem, with important real-world applications in schools, universities and other educational settings. In this work, we focus on post-enrollment course timetabling, the problem that was covered by Track 2 of the recent 2nd International Timetabling Competition (ITC2007). Following an approach that makes strong use of automated exploration of a large design space of modular and highly parameterised stochastic local search algorithms for this problem, we produced a solver that placed third in Track 2 of ITC2007. In subsequent work, we further improved both the solver framework and the automated algorithm design procedure, and obtained a solver that achieves consistently better performance than the top-ranked solver from the competition and represents a substantial improvement in the state of the art for post-enrollment course timetabling.

TR-2009-18 On Improving Key Pre-distribution Schemes for Sensor Networks, July 24, 2009 Majid Khabbazian, Ian Blake, Vijay Bhargava and Hosna Jabbari, 9 pages

In this work, we show how to improve the resilience or the computational cost of two primary key pre-distribution schemes. First, we consider the key pre-distribution scheme proposed by Eschenauer and Gligor and its extension by Chan, Perrig and Song. We propose a modified version of their schemes and prove that it provides significantly higher resilience than the original schemes at almost no extra cost. The second part of this work deals with the key pre-distribution scheme proposed by Blom and its extension by Du, Deng, Han and Varshney. The Blom scheme and its extension offer much higher resilience than random key pre-distribution schemes at the price of higher computational cost. We show that the computational cost of the Blom scheme can be significantly reduced at the cost of a slight reduction in resilience or a small increase in memory requirements. It is expected that aspects of the techniques introduced here, suitably adapted, can be applied to other key distribution schemes to improve efficiency.
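
In the Eschenauer-Gligor style of random key pre-distribution referred to above, each sensor is preloaded with a random key ring drawn from a large key pool, and two neighbours can establish a secure link whenever their rings intersect. The simulation sketch below uses hypothetical parameters and shows only the baseline scheme, not the modifications proposed in the report.

    import random

    def key_ring(pool_size, ring_size, rng):
        """Draw a random key ring (set of key identifiers) from the pool."""
        return set(rng.sample(range(pool_size), ring_size))

    def connectivity_estimate(pool_size=10000, ring_size=75, trials=20000, seed=0):
        """Estimate the probability that two random key rings share a key."""
        rng = random.Random(seed)
        hits = sum(
            1 for _ in range(trials)
            if key_ring(pool_size, ring_size, rng) & key_ring(pool_size, ring_size, rng)
        )
        return hits / trials

    # With a pool of 10,000 keys and rings of 75 keys, two nodes share at least
    # one key with probability roughly 0.43 (exactly 1 - C(P-k, k) / C(P, k)).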

TR-2009-19 Optimization Methods for L1-Regularization, August 04, 2009 Mark Schmidt, Glenn Fung and Romer Rosales, 20 pages

In this paper we review and compare state-of-the-art optimization techniques for solving the problem of minimizing a twice-differentiable loss function subject to L1-regularization. The first part of this work outlines a variety of the approaches that are available to solve this type of problem, highlighting some of their strengths and weaknesses. In the second part, we present numerical results comparing 14 optimization strategies under various scenarios.
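
One representative member of this family of methods is proximal gradient descent (iterative soft-thresholding) for L1-regularized least squares. The numpy sketch below is a generic illustration of that idea, not necessarily one of the 14 strategies compared in the report.

    import numpy as np

    def ista(A, b, lam, step=None, iters=500):
        """Iterative soft-thresholding for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        n = A.shape[1]
        x = np.zeros(n)
        if step is None:
            # 1 / Lipschitz constant of the gradient of the smooth part.
            step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        return x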

TR-2009-20 On Efficient Replacement Policies for Cache Objects with Non-uniform Sizes and Costs, July 31, 2009 Ying Su and Laks Lakshmanan, 12 pages

Replacement policies for cache management have been studied extensively. Most policies proposed in the literature tend to be ad hoc and typically exploit the retrieval cost or latency of the objects, object sizes, popularity of requests and temporal correlations between requests. A recent paper [1] studied the caching problem and developed an optimal replacement policy C*_0 under the independent reference model (IRM), assuming non-uniform object retrieval costs. In this paper, we consider the more general setting where both object sizes and their retrieval costs are non-uniform. This setting arises in a number of applications such as web caching and view management in databases and in data warehouses. We consider static selection as a benchmark when evaluating the performance of replacement policies. Our first result is negative: no dynamic policy can achieve a better performance than the optimal static selection in terms of the long-run average metric. We also prove that a (dynamic) replacement policy attains this optimum iff the stochastic chain induced by it is irreducible. We show that previously studied optimal policies such as A_0 and C*_0 are special cases of our optimal policy. This motivates the study of static selection. For the general case we are considering, static selection is NP-complete. Let K denote the maximum cache capacity and let K′ be the sum of sizes of all objects minus the cache capacity K. We propose a polynomial-time algorithm that is both K-approximate w.r.t. the fractional optimum solution and H_{K′}-approximate w.r.t. the integral optimum solution, where H_{K′} is the K′-th Harmonic number. In addition, we develop a K-competitive dynamic policy and show that K is the best possible approximation ratio for both static algorithms and dynamic policies.
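
To make the static-selection setting concrete, the natural greedy heuristic fills the cache in decreasing order of benefit density, i.e. expected retrieval-cost savings per unit size. The sketch below is a naive illustration only and is not the approximation algorithm analyzed in the report.

    def greedy_static_selection(objects, capacity):
        """objects: list of (name, size, cost, request_probability) tuples.
        Greedily cache objects by cost*probability per unit size until full."""
        ranked = sorted(objects, key=lambda o: o[2] * o[3] / o[1], reverse=True)
        cached, used = [], 0
        for name, size, cost, prob in ranked:
            if used + size <= capacity:
                cached.append(name)
                used += size
        return cached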

TR-2009-21 Tradeoffs in the Empirical Evaluation of Competing Algorithm Designs, October 20, 2009 Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown, 27 pages

We propose an empirical analysis approach for characterizing tradeoffs between different methods for comparing a set of competing algorithm designs. Our approach can provide insight into performance variation both across candidate algorithms and across instances. It can also identify the best tradeoff between evaluating a larger number of candidate algorithm designs, performing these evaluations on a larger number of problem instances, and allocating more time to each algorithm run. We applied our approach to a study of the rich algorithm design spaces offered by three highly-parameterized, state-of-the-art algorithms for satisfiability and mixed integer programming, considering six different distributions of problem instances. We demonstrate that the resulting algorithm design scenarios differ in many ways, with important consequences for both automatic and manual algorithm design. We expect that both our methods and our findings will lead to tangible improvements in algorithm design methods.

TR-2009-22 A Box Shaped Cyclically Reduced Operator, October 23, 2009 Chen Greif and L. Robert Hocking, 25 pages

A new procedure of cyclic reduction is proposed, whereby instead of performing a step of elimination on the original Cartesian mesh using a two-color ordering and a standard 5-point or 7-point operator, we perform the decoupling step on the reduced mesh associated with one color, using non-standard operators that are better aligned with that mesh. This yields a Cartesian mesh and box-shaped 9-point (in 2D) or 27-point (in 3D) operators that are easy to deal with. Convergence analysis for multi-line and multi-plane orderings is carried out. Numerical experiments demonstrate the merits of the approach taken.

TR-2009-23 A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning, November 16, 2009 Eric Brochu, Mike Cora and Nando de Freitas, 50 pages

We present a tutorial on Bayesian optimization, a method for finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to obtain a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments: active user modelling with preferences, and hierarchical reinforcement learning. While the most common prior for Bayesian optimization is a Gaussian process, we also present random forests as an example of an alternative prior.
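
The core loop described in the tutorial can be sketched in a few dozen lines: fit a Gaussian-process posterior to the observations so far, maximize an acquisition function such as expected improvement, evaluate the objective at the chosen point, and repeat. The numpy/scipy sketch below makes simplifying assumptions (a 1-D domain, fixed kernel hyperparameters, grid search over the acquisition) and is not code from the tutorial.

    import numpy as np
    from scipy.stats import norm

    def rbf_kernel(X1, X2, length_scale=0.3):
        """Squared-exponential kernel on 1-D inputs."""
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale ** 2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        """Gaussian-process posterior mean and standard deviation at points Xs."""
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xs)
        Kss = rbf_kernel(Xs, Xs)
        Kinv = np.linalg.inv(K)
        mu = Ks.T @ Kinv @ y
        var = np.diag(Kss - Ks.T @ Kinv @ Ks)
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, sigma, best):
        """Expected-improvement acquisition for maximization."""
        z = (mu - best) / sigma
        return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    def bayesian_optimize(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(*bounds, size=n_init)
        y = np.array([f(x) for x in X])
        grid = np.linspace(*bounds, 200)
        for _ in range(n_iter):
            mu, sigma = gp_posterior(X, y, grid)
            x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
            X = np.append(X, x_next)
            y = np.append(y, f(x_next))
        return X[np.argmax(y)], y.max()

    # Example: maximize an expensive-to-evaluate 1-D function.
    best_x, best_y = bayesian_optimize(lambda x: -(x - 0.3) ** 2 + np.sin(5 * x))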

TR-2009-24 Reflections on QuestVis: A Visualization System for an Environmental Sustainability Model, November 19, 2009 Tamara Munzner, Aaron Barsky and Matt Williams, 10 pages

We present lessons learned from the iterative design of QuestVis, a visualization interface for the QUEST environmental sustainability model. The QUEST model predicts the effects of policy choices in the present using scenarios of future outcomes that consist of several hundred indicators. QuestVis treats this information as a high-dimensional dataset, and shows the relationship between input choices and output indicators using linked views and a compact multilevel browser for indicator values. A first prototype also featured an overview of the space of all possible scenarios based on dimensionality reduction, but this representation was deemed to be inappropriate for a target audience of people unfamiliar with data analysis. A second prototype with a considerably simplified and streamlined interface was created that supported comparison between multiple scenarios using a flexible approach to aggregation. However, QuestVis was not deployed because of a mismatch between the design goals of the project and the true needs of the target user community, who did not need to carry out detailed analysis of the high-dimensional dataset. We discuss this breakdown in the context of a nested model for visualization design and evaluation.

TR-2009-25 Visualizing Interactions on Distributed Tabletops, December 11, 2009 Anthony Tang, Michel Pahud and Bill Buxton, 6 pages

We describe our experiences with a tool we created to interactively visualize the collaborative interactions of groups making use of a distributed tabletop collaboration system. The tool allows us to ask and explore several questions about how users are actually interacting with and making use of the space. We briefly describe the tool, discussing our experience both in building and in using it. Finally, we describe our goals for attending this workshop.

