Technical Reports

The ICICS/CS Reading Room

2005 UBC CS Technical Report Abstracts

TR-2005-01 Fast Implementation of Lemke's Algorithm for Rigid Body Contact Simulation, January 18, 2005 John E. Lloyd, 30 pages

We present a fast method for solving rigid body contact problems with friction, based on optimizations incorporated into Lemke's algorithm for solving linear complementarity problems. These optimizations reduce the expected solution complexity in the number of contacts n from O(n^3) to nearly O(n m + m^3), where m is the number of bodies in the system. For a fixed m the expected complexity is then close to O(n). By simplifying internal computations, our method also improves numerical robustness and removes the need to explicitly compute the large matrices associated with rigid body contact problems.
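For context, each contact step in this formulation reduces to a linear complementarity problem (LCP): find z >= 0 such that w = Mz + q >= 0 and z . w = 0. The following minimal sketch (not the paper's optimized Lemke solver; the matrix and vectors are made-up toy data) simply checks whether a candidate vector satisfies the LCP conditions:

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the LCP conditions: z >= 0, w = M z + q >= 0, and z . w = 0."""
    w = M @ z + q
    return bool((z >= -tol).all() and (w >= -tol).all() and abs(z @ w) <= tol)

# A tiny two-contact example with a positive-definite matrix M.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = np.array([0.5, 0.0])     # then w = M z + q = (0, 1.5): complementary
# is_lcp_solution(M, q, z) -> True
```

Lemke's algorithm pivots through basic solutions of exactly this system; the paper's contribution is exploiting the block structure induced by the m bodies to make those pivots cheap.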

TR-2005-02 Probability and Equality: A Probabilistic Model of Identity Uncertainty, April 03, 2006 R. Sharma and David Poole, 12 pages

Identity uncertainty is the task of deciding whether two descriptions correspond to the same object. It is a difficult and important problem in real-world data analysis. It occurs whenever objects are not assigned unique identifiers, or when those identifiers may not be observed perfectly. Traditional approaches to identity uncertainty assume that the attributes in the descriptions are independent of each other given whether or not the descriptions refer to the same object. However, this assumption is often faulty. For example, in the person identity uncertainty problem -- the problem of deciding whether two descriptions refer to the same person -- the attributes "date of birth" and "last name" have the same values for twins. In this paper we discuss the identity uncertainty problem in the context of person identity uncertainty. We model the inter-dependence of the attributes, and the probabilistic relations between the observed values of attributes and their actual values, using a similarity network representation. Our approach allows queries such as "what is the distribution over the actual names of a person given the names that appear in the description of the person?" or "what is the probability that two descriptions refer to the same person?". We present results showing that our method outperforms the traditional approach to person identity uncertainty, which treats the attributes as independent of each other.

TR-2005-03 Empirical Testing of Fast Kernel Density Estimation Algorithms, May 19, 2005 Dustin Lang, Mike Klaas and Nando de Freitas, 6 pages

We present results of experiments testing the Fast Gauss Transform, Improved Fast Gauss Transform (IFGT), and Dual-Tree methods (using kd-tree and Anchors Hierarchy data structures) for fast Kernel Density Estimation (KDE). We examine the performance of these methods with respect to data set size, dimension, allowable error, and data set structure ("clumpiness"), measured in terms of CPU time and memory usage. This is the first multi-method comparison in the literature. The results are striking, challenging several claims that are commonly made about these methods. The results are useful for researchers considering fast methods for KDE problems. Along the way, we provide a corrected error bound and a parameter-selection regime for the IFGT algorithm.
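For reference, the quantity all of these methods accelerate is the naive O(NM) Gaussian KDE sum. A direct implementation (illustrative only; it is the brute-force baseline, not one of the fast methods compared in the report) looks like this:

```python
import numpy as np

def naive_gaussian_kde(sources, queries, h):
    """Direct O(N*M) Gaussian kernel density estimate.
    sources: (N, d) samples; queries: (M, d) evaluation points; h: bandwidth."""
    N, d = sources.shape
    norm = N * (np.sqrt(2.0 * np.pi) * h) ** d
    # (M, N) matrix of squared distances via broadcasting
    d2 = ((queries[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h * h)).sum(axis=1) / norm

rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 1))          # draws from N(0, 1)
queries = np.array([[0.0], [3.0]])
density = naive_gaussian_kde(samples, queries, h=0.3)
# density[0] (near the mode) should far exceed density[1] (in the tail)
```

The fast methods trade a controlled amount of error in this sum for sub-quadratic running time, which is exactly the time/error trade-off the experiments measure.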

TR-2005-04 Toward indicative discussion fora summarization, March 07, 2005 Mike Klaas, 10 pages

Summarization of electronic discussion fora is a unique challenge; techniques that work startlingly well on monolithic documents tend to fare poorly in this informal setting. Additionally, conventional techniques ignore much of the structure that has the potential to serve as a valuable feature in the summarization task. We present several novel examples of such features, including the catalyst score, which is effective at identifying salient messages without looking at their content. We also describe and evaluate NewsSum, a prototype summarization system that can efficiently generate variable-length summaries of Usenet threads.

TR-2005-06 Fostering Student Learning and Motivation: an Interactive Educational Tool for AI, March 18, 2005 Saleema Amershi, Nicole Arksey, Giuseppe Carenini, Cristina Conati, Alan Mackworth, Heather Maclaren and David Poole, 5 pages

There are inherent challenges in teaching and learning Artificial Intelligence (AI) due to the complex dynamics of the many fundamental AI concepts and algorithms. Interactive visualization tools have the potential to overcome these challenges. However, there are reservations about adopting interactive visualizations due to mixed results on their pedagogical effectiveness. Previous work has also often failed to directly assess student preferences and motivation. CIspace is a set of nine interactive visualization tools demonstrating fundamental principles in AI. The CIspace tools are currently in use in undergraduate and graduate classrooms at the University of British Columbia and around the world. In this paper, we present two experiments aimed at assessing the effectiveness of one of the tools in terms of knowledge gain and user preference. Our results provide evidence that the tool is as effective as a traditionally accepted form of learning in terms of knowledge gain, and that students significantly prefer to use the tools over traditional forms of study. These results strengthen the case for the incorporation of CIspace, and other interactive visualizations, into courses.

TR-2005-07 Empirically Efficient Verification for a Class of Infinite-State Systems, March 23, 2005 Jesse Bingham and Alan J. Hu, 19 pages

Well-structured transition systems (WSTS) are a broad and well-studied class of infinite-state systems, for which the problem of verifying the reachability of an upward-closed set of error states is decidable (subject to some technicalities). Recently, Bingham proposed a new algorithm for this problem, but applicable only to the special cases of broadcast protocols and Petri nets. The algorithm exploits finite-state symbolic model checking and was shown to outperform the classical WSTS verification algorithm on a contrived example family of Petri nets. In this work, we generalize the earlier results to handle a larger class of WSTS, which we dub "nicely sliceable", that includes broadcast protocols, Petri nets, context-free grammars, and lossy channel systems. We also add an optimization to the algorithm that accelerates convergence. In addition, we introduce a new reduction that soundly converts the verification of parameterized systems with unbounded conjunctive guards into a verification problem on nicely sliceable WSTS. The reduction is complete if a certain decidable side condition holds. This allows us to access industrially relevant challenge problems from parameterized memory system verification. Our empirical results show that, although our new method performs worse than the classical approach on small Petri net examples, it performs substantially better on the larger examples based on real, parameterized protocols (e.g., German's cache coherence protocol, with data paths).

TR-2005-08 Nonparametric BLOG, April 06, 2005 Peter Carbonetto, Jacek Kisynski, Nando de Freitas and David Poole, 8 pages

The BLOG language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends the expressiveness of the BLOG language by adopting generative processes over function spaces --- known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By exploiting exchangeability, distributions over unknown objects and their attributes are cast as Dirichlet processes, which resolve difficulties in model selection and inference caused by varying numbers of objects. We demonstrate these concepts with applications to air traffic control and citation matching.

TR-2005-09 The Twiddler: A Haptic Teaching Tool: Low-Cost Communication and Mechanical Design, February 18, 2004 Michael Shaver and Karon E. MacLean, 42 pages

The previous haptic interface in the research lab was prohibitively expensive for distribution to a class of students and required a specialized input/output (I/O) board. To solve these problems, a new device was designed with the stipulations that its interface not require an I/O board, that it have half the power of the previous device, and that it cost less than $400 CDN. The resulting design is called the Twiddler, a single-degree-of-freedom rotary haptic device consisting of an electronic box and an electric DC motor. The electronic box reads the current rotational position of the motor and sends it to the host PC through the parallel port. The algorithm computing the output force command as a function of position is easily accessed and changed on the host PC, making prototyping and development simple. The host sends the command through the parallel port back to the electronic box, where it is converted to a motor driving signal and sent to the motor. This loop runs every millisecond, so that reliable haptic forces can be simulated on the rotational axis of the motor. The parts for the Twiddler cost approximately $400, and the peak torque output is approximately 0.04 Nm (six oz-in). The software, mechanical, and electrical designs are freely available for reproduction.
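The kind of force-vs-position mapping the abstract mentions can be sketched as follows. This is a hypothetical illustration of the rendering idea (the function name and detent count are invented; on the real device this computation runs in the 1 kHz host loop and the result is sent back over the parallel port):

```python
import math

def detent_torque(theta, n_detents=12, peak_torque=0.04):
    """Sinusoidal torque-vs-angle curve producing n_detents stable detents
    per revolution; peak_torque (Nm) matches the Twiddler's approximate
    maximum output of 0.04 Nm."""
    return -peak_torque * math.sin(n_detents * theta)

# At a detent center the torque is zero; a small displacement produces a
# restoring torque that pushes the knob back toward the center:
# detent_torque(0.0) == 0.0, detent_torque(0.01) < 0 < detent_torque(-0.01)
```

Because the sign of the torque always opposes small displacements from the zero crossings, the knob "clicks" into those angles, which is the detent feeling described above.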

TR-2005-10 Generalized Constraint-Based Inference, May 11, 2005 Le Chang and Alan K. Mackworth, 21 pages

Constraint-Based Inference (CBI) is a unified framework that subsumes many practical problems in different research communities. These problems include probabilistic inference, decision-making under uncertainty, constraint satisfaction, propositional satisfiability, decoding problems, and possibility inference. Recently, researchers have presented various unified representation and algorithmic frameworks for CBI problems in their fields, based on the increasing awareness that these problems share common features in representation and essentially identical inference approaches. As the first contribution of this paper, we explicitly use the semiring concept to generalize various CBI problems into a single formal representation framework that provides a broader coverage of the problem space based on the synthesis of existing generalized frameworks. Second, the proposed semiring-based unified framework is also a single formal algorithmic framework that provides a broader coverage of both exact and approximate inference algorithms, including variable elimination, junction tree, and loopy message propagation methods. Third, we discuss inference algorithm design and complexity issues. Finally, we present a software toolkit named the Generalized Constraint-Based Inference Toolkit in Java (GCBIJ) as the last contribution of this paper. GCBIJ is the first concrete software toolkit that implements the abstract semiring approach to unify the CBI problem representations and the inference algorithms. The discussion and the experimental results based on GCBIJ show that the generalized CBI framework is a useful tool for both research and applications.
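The semiring abstraction can be made concrete with a toy variable-elimination routine: the same code computes a sum-product quantity (a partition function) or a max-product quantity (the value of the best assignment) depending only on which (add, mul) pair is supplied. This is an illustrative sketch of the idea, not the GCBIJ toolkit's API; the factor values below are made up:

```python
import operator
from functools import reduce

def eliminate(factors, var, domain, add, mul):
    """Eliminate `var` from a list of factors over a commutative semiring
    (add, mul).  A factor is (scope, f), where f maps a dict of variable
    assignments to a semiring value."""
    touching = [(s, f) for s, f in factors if var in s]
    rest = [(s, f) for s, f in factors if var not in s]
    scope = tuple(sorted({v for s, _ in touching for v in s} - {var}))
    def g(assign):
        vals = []
        for x in domain:
            a = dict(assign)
            a[var] = x
            vals.append(reduce(mul, (f(a) for _, f in touching)))
        return reduce(add, vals)
    return rest + [(scope, g)]

# A two-variable chain: a unary factor on A and a pairwise factor on (A, B).
phi = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 1.0}
factors = [(('A',), lambda a: [0.5, 1.5][a['A']]),
           (('A', 'B'), lambda a: phi[(a['A'], a['B'])])]

# Sum-product semiring (+, *): eliminating every variable yields the
# partition function Z = sum over A, B of the product of factors.
f = eliminate(eliminate(factors, 'A', [0, 1], operator.add, operator.mul),
              'B', [0, 1], operator.add, operator.mul)
Z = f[0][1]({})        # 7.5

# Max-product semiring (max, *): identical code now returns the value of
# the best joint assignment.
m = eliminate(eliminate(factors, 'A', [0, 1], max, operator.mul),
              'B', [0, 1], max, operator.mul)
best = m[0][1]({})     # 4.5
```

Swapping in other semirings (e.g. min-sum for weighted CSPs, or-and for satisfiability) changes the problem solved without changing the elimination algorithm, which is the unification the paper formalizes.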

TR-2005-11 A Framework for Multiparty Communication Types, April 28, 2005 Chamath Keppitiyagama and Norman C. Hutchinson, 22 pages

Several multiparty communication paradigms, such as multicast and anycast, have been discussed in the literature and some of them have been used to build applications. There is a vast design space to be explored in implementing these communication paradigms over the wide area Internet. Ideally, application programmers should be able to use these paradigms independent of their implementation details, and implementors should be able to explore the design space. However, this is hindered by the lack of three components: a naming system to identify the paradigms, a standard API, and a system to deploy the implementations. We provide a framework to address these problems. The framework includes a model to name the communication paradigms through the notion of communication types. It also provides an API suitable for all communication types. The framework also includes a middleware that facilitates the implementation and deployment of communication types. We have implemented a wide assortment of communication types, and we demonstrate their utility and the effectiveness of the framework through some simple example applications. We also show that the overhead imposed by the middleware is minimal and that the framework facilitates the concise implementation of communication types.

TR-2005-12 Role-Based Policies to Control Shared Application Views, May 02, 2005 L. Berry, L. Bartram and K.S. Booth, 24 pages

Collaboration often relies on all group members having a shared view of a single-user application. A common situation is a single active presenter sharing a live view of her workstation screen with a passive audience, using simple hardware-based video signal projection onto a large screen or simple bitmap-based sharing protocols. This offers simplicity and some advantages over more sophisticated software-based replication solutions, but everyone has the exact same view of the application. This conflicts with the presenter's need to keep some information and interaction details private. It also fails to recognize the needs of the passive audience, who may struggle to follow the presentation because of verbosity, display clutter or insufficient familiarity with the application. Views that cater to the different roles of the presenter and the audience can be provided by custom solutions, but these tend to be bound to a particular application. In this paper we describe a general technique and implementation details of a prototype system that allows standardized role-specific views of existing single-user applications and permits additional customization that is application-specific with no change to the application source code. Role-based policies control manipulation and display of shared windows and image buffers produced by the application, providing proactive privacy protection and relaxed verbosity to meet both presenter and audience needs.

TR-2005-13 Perceiving Ordinal Data Haptically Under Workload, May 08, 2005 Anthony Tang, Peter McLachlan, Karen Lowe, Chalapati Rao Saka and Karon MacLean, 8 pages

Visual information overload is a threat to the interpretation of displays presenting large data sets or complex application environments. To combat this problem, researchers have begun to explore how haptic feedback can be used as another means for information transmission. In this paper, we show that people can perceive and accurately process haptically rendered ordinal data while under cognitive workload. We evaluated three haptic models for rendering ordinal data with participants who were performing a taxing visual tracking task. The evaluation demonstrates that information rendered by these models is perceptually available even when users are visually busy. This preliminary research has promising implications for haptic augmentation of visual displays for information visualization.

TR-2005-14 A Generalization of Generalized Arc Consistency: From Constraint Satisfaction to Constraint-Based Inference, May 11, 2005 Le Chang and Alan K. Mackworth, 15 pages

Arc consistency and generalized arc consistency are two of the most important local consistency techniques for binary and non-binary classic constraint satisfaction problems (CSPs). Based on the Semiring CSP and Valued CSP frameworks, arc consistency has also been extended to handle soft constraint satisfaction problems such as fuzzy CSP, probabilistic CSP, max CSP, and weighted CSP. This extension is based on an idempotent or strictly monotonic constraint combination operator. In this paper, we present a weaker condition for applying the generalized arc consistency approach to constraint-based inference problems other than classic and soft CSPs. These problems, including probabilistic inference and maximum likelihood decoding, can be processed using generalized arc consistency enforcing approaches. We also show that, given an additional monotonic condition on the corresponding semiring structure, some constraint-based inference problems can be approximately preprocessed using generalized arc consistency algorithms.
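In the classic binary case that all of these generalizations build on, arc consistency enforcement looks like the following AC-3 sketch (the paper's semiring-valued setting reduces to this when the combination operator is Boolean conjunction; the constraint and domains below are a made-up example):

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Delete values of x that have no supporting value of y under the
    constraint on (x, y); return True if anything was pruned."""
    allowed = constraints[(x, y)]
    pruned = {a for a in domains[x]
              if not any(allowed(a, b) for b in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

def ac3(domains, constraints):
    """AC-3: repeatedly revise arcs until every arc is consistent."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            # re-examine arcs pointing into the pruned variable
            queue.extend((z, x) for (z, w) in constraints
                         if w == x and z != y)
    return domains

# X < Y with both domains {1, 2, 3}: arc consistency prunes X=3 and Y=1.
doms = {'X': {1, 2, 3}, 'Y': {1, 2, 3}}
cons = {('X', 'Y'): lambda x, y: x < y,
        ('Y', 'X'): lambda y, x: x < y}
ac3(doms, cons)
# doms is now {'X': {1, 2}, 'Y': {2, 3}}
```

The generalized setting replaces the Boolean "has a support" test with a semiring computation over constraint values, but the propagation skeleton is the same.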

TR-2005-16 A Trust-based Model for Collaborative Intrusion Response, June 02, 2005 Kapil Singh and Norman C. Hutchinson, 6 pages

Intrusion detection systems (IDS) are quickly becoming a standard component of a network security infrastructure. Most IDS developed to date emphasize detection; response is mainly concentrated on blocking a part of the network after an intrusion has been detected. This mechanism can help in temporarily stopping the intrusion, but such a limited response means that attacks cost the attacker nothing. The idea behind our approach is to frustrate the intruder by attacking back. This requires developing a sense of trust in the network for the attacked host and establishing proof of the attack so the attack-back action can be justified. To develop this trust model, we propose a protocol that allows the attacked host to prove to the attacker's edge router that it has been attacked. The model is quite flexible, and based on the level of trust developed for the host, an appropriate countermeasure is taken. Besides attack-back, other possible responses could be blocking a part of the network and use of network puzzles to limit the attacker's access to network resources. We believe that the attack-back approach would certainly demoralize novice attackers, and even expert attackers will think twice before attacking again. In addition, the protocol prevents a host from pretending that it has been attacked. We are building a system that can handle a majority of known attacks (signature-based). We are also exploring the idea of adding a third trusted party into the system in order to provide countermeasure action for novel attacks (anomaly-based).

TR-2005-17 Improving Backbone Routing for Transient Communication in Mobile Ad Hoc Networks, July 04, 2005 Kan Cai, Michael J. Feeley and Norman C. Hutchinson, 12 pages


TR-2005-18 Hot Coupling: A Particle Approach to Inference and Normalization on Pairwise Undirected Graphs of Arbitrary Topology, July 12, 2005 Firas Hamze and Nando de Freitas, 8 pages

This paper presents a new sampling algorithm for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the notoriously difficult partition function of the graph. The algorithm fits into the framework of sequential Monte Carlo methods rather than the more widely used MCMC, and relies on constructing a sequence of intermediate distributions which get closer to the desired one. While the idea of using "tempered" proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables that are, initially, sampled exactly from a spanning tree of the variables. We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs.
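To see the shape of the method, here is the "tempered" baseline it improves upon: annealed importance sampling through a sequence of distributions indexed by a global temperature, shown on the smallest possible pairwise model (a single pair of spins, where the partition function is also known exactly). This is a generic sketch with a global temperature schedule, not the paper's pairwise-coupling construction:

```python
import math
import random

def ais_partition(J=0.8, n_particles=2000, n_steps=20, seed=0):
    """Annealed importance sampling estimate of Z = sum_x exp(J*x1*x2)
    for one pair of spins x1, x2 in {-1, +1}.  Bridges from the uniform
    distribution (beta = 0, Z0 = 4) to the target (beta = 1) through
    intermediate distributions pi_beta(x) ~ exp(beta*J*x1*x2)."""
    rng = random.Random(seed)
    betas = [k / n_steps for k in range(n_steps + 1)]
    total = 0.0
    for _ in range(n_particles):
        x = [rng.choice([-1, 1]), rng.choice([-1, 1])]   # exact pi_0 sample
        logw = 0.0
        for b_prev, b in zip(betas, betas[1:]):
            logw += (b - b_prev) * J * x[0] * x[1]       # incremental weight
            for i in (0, 1):                             # Gibbs sweep at beta=b
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * b * J * x[1 - i]))
                x[i] = 1 if rng.random() < p_plus else -1
        total += math.exp(logw)
    return 4.0 * total / n_particles                     # Z0 * mean weight

Z_hat = ais_partition()
Z_true = 2 * math.exp(0.8) + 2 * math.exp(-0.8)          # exact by enumeration
```

The paper's hot-coupling scheme replaces the global beta schedule with a sequence that switches on couplings pair by pair, starting from an exactly-sampleable spanning tree, but the importance-weighting skeleton is the same.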

TR-2005-19 A Logic and Decision Procedure for Predicate Abstraction of Heap-Manipulating Programs, September 19, 2005 Jesse Bingham and Zvonimir Rakamaric, 28 pages

An important and ubiquitous class of programs are heap-manipulating programs (HMP), which manipulate unbounded linked data structures by following pointers and updating links. Predicate abstraction has proved to be an invaluable technique in the field of software model checking; this technique relies on an efficient decision procedure for the underlying logic. The expression and proof of many interesting HMP safety properties require transitive closure predicates; such predicates express that some node can be reached from another node by following a sequence of (zero or more) links in the data structure. Unfortunately, adding support for transitive closure often yields undecidability, so one must be careful in defining such a logic. Our primary contributions are the definition of a simple transitive closure logic for use in predicate abstraction of HMPs, and a decision procedure for this logic. Through several experimental examples, we demonstrate that our logic is expressive enough to prove interesting properties with predicate abstraction, and that our decision procedure provides us with both a time and space advantage over previous approaches.
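The reachability predicate at the heart of the logic can be stated operationally. The sketch below is only an executable illustration of what reach(a, b) means on concrete heaps (the class and function names are invented); the paper's contribution is deciding formulas over such predicates symbolically, without executing the program:

```python
class Cell:
    """A heap cell with a single `next` link."""
    def __init__(self, nxt=None):
        self.next = nxt

def reaches(node, target):
    """reach(node, target): target is reachable from node by following
    zero or more next links.  Cycle-safe via a visited set."""
    seen = set()
    while node is not None and id(node) not in seen:
        if node is target:
            return True
        seen.add(id(node))
        node = node.next
    return False

c = Cell()
b = Cell(c)
a = Cell(b)          # the list a -> b -> c
# reaches(a, c) holds; reaches(c, a) does not; reaches(x, x) always holds.
```

Safety properties such as "the list head still reaches the tail after an update" quantify over predicates of exactly this form, which is why transitive closure support is essential and why its decidability is delicate.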

TR-2005-20 Coping With an Open Bug Repository, August 26, 2005 John Anvik, Lyndon Hiew and Gail C. Murphy, 5 pages

Most open source software development projects include an open bug repository---one to which users of the software can gain full access---that is used to report and track problems with, and potential enhancements to, the software system. There are several potential advantages to the use of an open bug repository: more problems with the system might be identified because of the relative ease of reporting bugs, more problems might be fixed because more developers might engage in problem solving, and developers and users can engage in focused conversations about the bugs, allowing users input into the direction of the system. However, there are also some potential disadvantages such as the possibility that developers must process irrelevant bugs that reduce their productivity. Despite the rise in use of open bug repositories, there is little data about what is stored inside these repositories and how they are used. In this paper, we provide an initial characterization of two open bug repositories from the Eclipse and Firefox projects, describe the duplicate bug and bug triage problems that arise with these open bug repositories, and discuss how we are applying machine learning technology to help automate these processes.

TR-2005-21 Theory, Software, and Psychophysical Studies for the Tactile Handheld Miniature Bimodal Device, October 23, 2005 Shannon H. Little, 35 pages

The work described in this report began as an exploratory effort to determine the basic principles and properties of the THMB device. The results of this experimentation are a description of the basic configuration and perceptual limits of the THMB device; terminology and theory to describe the output of the device; libraries that implement this theory; software that uses these libraries to interact with the THMB device; and two user studies, a speed study and a multi-dimensional scaling (MDS) study, that attempt to relate the terminology and theories developed to human perceptual limitations and capabilities.

TR-2005-22 Go with the Flow: How Users Monitor Incoming Email, September 20, 2005 Anthony Tang, Nelson Siu, Lee Iverson and Sidney Fels, 4 pages

We have only a limited understanding of how users continuously monitor and manage their incoming email flow. A series of day-long field observations uncovered three distinct strategies people use to handle their incoming email flow: glance, scan, and defer. Consequently, supporting email flow involves providing simplified views of the email inbox and mechanisms to support the revisitation of overflow messages.

TR-2005-23 Remaining Oriented During Software Development Tasks: An Exploratory Field Study, July 17, 2005 Brian S. de Alwis and Gail C. Murphy, 24 pages

Humans have been observed to become disoriented when using menu or hypertext systems. A similar phenomenon has been reported by software developers, often manifesting as a feeling of lostness while exploring a software system. To investigate this phenomenon in the context of software development, we undertook a field study, observing eight developers of the open-source Eclipse project for two hours each as they conducted their normal development work. We also interviewed two other developers who used the same tools but were working on a closed-source system. The developers did report some instances of disorientation, but it was a rare occurrence; rather, we observed strategies the developers used to remain oriented. Based on the study results, we hypothesize factors that contribute to disorientation during programming tasks, as well as factors that contribute to remaining oriented. Our results can help encode best practices for code navigation, inform the development of tools, and support the further study of orientation and disorientation in software development.

TR-2005-24 A Formal Mathematical Framework for Modeling Probabilistic Hybrid Systems, October 12, 2005 Robert St-Aubin and Alan K. Mackworth, 22 pages

The development of autonomous agents, such as mobile robots and software agents, has generated considerable research in recent years. Robotic systems, which are usually built from a mixture of continuous (analog) and discrete (digital) components, are often referred to as hybrid dynamical systems. Traditional approaches to real-time hybrid systems usually define behaviors purely in terms of determinism or sometimes non-determinism. However, this is insufficient, as real-time dynamical systems very often exhibit uncertain behaviour. To address this issue, we develop a semantic model, Probabilistic Constraint Nets (PCN), for probabilistic hybrid systems. PCN captures the most general structure of dynamic systems, allowing systems with discrete and continuous time/variables, synchronous as well as asynchronous event structures, and uncertain dynamics to be modeled in a unified framework. Based on a formal mathematical paradigm uniting abstract algebra, topology and measure theory, PCN provides a rigorous formal programming semantics for the design of hybrid real-time embedded systems exhibiting uncertainty.

TR-2005-25 Visual Mining of Power Sets with Large Alphabets, December 25, 2005 Tamara Munzner, Qiang Kong, Raymond T. Ng, Jordan Lee, Janek Klawe, Dragana Radulovic and Carson K. Leung, 10 pages

We present the PowerSetViewer visualization system for the lattice-based mining of power sets. Searching for itemsets within the power set of a universe occurs in many large dataset knowledge discovery contexts. Using a spatial layout based on a power set provides a unified visual framework at three different levels: data mining on the filtered dataset, browsing the entire dataset, and comparing multiple datasets sharing the same alphabet. The features of our system allow users to find appropriate parameter settings for data mining algorithms through lightweight visual experimentation showing partial results. We use dynamic constrained frequent set mining as a concrete case study to showcase the utility of the system. The key challenge for spatial layouts based on power set structure is handling large alphabets, because the size of the power set grows exponentially with the size of the alphabet. We present scalable algorithms for enumerating and displaying datasets containing between 1.5 and 7 million itemsets, and alphabet sizes of over 40,000.

TR-2005-26 Material Aware Mesh Deformations, November 07, 2005 Tiberiu Popa, Dan Julius and Alla Sheffer, 11 pages

We propose a novel approach to mesh deformation centered around material properties. Using these, we allow the user to achieve meaningful deformations easily with a small set of anchor triangles. Material properties, as defined here, are stiffness coefficients assigned to the mesh triangles and edges. We use these to describe the bending and shearing flexibility of the surface. By adjusting these coefficients, we provide fine continuous control over the resulting deformations. These material properties can be user-driven, defined through a simple paint-like interface, or data-driven, inferred from a sequence of sample poses. A combination of the two, in which the user refines the resulting data-driven materials, may be used to achieve more controlled results. As an alternative to skeleton-based deformation methods, our method is both simpler and more powerful, allowing varying degrees of stiffness along the surface without requiring a skeleton. Moreover, our method naturally handles non-articulated models, which are not suitable for skeleton-based deformation. The formulation is simple, requiring only two linear systems whose coefficient matrices can be pre-inverted, thus allowing the algorithm to work at interactive rates on large models. The resulting deformations are as-rigid-as-possible, subject to the material properties, so mesh details are well preserved, as seen in our results.

TR-2005-27 The Role of Prototyping Tools for Haptic Behavior Design, November 14, 2005 Colin Swindells, Evgeny Maksakov, Karon E. MacLean and Victor Chung, 8 pages

We describe key affordances required by tools for developing haptic behaviors. Haptic icon design involves the envisioning, expression and iterative modification of haptic behavior representations. These behaviors are then rendered on a haptic device. For example, a sinusoidal force vs. position representation rendered on a haptic knob would produce the feeling of detents. Our contribution is twofold. We introduce a custom haptic icon prototyper that includes novel interaction features. We then use the lessons learnt from its development, together with our experiences with many haptic devices, to present and argue for high-level design choices for such prototyping tools in general.

TR-2005-28 Co-locating Haptic and Graphic Feedback in Manual Controls, November 16, 2005 Colin Swindells, Mario J. Enriquez, Karon E. MacLean and Kellogg S. Booth, 4 pages

Based on data showing performance benefits separately for programmable versus static feedback in manual controls and for co-location of dynamic haptic/graphic controls, we hypothesized that combining these benefits would further improve context- and task-specific feedback. Two application prototypes created to explore this premise for (a) streaming media navigation and (b) information flow demonstrate the potential affordances and performance benefits for integration in manual controls. Finally, we describe two practical fabrication methods: embedding a haptic controller into an active graphic display panel, and rear projection onto a passive surface with an embedded haptic control.

TR-2005-29 Building a Haptic Language: Communication Through Touch, November 22, 2005 K. Maclean, J. Pasquero and J. Smith, 16 pages

Designing haptic signals to enrich technology interactions requires a clear understanding of the task, the user and the intricate affordances of touch. This is especially true when the haptics are not implemented as direct renderings of real world forces and textures, but as new interactions designed to convey meaning in new physical ways and support communication. The overall goal of our group's research is to provide the foundations for haptic interactions that are simple, usable and intuitive and that fit within the context of the user's life. In this paper, we describe three avenues through which our group is exploring and building a haptic language that will effectively support communication: signaling and monitoring, expressive communication and shared control. We use scenarios to illustrate where this approach could take us, and emphasize the importance of process and appropriate tools and representations.

TR-2005-30 TopoLayout: Graph Layout by Topological Features, December 02, 2005 D. Archambault, T. Munzner and D. Auber, 9 pages

We describe TopoLayout, a novel framework to draw undirected graphs based on the topological features they contain. Topological features are detected recursively, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. The final layout is drawn using an appropriate algorithm for each topological feature. A more general goal is to partition the graph into features for which there exist good layout algorithms, so in addition to strictly topological features such as trees, connected components, biconnected components, and clusters, we have a detector function to determine when High-Dimensional Embedder is an appropriate choice for subgraph layout. Our framework is the first multi-level approach to provide a phase for reducing the number of node-edge and edge-edge crossings and a phase to eliminate all node-node overlaps. The runtime and layout visual quality of TopoLayout depend on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multi-level algorithms on ten datasets with a range of connectivities and sizes, including real-world graphs of web sites, social networks, and Internet routers. TopoLayout is frequently faster or produces results of higher visual quality, and sometimes both. For example, the router dataset of about 140,000 nodes, which contains many large tree subgraphs, is drawn an order of magnitude faster with improved visual quality.

TR-2005-31 Exact regularization of linear programs, December 26, 2005 Michael P. Friedlander, 10 pages

We show that linear programs (LPs) admit regularizations that either contract the original (primal) solution set or leave it unchanged. Any regularization function that is convex and has compact level sets is allowed---differentiability is not required. This is an extension of the result first described by Mangasarian and Meyer (SIAM J. Control Optim., 17(6), pp. 745-752, 1979). We show that there always exist positive values of the regularization parameter such that a solution of the regularized problem simultaneously minimizes the original LP and minimizes the regularization function over the original solution set. We illustrate the main result using the nondifferentiable L1 regularization function on a set of degenerate LPs. Numerical results demonstrate how such an approach yields sparse solutions from the application of an interior-point method.
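The main result can be illustrated on a tiny degenerate LP whose vertices we can enumerate directly (this made-up example stands in for the paper's interior-point experiments; an LP always attains its optimum at a vertex, so exhaustive vertex comparison suffices here):

```python
import numpy as np

# A degenerate LP:  minimize x1  subject to  x1 + x2 >= 1,  0 <= x <= 2.
# Its primal solution set is the whole edge {(0, t) : 1 <= t <= 2}.
V = np.array([[0., 1.], [1., 0.], [0., 2.], [2., 0.], [2., 2.]])  # vertices
c = np.array([1.0, 0.0])

vals = V @ c
v_star = vals.min()                      # original optimal value: 0
solset = V[np.isclose(vals, v_star)]     # vertices (0, 1) and (0, 2)

# Regularize with the L1 norm (linear here, since x >= 0).  For a small
# enough positive delta, the minimizer of c'x + delta*||x||_1 still solves
# the original LP and minimizes the L1 norm over the original solution set.
delta = 0.01
x_reg = V[(V @ (c + delta)).argmin()]    # (0, 1): minimal L1 norm solution
```

The point of the theorem is that such a delta always exists, so the regularized problem selects a particular (here, sparse) element of the original solution set rather than perturbing it.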
