We examine the convergence properties of a level set algorithm designed to track evolving interfaces; in particular, its convergence on a series of two- and three-dimensional backwards reachable sets whose flow fields involve kink formation (sharp features) and, in some cases, rarefaction fans introduced by input parameters in the dynamics. The chosen examples have analytic solutions to facilitate the convergence analysis. We describe the error analysis method, the formulation of reachability in terms of a Hamilton-Jacobi equation, and our implementation of the level set method in some detail. In addition to the convergence analysis presented here, these techniques and examples could be used to validate other nonlinear reachability algorithms or other level set implementations.
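Given analytic solutions, this kind of convergence analysis typically estimates an observed order of accuracy from errors on successively refined grids. A minimal sketch (the error values below are hypothetical, not results from the paper):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Estimate the observed order of accuracy p from errors measured
    against an analytic solution on two grids, where the fine grid
    refines the coarse one by the given factor: err ~ C * h^p."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical max-norm errors at grid spacings h, h/2, h/4. Errors that
# halve with the spacing indicate first-order accuracy, the behavior one
# expects near kinks even for formally high-order schemes.
errors = [4.0e-2, 2.0e-2, 1.0e-2]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
```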
We present an approach to Bayesian learning of facial displays and gestures in interaction. Changes in the human face occur due to many factors, including communication, emotion, speech, and physiology. Most systems for facial expression analysis attempt to recognize one or more of these factors, resulting in a machine whose inputs are video sequences or static images, and whose outputs are, for example, basic emotion categories. Our approach is fundamentally different. We make no prior commitment to a particular recognition task. Instead, we consider that the meaning of a facial display for an observer is contained in its relationship to actions and outcomes. Agents must distinguish facial displays according to their affordances, or how they help an agent to maximize utility. To this end, our system learns relationships between the movements of a person's face, the context in which they are acting, and a utility function. The model is a partially observable Markov decision process, or POMDP. The video observations are integrated into the POMDP using a dynamic Bayesian network, which creates spatial and temporal abstractions amenable to decision making at the high level. The parameters of the model are learned from training data using an a-posteriori constrained optimization technique based on the expectation-maximization algorithm. One of the most significant advantages of this type of learning is that it does not require labeled data or expert knowledge about which behaviors are significant in a particular interaction. Rather, the learning process discovers clusters of facial motions and their relationship to the context automatically. As such, it can be applied to any situation in which non-verbal gestures are purposefully used in a task. We present an experimental paradigm in which we record two humans playing a collaborative game, or a single human playing against an automated agent, and learn the human behaviors. We use the resulting model to predict human actions.
We show results on three simple games.
The index of an XML document typically consists of a set of lists of node references. For each node type, a list gives the references of all nodes of this type, in prefix traversal order. A twig pattern query is answered by the list of all occurrences of a labeled tree structure, and is computed faster using an index. While previous results propose index structures and algorithms which answer twig pattern queries with complexity linear in the size of the document, we propose an index which allows twig pattern queries to be answered with a number of comparisons logarithmic in the size of the document. Since answering twig pattern matching queries efficiently necessitates a sophisticated encoding of the output, we present our technique on two simpler problems, and we claim that the technique can be applied to answer twig pattern queries using a logarithmic number of comparisons as well.
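To illustrate how a logarithmic number of comparisons is possible at all, consider the common interval labelling of tree nodes: with each type's start labels kept sorted in prefix order, the descendants of a given node form a contiguous run that two binary searches locate. This sketch uses that classical labelling, not the paper's full encoding:

```python
from bisect import bisect_left, bisect_right

# Hypothetical interval labels: each node carries (start, end) from a prefix
# traversal, and u is an ancestor of v iff u.start < v.start and v.end <= u.end.
# Per node type, the starts are sorted, so descendants of a node among that
# type form a contiguous run found in O(log n) comparisons.

def descendants_of(starts_of_type, node_start, node_end):
    lo = bisect_right(starts_of_type, node_start)  # first start after the node's own
    hi = bisect_left(starts_of_type, node_end)     # first start outside the interval
    return starts_of_type[lo:hi]

# Toy document: <a>(0,9) containing <b>(1,4) and <b>(5,8); a second <a>(10,15)
# containing <b>(11,14).
b_starts = [1, 5, 11]
inside_first_a = descendants_of(b_starts, 0, 9)
```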
Due to recent technological advances, real-time hybrid dynamical systems are becoming ubiquitous. Most of these systems behave unpredictably and thus exhibit uncertainty. Hence, a formal framework to model systems with unpredictable behaviours is needed. We develop Probabilistic Constraint Nets (PCN), a new framework that can handle a wide range of uncertainty, whether probabilistic, stochastic or non-deterministic. In PCN, we view probabilistic dynamical systems as online constraint solvers for dynamic probabilistic constraints, and requirements specifications as global behavioural constraints on the systems. We demonstrate the power of PCN by applying it to a fully hybrid model of an elevator system which encompasses several different types of uncertainty. We present verification rules, which have been fully implemented, to perform automatic behavioural constraint verification.
With the proliferation of peer-to-peer storage, it is now possible to protect one's data at a level comparable to traditional replication systems, but at reduced cost and complexity. These systems provide the flexibility, reliability, and scalability needed to operate in present-day environments and handle present-day loads. These peer-to-peer storage systems must be able to replicate data on hosts that are trusted, secure, and available. However, recent research has shown that the traditional model, where nodes are assumed to have identical levels of trust, to behave independently, and to have similar failure modes, is incorrect. Thus, there is a need for a mechanism that automatically, correctly, and efficiently selects replica nodes from a large number of available hosts with varying capabilities and trust levels. In this paper we present an algorithm to handle node selection either for new replica groups or to replace failed replicas in a peer-to-peer replication system. We show through simulation that our algorithm maintains the interconnection topology such that the cost of recovery from a failed replica, measured by the number of messages and bandwidth, is minimized.
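As a rough illustration of the selection problem only (this is not the paper's algorithm), a greedy sketch might rank candidates by assumed trust and availability scores while penalizing the interconnection cost to replicas already chosen:

```python
def select_replicas(candidates, pair_cost, k):
    """Greedy sketch: repeatedly pick the node maximizing
    trust * availability minus the recovery cost to already-chosen
    replicas. All scores and costs here are hypothetical inputs."""
    chosen = []
    pool = dict(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda n: pool[n][0] * pool[n][1]
                   - sum(pair_cost[frozenset((n, c))] for c in chosen))
        chosen.append(best)
        del pool[best]
    return chosen

# Hypothetical nodes: node id -> (trust, availability), both in [0, 1]
nodes = {"a": (0.9, 0.95), "b": (0.8, 0.9), "c": (0.9, 0.6)}
costs = {frozenset(p): 0.1 for p in (("a", "b"), ("a", "c"), ("b", "c"))}
group = select_replicas(nodes, costs, 2)
```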
In this paper we present a new framework for computing backward reachability from an upward-closed set in a class of parameterized (i.e. infinite-state) systems that includes broadcast protocols and Petri nets. In contrast to the standard approach, which performs a single least-fixpoint computation, we consecutively compute the finite-state least fixpoint for constituents of increasing size, which allows us to employ binary decision diagram (BDD)-based symbolic model checking. In support of this framework, we prove necessary and sufficient conditions for convergence and for intersection with the initial states, and provide an algorithm that uses BDDs as the underlying data structure. We give experimental results that demonstrate the existence of a Petri net for which our algorithm is two orders of magnitude faster than the standard approach, and speculate on properties that might suggest which approach to apply.
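The underlying least-fixpoint computation can be sketched explicitly on a finite transition relation; a BDD-based implementation computes the same fixpoint, but on symbolically encoded sets of states:

```python
def backward_reach(transitions, targets):
    """Least-fixpoint backward reachability on an explicit finite
    transition relation: grow the target set with predecessors until
    nothing changes. 'transitions' is a set of (src, dst) pairs."""
    reach = set(targets)
    changed = True
    while changed:
        changed = False
        for src, dst in transitions:
            if dst in reach and src not in reach:
                reach.add(src)
                changed = True
    return reach

# Toy system: 0 -> 1 -> 2, and a self-loop 3 -> 3 that cannot reach 2
edges = {(0, 1), (1, 2), (3, 3)}
```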
This document describes a toolbox of level set methods for solving time-dependent Hamilton-Jacobi partial differential equations (PDEs) in the \matlab\ programming environment. Level set methods are often used for simulation of dynamic implicit surfaces in graphics, fluid and combustion simulation, image processing, and computer vision. Hamilton-Jacobi and related PDEs arise in fields such as control, robotics, differential games, dynamic programming, mesh generation, stochastic differential equations, financial mathematics, and verification. The algorithms in the toolbox can be used in any number of dimensions, although computational cost and visualization difficulty make dimensions four and higher a challenge. All source code for the toolbox is provided as plain text in the \matlab\ m-file programming language. The toolbox is designed to allow quick and easy experimentation with level set methods, although it is not by itself a level set tutorial and so should be used in combination with the existing literature.
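For a flavor of the scheme family such a toolbox implements (an illustrative Python sketch under simple assumptions, not toolbox code), a first-order upwind timestep for the 1D advection form phi_t + v * phi_x = 0 looks like:

```python
def upwind_step(phi, v, dx, dt):
    """One first-order upwind timestep for phi_t + v * phi_x = 0 with
    constant speed v: difference against the neighbor the flow comes
    from. Boundary points are simply held (a crude boundary condition)."""
    n = len(phi)
    new = phi[:]
    for i in range(n):
        if v > 0:
            dphi = phi[i] - phi[i - 1] if i > 0 else 0.0
        else:
            dphi = phi[i + 1] - phi[i] if i < n - 1 else 0.0
        new[i] = phi[i] - dt * v * dphi / dx
    return new

# A 1D signed distance function whose zero level set (the interface)
# should move right by v * dt = 0.05 in one step.
phi = [x * 0.1 - 0.5 for x in range(11)]  # zero crossing at index 5
phi = upwind_step(phi, v=1.0, dx=0.1, dt=0.05)
```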
This paper investigates a novel approach to rendering color information from pictures as haptic feedback at the fingers. Our approach uses a 1D haptic rotary display to render the color information to the fingers using sinusoidal textures of different frequency and amplitude. We tested 12 subjects on their ability to associate colors laid out in a spatially irregular pattern with haptic feedback displayed to their fingers, with the number of color/haptic stimulus pairs presented increasing in successive trials. The experimental results suggest that subjects are able to comfortably learn and distinguish up to 8 color/haptic stimulus pairs based on this particular mapping; with some effort, many can distinguish as many as 16 pairs. The results also raise key issues for further investigation in subsequent studies, including the role of multimodal inputs such as audio along with haptics.
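Such a mapping can be sketched as a lookup from color to sinusoid parameters followed by waveform synthesis; the frequencies and amplitudes below are illustrative placeholders, not the values used in the study:

```python
import math

# Illustrative color -> (frequency in Hz, normalized amplitude) mapping
# for a 1D rotary display; the study's actual parameters are not shown here.
COLOR_MAP = {"red": (40.0, 1.0), "green": (80.0, 0.5), "blue": (160.0, 0.25)}

def haptic_waveform(color, duration_s=0.1, sample_rate=1000):
    """Synthesize the sinusoidal texture signal for a given color."""
    freq, amp = COLOR_MAP[color]
    n = int(duration_s * sample_rate)
    return [amp * math.sin(2 * math.pi * freq * t / sample_rate)
            for t in range(n)]

wave = haptic_waveform("red")
```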
Considering indexes and algorithms to answer XPath queries over XML data, we propose an index structure and a related algorithm, both adapted to the comparison model, where elements can be accessed non-sequentially. The indexing scheme uses classical labelling techniques, but structurally represents the ancestor-descendant relationships of nodes of each type, in order to allow exponential searches. The algorithm performs XPath location steps along the descendant axis, and it generates few intermediate results. The complexity of the algorithm is proved worst-case optimal in an adaptive comparison model where the index is given, and where the instances are grouped by the number of comparisons needed to check their answer.
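The exponential (galloping) search mentioned above is the ingredient that makes an adaptive bound possible: doubling the probe distance and then binary searching costs O(log d) comparisons, where d is the distance from the starting position to the answer. A generic sketch:

```python
from bisect import bisect_left

def exponential_search(sorted_list, target, start=0):
    """Exponential (galloping) search from 'start': double the probe
    distance until the target is bracketed, then binary search within
    the bracket. Returns the leftmost insertion point for 'target'."""
    step = 1
    hi = start + 1
    while hi < len(sorted_list) and sorted_list[hi] < target:
        step *= 2
        hi = start + step
    lo = start + step // 2  # the previous, smaller probe position
    return bisect_left(sorted_list, target, lo, min(hi + 1, len(sorted_list)))

labels = [1, 3, 5, 7, 9, 11, 13]  # e.g. sorted node labels of one type
pos = exponential_search(labels, 9)
```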
Crossover is believed to initiate at specific sites called hotspots, by a recombinational-repair mechanism in which the initiating hotspot is replaced by a copy of its homologue. Boulton et al. studied the effect of this mechanism through simulation, and observed in their model that active hotspot alleles are rapidly replaced by inactive alleles. This is paradoxical because active hotspot alleles do not disappear in natural systems. We give a theoretical analysis of this model, which confirms their experimental result, and we argue that they failed to properly take into account the benefits of recombination, because of the optimality of their initial population. On the other hand, we show that even with an initial population of low fitness the model does not sustain the active hotspot alleles. These results suggest that at least one model is wrong: either the one for the recombination of chromosomes, or the one for the diffusion of the hotspot alleles. We suggest another model for the diffusion of hotspot alleles.
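The drive described above can be illustrated with a toy deterministic recurrence (not the Boulton et al. model): if initiation in heterozygotes converts the active allele at an effective rate c, the active-allele frequency p loses c * p * (1 - p) per generation and decays toward zero:

```python
def hotspot_drive(p0, conversion_rate, generations):
    """Toy illustration of the hotspot conversion paradox: initiation at
    an active allele in a heterozygote replaces it with a copy of the
    inactive homologue, so p declines by c * p * (1 - p) each generation.
    The parameter values are illustrative only."""
    p = p0
    history = [p]
    for _ in range(generations):
        p -= conversion_rate * p * (1 - p)
        history.append(p)
    return history

traj = hotspot_drive(p0=0.9, conversion_rate=0.1, generations=200)
```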
RNA secondary structure prediction has become a major bioinformatics research area, since it could be inferred that all functions of a single-stranded RNA are influenced by its secondary structure. Progress in this field has been hindered, among other things, by the lack of a unified repository for RNA informatics data exchange, and by the lack of a standardized file format. We propose to advance the cause for such a centralized RNA database, and to determine the fastest query approach, should one exist: store the indexes in a relational table and use SQL to narrow the set of potential answers to only the matching files before performing XPath on the RNAML file itself, or store the indexes at the highest (i.e. first) level of an XML file and use XPath exclusively. We have found that storing the indexes in a relational table and using both SQL and XPath is faster by at least one order of magnitude than storing the indexes at the first level of an XML file and using XPath only. Furthermore, the discrepancy between the speeds of the two query methods increases with the number of files. We describe the system we have built to test our hypothesis, our testing procedure and results, and explore avenues that will allow us to generalize our results to other XML databases.
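The hybrid scheme can be miniaturized as follows; the file names, index fields, and XML content are made up for illustration, using SQLite and the limited XPath support in Python's ElementTree:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical miniature of the hybrid approach: a relational table holds
# per-file index fields, SQL narrows the candidate files, and XPath-style
# matching then runs only on the survivors' XML content.
files = {
    "rna1.xml": "<rnaml><molecule type='tRNA'><seq>GCGG</seq></molecule></rnaml>",
    "rna2.xml": "<rnaml><molecule type='rRNA'><seq>AUGC</seq></molecule></rnaml>",
}
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE idx (name TEXT, mol_type TEXT)")
db.executemany("INSERT INTO idx VALUES (?, ?)",
               [("rna1.xml", "tRNA"), ("rna2.xml", "rRNA")])

# Step 1: SQL narrows the candidate set.
names = [r[0] for r in db.execute("SELECT name FROM idx WHERE mol_type = 'tRNA'")]
# Step 2: XPath-style matching on only the matching files.
seqs = [el.text for n in names
        for el in ET.fromstring(files[n]).findall(".//molecule/seq")]
```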
Real-time collaboration systems that enable distributed access to a shared application often require a turn-taking protocol. Current protocols rely on the visual channel using GUI widgets, and do not support expressions of urgency. We describe a novel urgency-based turn-taking protocol that is mediated through haptics: vibrotactile signals inform users of their current role in the collaboration. For example, a control holder receives different signals according to the urgency with which collaborators request control. In an observational user study we compare three implementations of the protocol: one dominated by haptic signals, one with visual cues alone, and one balancing both modalities. Our results suggest that a modestly-sized set of well-designed haptic stimuli can be learned to a high degree of accuracy in a short time, that the inclusion of haptic stimuli can make turn-taking behavior more equitable, and that the ability to communicate urgency positively impacts collaboration dynamics.
This work addresses the use of vibrotactile haptic feedback to transmit background information with variable intrusiveness, when recipients are engrossed in a primary visual and/or auditory task. We describe two studies designed to (a) perceptually optimize a set of vibrotactile "icons" and (b) evaluate users' ability to identify them in the presence of varying degrees of workload. Seven icons learned in approximately 3 minutes were each typically identified within 2.5 s and at 95% accuracy in the absence of workload.
We present an implementation of the standard sum of absolute differences (SAD) stereo disparity algorithm, performing all computation in graphics hardware. To our knowledge, this is the fastest published stereo disparity implementation on commodity hardware. With an inexpensive graphics card, we achieve `raw' SAD performance above 170 MPDS (mega-pixel disparities per second), corresponding to 5x5 neighbourhoods, 640x480 pixel images, 54 disparities, 10 frames per second (fps) (or 320x240 pixels, 96 disparities, 25 fps). The CPU is approximately 90% idle while this computation is being performed. Other authors have presented stereo disparity implementations for graphics hardware. However, we focus on filtering the raw results in order to eliminate unreliable pixels, thereby decreasing the error in the final disparity maps. Since the standard SAD algorithm produces disparity maps with relatively high error rates, such filtering is essential for many applications. We implement shiftable windows, left-right consistency, texture, and disparity smoothness filters, all using graphics hardware. We investigate the accuracy/density tradeoff of the latter three filters using a novel analysis. We find that the left-right consistency and smoothness filters are particularly effective, and using these filters we achieve performance above 110 MPDS: 640x480 pixel images, 36 disparities, 10 frames per second (or 320x240 pixels, 66 disparities, 25 fps). This level of performance demonstrates that graphics cards are powerful co-processors for low-level computer vision tasks.
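The raw winner-take-all SAD computation can be written as a plain CPU reference sketch (the paper's implementation runs this in graphics hardware, and the filtering stages are omitted here):

```python
def sad_disparity(left, right, max_disp, radius=1):
    """CPU sketch of winner-take-all SAD matching: each left-image pixel
    takes the disparity d minimizing the window sum of absolute
    differences against the right image shifted by d. Images are lists
    of lists of intensities; border pixels are left at disparity 0."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            best_cost, best_d = float("inf"), 0
            for d in range(min(max_disp + 1, x - radius + 1)):
                cost = sum(abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                           for dy in range(-radius, radius + 1)
                           for dx in range(-radius, radius + 1))
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp

# Synthetic pair: scene content in the left image appears 2 pixels to the
# left in the right image, so the true disparity is 2 everywhere.
left = [[10 * x + y for x in range(12)] for y in range(5)]
right = [[10 * (x + 2) + y for x in range(12)] for y in range(5)]
disp = sad_disparity(left, right, max_disp=3)
```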