An algorithm is presented which, given a prescribed manipulator path and corresponding kinematic solution, computes a feasible trajectory in the presence of kinematic singularities. The resulting trajectory is close to minimum time, subject to explicit bounds on joint velocities and accelerations, and follows the path with precision. The algorithm has complexity O(M log M), with respect to the number of joint coordinates M, and works using "coordinate pivoting", in which the path timing near singularities is controlled using the fastest changing joint coordinate. This allows the handling of singular situations, including linear self-motions (e.g., wrist singularities), where the path speed is zero but other joint velocities are non-zero. To compute the trajectory, knot points are inserted along the input path, with increased knot density near singularities. Appropriate path velocities are then computed at each knot point, and the resulting knot-velocity sequence can be integrated to yield the path timing. Examples involving the PUMA manipulator are shown.
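The final integration step described in this abstract can be illustrated with a minimal sketch. Assuming each knot carries an arc-length position and a positive path speed, and assuming a constant-acceleration profile between adjacent knots (an assumption of this sketch, not stated in the abstract), the knot-velocity sequence yields timestamps directly; at knots where the path speed is zero, the abstract's "coordinate pivoting" would take over instead:

```python
def integrate_timing(s, v):
    """Integrate a knot-velocity sequence into path timestamps.

    s : increasing arc-length positions of the knot points
    v : path speed at each knot (assumed > 0 here; singular knots
        with zero path speed are outside the scope of this sketch)
    Returns the time at which each knot is reached, starting at t=0.
    """
    t = [0.0]
    for i in range(len(s) - 1):
        ds = s[i + 1] - s[i]
        # constant-acceleration segment: average speed is (v_i + v_{i+1}) / 2
        t.append(t[-1] + 2.0 * ds / (v[i] + v[i + 1]))
    return t
```

For example, three knots spaced one unit apart at unit speed yield timestamps 0, 1, 2.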
Entropy and Enumeration Nicholas Pippenger Shannon's notion of the entropy of a random variable is used to give simplified proofs of asymptotic formulas for the logarithms of the numbers of monotone Boolean functions and Horn functions, and for equivalent results concerning families of sets and closure operations.
Aspect-oriented programming is a new software design and implementation technique proposed by researchers at Xerox PARC. This project is assessing the claims of aspect-oriented programming to improve the software development cycle for particular kinds of applications. The project is divided into three experiments, the first of which has been completed. These experiments have been designed to investigate, separately, such characteristics of aspect-oriented development as the creation of new aspect-oriented programs and ease of debugging aspect-oriented programs.
A partially automated process for generating tests has been experimentally applied to a formal specification of real-world air traffic separation minima. This report discusses the problems addressed by this process along with how and why this automation was achieved.
A partially automated process for generating tests has been experimentally applied to a portion of a real world system-level requirements specification. This paper discusses the problems addressed by this process along with how and why this automation was achieved. The requirements were formalized using a notation designed to be readable by a large proportion of requirements stakeholders. This report also addresses traceability of requirements to tests and introduces the requirements specification language Q.
This paper proposes an approach to the safety verification of the source code of a software-intensive system. This approach centers upon the production of a document intended to ensure the inspectability, maintainability and repeatability of the source code safety verification. This document, called a "safety verification case", is intended to be a part of the overall system safety case. Although the approach was designed for large software-intensive real-time information systems, it may also be useful for other kinds of large software systems with safety-related functionality. The approach involves the construction of a rigorous argument that the source code is safe. The steps of the argument include simplifying the safety verification case structure by isolating the relevant details of the source code, and reducing the "semantic gap" between the source code and the system level hazards through a series of hierarchical refinement steps. Some of the steps in a process based on this approach may be partially automated with tool-based support. Current research and industry practices are reviewed in this paper for supporting tools and techniques.
The Computational Complexity of Knot and Link Problems Joel Hass Department of Mathematics University of California, Davis Davis, CA 95616 USA Jeffrey C. Lagarias Information Sciences Research AT&T Labs 180 Park Avenue Florham Park NJ 07932-0971 USA Nicholas Pippenger Department of Computer Science University of British Columbia Vancouver, BC V6T 1Z4 CANADA We consider the problem of deciding whether a polygonal knot in three-dimensional space, or alternatively a knot diagram, is unknotted (that is, whether it is capable of being deformed continuously without self-intersection so that it lies in a plane). We show that this problem, UNKNOTTING PROBLEM, is in NP. We also consider the problem, SPLITTING PROBLEM, of determining whether two or more such polygons can be split (that is, whether they are capable of being continuously deformed without self-intersection so that they occupy both sides of a plane without intersecting it) and show that it is also in NP. We show that the problem of determining the genus of a polygonal knot (a generalization of the problem of determining whether it is unknotted) is in PSPACE. We also give exponential worst-case running time bounds for deterministic algorithms to solve each of these problems. These algorithms are based on the use of normal surfaces and decision procedures due to W. Haken, with recent extensions by W. Jaco and J. F. Tollefson.
Galois Theory for Minors of Finite Functions Nicholas Pippenger A Boolean function f is a minor of a Boolean function g if f is obtained from g by substituting an argument of f, the complement of an argument of f, or a Boolean constant for each argument of g. The theory of minors has been used to study threshold functions (also known as linearly separable functions) and their generalization to functions of bounded order (where the degree of the separating polynomial is bounded, but may be greater than one). We construct a Galois theory for sets of Boolean functions closed under taking minors, as well as for a number of generalizations of this situation. In this Galois theory we take as the dual objects certain pairs of relations that we call ``constraints'', and we explicitly determine the closure conditions on sets of constraints.
Network applications are applications capable of selecting, at run-time, portions of their code to execute at remote network locations. By executing remote code in a restricted environment and providing convenient communication mechanisms within the application, network applications enable the implementation of tasks that cannot be implemented using traditional techniques. Even existing applications can realize significant performance improvements and reduced resource consumption when redesigned as network applications.
By examining several application domains, we expose specific desirable capabilities of a software infrastructure to support network applications. These capabilities entail a variety of interacting software development challenges for which we recommend solutions.
The solutions are applied in the design and implementation of a network application infrastructure, Jay, based on the Java language. Jay meets most of the desired capabilities, particularly demonstrating a cohesive and expressive communication framework and an integrated yet simple security model.
In all, network applications combine the best qualities of intelligent networks, active networks, and mobile agents into a single framework to provide a unique and effective development environment.
Two of the most important and most difficult questions one can ask about a new software development technique are whether the technique is useful and whether the technique is usable. Various flavours of empirical study are available to evaluate these questions, including surveys, case studies, and experiments. These different approaches have been used extensively in a number of domains, including management science and human-computer interaction. A growing number of software engineering researchers are using experimental methods to statistically validate hypotheses about relatively mature software development aids. Less guidance is available for a developer of a new and evolving software development technique who is attempting to determine, within some cost bounds, if the technique shows some usefulness. We faced this challenge when assessing a new programming technique called aspect-oriented programming. To assess the technique, we chose to apply both a case study approach and a series of four experiments because we wanted to understand and characterize the kinds of information that each approach might provide when studying a technique that is in its infancy. Our experiences suggest some avenues for further developing empirical methods aimed at evaluating software engineering questions. For instance, guidelines on how different observational techniques can be used as multiple sources of data would be helpful when planning and conducting a case study. For the experimental situation, more guidance is needed on how to balance the precision of measurement with the realism necessary to investigate programming issues. In this paper, we describe and critique the evaluation methods we employed, and discuss the lessons we have learned. These lessons are applicable to researchers attempting to assess other new programming techniques that are in an early stage of development.
The principle of separation of concerns has long been used by software engineers to manage the complexity of software system development. Programming languages help software engineers explicitly maintain the separation of some concerns in code. As another step towards increasing the scope of concerns that can be captured cleanly within the code, Kiczales and colleagues have introduced aspect-oriented programming. In aspect-oriented programming, explicit language support is provided to help modularize design decisions that cross-cut a functionally-decomposed program. Aspect-oriented programming is intended to make it easier to reason about, develop, and maintain certain kinds of application code. To investigate these claims, we conducted two exploratory experiments that considered the impact of aspect-oriented programming, as found in AspectJ version 0.1, on two common programming activities: debugging and change. Our experimental results provide insights into the usefulness and usability of aspect-oriented programming. Our results also raise questions about the characteristics of the interface between aspects and functionally-decomposed core code that are necessary to accrue programming benefits. Most notably, the separation provided by aspect-oriented programming seems most helpful when the interface is narrow (i.e., the separation is more complete); partial separation does not necessarily provide partial benefit.
MMX technology is designed to accelerate multimedia and communications applications. The technology includes new processor instructions and data types and exploits the parallelism available in processing multiple data elements with a single instruction. In this work we demonstrate how to use MMX technology in digital image processing applications and discuss important aspects of practical implementation with the GCC compiler. We also focus on the experimental results of common image processing operations and provide a comparative performance analysis.
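A key MMX primitive for image processing is saturating arithmetic on packed unsigned bytes, e.g. the PADDUSB instruction, which clamps results at 255 instead of wrapping around. A scalar Python sketch of those semantics (MMX applies the same operation to eight bytes in one instruction) illustrates why this matters for operations such as image brightening:

```python
def paddusb(a, b):
    """Saturating unsigned-byte add, the semantics of MMX's PADDUSB.

    MMX performs this on 8 packed bytes per instruction; here it is
    written element-by-element for clarity. Sums above 255 saturate
    rather than wrapping, which avoids visible artifacts when, e.g.,
    brightening an 8-bit image.
    """
    return [min(x + y, 255) for x, y in zip(a, b)]
```

For example, adding a brightness offset of 10 to the pixels [250, 10] gives [255, 20] under saturation, whereas ordinary modular byte arithmetic would wrap 250 + 10 to 4.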
This paper presents an approach to verifying timed designs based on refinement: first, correctness is established for a speed-independent model; then, the timed design is shown to be a refinement of this model. Although this approach is less automatic than methods based on timed state space enumeration, it is tractable for larger designs. Our method is implemented using a proof checker with a built-in model checker for verifying properties of high-level models, a tautology checker for establishing refinement, and a graph-based timing verification procedure for showing timing properties of transistor level models. We demonstrate the method by proving the timing correctness of Williams' self-timed divider.
A trajectory generator normally produces a path which is sufficiently well behaved that it can be tracked by a manipulator. However, the creation of good paths becomes somewhat problematic in situations where a manipulator is required to follow a target whose position is varying erratically (for instance, if the target is specified using a position sensor held in an operator's hand). This paper presents a simple solution for such situations, in which the ``trajectory generator'' is implemented as a non-linear filter which tries to bring its output (manipulator setpoints) to the input (target position) as quickly as possible, subject to constraints on velocity and acceleration. The solution to this problem in one dimension is quite easy. For multiple dimensions, the problem can be handled by applying one-dimensional solutions to a pair of appropriately chosen coordinate axes. An interesting feature of the approach is that it can handle spatial rotations as well as vector quantities.
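The one-dimensional case the abstract calls "quite easy" can be sketched as a time-optimal-style setpoint filter: at each step, choose the velocity that could still brake to rest at the target under the acceleration bound, then limit both that velocity and its rate of change. This is a generic sketch of such a filter, not the paper's specific formulation:

```python
import math

def step(x, v, target, v_max, a_max, dt):
    """One step of a 1-D velocity- and acceleration-limited setpoint filter.

    x, v     : current output position and velocity
    target   : current (possibly erratic) input position
    v_max    : velocity bound, a_max : acceleration bound, dt : time step
    Returns the updated (x, v).
    """
    err = target - x
    # fastest speed toward the target that still allows stopping on it
    # under the acceleration bound (braking distance v^2 / (2 a) <= |err|)
    v_des = math.copysign(min(v_max, math.sqrt(2.0 * a_max * abs(err))), err)
    # limit the change in velocity to the acceleration bound
    dv = max(-a_max * dt, min(a_max * dt, v_des - v))
    v = v + dv
    x = x + v * dt
    return x, v
```

Because the filter consumes only the current target position, the input is free to jump around; the output accelerates, cruises at v_max, and decelerates so as to settle on wherever the target currently is.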