Alan K. Mackworth
Laboratory for Computational Intelligence
Department of Computer Science
University of British Columbia
Vancouver, B.C., Canada V6T 1Z4
The Constraint Satisfaction Problem (CSP) paradigm has evolved and matured over the last two decades. The algorithms developed in the CSP paradigm were made more available and more useful when they were incorporated into the Constraint Programming (CP) language paradigm. This success should be celebrated, but more remains to be done in both paradigms.
Despite this success, however, a major challenge still facing the constraint research community is to develop useful theoretical and practical tools for the constraint-based design of embedded intelligent systems. An archetypal example of an application in this class is the design of controllers for sensory-based robots [Zhang and Mackworth, 1994b; Sahota and Mackworth, 1994]. If we examine this problem, we see that almost all the tools developed to date in the CSP and CP paradigms are inadequate for the task, despite the superficial attraction of the constraint-based approach.
The fundamental difficulty is that, for the most part, the CSP and CP paradigms presume an offline model of computation. But intelligent systems embedded as controllers in real physical systems must be designed in an online model. Moreover, the online model must be based on various time structures: continuous, discrete and event-based. The requisite online computations, or transductions, are to be performed over various type structures including continuous and discrete domains. These hybrid systems require new models of computation, constraint satisfaction and constraint programming. To this end, Zhang and Mackworth [Zhang and Mackworth, 1994a] defined constraint satisfaction as a dynamic system process that approaches asymptotically the solution set of the given, possibly time-varying, constraints. Under this view, constraint programming is the creation of a dynamic system with the required property.
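The view of constraint satisfaction as an asymptotic dynamic-system process can be illustrated with a minimal sketch (ours, not the cited formalism): the state is repeatedly nudged toward the solution set of a constraint whose target drifts over time, rather than being solved once offline. The function names and the particular update rule are illustrative assumptions.

```python
# Illustrative sketch: online constraint satisfaction as a dynamic system.
# The constraint is x = target(t), with target possibly time-varying; the
# discrete-time dynamics drive the residual toward zero asymptotically.

def solve_online(x0, target, gain=0.5, steps=50, dt=1.0):
    """Iterate x' = x + gain * (target(t) - x), returning the state trace."""
    x = x0
    trace = []
    for k in range(steps):
        t = k * dt
        x = x + gain * (target(t) - x)  # move toward the current solution set
        trace.append(x)
    return trace

# A slowly drifting constraint: x should track 10 + 0.1 * t.
trace = solve_online(0.0, lambda t: 10.0 + 0.1 * t)
```

With a fixed gain the state tracks the drifting solution set with a small steady-state lag; a time-invariant constraint would be approached exactly in the limit.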
In the classic `knowledge-based' Good Old-Fashioned Artificial Intelligence and Robotics (GOFAIR) paradigm, robot architectures are based on offline perception and reasoning [Mackworth, 1993]. This approach has failed to make substantial progress for several reasons. One difficulty is the engineering problem of building robots by integrating offline perception and reasoning systems with online control-based motor systems; this integration is difficult, ugly and inefficient. Because of such objections, some in the AI-robotics community have rejected the knowledge-based approach, adopting instead an ad hoc Gibsonian situated approach to perception that exploits regularities of the particular environmental niche of the robot [Horswill and Brooks, 1988]. However, with a radical re-interpretation of `knowledge-based', we can design, build and verify quick and clean knowledge-based situated robot systems.
The Constraint Net (CN) framework [Zhang and Mackworth, 1995a] is a formal and practical model for building hybrid intelligent systems as situated agents [Zhang and Mackworth, 1994b]. In CN, a robotic system is modeled formally as a symmetrical coupling of a robot with its environment. Even though a robotic system is, typically, a hybrid dynamic system, its CN model is unitary. Most other robot design methodologies use hybrid models of hybrid systems, awkwardly combining offline computational models of high-level perception, reasoning and planning with online models of low-level sensing and control.
CN is a model for robotic systems software implemented as modules with I/O ports. A module performs a transduction from its input traces to its output traces, subject to the principle of causality: an output value at any time can depend only on the input values before, or at, that time. The model has a formal semantics based on the least fixpoint of sets of equations [Zhang and Mackworth, 1995a]. In applying it to a robot operating in a given environment, one separately models the dynamics of the robot plant, the robot control program, and the environment. The total system may then be shown to have various properties, such as safety and liveness, based on provable properties of its subsystems. This approach allows one to specify and verify models of embedded control systems. Our goal is to develop it as a practical tool for building real, complex, sensor-based robots.
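The module-and-trace picture above can be sketched in a few lines of Python (the class and method names are ours, not the CN implementation). The point is the causality principle: the output at step t is computed from the input history up to and including t, and nothing later.

```python
# A minimal sketch of a CN-style module: a transduction from input traces
# to output traces, restricted by causality.

class Module:
    def __init__(self, transduction):
        # transduction: maps an input history (a list) to one output value
        self.transduction = transduction

    def run(self, input_trace):
        output_trace = []
        for t in range(len(input_trace)):
            history = input_trace[: t + 1]  # only inputs at steps <= t
            output_trace.append(self.transduction(history))
        return output_trace

# Example transduction: a running average of the input seen so far.
avg = Module(lambda hist: sum(hist) / len(hist))
out = avg.run([2, 4, 6, 8])
```

Wiring such modules together through shared traces gives the flavor of a constraint net, though the formal semantics (least fixpoints of equation sets) is richer than this sketch.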
Although CN can carry out traditional symbolic computation online, such as solving Constraint Satisfaction Problems and path planning, notice that much of the symbolic reasoning and theorem-proving may be outside the agent, in the mind of the designer. GOFAIR does not make this distinction, assuming that such symbolic reasoning occurs explicitly in, and only in, the mind of the agent.
In CN the modeling language and the specification language are totally distinct since they have very different requirements. The modeling language is a generalized dynamical system language. Two versions of the specification language, Timed Linear Temporal Logic [Zhang and Mackworth, 1995c] and Timed ∀-automata [Zhang and Mackworth, 1994a], have been developed with appropriate theorem-proving and model-checking techniques for verifying systems.
Many robots can be designed as online constraint-satisfying devices [Zhang and Mackworth, 1994a; Zhang and Mackworth, 1995b; Zhang and Mackworth, 1995c]. A robot in this restricted scheme can be verified more easily. Moreover, given a constraint-based specification and a model of the plant and the environment, automatic synthesis of a correct constraint-satisfying controller becomes feasible, as shown for a simple ball-chasing robot in [Zhang and Mackworth, 1995c].
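A hedged sketch of what "online constraint-satisfying device" means for the ball-chasing example (this is our toy illustration, not the controller synthesized in the cited paper): the constraint is that the robot's position coincide with the ball's, and the control law continually reduces the distance toward zero.

```python
# Toy constraint-satisfying controller: the constraint is robot == ball,
# and each control step moves the robot toward the ball at bounded speed,
# driving the constraint residual (the distance) toward zero.

def chase(robot, ball, speed=0.3, steps=100):
    rx, ry = robot
    bx, by = ball
    for _ in range(steps):
        dx, dy = bx - rx, by - ry
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < 1e-9:
            break                      # constraint satisfied
        step = min(speed, dist)        # bounded-speed move toward the ball
        rx += step * dx / dist
        ry += step * dy / dist
    return (rx, ry)

pos = chase((0.0, 0.0), (3.0, 4.0))
```

A time-varying ball position would make this an online tracking problem of the kind the asymptotic constraint-satisfaction view is meant to capture.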
Theory is vacuous without an appropriate application to drive designs, experiments and implementations. These ideas are being tested by their application to the task of designing, building and verifying perception, communication and planning systems for robot soccer players with off-board or on-board vision systems.
In the Dynamo Project in our laboratory, we are experimenting with multiple mobile robots under visual control. The Dynamite testbed consists of a fleet of radio-controlled vehicles that receive commands from a remote computer. Using our custom hardware and a distributed MIMD environment, vision programs monitor the position and orientation of each robot at 60 Hz; planning and control programs generate and send motor commands at the same rate. This approach allows umbilical-free behavior and very rapid, lightweight, fully autonomous robots. Using this testbed we have demonstrated various robot tasks [Barman et al., 1993], including playing soccer [Sahota and Mackworth, 1994].
Using situated robots as one of our target applications, we are developing the framework of a new approach to the problem of constraint-based design of embedded intelligent systems. This is a benchmark problem posed as a challenge for constraint-based theories of computation.
I am grateful to Rod Barman, Cullen Jennings, Stewart Kingdon, Jim Little, Valerie McRae, Don Murray, Dinesh Pai, Michael Sahota, Vlad Tucakov and Ying Zhang for help with this. This work is supported, in part, by the Natural Sciences and Engineering Research Council of Canada and the Institute for Robotics and Intelligent Systems Network of Centres of Excellence.