
The LPSAT Solver

Our first step in constructing the LPSAT engine was to choose solvers to serve as the foundation for its metric and propositional solving components. The choice was motivated by the following criteria:

1. It must be easy to modify the propositional solver in order to support triggers and handle reports of inconsistency from the constraint reasoner.

2. The metric solver must support incremental modifications to the constraint set.

3. Because a Simplex solve is more expensive than setting a single propositional variable's value, the propositional solver should minimize modifications to the constraint set.

These principles led us to implement the LPSAT engine by modifying the RELSAT satisfiability engine [Bayardo and Schrag, 1997] and combining it with the CASSOWARY constraint solver [Borning et al., 1997; Badros and Borning, 1998] using the method described in [Nelson and Oppen, 1979]. RELSAT is an excellent starting point for processing LCNF for three reasons. First, it performs a systematic, depth-first search through the space of partial truth assignments, which minimizes changes to the set of active metric constraints. Second, its code is exceptionally well structured. Third, RELSAT incorporates powerful learning and backjumping optimizations. CASSOWARY is an appropriate Simplex solver for handling LCNF because it was designed to support, and respond quickly to, small changes in its constraint set.
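To make criteria 2 and 3 concrete, the sketch below shows one way the metric side of such a combination can be organized: an incrementally modifiable store of triggered linear constraints with a feasibility check (BAD?) and a value extractor (SOLVE). This is only an illustration under our own assumptions; the class, its method names, the linear-inequality constraint form, and the use of scipy's LP solver as a stand-in for CASSOWARY's incremental Simplex are not the actual RELSAT/CASSOWARY interface.

```python
# A minimal sketch (assumptions, not CASSOWARY's real API) of the metric half
# of the combination: an incrementally modifiable store of triggered linear
# constraints of the hypothetical form  sum(coeffs[v] * v) <= bound,
# with feasibility checked by scipy's LP solver standing in for Simplex.

from scipy.optimize import linprog


class ConstraintStore:
    def __init__(self, variables):
        self.variables = list(variables)   # ordered constraint variables
        self.rows = []                     # active (coeffs, bound) constraints

    def push(self, coeffs, bound):
        """Trigger a constraint: sum(coeffs[v] * v for v in coeffs) <= bound."""
        self.rows.append((coeffs, bound))

    def pop(self):
        """Retract the most recently triggered constraint (on backtrack)."""
        self.rows.pop()

    def _feasibility_lp(self):
        # Minimize the constant 0 subject to the active rows; only
        # feasibility and a witness point matter, not optimality.
        a_ub = [[coeffs.get(v, 0.0) for v in self.variables]
                for coeffs, _ in self.rows]
        b_ub = [bound for _, bound in self.rows]
        return linprog(c=[0.0] * len(self.variables), A_ub=a_ub, b_ub=b_ub,
                       bounds=[(None, None)] * len(self.variables))

    def inconsistent(self):
        """BAD?: does the active constraint set have no solution?"""
        return bool(self.rows) and not self._feasibility_lp().success

    def solve(self):
        """SOLVE: return real values for the constraint variables, or None."""
        if not self.rows:
            return {v: 0.0 for v in self.variables}
        result = self._feasibility_lp()
        return dict(zip(self.variables, result.x)) if result.success else None
```

Pushing and popping constraints in stack order matches the depth-first search described above, so only the constraints triggered along the current branch are ever active.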

To build LPSAT, we modified RELSAT to include trigger variables and constraints. This required four changes. First, the solver must trigger constraints as the truth assignment changes. Second, the solver must check that the active constraint set is solvable to ensure that a truth assignment is satisfying. Third, the solver must report in its solution not only a truth assignment to the boolean variables but also an assignment of real values to the constraint variables. Finally, since even a purely positive trigger variable may (if set to true) trigger an inconsistent constraint, pure literal elimination cannot act on positive trigger variables. Figure 3 displays pseudocode for the resulting algorithm.

Figure 3: Core LPSAT algorithm (without learning or backjumping). BAD? denotes a check for constraint inconsistency; SOLVE returns constraint-variable values. $\mathcal{T}(u)$ returns the constraint triggered by $u$ (possibly null). $\Sigma(u)$ denotes the result of setting literal $u$ true in $\Sigma$ and simplifying.
[Figure 3 pseudocode: procedure LPSAT(LCNF problem); see the sketch below.]
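Since the figure's pseudocode is not reproduced here, the following is a hedged reconstruction of the core search as described above: a depth-first search over partial truth assignments that triggers a constraint when its trigger literal is set, retracts it on backtrack, checks BAD? before descending, and calls SOLVE once all clauses are satisfied. The clause representation, the helper names, and the reuse of the ConstraintStore sketch above are our assumptions; the real RELSAT-based implementation differs (for example, it also performs unit propagation and pure literal elimination).

```python
# A hedged reconstruction of the core LPSAT search of Figure 3 (without
# learning or backjumping), written against the ConstraintStore sketch above.
# Clauses are frozensets of literals (variable, polarity); `triggers` maps a
# literal u to the (coeffs, bound) constraint it triggers, i.e. T(u), and
# omits literals that trigger nothing.  Helper and parameter names are
# illustrative assumptions, not RELSAT's internals.

def simplify(clauses, literal):
    """Sigma(u): set `literal` true, drop satisfied clauses, shrink the rest."""
    var, polarity = literal
    remaining = set()
    for clause in clauses:
        if literal in clause:
            continue                                   # clause already satisfied
        remaining.add(clause - {(var, not polarity)})  # remove falsified literal
    return remaining


def lpsat(clauses, triggers, store, assignment=None):
    """Return (truth assignment, constraint-variable values), or None."""
    assignment = dict(assignment or {})
    if store.inconsistent():                   # BAD?: triggered constraints clash
        return None
    if frozenset() in clauses:                 # empty clause: propositional dead end
        return None
    if not clauses:                            # all clauses satisfied
        return assignment, store.solve()
    var = next(iter(next(iter(clauses))))[0]   # branch on some unassigned variable
    for polarity in (True, False):
        literal = (var, polarity)
        constraint = triggers.get(literal)
        if constraint is not None:
            store.push(*constraint)            # setting the literal triggers T(u)
        result = lpsat(simplify(clauses, literal), triggers, store,
                       {**assignment, var: polarity})
        if result is not None:
            return result
        if constraint is not None:
            store.pop()                        # retract the constraint on backtrack
    return None
```

For instance, calling lpsat({frozenset({('t1', True)})}, {('t1', True): ({'x': 1.0}, 5.0)}, ConstraintStore(['x'])) sets t1 true, triggers the constraint x <= 5, and returns that truth assignment together with a real value for x.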

