Configurable SAT Solver Challenge (CSSC) 2013

News: the results are available here.
There is also a technical report with the results of CSSC 2013 and an arXiv paper about the CSSC.

CSSC 2013 is a competitive event that assesses the peak performance of parameterized solvers for the Boolean satisfiability (SAT) problem. A broad range of SAT solvers expose such parameters to enable automated customization for different instance distributions, and such customization often yields large improvements over the solver defaults. The competition therefore recognizes that the value of a SAT solver often lies in its customizability rather than just in the performance of its default configuration.


Flexibility in a solver is most useful if it can be exploited automatically to customize the solver to a given application domain (defined by a set of training benchmark instances). In this year's inaugural competition, this automated customization will be carried out by running a combination of several automated algorithm configuration procedures to optimize each solver's parameters (see Automated Algorithm Configuration below for details). Future events may allow other customization processes.

Mechanics

Solver developers will submit their solvers as they would for the SAT Competition, but they will also submit a file listing their solver's parameters and their possible values, and ensure that these parameters can be set in a command line call to the solver. (Please see What solver developers need to provide for details.) On each of several sets of homogeneous instances (see Categories for details), the organizers will then apply an automated algorithm configuration procedure to identify good parameter settings and score solvers by their performance with these optimized settings. Solver developers who submit an entry to the competition will be given feedback on which parameters turned out to be empirically important and which values yielded high performance.
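To make the requirement concrete, here is a minimal sketch of a solver that exposes tunable parameters on its command line; the parameter names, ranges, and defaults are invented for illustration and do not correspond to any particular submitted solver.

```python
# Hypothetical sketch: a solver exposing tunable parameters on the command line.
# All parameter names and ranges below are invented for illustration.
import argparse

parser = argparse.ArgumentParser(description="Toy parameterized SAT solver")
parser.add_argument("instance", help="path to a CNF instance")
parser.add_argument("--restart-interval", type=int, default=100,
                    help="number of conflicts between restarts")
parser.add_argument("--var-decay", type=float, default=0.95,
                    help="activity decay factor for the branching heuristic")
parser.add_argument("--phase-saving", choices=["none", "limited", "full"],
                    default="full", help="phase-saving policy")
args = parser.parse_args()

# A configurator would invoke the solver many times with different values, e.g.:
#   solver.py instance.cnf --restart-interval 250 --var-decay 0.9 --phase-saving limited
print(f"solving {args.instance} with settings {vars(args)}")
```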

Eligibility and Rules

In line with the emphasis of the SAT Competition 2013, this year's competition focuses on core solvers. Proprietary solvers are eligible for submission, but prizes will only be awarded to solvers whose developers agree to make source code (and the parameter specification / solver wrapper) available on the competition website. Solvers will be ranked as in the SAT Competition 2013, with the important difference that CSSC 2013 will measure peak performance (i.e., performance with optimized parameters) rather than the performance of the solver defaults.

Categories

Similar to previous SAT competitions, there will be three categories:

Each category will include multiple sub-categories of benchmark instances, where each such sub-category is relatively homogeneous (e.g., instances created with the same generator) and is split into a training and a test set. For each sub-category, submitted solvers will be automatically configured on the training set portion and the best configuration found will then be evaluated on the test set portion. The performances on the respective test sets (which are of equal size across sub-categories) will then be merged and used to score solvers just like in the SAT Competition. For simplicity, in this inaugural event, parallel solvers will not be supported.
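As a rough sketch of the merging and scoring step, and assuming SAT-Competition-style ranking (more test instances solved is better, ties broken by lower total runtime on solved instances; the actual scoring details are defined by the competition), the per-sub-category test results could be combined as follows.

```python
# Sketch of merging per-sub-category test results into a single ranking.
# Assumption: SAT-Competition-style ranking by number of solved instances,
# ties broken by total runtime on solved instances.
from dataclasses import dataclass

@dataclass
class RunResult:
    solver: str
    sub_category: str
    instance: str
    solved: bool
    runtime: float  # seconds

def rank_solvers(results: list[RunResult]) -> list[str]:
    """Return solver names, best first, over the merged test-set results."""
    solved: dict[str, int] = {}
    time_used: dict[str, float] = {}
    for r in results:
        solved.setdefault(r.solver, 0)
        time_used.setdefault(r.solver, 0.0)
        if r.solved:
            solved[r.solver] += 1
            time_used[r.solver] += r.runtime
    return sorted(solved, key=lambda s: (-solved[s], time_used[s]))
```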

Prizes

In each category, gold, silver, and bronze medals will be awarded to the open-source solvers whose customized versions perform best, provided that tracks receive enough entries. Specifically, a gold medal will be awarded only if there are at least three solvers participating in a track; for silver and bronze, respectively, four and five solvers will need to participate. (Closed-source solvers are welcome to participate, but are not eligible for prizes.)

Contribution of Benchmarks

We welcome the contribution of benchmarks from the community. We particularly encourage the submission of benchmark generators that allow the organizers to generate instances of an appropriate empirical hardness.

Important Dates (all 2013; coordinated to come just after the SAT Competition 2013)

30 April      Solver registration and testing period opens
30 April      Benchmark and generator submission opens
1 June        Final versions of registered solvers due
1 June        Benchmark and generator submission closes
1-30 June     Execution of the competition
9-12 July     Announcement of results at the SAT 2013 Conference

Organizers

Execution Environment and Resources

For each solver run, we will use a single core on the QDR partition of the WestGrid orcinus cluster, using Olivier Roussel's runsolver tool to limit memory to 3 GB and runtime to 300 seconds.
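The sketch below shows how a single solver run might be wrapped in runsolver to enforce these limits. The exact flag spellings vary between runsolver versions, so treat the options below as assumptions and check runsolver's help output for the installed version.

```python
# Sketch (not the official competition script): wrapping a solver run in
# runsolver to enforce the CSSC limits. Flag spellings are assumptions and
# may differ between runsolver versions.
import subprocess

def run_with_limits(solver_cmd: list[str], watcher_file: str = "watcher.out"):
    cmd = [
        "runsolver",
        "--timestamp",       # prefix each output line with a timestamp
        "-C", "300",         # CPU time limit in seconds
        "-M", "3000",        # memory limit in MiB (roughly 3 GB)
        "-w", watcher_file,  # file for runsolver's own statistics
    ] + solver_cmd
    return subprocess.run(cmd, capture_output=True, text=True)

# Example call with a hypothetical solver binary and instance:
# run_with_limits(["./mysolver", "instance.cnf", "--var-decay", "0.9"])
```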

The cluster nodes have the following specification:

Automated Algorithm Configuration

To obtain representative results for the peak performance achievable through automated algorithm configuration, we will use a combination of several state-of-the-art algorithm configuration methods. Specifically, we plan to perform at least 5 runs of the configuration procedures ParamILS and SMAC, and one run of GGA using 5 processor cores (GGA is inherently parallel). We intend to use the best configuration found by any of these runs (best with respect to training performance); test runs will only be performed for that one configuration. The time budget for each individual configuration run will be 48 CPU hours (or 50 wall clock hours, whichever is reached first).
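The selection step described above amounts to keeping only the incumbent configuration with the best training performance; a minimal sketch, with invented names and an assumed training-cost metric (e.g., penalized average runtime), is shown below.

```python
# Sketch of "use the best configuration found by any configuration run".
# Configuration values and the training-cost metric are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Incumbent:
    configurator: str              # "ParamILS", "SMAC", or "GGA"
    configuration: dict[str, str]  # parameter name -> value
    training_cost: float           # lower is better, e.g. penalized avg. runtime

def select_for_test(incumbents: list[Incumbent]) -> Incumbent:
    """Pick the incumbent with the best training performance; only this one
    is evaluated on the test set."""
    return min(incumbents, key=lambda inc: inc.training_cost)

# Example with three hypothetical configuration runs:
best = select_for_test([
    Incumbent("ParamILS", {"var-decay": "0.90"}, training_cost=41.2),
    Incumbent("SMAC",     {"var-decay": "0.85"}, training_cost=37.8),
    Incumbent("GGA",      {"var-decay": "0.95"}, training_cost=44.0),
])
print(best.configurator, best.configuration)
```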

What solver developers need to provide

Configurators operate by running a solver hundreds or thousands of times, on different instances and with different assignments to its parameters. Thus, in addition to source code (or a binary), solver developers need to provide a specification of their solver's parameters and their possible values. Since every solver has its own format for specifying parameter values on the command line (and since we don't want to force algorithm designers to support a format we define), we also require a wrapper that translates an assignment of parameter values into a command line call.
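The following is a minimal sketch of such a wrapper. The actual interface (how the configurator passes the instance, cutoff, and parameter assignment, and what the wrapper must report back) is defined in the test environment's README; the function name and the "-name=value" option syntax below are assumptions for illustration only.

```python
# Sketch of a solver wrapper: translate a parameter assignment chosen by the
# configurator into a concrete command line call. Names and option syntax are
# illustrative assumptions; see the README in the test environment for the
# actual wrapper interface.
def build_command(solver_binary: str, instance: str, cutoff: float,
                  params: dict[str, str]) -> list[str]:
    cmd = [solver_binary, instance, f"-cpu-lim={int(cutoff)}"]  # hypothetical flag
    for name, value in params.items():
        # Assumes the solver accepts "-name=value" options; adapt this line
        # to whatever syntax your solver actually uses.
        cmd.append(f"-{name}={value}")
    return cmd

# Example: build_command("./mysolver", "instance.cnf", 300,
#                        {"var-decay": "0.9", "restart-first": "150"})
# -> ["./mysolver", "instance.cnf", "-cpu-lim=300", "-var-decay=0.9", "-restart-first=150"]
```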

We provide a test environment that mirrors the directory structure we will use in the competition. This test environment contains several examples of parameterized solvers, each of them in a subdirectory of the solvers/ directory. Your submission will simply be another subdirectory of solvers/ following the interface described in the README.txt. That README also describes how to define the parameter file for your solver (using the .pcs format that ParamILS, SMAC, and GGA support), how to define your solver wrapper, and how to test and debug your wrapper and your parameterized solver. Once a sample SMAC run goes through for your solver, please simply zip up your solver subdirectory and submit it (along with directions for building your solver from source) through the CSSC Google submission form. After the competition, we will make all submissions publicly available on this website.
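For orientation, the snippet below embeds a hypothetical .pcs excerpt as a Python string; parameter names, ranges, and defaults are invented, and the README in the test environment remains the authoritative reference for the syntax accepted by ParamILS, SMAC, and GGA.

```python
# Hypothetical .pcs excerpt, shown as a Python string purely for illustration.
# All parameter names, ranges, and defaults are invented; consult the README
# in the test environment for the authoritative .pcs syntax.
EXAMPLE_PCS = """\
# categorical parameter: set of allowed values and [default]
phase-saving {none, limited, full} [full]

# integer parameter: [min, max] range, [default], trailing 'i' marks it as integer
restart-first [50, 1000] [100]i

# continuous parameter
var-decay [0.75, 0.99] [0.95]

# conditional clause: restart-first is only active when luby is off
luby {on, off} [on]
restart-first | luby in {off}
"""

# The parameter file would live next to your solver in its solvers/ subdirectory;
# the file name below is just an example.
with open("mysolver.pcs", "w") as f:
    f.write(EXAMPLE_PCS)
```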


Please send any questions, concerns or comments to cssc.organizers@gmail.com.