Empirical analysis tools for the study of algorithm design scenarios
|Oct 25, 2010||Added the actual samples of each algorithm's configuration space|
|June 8, 2009||Added link to the parameters we considered for each algorithm|
|May 1, 2009||First version of this page online.|
We propose empirical analysis approaches for studying the tradeoffs that arise when comparing competing algorithm designs. These approaches provide insight into performance variation both across candidate algorithms and across problem instances. They can also identify the best tradeoff between evaluating a larger number of candidate algorithm designs, evaluating each design on a larger number of problem instances, and allocating more time to each algorithm run. We applied these tools to the rich algorithm design spaces offered by three highly parameterized, state-of-the-art algorithms for satisfiability and mixed integer programming, considering six different distributions of problem instances. The resulting algorithm design scenarios differ in many ways, with important consequences for both automatic and manual algorithm design, and we expect our methods and findings to lead to tangible improvements in algorithm design methods.
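To make the budget tradeoff concrete, here is a minimal illustrative sketch (the function name and all numbers are ours, not taken from the study): under a fixed CPU-time budget, the number of candidate designs, the number of instances, and the per-run cutoff trade off multiplicatively, so tightening the cutoff lets the same budget cover many more designs.

```python
# Illustrative sketch, not the study's actual methodology:
# with a fixed total budget, designs x instances x cutoff is bounded,
# assuming the worst case where every run uses its full cutoff.
def max_designs(budget_seconds, n_instances, cutoff_seconds):
    """Largest number of candidate designs fully evaluable
    (every design run on every instance) within the budget."""
    return budget_seconds // (n_instances * cutoff_seconds)

# One CPU-day (86400 s), 100 instances:
print(max_designs(86400, 100, 60))  # 60 s cutoff -> 14 designs
print(max_designs(86400, 100, 10))  # 10 s cutoff -> 86 designs
```

The example shows why the choice among these three dimensions matters: a 6x shorter cutoff buys roughly 6x more candidate evaluations, at the cost of censoring longer runs.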
Please send any questions, concerns or comments to Frank Hutter