The key idea behind Hydra is that a new candidate algorithm should be preferred exactly to the extent that it improves upon the performance of a (slightly idealized) portfolio. Hydra is thus implemented by changing the performance measure given to the algorithm configuration procedure. A candidate algorithm is scored with its actual performance in cases where it is better than the existing portfolio, but with the portfolio's performance in cases where it is worse. Thus an algorithm is not penalized for bad performance on instances for which it should not be selected anyway, but is rewarded only to the extent that it outperforms the portfolio. We note that our method implicitly assumes that a new algorithm will be selected if and only if it would outperform the portfolio's previous selection.
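The scoring rule above can be sketched as follows. This is an illustrative fragment, not the authors' implementation; `perf` is a hypothetical callback returning a cost to be minimized (e.g., runtime), and `portfolio_perf` holds the current portfolio's cost on each instance.

```python
# Hydra's modified performance measure: a candidate is credited with its own
# cost where it beats the portfolio, and with the portfolio's cost where it
# does not (lower is better).

def modified_metric(perf, portfolio_perf, solver, instances):
    """Score `solver` on each instance as min(own cost, portfolio cost)."""
    return {
        inst: min(perf(solver, inst), portfolio_perf[inst])
        for inst in instances
    }
```

Under this measure, a candidate is never penalized on instances where the portfolio would be selected instead, and gains only on instances where it improves on the portfolio.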

As shown in the pseudo-code, Hydra takes five inputs: a parameterized solver $s$, a set of training problem instances $N$, an algorithm configuration procedure $AC$, a performance metric $m$ to be optimized, and a procedure for building portfolio-based algorithm selectors $PB$.

In its first iteration, Hydra uses configurator $AC$ to produce a configuration of $s$, dubbed $s_1$, that is optimized on training set $N$ according to performance metric $m$. Solver $s_1$ is then run on all instances of $N$ in order to collect data that can eventually be input to $PB$; runs performed during the earlier configuration process can be cached and reused as appropriate. We define portfolio $P_1$ as the portfolio that always selects $s_1$, and solver set $S_1$ as $\{s_1\}$.

Then, in each subsequent iteration $i$, Hydra defines a modified performance metric $m_i$ as the better of the performance of the solver being assessed and the performance of the current portfolio $P_{i-1}$, both measured according to $m$. The configurator $AC$ is run to find a configuration $s_i$ of $s$ that optimizes $m_i$ on the entire training set $N$. As before, the resulting solver is evaluated on the entire set $N$ and then added to the solver set $S_i$. We then use $PB$ to construct a new portfolio $P_i$ from the given set of solvers. We note that in each iteration of Hydra, the size of the candidate solver set $S_i$ grows by one; however, $PB$ may drop solvers that do not contribute to the performance of portfolio $P_i$ (this is done, e.g., in SATzilla). Hydra can be terminated using various criteria, such as a user-specified bound on the number of iterations and/or a total computation-time budget.
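The overall procedure can be sketched as a short loop. This is a toy rendering under stated assumptions, not the released implementation: `AC` (the algorithm configurator), `PB` (the portfolio builder), and the metric `m` are hypothetical callbacks standing in for real components such as a configurator like ParamILS and a selector builder like SATzilla; costs are minimized.

```python
# Illustrative sketch of the Hydra loop: configure, evaluate, rebuild the
# portfolio, then reconfigure against the portfolio-adjusted metric m_i.

def hydra(s, N, AC, m, PB, num_iterations):
    # Iteration 1: configure s on the plain metric m.
    s1 = AC(s, N, m)
    S = [s1]
    portfolio = PB(S, N)          # P_1: always selects s_1

    for i in range(2, num_iterations + 1):
        # m_i: the better of the candidate's cost and the current
        # portfolio's cost on each instance.
        def m_i(solver, inst, portfolio=portfolio):
            return min(m(solver, inst), portfolio.cost(inst))

        s_i = AC(s, N, m_i)       # configure against the modified metric
        S.append(s_i)             # S_i grows by one each iteration;
        portfolio = PB(S, N)      # PB may drop non-contributing solvers

    return portfolio
```

In a real deployment, the evaluation runs of each $s_i$ on $N$ would also be collected here as training data for $PB$; the sketch folds that step into the `PB` callback for brevity.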

The algorithm configuration procedure $AC$ used within Hydra must be able to deal efficiently with configurations having equal performance on some or all instances, because such configurations can be expected to be encountered frequently. (For example, all configurations dominated by portfolio $P_{i-1}$ will have equal performance under performance metric $m_i$.) It is also possible to exploit $m_i$ for computational gain when optimizing runtime. Specifically, a run of $s$ on some instance $k$ in $N$ can be terminated during configuration once its runtime reaches portfolio $P_{i-1}$'s runtime on $k$.
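This shortcut can be sketched as follows; `run_solver` is a hypothetical function that runs a solver with a timeout and returns the runtime used (capped at the timeout).

```python
# Sketch of the runtime shortcut: when optimizing runtime under m_i, a
# candidate's run on instance k can be cut off at the previous portfolio's
# runtime on k, because any run that lasts longer scores identically
# under m_i anyway.

def capped_score(run_solver, solver, inst, portfolio_runtime):
    cap = portfolio_runtime[inst]                 # P_{i-1}'s runtime on inst
    t = run_solver(solver, inst, timeout=cap)
    # Under m_i, a capped run and a finished-but-slower run score the same.
    return min(t, cap)
```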

*Source code (download)*

Please send your feedback to xulin730@cs.ubc.ca