CPSC 536H: Empirical Algorithmics (Spring 2008)
Notes by Holger H. Hoos, University of British Columbia
---------------------------------------------------------------------------------
Module 6: Advanced Topics
---------------------------------------------------------------------------------
---
6.1 Multi-objective optimisation [covered only briefly in class]
Multi-objective optimisation (MOO) problems
(aka multi-criteria optimisation problems):
- motivation
- informal definition (as in single-objective case, but multiple objective functions, f_1, f_2, ..., f_k)
- examples: bi-criteria TSP, bi-criteria clustering, multi-criteria scheduling
Main issue (in addition to issues already discussed in Module 5):
trade-off between optimisation objectives
Def: Domination
Given an instance of an MOO problem, a solution s dominates a solution s'
iff
1) for all objectives f_i: f_i(s) is at least as good as f_i(s') and
2) there exists an f_i: f_i(s) is better than f_i(s')
Def: Pareto-optimal (PO) solutions
Given an instance of an MOO problem, a solution s is Pareto-optimal
iff no other solution dominates it;
the Pareto set is the set S of all Pareto-optimal solutions
(equivalently: the smallest set S of solutions such that
every solution s *not* in S is dominated by a solution in S).
[draw illustration]
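The two definitions above translate directly into code; the following is a minimal sketch (function names are illustrative, and minimisation of all objectives is assumed, with solutions represented as tuples of objective values):

```python
def dominates(s, t):
    """True iff s dominates t: s is at least as good in every
    objective and strictly better in at least one (minimisation)."""
    return (all(a <= b for a, b in zip(s, t))
            and any(a < b for a, b in zip(s, t)))

def pareto_set(solutions):
    """Return the Pareto set: all solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]
```

For example, pareto_set([(1, 5), (2, 2), (3, 1), (4, 4)]) drops only (4, 4), which is dominated by (2, 2); the remaining three solutions are mutually non-dominated.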
Simple approaches for empirical analysis:
- combine objectives, e.g., in the form of a weighted sum (f = sum_i=1..k a_i*f_i)
(also used for solving MOOs)
-> reduces MOO to single-objective optimisation (SOO)
sometimes possible (particularly, if objectives can be translated into money),
but in general (at least for weighted sums) there can be PO solutions that don't correspond
to global optima w.r.t. the combined objective (unsupported solutions)
[draw illustration]
[slide]
- study objectives separately
-> can use techniques from Module 5
doesn't capture trade-offs between objectives, but can be sufficient when there
is little (or no) interaction between objectives, or for multi-phase MOO algorithms that
sequentially optimise single objectives
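The unsupported-solutions caveat for weighted sums can be illustrated numerically; in the following sketch (hypothetical data, minimisation of both objectives), the point (1.2, 1.2) is Pareto-optimal but lies above the line segment between the two extreme points, so no choice of weights ever makes it the weighted-sum optimum:

```python
def weighted_sum(weights, objectives):
    return sum(w * f for w, f in zip(weights, objectives))

# Three mutually non-dominated (Pareto-optimal) points; minimisation.
points = [(0.0, 2.0), (2.0, 0.0), (1.2, 1.2)]

# Scan weight vectors (a, 1-a): collect the weighted-sum optimum for each.
winners = set()
for i in range(101):
    a = i / 100
    winners.add(min(points, key=lambda p: weighted_sum((a, 1 - a), p)))
# (1.2, 1.2) never wins -> it is an "unsupported" Pareto-optimal solution.
```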
General approach:
study individual objectives using techniques from Ch.5
+ trade-offs / joint development of solution quality over time
(using generalisations of concepts / techniques for SOO algorithms)
Many MOO algorithms are randomised,
in particular, generalisations of SLS methods for SOO problems
Empirical analysis of randomised MOO algorithms:
- natural generalisation of bivariate RTD for SOO algorithms:
=> 3-variate distribution (= 4-dim surface)
- cuts (marginal distr):
- bound run-time -> generalisation of SQDs (= Fonseca's attainment function)
- bound one of the two solution qualities -> analogue of bivariate RTD for SOO SLS algorithms
- fix percentile -> generalisation of SQT
= development of median sq-curve over time
(note: median sq = cut through 3-dim SQD surface)
all of these are 3-dim surfaces
- RTD-based comparison of randomised MOO algorithms:
- comparison of 3-dim cuts from above,
e.g., SQT-surfaces (for median sq)
domination = surfaces are disjoint (no intersection)
local domination = area in which one surface is above/below the other
(can project boundaries on ground plane of 3-surface plot
and label areas)
partition sq_1/sq_2 plane into areas of local domination
(analogue for randomised SOO algorithms: sq and/or rt intervals of local domination)
similar approach for SQD surfaces (for same, fixed time)
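The empirical attainment function mentioned above (for a fixed run-time bound) could be estimated roughly as follows — a minimal sketch, where `fronts` is a hypothetical list containing, for each independent run, the set of objective vectors attained within the time bound (minimisation assumed):

```python
def attains(front, z):
    """True iff some attained point weakly dominates goal z (minimisation)."""
    return any(all(a <= b for a, b in zip(s, z)) for s in front)

def empirical_attainment(fronts, z):
    """Empirical attainment function at goal point z:
    fraction of runs that reached quality z within the time bound."""
    return sum(attains(front, z) for front in fronts) / len(fronts)
```

Evaluating this over a grid of goal points z yields one of the 3-dim surfaces discussed above; repeating it for several time bounds recovers the full trivariate picture.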
---
6.2 Experimental design [not covered in class]
Intro, basic issues/approaches, space-filling designs, Latin hypercube designs
[dace-ch5 slides]
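The key property of a Latin hypercube design — each of n strata is sampled exactly once in every dimension, so all one-dimensional projections are evenly spread — can be sketched as follows (function name and interface are illustrative):

```python
import random

def latin_hypercube(n, k, seed=0):
    """n sample points in [0,1)^k: each dimension is divided into n
    equal strata, and each stratum is hit exactly once per dimension."""
    rng = random.Random(seed)
    # One independent random permutation of the strata 0..n-1 per dimension.
    perms = [rng.sample(range(n), n) for _ in range(k)]
    return [tuple((perms[d][i] + rng.random()) / n for d in range(k))
            for i in range(n)]
```

This achieves good one-dimensional coverage with only n design points, regardless of the dimension k — in contrast to a full factorial grid, which needs n^k points.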
of particular interest to empirical algorithmics (+ practice):
- effects of parameters on algorithm performance (parameter response)
-> parameter optimisation, peak performance / robustness analysis
some observations from practice:
- responses to single parameters are often unimodal (single optimum),
frequently well approximated by convex response curves
(similarly for some types of problem instance features)
- this knowledge can be used to design experiments / find optimal parameters
more efficiently (bracketing - some issues with randomised algorithms)
- but: parameter effects are typically not independent!
-> cannot analyse / optimise each parameter separately
- parameter response typically depends on features of the given problem instance
(these are not always known / efficiently computable)
- it is typically best to keep the number of algorithm parameters as small as possible
(Occam's razor in design, KISS)
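The bracketing idea mentioned above can be sketched as golden-section search over a unimodal single-parameter response (names are illustrative; for randomised algorithms the measured response is noisy, so in practice each evaluation would average over multiple runs):

```python
import math

def golden_section(f, lo, hi, tol=1e-6):
    """Minimise a unimodal 1-d parameter response f on [lo, hi]
    by repeatedly shrinking a bracket around the optimum."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, approx. 0.618
    a, b = lo, hi
    while b - a > tol:
        c = b - invphi * (b - a)  # interior probe points
        d = a + invphi * (b - a)
        if f(c) < f(d):           # optimum lies in [a, d]
            b = d
        else:                     # optimum lies in [c, b]
            a = c
    return (a + b) / 2
```

(For clarity this re-evaluates both probes per iteration; the classical variant reuses one evaluation per step.)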
advanced approach to characterising parameter response
- model parameter response based on experimental data
(issue: do this with minimal number of data points -> active learning)
-> regression techniques from stats (basis function regression, Gaussian process regression, ...)
- can be used for parameter optimisation
- can be used for peak performance / robustness analysis
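As a minimal illustration of the basis-function regression route (Gaussian-process regression follows the same pattern with a richer model class), the following sketch fits a polynomial model to hypothetical noisy (parameter, performance) observations and reads off an estimated optimal parameter value:

```python
import numpy as np

def fit_response(params, perf, degree=2):
    """Least-squares fit of a polynomial basis-function model
    to observed (parameter value, performance) data."""
    X = np.vander(params, degree + 1)          # basis: p^degree, ..., p, 1
    w, *_ = np.linalg.lstsq(X, perf, rcond=None)
    return lambda p: np.polyval(w, p)

# Hypothetical data: noisy quadratic response with optimum near 0.5.
rng = np.random.default_rng(0)
params = np.linspace(0.0, 1.0, 20)
perf = (params - 0.5) ** 2 + 0.01 * rng.standard_normal(20)

model = fit_response(params, perf)
best = params[np.argmin(model(params))]        # estimated optimal parameter
```

With active learning, the next parameter setting to evaluate would be chosen where the current model is most informative, rather than on a fixed grid as here.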
Further literature:
- DACE book, Ch.5+6
- multi-stage approximation [Bernardo et al., 1992]
- surrogate management framework [Booker et al., 1999]
- forthcoming work by Hutter, Hoos, Leyton-Brown
[slides]
---
Other topics:
- Empirical analysis of real-time algorithms
- Algorithm portfolios
- Self-tuning mechanisms, meta-parameters
- Interaction with a non-deterministic environment (humans, internet, ...)
...
---
learning goals:
- be able to explain the following concepts: multi-objective optimisation problem,
domination (between solutions), Pareto-optimal solution
- be able to explain how MOO problems can be reduced to single objective optimisation problems
and under which circumstances this is appropriate
- be able to outline the basic approach used for studying the empirical performance
of MOO algorithms, in particular with respect to trade-offs between the optimisation objectives,
run-time and probability (in the case of randomised algorithms)
- be able to outline how the concepts of bivariate RTDs generalise to MOO problems with 2 objectives
- be able to explain the fundamental goals of experimental design
- be able to explain the basic issues addressed in experimental design and how these
relate to empirical algorithmics
- be able to explain how experimental design techniques relate to parameter optimisation
- be able to explain the concept of a Latin-hypercube design and the motivation behind it
- be able to outline at least one general approach to modelling parameter response
- be able to outline engineering implications of the complexity of experimental design
/ parameter optimisation for algorithms with high-dimensional parameter spaces