Artificial Intelligence 2E: foundations of computational agents

Iterative best improvement is a local search algorithm that selects a successor of the current assignment that most improves some evaluation function. If there are several possible successors that most improve the evaluation function, one is chosen at random. When the aim is to minimize, this algorithm is called greedy descent. When the aim is to maximize a function, this is called hill climbing or greedy ascent. We only consider minimization; to maximize a quantity, you can minimize its negation.

Iterative best improvement requires a way to evaluate each total assignment. For constraint satisfaction problems, a common evaluation function is the number of constraints that are violated. A violated constraint is called a conflict. With the evaluation function being the number of conflicts, a solution is a total assignment with an evaluation of zero. Sometimes this evaluation function is refined by weighting some constraints more than others.
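The conflict-count evaluation and one step of iterative best improvement can be sketched as follows. This is an illustrative sketch, not the book's code: the representation (dict-based assignments, constraints as predicates over a whole assignment) and all names are assumptions.

```python
import random

def conflicts(assignment, constraints):
    """Evaluation function for a CSP: the number of violated constraints.

    `assignment` maps each variable to a value; each constraint is a
    predicate over the whole assignment.
    """
    return sum(1 for c in constraints if not c(assignment))

def greedy_descent_step(assignment, domains, constraints):
    """One step of iterative best improvement (greedy descent): move to a
    successor -- an assignment that changes one variable -- with the fewest
    conflicts, breaking ties at random."""
    successors = [dict(assignment, **{var: val})
                  for var, dom in domains.items()
                  for val in dom if val != assignment[var]]
    scored = [(conflicts(s, constraints), s) for s in successors]
    best = min(score for score, _ in scored)
    return random.choice([s for score, s in scored if score == best])
```

A full search would repeat `greedy_descent_step` until the evaluation reaches 0 or some step limit is exceeded.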

A local optimum is an assignment such that no possible successor improves the evaluation function. This is also called a local minimum in greedy descent, or a local maximum in greedy ascent. A global optimum is an assignment that has the best value out of all assignments. All global optima are local optima, but there can be many local optima that are not a global optimum.
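Whether an assignment is a local minimum can be checked directly: with successors defined as single-variable changes, an assignment is a local minimum when no such change strictly lowers the evaluation. A minimal sketch, assuming dict-based assignments and an arbitrary evaluation function (names are illustrative):

```python
def is_local_minimum(assignment, domains, evaluate):
    """True iff no successor (an assignment differing in exactly one
    variable) has a strictly lower evaluation than `assignment`."""
    current = evaluate(assignment)
    for var, dom in domains.items():
        for val in dom:
            if val != assignment[var]:
                successor = dict(assignment, **{var: val})
                if evaluate(successor) < current:
                    return False
    return True
```

For example, with variables X, Y over {1, 2, 3} and the single constraint X < Y, the assignment X=3, Y=1 is a local minimum (every single-variable change still violates the constraint) but not a global one, since X=1, Y=2 has evaluation 0.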

If the evaluation function is the number of conflicts, a satisfiable CSP has a global optimum with an evaluation of 0, and an unsatisfiable CSP has a global optimum with a value above 0. If the search reaches a local minimum with a value above 0, you do not know whether it is a global minimum (which would imply the CSP is unsatisfiable) or not.

Consider the delivery scheduling in Example 4.9. Suppose greedy descent starts with the assignment $A=2$, $B=2$, $C=3$, $D=2$, $E=1$. This assignment has an evaluation of 3, because it violates $A \neq B$, $B \neq D$, and $C < D$. One possible successor with the minimal evaluation has $B=4$, with an evaluation of 1, because only $C < D$ is unsatisfied. This assignment is selected. This is a local minimum. One possible successor with the fewest conflicts can be obtained by changing $D$ to $4$, which has an evaluation of 2. It can then change $A$ to $4$, with an evaluation of 2, and then change $B$ to $2$, with an evaluation of 0, and a solution is found.

The following gives a trace of the assignments through the walk:

| $A$ | $B$ | $C$ | $D$ | $E$ | evaluation |
|---|---|---|---|---|---|
| 2 | 2 | 3 | 2 | 1 | 3 |
| 2 | 4 | 3 | 2 | 1 | 1 |
| 2 | 4 | 3 | 4 | 1 | 2 |
| 4 | 4 | 3 | 4 | 1 | 2 |
| 4 | 2 | 3 | 4 | 1 | 0 |
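The trace can be replayed in code. The constraint set below is an assumption: it is reconstructed to be consistent with the violations and evaluations reported above, not quoted from Example 4.9.

```python
# Constraints assumed for the Example 4.9 scheduling problem; each is a
# predicate over an assignment (a dict from variable name to value).
constraints = [
    lambda a: a["B"] != 3,
    lambda a: a["C"] != 2,
    lambda a: a["A"] != a["B"],
    lambda a: a["B"] != a["C"],
    lambda a: a["C"] < a["D"],
    lambda a: a["A"] == a["D"],
    lambda a: a["E"] < a["A"],
    lambda a: a["E"] < a["B"],
    lambda a: a["E"] < a["C"],
    lambda a: a["E"] < a["D"],
    lambda a: a["B"] != a["D"],
]

def conflicts(a):
    """Number of violated constraints for assignment `a`."""
    return sum(1 for c in constraints if not c(a))

# The five assignments from the trace, in order.
trace = [
    {"A": 2, "B": 2, "C": 3, "D": 2, "E": 1},
    {"A": 2, "B": 4, "C": 3, "D": 2, "E": 1},
    {"A": 2, "B": 4, "C": 3, "D": 4, "E": 1},
    {"A": 4, "B": 4, "C": 3, "D": 4, "E": 1},
    {"A": 4, "B": 2, "C": 3, "D": 4, "E": 1},
]

print([conflicts(a) for a in trace])  # -> [3, 1, 2, 2, 0]
```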

Different initializations, or different choices when multiple assignments have the same evaluation, give different sequences of assignments to the variables and possibly different results.

Iterative best improvement moves to the best successor assignment even when that successor is equal to or worse than the current assignment. This means that if two or more assignments are possible successors of each other and are all local, but not global, optima, the algorithm will keep moving between these assignments and never reach a global optimum. Thus, this algorithm is not complete.
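This failure mode can be demonstrated on a contrived evaluation function, invented here for illustration: four assignments form a plateau of local minima that are successors of one another, so best improvement wanders among them forever and never reaches the global minimum.

```python
import random

def evaluate(x, y):
    """Contrived evaluation over x, y in {0, 1, 2}: a plateau of four
    local minima (value 1) surrounds a single global minimum at (2, 2)."""
    if (x, y) == (2, 2):
        return 0
    if x <= 1 and y <= 1:
        return 1
    return 3

def best_improvement_step(x, y):
    """Move to the best successor (change one variable), even if it is
    no better than the current assignment; break ties at random."""
    succs = [(x2, y) for x2 in range(3) if x2 != x] + \
            [(x, y2) for y2 in range(3) if y2 != y]
    best = min(evaluate(*s) for s in succs)
    return random.choice([s for s in succs if evaluate(*s) == best])

state = (0, 0)
for _ in range(1000):
    state = best_improvement_step(*state)
# The walk stays on the plateau of value-1 local minima: every successor
# that leaves the plateau has value 3, so (2, 2) is never reached even
# though evaluate(2, 2) == 0.
```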