
Chapter 9. Local search

9.1. Overview
9.2. Hill climbing (simple local search)
9.2.1. Algorithm description
9.3. Tabu search
9.3.1. Algorithm description
9.4. Simulated annealing
9.4.1. Algorithm description
9.5. Late acceptance
9.5.1. Algorithm description
9.6. About neighborhoods, moves and steps
9.6.1. Move generation tips
9.6.2. A step
9.6.3. Getting stuck in local optima
9.7. Deciding the next step
9.7.1. Acceptor
9.7.2. Forager
9.8. Using a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor

Local search starts from an initial solution and evolves that single solution into an increasingly better solution. It uses a single search path of solutions, not a search tree. At each solution in this path, it evaluates a number of moves on that solution and applies the most suitable move to take the step to the next solution. It does that for a high number of iterations, until it's terminated (usually because its time has run out).

Local search acts a lot like a human planner: it uses a single search path and moves facts around to find a good feasible solution. Therefore it's pretty natural to implement.

Local search often needs to start from an initialized solution, so it's recommended to configure a construction heuristic solver phase before it.
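For example (a minimal sketch; the FIRST_FIT construction heuristic type is just one possible choice):

    <solver>
        ...
        <constructionHeuristic>
            <constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
        </constructionHeuristic>
        <localSearch>
            ...
        </localSearch>
    </solver>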

A step is the winning move. The local search solver tries every move on the current solution and picks the best accepted move as the step:


Because the move B0 to B3 has the highest score (-3), it is picked as the next step. Notice that C0 to C3 (not shown) could also have been picked because it also has the score -3. If multiple moves have the same highest score, one is picked randomly, in this case B0 to B3.

The step is made, and from that new solution the local search solver tries all the possible moves again to decide the next step after that. It continually does this in a loop, and we get something like this:


Notice that the local search solver doesn't use a search tree, but a search path. The search path is highlighted by the green arrows. At each step it tries all possible moves, but unless it's the step, it doesn't investigate that solution further. This is one of the reasons why local search is very scalable.

As you can see, the local search solver solves the 4 queens problem by starting from the starting solution and making the following steps sequentially:

  1. B0 to B3

  2. D0 to B2

  3. A0 to B1

If we turn on debug logging for the category org.optaplanner, then those steps are shown in the log:

INFO  Solving started: time spend (0), score (-6), new best score (-6), random seed (0).
DEBUG     Step index (0), time spend (20), score (-3), new best score (-3), accepted/selected move count (12/12) for picked step (col1@row0 => row3).
DEBUG     Step index (1), time spend (31), score (-1), new best score (-1), accepted/selected move count (12/12) for picked step (col0@row0 => row1).
DEBUG     Step index (2), time spend (40), score (0), new best score (0), accepted/selected move count (12/12) for picked step (col3@row0 => row2).
INFO  Phase (0) localSearch ended: step total (3), time spend (41), best score (0).
INFO  Solving ended: time spend (41), best score (0), average calculate count per second (1780).

Notice that the logging uses the toString() method of our Move implementation: col1@row0 => row3.
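To get this output, set the org.optaplanner logger to debug level in your logging configuration. For example, if you use Logback, add this to logback.xml:

    <configuration>
        ...
        <logger name="org.optaplanner" level="debug"/>
    </configuration>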

The local search solver solves the 4 queens problem in 3 steps by evaluating only 37 possible solutions (3 steps with 12 moves each, plus 1 starting solution), which is only a fraction of all 256 (= 4^4) possible solutions. It solves 16 queens in 31 steps, by evaluating only 7441 out of 18446744073709551616 (= 16^16) possible solutions. Note: with construction heuristics it's even a lot more efficient.

The local search solver decides the next step with the aid of 3 configurable components:

  * A MoveSelector which selects the possible moves of the current solution.

  * An Acceptor which filters out unacceptable moves.

  * A Forager which gathers accepted moves and picks the next step from them.

In the above example the selector generated the moves shown with the blue lines, the acceptor accepted all of them and the forager picked the move B0 to B3.
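In the solver configuration, these 3 components are configured inside the localSearch element. For example (a sketch; the change move selector and the tabu size shown here are just illustrative choices):

    <localSearch>
        <changeMoveSelector/>
        <acceptor>
            <moveTabuSize>7</moveTabuSize>
        </acceptor>
        <forager>
            <minimalAcceptedSelection>1000</minimalAcceptedSelection>
        </forager>
    </localSearch>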

If we turn on trace logging for the category org.optaplanner, then the decision making is shown in the log:

INFO  Solver started: time spend (0), score (-6), new best score (-6), random seed (0).
TRACE         Ignoring not doable move (col0@row0 => row0).
TRACE         Move index (1), score (-4), accepted (true) for move (col0@row0 => row1).
TRACE         Move index (2), score (-4), accepted (true) for move (col0@row0 => row2).
TRACE         Move index (3), score (-4), accepted (true) for move (col0@row0 => row3).
...
TRACE         Move index (6), score (-3), accepted (true) for move (col1@row0 => row3).
...
TRACE         Move index (9), score (-3), accepted (true) for move (col2@row0 => row3).
...
TRACE         Move index (12), score (-4), accepted (true) for move (col3@row0 => row3).
DEBUG     Step index (0), time spend (6), score (-3), new best score (-3), accepted/selected move count (12/12) for picked step (col1@row0 => row3).
...

Because the current solution can degrade (especially in tabu search and simulated annealing), the Solver remembers the best solution it has encountered through the entire search path. Each time the current solution is better than the last best solution, the current solution is cloned and referenced as the new best solution.

An acceptor is used (together with a forager) to activate tabu search, simulated annealing, great deluge, ... For each move, it checks whether that move is accepted or not.

You can implement your own Acceptor, although the built-in acceptors should suffice for most needs. You can also combine multiple acceptors.

When tabu search takes steps, it creates tabus. It does not accept a move as the next step if that move breaks a tabu. OptaPlanner implements several tabu types:

  * Solution tabu makes recently visited solutions tabu. It does not accept a move that leads to one of those solutions.

  * Move tabu makes recent steps tabu. It does not accept a move equal to one of those steps.

  * Undo move tabu makes the undo move of recent steps tabu.

  * Planning entity tabu makes the planning entities of recent steps tabu. For example, in N queens it makes the recently moved queens tabu.

  * Planning value tabu makes the planning values of recent steps tabu. For example, in N queens it makes the recently moved-to rows tabu.

You can even combine tabu types:


    <acceptor>
        <solutionTabuSize>1000</solutionTabuSize>
        <moveTabuSize>7</moveTabuSize>
    </acceptor>

If you pick a too small tabu size, your solver can still get stuck in a local optimum. On the other hand, with the exception of solution tabu, if you pick a too large tabu size, your solver can get stuck by bouncing off the walls. Use the benchmarker to fine-tune your configuration. Experiments teach us that it is generally best to use a prime number for the move tabu, undo move tabu, entity tabu or value tabu size.

A tabu search acceptor should be combined with a high subset selection in the forager, such as a minimalAcceptedSelection of 1000.
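For example (a sketch; the entity tabu size is just an illustrative choice):

    <acceptor>
        <planningEntityTabuSize>7</planningEntityTabuSize>
    </acceptor>
    <forager>
        <minimalAcceptedSelection>1000</minimalAcceptedSelection>
    </forager>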

Simulated annealing does not always pick the move with the highest score, nor does it evaluate many moves per step, at least initially. It also gives non-improving moves a chance to be picked, depending on their score and the time gradient of the Termination. In the end, it gradually turns into a hill climber, only accepting improving moves.

In many use cases, simulated annealing surpasses tabu search. By changing a few lines of configuration, you can easily switch from tabu search to simulated annealing and back.

Start with a simulatedAnnealingStartingTemperature set to the maximum score delta a single move can cause. Use the Benchmarker to tweak the value.


    <acceptor>
        <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
    </acceptor>
    <forager>
        <minimalAcceptedSelection>4</minimalAcceptedSelection>
    </forager>

A simulated annealing acceptor should be combined with a low subset selection. The classic algorithm uses a minimalAcceptedSelection of 1, but usually 4 performs better.

You can even combine it with a tabu acceptor at the same time. Use a lower tabu size than in a pure tabu search configuration.


    <acceptor>
        <simulatedAnnealingStartingTemperature>10.0</simulatedAnnealingStartingTemperature>
        <planningEntityTabuSize>5</planningEntityTabuSize>
    </acceptor>
    <forager>
        <minimalAcceptedSelection>4</minimalAcceptedSelection>
    </forager>

This differs from phasing, another powerful technique, where simulated annealing is used first, followed by tabu search.
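For example, a phasing configuration could look like this (a sketch; the first phase needs its own termination, which is elided here because it's use case specific):

    <localSearch>
        <termination>
            <!-- terminate this phase early, for example on a step count limit -->
        </termination>
        <acceptor>
            <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
        </acceptor>
        <forager>
            <minimalAcceptedSelection>4</minimalAcceptedSelection>
        </forager>
    </localSearch>
    <localSearch>
        <acceptor>
            <planningEntityTabuSize>7</planningEntityTabuSize>
        </acceptor>
        <forager>
            <minimalAcceptedSelection>1000</minimalAcceptedSelection>
        </forager>
    </localSearch>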

A forager gathers all accepted moves and picks the move which is the next step. Normally it picks the accepted move with the highest score. If several accepted moves have the highest score, one is picked randomly.

You can implement your own Forager, although the built-in forager should suffice for most needs.

You can plug in a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor by extending the abstract class and also the related *Config class.

For example, to use a custom MoveSelector, extend the AbstractMoveSelector class, extend the MoveSelectorConfig class and configure it in the solver configuration.
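Similarly, a custom Termination could be wired in like this (a sketch; MyCustomTermination is a hypothetical class name, and the terminationClass element is an assumption that may differ per version):

    <termination>
        <terminationClass>org.example.MyCustomTermination</terminationClass>
    </termination>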

If you build a better implementation that's not domain specific, consider contributing it back as a pull request on GitHub: we'll optimize it and take it along in future refactors.