
Chapter 9. Local search

9.1. Overview
9.2. Local Search concepts
9.2.1. Taking steps
9.2.2. Deciding the next step
9.2.3. Acceptor
9.2.4. Forager
9.3. Hill Climbing (Simple Local Search)
9.3.1. Algorithm description
9.3.2. Getting stuck in local optima
9.3.3. Configuration
9.4. Tabu Search
9.4.1. Algorithm description
9.4.2. Configuration
9.5. Simulated Annealing
9.5.1. Algorithm description
9.5.2. Configuration
9.6. Late Acceptance
9.6.1. Algorithm description
9.6.2. Configuration
9.7. Step counting hill climbing
9.7.1. Algorithm description
9.7.2. Configuration
9.8. Late Simulated Annealing (experimental)
9.8.1. Algorithm description
9.8.2. Configuration
9.9. Using a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor

Local Search starts from an initial solution and evolves that single solution into a mostly better and better solution. It uses a single search path of solutions, not a search tree. At each solution in this path it evaluates a number of moves on the solution and applies the most suitable move to take the step to the next solution. It does that for a high number of iterations until it's terminated (usually because its time has run out).

Local Search acts a lot like a human planner: it uses a single search path and moves facts around to find a good feasible solution. Therefore it's pretty natural to implement.

Local Search often needs to start from an initialized solution, so it's usually required to configure a Construction Heuristic solver phase before it.
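
For example, a minimal sketch of a solver configuration that runs a Construction Heuristic phase before the Local Search phase (FIRST_FIT is just one of the possible construction heuristic types):


  <solver>
    ...
    <constructionHeuristic>
      <constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
    </constructionHeuristic>
    <localSearch>
      ...
    </localSearch>
  </solver>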

A step is the winning Move. The local search solver tries every move on the current solution and picks the best accepted move as the step:


Because the move B0 to B3 has the highest score (-3), it is picked as the next step. If multiple moves have the same highest score, one of them is picked randomly; in this case C0 to C3 (not shown), which also scores -3, could have been picked instead.

The step is applied to the solution. From that new solution, the local search solver tries every move again to decide the next step. It continues doing this in a loop, and we get something like this:


Notice that the local search solver doesn't use a search tree, but a search path. The search path is highlighted by the green arrows. At each step it tries all possible moves, but unless a move is picked as the step, its solution isn't investigated further. This is one of the reasons why local search is very scalable.

As you can see, Local Search solves the 4 queens problem by starting from the starting solution and making the following steps sequentially:

  1. B0 to B3

  2. D0 to B2

  3. A0 to B1

If we turn on debug logging for the category org.optaplanner, then those steps are shown in the log:

INFO  Solving started: time spend (0), score (-6), new best score (-6), random seed (0).
DEBUG     Step index (0), time spend (20), score (-3), new best score (-3), accepted/selected move count (12/12) for picked step (col1@row0 => row3).
DEBUG     Step index (1), time spend (31), score (-1), new best score (-1), accepted/selected move count (12/12) for picked step (col0@row0 => row1).
DEBUG     Step index (2), time spend (40), score (0), new best score (0), accepted/selected move count (12/12) for picked step (col3@row0 => row2).
INFO  Phase (0) localSearch ended: step total (3), time spend (41), best score (0).
INFO  Solving ended: time spend (41), best score (0), average calculate count per second (1780).

Notice that the logging uses the toString() method of the Move implementation: col1@row0 => row3.
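
With Logback as the logging backend, for example, that debug level can be turned on with a logger element in logback.xml (a minimal sketch; adapt it to your own logging setup):


  <configuration>
    ...
    <logger name="org.optaplanner" level="debug"/>
  </configuration>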

A naive Local Search configuration solves the 4 queens problem in 3 steps, by evaluating only 37 possible solutions (3 steps with 12 moves each + 1 starting solution), which is only a fraction of all 256 possible solutions. It solves 16 queens in 31 steps, by evaluating only 7441 out of 18446744073709551616 possible solutions. Note: with Construction Heuristics it's even a lot more efficient.

The local search solver decides the next step with the aid of 3 configurable components: a MoveSelector which selects the possible moves of the current solution, an Acceptor which accepts or rejects each of those moves, and a Forager which gathers the accepted moves and picks the next step out of them.

The solver phase configuration looks like this:


  <localSearch>
    <unionMoveSelector>
      ...
    </unionMoveSelector>
    <acceptor>
      ...
    </acceptor>
    <forager>
      ...
    </forager>
  </localSearch>

In the example below, the MoveSelector generated the moves shown with the blue lines, the Acceptor accepted all of them and the Forager picked the move B0 to B3.

Turn on trace logging to show the decision making in the log:

INFO  Solver started: time spend (0), score (-6), new best score (-6), random seed (0).
TRACE         Move index (0) not doable, ignoring move (col0@row0 => row0).
TRACE         Move index (1), score (-4), accepted (true) for move (col0@row0 => row1).
TRACE         Move index (2), score (-4), accepted (true) for move (col0@row0 => row2).
TRACE         Move index (3), score (-4), accepted (true) for move (col0@row0 => row3).
...
TRACE         Move index (6), score (-3), accepted (true) for move (col1@row0 => row3).
...
TRACE         Move index (9), score (-3), accepted (true) for move (col2@row0 => row3).
...
TRACE         Move index (12), score (-4), accepted (true) for move (col3@row0 => row3).
DEBUG     Step index (0), time spend (6), score (-3), new best score (-3), accepted/selected move count (12/12) for picked step (col1@row0 => row3).
...

Because the current solution can degrade from step to step (for example in Tabu Search), the Solver remembers the best solution it has encountered over the entire search path. Each time the current solution is better than the last best solution, the current solution is cloned and referenced as the new best solution.

A Forager gathers all accepted moves and picks the move which is the next step. Normally it picks the accepted move with the highest score. If several accepted moves have the highest score, one is picked randomly.

You can implement your own Forager, but the built-in forager should suffice for most needs.
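
For example, a forager sketch that evaluates at most 1000 accepted moves per step before picking the step (1000 is just an illustrative limit):


    <forager>
      <acceptedCountLimit>1000</acceptedCountLimit>
    </forager>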

When Tabu Search takes steps it creates one or more tabus. For a number of steps, it does not accept a move if that move breaks a tabu. That number of steps is the tabu size.


  <localSearch>
    ...
    <acceptor>
      <entityTabuSize>7</entityTabuSize>
    </acceptor>
    <forager>
      <acceptedCountLimit>1000</acceptedCountLimit>
    </forager>
  </localSearch>

OptaPlanner implements several tabu types, including entity tabu (entityTabuSize), value tabu (valueTabuSize) and solution tabu.

You can even combine tabu types:


    <acceptor>
      <entityTabuSize>7</entityTabuSize>
      <valueTabuSize>3</valueTabuSize>
    </acceptor>

If you pick a tabu size that's too small, your solver can still get stuck in a local optimum. On the other hand, with the exception of solution tabu, if you pick a tabu size that's too large, your solver can get stuck by bouncing off the walls. Use the Benchmarker to fine-tune your configuration.

Simulated Annealing does not always pick the move with the highest score, nor does it evaluate many moves per step, at least not initially. Instead, it also gives non-improving moves a chance to be picked, depending on their score and the time gradient of the Termination. In the end, it gradually turns into Hill Climbing, only accepting improving moves.

Start with a simulatedAnnealingStartingTemperature set to the maximum score delta a single move can cause. Use the Benchmarker to tweak the value.


  <localSearch>
    ...
    <acceptor>
      <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
    </acceptor>
    <forager>
      <acceptedCountLimit>1</acceptedCountLimit>
    </forager>
  </localSearch>

Simulated Annealing should use a low acceptedCountLimit. The classic algorithm uses an acceptedCountLimit of 1, but often 4 performs better.
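
For example, a sketch of the configuration above with an acceptedCountLimit of 4 instead of 1:


  <localSearch>
    ...
    <acceptor>
      <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
    </acceptor>
    <forager>
      <acceptedCountLimit>4</acceptedCountLimit>
    </forager>
  </localSearch>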

You can even combine it with a tabu acceptor at the same time. That gives Simulated Annealing salted with a bit of Tabu. Use a lower tabu size than in a pure Tabu Search configuration.


  <localSearch>
    ...
    <acceptor>
      <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
      <entityTabuSize>5</entityTabuSize>
    </acceptor>
    <forager>
      <acceptedCountLimit>1</acceptedCountLimit>
    </forager>
  </localSearch>

You can plug in a custom Termination, MoveSelector, EntitySelector, ValueSelector or Acceptor by extending the abstract class and also the related *Config class.

For example, to use a custom MoveSelector, extend the AbstractMoveSelector class, extend the MoveSelectorConfig class and configure it in the solver configuration.

If you build a better implementation that's not domain specific, consider contributing it back as a pull request on GitHub: we'll optimize it and take it along in future refactors.