
Chapter 14. Benchmarking And Tweaking

14.1. Find The Best Solver Configuration
14.2. Benchmark Configuration
14.2.1. Add Dependency On optaplanner-benchmark
14.2.2. Build And Run A PlannerBenchmark
14.2.3. SolutionFileIO: Input And Output Of Solution Files
14.2.4. Warming Up The HotSpot Compiler
14.2.5. Benchmark Blueprint: A Predefined Configuration
14.2.6. Write The Output Solution Of Benchmark Runs
14.2.7. Benchmark Logging
14.3. Benchmark Report
14.3.1. HTML Report
14.3.2. Ranking The Solvers
14.4. Summary Statistics
14.4.1. Best Score Summary (Graph And Table)
14.4.2. Best Score Scalability Summary (Graph)
14.4.3. Best Score Distribution Summary (Graph)
14.4.4. Winning Score Difference Summary (Graph And Table)
14.4.5. Worst Score Difference Percentage (ROI) Summary (Graph And Table)
14.4.6. Score Calculation Speed Summary (Graph And Table)
14.4.7. Time Spent Summary (Graph And Table)
14.4.8. Time Spent Scalability Summary (Graph)
14.4.9. Best Score Per Time Spent Summary (Graph)
14.5. Statistic Per Dataset (Graph And CSV)
14.5.1. Enable A Problem Statistic
14.5.2. Best Score Over Time Statistic (Graph And CSV)
14.5.3. Step Score Over Time Statistic (Graph And CSV)
14.5.4. Score Calculation Speed Statistic (Graph And CSV)
14.5.5. Best Solution Mutation Over Time Statistic (Graph And CSV)
14.5.6. Move Count Per Step Statistic (Graph And CSV)
14.5.7. Memory Use Statistic (Graph And CSV)
14.6. Statistic Per Single Benchmark (Graph And CSV)
14.6.1. Enable A Single Statistic
14.6.2. Constraint Match Total Best Score Over Time Statistic (Graph And CSV)
14.6.3. Constraint Match Total Step Score Over Time Statistic (Graph And CSV)
14.6.4. Picked Move Type Best Score Diff Over Time Statistic (Graph And CSV)
14.6.5. Picked Move Type Step Score Diff Over Time Statistic (Graph And CSV)
14.7. Advanced Benchmarking
14.7.1. Benchmarking Performance Tricks
14.7.2. Statistical Benchmarking
14.7.3. Template Based Benchmarking And Matrix Benchmarking
14.7.4. Benchmark Report Aggregation

Planner supports several optimization algorithms, so you're probably wondering which one is the best. Although some optimization algorithms generally perform better than others, it really depends on your problem domain. Most solver phases have parameters which can be tweaked. Those parameters can influence the results a lot, even though most solver phases work pretty well out-of-the-box.

Luckily, Planner includes a benchmarker, which allows you to play out different solver configurations against each other in development, so you can use the best configuration for your planning problem in production.
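
The benchmarker lives in a separate artifact, so the project needs a dependency on optaplanner-benchmark in addition to optaplanner-core. For example with Maven (a minimal sketch; use the same version as your other Planner dependencies):

<dependency>
  <groupId>org.optaplanner</groupId>
  <artifactId>optaplanner-benchmark</artifactId>
  <version>...</version><!-- Same version as optaplanner-core -->
</dependency>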

Build a PlannerBenchmark instance with a PlannerBenchmarkFactory. Configure it with a benchmark configuration XML file, provided as a classpath resource:

        PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
                "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
        PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
        plannerBenchmark.benchmark();

A benchmark configuration file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>

  <inheritedSolverBenchmark>
    <problemBenchmarks>
      ...
      <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
      <inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
    </problemBenchmarks>
    <solver>
      ...<!-- Common solver configuration -->
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <name>Tabu Search</name>
    <solver>
      ...<!-- Tabu Search specific solver configuration -->
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Simulated Annealing</name>
    <solver>
      ...<!-- Simulated Annealing specific solver configuration -->
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Late Acceptance</name>
    <solver>
      ...<!-- Late Acceptance specific solver configuration -->
    </solver>
  </solverBenchmark>
</plannerBenchmark>

This PlannerBenchmark will try 3 configurations (Tabu Search, Simulated Annealing and Late Acceptance) on 2 data sets (32queens and 64queens), so it will run 6 solvers.

Every <solverBenchmark> element contains a solver configuration and one or more <inputSolutionFile> elements. It will run the solver configuration on each of those unsolved solution files. The <name> element is optional, because it is generated if absent. Each inputSolutionFile is read by a SolutionFileIO (relative to the working directory).
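
For example, to read and write the solution files with your own SolutionFileIO implementation, reference it from the <problemBenchmarks> element. This is a sketch, assuming the <solutionFileIOClass> element and a hypothetical NQueensFileIO class that implements SolutionFileIO:

    <problemBenchmarks>
      <solutionFileIOClass>org.example.nqueens.persistence.NQueensFileIO</solutionFileIOClass><!-- Hypothetical SolutionFileIO implementation -->
      <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
    </problemBenchmarks>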

Note

Use a forward slash (/) as the file separator (for example in the element <inputSolutionFile>). That will work on any platform (including Windows).

Do not use backslash (\) as the file separator: that breaks portability because it does not work on Linux and Mac.

The benchmark report will be written in the directory specified by the <benchmarkDirectory> element (relative to the working directory).

Note

It is recommended that the benchmarkDirectory be a directory that is ignored by source control and not cleaned by your build system. This way the generated files do not bloat your source control and are not lost when doing a build. Usually that directory is called local.
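
For example with Git, a single .gitignore entry keeps the generated benchmark output out of source control (a sketch, assuming the conventional local directory):

local/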

If an Exception or Error occurs in a single benchmark, the entire Benchmarker will not fail-fast (unlike everything else in Planner). Instead, the Benchmarker will continue to run all other benchmarks, write the benchmark report and then fail (if there is at least 1 failing single benchmark). The failing benchmarks will be clearly marked as such in the benchmark report.

To quickly configure and run a benchmark for typical solver configs, use a solverBenchmarkBluePrint instead of solverBenchmarks:

<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
  <warmUpSecondsSpentLimit>30</warmUpSecondsSpentLimit>

  <inheritedSolverBenchmark>
    <problemBenchmarks>
      <xStreamAnnotatedClass>org.optaplanner.examples.nqueens.domain.NQueens</xStreamAnnotatedClass>
      <inputSolutionFile>data/nqueens/unsolved/32queens.xml</inputSolutionFile>
      <inputSolutionFile>data/nqueens/unsolved/64queens.xml</inputSolutionFile>
      <problemStatisticType>BEST_SCORE</problemStatisticType>
    </problemBenchmarks>
    <solver>
      <scanAnnotatedClasses/>
      <scoreDirectorFactory>
        <scoreDefinitionType>SIMPLE</scoreDefinitionType>
        <scoreDrl>org/optaplanner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
        <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
      </scoreDirectorFactory>
      <termination>
        <minutesSpentLimit>1</minutesSpentLimit>
      </termination>
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmarkBluePrint>
    <solverBenchmarkBluePrintType>EVERY_CONSTRUCTION_HEURISTIC_TYPE_WITH_EVERY_LOCAL_SEARCH_TYPE</solverBenchmarkBluePrintType>
  </solverBenchmarkBluePrint>
</plannerBenchmark>

The following SolverBenchmarkBluePrintTypes are supported:

Benchmark logging is configured like solver logging.
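
For example, the same org.optaplanner logger that controls solver logging also controls the benchmarker's log level. A minimal sketch in logback.xml:

  <logger name="org.optaplanner" level="debug"/>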

To separate the log messages of each single benchmark run into a separate file, use the MDC with key singleBenchmark.name in a sifting appender. For example with Logback in logback.xml:

  <appender name="fileAppender" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
      <key>singleBenchmark.name</key>
      <defaultValue>app</defaultValue>
    </discriminator>
    <sift>
      <appender name="fileAppender.${singleBenchmark.name}" class="...FileAppender">
        <file>local/log/optaplannerBenchmark-${singleBenchmark.name}.log</file>
        ...
      </appender>
    </sift>
  </appender>

If you have multiple processors available on your computer, you can run multiple benchmarks in parallel on multiple threads to get your benchmark results faster:

<plannerBenchmark>
  ...
  <parallelBenchmarkCount>AUTO</parallelBenchmarkCount>
  ...
</plannerBenchmark>

The following parallelBenchmarkCounts are supported:

To minimize the influence of your environment and the Random Number Generator on the benchmark results, configure the number of times each single benchmark run is repeated. The results of those runs are statistically aggregated. Each individual result is also visible in the report, as well as plotted in the best score distribution summary.

Just add a <subSingleCount> element to the <inheritedSolverBenchmark> element or to a <solverBenchmark> element:

<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
    <solver>
      ...
    </solver>
    <subSingleCount>10</subSingleCount>
  </inheritedSolverBenchmark>
  ...
</plannerBenchmark>

The subSingleCount defaults to 1 (so no statistical benchmarking).

Note

If subSingleCount is higher than 1, the benchmarker will automatically use a different Random seed for every sub single run, without losing reproducibility (for each sub single index) in EnvironmentMode REPRODUCIBLE and lower.

Matrix benchmarking is benchmarking a combination of value sets. For example: benchmark 4 entityTabuSize values (5, 7, 11 and 13) combined with 3 acceptedCountLimit values (500, 1000 and 2000), resulting in 12 solver configurations.

To reduce the verbosity of such a benchmark configuration, you can use a Freemarker template for the benchmark configuration instead:

<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
  </inheritedSolverBenchmark>

<#list [5, 7, 11, 13] as entityTabuSize>
<#list [500, 1000, 2000] as acceptedCountLimit>
  <solverBenchmark>
    <name>Tabu Search entityTabuSize ${entityTabuSize} acceptedCountLimit ${acceptedCountLimit}</name>
    <solver>
      <localSearch>
        <unionMoveSelector>
          <changeMoveSelector/>
          <swapMoveSelector/>
        </unionMoveSelector>
        <acceptor>
          <entityTabuSize>${entityTabuSize}</entityTabuSize>
        </acceptor>
        <forager>
          <acceptedCountLimit>${acceptedCountLimit}</acceptedCountLimit>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</#list>
</#list>
</plannerBenchmark>

To configure Matrix Benchmarking for Simulated Annealing (or any other configuration that involves a Score template variable), use the replace() method in the solver benchmark name element:

<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
  </inheritedSolverBenchmark>

<#list ["1hard/10soft", "1hard/20soft", "1hard/50soft", "1hard/70soft"] as startingTemperature>
  <solverBenchmark>
    <name>Simulated Annealing startingTemperature ${startingTemperature?replace("/", "_")}</name>
    <solver>
      <localSearch>
        <acceptor>
          <simulatedAnnealingStartingTemperature>${startingTemperature}</simulatedAnnealingStartingTemperature>
        </acceptor>
      </localSearch>
    </solver>
  </solverBenchmark>
</#list>
</plannerBenchmark>

Note

A solver benchmark name doesn't allow some characters (such as /) because the name is also used as a file name.

Build the benchmark from the template with the class PlannerBenchmarkFactory:

        PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromFreemarkerXmlResource(
                "org/optaplanner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
        PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
        plannerBenchmark.benchmark();

The BenchmarkAggregator takes one or more existing benchmarks and merges them into a new benchmark report, without actually running the benchmarks again.

This is useful to:

To use it, provide a PlannerBenchmarkFactory to the BenchmarkAggregatorFrame to display the GUI:

    public static void main(String[] args) {
        PlannerBenchmarkFactory plannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource(
                "org/optaplanner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
        BenchmarkAggregatorFrame.createAndDisplay(plannerBenchmarkFactory);
    }

In the GUI, select the interesting benchmarks and click the button to generate the report.