Black-Box Multi-Objective Optimization Benchmarking Platform
Welcome to the Black-Box Multi-Objective Optimization Benchmarking (BMOBench) Platform.
The aim of this platform is to consolidate black-box multi-objective problems from the literature into a single framework, making it easier for researchers in the multi-objective optimization community to compare, assess, and analyze previous and new algorithms comprehensively. In essence, it adds a brick to the tools for reproducible research.
With BMOBench, you can test your newly developed algorithms on 100 established problems from the multi-objective optimization community and automatically get the experimental results in a LaTeX-based paper template. The results, presented as data profiles, are reported in terms of four quality indicators: hypervolume, additive epsilon-indicator, inverted generational distance, and generational distance.
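For intuition only, the following is a rough, self-contained C sketch of how two of these indicators can be computed for a bi-objective minimization problem: the 2-D hypervolume of a non-dominated set with respect to a reference point, and the additive epsilon-indicator of an approximation set against a reference set. It is not the platform's own implementation, and the point sets and reference point are made up for the example.

/* Illustrative (not BMOBench's) computation of two quality indicators
 * for bi-objective minimization:
 *   - hypervolume of a set of mutually non-dominated points w.r.t. a
 *     reference point (area dominated by the set, below the reference),
 *   - additive epsilon-indicator of an approximation set A against a
 *     reference set R (smallest eps such that every r in R is weakly
 *     dominated by some a in A shifted by eps).
 * Compile: cc eps_hv_demo.c -o eps_hv_demo
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct { double f1, f2; } Point;

static int cmp_f1(const void *a, const void *b) {
    double d = ((const Point *)a)->f1 - ((const Point *)b)->f1;
    return (d > 0) - (d < 0);
}

/* 2-D hypervolume of mutually non-dominated points (minimization),
 * computed as a sum of axis-aligned rectangles after sorting by f1. */
static double hypervolume2d(Point *a, int n, Point ref) {
    qsort(a, n, sizeof(Point), cmp_f1);
    double hv = 0.0, prev_f2 = ref.f2;
    for (int i = 0; i < n; ++i) {
        hv += (ref.f1 - a[i].f1) * (prev_f2 - a[i].f2);
        prev_f2 = a[i].f2;
    }
    return hv;
}

/* Additive epsilon-indicator I_eps+(A, R) for minimization. */
static double eps_additive(const Point *A, int na, const Point *R, int nr) {
    double eps = -1e300;
    for (int j = 0; j < nr; ++j) {
        double best = 1e300; /* smallest shift making some a <= r_j */
        for (int i = 0; i < na; ++i) {
            double d1 = A[i].f1 - R[j].f1, d2 = A[i].f2 - R[j].f2;
            double need = d1 > d2 ? d1 : d2;
            if (need < best) best = need;
        }
        if (best > eps) eps = best;
    }
    return eps;
}

int main(void) {
    /* Toy, made-up point sets (both objectives minimized). */
    Point A[] = { {0.2, 0.8}, {0.5, 0.5}, {0.9, 0.1} };
    Point R[] = { {0.1, 0.7}, {0.4, 0.4}, {0.8, 0.05} };
    Point ref = { 1.1, 1.1 };
    printf("hypervolume(A)    = %.4f\n", hypervolume2d(A, 3, ref));
    printf("eps_additive(A,R) = %.4f\n", eps_additive(A, 3, R, 3));
    return 0;
}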
Requirements
- MATLAB
- a C compiler
- Python (32-bit) with the NumPy, matplotlib, and palettable packages (Anaconda is a good start)

At this point in time, the platform supports MATLAB and C. Future releases may support Python as well.
Getting Started

To start with the BMOBench platform, download its code from GitHub. The code is organized into the following directories:
- problems: problem-related files and descriptions
- postprocess: files for data post-processing
- matlab: files for running experiments in MATLAB
- c: files for running experiments in C
- latex-template: a paper template incorporating the results generated from the post-processing step

Setup for MATLAB Experiments

Compile the mex files to speed up the computation:
1. cd to matlab/benchmark.
2. Run the setup.m script:
>>setup
Setup for C Experiments

1. Write your algorithm in the c/algs directory, similar to the random search baseline algorithm MO-RANDOM (a rough illustrative sketch is given after this list).
2. Edit the c/benchmark/main.c file to include your algorithm.
3. You may want to change the number of runs (NUM_RUNS) as well as the budget multiplier factor (BUDGET_MULTIPLIER) for your experiments. This can be done by editing the c/benchmark/globaldeclare.h file.
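The authoritative template for a new solver is the MO-RANDOM code in c/algs together with the interface in c/benchmark; the sketch below is only a rough, self-contained illustration of the kind of loop such a random-search-style algorithm runs under an evaluation budget. The evaluate function, the box bounds, and the NUM_RUNS and BUDGET_MULTIPLIER values are stand-ins invented for this example and are not the platform's actual interface or defaults.

/* Rough sketch of a random-search-style solver loop, in the spirit of
 * the MO-RANDOM baseline.  The evaluation call, problem bounds, and the
 * NUM_RUNS / BUDGET_MULTIPLIER values below are stand-ins for this
 * illustration only; the real ones come from c/benchmark (see
 * c/algs and c/benchmark/globaldeclare.h).
 * Compile: cc mo_random_sketch.c -o mo_random_sketch
 */
#include <stdio.h>
#include <stdlib.h>

#define DIM               5      /* decision-space dimension (toy)     */
#define NUM_OBJ           2      /* number of objectives (toy)         */
#define NUM_RUNS          10     /* illustrative: independent runs     */
#define BUDGET_MULTIPLIER 100    /* illustrative: evals = mult * DIM   */

/* Stand-in bi-objective problem; the platform would provide this. */
static void evaluate(const double *x, double *f) {
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < DIM; ++i) {
        s1 += x[i] * x[i];
        s2 += (x[i] - 1.0) * (x[i] - 1.0);
    }
    f[0] = s1;
    f[1] = s2;
}

static double urand(double lo, double hi) {
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

int main(void) {
    const long budget = (long)BUDGET_MULTIPLIER * DIM;
    for (int run = 0; run < NUM_RUNS; ++run) {
        srand(1000u + run);          /* one seed per independent run    */
        double best_f0 = 1e300;      /* a real MO solver would keep a
                                        non-dominated archive; we track
                                        f0 only to print something      */
        for (long eval = 0; eval < budget; ++eval) {
            double x[DIM], f[NUM_OBJ];
            for (int i = 0; i < DIM; ++i)
                x[i] = urand(-5.0, 5.0);   /* toy box constraints       */
            evaluate(x, f);   /* the benchmark layer would record this  */
            if (f[0] < best_f0) best_f0 = f[0];
        }
        printf("run %d: %ld evaluations, best f0 = %.4f\n",
               run, budget, best_f0);
    }
    return 0;
}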
Setup for Post-Processing

Compile the C libraries to speed up the post-processing computation:
1. cd to postprocess/scripts.
2. Run the setup.py python script:
>>python setup.py
MATLAB Experiment

1. Go to the matlab directory.
2. Put your algorithm in the matlab/algs directory.
3. Edit matlab/benchmark/run_performance_experiments.m and matlab/benchmark/run_timing_performance.m in a way similar to the exemplar algorithm.
4. Edit matlab/runBenchmark.m by incorporating the algorithm's name; if you wish to change the number of function evaluations and the number of runs, you can do this by editing the same file.
5. Run matlab/runBenchmark.m. The collected results will be generated into a new directory, EXP_RESULTS.
6. Edit and run matlab/run_hv_computation.m to include the algorithms of interest (whose results are now in the EXP_RESULTS directory). The computed hv values will also be stored in the EXP_RESULTS directory.
C Experiment

1. cd to the c directory in a system shell, and hit make to compile.
2. Run the runme executable. The collected results will be generated into a new directory, EXP_RESULTS.
3. Follow step 6 of the MATLAB Experiment procedure (above) to compute the hv profile.

Post-Processing

If you have already run the postprocess/scripts/setup.py script, you can generate the results as follows:
1. cd to postprocess/scripts.
2. Edit the data dictionary within the main scripting file run_postprocessing to incorporate the benchmarked algorithms, similar to the description of the exemplar algorithm; you may choose among the available quality indicators to compare. For the hypervolume indicator, the values should already have been computed in step 6 of the MATLAB Experiment procedure (above). To report the mean data profiles, set isMean to True; to report the best data profiles, set it to False. For more details, please refer to the technical report. (A conceptual sketch of a data-profile value follows this list.)
3. Run the run_postprocessing.py python script:
>>python run_postprocessing.py
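For intuition only: a data profile reports, for each evaluation budget, the fraction of problem instances on which an algorithm reached its indicator target within that budget. The C sketch below illustrates that idea with made-up numbers and a simplified definition; BMOBench's exact targets and profile definition are those of its technical report and the postprocess code.

/* Conceptual sketch of a data-profile value (simplified); BMOBench's
 * exact definition and targets live in its technical report and the
 * postprocess/ code.  The "evaluations needed to reach the target"
 * numbers below are made up.
 * Compile: cc data_profile_sketch.c -o data_profile_sketch
 */
#include <stdio.h>

/* Fraction of problem instances whose indicator target was reached
 * within `budget` function evaluations; a negative entry means the
 * target was never reached within the recorded horizon. */
static double data_profile(const long *evals_to_target, int n, long budget) {
    int solved = 0;
    for (int i = 0; i < n; ++i)
        if (evals_to_target[i] >= 0 && evals_to_target[i] <= budget)
            ++solved;
    return (double)solved / n;
}

int main(void) {
    /* Made-up results for 8 problem instances. */
    const long evals_to_target[] = { 120, 450, 90, -1, 3000, 800, 150, -1 };
    const int n = 8;
    const long budgets[] = { 100, 500, 1000, 5000 };
    for (int b = 0; b < 4; ++b)
        printf("budget %5ld -> fraction solved = %.3f\n",
               budgets[b], data_profile(evals_to_target, n, budgets[b]));
    return 0;
}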
LaTeX Template

With a LaTeX editor, the paper can be compiled directly. Note that the paper only compiles a portion of the results: the timing and the aggregated performance of the algorithms over the problem categories and the quality indicators used. The rest of the generated results can be found in postprocess/postproc.
This platform is inspired by the following:
A. L. Custódio, J. F. A. Madeira, A. I. F. Vaz, and L. N. Vicente. "Direct multisearch for multiobjective optimization." SIAM Journal on Optimization 21.3 (2011): 1109-1140.
D. Brockhoff, T. D. Tran, and N. Hansen. "Benchmarking numerical multiobjective optimizers revisited." Genetic and Evolutionary Computation Conference (GECCO 2015), 2015.
D. Brockhoff, T. Tušar, D. Tušar, T. Wagner, N. Hansen, and A. Auger. "Biobjective Performance Assessment with the COCO Platform." ArXiv e-prints, arXiv:1605.01746, 2016.
N. Hansen, A. Auger, O. Mersmann, T. Tušar, and D. Brockhoff. "COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting." ArXiv e-prints, arXiv:1603.08785, 2016.
N. Hansen, A. Auger, D. Brockhoff, D. Tušar, and T. Tušar. "COCO: Performance Assessment." ArXiv e-prints, arXiv:1605.03560, 2016.
N. Hansen, T. Tušar, A. Auger, D. Brockhoff, and O. Mersmann. "COCO: The Experimental Procedure." ArXiv e-prints, arXiv:1603.08776, 2016.
T. Tušar, D. Brockhoff, N. Hansen, and A. Auger. "COCO: The Bi-objective Black Box Optimization Benchmarking (bbob-biobj) Test Suite." ArXiv e-prints, arXiv:1604.00359, 2016.
Thanks are extended to Bhavarth Pandya, Chaitanya Prasad, Khyati Mahajan, and Shaleen Gupta, who contributed greatly to the core of the C platform and verified the correctness of the C-coded problems against their AMPL counterparts.
If you write a scientific paper describing research that made use of this code, please cite the following report:
@article{bmobench-16,
author = {Abdullah Al-Dujaili and S. Suresh},
title = {BMOBench: Black-Box Multi-Objective Optimization Benchmarking Platform},
journal = {ArXiv e-prints},
year = {2016},
volume = {arXiv:1605.07009},
url = {http://arxiv.org/abs/1605.07009}
}