esa / pygmo2

A Python platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model.

Home Page: https://esa.github.io/pygmo2/

License: Mozilla Public License 2.0

CMake 3.15% Python 41.99% C++ 53.38% Shell 1.34% PowerShell 0.14%
optimization optimization-algorithms optimization-methods optimization-problem evolutionary-algorithms multiobjective-optimization stochastic-optimization island-model parallel-computing parallel-processing

pygmo2's Introduction

pygmo

Join the chat at https://gitter.im/pagmo2/Lobby

pygmo is a scientific Python library for massively parallel optimization. It is built around the idea of providing a unified interface to optimization algorithms and problems, and of making their deployment in massively parallel environments easy.

If you are using pygmo as part of your research, teaching, or other activities, we would be grateful if you could star the repository and/or cite our work. For citation purposes, you can use the following BibTeX entry, which refers to the pygmo paper in the Journal of Open Source Software:

@article{Biscani2020,
  doi = {10.21105/joss.02338},
  url = {https://doi.org/10.21105/joss.02338},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2338},
  author = {Francesco Biscani and Dario Izzo},
  title = {A parallel global multiobjective framework for optimization: pagmo},
  journal = {Journal of Open Source Software}
}

The DOI of the latest version of the software is available at this link.

The full documentation can be found here.

Upgrading from pygmo 1.x.x

If you were using the old pygmo, have a look at https://github.com/esa/pagmo2/wiki/From-1.x-to-2.x for some technical background on what was developed and why a completely new API and code base was warranted.

You will find many tutorials in the documentation; we suggest skimming through them to appreciate the differences. The new pygmo (version 2) should be considered (and is) an entirely different code.

pygmo2's People

Contributors

baluyotraf, bluescarni, ccrutchf, darioizzo, julicot9, kirbyherm, mlooz, nunorc, thisandthatuser


pygmo2's Issues

[BUG] fatal error: 'IpReturnCodes.hpp' file not found

pygmo2-2.15.0-139-g15cf30d fails to build with pagmo2-2.15.0-102-gdecda188 and Ipopt-3.12.13:

/usr/local/include/pagmo/algorithms/ipopt.hpp:41:10: fatal error: 'IpReturnCodes.hpp' file not found
#include <IpReturnCodes.hpp>
         ^~~~~~~~~~~~~~~~~~~
1 error generated.

The file /usr/local/include/coin/IpReturnCodes.hpp is installed, but this directory apparently isn't searched.

OS: FreeBSD 12.2
clang-10

population is not iterable

population should support indexing and iteration so that we can write:

for x, f in pop:
    popNew.set_xf(x, f)
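In the meantime, the requested behaviour can be approximated from the outside. Below is a minimal sketch of such an iteration protocol, assuming only the `get_x()`/`get_f()` accessors that pygmo's population already provides; `FakePop` and `PopulationView` are hypothetical stand-ins, not pygmo classes.

```python
class FakePop:
    """Stand-in for a pygmo population; only the get_x()/get_f()
    accessors (which pygmo's population provides) are assumed."""
    def __init__(self, xs, fs):
        self._xs, self._fs = xs, fs
    def get_x(self):
        return self._xs
    def get_f(self):
        return self._fs

class PopulationView:
    """Wrapper yielding (decision vector, fitness) pairs, as the issue requests."""
    def __init__(self, pop):
        self._pop = pop
    def __len__(self):
        return len(self._pop.get_x())
    def __getitem__(self, i):
        return (self._pop.get_x()[i], self._pop.get_f()[i])
    def __iter__(self):
        return iter(zip(self._pop.get_x(), self._pop.get_f()))

pop = FakePop([[0.0], [1.0]], [[0.5], [2.0]])
for x, f in PopulationView(pop):
    print(x, f)
```

With a real population, `for x, f in zip(pop.get_x(), pop.get_f()): ...` achieves the same effect today.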

[FEATURE] Track or Record each generation's champion_x during evolve?

Is your feature request related to a problem? Please describe.
While using SGA or other evolutionary algorithms, users currently cannot access each generation's champion_x.
This makes it hard to visualize the evolution process. Even if users set verbosity and extract the logs, the logs only store the best fitness value of each generation.

Describe the solution you'd like
Could users have an optional way to record each generation's champion_x? Is there any way to extract each generation's champion_x?
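The usual workaround is to run the algorithm one generation at a time and record the champion after each step (with pygmo: an algorithm constructed with gen=1, evolved in a loop, reading pop.champion_x). The pattern, sketched library-agnostically with a toy random search on the 1-D sphere so it is self-contained (`evolve_one_gen` stands in for `algo.evolve` with gen=1):

```python
import random

def evolve_one_gen(pop, rng):
    # mutate every individual; keep the mutant only if it improves
    new = []
    for x in pop:
        cand = x + rng.gauss(0.0, 0.1)
        new.append(cand if cand * cand < x * x else x)
    return new

def run(n_gen, seed=42):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(8)]
    champions = []
    for _ in range(n_gen):                 # one "gen=1" evolve per iteration
        pop = evolve_one_gen(pop, rng)
        # record this generation's champion (lowest x^2), i.e. champion_x
        champions.append(min(pop, key=lambda x: x * x))
    return champions

hist = run(50)
print(len(hist))  # one champion per generation
```

Note that with real algorithms this loop can change results relative to a single long run if the algorithm keeps internal state (e.g. PSO velocities) that is re-initialized on every evolve call.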

Best,

Non linear constraints detected - but no nonlinear constraints?

Describe the bug
I've made an extremely minimal test problem which is multiobjective and has a dummy constraint. I get an error in the algorithm.evolve call, suggesting that my constraint is nonlinear, even though in this very simple example the constraint is in fact a constant!

To Reproduce
https://gist.github.com/optiluca/d58f92de8c1560ecc8092994d592e452

File "pygmo2.py", line 62, in <module>
    population = algorithm.evolve(population)
ValueError:
function: evolve
where: C:\projects\pagmo2\src\algorithms\nspso.cpp, 104
what: Non linear constraints detected in <class '__main__.TestProblem'> instance. NSPSO cannot deal with them.

Expected behavior
I'd expect it to run

Environment (please complete the following information):

  • OS: Windows
  • Installation method: Conda
  • Version: 2.15

EDIT:
Referring to this table https://esa.github.io/pygmo2/overview.html#heuristic-global-optimization, it seems that nspso doesn't support constrained optimisation at all. It seems like IHS might work, but switching to that throws this error:

Multiple objectives and non linear constraints detected in the <class '__main__.TestProblem'> instance. IHS: Improved Harmony Search cannot deal with this type of problem.

Which of the available algorithms (if any!) is suitable for a multiobjective, constrained problem? Thanks!

MOEAD algorithm: crashes in Windows environment (pip package)

Hi,
Just trying to run moead with an already existing UDP (pygmo.dtlz for example) on Windows results in Python crashing (Python 3.6.6). The interpreter exits to the Windows terminal without any message after one generation (if I enable the algorithm's verbosity, I get one log line).

The exact same script is able to run in Linux (Linux 5.0.8-arch1-1-ARCH x86_64 GNU/Linux), but with issue esa/pagmo2#357 .

Best,
Thomas

[New user question] Using autograd for derivatives

Hi,

When using Scipy's SLSQP directly, I use the autograd Python package to compute derivatives. When using SLSQP through pygmo, or indeed Ipopt through pygmo, is it possible to use autograd to compute the derivatives rather than relying on the numerical differentiation provided? If so, would it be possible to have a small working example showing how to do that?
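pygmo's user-defined problem (UDP) protocol does allow this: if the UDP exposes a gradient(self, x) method, derivative-based algorithms will call it instead of differentiating numerically, and that method can delegate to autograd. A runnable sketch with an analytic gradient standing in for the autograd call (so it runs without autograd installed); `SphereUDP` is a hypothetical example problem, not a pygmo class:

```python
class SphereUDP:
    """Minimal UDP for f(x) = sum(x_i^2) exposing a gradient() method.

    With autograd one would instead write, e.g.:
        import autograd.numpy as anp
        from autograd import grad
        self._grad = grad(lambda x: anp.sum(x * x))
    and return self._grad(x) from gradient().
    """
    def fitness(self, x):
        return [sum(xi * xi for xi in x)]

    def get_bounds(self):
        return ([-1.0] * 3, [1.0] * 3)

    def gradient(self, x):
        # analytic stand-in for autograd's grad(fitness): d/dx_i sum(x^2) = 2 x_i
        return [2.0 * xi for xi in x]

udp = SphereUDP()
print(udp.fitness([1.0, 2.0, 3.0]))   # [14.0]
print(udp.gradient([1.0, 2.0, 3.0]))  # [2.0, 4.0, 6.0]
```

Wrapping such a UDP in pg.problem(...) would then let SLSQP or Ipopt consume the autograd-supplied derivatives.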

Many Thanks,
Andy

Jointly use PyTorch and Pygmo

When I try to use PyTorch and pygmo jointly, the error RuntimeError: Unable to cast Python instance to C++ type occurs.

  1. Is there a way to use PyTorch in pygmo?

  2. Should I use pagmo instead, with the C++ API of PyTorch, since the error is a cast from Python to C++?

Archipelago slices

Currently, an archipelago object allows access to its islands by indexing:

archi[1]
Island name: Multiprocessing island
Status: idle
...

Looping like for isl in archi is supported as well. However, accessing islands via slicing throws an error:

for isl in archi[0:3]:
    pass

ArgumentError                             Traceback (most recent call last)
----> 1 for isl in archi[0:3]:
      2     pass

ArgumentError: Python argument types in
    archipelago.__getitem__(archipelago, slice)
did not match C++ signature:
    __getitem__(pagmo::archipelago {lvalue}, unsigned long)
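The mismatch is that the exposed `__getitem__` only accepts an integer index. On the Python side, slice support amounts to normalizing the slice into integer indices and delegating island by island, as in this stand-in container (`Archi` is hypothetical, not pygmo's archipelago):

```python
class Archi:
    """Toy container illustrating slice-aware __getitem__."""
    def __init__(self, islands):
        self._islands = list(islands)

    def __len__(self):
        return len(self._islands)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # expand the slice into concrete indices, then index one by one
            return [self._islands[i] for i in range(*key.indices(len(self)))]
        return self._islands[key]

archi = Archi(["isl0", "isl1", "isl2", "isl3"])
print(archi[0:3])  # ['isl0', 'isl1', 'isl2']
```

A user-side workaround today is `[archi[i] for i in range(3)]`.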

Test and release against Python 3.8

Python 3.8 was released in October 2019. It would be useful to expand the CI to include tests against 3.8 and to do a release on PyPI (+others) for it.

Large variation in performances between PSO variants for high-dimensional problems

For problems with a large number of variables (>1000), there is significant variation in the performance of PSO variants in terms of run time and solution quality. The code below measures the time taken by a PSO variant on a toy problem, excluding the time spent in fitness evaluations.

import time
import timeit
import numpy as np
import pygmo as pg

# PSO parameters
n_pop = 10
n_gen = 10000
n_dim = 5000
pso_variant = 1

lb = list(np.zeros(n_dim))
ub = list(np.ones(n_dim))

class sphere_function:
    def fitness(self, x):
        return [np.sum(x*x)]
    def get_bounds(self):
        return (lb, ub)

if __name__ == '__main__':
    # Calculate time taken for whole optimisation
    start_time = time.time()
    prob = pg.problem(sphere_function())
    algo = pg.algorithm(pg.pso(gen = n_gen, variant = pso_variant))
    pop = pg.population(prob, n_pop)
    pop = algo.evolve(pop)
    total_time = time.time() - start_time

    # Calculate time taken for a single fitness evaluation
    x = np.random.randn(n_dim)
    fitness_time = timeit.timeit(stmt='np.sum(x*x)', globals=globals())
    fitness_time = fitness_time / 1000000  # timeit runs the statement 1,000,000 times by default; divide to get seconds per evaluation

    # Calculate time taken by all fitness evaluations
    fevals_time = fitness_time * n_pop * n_gen
    fevals_percent = fevals_time / total_time * 100

    # Calculate time taken by PSO excluding the fitness evaluations
    pso_time = total_time - fevals_time
    pso_percent = pso_time / total_time * 100

    print('Best fitness         = {:.2f}'.format(pop.champion_f[0]))
    print('fevals time          = {:.2f} s'.format(fevals_time))
    print('PSO time             = {:.2f} s'.format(pso_time))
    print('Total time           = {:.2f} s'.format(total_time))
    print('fevals percent       = {:.2f} %'.format(fevals_percent))
    print('PSO percent          = {:.2f} %'.format(pso_percent))

Using different values for n_pop, n_gen, n_dim, and pso_variant, I got the following results:

Table 1:

| n_pop | n_gen | n_dim | pso_vrnt | Best fitness | fevals time (percent) | PSO time (percent) | Total time |
|-------|-------|-------|----------|--------------|-----------------------|--------------------|------------|
| 10 | 10000 | 5000 | 1 | 1463.88 | 0.60 s (2.17 %) | 27.15 s (97.83 %) | 27.75 s |
| 10 | 10000 | 5000 | 2 | 1540.88 | 0.60 s (3.90 %) | 15.01 s (96.10 %) | 15.62 s |
| 10 | 10000 | 5000 | 3 | 1072.19 | 0.60 s (24.09 %) | 1.89 s (75.91 %) | 2.50 s |
| 10 | 10000 | 5000 | 4 | 1090.89 | 0.60 s (20.57 %) | 2.33 s (79.43 %) | 2.94 s |
| 10 | 10000 | 5000 | 5 | 311.00 | 0.60 s (2.23 %) | 26.66 s (97.77 %) | 27.26 s |
| 10 | 10000 | 5000 | 6 | 139.62 | 0.60 s (1.15 %) | 54.03 s (98.85 %) | 54.65 s |

Table 2:

| n_pop | n_gen | n_dim | pso_vrnt | Best fitness | fevals time (percent) | PSO time (percent) | Total time |
|-------|-------|-------|----------|--------------|-----------------------|--------------------|------------|
| 1000 | 100 | 5000 | 1 | 1440.17 | 0.60 s (2.14 %) | 28.08 s (97.86 %) | 28.69 s |
| 1000 | 100 | 5000 | 2 | 1470.63 | 0.60 s (3.67 %) | 15.79 s (96.33 %) | 16.40 s |
| 1000 | 100 | 5000 | 3 | 1242.72 | 0.60 s (17.02 %) | 2.94 s (82.98 %) | 3.54 s |
| 1000 | 100 | 5000 | 4 | 1244.08 | 0.60 s (15.69 %) | 3.32 s (84.31 %) | 3.94 s |
| 1000 | 100 | 5000 | 5 | 1327.64 | 0.60 s (2.22 %) | 27.77 s (97.78 %) | 28.40 s |
| 1000 | 100 | 5000 | 6 | 1249.19 | 0.60 s (1.08 %) | 55.25 s (98.92 %) | 55.85 s |

Table 3:

| n_pop | n_gen | n_dim | pso_vrnt | Best fitness | fevals time (percent) | PSO time (percent) | Total time |
|-------|-------|-------|----------|--------------|-----------------------|--------------------|------------|
| 10 | 10000 | 100 | 1 | 1.00 | 0.30 s (26.20 %) | 0.85 s (73.80 %) | 1.15 s |
| 10 | 10000 | 100 | 2 | 2.00 | 0.30 s (32.34 %) | 0.63 s (67.66 %) | 0.93 s |
| 10 | 10000 | 100 | 3 | 0.00 | 0.30 s (46.68 %) | 0.35 s (53.32 %) | 0.65 s |
| 10 | 10000 | 100 | 4 | 0.00 | 0.30 s (46.93 %) | 0.34 s (53.07 %) | 0.65 s |
| 10 | 10000 | 100 | 5 | 0.00 | 0.30 s (26.03 %) | 0.86 s (73.97 %) | 1.16 s |
| 10 | 10000 | 100 | 6 | 0.00 | 0.30 s (17.94 %) | 1.39 s (82.06 %) | 1.70 s |

As you can see, the time taken by PSO variants 1, 2, 5, and 6 is significantly higher than that of variants 3 and 4 when the number of optimization variables is high (Tables 1 and 2). For low-dimensional problems, the overhead is negligible (Table 3).

In addition, the solution quality of variant 6 in Table 1 is significantly better than the rest, but it has the worst run time (20 times that of the fastest variant).

[BUG] Conda install unexpected package

Describe the bug
Hello,
when trying to use the hypervolume class
(https://esa.github.io/pygmo/documentation/hypervolume.html)
via from pygmo.util import * I face a
ModuleNotFoundError: No module named 'pygmo.util'.
Indeed, when I check my local files I find an unexpected package, which I tracked down to this source code: https://github.com/esa/pygmo2

I followed the installation process documented here using Conda https://esa.github.io/pygmo2/install.html

 conda config --add channels conda-forge
 conda config --set channel_priority strict
 conda install pygmo

To Reproduce
Steps to reproduce the behavior:

  1. Install via Conda
 conda config --add channels conda-forge
 conda config --set channel_priority strict
 conda install pygmo
  2. In a Python conda env
    from pygmo.util import *
  3. See the error
    ModuleNotFoundError: No module named 'pygmo.util'

Expected behavior
I would expect no errors, a normal package import.

Screenshots

Environment (please complete the following information):

  • OS: Windows
  • Installation method: Conda
  • Version: pygmo 2.16.1 py38h206a64a_1 conda-forge

Additional context

>>> from pygmo.util import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pygmo.util'
>>> import pygmo
>>> pygmo.test.run_test_suite()
runTest (pygmo.test.core_test_case) ... ok
runTest (pygmo._bfe_test.bfe_test_case) ... ok
runTest (pygmo._bfe_test.thread_bfe_test_case) ... ok
runTest (pygmo._bfe_test.member_bfe_test_case) ... ok
runTest (pygmo._bfe_test.mp_bfe_test_case) ... ok
runTest (pygmo._bfe_test.ipyparallel_bfe_test_case) ... Waiting for connection file: ~\.ipython\profile_default\security\ipcontroller-client.json
ok
runTest (pygmo._bfe_test.default_bfe_test_case) ... ok
runTest (pygmo.test.archipelago_test_case) ... ok
runTest (pygmo._island_test.island_test_case) ... ok
runTest (pygmo._s_policy_test.s_policy_test_case) ... ok
runTest (pygmo._r_policy_test.r_policy_test_case) ... ok
runTest (pygmo._topology_test.topology_test_case) ... ok
runTest (pygmo.test.fair_replace_test_case) ... ok
runTest (pygmo.test.select_best_test_case) ... ok
runTest (pygmo.test.unconnected_test_case) ... ok
runTest (pygmo.test.ring_test_case) ... ok
runTest (pygmo.test.free_form_test_case) ... ok
runTest (pygmo.test.fully_connected_test_case) ... ok
runTest (pygmo.test.thread_island_test_case) ... ok
runTest (pygmo.test.thread_island_torture_test_case) ... ok
runTest (pygmo._problem_test.problem_test_case) ... ok
runTest (pygmo._algorithm_test.algorithm_test_case) ... C:\Users\Debs\anaconda3\envs\planningenv\lib\site-packages\scipy\optimize\_minimize.py:524: RuntimeWarning: Method L-BFGS-B does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
C:\Users\Debs\anaconda3\envs\planningenv\lib\site-packages\scipy\optimize\_trustregion_constr\projections.py:181: UserWarning: Singular Jacobian matrix. Using SVD decomposition to perform the factorizations.
  warn('Singular Jacobian matrix. Using SVD decomposition to ' +
C:\Users\Debs\anaconda3\envs\planningenv\lib\site-packages\scipy\optimize\_hessian_update_strategy.py:182: UserWarning: delta_grad == 0.0. Check if the approximated function is linear. If the function is linear better results can be obtained by defining the Hessian as zero instead of using quasi-Newton approximations.
  warn('delta_grad == 0.0. Check if the approximated '
C:\Users\Debs\anaconda3\envs\planningenv\lib\site-packages\pygmo\_py_algorithms.py:527: UserWarning: Problem Hock Schittkowsky 71 has constraints and hessians, but trust-constr requires the callable to also accept lagrange multipliers. Thus, hessians of constraints are ignored.
  warnings.warn(
C:\Users\Debs\anaconda3\envs\planningenv\lib\site-packages\scipy\optimize\_minimize.py:524: RuntimeWarning: Method SLSQP does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
ok
runTest (pygmo._island_test.mp_island_test_case) ... ok
runTest (pygmo._island_test.ipyparallel_island_test_case) ... Waiting for connection file: ~\.ipython\profile_default\security\ipcontroller-client.json
ok
runTest (pygmo.test.golomb_ruler_test_case) ... ok
runTest (pygmo.test.lennard_jones_test_case) ... ok
runTest (pygmo.test.de_test_case) ... ok
runTest (pygmo.test.nsga2_test_case) ... ok
runTest (pygmo.test.gaco_test_case) ... ok
runTest (pygmo.test.gwo_test_case) ... ok
runTest (pygmo.test.de1220_test_case) ... ok
runTest (pygmo.test.sea_test_case) ... ok
runTest (pygmo.test.pso_test_case) ... ok
runTest (pygmo.test.pso_gen_test_case) ... ok
runTest (pygmo.test.bee_colony_test_case) ... ok
runTest (pygmo.test.compass_search_test_case) ... ok
runTest (pygmo.test.sa_test_case) ... ok
runTest (pygmo.test.moead_test_case) ... ok
runTest (pygmo.test.sga_test_case) ... ok
runTest (pygmo.test.ihs_test_case) ... ok
runTest (pygmo.test.population_test_case) ... ok
runTest (pygmo.test.null_problem_test_case) ... ok
runTest (pygmo.test.hypervolume_test_case) ... ok
runTest (pygmo.test.mo_utils_test_case) ... ok
runTest (pygmo.test.con_utils_test_case) ... ok
runTest (pygmo.test.global_rng_test_case) ... ok
runTest (pygmo.test.estimate_sparsity_test_case) ... ok
runTest (pygmo.test.estimate_gradient_test_case) ... ok
runTest (pygmo.test.random_decision_vector_test_case) ... ok
runTest (pygmo.test.batch_random_decision_vector_test_case) ... ok
runTest (pygmo.test.cmaes_test_case) ... ok
runTest (pygmo.test.xnes_test_case) ... ok
runTest (pygmo.test.dtlz_test_case) ... ok
runTest (pygmo.test.cec2006_test_case) ... ok
runTest (pygmo.test.cec2009_test_case) ... ok
runTest (pygmo.test.cec2013_test_case) ... ok
runTest (pygmo.test.cec2014_test_case) ... ok
runTest (pygmo.test.luksan_vlcek1_test_case) ... ok
runTest (pygmo.test.minlp_rastrigin_test_case) ... ok
runTest (pygmo.test.rastrigin_test_case) ... ok
runTest (pygmo.test.translate_test_case) ... ok
runTest (pygmo.test.decompose_test_case) ... ok
runTest (pygmo.test.unconstrain_test_case) ... ok
runTest (pygmo.test.mbh_test_case) ... ok
runTest (pygmo.test.cstrs_self_adaptive_test_case) ... ok
runTest (pygmo.test.decorator_problem_test_case) ... ok
runTest (pygmo.test.wfg_test_case) ... ok
runTest (pygmo.test.nlopt_test_case) ...
 objevals:        objval:      violated:    viol. norm:
         1         146149             18        691.455 i
         6        73021.9             18        2267.21 i
        11         692.87             18        18.8293 i
        16        13.7193             12      0.0835552 i
        21        6.23246              0              0

Optimisation return status: NLOPT_XTOL_REACHED (value = 4, Optimization stopped because xtol_rel or xtol_abs was reached)

 objevals:        objval:      violated:    viol. norm:
         1         192142             18        788.428 i
         6        1058.66             18        23.5218 i
        11        40.2466             12       0.423955 i
        16       0.512001              5      0.0350397 i
        21    1.55952e-13              0              0

Optimisation return status: NLOPT_XTOL_REACHED (value = 4, Optimization stopped because xtol_rel or xtol_abs was reached)
ok
runTest (pygmo.test.ipopt_test_case) ...
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit https://github.com/coin-or/Ipopt
******************************************************************************


 objevals:        objval:      violated:    viol. norm:
         1         217330             18        678.508 i
         6        107.986             18        4.81889 i
        11        6.23246              0              0

Optimisation return status: Solve_Succeeded (value = 0)

 objevals:        objval:      violated:    viol. norm:
         1         249667             18        674.318 i
         6        526.836             18        6.70519 i
        11        90.7682              6       0.437948 i
        16     0.00068101              3    0.000356771 i

Optimisation return status: Solve_Succeeded (value = 0)
ok

----------------------------------------------------------------------
Ran 69 tests in 96.544s

OK

[FEATURE] make pygmo available for python 3.9

I am trying to install the module with the command:

pip install pygmo

in a python 3.9 environment but I get the following error message:

ERROR: Could not find a version that satisfies the requirement pygmo (from versions: none)
ERROR: No matching distribution found for pygmo

Is there any possibility to make pygmo available for Python 3.9?

thanks in advance
Giovanni

Accessing the number of generations and the exit flag

Is there a way to access and store the total number of generations and the exit flag?

I only came up with the following solution

...
algo.set_verbosity(1)
...
uda = algo.extract(pg.sade)
uda.get_log() 

but it is slow and solves only half of the problem.

moead and nsga2 cannot work

I failed to run moead using the code from the tutorial (https://esa.github.io/pygmo2/tutorials/moo_moead.html, shown below).

1 from pygmo import *
2 udp = dtlz(prob_id = 1)
3 pop = population(prob = udp, size = 105)
4 algo = algorithm(moead(gen = 100))
5 for i in range(10):
6     pop = algo.evolve(pop)
7     print(udp.p_distance(pop))

I find that line 6 does not work normally: when I step to line 6 in the debugger, the procedure finishes immediately without any error and without any evolution process. nsga2 does not run normally either. How can I fix this problem?

Expose setters for island selection and replacement policies

Getting a list of islands would already be helpful because, as it stands now, one has to index each island directly or loop over the whole archipelago if one is interested in a specific part of it.

Useful for a migration study (given a specific topology) would be for example:

sending_islands = archi[:k]
receiving_islands = archi[k:]

for isl in sending_islands:
    isl.set_s_policy(...)

for isl in receiving_islands:
    isl.set_r_policy(...)

Those setters don't exist yet, and probably constitute another issue...

Originally posted by @CoolRunning in #5 (comment)

2.16 release

Just wondering when you plan to release 2.16?
Can we manually apply the 2.15 --> 2.16 changes to the site-packages source code in the meantime?

Getting the best particle after each iteration of PSO

For debugging purposes, I need to get the best particle (and do some computation with it) after each iteration of PSO. To my understanding, there is no direct way of doing it.
I thought one workaround could be to set the gen parameter to 1, call the evolve method in a loop, and get the best after each call. However, I was wondering whether calling evolve many times is the same as calling it once with the gen parameter set appropriately.
Looking at the code of PSO, it seems that my solution would be inefficient (since it would repeat the initialization at the beginning of evolve many times), but it should not change the result. Am I right?

[New user question] Pass options to ipopt

I currently use pyomo, when using ipopt I can pass options to ipopt as follows:

solver = SolverFactory('ipopt')
opts = {'halt_on_ampl_error': 'yes',
        'tol': tolerance, 'bound_relax_factor': 0.0}
results = solver.solve(model, tee=False, options=opts)

When using pygmo I set up the ipopt solver like this:

prob = pg.problem(problem)
uda = pg.ipopt()
algo = pg.algorithm(uda)

How would I pass the same options to ipopt that I do when using pyomo?

Thanks,
Andy

[FEATURE] Add batch_fitness support to pg.unconstrain()ed problems

Is your feature request related to a problem? Please describe.
I am trialling pygmo2 with a constrained optimisation problem, where each function evaluation takes a few minutes but I can write a highly parallel batch_fitness function. I've implemented a batch_fitness method in my problem, and want to use PSO to optimise it. Since PSO doesn't support constraints I run problem = pg.unconstrain(problem). Running my code then throws:

what: The batch_fitness() method has been invoked, but it is not implemented in a UDP of type '<class '__main__.RBSIdealBalProblem'> [unconstrained]'

This was already on the radar here.
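What the requested feature amounts to can be sketched with plain Python objects: a meta-problem that exposes batch_fitness by delegating to the wrapped problem and then discarding the constraint components of each fitness vector. `ConstrainedToy` and `Unconstrain` below are hypothetical stand-ins, not pygmo's pg.unconstrain implementation.

```python
class ConstrainedToy:
    """Toy 1-D problem: one objective plus one constraint value."""
    def fitness(self, x):
        return [x[0] ** 2, x[0] - 1.0]

    def batch_fitness(self, dvs):
        # evaluate many 1-D decision vectors at once (flat in, flat out)
        out = []
        for x in dvs:
            out.extend(self.fitness([x]))
        return out

class Unconstrain:
    """Forwards batch_fitness to the inner problem and keeps only the
    objective components (a crude stand-in for the 'ignore_c' method)."""
    def __init__(self, inner, nobj=1, nc=1):
        self.inner, self.nobj, self.nc = inner, nobj, nc

    def batch_fitness(self, dvs):
        raw = self.inner.batch_fitness(dvs)
        step = self.nobj + self.nc
        out = []
        for i in range(0, len(raw), step):
            out.extend(raw[i:i + self.nobj])  # drop the constraint values
        return out

u = Unconstrain(ConstrainedToy())
print(u.batch_fitness([0.0, 1.0, 2.0]))  # [0.0, 1.0, 4.0]
```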

[FEATURE] ARM support, is it possible?

Is your feature request related to a problem? Please describe.
I work on a Wi-Fi fuzzing appliance which is being ported to an ARM platform. Is it possible to use pygmo on ARM? Installing via pip doesn't work, for obvious reasons.

Can we specify a number of evaluations (fevals) in pygmo in each run?

Dear All,
I am using pygmo to solve an optimization problem. Could you please show me how to specify the number of evaluations (fevals) in each run? Thank you very much!

My code is as follows:

# Define pygmo problem class
class alge_function:
    # def __init__(self):
    #     self.starting = np.random.randint(1, 5, 16)

    def fitness(self, x):
        x = np.array(x, dtype=int)  # to make integers
        x = list(x)
        fsr = look_up_function(x)
        vec = [fsr[1], fsr[2], fsr[3], fsr[4], fsr[5], fsr[6], fsr[7], fsr[8], fsr[9],
               fsr[10], fsr[11], fsr[12], fsr[13], fsr[14], fsr[15], fsr[16], fsr[17], fsr[18]]
        obj = -choquet_pwl_function(k, mu, vec)
        return [obj]

    # all 16 variables are integers
    def get_nix(self):
        return 16

    # bounds [1, 4]
    def get_bounds(self):
        return ([1] * 16, [4] * 16)

    # numerical gradient
    def gradient(self, x):
        return pg.estimate_gradient_h(lambda x: self.fitness(x), x)

# use "slsqp" - sequential least squares programming algorithm
start = time.time()
algo = pg.algorithm(uda = pg.mbh(pg.nlopt("slsqp"), stop = 20, perturb = .2))
algo.set_verbosity(1)  # print log
# Formulate minimization problem
pop = pg.population(prob = alge_function(), size = 2)
pop.set_x(1, starting_point)
# Solve the problem
pop = algo.evolve(pop)

# Collect information
print("objective value: obj=", pop.champion_f[0])

feature request: setter for migration database

Currently, the migration database can only be read

archi.get_migrants_db()

however, migrants cannot be manipulated (other than implicitly, by evolution).
This creates some undesirable side effects: e.g. resetting an island's population while migrants are still in the database will quickly bump the new random population back to the old fitness level.

Consequently, it would be useful to have a way to manipulate the migrant database directly. Additionally, other convenient functions would be to "flush" (trigger a migration manually) and "erase" (reset the complete database, removing all migrants).
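What such an interface could look like, as a plain-Python sketch (all names hypothetical; this is not pygmo API): a database object with direct setters plus an erase() that removes all migrants, with flush left as the remaining piece since triggering a migration needs the islands themselves.

```python
class MigrantsDB:
    """Toy migrant database with the setters the issue asks for."""
    def __init__(self):
        self._db = {}  # island index -> list of migrant decision vectors

    def set_migrants(self, isl, migrants):
        # direct manipulation of one island's migrants
        self._db[isl] = list(migrants)

    def get_migrants(self, isl):
        return list(self._db.get(isl, []))

    def erase(self):
        # reset the complete database, removing all migrants
        self._db.clear()

    # flush() would hand the stored migrants to the islands and is omitted
    # here, since it requires the archipelago's islands.

db = MigrantsDB()
db.set_migrants(0, [[0.1, 0.2]])
db.erase()
print(db.get_migrants(0))  # []
```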

Make types uniform in the Python exposition

Some problems/algorithms seem to have an inconsistent python exposition in terms of types.

For instance, in ../src/problems/dtlz.cpp we have:

dtlz::dtlz(unsigned prob_id, vector_double::size_type dim, vector_double::size_type fdim, unsigned alpha)

But in the ../pygmo/expose_problems_0.cpp it is exposed as:

dtlz_p.def(bp::init<unsigned, unsigned, unsigned, unsigned>

Instead of:

dtlz_p.def(bp::init<unsigned, vector_double::size_type, vector_double::size_type, unsigned> ...

A check for all these type consistencies thus seems necessary.

Cannot import from Conda

Hi, I am trying to import pygmo from Conda. I used the Anaconda prompt on my windows 10 machine. I didn't have any luck with the install directions in the docs so I tried the first one here:

https://anaconda.org/conda-forge/pygmo

conda install -c conda-forge pygmo

(base) C:\Users\Alex>conda install -c conda-forge pygmo
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \

Goes on continuously.

I tried removing conda forge to no avail, per the suggestion here.

conda/conda#8051

Would it be possible to have a pygmo binary for windows 10?

Thanks!

Alex

Objective function using numpy argsort

Dear Pygmo Community

I have experimented a bit using pygmo for financial portfolio optimization in the context of risk and return measured relative to a benchmark.

With regards to measuring risk, many methods can be used. The usual is minimizing the variance of the portfolio, but I have a preference for minimizing tail risk.

The "hello world" example is to find the portfolio that minimizes risk relative to the benchmark, given that weights are constrained to the [0, 1] range and must sum to 1. It is easy to see that the solution is a portfolio with weights equal to the benchmark weights.

I have implemented a class I thought should be capable of solving the problem for both of the above-mentioned definitions of risk. However, only "minimize variance" gives an appropriate result. Minimizing tail risk seems not to change the x_init values that I start the optimization from at all.

I suspect the problem might be that the objective function contains a call to np.argsort. I have previously had success implementing this with Matlab's fmincon, so I know there should be a way through. But I am unsure how I should attack the problem using pygmo: what would be the right algorithm to use, what would be appropriate parameters for the chosen algo, etc. I have found inspiration in the tutorials.

Below you'll find a self-contained piece of code that replicates my problem. The first call to the utility function "run_problem" uses the "minimize variance (cov)" objective; the resulting portfolio equals the benchmark, as desired. The second call to "run_problem" seems not to alter the initial value x at all.

If anyone has any idea how I can get around this problem, I would be very grateful!

Thank you very much in advance!

(I am sorry for the layout below (FIXED BY @darioizzo). I cannot figure out the appropriate way to share a notebook here when submitting an issue, hence you can also find the notebook in the attached zip-folder:
opt_test_pygmo_2.zip )

import pygmo as pg
import numpy as np
import time
import matplotlib.pyplot as plt
from numba import jit, float64, int32

class optimize_risk_return:

    def __init__(self, cov, market, weight_bm, objective='cov'):

        self.cov = cov
        self.market = market
        self.weight_bm = weight_bm
        self.n = len(self.weight_bm)
        self.objective = objective

        assert objective in ['cov', 'es'], 'Input "objective" can only be "cov" or "es".'

    def get_nic(self):

        return 0

    def get_nec(self):

        return 1

    def get_bounds(self):

        return ([0]*self.n, [1]*self.n)

    def fitness(self, x):

        if self.objective == 'cov':
            return optimize_risk_return._fitness_std_cov(x, self.cov, self.weight_bm)
        else:
            return optimize_risk_return._fitness_es_market(x, self.market, self.weight_bm, 1000)

    
    @jit(float64[:](float64[:],float64[:,:],float64[:]),nopython=True)
    def _fitness_std_cov(x,cov,bm_w):
    
        ret_val = np.zeros((2,),dtype=float64)
        w = x-bm_w
        ret_val[0] = w.T@cov@w*1000
        ret_val[1] = np.sum(x)-1
        
        return ret_val
    
    @jit(float64[:](float64[:],float64[:,:],float64[:], int32),nopython=True) # , parallel=True
    def _fitness_es_market(x,market,bm_w,n_fractile):
    
        ret_val = np.zeros((2,),dtype=float64)
        w = x-bm_w
        psi = market@w
        indx = np.argsort(psi)
        ret_val[0] = -1*np.mean(psi[indx[:n_fractile]])
        ret_val[1] = np.sum(x)-1
        return ret_val
    
    
    def gradient(self, x):

        if self.objective == 'cov':
            return optimize_risk_return._gradient_std_cov(x, self.cov, self.weight_bm)
        else:
            return optimize_risk_return._gradient_es_market(x, self.market, self.weight_bm, 1000)

    
    @jit(float64[:](float64[:],float64[:,:],float64[:]),nopython=True) # , parallel=True
    def _gradient_std_cov(x,cov,bm_w):
    
        w = x-bm_w
        return np.concatenate((w.T@cov*1000,np.ones((len(w),),dtype=float64)))
    
    @jit(float64[:](float64[:],float64[:,:],float64[:],int32),nopython=True) # , parallel=True
    def _gradient_es_market(x,market,bm_w, n_fractile):
    
        w = x-bm_w
        psi = market@w
        indx = np.argsort(psi)
        tmp = -1*market[indx[:n_fractile],:]*w
        ret_val = np.empty(tmp.shape[1])
        for i in range(len(ret_val)):
            ret_val[i] = np.mean(tmp[:, i])
            
        return np.concatenate((ret_val,np.ones((len(w),),dtype=float64)))
        

weight_bm = np.array([0.11582448, 0.35305939, 0.34733299, 0.10375922, 0.08002392])
cov_mat = np.array([[2.87275736e-04, 6.72493473e-05, 1.68465649e-04, 4.18551925e-04, 1.19171347e-04],
                       [6.72493473e-05, 3.20281710e-05, 5.42226697e-05, 1.17381173e-04, 4.30776541e-05],
                       [1.68465649e-04, 5.42226697e-05, 2.01671669e-04, 2.66489778e-04, 9.81955361e-05],
                       [4.18551925e-04, 1.17381173e-04, 2.66489778e-04, 1.08587282e-03, 3.10847907e-04],
                       [1.19171347e-04, 4.30776541e-05, 9.81955361e-05, 3.10847907e-04, 1.72575569e-04]])
market = np.random.multivariate_normal(np.zeros((5,)),cov_mat,20000)

def run_problem(opt_obj):

    prob = pg.problem(opt_obj)
    print(prob)
    uda = pg.nlopt('auglag')
    uda.ftol_rel = 1e-12
    algo = pg.algorithm(uda = uda)
    algo.extract(pg.nlopt).local_optimizer = pg.nlopt('var2')
    algo.extract(pg.nlopt).local_optimizer.ftol_rel = 1e-20
    algo.set_verbosity(100)  # in this case this corresponds to a log line every 200 objevals
    n = 100
    pop = pg.population(prob = prob, size = n)
    pop.problem.c_tol = [1E-12]

    for i in range(n):
        pop.set_x(i, np.ones((5,))/5)

    start_time = time.time()
    pop = algo.evolve(pop)
    print(time.time()-start_time)

    print('Fevals: {0}'.format(pop.problem.get_fevals()))
    print('Gevals: {0}'.format(pop.problem.get_gevals()))
    best_x = pop.get_x()[pop.best_idx()]
    
    print('Sum of weights (should be 1) = {0}'.format(np.sum(best_x)))
    
    plt.scatter(best_x,opt_obj.weight_bm,s=50)
    plt.scatter(opt_obj.weight_bm,opt_obj.weight_bm,s=25)
    plt.legend(['Actual Result','Desired Result'])


opt_obj = optimize_risk_return(cov_mat, market,weight_bm,'cov')
run_problem(opt_obj)

opt_obj = optimize_risk_return(cov_mat, market,weight_bm,'es')
run_problem(opt_obj)

[BUG] Some tests fail: "Optimization stopped because xtol_rel or xtol_abs was reached", etc.

Describe the bug

===>   py38-pygmo2-2.18.0 depends on file: /usr/local/bin/python3.8 - found
runTest (pygmo.test.core_test_case) ... ok
runTest (pygmo._bfe_test.bfe_test_case) ... ok
runTest (pygmo._bfe_test.thread_bfe_test_case) ... ok
runTest (pygmo._bfe_test.member_bfe_test_case) ... ok
runTest (pygmo._bfe_test.mp_bfe_test_case) ... ok
runTest (pygmo._bfe_test.ipyparallel_bfe_test_case) ... ok
runTest (pygmo._bfe_test.default_bfe_test_case) ... ok
runTest (pygmo.test.archipelago_test_case) ... ERROR
runTest (pygmo._island_test.island_test_case) ... ok
runTest (pygmo._s_policy_test.s_policy_test_case) ... ok
runTest (pygmo._r_policy_test.r_policy_test_case) ... ok
runTest (pygmo._topology_test.topology_test_case) ... ok
runTest (pygmo.test.fair_replace_test_case) ... ok
runTest (pygmo.test.select_best_test_case) ... ok
runTest (pygmo.test.unconnected_test_case) ... ok
runTest (pygmo.test.ring_test_case) ... ok
runTest (pygmo.test.free_form_test_case) ... ok
runTest (pygmo.test.fully_connected_test_case) ... ok
runTest (pygmo.test.thread_island_test_case) ... ok
runTest (pygmo.test.thread_island_torture_test_case) ... ok
runTest (pygmo._problem_test.problem_test_case) ... ok
runTest (pygmo._algorithm_test.algorithm_test_case) ... /usr/local/lib/python3.8/site-packages/scipy/optimize/_minimize.py:524: RuntimeWarning: Method L-BFGS-B does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
/usr/local/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/projections.py:181: UserWarning: Singular Jacobian matrix. Using SVD decomposition to perform the factorizations.
  warn('Singular Jacobian matrix. Using SVD decomposition to ' +
/usr/local/lib/python3.8/site-packages/scipy/optimize/_hessian_update_strategy.py:182: UserWarning: delta_grad == 0.0. Check if the approximated function is linear. If the function is linear better results can be obtained by defining the Hessian as zero instead of using quasi-Newton approximations.
  warn('delta_grad == 0.0. Check if the approximated '
/usr/local/lib/python3.8/site-packages/pygmo/_py_algorithms.py:527: UserWarning: Problem Hock Schittkowsky 71 has constraints and hessians, but trust-constr requires the callable to also accept lagrange multipliers. Thus, hessians of constraints are ignored.
  warnings.warn(
/usr/local/lib/python3.8/site-packages/scipy/optimize/_minimize.py:524: RuntimeWarning: Method SLSQP does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
ok
runTest (pygmo._island_test.mp_island_test_case) ... ok
runTest (pygmo._island_test.ipyparallel_island_test_case) ... ok
runTest (pygmo.test.golomb_ruler_test_case) ... ok
runTest (pygmo.test.lennard_jones_test_case) ... ok
runTest (pygmo.test.de_test_case) ... ok
runTest (pygmo.test.nsga2_test_case) ... ok
runTest (pygmo.test.gaco_test_case) ... ok
runTest (pygmo.test.gwo_test_case) ... ok
runTest (pygmo.test.de1220_test_case) ... ok
runTest (pygmo.test.sea_test_case) ... ok
runTest (pygmo.test.pso_test_case) ... ok
runTest (pygmo.test.pso_gen_test_case) ... ok
runTest (pygmo.test.bee_colony_test_case) ... ok
runTest (pygmo.test.compass_search_test_case) ... ok
runTest (pygmo.test.sa_test_case) ... ok
runTest (pygmo.test.moead_test_case) ... ok
runTest (pygmo.test.sga_test_case) ... ok
runTest (pygmo.test.ihs_test_case) ... ok
runTest (pygmo.test.population_test_case) ... ok
runTest (pygmo.test.null_problem_test_case) ... ok
runTest (pygmo.test.hypervolume_test_case) ... ok
runTest (pygmo.test.mo_utils_test_case) ... ok
runTest (pygmo.test.con_utils_test_case) ... ok
runTest (pygmo.test.global_rng_test_case) ... ok
runTest (pygmo.test.estimate_sparsity_test_case) ... ok
runTest (pygmo.test.estimate_gradient_test_case) ... ok
runTest (pygmo.test.random_decision_vector_test_case) ... ok
runTest (pygmo.test.batch_random_decision_vector_test_case) ... ok
runTest (pygmo.test.cmaes_test_case) ... ok
runTest (pygmo.test.xnes_test_case) ... ok
runTest (pygmo.test.dtlz_test_case) ... ok
runTest (pygmo.test.cec2006_test_case) ... ok
runTest (pygmo.test.cec2009_test_case) ... ok
runTest (pygmo.test.cec2013_test_case) ... ok
runTest (pygmo.test.cec2014_test_case) ... ok
runTest (pygmo.test.luksan_vlcek1_test_case) ... ok
runTest (pygmo.test.minlp_rastrigin_test_case) ... ok
runTest (pygmo.test.rastrigin_test_case) ... ok
runTest (pygmo.test.translate_test_case) ... ok
runTest (pygmo.test.decompose_test_case) ... ok
runTest (pygmo.test.unconstrain_test_case) ... ok
runTest (pygmo.test.mbh_test_case) ... ok
runTest (pygmo.test.cstrs_self_adaptive_test_case) ... ok
runTest (pygmo.test.decorator_problem_test_case) ... ok
runTest (pygmo.test.wfg_test_case) ... ok
runTest (pygmo.test.nlopt_test_case) ... 
 objevals:        objval:      violated:    viol. norm:
         1         146149             18        691.455 i
         6        73016.4             18         2265.5 i
        11        691.388             18        18.8091 i
        16        12.8864             12      0.0650723 i
        21        6.23246              0              0

Optimisation return status: NLOPT_XTOL_REACHED (value = 4, Optimization stopped because xtol_rel or xtol_abs was reached)

 objevals:        objval:      violated:    viol. norm:
         1         192142             18        788.428 i
         6        1058.66             18        23.5218 i
        11        40.2466             12       0.423955 i
        16       0.512006              5      0.0350398 i
        21    1.55921e-13              0              0

Optimisation return status: NLOPT_XTOL_REACHED (value = 4, Optimization stopped because xtol_rel or xtol_abs was reached)
ok
runTest (pygmo.test.ipopt_test_case) ... 
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
 Ipopt is released as open source code under the Eclipse Public License (EPL).
         For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************


 objevals:        objval:      violated:    viol. norm:
         1         143465             18        723.615 i
         6        2069.31             18        43.8783 i
        11        300.481              7        13.6892 i
        16        7.37596              3      0.0130333 i
        21        6.23246              0              0

Optimisation return status: Solve_Succeeded (value = 0)

 objevals:        objval:      violated:    viol. norm:
         1         137229             18        441.927 i
         6        169.094             18         6.2434 i
        11    1.70233e-08              1    2.59369e-06 i

Optimisation return status: Solve_Succeeded (value = 0)
ok

======================================================================
ERROR: runTest (pygmo.test.archipelago_test_case)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/pygmo/test.py", line 878, in runTest
    self.run_torture_test_0()
  File "/usr/local/lib/python3.8/site-packages/pygmo/test.py", line 1348, in run_torture_test_0
    archi2 = archipelago(n=1000, algo=de(
  File "/usr/local/lib/python3.8/site-packages/pygmo/__init__.py", line 575, in _archi_init
    self.push_back(**kwargs)
  File "/usr/local/lib/python3.8/site-packages/pygmo/__init__.py", line 620, in _archi_push_back
    self._push_back(island(**kwargs))
RuntimeError: thread constructor failed: Resource temporarily unavailable

----------------------------------------------------------------------
Ran 69 tests in 63.152s

FAILED (errors=1)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/pygmo/test.py", line 2957, in run_test_suite
    raise RuntimeError('One or more tests failed.')
RuntimeError: One or more tests failed.
*** Error code 1

Version: 2.18.0
Python: 3.8
OS: FreeBSD 13

pip for OSX

Is there any reason pygmo does not have a pip option for OSX? Are there any plans to provide it?

[BUG] Meta-problem decorator for the fitness function does not work with algo.evolve(pop)

Describe the bug
Meta-problem decorator for the fitness function does not work with algo.evolve(pop)

To Reproduce
Below code mostly taken from: https://esa.github.io/pygmo2/tutorials/udp_meta_decorator.html

import pygmo as pg

def f_log_decor(orig_fitness_function):
    def new_fitness_function(self, dv):
        if hasattr(self, "dv"):
            self.dv.append(dv)
        else:
            self.dv = [dv]
        return orig_fitness_function(self, dv)
    return new_fitness_function

rb = pg.rosenbrock()
drb = pg.problem(pg.decorator_problem(rb, fitness_decorator=f_log_decor))
algo = pg.algorithm(pg.cmaes(gen=20, sigma0=0.3))
pop = pg.population(drb, 10)
pop = algo.evolve(pop)

assert(hasattr(drb.extract(pg.decorator_problem), "dv"))

Expected behavior
While the documentation for the decorator is nice and works as expected when calling the fitness function directly, the standard way to execute Pygmo is via algo.evolve(pop), which seems to render the decorator useless since the fitness function isn't triggered.
This should probably not be labeled as a bug, but it is certainly unexpected behaviour, as we would expect the decorator to be called as part of the algorithm evolving.
Alternatively, there is perhaps a way to call the fitness function directly to accomplish what algo.evolve() does, but I haven't seen a clear example in the docs.

Screenshots
NA

Environment (please complete the following information):

  • OS: Mac OS 10.15.7
  • Installation method: conda
  • Version: 2.16.1

Additional context

[QUESTIONS/FEATURE] pygmo.decompose_objectives

Is your feature request related to a problem? Please describe.

I have three questions about pygmo.decompose_objectives.

  1. Does this function assume that the objectives are normalized, i.e., all in the range [0,1]?

  2. Should the ref_point be the "best" point or the "worst" point? I assume the best point, but in the example it looks like the worst point (assuming minimization).

  3. The Tchebycheff decomposition is not "augmented", correct? (See pages 7 and 8 in this paper.)

Describe the solution you'd like

It would be helpful to update the documentation / example accordingly.

It would be helpful to implement the augmented Tchebycheff decomposition.

Describe alternatives you've considered

I have implemented the augmented Tchebycheff decomposition myself, but it would still be convenient if it were included in the library.

pickle.loads fails to find module for user-defined problem for ipyparallel_island

Is there a way to use ipyparallel_island archipelago with a user-defined problem I created using modules loaded from my working directory?

archi.wait_check() yields:

RuntimeError: The asynchronous evolution of a pythonic island of type 'Ipyparallel island' raised an error:
Traceback (most recent call last):
  File "C:\Users\Skyler\Anaconda3\lib\site-packages\pygmo\_py_islands.py", line 593, in run_evolve
    return pickle.loads(ret.get())
  File "C:\Users\Skyler\Anaconda3\lib\site-packages\ipyparallel\client\asyncresult.py", line 169, in get
    raise self.exception()
  File "C:\Users\Skyler\Anaconda3\lib\site-packages\ipyparallel\client\asyncresult.py", line 226, in _resolve_result
    raise r
ipyparallel.error.RemoteError: ModuleNotFoundError(No module named 'module_containing_udp')

It seems like a pickling issue; is there a work-around?

The problem solves when not using an archipelago, and pygmo.rosenbrock solves when I use ipyparallel_island.

allowing nested parallelism in python multiprocessing

Right now the use of an archipelago with, for example, a bfe based algo will not work if python multiprocessing islands are used with a multiprocessing bfe. The reason is that daemonic processes do not allow nested parallelism.

AssertionError: daemonic processes are not allowed to have children

A possible solution would be the use of non daemonic processes in the python mp module.

See related discussions:
https://stackoverflow.com/questions/28491558/launching-nested-processes-in-multiprocessing
https://stackoverflow.com/questions/6974695/python-process-pool-non-daemonic
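The non-daemonic-process workaround from those threads can be sketched with the standard library alone. This is the generic recipe (names like `NestablePool` are illustrative), not pygmo's own mp machinery:

```python
import multiprocessing
import multiprocessing.pool

class NoDaemonProcess(multiprocessing.Process):
    # make the daemon flag a read-only False so workers may have children
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass

class NoDaemonContext(type(multiprocessing.get_context())):
    Process = NoDaemonProcess

class NestablePool(multiprocessing.pool.Pool):
    # a Pool whose workers are non-daemonic, so they can spawn their own pools
    def __init__(self, *args, **kwargs):
        kwargs["context"] = NoDaemonContext()
        super().__init__(*args, **kwargs)

def inner(x):
    return x * x

def outer(x):
    # a nested pool inside a worker: this raises AssertionError with a
    # regular (daemonic) Pool, but works under NestablePool
    with multiprocessing.Pool(1) as p:
        return p.map(inner, [x])[0]

def run_demo():
    with NestablePool(2) as p:
        return p.map(outer, [2, 3])

if __name__ == "__main__":
    print(run_demo())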

Mac OS install breaks: CMake only finds Python 2

When attempting to build on Mac, CMake will only find Python 2 and abort. I believe CMake has functions for specifically finding Python 3 packages, which should probably be the default, since Python 2 is no longer supported.

Population champion not updated after evolution

Hello,
I'm trying to use pygmo to optimize the position of some plants in a garden. The fitness function is non-linear, and I also have a non-linear equality constraint. I used scipy's local solver "trust-constr".
The problem I have is that the champion does not seem to be updated after the evolution.

Printing the population before and after evolution shows the individual converging to the minimum, but the champion is not updated.
What could be the problem? (See the prints below.)

The code I use is very simple:
gd_pos = Pl.PlantPosition(inter_mat, lh_garden, pos_plant_fixed, radii_non_overlap, radii_inter) # user defined problem
prob = pg.problem(gd_pos)
algo = pg.algorithm(pg.scipy_optimize(method="trust-constr"))
pop = pg.population(prob, size=1)
print(pop)
pop = algo.evolve(pop)
print(pop)

PRINT BEFORE EVOLUTION:
"Problem name: Plant Positioning
C++ class name: pybind11::object

Global dimension:			6
Integer dimension:			0
Fitness dimension:			2
Number of objectives:			1
Equality constraints dimension:		1
Inequality constraints dimension:	0
Tolerances on constraints: [0]
Lower bounds: [0, 0, 0, 0, 0, ... ]
Upper bounds: [1, 1, 1, 1, 1, ... ]
Has batch fitness evaluation: false

Has gradient: true
User implemented gradient sparsity: false
Expected gradients: 12
Has hessians: false
User implemented hessians sparsity: false

Fitness evaluations: 1
Gradient evaluations: 0

Thread safety: none

Extra info:

Number of plants:
3

[XY] size of the garden:
[1 1]

Matrix interaction:
[[0 1 1]
[1 0 1]
[1 1 0]]

Radii vector:
[0.1 0.1 0.1]

Population size: 1

List of individuals:
#0:
ID: 11859392664243719972
Decision vector: [0.411191, 0.620839, 0.740699, 0.543479, 0.922681, ... ]
Fitness vector: [-3.37961, 0]

Champion decision vector: [0.411191, 0.620839, 0.740699, 0.543479, 0.922681, ... ]
Champion fitness: [-3.37961, 0]
"

PRINT AFTER THE EVOLUTION:
"Problem name: Plant Positioning
C++ class name: pybind11::object

Global dimension:			6
Integer dimension:			0
Fitness dimension:			2
Number of objectives:			1
Equality constraints dimension:		1
Inequality constraints dimension:	0
Tolerances on constraints: [0]
Lower bounds: [0, 0, 0, 0, 0, ... ]
Upper bounds: [1, 1, 1, 1, 1, ... ]
Has batch fitness evaluation: false

Has gradient: true
User implemented gradient sparsity: false
Expected gradients: 12
Has hessians: false
User implemented hessians sparsity: false

Fitness evaluations: 171
Gradient evaluations: 169

Thread safety: none

Extra info:

Number of plants:
3

[XY] size of the garden:
[1 1]

Matrix interaction:
[[0 1 1]
[1 0 1]
[1 1 0]]

Radii vector:
[0.1 0.1 0.1]

Population size: 1

List of individuals:
#0:
ID: 11859392664243719972
Decision vector: [0.454543, 0.577233, 0.652667, 0.45644, 0.61438, ... ]
Fitness vector: [-5.67577, 7.33971e-17]

Champion decision vector: [0.411191, 0.620839, 0.740699, 0.543479, 0.922681, ... ]
Champion fitness: [-3.37961, 0]
"

Documentation of parallel processing of individual fitness function evaluations

I have been tinkering with pygmo to automate the optimisation of a 3D structure in an electromagnetism problem. The evaluation of the fitness function can take up to 1h on an AWS cluster. In this type of problem, sequential computation of generations of more than a couple of individuals is out of the question. The batch parallel initialization helps, but it is of course just the first step.

I was wondering whether any of the algorithms in the library can reuse this same parallel evaluation for subsequent generations, or whether it would be possible to launch individual evaluations in parallel through the existing mp/thread/ipyparallel based classes. As long as the parameters of the individuals in a generation do not depend on the results of other members of the same generation, I guess this should work.

Issues with @jit and archipelago

Hi!
I am trying to use the jit decorator to speed up a large problem.
I am testing with the following class:

import time
import numpy as np
import pygmo as pg
from numba import jit, float64

class toy_problem_o2:

    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [toy_problem_o2._a(x)[0], toy_problem_o2._b(x)[0],
                -toy_problem_o2._c(x)[0]]

    @jit(float64[:](float64[:]), nopython=True)
    def _a(x):
        retval = np.zeros((1,))
        for x_i in x:
            retval[0] += x_i
        return retval

    @jit(float64[:](float64[:]), nopython=True)
    def _b(x):
        retval = np.zeros((1,))
        sqr = np.zeros((1,))
        for x_i in x:
            sqr[0] += x_i*x_i
        retval[0] = 1. - sqr[0]
        return retval

    @jit(float64[:](float64[:]), nopython=True)
    def _c(x):
        retval = np.zeros((1,))
        for x_i in x:
            retval[0] += x_i
        return retval

    def gradient(self, x):
        return pg.estimate_gradient(lambda x: self.fitness(x), x)  # numerical gradient

    def get_nec(self):
        return 1

    def get_nic(self):
        return 1

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "A toy problem, 2nd optimization"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)

def archipielago_opt(f, n):
    start = time.time()
    funct = f(n)
    name = funct.get_name()

    a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
    t1 = time.time()
    p_toy = pg.problem(funct)
    p_toy.c_tol = [1e-4, 1e-4]
    archi = pg.archipelago(n=16, algo=a_cstrs_sa, prob=p_toy, pop_size=10)
    archi.evolve(2)
    archi.wait_check()

if __name__ == '__main__':
    archipielago_opt(toy_problem_o2, 2)

I get an AssertionError:

RuntimeError: The asynchronous evolution of a pythonic island of type 'Multiprocessing island' raised an error:
Traceback (most recent call last):
  File "D:\Programming\Anaconda3\lib\site-packages\pygmo\_py_islands.py", line 225, in run_evolve
    ser_algo_pop = pickle.dumps((algo, pop))
  File "D:\Programming\Anaconda3\lib\site-packages\cloudpickle\cloudpickle.py", line 1125, in dumps
    cp.dump(obj)
  File "D:\Programming\Anaconda3\lib\site-packages\cloudpickle\cloudpickle.py", line 482, in dump
    return Pickler.dump(self, obj)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 437, in dump
    self.save(obj)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 549, in save
    self.save_reduce(obj=obj, *rv)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 633, in save_reduce
    save(cls)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj)  # Call unbound method with explicit self
  File "D:\Programming\Anaconda3\lib\site-packages\cloudpickle\cloudpickle.py", line 877, in save_global
    self.save_dynamic_class(obj)
  File "D:\Programming\Anaconda3\lib\site-packages\cloudpickle\cloudpickle.py", line 686, in save_dynamic_class
    save(clsdict)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 504, in save
    f(self, obj)  # Call unbound method with explicit self
  File "D:\Programming\Anaconda3\lib\pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "D:\Programming\Anaconda3\lib\pickle.py", line 885, in _batch_setitems
    save(v)
  File "D:\Programming\Anaconda3\lib\pickle.py", line 524, in save
    rv = reduce(self.proto)
  File "D:\Programming\Anaconda3\lib\site-packages\numba\dispatcher.py", line 626, in __reduce__
    (self.__class__, str(self._uuid),
  File "D:\Programming\Anaconda3\lib\site-packages\numba\dispatcher.py", line 661, in _uuid
    self._set_uuid(u)
  File "D:\Programming\Anaconda3\lib\site-packages\numba\dispatcher.py", line 665, in _set_uuid
    assert self.__uuid is None
AssertionError

When I run it without the jit decorator, the problem works fine and I get the theoretical optimal solutions for 2 variables.

Can it be made to work with the jit decorator?
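One workaround consistent with the traceback: the failure occurs while cloudpickle serializes the numba Dispatcher objects stored on the dynamically defined class. Functions that live in a real importable module are pickled by reference (module path plus name) rather than by value, so moving the @jit helpers into their own module should avoid serializing the Dispatcher at all. A runnable sketch of the mechanism (the helper omits @jit so the sketch runs without numba; `fast_helpers` is a hypothetical module name):

```python
import os
import pickle
import sys
import tempfile
import textwrap

# The workaround: put the jitted helpers in a real, importable module.
# In a real project this would simply be a file next to your script.
helper_src = textwrap.dedent('''
    # fast_helpers.py -- keep the @jit-decorated functions here instead of
    # as attributes of the UDP class defined in __main__.
    def _a(x):          # stand-in for the @jit(float64[:](float64[:])) version
        return sum(x)
''')

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "fast_helpers.py"), "w") as fh:
    fh.write(helper_src)
sys.path.insert(0, tmpdir)

import fast_helpers

class toy_problem_o2:
    # the UDP now only *references* the helpers through the module
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [fast_helpers._a(x)]

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

# pickled as a reference to "fast_helpers._a", not by value
blob = pickle.dumps(fast_helpers._a)
restored = pickle.loads(blob)
print(restored([1, 2, 3]))
```

Upgrading numba may also help, since the `assert self.__uuid is None` failure looks like a serialization bug inside the Dispatcher itself.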

How to specify an initial starting point in Pygmo

Dear All,
I am using Pygmo to solve both single- and multi-objective optimization problems. I have indicated an initial starting point for each method, but it did not work when running the code. Could you take a look at the following modified code to see if I have specified the initial starting point correctly? Thank you very much for your kind help.

Code for single objective optimization problem:

# Define the pygmo class
class alge_function:
    def __init__(self):
        self.starting = np.random.randint(1, 5, 16)

    def fitness(self, x):
        x = np.array(x, dtype=int)  # to make integers
        x = list(x)
        vec = look_up_function(x)
        obj = scalar_function(w, vec)
        return [obj]

    # number of objectives
    #def get_nobj(self):
    #    return 18

    # integer dimension
    def get_nix(self):
        return 16

    # bounds [1, 4]
    def get_bounds(self):
        return ([1] * 16, [4] * 16)

    # gradient
    def gradient(self, x):
        return pg.estimate_gradient_h(lambda x: self.fitness(x), x)

# use "slsqp" - the sequential least squares programming algorithm
start = time.time()
algo = pg.algorithm(uda=pg.mbh(pg.nlopt("slsqp"), stop=20, perturb=.2))
algo.set_verbosity(1)  # print dialog

# Formulate the minimization problem
pop = pg.population(prob=alge_function(), size=200)

# Solve the problem
pop = algo.evolve(pop)

Code for the multi-objective optimization problem:

# Define the pygmo class
class alge_function:
    def __init__(self):
        self.starting = [1.0] * 16

    def fitness(self, x):
        x = np.array(x, dtype=int)  # to make integers
        x = list(x)
        objvector = look_up_function(x)
        objv1 = reduce_function1(objvector)  # 13 objectives
        objv2 = reduce_function2(objv1)      # 8 objectives
        fs = reduce_function(objv2)          # reduce to 4 objectives
        return fs

    # number of objectives
    def get_nobj(self):
        return 4

    # integer dimension
    def get_nix(self):
        return 16

    # bounds [1, 4]
    def get_bounds(self):
        return ([1] * 16, [4] * 16)

# Formulate the problem
pro = pg.problem(alge_function())

# create a random population of 200 individuals
pop = pg.population(pro, size=200)

# Method: (nspso) Non-dominated Sorting Particle Swarm Optimization
algo = pg.algorithm(pg.nspso(gen=1000))  # gen: number of generations
algo.set_verbosity(100)  # print dialog

# Solve the problem
pop = algo.evolve(pop)
fits, vectors = pop.get_f(), pop.get_x()
print(pro)

Installing on Mac OS 11.5.2

Hi. I'm trying to install pygmo through conda as instructed.
When I run
import pygmo
pygmo.test.run_test_suite()
the test suite reports several errors. What can I do to fix them?

[FEATURE] save&load (e.g., pickle) population snapshot easily based on certain criteria

Hi,

I couldn't find any example of how to save and load the population during evolution.

For example, something like this would be really useful:

pseudo code:

for i in range(n_iterations):
    pop =  evolve(pop)
    if pop.champion_f < best_f:
         save_snapshot(pop, i)

I wonder how / whether this would be easily possible with pygmo. Any example would be highly appreciated!

P.S: Would this way of evolving break any adaptive DE variants ?
Many Thanks

NSPSO doesn't work correctly with integer variables

In my tests, NSPSO doesn't appear to work correctly with integer variables:

After the first population, which works fine, subsequent populations have non-integer values even for "integer" variables.

The same is true for MOEA/D but, according to the documentation, MOEA/D doesn't handle integers.
So perhaps the issue is not with the implementation of NSPSO, but with the documentation?

Thanks!
Thomas

PS: With NSGA-II, integer handling works fine, which is why I think that the issue isn't on my end.
