pymoo's People

Contributors

alaya-in-matrix, anyopt-demo-student, apanichella, avivsham, blankjul, cyrilpic, davide-q, deathn0t, dependabot[bot], emenod1, ewouth, fanoway, gemsanyu, gresavage, hugolmn, jentrialgo, joshkarpel, jschmie, mkouhia, mlopez-ibanez, mooscaliaproject, peng-ym, rzshrote, sebastianmortag, shoaib42, stonebig, thchang, tomtkg, waterfall-xi, yashvesikar

pymoo's Issues

Multi-objective discrete parameter sampling

Hi Julian,

Thank you for your work and for providing such a great framework to the community.

I've been trying, in vain, to get NSGA2 working with discrete parameter sampling.
Does the framework, in its current state, support discrete parameter sampling
for multi-objective problems?
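
A minimal sketch of one way to do this, assuming the factory registers integer variants of the standard operators ("int_random", "int_sbx", "int_pm"), as documented for recent pymoo versions:

import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation, get_termination
from pymoo.model.problem import Problem
from pymoo.optimize import minimize

class DiscreteDummy(Problem):
    # hypothetical toy problem: every variable takes integer values in [0, 10]
    def __init__(self):
        super().__init__(n_var=5, n_obj=2, n_constr=0,
                         xl=np.zeros(5), xu=np.full(5, 10))

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.column_stack([x.sum(axis=1), (10 - x).sum(axis=1)])

algorithm = NSGA2(
    pop_size=40,
    sampling=get_sampling("int_random"),       # integer random sampling
    crossover=get_crossover("int_sbx", prob=0.9, eta=15),
    mutation=get_mutation("int_pm", eta=20),   # integer polynomial mutation
    eliminate_duplicates=True
)

res = minimize(DiscreteDummy(), algorithm, get_termination("n_gen", 50), seed=1)
print(res.X)  # rows stay integer-valued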

How to print custom output in the subset problem

In the subset problem there are hundreds of possible variables. I want to track the number of variables that are not selected for evaluation and use that count to adjust the mutation parameters. For example, compute x.sum(0) in the _evaluate function and add the resulting value as a column to the printed output.
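
One way to track such a statistic without touching pymoo's display code is a callback; a rough sketch, assuming minimize accepts a callback function that is invoked with the algorithm once per generation (the interface around pymoo 0.3.x; newer releases wrap this in a Callback class):

from pymoo.optimize import minimize

def track_unselected(algorithm):
    X = algorithm.pop.get("X")                        # current population, one row per individual
    never_selected = int((X.sum(axis=0) == 0).sum())  # variables picked by no individual
    print(f"gen {algorithm.n_gen}: {never_selected} variables never selected")
    # the count could also be stored on the algorithm and used to adapt mutation parameters

# problem, algorithm and termination: whatever was set up for the subset run (placeholders here)
res = minimize(problem, algorithm, termination, callback=track_unselected, seed=1)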

TypeError: can't pickle _thread.RLock objects

The code below gives multiple errors. Did I define my problem in the wrong way, or is this a bug?

import numpy as np

from pymoo.model.problem import Problem
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation, get_termination
from pymoo.optimize import minimize

initial_population = 60
num_individuals_per_generation = 60
max_num_generations = 100

class OptimQuads(Problem):

    def __init__(self, surrogate):
        super().__init__(
            n_var=8,
            n_obj=2,
            n_constr=0,
            xl=np.array([200, 170, -30, 1.5, -8, -8, -8, -8]),
            xu=np.array([500, 260, 0, 10, 8, 8, 8, 8])
        )

        self._model = surrogate

    def _evaluate(self, x, out, *args, **kwargs):
        prediction = self._model.predict(x)
        # just a dummy for minimal example
        out['F'] = np.ones((x.shape[0], 2))

problem = OptimQuads(surr)

algorithm = NSGA2(
    pop_size=initial_population,
    n_offsprings=num_individuals_per_generation,
)

termination = get_termination("n_gen", max_num_generations)

res = minimize(problem,
               algorithm,
               termination,
               seed=1,
               pf=problem.pareto_front(use_cache=False),
               save_history=True,
               verbose=False)

This displays the following error message:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-d91e210c618c> in <module>
      5                pf=problem.pareto_front(use_cache=False),
      6                save_history=True,
----> 7                verbose=False)

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/optimize.py in minimize(problem, algorithm, termination, **kwargs)
     56 
     57     # actually execute the algorithm
---> 58     res = algorithm.solve()
     59 
     60     # store the copied algorithm in the result object

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in solve(self)
    152 
    153         # call the algorithm to solve the problem
--> 154         self._solve(self.problem)
    155 
    156         # store the resulting population

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in _solve(self, problem)
    201         self.n_gen = 1
    202         self._initialize()
--> 203         self._each_iteration(self, first=True)
    204 
    205         # while termination criterion not fulfilled

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in _each_iteration(self, D, first, **kwargs)
    229             self.history, self.callback = None, None
    230 
--> 231             obj = copy.deepcopy(self)
    232             self.history = hist
    233             self.callback = _callback

/usr/lib64/python3.6/copy.py in deepcopy(x, memo, _nil)
    178                     y = x
    179                 else:
--> 180                     y = _reconstruct(x, memo, *rv)
    181 
    182     # If is its own copy, don't memoize.

[... the same copy.py frames (deepcopy, _reconstruct, _deepcopy_dict, _deepcopy_list) repeat many more times while the algorithm object is copied recursively ...]

/usr/lib64/python3.6/copy.py in deepcopy(x, memo, _nil)
    167                     reductor = getattr(x, "__reduce_ex__", None)
    168                     if reductor:
--> 169                         rv = reductor(4)
    170                     else:
    171                         reductor = getattr(x, "__reduce__", None)

TypeError: can't pickle _thread.RLock objects

When I set verbose = True in minimize(), I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-11-fec0f54fcc98> in <module>
      5                pf=problem.pareto_front(use_cache=False),
      6                save_history=True,
----> 7                verbose=True)

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/optimize.py in minimize(problem, algorithm, termination, **kwargs)
     56 
     57     # actually execute the algorithm
---> 58     res = algorithm.solve()
     59 
     60     # store the copied algorithm in the result object

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in solve(self)
    152 
    153         # call the algorithm to solve the problem
--> 154         self._solve(self.problem)
    155 
    156         # store the resulting population

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in _solve(self, problem)
    201         self.n_gen = 1
    202         self._initialize()
--> 203         self._each_iteration(self, first=True)
    204 
    205         # while termination criterion not fulfilled

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/algorithm.py in _each_iteration(self, D, first, **kwargs)
    216         # display the output if defined by the algorithm
    217         if self.verbose and self.func_display_attrs is not None:
--> 218             disp = self.func_display_attrs(self.problem, self.evaluator, self, self.pf)
    219             if disp is not None:
    220                 self._display(disp, header=first)

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/util/display.py in disp_multi_objective(problem, evaluator, algorithm, pf)
     70     if len(feasible) > 0:
     71         if pf is not None:
---> 72             attrs.append(('igd', format_float(IGD(pf).calc(F[feasible])), width))
     73             attrs.append(('gd', format_float(GD(pf).calc(F[feasible])), width))
     74             if problem.n_obj == 2:

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/model/indicator.py in calc(self, F)
     47         self.range = np.ones(self.n_dim)
     48 
---> 49         return self._calc(F)
     50 
     51     @abc.abstractmethod

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/performance_indicator/distance_indicator.py in _calc(self, F)
     24 
     25     def _calc(self, F):
---> 26         D = vectorized_cdist(self.pf, F, func_dist=self.dist_func, norm=self.range)
     27         return np.mean(np.min(D, axis=self.axis))
     28 

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/util/misc.py in vectorized_cdist(A, B, func_dist, **kwargs)
    100     v = np.tile(B, (A.shape[0], 1))
    101 
--> 102     D = func_dist(u, v, **kwargs)
    103     M = np.reshape(D, (A.shape[0], B.shape[0]))
    104     return M

/data/user/bellotti_r/.env/lib64/python3.6/site-packages/pymoo/performance_indicator/distance_indicator.py in euclidean_distance(a, b, norm)
      6 
      7 def euclidean_distance(a, b, norm=None):
----> 8     return np.sqrt((((a - b) / norm) ** 2).sum(axis=1))
      9 
     10 

TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

My versions are:

  • pymoo: 0.3.1
  • numpy: 1.17.2

Initialize population of choice or access the initialized population by toolbox?

Hi,

Is there any way to access the initial population defined by the toolbox? Apart from the decision variables, I have one additional variable in the objective function whose value depends upon the decision variables and comes from external software. In other words, I pass the initial population of decision variables to the software, get the values of the other variable, and then evaluate the objective function. Is there a way to get the initial population before calling the "minimize" function?

Thanks in advance.
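
One workaround (a sketch; the array-as-sampling interface is an assumption based on recent pymoo versions): draw the initial decision variables yourself, run them through the external software, and then hand the very same array back to the algorithm as its sampling:

import numpy as np
from pymoo.algorithms.nsga2 import NSGA2

# problem: the user-defined Problem instance (placeholder here)
n_individuals = 100
X0 = problem.xl + np.random.random((n_individuals, problem.n_var)) * (problem.xu - problem.xl)

# ... pass X0 to the external software, collect the additional variable ...

# recent pymoo versions accept a plain array (or Population) as sampling, so exactly
# these individuals become the initial population (an assumption; check your version)
algorithm = NSGA2(pop_size=n_individuals, sampling=X0)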

First run issue

Good morning,
I installed pymoo using pip on a win10 pc.
When I run one of the examples (I started with the one in the README file), I always get this error:

File "<__array_function__ internals>", line 4, in any
NameError: name 'dispatcher' is not defined

How can I fix it?
Thank you!

EDIT
I'm really sorry, I just figured out that something broke my Anaconda installation.
I solved it by reinstalling Anaconda.

Different decision variables using different value ranges

I have set up a problem myself with n_var = 20, but the 20 variables should have different value ranges. For example, the first variable can take the values 0, 1, 2, 3; the second variable 0 and 2; and the twentieth variable 0, 1, 3. Therefore I cannot use lb and ub to control the upper and lower limits; I need to specify the admissible values for each variable individually.

Hope to get your help
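
One common workaround (a sketch, not a built-in feature of pymoo): let each variable be an integer index into its own list of admissible values, bound it by the list length, and decode the indices inside _evaluate. Combined with integer sampling/crossover/mutation operators, x then stays on the index grid:

import numpy as np
from pymoo.model.problem import Problem

# admissible values per variable (three variables shown; extend to all 20)
CHOICES = [
    [0, 1, 2, 3],   # variable 1
    [0, 2],         # variable 2
    [0, 1, 3],      # variable 3
]

class ChoiceProblem(Problem):

    def __init__(self):
        super().__init__(n_var=len(CHOICES), n_obj=2, n_constr=0,
                         xl=np.zeros(len(CHOICES)),
                         xu=np.array([len(c) - 1 for c in CHOICES]))

    def _evaluate(self, x, out, *args, **kwargs):
        idx = np.round(x).astype(int)                       # indices into the choice lists
        vals = np.column_stack([np.asarray(c)[idx[:, j]]    # decoded variable values
                                for j, c in enumerate(CHOICES)])
        # dummy objectives, replace with the real ones
        out["F"] = np.column_stack([vals.sum(axis=1), -vals.min(axis=1)])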

How do you handle equality constraints?

Thanks for this great library.
I am wondering if it is possible to handle equality constraints and how to do it in pymoo.
For example, if we want the sum of x <= 500, we write:
g1 = anp.sum(x, axis=1) - 500
What should we do if we want the sum of x = 500?
Thanks!
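
pymoo 0.3.x only takes inequality constraints of the form g(x) <= 0, so a common trick is to relax the equality to a small tolerance band; a sketch:

import autograd.numpy as anp

def _evaluate(self, x, out, *args, **kwargs):
    # objectives unchanged (omitted here)
    eps = 1e-3                                        # tolerance, problem-specific
    h = anp.sum(x, axis=1) - 500                      # equality residual, want h == 0
    out["G"] = anp.column_stack([anp.abs(h) - eps])   # feasible iff |sum(x) - 500| <= eps

Equivalently, the equality can be split into the two inequalities h - eps <= 0 and -h - eps <= 0.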

Found a bug in nsga3.py

    pop = self.pop[I]
    print(pop.shape)
    print(self.pop.get("is_closest"))
    if len(pop) == 1:
        self.opt = pop
    else:
        self.opt = pop[self.pop.get("is_closest")]

Here the last line should be:
self.opt = self.pop[self.pop.get("is_closest")]

Can I save the population every 1000 evaluations?

Currently, pymoo saves the population to res.history every generation. However, when comparing different algorithms, the number of function evaluations per generation usually differs.

When comparing the convergence of different algorithms, I would like to compare the populations every 1000 (or some other pre-specified number of) evaluations. Is there any way to do that? Thanks in advance!
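
A sketch using a callback and the evaluator's evaluation counter, assuming minimize accepts a callback function that is called with the algorithm once per generation (the interface around pymoo 0.3.x; newer releases use a Callback class):

from pymoo.optimize import minimize

snapshots = []       # (n_eval, X, F) triples at roughly 1000-evaluation milestones
milestone = [1000]   # list so the closure can update it in place

def snapshot_every_1000(algorithm):
    n_eval = algorithm.evaluator.n_eval
    if n_eval >= milestone[0]:
        snapshots.append((n_eval, algorithm.pop.get("X"), algorithm.pop.get("F")))
        milestone[0] += 1000

# problem, algorithm and termination: whatever was set up for the run (placeholders here)
res = minimize(problem, algorithm, termination,
               callback=snapshot_every_1000, seed=1)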

Provide "warm start" population

I am curious whether there is a way to provide initial input values to the algorithm so that the search is quicker.

I am not very familiar with genetic algorithms, so correct me if I am wrong about this.

Do the algorithms include NSGA-III Part II?

I only found "selection=TournamentSelection(func_comp=comp_by_cv_then_random)" in nsga3.py. But I didn't find code about addition of new reference points and deletion of existing reference points,which are also included in the paper, An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach,
Part {II}: Handling Constraints and Extending to an Adaptive Approach.
Looking forward to your reply.

@Article{nsga3-part2,
author={H. Jain and K. Deb},
journal={IEEE Transactions on Evolutionary Computation},
title={An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach,
Part {II}: Handling Constraints and Extending to an Adaptive Approach},
year={2014},
volume={18},
number={4},
pages={602-622},
ISSN={1089-778X},
month={Aug},
}

Manual initialisation

Is it possible to initialise the population manually, e.g. by giving an array to the Problem constructor?

Is there any way to save and load the result?

The way I save the result is really ugly:

     res = search(problem,
             pop_size=args.pop_size,
             n_offsprings=args.n_offsprings,
             n_generations=args.n_generations,
             seed=args.seed)
     state = dict(res=res, problem=problem)
     torch.save(state, f"{save_path}/search.tar")
     logger.success(res)
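
A lighter-weight alternative (a sketch): persist only the plain arrays of the result with NumPy, which reload without any pymoo- or pickling-related dependencies:

import numpy as np

# save the non-dominated solutions and their objective values
np.savez(f"{save_path}/search.npz", X=res.X, F=res.F)

# ... later ...
data = np.load(f"{save_path}/search.npz")
X, F = data["X"], data["F"]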

Runtime error when running parallel experiments on Windows OS

RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.

This error comes up when I try to run with parallelization on Windows, but not when I run on Linux.
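
On Windows, multiprocessing starts workers by re-importing the main module, so everything that launches the run has to sit behind the standard guard; a minimal sketch (with "zdt1" standing in for the parallelized problem):

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize

def main():
    problem = get_problem("zdt1")   # replace with the problem that uses parallelization
    res = minimize(problem, NSGA2(pop_size=100),
                   get_termination("n_gen", 50), seed=1)
    print(res.F)

if __name__ == '__main__':
    # without this guard the worker processes re-execute the module top level
    # and immediately try to spawn more workers, which raises the error above
    main()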

How to recompile libraries

I have recently installed pymoo, and when I ran the command to verify whether the speedup libraries were compiled, the following message appeared:

Compiled libraries can not be used. Compile for speedup!

How can I recompile (or try to compile) these libraries manually?

import module error

There is no folder for the Problem definition, so
from pymop.problem import Problem
gives an error.
Please check.
Thank you
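
In the pymoo 0.3.x layout the Problem base class used by the current examples lives in pymoo itself rather than in pymop, so the import would read:

from pymoo.model.problem import Problem   # instead of: from pymop.problem import Problem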

[Question] Error

Hello~
I'm following the getting-started guide (https://pymoo.org/getting_started.html) and changing the constraints and objectives, but it shows the error below:


TypeError Traceback (most recent call last)
in
13 pf=problem.pareto_front(use_cache=False),
14 save_history=True,
---> 15 verbose=True)

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\optimize.py in minimize(problem, algorithm, termination, **kwargs)
56
57 # actually execute the algorithm
---> 58 res = algorithm.solve()
59
60 # store the copied algorithm in the result object

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\model\algorithm.py in solve(self)
152
153 # call the algorithm to solve the problem
--> 154 self._solve(self.problem)
155
156 # store the resulting population

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\model\algorithm.py in _solve(self, problem)
201 self.n_gen = 1
202 self._initialize()
--> 203 self._each_iteration(self, first=True)
204
205 # while termination criterion not fulfilled

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\model\algorithm.py in _each_iteration(self, D, first, **kwargs)
216 # display the output if defined by the algorithm
217 if self.verbose and self.func_display_attrs is not None:
--> 218 disp = self.func_display_attrs(self.problem, self.evaluator, self, self.pf)
219 if disp is not None:
220 self._display(disp, header=first)

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\util\display.py in disp_multi_objective(problem, evaluator, algorithm, pf)
70 if len(feasible) > 0:
71 if pf is not None:
---> 72 attrs.append(('igd', format_float(IGD(pf).calc(F[feasible])), width))
73 attrs.append(('gd', format_float(GD(pf).calc(F[feasible])), width))
74 if problem.n_obj == 2:

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\model\indicator.py in calc(self, F)
47 self.range = np.ones(self.n_dim)
48
---> 49 return self._calc(F)
50
51 @abc.abstractmethod

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\performance_indicator\distance_indicator.py in _calc(self, F)
24
25 def _calc(self, F):
---> 26 D = vectorized_cdist(self.pf, F, func_dist=self.dist_func, norm=self.range)
27 return np.mean(np.min(D, axis=self.axis))
28

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\util\misc.py in vectorized_cdist(A, B, func_dist, **kwargs)
100 v = np.tile(B, (A.shape[0], 1))
101
--> 102 D = func_dist(u, v, **kwargs)
103 M = np.reshape(D, (A.shape[0], B.shape[0]))
104 return M

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\performance_indicator\distance_indicator.py in euclidean_distance(a, b, norm)
6
7 def euclidean_distance(a, b, norm=None):
----> 8 return np.sqrt((((a - b) / norm) ** 2).sum(axis=1))
9
10

TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

How can I solve it? (1 objective, 4 constraints)
Thank you in advance.

Generate Reference Directions Class

Create a class that will generate the reference directions (Das-Dennis Unit simplex points).

Inputs to the class:

  • Population size
  • p - num. sections
  • Matrix of reference directions (directions that have already been calculated and provided)

The generate() method should produce the appropriate directions based on the input provided; a sketch of such a class follows.
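
A rough skeleton of what such a class could look like (names and constructor arguments here are illustrative, not a final API); the recursive helper enumerates the Das-Dennis points directly:

import numpy as np

class ReferenceDirectionFactory:
    """Sketch: generate Das-Dennis points on the unit simplex, or pass through given ones."""

    def __init__(self, n_obj, p=None, directions=None):
        self.n_obj = n_obj            # number of objectives (dimension of the simplex)
        self.p = p                    # number of sections along each axis
        self.directions = directions  # optional pre-computed matrix of directions

    def generate(self):
        if self.directions is not None:
            return np.asarray(self.directions)
        return self._das_dennis(self.p, self.n_obj)

    @staticmethod
    def _das_dennis(p, n_dim):
        points = []

        def recurse(point, left, depth):
            if depth == n_dim - 1:
                point[depth] = left / p
                points.append(point.copy())
            else:
                for i in range(left + 1):
                    point[depth] = i / p
                    recurse(point, left - i, depth + 1)

        recurse(np.zeros(n_dim), p, 0)
        return np.array(points)

# e.g. p=12 in three objectives yields the usual 91 directions
print(ReferenceDirectionFactory(3, p=12).generate().shape)   # (91, 3)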

Allowing continuous and discrete variable on AWS batch

Hi there,

Great package! I was wondering how to optimize with continuous as well as discrete (integer) variables at the same time. Is this possible?

Also, evaluating my function in _evaluate is rather processor-intensive. Would I be able to send the work to something like AWS Batch to run the _evaluate portion of the problem and speed up the solution?

Thanks!

Phylroy

Error in NSGA2 with mixed-variable optimization problem.

Hi,

I am using pymoo to solve a multi-objective mixed-variable problem. I get the following error at the end of the simulation. If I reduce the number of generations to fewer than 5, the error does not appear. I have been running a similar problem without integer variables, and it works fine.

Any thoughts?

Thank you.

(error log attachment omitted)

Using multiprocessing

I am running NSGA-II with multiple threads, but I found that the x passed to the _evaluate function is different in each thread. I want each thread to use the same x. Is x generated with a random function? If so, can I add random.seed() so that all threads produce the same x?
In addition, where do the parameters x and out of the _evaluate function come from? What are the roles of the parameters n_var, n_obj, n_constr, lb, ub?

(screenshot omitted)

ValueError: negative dimensions are not allowed

When I run the code following the tutorial at https://pymoo.org/getting_started.html#Multi-Objective-Optimization,
I get an error described as "ValueError: negative dimensions are not allowed".

C:\Users\hero\Anaconda3\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2018.2.4\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 7154 --file C:/Users/hero/PycharmProjects/GCS/test_pymoo.py
pydev debugger: process 3788 is connecting

Connected to pydev debugger (build 182.4505.26)
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.2.4\helpers\pydev\pydevd.py", line 1664, in
main()
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.2.4\helpers\pydev\pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.2.4\helpers\pydev\pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.2.4\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/hero/PycharmProjects/GCS/test_pymoo.py", line 58, in
verbose=True
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\optimize.py", line 58, in minimize
res = algorithm.solve()
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\model\algorithm.py", line 168, in solve
self._solve(self.problem)
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\model\algorithm.py", line 218, in _solve
self._initialize()
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\algorithms\genetic_algorithm.py", line 87, in _initialize
pop = self.sampling.do(self.problem, self.pop_size, pop=pop, algorithm=self)
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\model\sampling.py", line 41, in do
val = self._do(problem, n_samples, **kwargs)
File "C:\Users\hero\Anaconda3\lib\site-packages\pymoo\operators\sampling\random_sampling.py", line 18, in _do
val = np.random.random((n_samples, problem.n_var))
File "mtrand.pyx", line 861, in mtrand.RandomState.random_sample
File "mtrand.pyx", line 167, in mtrand.cont0_array
ValueError: negative dimensions are not allowed

Source Code Problem

Hi Julian, I am here seeking your assistance.
There are two problems I ran into when reading the MOEA/D algorithm in the pymoo 0.3.0 documentation.

1. Source code: " ref_dirs = UniformReferenceDirectionFactory(3, n_points=100).do() "
I found it in "from pymop.factory import get_problem, UniformReferenceDirectionFactory".
But when I check the pymop directory, I don't find "UniformReferenceDirectionFactory" in factory.py but in util.py. Is there another way to import "UniformReferenceDirectionFactory"?
2. I want to change something in your source code to get a better understanding of the MOEA/D implementation. I just added some "print()" calls, but they had no effect, even after restarting my computer. Why?
Ubuntu 16.04.
Got pymoo via pip.
The source code I changed is in "/usr/local/lib/python3.5/dist_packages/pymop/util.py", after I modified the file permissions with " sudo chmod 777 * " under the ".../pymop" directory.

To be honest, I am new to both python and the framework.
I hope you would not mind my stupid problems and give me a hand.
Thanks a lot.

Possibility to enforce hard constraints

Hi,
I'm trying to solve an optimisation problem (obviously). Unfortunately, I don't have a single feasible solution in my final generation. Is it possible to enforce the constraints in a "hard" way? I'm thinking about only adding individuals that fulfil the constraints.

Bug in PYMOO N-Dimensional Scatter Plot

I was playing around with the n-dimensional scatter plot and possibly found a bug. Please see the image below:
(screenshot of the scatter plot matrix omitted)

It is expected that the plots mirrored about the diagonal would be rotated (axes swapped). In fact, all the plots look exactly the same.

Here's the random data I used (space separated):

0.50043 0.060429 0.737314 0.742259
0.14787 0.907006 0.394185 0.798932
0.953716 0.159536 0.034512 0.86805
0.200498 0.311165 0.617604 0.606972
0.928566 0.340058 0.397085 0.152281
0.888746 0.477383 0.531482 0.69961
0.878204 0.502142 0.710674 0.455302
0.328805 0.85086 0.834285 0.176581
0.397921 0.212607 0.32161 0.894788
0.199863 0.986696 0.635643 0.054622

Runtime error when calculating HV (hypervolume)

pymoo version: 0.3.0
File "../py36/lib/python3.6/site-packages/pymoo/model/indicator.py", line 21, in calc
return self._calc(F)
File "../py36/lib/python3.6/site-packages/pymoo/indicators/hv.py", line 43, in _calc
val = hv.compute(_F)
File "../py36/lib/python3.6/site-packages/pymoo/indicators/hv.py", line 117, in compute
hyperVolume = self.hvRecursive(dimensions - 1, len(relevantPoints), bounds)
File "../py36/lib/python3.6/site-packages/pymoo/indicators/hv.py", line 153, in hvRecursive
while q.cargo != None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

I think "q.cargo != None" may be:
q.cargo is not None

Warnings

problem = MyProblem(dataset)
ref_dirs = UniformReferenceDirectionFactory(3, n_points=91).do()
res = minimize(problem,
               method='nsga3',
               method_args={
                   'pop_size': 30,
                   'ref_dirs': ref_dirs},
               termination=('n_gen', 100),               
               seed=1,
               disp=True)

While running the above code I am getting the warning below. Will it affect the end results?

86 | 2580
/home/coea/anaconda3/envs/pymoo/lib/python3.6/site-packages/pymoo/algorithms/nsga3.py:221: RuntimeWarning: invalid value encountered in true_divide
N = (F - utopian_point) / (nadir_point - utopian_point)
87 | 2610

New problem sets

Are there any plans to incorporate new problem sets, for example WFG and MaF?

Plot in separate windows

Hi,

Is it possible to have the graph plotted in a separate window instead of as an inline plot?
Regarding RNSGA3: is it developed based on this paper?

Reference point based multi-objective optimization using evolutionary algorithms.

Thank you

NSGA-3 get_nadir_point argument inversion

I might be wrong, but it seems to me that the order of the last two arguments when calling get_nadir_point in nsga3 is wrong (at least from the API point of view).

The function is defined as follows:

def get_nadir_point(extreme_points, ideal_point, worst_point, worst_of_front, worst_of_population):

and called with

get_nadir_point(self.extreme_points, self.ideal_point, self.worst_point, worst_of_population, worst_of_front)

[Question] Are NSGA2 and NSGA3 good methods for my project?

Hello,
I'm searching for optimization algorithms for my project.
My project has one or two objectives (e.g. x+y+z or px+qy+rz) and 18 constraint functions (e.g. a1x+b1y+c1z-d1=0, i.e. equalities).
Is it possible to find solutions using NSGA2 or NSGA3? If so, which algorithm is best for my project?

Thank you in advance!!!

class Termination

Hi, what a good multi-objective optimization platform! I would like to ask whether the termination criterion 'termination' could be made a member variable of the algorithm, because some adaptive algorithms require access to the termination condition.

[Question] error

[-2.21617357e-02 -1.50119180e-01 -9.09995681e-02 1.25947932e-01
-6.15241812e-02 -2.23912419e-01 5.85929008e-01 -1.31129174e-01
-4.12985813e-01 -5.06295108e-01 2.03546639e-01 5.36321300e-02
-2.33383690e-01 1.14347371e-01 4.88084609e-01 -3.17626463e-02
4.24874150e-02 -2.58434832e-01 1.88217357e-02 1.38999180e-01
7.75795681e-02 -1.47667932e-01 4.32841812e-02 2.16312419e-01
-5.91369008e-01 1.17529174e-01 4.03005813e-01 4.89135108e-01
-2.19946639e-01 -5.98121300e-02 2.18683690e-01 -1.40927371e-01
-5.21544609e-01 2.31226463e-02 -5.27274150e-02 2.39014832e-01]]
10 | 230 | 3.3645104619 / 3.7806289242 | - | -

AttributeError Traceback (most recent call last)
in
13 pf=None,
14 save_history=True,
---> 15 verbose=True)
16 # res = minimize(problem,algorithm,termination)

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\optimize.py in minimize(problem, algorithm, termination, **kwargs)
56
57 # actually execute the algorithm
---> 58 res = algorithm.solve()
59
60 # store the copied algorithm in the result object

C:\Users\Farmboy\Anaconda3\envs\flavor\lib\site-packages\pymoo\model\algorithm.py in solve(self)
169 X, F, CV, G = opt.get("X", "F", "CV", "G")
170 else:
--> 171 X, F, CV, G = opt.X, opt.F, opt.CV, opt.G
172
173 if opt is not None:

AttributeError: 'NoneType' object has no attribute 'X'

This is my code
class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=1,
                         n_constr=36,
                         xl=anp.array([0, 0]),
                         xu=anp.array([100, 100]))

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[:, 0] + x[:, 1]
        out["F"] = anp.column_stack([f1])
        out["G"] = anp.column_stack([out_cal(temp_c1, temp_c2, temp_c3, x),
                                     out_cal_neg(temp_c1_neg, temp_c2_neg, temp_c3, x)])
        print(out["G"])

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_algorithm, get_sampling, get_crossover, get_mutation
from pymoo.model.repair import Repair

algorithm = NSGA2(
    pop_size=50,
    n_offsprings=20,
    sampling=get_sampling("real_random"),
    crossover=get_crossover("real_sbx", prob=0.9, eta=15),
    mutation=get_mutation("real_pm", eta=20),
    eliminate_duplicates=True
)

from pymoo.factory import get_termination

termination = get_termination("n_gen", 10)

import matplotlib.pyplot as plt

from pymoo.optimize import minimize
from pymoo.util import plotting

res = minimize(problem,
               algorithm,
               termination,
               seed=1,
               pf=None,
               save_history=True,
               verbose=True)

obj: 1, constr: 36, var: 2
I can check that out["G"] has 36 elements, but it shows the same error as in the last issue (pf=True).
Thank you in advance.

nsga3 algorithm evolution problem

class NSGA3(GeneticAlgorithm):

    def __init__(self,
                 ref_dirs,
                 pop_size=None,
                 sampling=IntegerFromFloatSampling(clazz=FloatRandomSampling),
                 selection=TournamentSelection(func_comp=comp_by_cv_then_random),
                 crossover=SimulatedBinaryCrossover(eta=30, prob=1.0),
                 mutation=PolynomialMutation(eta=20, prob=None),
                 eliminate_duplicates=True,
                 n_offsprings=None,
                 display=MultiObjectiveDisplay(),
                 **kwargs)

I initialized NSGA3 in the above way. I want my solutions to be made up of integers, so I use sampling=IntegerFromFloatSampling(clazz=FloatRandomSampling) to initialize the population. But the first evolved offspring already contain decimals.

This seems to be caused by PolynomialMutation. Does it not support integer mutation?

Hope for help
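
The default PolynomialMutation and SimulatedBinaryCrossover do operate on floats. One possible workaround (a sketch; the wrapper class names and import paths are assumptions based on later pymoo versions, so check them against the installed release):

# import paths are an assumption; adjust them to where IntegerFromFloatSampling
# lives in the installed pymoo version
from pymoo.operators.integer_from_float_operator import (
    IntegerFromFloatSampling, IntegerFromFloatCrossover, IntegerFromFloatMutation)
from pymoo.operators.sampling.random_sampling import FloatRandomSampling
from pymoo.operators.crossover.simulated_binary_crossover import SimulatedBinaryCrossover
from pymoo.operators.mutation.polynomial_mutation import PolynomialMutation
from pymoo.algorithms.nsga3 import NSGA3

# ref_dirs defined as before (placeholder here)
algorithm = NSGA3(
    ref_dirs,
    sampling=IntegerFromFloatSampling(clazz=FloatRandomSampling),
    crossover=IntegerFromFloatCrossover(clazz=SimulatedBinaryCrossover, eta=30, prob=1.0),
    mutation=IntegerFromFloatMutation(clazz=PolynomialMutation, eta=20),
    eliminate_duplicates=True,
)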

Question: verbose flag in minimize & elementwise_evaluation + threads

Hi,

I've modified the getting-started example for elementwise evaluation and parallelization with threads.
The result is correct, but it doesn't work with the verbose flag in minimize.
Can you reproduce this behavior?
How can I monitor the evaluation in this case?

Code (comment/uncomment verbose in minimize):

import autograd.numpy as anp
import numpy as np
from pymoo.model.problem import Problem

# Define problem parameters
knobs=2                    # number of variables (decision space dimension)
number_of_objectives=2     # number of objectives (all to be minimized)
number_of_constraints=2    # number of constraints (all have the form const_i <= 0)
lbound=[-2,-2]             # set lower bounds for all decision variables
ubound=[+2,+2]             # set upper bounds for all decision variables
number_of_threads=4        # number of threads

# Define problem
class task(Problem):
    
    # Set main parameters
    def __init__(self,**kwargs):
        super().__init__(
            n_var=knobs,                  
            n_obj=number_of_objectives,
            n_constr=number_of_constraints,
            xl=lbound,
            xu=ubound,
            elementwise_evaluation=True,
            **kwargs
            )
        
    # Set objectives and constraints
    def _evaluate(self,x,out,*args,**kwargs):
        # Define objective functions and constraints
        f1 = x[0]**2+x[1]**2
        f2 = (x[0]-1)**2+x[1]**2
        g1 = 2*(x[0]-0.1)*(x[0]-0.9)/0.18
        g2 = -20*(x[0]-0.4)*(x[0]-0.6)/4.8
        out["F"] = [f1,f2]
        out["G"] = [g1,g2]
    
# Set problem    
problem = task(parallelization=("threads",number_of_threads))

#  Algorithm
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation
algorithm = NSGA2(
    pop_size=50,
    n_offsprings=20,
    sampling=get_sampling("real_random"),
    crossover=get_crossover("real_sbx",prob=0.9,eta=15),
    mutation=get_mutation("real_pm",eta=20),
    eliminate_duplicates=True
)

# Termination
from pymoo.factory import get_termination
termination = get_termination("n_gen",30)

# Optimize (works OK without verbose)
from pymoo.optimize import minimize
res = minimize(problem,algorithm,termination,seed=1,save_history=True,verbose=True)

# Plot
import matplotlib.pyplot as plt
from pymoo.visualization.scatter import Scatter
plot = Scatter(title = "Design Space", axis_labels="x")
plot.add(res.X, s=30, facecolors='none', edgecolors='r')
plot.do()
plot.apply(lambda ax: ax.set_xlim(-0.5, 1.5))
plot.apply(lambda ax: ax.set_ylim(-2, 2))
plt.show()

plot = Scatter(title = "Objective Space")
plot.add(res.F)
plot.do()
plt.show()

Errors:

Traceback (most recent call last):
  File "template.py", line 59, in <module>
    res = minimize(problem,algorithm,termination,seed=1,save_history=True,verbose=True)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/optimize.py", line 58, in minimize
    res = algorithm.solve()
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/model/algorithm.py", line 154, in solve
    self._solve(self.problem)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/model/algorithm.py", line 203, in _solve
    self._each_iteration(self, first=True)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/model/algorithm.py", line 218, in _each_iteration
    disp = self.func_display_attrs(self.problem, self.evaluator, self, self.pf)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/util/display.py", line 72, in disp_multi_objective
    attrs.append(('igd', format_float(IGD(pf).calc(F[feasible])), width))
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/model/indicator.py", line 49, in calc
    return self._calc(F)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/performance_indicator/distance_indicator.py", line 26, in _calc
    D = vectorized_cdist(self.pf, F, func_dist=self.dist_func, norm=self.range)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/util/misc.py", line 102, in vectorized_cdist
    D = func_dist(u, v, **kwargs)
  File "/home/imorozov/.conda/envs/pymoo/lib/python3.7/site-packages/pymoo/performance_indicator/distance_indicator.py", line 8, in euclidean_distance
    return np.sqrt((((a - b) / norm) ** 2).sum(axis=1))
TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'

Problem in matplotlib

Hi,
Would you please check this error?

raise ValueError("'transform' must be an instance of "

ValueError: 'transform' must be an instance of 'matplotlib.transform.Transform'

I am using Anaconda 3 with Python 3.7. I got this error when I ran the NSGA2 test.

Thank you

Time limit

Hello,
Is there support for time-limiting the evolutionary algorithms (i.e. after a given time period, return the current population)?

Thanks,
Elad
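
There is no built-in wall-clock termination in the 0.3.x releases as far as I can tell, but a custom termination criterion is small to write. A sketch, assuming custom terminations override _do_continue(algorithm) the way the built-in ones do (method name and constructor are assumptions, check your version):

import time

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.model.termination import Termination
from pymoo.optimize import minimize

class TimeBasedTermination(Termination):
    """Hypothetical helper: stop once a wall-clock budget (in seconds) is used up."""

    def __init__(self, max_seconds):
        super().__init__()
        self.max_seconds = max_seconds
        self.start = None

    def _do_continue(self, algorithm):   # method name is an assumption, see above
        if self.start is None:
            self.start = time.time()
        return time.time() - self.start < self.max_seconds

# run NSGA2 for at most 60 seconds and return whatever population it has by then
res = minimize(get_problem("zdt1"), NSGA2(pop_size=100),
               TimeBasedTermination(60), seed=1)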
