
optimparallel-python's People

Contributors

florafauna, lewisblake, nikosavola


optimparallel-python's Issues

hess_inv returned as array and not object

I am using minimize_parallel for an optimization of negative log-likelihoods.

For the continuation of my program, however, I need the inverse Hessian in the same form in which scipy.optimize.minimize returns it, but minimize_parallel returns it as a plain multidimensional array.

Is it possible to change this return value in a future release?

kind regards
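A possible interim workaround, assuming the dense array holds the same inverse-Hessian approximation: wrap it in a scipy LinearOperator so downstream code written against an operator interface (`.matvec` / `.dot`) keeps working. The array values below are stand-ins for illustration.

```python
import numpy as np
from scipy.sparse.linalg import aslinearoperator

# Stand-in for the dense inverse-Hessian approximation returned by
# minimize_parallel (hypothetical values, for illustration only).
hess_inv_array = np.array([[2.0, 0.5],
                           [0.5, 1.0]])

# Wrap the dense array in a LinearOperator so code expecting an
# operator interface (.matvec / .dot) keeps working unchanged.
hess_inv_op = aslinearoperator(hess_inv_array)

v = np.array([1.0, -1.0])
print(hess_inv_op.matvec(v))  # equivalent to hess_inv_array @ v
```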

Python 3.11/Multiple initial guesses

Hi,

  1. Do you support Python 3.11?

  2. My objective is not so nice (i.e., non-convex), so I want to try a handful of x0’s. Would you consider “extending” the parallelization to multiple initial guesses (my solver is not thread-safe)?

Thanks,
Jake

Import ctypes DLL breaks execution of minimize_parallel

I am running a minimization of a cost function with many variables.
To reduce the calculation time I first used your package without any issue: it takes a Python function as the cost function and minimizes it with four workers.
To reduce the calculation time further, I rewrote the cost function in C.
To run the C code from Python I use ctypes: I load the shared C library with the ctypes.CDLL() function and wrote a Python wrapper function for the minimizer.
But as soon as I call ctypes.CDLL(), minimize_parallel stops working as intended. The debugger shows four running threads, but there is no display output whatsoever, and the CPU load is idling.
I am not sure why merely loading the library with ctypes.CDLL breaks the minimize function.
Thanks in advance for your help.

System:
Linux Ubuntu 18.04
Python 3.7.15
optimParallel 0.1.2
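Not an answer to the root cause, but one pattern that sometimes avoids trouble with native handles and worker processes is to load the shared library lazily inside each worker rather than at module import time, so no handle opened in the parent is inherited by forked/spawned children. A sketch, using libm as a stand-in for the user's C cost-function library:

```python
import ctypes
import ctypes.util

_lib = None  # per-process cache; each worker loads its own handle

def get_lib():
    # Load the shared library on first use *inside* the calling process,
    # instead of at import time in the parent.  libm stands in for the
    # actual C library (assumption for illustration).
    global _lib
    if _lib is None:
        _lib = ctypes.CDLL(ctypes.util.find_library("m"))
        _lib.cos.restype = ctypes.c_double
        _lib.cos.argtypes = [ctypes.c_double]
    return _lib

def cost(x):
    # Python wrapper around the C function; this is the kind of callable
    # one would pass to minimize_parallel as `fun`.
    return get_lib().cos(float(x[0]))

print(cost([0.0]))
```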

[help] Poor performance when running MPI at each evaluation

Hello,
First of all, thank you for this implementation of minimization.
The optimization scheme I try to do is roughly as follows:

from optimparallel import minimize_parallel
import os

def costFun(p):
    os.system("mpirun -np 5 somejob {}".format(p))
    return job_result()  # placeholder: collects the result produced by somejob

x0 = (1, 2, 3, 4, 5)
optim = minimize_parallel(costFun, x0)

As you may understand, the idea is to run 6 different evaluations at the same time, but each of those evaluations should itself run on 5 cores with OpenMPI (it is actually a finite element analysis). I really don't know why, but this scheme performs very poorly: each MPI job seems to run about twice as slowly as expected.

Do you have any hint?

Thank you in advance.
Regards.
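One thing worth checking is core oversubscription: with a 5-parameter x0 and the parallel forward-difference gradient, up to 6 objective evaluations run concurrently, each spawning 5 MPI ranks, i.e. 30 ranks competing for the machine's cores, which alone can roughly halve per-job speed. A small sanity check (the counts assume forward differences):

```python
import os

# With forward differences, f(x) and the 5 perturbed points for the
# gradient are evaluated concurrently: n_params + 1 workers in flight.
n_params = 5
ranks_per_job = 5
concurrent_evals = n_params + 1
total_ranks = concurrent_evals * ranks_per_job  # MPI ranks in flight at once
cores = os.cpu_count()
print(total_ranks, cores, total_ranks > cores)
```

If that turns out to be the problem, capping the pool (e.g. passing something like parallel={'max_workers': 2} to minimize_parallel, if your version supports the parallel option) or reducing -np so that concurrent_evals * ranks_per_job fits within the available cores may help.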

Early Stopping based on objective rather than gradient: Issue for the optimParallel package in R

Hello! As stated above, this is really an issue for your R package at
https://git.math.uzh.ch/florian.gerber/optimParallel/-/tree/master

(But I'm not affiliated with uzh so I don't think I can post an issue there)

I love the package!

I would like to institute an early-stopping condition for L-BFGS-B that is based on my objective and not on the gradient.

Once my objective crosses a threshold, I don't need my gradient to improve anymore. This is already good enough. Despite a lot of looking, I haven't been able to find a parameter which allows me to do this. (abstol support in L-BFGS-B basically)

I could implement this by adding my own gradient, and then setting the value to 0 when it reaches a certain threshold, but then I need to re-implement your excellent search!

Basically I want to do:

# Gradient function in FGgenerator
g <- function(par) {
    # if abstol is set and the stopping condition is met, return a zero gradient
    if (!is.null(abstol) && stopping_condition(par, abstol)) return(0)
    evalFG(par)
    i_g <<- i_g + 1
    return(grad)
}

Is this possible? Am I overlooking an easier way to accomplish this?

Thank you for your time!
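For what it's worth, in the Python package the same trick can be applied without touching the library, by wrapping the gradient passed as jac: once the objective crosses the threshold, return a zero gradient, so L-BFGS-B's projected-gradient convergence test is satisfied immediately. A sketch with a hypothetical objective and threshold:

```python
import numpy as np
from scipy.optimize import minimize

ABSTOL = 1e-3  # hypothetical early-stopping threshold on the objective

def f(x):
    return float(np.sum(x ** 2))

def grad_with_abstol(x):
    # Once the objective is below the threshold, report a zero gradient:
    # the projected-gradient test (pgtol) is then satisfied immediately
    # and the solver stops where it is.
    if f(x) < ABSTOL:
        return np.zeros_like(x)
    return 2.0 * x

res = minimize(f, x0=np.array([3.0, -4.0]), jac=grad_with_abstol,
               method="L-BFGS-B")
print(res.fun)
```

The same wrapped gradient could be passed as jac to minimize_parallel; the wrapper, objective, and threshold here are stand-ins, not part of the package.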

allow additional arguments to ProcessPoolExecutor

I am running an optimization that uses some unpicklable objects. Because spawned or forked processes require pickling the environment, I would like to supply ProcessPoolExecutor with an initializer function to set up the unpicklable objects independently for each process.

To implement this with optimparallel, I've branched the repo and made the following modifications to optimparallel.py

parallel_used = {'max_workers': None, 'forward': True, 'verbose': False, 'loginfo': False, 'time': False, 'initializer': None, 'initargs': (), 'mp_context': None}

and

with concurrent.futures.ProcessPoolExecutor(max_workers=parallel_used.get('max_workers'), initializer=parallel_used.get('initializer'), mp_context=parallel_used.get('mp_context'), initargs=parallel_used.get('initargs')) as executor:

Would love to see functionality like this added in a future release!

Having difficulties properly installing the module with Anaconda Python

Hello,

I installed this module with pip and executed the example .py:

from optimparallel import minimize_parallel
from scipy.optimize import minimize
import numpy as np
import time

## objective function
def f(x, sleep_secs=.5):
    print('fn')
    time.sleep(sleep_secs)
    return sum((x-14)**2)

## start value
x0 = np.array([10,20])

## minimize with parallel evaluation of 'fun' and
## its approximate gradient.
o1 = minimize_parallel(fun=f, x0=x0, args=.5)
print(o1)

## test against scipy.optimize.minimize()
o2 = minimize(fun=f, x0=x0, args=.5, method='L-BFGS-B')
print(all(np.isclose(o1.x, o2.x, atol=1e-10)),
      np.isclose(o1.fun, o2.fun, atol=1e-10),
      all(np.isclose(o1.jac, o2.jac, atol=1e-10)))

But I get this error in _base.py

line 389, in __get_result
    raise self._exception

BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Does someone have advice on how to install it properly?

Regards,

Roman

PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

Hello,

I have tried to use the package with Python 2.7.18 on macOS Monterey 12.0.1 to run the following code, which minimizes a function and keeps track of the computing time:

# Import modules
import numpy as np
import timeit
from scipy.optimize import minimize
from optimparallel import minimize_parallel

# Define the function
def objective(x):
    return x[0]**2.0 + x[1]**2.0

# Define the range for input
r_min , r_max = -5.0, 5.0

# Define the starting point as a random sample from the domain
pt = r_min + np.random.rand(2)*(r_max - r_min)

# Minimize the function
start = timeit.default_timer()
resultpar = minimize_parallel(fun=objective, x0=pt)
finish = timeit.default_timer()
print('Finished in', round(finish-start, 2), 'second(s)')

However, Python issues this error when I try to execute this script:

Traceback (most recent call last):
  File "/Users/montesinos/opt/anaconda3/envs/gambit/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
    send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

(the same traceback is printed once per worker process)

According to what I read, it seems that Python cannot pass the function to the worker processes. Does anyone know what could be happening?

Method Powell

Hi. I'm trying to speed up the minimization of a function that cannot be differentiated, so I'm using method='Powell', which gives me relatively good results (the best of all the scipy.optimize.minimize methods). I've sped the code up using jit, but I'd like to make it even faster. Would it be possible to extend your code to support the Powell method?
