EAGO.jl's People

Contributors

dimitrialston, evrenmturan, hasundue, matthewstuber, mewilhel, mlubin, odow, rxgottlieb, simeonschaub

EAGO.jl's Issues

Local NLP solver specific wrappers

Currently, EAGO uses Ipopt by default.

Setting absolute and relative tolerances does not change the required tolerances for the upper problem's solution. There currently isn't a standardized manner of setting this in MOI in large part due to the lack of standard convergence metrics across NLP solvers.

So we'll want to explicitly support loading each NLP solver in the future, to prevent the user from needing to specify all options for the local problem or from encountering numerical issues caused by mismatched tolerances.

We'll need to add a function for this in the parsing steps: check the solver name, then update its parameters based on that name and the parameters set in EAGO, and throw a warning if the solver name isn't recognized (see the sketch after the list below).

  • Ipopt
  • Knitro
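
A minimal sketch of what such a parsing step could look like (the helper name and the per-solver option names below are assumptions for illustration, not EAGO's actual implementation):

using MathOptInterface
const MOI = MathOptInterface

# Hypothetical helper: map EAGO's tolerances onto solver-specific options
# keyed on the reported solver name; warn if the solver isn't recognized.
function set_local_nlp_tolerances!(upper_optimizer, abs_tol::Float64, rel_tol::Float64)
    name = MOI.get(upper_optimizer, MOI.SolverName())
    if name == "Ipopt"
        MOI.set(upper_optimizer, MOI.RawParameter("tol"), rel_tol)
        MOI.set(upper_optimizer, MOI.RawParameter("constr_viol_tol"), abs_tol)
    elseif name == "Knitro"
        MOI.set(upper_optimizer, MOI.RawParameter("feastol"), rel_tol)
        MOI.set(upper_optimizer, MOI.RawParameter("feastol_abs"), abs_tol)
    else
        @warn "Local NLP solver $name not recognized; tolerances left at solver defaults."
    end
    return nothing
end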

Error from a user-defined branch-and-bound implementation

I ran into the following error when using EAGO's branch-and-bound with user-defined subroutines:

ArgumentError: heap must be non-empty
Stacktrace:
 [1] fathom!(::IntervalExt, ::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\DataStructures\SAI1X\src\heaps\minmax_heap.jl:265
 [2] fathom!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:761
 [3] global_solve!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:837
 [4] optimize!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:875
......

EAGO version: 0.3.1
DataStructures version: 0.17.5

Issue with McCormick relaxation

I may have found an issue with the McCormick relaxation of exponentiation. The plots look weird (in both NS and Diff modes). An MWE is attached below:


using EAGO.McCormick, DataFrames, IntervalArithmetic

# define the test function
f(x,y) = (x-2)^2 + (x-y)^3+ (x-y)^4

# create a data frame to store output data
df = DataFrame(x = Float64[], y = Float64[], z = Float64[], cv1 = Float64[], cv2 = Float64[],
               cc1 = Float64[], cc2 = Float64[], l1 = Float64[], l2 = Float64[],
               u1 = Float64[], u2 = Float64[])

n = 30
X = Interval(-1.0, 1.0)
Y = Interval(-1.0, 1.0)
xrange = range(X.lo,stop=X.hi,length=n)
yrange = range(Y.lo,stop=Y.hi,length=n)

for (i,x) in enumerate(xrange)
    for (j,y) in enumerate(yrange)
        z = f(x,y)
        x_mc = MC{2,NS}(x,X,1)
        y_mc = MC{2,NS}(y,Y,2)
        x_dmc = MC{2,Diff}(x,X,1)
        y_dmc = MC{2,Diff}(y,Y,2)
        f_mc = f(x_mc,y_mc)
        f_dmc = f(x_dmc,y_dmc)
        save_tuple = (x, y, z, f_dmc.cv, f_mc.cv, f_dmc.cc, f_mc.cc,
                      f_dmc.Intv.lo, f_mc.Intv.lo, f_dmc.Intv.hi, f_mc.Intv.hi)
        push!(df, save_tuple)
    end
end

using Plots, CSV
pyplot()

Plots.plot(df.x, df.y, df.z, st = [:surface], title = "Smooth McCormick Relaxation")
Plots.plot!(df.x, df.y, df.cv1, st = [:surface])
Plots.plot!(df.x, df.y, df.cc1, st = [:surface])

# Plots.plot(df.x, df.y, df.z, st = [:surface], title = "Nonsmooth McCormick Relaxation")
# Plots.plot!(df.x, df.y, df.cv2, st = [:surface])
# Plots.plot!(df.x, df.y, df.cc2, st = [:surface])
# Plots.plot!(camera = (150,20))

# CSV.write("plot_data.csv", df)

How to display the optimality bounds [LBD, UBD]

Hi, I am using EAGO to solve a nonlinear least-squares problem. There are some issues I need your help with.

After calling optimize!(model), the output information is

-----------------------------------------------------------------------------------------------------------------------------
|  Iteration #  |     Nodes    | Lower Bound  |  Upper Bound  |      Gap     |     Ratio    |     Time     |    Time Left   |
-----------------------------------------------------------------------------------------------------------------------------
 
First Solution Found at Node 1
UBD = 3.527847092443349e-5
Solution is :
    X[1] = 0.7607755418437602
    X[2] = 0.3230414037667083
    X[3] = 1.481190017639433
    X[4] = 0.03637683583967992
    X[5] = 0.53719735371654

However, I don't know the LBD, so I cannot judge the quality of the final solution from the above output. By contrast, another optimizer, Couenne, reports both bounds and the gap.
In addition, when checking the final status, I have the following result:

termination_status(model) = MathOptInterface.OPTIMAL
objective_value(model) = 3.527847092443349e-5

It seems that the minimum objective value reported by JuMP is just the above UBD.
Nonetheless, by substituting the optimal solution (i.e., the five parameters) into the original objective function, I get a different result, 2.527822e-05, which is smaller than the UBD. In fact, the value 2.527822e-05 is exactly what Couenne obtains with zero optimality gap between its lower and upper bounds. Thus, even though EAGO managed to find the globally optimal solution, the objective value it reported was wrong.

Are there any options that control the display? I can post my code and data if needed. Thank you.
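
For reference, a hedged sketch of how the bound and gap could be queried through JuMP's standard attributes after optimize!(model), assuming EAGO implements the corresponding MOI attributes (ObjectiveBound and RelativeGap); if it does not, these calls will error:

using JuMP

lbd = objective_bound(model)   # best lower bound found (for a minimization)
ubd = objective_value(model)   # incumbent upper bound
gap = relative_gap(model)      # relative gap as reported by the solver
println("[LBD, UBD] = [$lbd, $ubd], gap = $gap")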

Consider not restricting compat for Julia to each minor version

For new Julia versions we run the tests for all registered packages to look for regressions. This package (and some of its dependencies) restricts its Julia compatibility so that it is not compatible with these new Julia versions:

julia = "~1.3, ~1.4, ~1.5"

What happens when we try to run this package's tests on a new Julia version is that EAGO v0.2.1 gets installed, because that version doesn't restrict the minor version:

https://github.com/JuliaRegistries/General/blob/fe561d17bf40e7bcf003ded665c45b336efe6ce1/E/EAGO/Compat.toml#L18

I would suggest just having

julia = "1"

in your Project.toml instead. That would make it easier for us to test the package on new Julia versions and ensure that any regressions get caught.

Use forward-reverse McCormick propagation

This was reverted in v0.4 of EAGO because the tolerance-based approach we initially tried produced incorrect results. To re-enable it we need to:

  • Add a correctly rounded McCormick relaxation mode to McCormick.jl
  • Update ReverseMcCormick.jl to make use of this mode
  • Update EAGO to default to this correctly rounded mode and revert the reverse interval pass we currently use (which should become an option)

Strange error with Gurobi and `@NLexpression`s

I'm not sure if the issue is in Gurobi.jl, MOI, or EAGO, but I thought I'd ask here first.

I have a small problem for which using GLPK causes no issues, but Gurobi reports NaNs. What is extra strange is that this only occurs when the objective is formulated as an @NLexpression. I have a gist with a Manifest.toml that you can use to reproduce these results:

https://gist.github.com/ericphanson/a50888d0749d3c51db112148284ff5d1

The problem is just

using JuMP

m = Model()
@variable(m, 0 <= v[i=1:2] <= 1)
@variable(m, 0 <= D[i=1:2,j=1:2] <= 1)
@NLexpression(m, R, sum(D[i,j] for j = 1:2, i = 1:2))
@NLexpression(m, term, sum(v[l] * R * v[k] for l = 1:2, k = 1:2))
@NLobjective(m, Min, term)

and if I write the objective directly as

@NLobjective(m, Min, sum(v[l] * R * v[k] for l = 1:2, k = 1:2))

instead of going through the term @NLexpression, the error goes away. The error also goes away if I use GLPK instead of Gurobi.

Any ideas of what could be going wrong?

Handling nonlinear expressions near domain violations

Right now, we're using an operator-overloading approach to build relaxations of nonlinear terms. The expansiveness of the interval calculations can lead to intermediate terms whose interval bounds include points where some functions aren't defined (e.g. sqrt on [-0.001, 10]). Two potential approaches to dealing with this:

  • Option 1: Move the user-defined function handling to contextual dispatch (Cassette.jl) and move the domain-violation checks into wrapper functions in McCormick.jl around the computational kernels. The contextual dispatch checks the domain-violation condition and calls the kernel on execution; if a domain violation occurs, it sets a flag in the context's metadata and dispatches to trivial operations from then on (unless there is a cleaner way to return directly in Cassette). EAGO's logic is then updated to branch when this flag indicates a domain violation (see the sketch after this list). As w(X) -> 0, the degree of overestimation should go to zero, so the routine should eventually converge unless the model formulation itself has domain violations.
  • Option 2: Parse the expression graph of user-defined functions to introduce an equality constraint z = f(x) for each term like sqrt(f(x)), substituting z for f(x) in the expression graph, and branch on z whenever a domain violation occurs, so that one resulting interval contains the violating region Zbad (and the other one or two contain valid domains). You'd then need a domain-reduction technique to fathom the interval containing the violation for all (Zbad, X). Possibly faster than Option 1.
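
A minimal sketch of the Option 1 mechanism, with illustrative names (GuardCtx is not EAGO code) and sqrt standing in for any kernel with a restricted domain:

using Cassette, IntervalArithmetic

# Illustrative context whose metadata carries a domain-violation flag.
Cassette.@context GuardCtx

# Contextual dispatch for sqrt on intervals: check the domain, set the flag on
# a violation, then call the kernel directly.
function Cassette.overdub(ctx::GuardCtx, ::typeof(sqrt), x::Interval)
    if x.lo < 0.0
        ctx.metadata[] = true
    end
    return sqrt(x)
end

flag = Ref(false)
ctx = GuardCtx(metadata = flag)
X = Interval(-0.001, 10.0)        # an expanded intermediate interval bound
Cassette.overdub(ctx, sqrt, X)    # the kernel still evaluates, but the flag is set
flag[]                            # true, so EAGO's logic would branch on this node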

Add support for second-order cone constraints

  • Add buffered storage object
  • Add lower_interval_bound, interval_bound functions
  • Add relax! function
  • Update relax_all_constraints!, delete_nl_constraints!, label_branch_variables!, parse_classify_problem!
  • Update optimize!(::Val{SOCP}, m::Optimizer)

SIP Extendability and New Algorithms

Adjust the SIP solution routines to prevent allocations and to support extension via a user-defined extension type.

  • Re-code the Mitsos (2011) solution routines to support the extension type
  • Introduce the revision to Mitsos (2011) detailed in Djelassi, Hatim, and Alexander Mitsos, "A hybrid discretization algorithm with guaranteed feasibility for the global solution of semi-infinite programs," Journal of Global Optimization 68.2 (2017): 227-253
  • Finalize the hybrid algorithm detailed in the same paper

Breaking change in IntervalArithmetic 0.13

The latest tagged release of IntervalArithmetic 0.13 introduces a breaking change which seems to be affecting EAGO, according to http://juliarun-ci.s3.amazonaws.com/606e2260ef91c5dcbefdece629d6e2e4417c72a5/IntervalArithmetic_reverse_dependency_tests.html

I'm guessing that this is a result of changing the semantics of things like 0.1 + x: previously, 0.1 was treated as the interval @interval(0.1); now it is treated as the floating-point number 0.1 for efficiency.

You can add @interval 0.1 + x to recover the old behaviour.

(But note that using @interval performs some slow conversions.)
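
A small illustration of the change in behaviour:

using IntervalArithmetic

x = Interval(1.0, 2.0)
0.1 + x            # 0.1 is now treated as the floating-point number 0.1
@interval 0.1 + x  # recovers the old behaviour: 0.1 is enclosed in a rigorous interval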

EAGO fails to precompile

Starting from a fresh install of Julia 1.4, EAGO gives me the following precompilation error:

% julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.4.0 (2020-03-21)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

(@v1.4) pkg> add JuMP, EAGO
    Cloning default registries into `~/.julia`
    Cloning registry from "https://github.com/JuliaRegistries/General.git"
      Added registry `General` to `~/.julia/registries/General`
  Resolving package versions...
  Installed NaNMath ────────────────────── v0.3.3
  Installed Calculus ───────────────────── v0.5.1
  Installed Reexport ───────────────────── v0.2.0
  Installed SetRounding ────────────────── v0.2.0
  Installed LinQuadOptInterface ────────── v0.6.0
  Installed ErrorfreeArithmetic ────────── v0.5.0
  Installed EAGO ───────────────────────── v0.2.1
  Installed JuMP ───────────────────────── v0.19.2
  Installed Parsers ────────────────────── v1.0.4
  Installed OrderedCollections ─────────── v1.2.0
  Installed FillArrays ─────────────────── v0.8.10
  Installed RecipesBase ────────────────── v1.0.1
  Installed SpecialFunctions ───────────── v0.10.3
  Installed IntervalContractors ────────── v0.4.2
  Installed DiffRules ──────────────────── v0.0.10
  Installed MacroTools ─────────────────── v0.5.5
  Installed Cassette ───────────────────── v0.2.6
  Installed FastRounding ───────────────── v0.2.0
  Installed ForwardDiff ────────────────── v0.10.10
  Installed DiffResults ────────────────── v0.0.4
  Installed BandedMatrices ─────────────── v0.15.8
  Installed Compat ─────────────────────── v2.2.0
  Installed GLPK ───────────────────────── v0.10.0
  Installed ArrayLayouts ───────────────── v0.3.3
  Installed MathOptInterface ───────────── v0.8.4
  Installed FunctionWrappers ───────────── v1.1.1
  Installed RoundingEmulator ───────────── v0.2.1
  Installed CommonSubexpressions ───────── v0.2.0
  Installed CRlibm ─────────────────────── v0.8.0
  Installed MathProgBase ───────────────── v0.7.8
  Installed BinaryProvider ─────────────── v0.5.10
  Installed StaticArrays ───────────────── v0.8.3
  Installed CompilerSupportLibraries_jll ─ v0.3.3+0
  Installed IntervalArithmetic ─────────── v0.17.2
  Installed DataStructures ─────────────── v0.17.17
  Installed Ipopt ──────────────────────── v0.5.4
  Installed BenchmarkTools ─────────────── v0.5.0
  Installed JSON ───────────────────────── v0.21.0
  Installed ReverseDiff ────────────────── v0.3.1
  Installed OpenSpecFun_jll ────────────── v0.5.3+3
Downloading artifact: CompilerSupportLibraries
Downloading artifact: OpenSpecFun
   Updating `~/.julia/environments/v1.4/Project.toml`
  [bb8be931] + EAGO v0.2.1
  [4076af6c] + JuMP v0.19.2
   Updating `~/.julia/environments/v1.4/Manifest.toml`
  [4c555306] + ArrayLayouts v0.3.3
  [aae01518] + BandedMatrices v0.15.8
  [6e4b80f9] + BenchmarkTools v0.5.0
  [b99e7846] + BinaryProvider v0.5.10
  [96374032] + CRlibm v0.8.0
  [49dc2e85] + Calculus v0.5.1
  [7057c7e9] + Cassette v0.2.6
  [bbf7d656] + CommonSubexpressions v0.2.0
  [34da2185] + Compat v2.2.0
  [e66e0078] + CompilerSupportLibraries_jll v0.3.3+0
  [864edb3b] + DataStructures v0.17.17
  [163ba53b] + DiffResults v0.0.4
  [b552c78f] + DiffRules v0.0.10
  [bb8be931] + EAGO v0.2.1
  [90fa49ef] + ErrorfreeArithmetic v0.5.0
  [fa42c844] + FastRounding v0.2.0
  [1a297f60] + FillArrays v0.8.10
  [f6369f11] + ForwardDiff v0.10.10
  [069b7b12] + FunctionWrappers v1.1.1
  [60bf3e95] + GLPK v0.10.0
  [d1acc4aa] + IntervalArithmetic v0.17.2
  [15111844] + IntervalContractors v0.4.2
  [b6b21f68] + Ipopt v0.5.4
  [682c06a0] + JSON v0.21.0
  [4076af6c] + JuMP v0.19.2
  [f8899e07] + LinQuadOptInterface v0.6.0
  [1914dd2f] + MacroTools v0.5.5
  [b8f27783] + MathOptInterface v0.8.4
  [fdba3010] + MathProgBase v0.7.8
  [77ba4419] + NaNMath v0.3.3
  [efe28fd5] + OpenSpecFun_jll v0.5.3+3
  [bac558e1] + OrderedCollections v1.2.0
  [69de0a69] + Parsers v1.0.4
  [3cdcf5f2] + RecipesBase v1.0.1
  [189a3867] + Reexport v0.2.0
  [37e2e3b7] + ReverseDiff v0.3.1
  [5eaf0fd0] + RoundingEmulator v0.2.1
  [3cc68bcd] + SetRounding v0.2.0
  [276daf66] + SpecialFunctions v0.10.3
  [90137ffa] + StaticArrays v0.8.3
  [2a0f44e3] + Base64 
  [ade2ca70] + Dates 
  [8bb1440f] + DelimitedFiles 
  [8ba89e20] + Distributed 
  [b77e0a4c] + InteractiveUtils 
  [76f85450] + LibGit2 
  [8f399da3] + Libdl 
  [37e2e46d] + LinearAlgebra 
  [56ddb016] + Logging 
  [d6f4376e] + Markdown 
  [a63ad114] + Mmap 
  [44cfe95a] + Pkg 
  [de0858da] + Printf 
  [3fa0cd96] + REPL 
  [9a3f8284] + Random 
  [ea8e919c] + SHA 
  [9e88b42a] + Serialization 
  [1a1011a3] + SharedArrays 
  [6462fe0b] + Sockets 
  [2f01184e] + SparseArrays 
  [10745b16] + Statistics 
  [8dfed614] + Test 
  [cf7118a7] + UUIDs 
  [4ec0a83e] + Unicode 
   Building GLPK ──→ `~/.julia/packages/GLPK/Fb12F/deps/build.log`
   Building CRlibm → `~/.julia/packages/CRlibm/NFCH5/deps/build.log`
   Building Ipopt ─→ `~/.julia/packages/Ipopt/Iu7vT/deps/build.log`

julia> using JuMP
[ Info: Precompiling JuMP [4076af6c-e467-56ae-b986-b466b2749572]

julia> using EAGO
[ Info: Precompiling EAGO [bb8be931-2a91-5aca-9f87-79e1cb69959a]
┌ Warning: Package EAGO does not have SparseArrays in its dependencies:
│ - If you have EAGO checked out for development and have
│   added SparseArrays as a dependency but haven't updated your primary
│   environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with EAGO
└ Loading SparseArrays into EAGO from project dependency, future warnings for EAGO are suppressed.
WARNING: could not import IntervalArithmetic.pi_interval into McCormick
WARNING: could not import IntervalContractors.pow_rev into McCormick
ERROR: LoadError: LoadError: LoadError: UndefVarError: pi_interval not defined
Stacktrace:
 [1] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick_utilities/constants.jl:38
 [2] include(::Module, ::String) at ./Base.jl:377
 [3] include(::String) at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:1
 [4] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:76
 [5] include(::Module, ::String) at ./Base.jl:377
 [6] include(::String) at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:1
 [7] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:29
 [8] include(::Module, ::String) at ./Base.jl:377
 [9] top-level scope at none:2
 [10] eval at ./boot.jl:331 [inlined]
 [11] eval(::Expr) at ./client.jl:449
 [12] top-level scope at ./none:3
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick_utilities/constants.jl:38
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:76
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:29
ERROR: Failed to precompile EAGO [bb8be931-2a91-5aca-9f87-79e1cb69959a] to /home/oliver/.julia/compiled/v1.4/EAGO/t0N0c_A6IiH.ji.
Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1272
 [3] _require(::Base.PkgId) at ./loading.jl:1029
 [4] require(::Base.PkgId) at ./loading.jl:927
 [5] require(::Module, ::Symbol) at ./loading.jl:922

EAGO.Optimizer not solving correctly with user-defined subroutines

I cannot reproduce the following example after updating JuMP to v0.21:

https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_interval_bnb.ipynb

Julia: v1.4 or v1.5
JuMP: v0.21.4
EAGO: 0.4.2 or 0.5.2

MWE:

using EAGO, IntervalArithmetic, JuMP

struct IntervalExt <: EAGO.ExtensionType end

import EAGO.lower_problem!
function lower_problem!(t::IntervalExt, x::EAGO.Optimizer)

    # retrieve bounds at current node
    n = x._current_node
    lower = n.lower_variable_bounds
    upper = n.upper_variable_bounds

    # define X for node and compute interval extension
    x_value = Interval.(lower, upper)
    F = sin(x_value[1])*x_value[2]^2 - cos(x_value[3])/x_value[4]

    x._lower_objective_value = F.lo
    x._lower_solution = IntervalArithmetic.mid.(x_value)
    x._lower_feasibility = true
    x._cut_add_flag = false

    return
end

import EAGO.upper_problem!
function EAGO.upper_problem!(t::IntervalExt, x::EAGO.Optimizer)

    # retrieve bounds at current node
    n = x._current_node
    lower = n.lower_variable_bounds
    upper = n.upper_variable_bounds

    # compute midpoint value
    x_value = 0.5*(upper + lower)
    f_val = sin(x_value[1])*x_value[2]^2-cos(x_value[3])/x_value[4]
    x._upper_objective_value = f_val
    x._upper_solution = x_value
    x._upper_feasibility = true

    return
end

import EAGO: preprocess!, postprocess!, cut_condition
function EAGO.preprocess!(t::IntervalExt, x::EAGO.Optimizer)
    x._preprocess_feasibility = true
    return
end
function EAGO.postprocess!(t::IntervalExt, x::EAGO.Optimizer)
    x._postprocess_feasibility = true
    return
end
EAGO.cut_condition(t::IntervalExt, x::EAGO.Optimizer) = false

m = JuMP.Model(EAGO.Optimizer)
JuMP.set_optimizer_attribute(m, "branch_variable", Bool[true; true; true; true])
JuMP.set_optimizer_attribute(m, "verbosity",		10)
JuMP.set_optimizer_attribute(m, "time_limit",		10.0)
JuMP.set_optimizer_attribute(m, "obbt_depth",		0)
JuMP.set_optimizer_attribute(m, "dbbt_depth",		0)
JuMP.set_optimizer_attribute(m, "cp_depth",			0)

# Adds variables and bounds
x_L = [-10, -1, -10, 2]
x_U = [10, 1, 10, 20]
@variable(m, x_L[i] <= x[i=1:4] <= x_U[i])

# Solves the problem
JuMP.optimize!(m)

Exploit sparsity in relaxations of nonlinear expressions

Currently, when EAGO is given a problem containing m nonlinear variables, it uses subgradients of size m in all nonlinear expressions. We already get sparsity information from the nonlinear structures when translating the JuMP.NLPEvaluator to the EAGO.Evaluator, so it would make sense to use, e.g., MC{2,NS} objects when computing a two-variable nonlinear term and then use the grad_sparsity values to unpack the result into the appropriate components of the full subgradient (see the sketch below).
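
A minimal sketch of the idea (not EAGO internals; the values and sparsity pattern below are made up for illustration):

using EAGO.McCormick, IntervalArithmetic

m_total = 5            # total number of nonlinear variables in the problem
sparsity = [2, 4]      # grad_sparsity of this term: it only involves x2 and x4

# Relax the two-variable term with 2-dimensional subgradients...
x2 = MC{2,NS}(1.0, Interval(0.0, 2.0), 1)    # local index 1
x4 = MC{2,NS}(0.5, Interval(0.0, 1.0), 2)    # local index 2
term = x2*exp(x4)

# ...then scatter the compact subgradient into the full m-dimensional one.
cv_grad_full = zeros(m_total)
cv_grad_full[sparsity] .= term.cv_grad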

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Better handling for conic and quadratic constraints.

EAGO's handling of this class of problems is somewhat limited relative to existing MINLP offerings (BARON, ANTIGONE, etc.): we aren't currently exploiting specialized forms or reformulations of these constraints. We still need to determine which approaches we want to adopt for them.

Basic support for MINLP Problems

  • Support add_variable with the ZeroOne type
  • Update the local NLP problem to solve at fixed integer values of the binary variables
  • Update the SOCP and quadratic relax! routines to handle integer values
  • Set the treat_x_as_number flag for integer values of binary variables prior to relaxing nonlinear terms

Add support for multiple SIP constraints

Currently the SIP solver only handles a single semi-infinite constraint. Supporting multiple constraints will require the lower-level problem to maximize the single nonsmooth aggregate constraint gSIP(x,p) = max{gSIP_1, gSIP_2, ...}; this could also be reformulated as a 0-1 MINLP (see the sketch below).

This requires either MINLP support or nonsmooth upper-bounding problem support in EAGO.
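
A tiny sketch of the aggregation step described above (the gSIP_i below are made-up illustrative constraints, not from EAGO):

gSIP_1(x, p) = x[1] + p[1]^2 - 1.0
gSIP_2(x, p) = x[2]*p[1] - 0.5

# Single nonsmooth semi-infinite constraint for the lower-level problem:
gSIP(x, p) = max(gSIP_1(x, p), gSIP_2(x, p))

The alternative 0-1 MINLP reformulation would instead introduce binary variables y_i with sum(y) == 1 and maximize sum(y_i*gSIP_i(x, p)) in the lower-level problem.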

Compatibility issues with DiffRules in Julia 1.3

I am running into compatibility issues with DiffRules when I try to install EAGO in Julia 1.3; see below:

(v1.3) pkg> add EAGO
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package EAGO [bb8be931]:
EAGO [bb8be931] log:
|- possible versions are: [0.2.0-0.2.1, 0.3.0] or uninstalled
|- restricted to versions * by an explicit requirement, leaving only versions [0.2.0-0.2.1, 0.3.0]
|- restricted by compatibility requirements with JuMP [4076af6c] to versions: 0.3.0 or uninstalled, leaving only versions: 0.3.0
|-- JuMP [4076af6c] log:
|--- possible versions are: [0.18.3-0.18.6, 0.19.0-0.19.2, 0.20.0-0.20.1] or uninstalled
|--- restricted to versions 0.20.1 by an explicit requirement, leaving only versions 0.20.1
|- restricted by compatibility requirements with DiffRules [b552c78f] to versions: 0.2.0-0.2.1 or uninstalled — no versions left
|-- DiffRules [b552c78f] log:
|--- possible versions are: [0.0.8-0.0.10, 0.1.0, 1.0.0] or uninstalled
|--- restricted to versions 1.0.0 by an explicit requirement, leaving only versions 1.0.0

I was wondering if you have any advice as to how to move forward.

Reduce minor type instability issues

  • Parameterize EAGO.Optimizer with an ExtensionType; this would also simplify f(m.ext_type, m) calls (see the sketch after this list).
  • The relax_tag field can be inferred from the Optimizer parameter.
  • relaxed_optimizer: can we infer this type from an extension?
  • upper_optimizer: can we infer this type from an extension?
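
A minimal sketch of the parameterization idea (type and function names assumed, not EAGO's actual layout):

abstract type AbstractExtension end
struct DefaultExt <: AbstractExtension end

# Storing the extension as a type parameter makes m.ext_type's concrete type
# known to the compiler, so f(m.ext_type, m) calls dispatch statically.
struct Optimizer{T<:AbstractExtension}
    ext_type::T
end

lower_problem!(t::DefaultExt, m::Optimizer) = nothing
lower_problem!(m::Optimizer) = lower_problem!(m.ext_type, m)

m = Optimizer(DefaultExt())
lower_problem!(m)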

BufferedNonlinearFunction & NonlinearExpr induce a few allocations, but fixing this requires a large rewrite

SIP algorithms respect JuMP input and problem class

Currently, the high-level SIP algorithm requests an objective and constraints in the form f(x,p), g(x,p), where f and g are user-defined functions. This substantially limits the number of viable sub-problem optimizers. Also, more specialized routines may outperform the global solver on specialized problem types. A roadmap for fixing this is below:

  • Define a SIPModel(nx, nz, np) constructor which attaches an extension to the JuMP model and potentially an SIPOptimizer to be defined. This function should return the model, the decision variables, and the uncertain variables.
  • Define an optimize hook that checks each constraint (and the objective) for the types of participating variables (decision vs. uncertain), labels each constraint accordingly, and then calls an SIP solution algorithm.
  • For Mitsos 2009 and similar algorithms, constraints containing only decision variables simply participate in the lower and upper bounding problems. Constraints containing only uncertain variables participate only in the lower-level problems. Constraints containing a mix are labeled as semi-infinite constraints and treated as such (some check and error for certain equality constraints may be necessary).
  • Add a check for convexity based solely on constraint types and dispatch models to more specialized routines.
  • Incorporate parametric DCP techniques to diagnose convexity and dispatch the problem to more specialized routines.

Longer term, it will be interesting to look into InfiniteOpt.jl support. However, most of the use cases for nonconvex SIPs would require support for nonlinear terms (1-3 years out).
