
Plasmo.jl's Introduction


Plasmo.jl

(Platform for Scalable Modeling and Optimization)

Note

Plasmo.jl has undergone a significant refactoring with the release of version 0.6. While most syntax should still work, we advise checking the documentation for the latest updates and filing an issue if a v0.5 model produces errors.

Plasmo.jl is a graph-based algebraic modeling framework for building, managing, and solving optimization problems that utilizes graph-theoretic concepts and modular data structures. The package extends JuMP.jl to offer concise syntax, interfaces with MathOptInterface.jl to access standard optimization solvers, and utilizes Graphs.jl to provide graph analysis and processing methods. Plasmo.jl facilitates developing optimization models for networked systems such as supply chains, power systems, industrial processes, or any coupled system that involves multiple components and connections. The package also acts as a high-level platform to develop customized optimization-based decomposition techniques and meta-algorithms to optimize problems over large systems.

Overview

The core object in Plasmo.jl is the OptiGraph, a graph data structure that represents optimization problems as a set of optinodes and optiedges. Optinodes encapsulate variables, expressions, constraints, and objective functions as modular, self-contained models; optiedges encapsulate linking constraints that couple variables across optinodes, which together produce an underlying hypergraph structure. Optigraphs can be embedded within other optigraphs to induce nested hierarchical structures, or they can be partitioned using different graph projections and partitioning algorithms to create new decomposition structures. The resulting graph structures can be used for simple model and data management, for graph partitioning, or to develop interfaces to structured optimization solvers.
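For example, a minimal sketch of partitioning an optigraph with a node membership vector (the syntax mirrors v0.5-era usage seen in the issues below and may differ in other versions):

using Plasmo

graph = OptiGraph()
@optinode(graph, nodes[1:4])
for node in nodes
    @variable(node, x >= 0)
    @objective(node, Min, x)
end
@linkconstraint(graph, nodes[1][:x] + nodes[2][:x] >= 1)
@linkconstraint(graph, nodes[3][:x] + nodes[4][:x] >= 1)

# assign nodes 1-2 and 3-4 to two partitions and restructure the graph
partition = Plasmo.Partition(graph, [1, 1, 2, 2])
apply_partition!(graph, partition)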

License

Plasmo.jl is licensed under the MPL 2.0 license.

Installation

Install Plasmo using Pkg.add:

import Pkg
Pkg.add("Plasmo")
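Alternatively, from the Julia Pkg REPL (enter it by pressing ] at the julia> prompt):

pkg> add Plasmo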

Documentation

The current documentation is available through GitHub Pages. Additional examples can be found in the examples folder.

Simple Example

using Plasmo
using Ipopt

# create an optigraph
graph = OptiGraph()

# add nodes to the optigraph
@optinode(graph, n1)
@optinode(graph, n2)

# add variables and constraints to the nodes
@variable(n1, 0 <= x <= 2)
@variable(n1, 0 <= y <= 3)
@constraint(n1, x + y <= 4)

@variable(n2, x)
@constraint(n2, exp(x) >= 2)

# add linking constraints that couple nodes
@linkconstraint(graph, n1[:x] == n2[:x])

# set an optigraph objective
@objective(graph, Min, n1[:x] + n2[:x])

# optimize with Ipopt
set_optimizer(graph, Ipopt.Optimizer)
optimize!(graph)

# print solution values
println("n1[:x] = ", value(n1[:x]))
println("n2[:x] = ", value(n2[:x]))

Acknowledgments

This code is based on work supported by the following funding agencies:

  • U.S. Department of Energy (DOE), Office of Science, under Contract No. DE-AC02-06CH11357
  • DOE Office of Electricity Delivery and Energy Reliability’s Advanced Grid Research and Development program at Argonne National Laboratory
  • National Science Foundation under award NSF-EECS-1609183 and under award CBET-1748516

The primary developer is Jordan Jalving (@jalving) with support from the following contributors.

  • Victor Zavala (University of Wisconsin-Madison)
  • Yankai Cao (University of British Columbia)
  • Kibaek Kim (Argonne National Laboratory)
  • Sungho Shin (University of Wisconsin-Madison)

Citing Plasmo.jl

If you find Plasmo.jl useful for your work, you may cite the manuscript as:

@article{Jalving2022,
  title={A Graph-Based Modeling Abstraction for Optimization: Concepts and Implementation in Plasmo.jl},
  author={Jordan Jalving and Sungho Shin and Victor M. Zavala},
  journal={Mathematical Programming Computation},
  year={2022},
  volume={14},
  pages={699--747},
  doi={10.1007/s12532-022-00223-3}
}

You can also access a freely available pre-print.

There is also an earlier manuscript where we presented the initial ideas behind Plasmo.jl, which you can find here:

@article{JalvingCaoZavala2019,
  title={Graph-based modeling and simulation of complex systems},
  author={Jalving, Jordan and Cao, Yankai and Zavala, Victor M},
  journal={Computers {\&} Chemical Engineering},
  year={2019},
  volume={125},
  pages={134--154},
  doi={10.1016/j.compchemeng.2019.03.009}
}

A pre-print of this paper can be found here.

Plasmo.jl's People

Contributors

bbrunaud, dlcole3, github-actions[bot], jalving, odow, tso-martin, yankaicao


Plasmo.jl's Issues

Modeling with many optinodes does not scale

This issue comes up if a user wants to model individual variables as optinodes to do things like partitioning. The problem with this is that memory use explodes because each optinode creates an empty JuMP model. A possible solution is for us to inspect the hypergraph of an optinode directly and provide the mechanisms to perform and apply partitions.

Flat model doesn't work with Fixed variables

julia> create_jump_graph_model(d.graph)
ERROR: Unrecognized variable category Fixed. Should be one of:
    Symbol[:Cont, :Int, :Bin, :SemiCont, :SemiInt]
Stacktrace:
 [1] setcategory(::JuMP.Variable, ::Symbol) at /home/bbrunaud/.julia/v0.6/JuMP/src/JuMP.jl:680
 [2] _buildnodemodel!(::JuMP.Model, ::Plasmo.PlasmoModels.JuMPNode, ::Plasmo.PlasmoModels.ModelNode) at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoModels/solve.jl:209
 [3] create_jump_graph_model(::Plasmo.PlasmoModels.ModelGraph) at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoModels/solve.jl:116

Quadratic objectives with variables on separate nodes are not supported

Plasmo seems to support quadratic objectives only when both variables of a quadratic term are on the same node. If a quadratic term whose variables live on separate nodes is added to the objective function, the line here returns an error. The example below reproduces the error:

using Plasmo, JuMP, Ipopt

graph = OptiGraph()
set_optimizer(graph, Ipopt.Optimizer)

@optinode(graph, nodes[1:4])
for (i, node) in enumerate(nodes)
    @variable(node, x >= i)
    @objective(node, Min, 2 * x^2)
end

optimize!(graph)

obj_func = objective_function(graph)

new_term = UnorderedPair(nodes[1][:x], nodes[2][:x])
obj_func.terms[new_term] = 3.0

optimize!(graph)

On the modeling side, this can be avoided by placing a dummy variable on one node and adding a linking constraint between the dummy variable and the variable it mirrors (see the sketch below), but it would be nice to someday support quadratic terms that span nodes.
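A minimal sketch of that workaround (the variable names here are illustrative, not from the original report):

using Plasmo, Ipopt

graph = OptiGraph()
@optinode(graph, nodes[1:2])
@variable(nodes[1], x >= 1)
@variable(nodes[2], x >= 2)

# mirror node 2's variable on node 1 through a linking constraint
@variable(nodes[1], x_copy)
@linkconstraint(graph, nodes[1][:x_copy] == nodes[2][:x])

# the cross-node quadratic term now lives entirely on node 1
@objective(nodes[1], Min, 3.0 * nodes[1][:x] * nodes[1][:x_copy])

set_optimizer(graph, Ipopt.Optimizer)
optimize!(graph)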

Plasmo.set_model() no longer appears to work

Hi Jordan,

Thanks for your great work in moving Plasmo forward! I've updated to Plasmo v0.4, and my existing model that worked fine in v0.3 is throwing errors on the optimize!(graph) call. It seems to be due to using Plasmo.set_model(), as I can reproduce this error in the optigraph_4_nodes.jl example, e.g., by using this form:

m = Model()

@variable(m,0 <= x <= 2)
@variable(m,0 <= y <= 3)
@variable(m, 0 <= z <= 2)
@constraint(m,x+y+z >= 4)
@objective(m,Min,y)

Plasmo.set_model(nodes[1], m)

it throws the following error:

 include("optigraph_4_nodes.jl")
ERROR: LoadError: type CachingOptimizer has no field optimizers
Stacktrace:
 [1] getproperty(x::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}}, f::Symbol)
   @ Base ./Base.jl:33
 [2] _aggregate_backends!(graph::OptiGraph)
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:25
 [3] _moi_optimize!(graph::OptiGraph)
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:258
 [4] optimize!(graph::OptiGraph; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:248
 [5] optimize!(graph::OptiGraph)
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:227
 [6] top-level scope
   @ /mnt/e/ANL/Projects/NG_Modeling/NGTransient/examples/optigraph_4_nodes.jl:44
 [7] include(fname::String)
   @ Base.MainInclude ./client.jl:444
 [8] top-level scope
   @ REPL[1]:1
in expression starting at .../optigraph_4_nodes.jl:44

caused by: ArgumentError: ModelLike of type Ipopt.Optimizer does not support accessing the attribute Plasmo.OptiGraphOptimizeHook()
Stacktrace:
 [1] get_fallback(::Ipopt.Optimizer, ::Plasmo.OptiGraphOptimizeHook)
   @ MathOptInterface ~/julia-1.6.2/share/julia/packages/MathOptInterface/YDdD3/src/attributes.jl:296
 [2] get(::Ipopt.Optimizer, ::Plasmo.OptiGraphOptimizeHook)
   @ MathOptInterface ~/julia-1.6.2/share/julia/packages/MathOptInterface/YDdD3/src/attributes.jl:293
 [3] get(b::MathOptInterface.Bridges.LazyBridgeOptimizer{Ipopt.Optimizer}, attr::Plasmo.OptiGraphOptimizeHook)
   @ MathOptInterface.Bridges ~/julia-1.6.2/share/julia/packages/MathOptInterface/YDdD3/src/Bridges/bridge_optimizer.jl:804
 [4] get(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.GenericModel{Float64, MathOptInterface.Utilities.ModelFunctionConstraints{Float64}}}}, attr::MathOptInterface.Utilities.AttributeFromOptimizer{Plasmo.OptiGraphOptimizeHook})
   @ MathOptInterface.Utilities ~/julia-1.6.2/share/julia/packages/MathOptInterface/YDdD3/src/Utilities/cachingoptimizer.jl:890
 [5] optimize!(graph::OptiGraph; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:241
 [6] optimize!(graph::OptiGraph)
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/djSWE/src/optimizer_interface.jl:227
 [7] top-level scope
   @ .../optigraph_4_nodes.jl:44
 [8] include(fname::String)
   @ Base.MainInclude ./client.jl:444
 [9] top-level scope
   @ REPL[1]:1

Is Plasmo.set_model() being deprecated, or is there a bug in 0.4? I'd appreciate your suggestion on how best to update my model to version 0.4. If that involves simply passing the optinode objects to the JuMP macros I can do that, but I wanted to avoid rewriting lots of code if there is another way!

Eric

Aggregated graph backend does not update variables correctly

Ran into this problem trying to implement a copy function for an OptiGraph, and it might be related to the way backends are set up. In the following code, the aggregated graph backend does not correctly record the variables:

using Plasmo

function build_partitioned_graph()
    g = OptiGraph()
    @optinode(g, nodes[1:4])

    for node in nodes
        @variable(node, 0 <= x)
        @objective(node, Min, x)
    end
    @linkconstraint(g, nodes[1][:x] + nodes[2][:x] >= 1)
    @linkconstraint(g, nodes[2][:x] + nodes[3][:x] >= 1)
    @linkconstraint(g, nodes[3][:x] + nodes[4][:x] >= 1)
    @linkconstraint(g, nodes[4][:x] + nodes[1][:x] >= 1)

    node_membership_vector = [1, 1, 2, 2]
    partition = Plasmo.Partition(g, node_membership_vector)

    apply_partition!(g, partition)
    return g
end
g1 = build_partitioned_graph()
g2 = build_partitioned_graph()

g0 = OptiGraph()
add_subgraph!(g0, g1)
add_subgraph!(g0, g2)

set_to_node_objectives(g0)

@optinode(g1, n_g1)
@optinode(g2, n_g2)
@optinode(g0, n_g)

agg_graph_layer1, ref_map_layer1 = aggregate_to_depth(g0, 1)

gb = graph_backend(g0)
gb_agg = graph_backend(agg_graph_layer1)

If you look at the graph backends of the original vs. aggregated graphs (gb vs. gb_agg), the latter reports 0 variables and constraints. In addition, element_to_graph_map and graph_to_element_map are both empty, and node_variables is missing nodes and variables. The all_variables function still works correctly, though.

I looked into this a little deeper, and I think it may have to do with MOIU.pass_attributes in the aggregate.jl file here. In the source code for MOIU.pass_attributes here, it relies on MOI.get to collect the attributes in MOI.ListOfModelAttributesSet() or MOI.ListOfVariableAttributesSet(). However, if I call either MOI.get(gb, MOI.ListOfVariableAttributesSet()) or MOI.get(gb.backend, MOI.ListOfVariableAttributesSet()), I get an empty vector. The vector is nonempty if I call it on the lowest-level subgraph's backend (e.g., on graph_backend(getsubgraphs(g1)[1])). I think perhaps the MOI backend is not seeing the subgraph information correctly? Any thoughts on how best to address this? My knowledge of the MOI backend and graph backend interfaces is not yet good enough to know how to resolve it.

Setting `Max` instead of `Min` in objective makes `objective_value` negative

If you set the objective on a node to maximize instead of minimize, the objective value returned will be negative. For example, in the code below, the objective value should be +2, but it is -2.

using Plasmo, HiGHS

g = OptiGraph()
set_optimizer(g, HiGHS.Optimizer)
@optinode(g, n)
@variable(n, x <= 2)
@objective(n, Max, x)
optimize!(g)
println(objective_value(g))

function optimize!(...) does not return JuMP termination status

In Plasmo's solve.jl, the function JuMP.optimize!(graph::OptiGraph, optimizer; kwargs...) always returns an empty (nothing) status from the JuMP.optimize!(...) call. Using JuMP.termination_status does work, however, as follows:

model = Plasmo.getmodel(optinode)
JuMP.set_optimizer(optinode,optimizer)
JuMP.optimize!(model)  #,optimizer;kwargs...)
status = JuMP.termination_status(model)

It would be nice to update this function, as there's no other way to get the JuMP termination status of the model from the Plasmo.optimize!() functions. (A user-side helper along these lines is sketched below.)
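A helper that wraps the workaround above into a single call (a sketch, reusing the per-node calls shown in this report):

using JuMP, Plasmo

function solve_with_status!(optinode, optimizer)
    model = Plasmo.getmodel(optinode)
    JuMP.set_optimizer(optinode, optimizer)
    JuMP.optimize!(model)
    return JuMP.termination_status(model)
end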

How to install Plasmo in Julia v0.6.4

Excuse me, I am trying to reproduce the analysis in an article. I saw they were using Plasmo v0.1.0- in Julia v0.6.4, so I tried to install it on my computer as:

julia> Pkg.add("Plasmo",v"0.1.0-")
ERROR: Plasmo can't be installed because it has no versions that support 0.6.4 of julia. You may need to update METADATA by running Pkg.update()

I found that Plasmo v0.1.0- is not in the Julia METADATA registry, so I tried:

julia> Pkg.clone("https://github.com/plasmo-dev/Plasmo.jl","Plasmo")
INFO: Cloning Plasmo from https://github.com/plasmo-dev/Plasmo.jl

It worked, but the version of Plasmo I got was v0.2.0+, which depends on LinearAlgebra, a standard library that only appeared in Julia v0.7.0.
I turned to the authors of the article for help, and they told me to ask here.
Could you give me some advice? Thanks a lot.

`dual(subgraph, link_ref)` does not work like `value(subgraph, var_ref)`

When using expanded subgraphs with the same variables defined on multiple subgraphs, the values of the variables in the different subgraphs after optimization can be queried by calling value(subgraph, var_ref). However, the same is not true for the dual function when the same linking constraints are defined on different subgraphs.

The code below replicates this issue. In the last print statement, I would expect that calling dual(expanded_subgraphs[1], middle_link) would return the value of dual1, but it returns dual2.

using Plasmo, JuMP, Ipopt

graph = OptiGraph()
set_optimizer(graph, Ipopt.Optimizer)

@optinode(graph, nodes[1:4])
for (i, node) in enumerate(nodes)
    @variable(node, x >= i)
    @objective(node, Min, 2 * x)
end
for i in 1:3
    @linkconstraint(graph, nodes[i + 1][:x] + nodes[i][:x] >= i * 4)
end

node_membership = [1, 1, 2, 2]
hypergraph, hyper_map = gethypergraph(graph)
partition = Partition(hypergraph, node_membership, hyper_map)
apply_partition!(graph, partition)
subgraphs = getsubgraphs(graph)
expanded_subgraphs = Plasmo.expand.(graph, subgraphs, 1)
set_optimizer(expanded_subgraphs[1], Ipopt.Optimizer)
set_optimizer(expanded_subgraphs[2], Ipopt.Optimizer)

middle_link = graph.optiedges[1].linkrefs[1]

optimize!(expanded_subgraphs[1])
dual1 = dual(middle_link)

optimize!(expanded_subgraphs[2])
dual2 = dual(middle_link)

println(dual(expanded_subgraphs[1], middle_link) == dual1) # this line is false

I believe this is because the code here depends on the last_solution_id attribute of the optiedge, which points to the solution from expanded_subgraphs[2]. For now, I can get around this by writing a wrapper that sets the last_solution_id of the optiedge to point to expanded_subgraphs[1], as in the code below.

graph.optiedges[1].backend.last_solution_id = expanded_subgraphs[1].id

deepcopy not preserving variable names

When I copy a graph, I expect the copy to be exact, including variable names. For some reason the objDict is breaking:

julia> g2 = deepcopy(g)
Model Graph
graph_id: ##825
nodes:2
simple links:32
hyper links: 0

julia> m2 = getmodel(getnode(g2,1))
Minimization problem with:
 * 32 linear constraints
 * 66 variables
Solver is Gurobi

julia> m2.objDict
Dict{Symbol,Any} with 6 entries:
  Error showing value of type Dict{Symbol,Any}:
ERROR: AssertionError: j isa Array{Variable}
Stacktrace:
 [1] cont_str(::Type{T} where T, ::JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}}, ::Dict{Symbol,String}) at /home/bbrunaud/.julia/v0.6/JuMP/src/print.jl:417
 [2] show(::IOContext{Base.AbstractIOBuffer{Array{UInt8,1}}}, ::JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}}) at /home/bbrunaud/.julia/v0.6/JuMP/src/print.jl:397
 [3] #sprint#228(::IOContext{Base.Terminals.TTYTerminal}, ::Function, ::Int64, ::Function, ::JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}}, ::Vararg{JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}},N} where N) at ./strings/io.jl:64
 [4] (::Base.#kw##sprint)(::Array{Any,1}, ::Base.#sprint, ::Int64, ::Function, ::JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}}, ::Vararg{JuMP.JuMPArray{JuMP.Variable,4,Tuple{Array{Symbol,1},Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}},N} where N) at ./<missing>:0
 [5] show(::IOContext{Base.Terminals.TTYTerminal}, ::MIME{Symbol("text/plain")}, ::Dict{Symbol,Any}) at ./replutil.jl:65
 [6] display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::MIME{Symbol("text/plain")}, ::Dict{Symbol,Any}) at /home/bbrunaud/.julia/v0.6/OhMyREPL/src/output_prompt_overwrite.jl:8
 [7] display(::Base.REPL.REPLDisplay{Base.REPL.LineEditREPL}, ::Dict{Symbol,Any}) at ./REPL.jl:125
 [8] display(::Dict{Symbol,Any}) at ./multimedia.jl:194
 [9] eval(::Module, ::Any) at ./boot.jl:235
 [10] print_response(::Base.Terminals.TTYTerminal, ::Any, ::Void, ::Bool, ::Bool, ::Void) at ./REPL.jl:144
 [11] print_response(::Base.REPL.LineEditREPL, ::Any, ::Void, ::Bool, ::Bool) at ./REPL.jl:129
 [12] (::Base.REPL.#do_respond#16{Bool,Base.REPL.##26#36{Base.REPL.LineEditREPL,Base.REPL.REPLHistoryProvider},Base.REPL.LineEditREPL,Base.LineEdit.Prompt})(::Base.LineEdit.MIState, ::Base.AbstractIOBuffer{Array{UInt8,1}}, ::Bool) at ./REPL.jl:646

julia> getindex(m2, :x)
Error showing value of type JuMP.JuMPArray{JuMP.Variable,3,Tuple{Array{Symbol,1},Array{Symbol,1},UnitRange{Int64}}}:
ERROR: AssertionError: j isa Array{Variable}

`dual` does not work on linkconstraints set after `optimize!` is called

Plasmo throws an error if you try to query the dual of a link constraint that was added after optimize! was called. A MWE is included below: if I call optimize! on a graph, then add a link constraint, call optimize! again, and query the dual of the new link constraint, I get an error. This is on Plasmo v0.5.4.

using Plasmo, HiGHS
g = OptiGraph()
set_optimizer(g, HiGHS.Optimizer)

@optinode(g, n1)
@optinode(g, n2)

@variable(n1, x[1:2] >= 0)
@variable(n2, x[1:2] >= 0)

lc = @linkconstraint(g, [i = 1:2], n1[:x][i] == n2[:x][i])

@objective(n1, Min, n1[:x][1])

optimize!(g)

dual.(lc)

lc_new = @linkconstraint(g, n1[:x][1] >= n2[:x][2])

optimize!(g)

dual(lc_new)

This is a pretty minor error (I ran into it while trying to implement a decomposition scheme). The stacktrace points to this line, so I suspect the EdgePointer object is not being properly updated after optimize! has been called once.

Nonlinear link constraints not yet supported

I saw in your documentation that nonlinear links are "not yet" supported. Is there any timeline on that? This would be a feature we would consider working on if it is a reasonable one to start in on.

Very, very cool package.

Adding nodes to a subgraph after the main graph's backend is initialized does not reset graph backend

I have found that adding to a subgraph does not necessarily update the overall graph's backend. For example, the code below throws an error: the first incident_edges call sets the backend of graph g0, and when the second node is later added, g0's backend is not updated. When Plasmo runs the second incident_edges call, extra_node is not in its hypermap.

using Plasmo

g0 = OptiGraph()
g1 = OptiGraph()
g2 = OptiGraph()

@optinode(g1, node)
@variable(node, x >= 0)

@optinode(g2, node)
@variable(node, x >= 0)

add_subgraph!(g0, g1)
add_subgraph!(g0, g2)

incident_edges(g0, all_nodes(g1))

@optinode(g1, extra_node)
@variable(extra_node, x >= -5)

incident_edges(g0, all_nodes(g1))

This does make sense, since there is no mapping from the subgraph to its "owning" graph (only from the "owning" graph to the subgraph), so there is no way for the upper graph to know to update its backend. Currently, I get around this in my code by forcing the upper graph to reset its backend by calling Plasmo.set_graph_backend (see the sketch below), but if there is an efficient way to fix this in the future, that would be nice.
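A sketch of that workaround applied to the example above (the exact reset call is an assumption based on the description):

# after mutating the subgraph, force the upper graph to rebuild its backend
@optinode(g1, another_node)
@variable(another_node, x >= -5)
Plasmo.set_graph_backend(g0)       # assumed call; rebuilds g0's backend
incident_edges(g0, all_nodes(g1))  # succeeds once the backend is reset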

Does Plasmo.jl decompose the solving process?

I was lucky to meet @odow at a workshop, and he introduced Plasmo.jl for my research. I can already see that it will enhance my modeling approach and lets me differentiate linking constraints from normal constraints. However, I didn't find any documentation regarding the solving process. Does it use decomposition? If so, which method: ADMM? Benders cuts?

I appreciate your response.

append_to_backend accesses private methods from MOI.Utilities

See

function append_to_backend!(dest::MOI.ModelLike, src::MOI.ModelLike)
    vis_src = MOI.get(src, MOI.ListOfVariableIndices()) # returns vector of MOI.VariableIndex
    index_map = MOIU.IndexMap()
    # per the comment in MOI:
    # "The `NLPBlock` assumes that the order of variables does not change (#849).
    # Therefore, all VariableIndex and VectorOfVariable constraints are added
    # separately, and no variables constrained-on-creation are added."
    # Consequently, Plasmo avoids using the constrained-on-creation approach because
    # of the way it constructs the NLPBlock for the optimizer.
    # has_nlp = MOI.NLPBlock() in MOI.get(src, MOI.ListOfModelAttributesSet())
    # constraints_not_added = if has_nlp
    constraints_not_added = Any[
        MOI.get(src, MOI.ListOfConstraintIndices{F,S}()) for
        (F, S) in MOI.get(src, MOI.ListOfConstraintTypesPresent()) if
        MOIU._is_variable_function(F)
    ]
    # else
    #     Any[
    #         MOIU._try_constrain_variables_on_creation(dest, src, index_map, S)
    #         for S in MOIU.sorted_variable_sets_by_cost(dest, src)
    #     ]
    # end
    # copy free variables into the graph optimizer
    MOI.Utilities._copy_free_variables(dest, index_map, vis_src)

x-ref jump-dev/MathOptInterface.jl#2520

getValue() doesn't seem to work with VariableRef arguments

JuMP.value(node::OptiNode,vref::VariableRef) throws an error:

ERROR: KeyError: key Symbol("##260") not found
Stacktrace:
 [1] getindex(h::Dict{Symbol, MathOptInterface.ModelLike}, key::Symbol)
   @ Base ./dict.jl:482
 [2] value(node::OptiNode, vref::VariableRef)
   @ Plasmo ~/julia-1.6.2/share/julia/packages/Plasmo/s5YhC/src/optinode.jl:48
 [3] top-level scope
   @ REPL[22]:1

This can be replicated in the example optigraph_4_nodes.jl by calling:

JuMP.value(node, node[:x])

In optinode.jl:48, the call to JuMP.backend(node).result_location[node.id] throws the error because the node.id symbol differs from the key in the result_location dict.

Can Plasmo be used for nested decomposition (in principle, at least)?

Hi Plasmo developers,

I just came across one of the older issues asking about decomposition support in Plasmo.jl and read the developer's reply. I am wondering if a combination of Plasmo.jl and PlasmoAlgorithms.jl can be used for nested decomposition. In other words, can each node of an OptiGraph encapsulate an entire "daughter" OptiGraph (and the nodes of the "daughter" OptiGraph in turn encapsulate "granddaughter" OptiGraphs, and so on), so that nested decomposition can be performed between those?

Thanks very much in advance for your reply!
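For reference, subgraph embedding already expresses this kind of nesting; a minimal sketch using the add_subgraph! API that appears elsewhere on this page:

using Plasmo

parent = OptiGraph()
daughter = OptiGraph()
granddaughter = OptiGraph()

@optinode(granddaughter, n)
@variable(n, x >= 0)

# each layer embeds the next as a subgraph
add_subgraph!(daughter, granddaughter)
add_subgraph!(parent, daughter)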

Project.toml compat is out of date

The compat bounds need to be updated. For instance, DataStructures is limited to ~0.17, but the current version of JuMP requires that 0.18 be used. One suggestion to help keep these up to date is to use CompatHelper.jl to automate the process.
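For instance, the DataStructures entry could be widened along these lines (a sketch of the Project.toml change, not the exact diff):

[compat]
DataStructures = "0.17, 0.18"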

Better graph show

We need a better show function for graphs; it should indicate the type of graph, the nodes, and the edges. (A possible direction is sketched after the example output below.)

julia> typeof(g)
Plasmo.PlasmoModels.ModelGraph

julia> g
graph_id: ##816
lightgraph:Plasmo.PlasmoGraphBase.HyperGraph([1, 2], Plasmo.PlasmoGraphBase.AbstractHyperEdge[Hyper Edge (1, 2)], Dict(2=>Plasmo.PlasmoGraphBase.AbstractHyperEdge[Hyper Edge (1, 2)],1=>Plasmo.PlasmoGraphBase.AbstractHyperEdge[Hyper Edge (1, 2)]))
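A possible direction (the accessor names here are assumptions, not the actual API):

function Base.show(io::IO, graph::ModelGraph)
    # summarize the graph instead of dumping its internal hypergraph
    print(io, "ModelGraph with ", length(getnodes(graph)), " nodes and ",
          length(getedges(graph)), " edges")
end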

Calling `optimize!` on an expanded subgraph resets the solution of a different subgraph

I am using expanded subgraphs in Plasmo via the Plasmo.expand function. The solution of whichever subgraph is optimized first is reset when optimize! is called on the other expanded subgraph. This only happens the first time, and I am not sure why. You can see the error replicated in the code below.

using Plasmo, JuMP, Ipopt

graph = OptiGraph()
set_optimizer(graph, Ipopt.Optimizer)

@optinode(graph, nodes[1:4])
for (i, node) in enumerate(nodes)
    @variable(node, x >= i)
    @objective(node, Min, 2 * x)
end
for i in 1:3
    @linkconstraint(graph, nodes[i + 1][:x] + nodes[i][:x] >= i * 4)
end

node_membership = [1, 1, 2, 2]
hypergraph, hyper_map = gethypergraph(graph)
partition = Partition(hypergraph, node_membership, hyper_map)
apply_partition!(graph, partition)
subgraphs = getsubgraphs(graph)
expanded_subgraphs = Plasmo.expand.(graph, subgraphs, 1)
set_optimizer(expanded_subgraphs[1], Ipopt.Optimizer)
set_optimizer(expanded_subgraphs[2], Ipopt.Optimizer)

optimize!(expanded_subgraphs[1])
vars1 = value.(all_variables(expanded_subgraphs[1])) #solution exists and can be queried

optimize!(expanded_subgraphs[2]) # this line resets the solution of expanded_subgraphs[1]
vars1 = value.(all_variables(expanded_subgraphs[1])) # this line returns an "OptimizeNotCalled()" error

Note that the solution is only reset once, and only for the very first expanded subgraph optimized. If I call optimize!(expanded_subgraphs[1]) after the above code, no solutions are reset and I can query solutions from either expanded subgraph.

Using a solver other than Ipopt

Hi, I've just started to use Plasmo.jl and have been going through some of the example code. My issue is that when I try to use a solver other than Ipopt, I get the following error:

LoadError: MathOptInterface.GetAttributeNotAllowed{MathOptInterface.NLPBlockDual}: Getting attribute MathOptInterface.NLPBlockDual(1) cannot be performed: Gurobi.Optimizer does not support getting the attribute MathOptInterface.NLPBlockDual(1). You may want to use a CachingOptimizer in AUTOMATIC mode or you may need to call reset_optimizer before doing this operation if the CachingOptimizer is in MANUAL mode.

In this case I'm trying to use Gurobi, but I get this error with every solver, even GLPK. Can you please help me figure out what I'm doing wrong?

`aggregate` throws an error for some OptiGraphs when `max_depth` > 0

I found that I am getting an error when trying to aggregate a problem with a max_depth of 1. I believe the issue arises because the variable nodes in this line has a length of 0. For the problem I am working on, I have an OptiGraph with subgraphs where all OptiNodes are contained in the subgraphs. Consequently, optinodes(graph) is empty, since optinodes only returns the OptiNodes defined on the highest layer. Below is a minimum working example that returns the same error I am getting.

using Plasmo

graph0 = OptiGraph()

graph1 = OptiGraph()
graph2 = OptiGraph()

@optinode(graph1, node1)
@variable(node1, 0 <= x)
@optinode(graph2, node2)
@variable(node2, 0 <= x)

add_subgraph!(graph0, graph1)
add_subgraph!(graph0, graph2)

@linkconstraint(graph0, graph1[:node1][:x] + graph2[:node2][:x] >= 1)

println(length(optinodes(graph0)) == 0)

agg_1, extras = Plasmo.aggregate(graph0, 1)

master fails to load PlasmoWorkflows

julia> using Plasmo
INFO: Recompiling stale cache file /home/bbrunaud/.julia/lib/v0.6/CodecZlib.ji for module CodecZlib.
INFO: Recompiling stale cache file /home/bbrunaud/.julia/lib/v0.6/LightGraphs.ji for module LightGraphs.
ERROR: LoadError: LoadError: ArgumentError: Module PlasmoGraphBase not found in current path.
Run `Pkg.add("PlasmoGraphBase")` to install the PlasmoGraphBase package.
Stacktrace:
 [1] _require(::Symbol) at ./loading.jl:435
 [2] require(::Symbol) at ./loading.jl:405
 [3] include_from_node1(::String) at ./loading.jl:576
 [4] include(::String) at ./sysimg.jl:14
 [5] include_from_node1(::String) at ./loading.jl:576
 [6] eval(::Module, ::Any) at ./boot.jl:235
 [7] _require(::Symbol) at ./loading.jl:490
 [8] require(::Symbol) at ./loading.jl:405
while loading /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoWorkflows/PlasmoWorkflows.jl, in expression starting on line 5
while loading /home/bbrunaud/.julia/v0.6/Plasmo/src/Plasmo.jl, in expression starting on line 123

After fixing the error in line 5 (the .. are missing), I get the following warnings:

julia> using Plasmo
WARNING: Method definition print(IO, Plasmo.PlasmoWorkflows.State) in module PlasmoWorkflows at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoWorkflows/state_manager/signal_print.jl:13 overwritten at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoWorkflows/node_task.jl:31.
WARNING: Method definition show(IO, Plasmo.PlasmoWorkflows.State) in module PlasmoWorkflows at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoWorkflows/state_manager/signal_print.jl:14 overwritten at /home/bbrunaud/.julia/v0.6/Plasmo/src/PlasmoWorkflows/node_task.jl:32.

We need to import Base.print and Base.show in PlasmoWorkflows (see the sketch below). Submitting a PR to fix both.
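A sketch of the fix (the State field used for printing is hypothetical):

# extend Base's functions instead of shadowing them, so the two
# method definitions no longer collide
import Base: print, show

Base.print(io::IO, state::State) = print(io, state.label)  # hypothetical field
Base.show(io::IO, state::State) = print(io, state)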

Advantages of Plasmo

Hi, first of all forgive me if the question is trivial. I recently read the paper "Scalable nonlinear programming framework for parameter estimation in dynamic biological system models", which uses Plasmo in the context of parameter estimation in dynamical systems. What is not completely clear to me is whether the advantage of a Plasmo-based modeling approach is evident only when PIPS-NLP is used as the solver, or whether there is a speedup in the case of Ipopt as well.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

import JuMP.getvalue

WARNING: both Plasmo and JuMP export "getvalue"; uses of it in module SCSimulation must be qualified
ERROR: UndefVarError: getvalue not defined
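Qualifying the call resolves the name clash; a minimal sketch:

import JuMP
val = JuMP.getvalue(x)       # use JuMP's getvalue explicitly
# val = Plasmo.getvalue(x)   # or Plasmo's, depending on the object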

Retrieving callback data for lazy constraints fails

I am trying to use lazy constraints on an OptiGraph using Gurobi, but unfortunately I get this error:
ERROR: Cannot query MathOptInterface.CallbackVariablePrimal{Gurobi.CallbackData}(Gurobi.CallbackData( sense : minimize ... , Ptr{Nothing} @0x000002be63f18ad0, 4)) from caching optimizer because no optimizer is attached.

The code I ran was this:

function plasmo_lazy_constraint()
    graph = OptiGraph()
    @optinode(graph, n1)
    @variable(n1, 0 <= x <= 2.5, Int)
    @variable(n1, 0 <= y <= 2.5, Int)
    @objective(n1, Max, y)
    # set_optimizer(n1.model, Gurobi.Optimizer)

    set_optimizer(graph, Gurobi.Optimizer)
    
    function my_callback_function(cb_data)
        x_val = callback_value(cb_data, x)
        y_val = callback_value(cb_data, y)

        status = callback_node_status(cb_data, graph)
        if status == MOI.CALLBACK_NODE_STATUS_FRACTIONAL
            println(" - Solution is integer infeasible!")
        elseif status == MOI.CALLBACK_NODE_STATUS_INTEGER
            println(" - Solution is integer feasible!")
        else
            @assert status == MOI.CALLBACK_NODE_STATUS_UNKNOWN
            println(" - I don't know if the solution is integer feasible :(")
        end
        if y_val - x_val > 1 + 1e-6
            con = @build_constraint(y - x <= 1)
            MOI.submit(graph, MOI.LazyConstraint(cb_data), con)
        elseif y_val + x_val > 3 + 1e-6
            con = @build_constraint(y + x <= 3)
            MOI.submit(graph, MOI.LazyConstraint(cb_data), con)
        end
    end
    set_attribute(graph, MOI.LazyConstraintCallback(), my_callback_function)
    optimize!(graph)
end
plasmo_lazy_constraint()

Of course, there is currently only one node, but I want to run the OptiGraph with multiple nodes and with PIPS-NLP afterwards. I appreciate your help.

Adding variables or objective to overall `OptiGraph` object

Hi, thanks for the exciting toolkit!

I see in the API documentation that the OptiGraph object extends JuMP.AbstractModel and supports most JuMP.Model functions. My understanding was that one could specify certain OptiNode objects within the OptiGraph and then have a custom overall objective, specified on the OptiGraph, taking variables from different OptiNodes.

For example, I would like to only specify feasibility problems in the OptiNodes and then set up an objective for the OptiGraph (instead of using a linear combination as is default).

However, when I try to add a variable or an objective to the OptiGraph object, I get a MethodError (screenshot omitted).

  1. I realize that jump_model(node::OptiNode) prints out the respective JuMP model. I was wondering if there is a corresponding function for the OptiGraph

using setmodel() for edge

I am trying to use the setmodel() function for an edge, i.e., adding an edge to my graph with a specific model on that edge.

Example:

edge1 = Plasmo.add_edge(graph,node_R,node_P)
pipe_model = sspassivelink(pdata_short)
Plasmo.setmodel(edge1, pipe_model)

The setmodel() method seems to work in an older Plasmo version (I believe v0.2), but it is not working in the current version. Is there a different method I should call to add a model to an edge?

Set model does not update object data in v0.6

The set_model functionality does not update node object data in v0.6. The following snippet fails:

using Plasmo
using JuMP
using HiGHS

graph = OptiGraph()
@optinode(graph, n1)
@optinode(graph, n2)

m1 = Model()
@variable(m1, x[1:2]>=0)
@objective(m1, Min, sum(x))
set_model(n1, m1)
n1[:x]

This should be a straightforward fix. We just need to make sure we're copying over the JuMP model data (a rough sketch is below).
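A rough sketch of the fix idea (the node registration step is hypothetical, not the actual internals):

# after set_model moves the JuMP model onto the node, also copy the
# model's named objects so lookups like n1[:x] keep working
for (name, obj) in object_dictionary(m1)
    n1[name] = obj  # hypothetical: register each named object on the node
end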

Optigraph set_objective is too slow

We need to use add_to_expression! in our code that aggregates optinode objective functions (see the sketch below). The current approach is poorly implemented and leads to very long model build times.
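A minimal sketch of the intended pattern, assuming affine node objectives:

using JuMP

# build the aggregate objective in place instead of summing expressions,
# which avoids repeatedly copying large AffExprs
function aggregate_node_objectives(node_objectives::Vector{AffExpr})
    obj = AffExpr(0.0)
    for node_obj in node_objectives
        add_to_expression!(obj, node_obj)
    end
    return obj
end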

`unset_binary` function does not work after `optimize!` is called

I found that calling unset_binary after optimize! does not work when using Juniper as the solver. Below is a MWE that reproduces the error.

using Ipopt, Juniper, Plasmo

graph = OptiGraph()

@optinode(graph, nodes[1:2])
@variable(nodes[1], x, Bin)
@variable(nodes[1], 1 <= y)
@variable(nodes[2], 1 <= y)
@linkconstraint(graph, nodes[1][:y] + nodes[2][:y] >= 3)
@constraint(nodes[1], nodes[1][:x] + nodes[1][:y] == 1.5)
@objective(graph, Min, nodes[1][:y] + nodes[2][:y] - nodes[1][:x])

ipopt = optimizer_with_attributes(Ipopt.Optimizer, "print_level"=>0)
optimizer = optimizer_with_attributes(Juniper.Optimizer, "nl_solver"=>ipopt)

set_optimizer(graph, optimizer)
optimize!(graph)
unset_binary(graph[:nodes][1][:x])

I initially thought it might be a problem on Juniper's side, but I can run the same problem in JuMP using Juniper and it does not return an error (see below):

using JuMP
m = Model()

@variable(m, x, Bin)
@variable(m, y1 >= 1)
@variable(m, y2 >= 1)
@constraint(m, y1 + y2 >= 3)
@constraint(m, y1 + x == 1.5)
@objective(m, Min, y1 + y2 - x)

set_optimizer(m, optimizer)

optimize!(m)
unset_binary(m[:x])

I followed the stacktrace, and it seems the failure occurs because this line calls delete on a MOI.Bridges.LazyBridgeOptimizer object. In contrast, this line works because it calls delete on a MOI.Utilities.CachingOptimizer object. The JuMP version also only calls delete on the CachingOptimizer object.

Upcoming refactoring of JuMP's nonlinear API

The upcoming release of JuMP v1.2 will break Plasmo. Read more here: https://discourse.julialang.org/t/ann-upcoming-refactoring-of-jumps-nonlinear-api/83052

This is going to affect Plasmo because you extend some internal functions:

function JuMP._tape_to_expr(
    m::OptiNode, k, nd::Vector{JuMP.NodeData}, adj, const_values,
    parameter_values, subexpressions::Vector{Any},
    user_operators::JuMP._Derivatives.UserOperatorRegistry,
    generic_variable_names::Bool, splat_subexpressions::Bool,
    print_mode=MIME("text/plain"),
)
    return JuMP._tape_to_expr(
        Model(), k, nd, adj, const_values, parameter_values, subexpressions,
        user_operators, generic_variable_names, splat_subexpressions, print_mode,
    )
end

x-ref: jump-dev/JuMP.jl#2955

Please ping me if you have questions.

Can't seem to install Plasmo in Julia 1.1

Hi, I am trying to use Plasmo but can't seem to add and use it. I get the error below. Do you have any suggestions?

(v1.1) pkg> add Plasmo
Resolving package versions...
Updating ~/.julia/environments/v1.1/Project.toml
[d3f7391f] + Plasmo v0.2.0
Updating ~/.julia/environments/v1.1/Manifest.toml
[ec485272] + ArnoldiMethod v0.0.4
[00ebfdb7] + CSTParser v0.5.2
[d25df0c9] + Inflate v0.1.1
[093fc24a] + LightGraphs v1.2.0
[1914dd2f] + MacroTools v0.5.0
[d3f7391f] + Plasmo v0.2.0
[189a3867] + Reexport v0.2.0
[ae029012] + Requires v0.5.2
[699a6c99] + SimpleTraits v0.8.0
[0796e94c] + Tokenize v0.5.3

(v1.1) pkg> using Plasmo
ERROR: could not determine command

julia> using Plasmo
[ Info: Precompiling Plasmo [d3f7391f-f14a-50cc-bbe4-76a32d1bad3c]
ERROR: LoadError: LoadError: LoadError: UndefVarError: LinearConstraint not defined
Stacktrace:
[1] getproperty(::Module, ::Symbol) at ./sysimg.jl:13
[2] top-level scope at none:0
[3] include at ./boot.jl:326 [inlined]
[4] include_relative(::Module, ::String) at ./loading.jl:1038
[5] include at ./sysimg.jl:29 [inlined]
[6] include(::String) at /Users/dsigler/.julia/packages/Plasmo/gYt9r/src/ModelGraph/PlasmoModelGraph.jl:1
[7] top-level scope at none:0
[8] include at ./boot.jl:326 [inlined]
[9] include_relative(::Module, ::String) at ./loading.jl:1038
[10] include at ./sysimg.jl:29 [inlined]
[11] include(::String) at /Users/dsigler/.julia/packages/Plasmo/gYt9r/src/Plasmo.jl:8
[12] top-level scope at none:0
[13] include at ./boot.jl:326 [inlined]
[14] include_relative(::Module, ::String) at ./loading.jl:1038
[15] include(::Module, ::String) at ./sysimg.jl:29
[16] top-level scope at none:2
[17] eval at ./boot.jl:328 [inlined]
[18] eval(::Expr) at ./client.jl:404
[19] top-level scope at ./none:3
in expression starting at /Users/dsigler/.julia/packages/Plasmo/gYt9r/src/ModelGraph/linkmodel.jl:15
in expression starting at /Users/dsigler/.julia/packages/Plasmo/gYt9r/src/ModelGraph/PlasmoModelGraph.jl:58
in expression starting at /Users/dsigler/.julia/packages/Plasmo/gYt9r/src/Plasmo.jl:15
ERROR: Failed to precompile Plasmo [d3f7391f-f14a-50cc-bbe4-76a32d1bad3c] to /Users/dsigler/.julia/compiled/v1.1/Plasmo/7Vng3.ji.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1197
[3] _require(::Base.PkgId) at ./loading.jl:960
[4] require(::Base.PkgId) at ./loading.jl:858
[5] require(::Module, ::Symbol) at ./loading.jl:853
