PSORLab / EAGO.jl
A development environment for robust and global optimization
License: MIT License
Currently, EAGO uses Ipopt by default.
Setting absolute and relative tolerances does not change the required tolerances for the upper problem's solution. There currently isn't a standardized manner of setting this in MOI in large part due to the lack of standard convergence metrics across NLP solvers.
So we'll want to explicitly support loading each NLP solver in the future, to prevent the user from needing to specify all options for solving the local problem or from encountering numerical issues due to mismatched tolerances.
We'll need to add a function to handle this in the parsing steps: check for the solver name, then update parameters based on that name and the parameters set in EAGO, and throw a warning if the solver name isn't recognized.
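The parsing step described above might be sketched as follows. This is an illustrative assumption, not EAGO's actual API: the function name is hypothetical, and only Ipopt's "tol" parameter name is real; the SNOPT parameter name is assumed.

```julia
using MathOptInterface
const MOI = MathOptInterface

# Hypothetical sketch: push EAGO's relative tolerance down to whichever
# NLP subsolver the user supplied, warning when the solver is unknown.
function set_subsolver_tolerances!(opt, rtol::Float64)
    name = MOI.get(opt, MOI.SolverName())
    if name == "Ipopt"
        MOI.set(opt, MOI.RawParameter("tol"), rtol)
    elseif name == "SNOPT"
        # parameter name assumed for illustration only
        MOI.set(opt, MOI.RawParameter("Major optimality tolerance"), rtol)
    else
        @warn "Unrecognized NLP solver $name; tolerances left at solver defaults."
    end
    return
end
```

The point of dispatching on MOI.SolverName() is that each NLP solver spells its convergence options differently, so EAGO-level tolerances must be translated per solver.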
Ran into the following error when using EAGO's branch-and-bound with user-defined subroutines:
ArgumentError: heap must be non-empty
Stacktrace:
[1] fathom!(::IntervalExt, ::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\DataStructures\SAI1X\src\heaps\minmax_heap.jl:265
[2] fathom!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:761
[3] global_solve!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:837
[4] optimize!(::Optimizer{GLPK.Optimizer,Ipopt.Optimizer}) at ...\.julia\packages\EAGO\nX0UV\src\eago_optimizer\optimize.jl:875
......
EAGO version: 0.3.1
DataStructures version: 0.17.5
There may be an issue with the McCormick relaxation of exponentiation. The plots look wrong (in both NS and Diff mode). An MWE is attached below:
using EAGO.McCormick, DataFrames, IntervalArithmetic

# function to be relaxed
f(x, y) = (x - 2)^2 + (x - y)^3 + (x - y)^4

# create a data frame to store output data
df = DataFrame(x = Float64[], y = Float64[], z = Float64[], cv1 = Float64[], cv2 = Float64[],
               cc1 = Float64[], cc2 = Float64[], l1 = Float64[], l2 = Float64[],
               u1 = Float64[], u2 = Float64[])

n = 30
X = Interval(-1.0, 1.0)
Y = Interval(-1.0, 1.0)
xrange = range(X.lo, stop = X.hi, length = n)
yrange = range(Y.lo, stop = Y.hi, length = n)

for (i, x) in enumerate(xrange)
    for (j, y) in enumerate(yrange)
        z = f(x, y)
        # two-variable relaxations, so the subgradient dimension is 2
        x_mc = MC{2,NS}(x, X, 1)
        y_mc = MC{2,NS}(y, Y, 2)
        x_dmc = MC{2,Diff}(x, X, 1)
        y_dmc = MC{2,Diff}(y, Y, 2)
        f_mc = f(x_mc, y_mc)
        f_dmc = f(x_dmc, y_dmc)
        save_tuple = (x, y, z, f_dmc.cv, f_mc.cv, f_dmc.cc, f_mc.cc,
                      f_dmc.Intv.lo, f_mc.Intv.lo, f_dmc.Intv.hi, f_mc.Intv.hi)
        push!(df, save_tuple)
    end
end
using Plots, CSV
pyplot()
Plots.plot(df.x, df.y, df.z, st = [:surface], title = "Smooth McCormick Relaxation")
Plots.plot!(df.x, df.y, df.cv1, st = [:surface])
Plots.plot!(df.x, df.y, df.cc1, st = [:surface])
# Plots.plot(df.x, df.y, df.z, st = [:surface], title = "Nonsmooth McCormick Relaxation")
# Plots.plot!(df.x, df.y, df.cv2, st = [:surface])
# Plots.plot!(df.x, df.y, df.cc2, st = [:surface])
# Plots.plot!(camera = (150,20))
# CSV.write("plot_data.csv", df)
Hi, I am using EAGO to solve a nonlinear least squares problem, and there are some issues I need your help with.
After calling optimize!(model), the output information is:
-----------------------------------------------------------------------------------------------------------------------------
| Iteration # | Nodes | Lower Bound | Upper Bound | Gap | Ratio | Time | Time Left |
-----------------------------------------------------------------------------------------------------------------------------
First Solution Found at Node 1
UBD = 3.527847092443349e-5
Solution is :
X[1] = 0.7607755418437602
X[2] = 0.3230414037667083
X[3] = 1.481190017639433
X[4] = 0.03637683583967992
X[5] = 0.53719735371654
However, I don't know the LBD and cannot tell the quality of the final solution from the above info. By contrast, another optimizer Couenne tells us both bounds and the gap.
In addition, when checking the final status, I have the following result:
termination_status(model) = MathOptInterface.OPTIMAL
objective_value(model) = 3.527847092443349e-5
It seems that the minimum objective value reported by JuMP is just the above UBD.
Nonetheless, by substituting the optimal solution (i.e., the five parameters) into the original objective function, I get a different result, 2.527822e-05, which is smaller than the UBD. In fact, the value 2.527822e-05 is exactly what Couenne obtains with zero optimality gap between its lower and upper bounds. Thus, even though EAGO managed to find the globally optimal solution, the objective value it reported was wrong.
Are there any options that control the display? Besides, I can post my code and data if needed. Thank you.
For new Julia versions, we run the tests for all packages to look for regressions. This package (and some of its dependencies) restricts its Julia version compatibility such that it is not compatible with these new Julia versions:
Line 46 in dac69e5
What happens when we try to run the tests of this package on a new Julia version is that EAGO v0.2.1 gets installed, because that version doesn't restrict the minor Julia version:
I would suggest just having
julia = "1"
in your project file instead. That would make it easier for us to test the package on new Julia versions and ensure that eventual regressions get caught.
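For concreteness, the suggested entry lives in the [compat] section of the package's Project.toml; the JuMP bound shown here is illustrative, only the julia entry is the actual suggestion:

```toml
[compat]
JuMP = "0.21"
julia = "1"
```

The bare major version "1" allows every Julia 1.x release, so CI for upcoming minor versions can resolve the current EAGO release instead of falling back to an old one.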
This was reverted in EAGO v0.4, as the tolerance-based approach we initially tried produced incorrect results. So to enable this we need:
I'm not sure if the issue is in Gurobi.jl, MOI, or EAGO, but I thought I'd ask here first.
I have a small problem for which GLPK runs without issues, but Gurobi reports NaNs. What is extra strange is that this only occurs if the problem is formulated with the objective as an @NLexpression. I have a gist with a Manifest.toml that you can use to reproduce these results:
https://gist.github.com/ericphanson/a50888d0749d3c51db112148284ff5d1
The problem is just
m = Model()
@variable(m, 0 <= v[i=1:2] <= 1)
@variable(m, 0 <= D[i=1:2,j=1:2] <= 1)
@NLexpression(m, R, sum(D[i,j] for j = 1:2, i = 1:2))
@NLexpression(m, term, sum(v[l] * R * v[k] for l = 1:2, k = 1:2))
@NLobjective(m, Min, term)
and if I use
@NLobjective(m, Min, sum(v[l] * R * v[k] for l = 1:2, k = 1:2))
instead of the @NLexpression, the error goes away; the error also goes away if I use GLPK instead of Gurobi.
Any ideas of what could be going wrong?
I'd like to report that some included filenames are incorrect. For example, EAGO.jl contains
include("Relaxations/scheme.jl")
but the filename is actually 'Scheme.jl'.
Create an evaluator for the upper bounding problem that supports lexicographic differentiation.
Right now, we're using a fancy overloading approach to build relaxations of nonlinear terms. Expansiveness of the interval calculations can lead to intermediate terms containing an interval bound on which some functions aren't defined (e.g. sqrt on [-0.001, 10]). Two potential approaches to dealing with this:
- lower_interval_bound, interval_bound functions
- relax! function
- relax_all_constraints!, delete_nl_constraints!, label_branch_variables!, parse_classify_problem!
- optimize!(::Val{SOCP}, m::Optimizer)
Adjust the SIP solution routines to prevent allocations and to support extension via a user-defined extension type.
EAGO#master is failing with MOI#master with the error free(): invalid pointer.
See https://github.com/blegat/SolverTests/runs/1934867045
The latest tagged release of IntervalArithmetic 0.13 introduces a breaking change which seems to be affecting EAGO, according to http://juliarun-ci.s3.amazonaws.com/606e2260ef91c5dcbefdece629d6e2e4417c72a5/IntervalArithmetic_reverse_dependency_tests.html
I'm guessing that this is a result of changing the semantics of expressions like 0.1 + x: previously, 0.1 was treated as the interval @interval(0.1); now it is treated as the floating-point number 0.1 for efficiency.
You can write @interval 0.1 + x to recover the old behaviour. (But note that using @interval performs some slow conversions.)
Starting from a fresh install of Julia 1.4, EAGO gives me the following precompilation error:
% julia
(@v1.4) pkg> add JuMP, EAGO
Cloning default registries into `~/.julia`
Cloning registry from "https://github.com/JuliaRegistries/General.git"
Added registry `General` to `~/.julia/registries/General`
Resolving package versions...
Installed NaNMath ────────────────────── v0.3.3
Installed Calculus ───────────────────── v0.5.1
Installed Reexport ───────────────────── v0.2.0
Installed SetRounding ────────────────── v0.2.0
Installed LinQuadOptInterface ────────── v0.6.0
Installed ErrorfreeArithmetic ────────── v0.5.0
Installed EAGO ───────────────────────── v0.2.1
Installed JuMP ───────────────────────── v0.19.2
Installed Parsers ────────────────────── v1.0.4
Installed OrderedCollections ─────────── v1.2.0
Installed FillArrays ─────────────────── v0.8.10
Installed RecipesBase ────────────────── v1.0.1
Installed SpecialFunctions ───────────── v0.10.3
Installed IntervalContractors ────────── v0.4.2
Installed DiffRules ──────────────────── v0.0.10
Installed MacroTools ─────────────────── v0.5.5
Installed Cassette ───────────────────── v0.2.6
Installed FastRounding ───────────────── v0.2.0
Installed ForwardDiff ────────────────── v0.10.10
Installed DiffResults ────────────────── v0.0.4
Installed BandedMatrices ─────────────── v0.15.8
Installed Compat ─────────────────────── v2.2.0
Installed GLPK ───────────────────────── v0.10.0
Installed ArrayLayouts ───────────────── v0.3.3
Installed MathOptInterface ───────────── v0.8.4
Installed FunctionWrappers ───────────── v1.1.1
Installed RoundingEmulator ───────────── v0.2.1
Installed CommonSubexpressions ───────── v0.2.0
Installed CRlibm ─────────────────────── v0.8.0
Installed MathProgBase ───────────────── v0.7.8
Installed BinaryProvider ─────────────── v0.5.10
Installed StaticArrays ───────────────── v0.8.3
Installed CompilerSupportLibraries_jll ─ v0.3.3+0
Installed IntervalArithmetic ─────────── v0.17.2
Installed DataStructures ─────────────── v0.17.17
Installed Ipopt ──────────────────────── v0.5.4
Installed BenchmarkTools ─────────────── v0.5.0
Installed JSON ───────────────────────── v0.21.0
Installed ReverseDiff ────────────────── v0.3.1
Installed OpenSpecFun_jll ────────────── v0.5.3+3
Downloading artifact: CompilerSupportLibraries
Downloading artifact: OpenSpecFun
Updating `~/.julia/environments/v1.4/Project.toml`
[bb8be931] + EAGO v0.2.1
[4076af6c] + JuMP v0.19.2
Updating `~/.julia/environments/v1.4/Manifest.toml`
[4c555306] + ArrayLayouts v0.3.3
[aae01518] + BandedMatrices v0.15.8
[6e4b80f9] + BenchmarkTools v0.5.0
[b99e7846] + BinaryProvider v0.5.10
[96374032] + CRlibm v0.8.0
[49dc2e85] + Calculus v0.5.1
[7057c7e9] + Cassette v0.2.6
[bbf7d656] + CommonSubexpressions v0.2.0
[34da2185] + Compat v2.2.0
[e66e0078] + CompilerSupportLibraries_jll v0.3.3+0
[864edb3b] + DataStructures v0.17.17
[163ba53b] + DiffResults v0.0.4
[b552c78f] + DiffRules v0.0.10
[bb8be931] + EAGO v0.2.1
[90fa49ef] + ErrorfreeArithmetic v0.5.0
[fa42c844] + FastRounding v0.2.0
[1a297f60] + FillArrays v0.8.10
[f6369f11] + ForwardDiff v0.10.10
[069b7b12] + FunctionWrappers v1.1.1
[60bf3e95] + GLPK v0.10.0
[d1acc4aa] + IntervalArithmetic v0.17.2
[15111844] + IntervalContractors v0.4.2
[b6b21f68] + Ipopt v0.5.4
[682c06a0] + JSON v0.21.0
[4076af6c] + JuMP v0.19.2
[f8899e07] + LinQuadOptInterface v0.6.0
[1914dd2f] + MacroTools v0.5.5
[b8f27783] + MathOptInterface v0.8.4
[fdba3010] + MathProgBase v0.7.8
[77ba4419] + NaNMath v0.3.3
[efe28fd5] + OpenSpecFun_jll v0.5.3+3
[bac558e1] + OrderedCollections v1.2.0
[69de0a69] + Parsers v1.0.4
[3cdcf5f2] + RecipesBase v1.0.1
[189a3867] + Reexport v0.2.0
[37e2e3b7] + ReverseDiff v0.3.1
[5eaf0fd0] + RoundingEmulator v0.2.1
[3cc68bcd] + SetRounding v0.2.0
[276daf66] + SpecialFunctions v0.10.3
[90137ffa] + StaticArrays v0.8.3
[2a0f44e3] + Base64
[ade2ca70] + Dates
[8bb1440f] + DelimitedFiles
[8ba89e20] + Distributed
[b77e0a4c] + InteractiveUtils
[76f85450] + LibGit2
[8f399da3] + Libdl
[37e2e46d] + LinearAlgebra
[56ddb016] + Logging
[d6f4376e] + Markdown
[a63ad114] + Mmap
[44cfe95a] + Pkg
[de0858da] + Printf
[3fa0cd96] + REPL
[9a3f8284] + Random
[ea8e919c] + SHA
[9e88b42a] + Serialization
[1a1011a3] + SharedArrays
[6462fe0b] + Sockets
[2f01184e] + SparseArrays
[10745b16] + Statistics
[8dfed614] + Test
[cf7118a7] + UUIDs
[4ec0a83e] + Unicode
Building GLPK ──→ `~/.julia/packages/GLPK/Fb12F/deps/build.log`
Building CRlibm → `~/.julia/packages/CRlibm/NFCH5/deps/build.log`
Building Ipopt ─→ `~/.julia/packages/Ipopt/Iu7vT/deps/build.log`
julia> using JuMP
[ Info: Precompiling JuMP [4076af6c-e467-56ae-b986-b466b2749572]
julia> using EAGO
[ Info: Precompiling EAGO [bb8be931-2a91-5aca-9f87-79e1cb69959a]
┌ Warning: Package EAGO does not have SparseArrays in its dependencies:
│ - If you have EAGO checked out for development and have
│ added SparseArrays as a dependency but haven't updated your primary
│ environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with EAGO
└ Loading SparseArrays into EAGO from project dependency, future warnings for EAGO are suppressed.
WARNING: could not import IntervalArithmetic.pi_interval into McCormick
WARNING: could not import IntervalContractors.pow_rev into McCormick
ERROR: LoadError: LoadError: LoadError: UndefVarError: pi_interval not defined
Stacktrace:
[1] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick_utilities/constants.jl:38
[2] include(::Module, ::String) at ./Base.jl:377
[3] include(::String) at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:1
[4] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:76
[5] include(::Module, ::String) at ./Base.jl:377
[6] include(::String) at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:1
[7] top-level scope at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:29
[8] include(::Module, ::String) at ./Base.jl:377
[9] top-level scope at none:2
[10] eval at ./boot.jl:331 [inlined]
[11] eval(::Expr) at ./client.jl:449
[12] top-level scope at ./none:3
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick_utilities/constants.jl:38
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/mccormick_library/mccormick.jl:76
in expression starting at /home/oliver/.julia/packages/EAGO/G9lC2/src/EAGO.jl:29
ERROR: Failed to precompile EAGO [bb8be931-2a91-5aca-9f87-79e1cb69959a] to /home/oliver/.julia/compiled/v1.4/EAGO/t0N0c_A6IiH.ji.
Stacktrace:
[1] error(::String) at ./error.jl:33
[2] compilecache(::Base.PkgId, ::String) at ./loading.jl:1272
[3] _require(::Base.PkgId) at ./loading.jl:1029
[4] require(::Base.PkgId) at ./loading.jl:927
[5] require(::Module, ::Symbol) at ./loading.jl:922
I cannot reproduce the following example after updating JuMP to v0.21:
https://github.com/PSORLab/EAGO-notebooks/blob/master/notebooks/nlpopt_interval_bnb.ipynb
Julia: v1.4 or v1.5
JuMP: v0.21.4
EAGO: 0.4.2 or 0.5.2
MWE:
using EAGO, IntervalArithmetic, JuMP
struct IntervalExt <: EAGO.ExtensionType end
import EAGO.lower_problem!
function lower_problem!(t::IntervalExt, x::EAGO.Optimizer)
    # retrieve bounds at current node
    n = x._current_node
    lower = n.lower_variable_bounds
    upper = n.upper_variable_bounds
    # define X for the node and compute the interval extension
    x_value = Interval.(lower, upper)
    F = sin(x_value[1])*x_value[2]^2 - cos(x_value[3])/x_value[4]
    x._lower_objective_value = F.lo
    x._lower_solution = IntervalArithmetic.mid.(x_value)
    x._lower_feasibility = true
    x._cut_add_flag = false
    return
end
import EAGO.upper_problem!
function EAGO.upper_problem!(t::IntervalExt, x::EAGO.Optimizer)
    # retrieve bounds at current node
    n = x._current_node
    lower = n.lower_variable_bounds
    upper = n.upper_variable_bounds
    # compute the midpoint value
    x_value = 0.5*(upper + lower)
    f_val = sin(x_value[1])*x_value[2]^2 - cos(x_value[3])/x_value[4]
    x._upper_objective_value = f_val
    x._upper_solution = x_value
    x._upper_feasibility = true
    return
end
import EAGO: preprocess!, postprocess!, cut_condition
function EAGO.preprocess!(t::IntervalExt, x::EAGO.Optimizer)
    x._preprocess_feasibility = true
    return
end
function EAGO.postprocess!(t::IntervalExt, x::EAGO.Optimizer)
    x._postprocess_feasibility = true
    return
end
EAGO.cut_condition(t::IntervalExt, x::EAGO.Optimizer) = false
m = JuMP.Model(EAGO.Optimizer)
JuMP.set_optimizer_attribute(m, "branch_variable", Bool[true; true; true; true])
JuMP.set_optimizer_attribute(m, "verbosity", 10)
JuMP.set_optimizer_attribute(m, "time_limit", 10.0)
JuMP.set_optimizer_attribute(m, "obbt_depth", 0)
JuMP.set_optimizer_attribute(m, "dbbt_depth", 0)
JuMP.set_optimizer_attribute(m, "cp_depth", 0)
# Adds variables and bounds
x_L = [-10, -1, -10, 2]
x_U = [10, 1, 10, 20]
@variable(m, x_L[i] <= x[i=1:4] <= x_U[i])
# Solves the problem
JuMP.optimize!(m)
This feature has been down for a while and the website is down as of EAGO v0.6. This will be remedied shortly.
Currently, when provided a problem containing m nonlinear variables, EAGO will use subgradients of size m in all nonlinear expressions. We already obtain sparsity information from the nonlinear structures when translating the JuMP.NLPEvaluator to the EAGO.Evaluator, so it would make sense to use MC{2,NS} objects when computing a two-variable nonlinear term and then use the grad_sparsity values to unpack the result into the appropriate components of the full subgradient.
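The unpacking step could look roughly like this; scatter_subgradient is a hypothetical name for illustration, not an EAGO function:

```julia
# Hypothetical sketch: a two-variable term is relaxed with a 2-dimensional
# subgradient; grad_sparsity maps its components back into the full
# m-dimensional subgradient of the enclosing expression.
function scatter_subgradient(m::Int, grad_sparsity::Vector{Int},
                             term_grad::NTuple{2,Float64})
    full = zeros(m)
    for (k, idx) in enumerate(grad_sparsity)
        full[idx] = term_grad[k]
    end
    return full
end

# e.g. a term in variables 2 and 5 of a 6-variable problem:
scatter_subgradient(6, [2, 5], (0.5, -1.0))  # -> [0.0, 0.5, 0.0, 0.0, -1.0, 0.0]
```

The benefit is that the expensive McCormick arithmetic runs on 2-dimensional subgradients instead of m-dimensional ones, with only the cheap scatter touching the full dimension.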
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix
on this issue.
I'll open a PR within a few hours, please be patient!
EAGO's handling of this class of problems is somewhat limited relative to existing MINLP offerings (BARON, ANTIGONE, etc.). We aren't currently exploiting specialized forms or reformulations therein. We still need to determine which approaches we want to adopt for these constraints.
Is "subgradient-based interval refinement" removed from EAGO? If so, is it possible that we can have it back? Thanks.
- add_variable of ZeroOne type
- relax! to handle integer values.
Currently, the SIP solver only handles a single semi-infinite constraint. Supporting multiple constraints will require the lower-level problem to maximize the single nonsmooth SIP constraint gSIP(x,p) = max{gSIP_1, gSIP_2, ...}; this could be reformulated as a 0-1 MINLP.
Requires either MINLP or nonsmooth upper bounding problem support in EAGO.
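For reference, the 0-1 reformulation of the pointwise maximum mentioned above can be sketched with a standard big-M construction (the constants M_i are assumed valid upper bounds on each g_i over the domain):

```latex
g_{\mathrm{SIP}}(x,p) = \max_{i=1,\dots,k} g_i(x,p)
\quad\Longleftrightarrow\quad
\begin{aligned}
& g \ge g_i(x,p), && i = 1,\dots,k,\\
& g \le g_i(x,p) + M_i\,(1 - y_i), && i = 1,\dots,k,\\
& \sum_{i=1}^{k} y_i = 1, \qquad y_i \in \{0,1\}.
\end{aligned}
```

The binary y_i selects which constraint attains the maximum, which is why this route requires MINLP support in the subproblem solvers.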
I am running into compatibility issues with DiffRules when I try to install EAGO in Julia 1.3, see below:
(v1.3) pkg> add EAGO
Resolving package versions...
ERROR: Unsatisfiable requirements detected for package EAGO [bb8be931]:
EAGO [bb8be931] log:
|- possible versions are: [0.2.0-0.2.1, 0.3.0] or uninstalled
|- restricted to versions * by an explicit requirement, leaving only versions [0.2.0-0.2.1, 0.3.0]
|- restricted by compatibility requirements with JuMP [4076af6c] to versions: 0.3.0 or uninstalled, leaving only versions: 0.3.0
|-- JuMP [4076af6c] log:
|--- possible versions are: [0.18.3-0.18.6, 0.19.0-0.19.2, 0.20.0-0.20.1] or uninstalled
|--- restricted to versions 0.20.1 by an explicit requirement, leaving only versions 0.20.1
|- restricted by compatibility requirements with DiffRules [b552c78f] to versions: 0.2.0-0.2.1 or uninstalled — no versions left
|-- DiffRules [b552c78f] log:
|--- possible versions are: [0.0.8-0.0.10, 0.1.0, 1.0.0] or uninstalled
|--- restricted to versions 1.0.0 by an explicit requirement, leaving only versions 1.0.0
I was wondering if you have any advice as how to move forward.
- Parameterizing EAGO.Optimizer with an ExtensionType would also simplify f(m.ext_type, m) calls.
- The relax_tag field can be inferred from the Optimizer parameter.
- relaxed_optimizer: can we infer this type from an extension?
- upper_optimizer: can we infer this type from an extension?
- BufferedNonlinearFunction & NonlinearExpr induce a few allocations, but fixing this requires a large re-write.
Currently, the high-level SIP algorithm requests an objective and constraints in the forms f(x,p) and g(x,p), where f and g are user-defined functions. This substantially limits the number of viable sub-problem optimizers. Also, more specialized routines may outperform the global solver for specialized problem types. Roadmap to fixing this below:
- Add an SIPModel(nx, nz, np) function which attaches an extension to the JuMP model and potentially an SIPOptimizer to be defined. This function should return the model, decision variables, and uncertain variables.
Longer term, it'll be interesting to look into InfiniteOpt.jl support. However, most of the use cases for nonconvex SIPs would require support for nonlinear terms (1-3 years out).
Objective value and bound are reversed