Comments (12)
Hey, what problem are you looking to benchmark/profile? I would be willing to help out, and am looking to profile v0.5 --> v0.6 changes anyways. I know there are a few regressions due to internally changing to broadcast and @., removing muladds, but OrdinaryDiffEq.jl should still be in very good shape (and all of the issues I've found have been traced to Base compiler optimizations suddenly missing... not much we can do there but wait...).
Instead of taking on the full dependency, you can use the minimal-dependency versions instead. If you make it so that the user has to pass the algorithm to solve, you can depend directly on DiffEqBase only, which is a really low-dep library (in fact, you already depend on it through ODE). If you want to set the algorithm yourself, for example to DP5() or Tsit5(), you will need to depend on OrdinaryDiffEq.jl.
Let me know what you need and I'd be willing to profile and PR.
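The algorithm-passing pattern described above can be sketched roughly as follows (the function name evolve is hypothetical, and this assumes a current DiffEqBase where ODEProblem and solve are defined):

```julia
# Hypothetical sketch of the minimal-dependency pattern: the library builds
# the problem but the *caller* supplies the algorithm, so the library itself
# only needs DiffEqBase, not the full OrdinaryDiffEq.jl solver package.
using DiffEqBase

function evolve(f, u0, tspan, alg; kwargs...)
    prob = ODEProblem(f, u0, tspan)
    solve(prob, alg; kwargs...)
end

# The caller loads a solver package and passes the algorithm in:
#   using OrdinaryDiffEq
#   sol = evolve(f!, u0, (0.0, 10.0), Tsit5())
```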
from quantumoptics.jl.
Hey Chris, thank you for offering your help! I saw that you already fixed a few things in the profiling directory, but I used this code only while I developed the corresponding functionality and most of it is terribly out of date. All the real benchmarking is currently done in https://github.com/qojulia/QuantumOptics.jl-benchmarks, and the results are displayed on our website (which is not completely up to date since we are doing some restructuring). Most functionality is pretty well covered; only the (stochastic) mcwf solver is missing.
The dependence on the ODE package is also a little bit out of date and was mostly used to test that our own ODE solver worked correctly.
It's great to hear that there are versions that don't have as many dependencies as the complete DifferentialEquations.jl package. Transitioning the non-stochastic methods should be pretty much straightforward since the actual ODE solver is only called at one single point in the code. The Monte Carlo solver will probably be a little bit more work but also shouldn't be too hard. My plan for the near future is to release version 0.4 in the next two weeks, and immediately afterwards I will start working on this. As soon as I have the first benchmark results I will share them here. Of course, if anyone else is motivated to work on this, please just go ahead - help is always welcome!
> the actual ode solver is only called at one single point in the code

Can you point me to that spot?
It's in src/timeevolution_base.jl line 20. (https://github.com/bastikr/QuantumOptics.jl/blob/master/src/timeevolution_base.jl#L20)
Using test_dopri.jl, I did:

using Base.Test
using BenchmarkTools
using QuantumOptics
using OrdinaryDiffEq  # for ODEProblem / solve / Tsit5
ode_dopri = QuantumOptics.ode_dopri
ode_event = ode_dopri.ode_event
ode = ode_dopri.ode
ζ = 0.5
ω₀ = 10.0
y₀ = Float64[0., sqrt(1-ζ^2)*ω₀]
A = 1
ϕ = 0
# analytic solution of the damped harmonic oscillator (used as reference)
function f(t::Float64)
    α = sqrt(1-ζ^2)*ω₀
    x = A*exp(-ζ*ω₀*t)*sin(α*t + ϕ)
    p = A*exp(-ζ*ω₀*t)*(-ζ*ω₀*sin(α*t + ϕ) + α*cos(α*t + ϕ))
    return [x, p]
end
# in-place derivative in the (t, y, dy) form expected by ode_dopri
function df(t::Float64, y::Vector{Float64}, dy::Vector{Float64})
    dy[1] = y[2]
    dy[2] = -2*ζ*ω₀*y[2] - ω₀^2*y[1]
    return nothing
end
T = [0., 10.]
# run once outside @benchmark so tout/yout exist for the sanity checks
tout, yout = ode(df, T, y₀; display_intermediatesteps=true)
@test length(tout) > 2
maxstep = maximum(tout[2:end] - tout[1:end-1])
@benchmark ode(df, T, y₀; display_intermediatesteps=true)
prob = ODEProblem(df, y₀, (T[1], T[2]))
@benchmark solve(prob, Tsit5(), dense=false)
and got for ode_dopri:
BenchmarkTools.Trial:
memory estimate: 191.59 KiB
allocs estimate: 8266
--------------
minimum time: 196.164 μs (0.00% GC)
median time: 210.089 μs (0.00% GC)
mean time: 236.384 μs (9.92% GC)
maximum time: 3.226 ms (92.46% GC)
--------------
samples: 10000
evals/sample: 1
and for Tsit5():
BenchmarkTools.Trial:
memory estimate: 78.37 KiB
allocs estimate: 3694
--------------
minimum time: 148.435 μs (0.00% GC)
median time: 182.102 μs (0.00% GC)
mean time: 203.711 μs (8.00% GC)
maximum time: 7.028 ms (93.66% GC)
--------------
samples: 10000
evals/sample: 1
then using @benchmark solve(prob,DP5(),dense=false), I get:
BenchmarkTools.Trial:
memory estimate: 74.58 KiB
allocs estimate: 3688
--------------
minimum time: 127.853 μs (0.00% GC)
median time: 141.000 μs (0.00% GC)
mean time: 154.853 μs (8.48% GC)
maximum time: 4.371 ms (94.67% GC)
--------------
samples: 10000
evals/sample: 1
so it looks like DP5() does well here.
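For anyone re-running this today: the code above uses the 2017-era df(t, y, dy) signature, so here is a rough equivalent sketched with the current in-place f(du, u, p, t) API (an assumption about modern versions, not the benchmark actually run in this thread):

```julia
# Same damped-oscillator benchmark, sketched with the modern in-place
# signature f(du, u, p, t); parameters are passed via p instead of globals.
using OrdinaryDiffEq, BenchmarkTools

function df!(du, u, p, t)
    ζ, ω₀ = p
    du[1] = u[2]
    du[2] = -2*ζ*ω₀*u[2] - ω₀^2*u[1]
end

p = (0.5, 10.0)                      # (ζ, ω₀)
u0 = [0.0, sqrt(1 - p[1]^2) * p[2]]
prob = ODEProblem(df!, u0, (0.0, 10.0), p)
@benchmark solve($prob, Tsit5(), dense=false)
```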
That's on v0.5.2. However, it looks like DiffEq has a very disgusting regression on v0.6:
@benchmark solve(prob,Tsit5(),dense=false)
BenchmarkTools.Trial:
memory estimate: 854.32 KiB
allocs estimate: 37975
--------------
minimum time: 3.283 ms (0.00% GC)
median time: 6.393 ms (0.00% GC)
mean time: 6.645 ms (2.45% GC)
maximum time: 57.946 ms (0.00% GC)
--------------
samples: 752
evals/sample: 1
I'm going to take a look at that right now.
Yes, the regression on Julia v0.6 is due to a problem in Base with broadcasting:
JuliaLang/julia#22255 (comment)
Avoiding broadcasting, Tsit5() gives on v0.6:
BenchmarkTools.Trial:
memory estimate: 87.43 KiB
allocs estimate: 3757
--------------
minimum time: 281.032 μs (0.00% GC)
median time: 526.934 μs (0.00% GC)
mean time: 475.180 μs (2.88% GC)
maximum time: 5.601 ms (87.13% GC)
--------------
samples: 10000
evals/sample: 1
while ode_dopri on the same computer on v0.6 gives:
BenchmarkTools.Trial:
memory estimate: 136.09 KiB
allocs estimate: 7742
--------------
minimum time: 411.302 μs (0.00% GC)
median time: 787.767 μs (0.00% GC)
mean time: 862.358 μs (2.11% GC)
maximum time: 28.656 ms (0.00% GC)
--------------
samples: 5736
evals/sample: 1
So yes, I would expect that, disregarding Base bugs which will be worked around, you'll get a small (<2x) speed boost by switching.
I started the transition to DifferentialEquations.jl in the branch https://github.com/bastikr/QuantumOptics.jl/tree/diffeq. The first benchmarks on more realistic examples already look very promising (green line: DifferentialEquations.jl vs red line: ode_dopri). The other examples yield similar results. However, they don't measure completely the same thing yet. At the moment I want to keep the same interface as before, which means I want to save output only at points in time specified in tspan. Also, the output should not be stored in the solution object but should only call the _fout function. Is this somehow possible using the callback functionality?
> At the moment I want to keep the same interface as before, which means I want to save output only at points in time specified in tspan.

For explicit saving, pass the values using saveat: sol = solve(prob,alg;saveat=ts). tspan is just the start and end.
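As a minimal illustration of that distinction (assuming the current DiffEq API; the exact saved set also depends on defaults like save_start):

```julia
# tspan only sets the integration interval; saveat picks which times are
# stored in the solution (interior points are obtained by interpolation).
using OrdinaryDiffEq

decay!(du, u, p, t) = (du[1] = -u[1])
prob = ODEProblem(decay!, [1.0], (0.0, 1.0))   # tspan = (0.0, 1.0)

ts = 0.0:0.25:1.0
sol = solve(prob, Tsit5(); saveat=ts)
# sol.t now contains only the requested save times
```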
> Is this somehow possible using the callback functionality?

Yes, just make the callback call _fout.
Is it somehow possible to call the function only at times that are given to saveat? The DiscreteCallback is called at every single ODE step, and I guess using ContinuousCallback would be less efficient.
> Is it somehow possible to call the function only at times that are given to saveat? The DiscreteCallback is called at every single ode step and I guess using ContinuousCallback would be less efficient.

Yes, a DiscreteCallback is the tool for the job. You can take over the entire saving procedure. If you use condition(t,u,integrator) = true to make it apply at every step and save_values = (false,false) to turn off the default saving, you take full control of the saving and call it directly by doing:
function affect!(integrator)
savevalues!(integrator)
end
With that, you're essentially doing exactly what OrdinaryDiffEq.jl is doing internally. In OrdinaryDiffEq.jl, this is implemented here:
https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L71
There are many, many complications there that probably don't apply to your use case, but the first part is how saveat is done:
https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/blob/master/src/integrators/integrator_utils.jl#L72
while !isempty(integrator.opts.saveat) && integrator.tdir*top(integrator.opts.saveat) <= integrator.tdir*integrator.t # Perform saveat
  integrator.saveiter += 1
  curt = pop!(integrator.opts.saveat)
  if curt != integrator.t # If <t, interpolate
    ode_addsteps!(integrator)
    Θ = (curt - integrator.tprev)/integrator.dt
    val = ode_interpolant(Θ,integrator,integrator.opts.save_idxs,Val{0}) # out of place, but no force copy later
    copyat_or_push!(integrator.sol.t,integrator.saveiter,curt)
    save_val = val
    copyat_or_push!(integrator.sol.u,integrator.saveiter,save_val,Val{false})
    if typeof(integrator.alg) <: OrdinaryDiffEqCompositeAlgorithm
      copyat_or_push!(integrator.sol.alg_choice,integrator.saveiter,integrator.cache.current)
    end
  else # ==t, just save
    copyat_or_push!(integrator.sol.t,integrator.saveiter,integrator.t)
    if integrator.opts.save_idxs == nothing
      copyat_or_push!(integrator.sol.u,integrator.saveiter,integrator.u)
    else
      copyat_or_push!(integrator.sol.u,integrator.saveiter,integrator.u[integrator.opts.save_idxs],Val{false})
    end
    if typeof(integrator.alg) <: Discrete || integrator.opts.dense
      integrator.saveiter_dense += 1
      copyat_or_push!(integrator.notsaveat_idxs,integrator.saveiter_dense,integrator.saveiter)
      if integrator.opts.dense
        if integrator.opts.save_idxs == nothing
          copyat_or_push!(integrator.sol.k,integrator.saveiter_dense,integrator.k)
        else
          copyat_or_push!(integrator.sol.k,integrator.saveiter_dense,[k[integrator.opts.save_idxs] for k in integrator.k],Val{false})
        end
      end
    end
    if typeof(integrator.alg) <: OrdinaryDiffEqCompositeAlgorithm
      copyat_or_push!(integrator.sol.alg_choice,integrator.saveiter,integrator.cache.current)
    end
  end
end
You can see that, like what's done internally in Sundials, saveat points are skipped over and back-interpolated to (if you instead want to force hitting the timepoints, say because they are discontinuities, use tstops=ts instead). Instead of directly pushing the interpolated value:
copyat_or_push!(integrator.sol.u,integrator.saveiter,save_val,Val{false})
if typeof(integrator.alg) <: OrdinaryDiffEqCompositeAlgorithm
  copyat_or_push!(integrator.sol.alg_choice,integrator.saveiter,integrator.cache.current)
end
a function call can be put in there to modify save_val before it's saved.
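Assembled into one runnable sketch (note this assumes current DiffEq versions, where the condition signature is condition(u, t, integrator) and the keyword is save_positions rather than the (t, u, integrator)/save_values forms quoted above; savevalues!(integrator, true) forces a save of the current step):

```julia
# Take over saving with a DiscreteCallback that fires on every accepted step
# and calls savevalues! itself, with the default saving turned off.
using OrdinaryDiffEq

condition(u, t, integrator) = true                  # apply after every step
affect!(integrator) = savevalues!(integrator, true) # force-save this step
cb = DiscreteCallback(condition, affect!; save_positions=(false, false))

decay!(du, u, p, t) = (du[1] = -u[1])
prob = ODEProblem(decay!, [1.0], (0.0, 1.0))
sol = solve(prob, Tsit5(); callback=cb, save_everystep=false)
# the callback, not the solver's default loop, now controls what gets saved
```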
What's the interface for _fout? I can just do a general implementation for the callback in DiffEqCallbacks.jl which takes in an _fout and have it match what you're looking for. I actually have an open issue for that:
We have a callback for this now in the callback library:
http://docs.juliadiffeq.org/latest/features/callback_library.html#SavingCallback-1
If you add that to the solve call, it will save the output of your save_func, which above you call _fout. So let's revive this!
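A hedged sketch of that SavingCallback usage (current DiffEqCallbacks.jl API; norm(u) here is just a stand-in for whatever _fout computes):

```julia
# SavingCallback collects save_func(u, t, integrator) -- the "_fout" role --
# into a separate SavedValues container instead of the solution object.
using OrdinaryDiffEq, DiffEqCallbacks, LinearAlgebra

decay!(du, u, p, t) = (du .= -u)
prob = ODEProblem(decay!, [1.0, 2.0], (0.0, 1.0))

saved = SavedValues(Float64, Float64)              # (time type, value type)
cb = SavingCallback((u, t, integrator) -> norm(u), # stand-in for _fout
                    saved; saveat=0.0:0.25:1.0)

solve(prob, Tsit5(); callback=cb, save_everystep=false)
# saved.t holds the save times, saved.saveval the norm(u) values
```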
Done with #191