uscqserver / openquantumbase.jl
Abstract types and math operations for OpenQuantumTools.jl.
Home Page: https://uscqserver.github.io/OpenQuantumTools.jl/stable/
License: MIT License
Allow users to define a time-dependent Hamiltonian in terms of physical time by setting the keyword argument dimensionless_time to false.
Pull request has been approved. Now in the mandatory waiting period.
We want to have a universal constructor interface like ConstantHamiltonian(mat) for all types of mat.
It seems that when the repo was kept private, I did not stop the CI service. Now that the repo is public, Travis CI still does not work because I have a negative credit balance. This needs to be fixed so the CI can work properly.
I will add more tests for the display strings of various objects. The REPL string can be extracted by:
replstr(x, kv::Pair...) = sprint((io,x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), MIME("text/plain"), x), x)
The show string can be extracted by:
showstr(x, kv::Pair...) = sprint((io,x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), x), x)
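For concreteness, here is a self-contained check of these two helpers; the sample outputs below assume a recent Julia where the printed alias is Vector{Int64}:

```julia
# Display-string helpers, reproduced here so the example is self-contained.
replstr(x, kv::Pair...) = sprint((io, x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), MIME("text/plain"), x), x)
showstr(x, kv::Pair...) = sprint((io, x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), x), x)

# replstr uses MIME("text/plain"), giving the multi-line REPL rendering;
# showstr uses plain show, giving the compact inline form.
println(replstr([1, 2]))
println(showstr([1, 2]))  # [1, 2]
```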
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers. Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment TagBot fix on this issue. I'll open a PR within a few hours, please be patient!
This could help us stabilize internal APIs.
Travis CI does not offer GPU instances. We should figure out a way to separate out the GPU-specific tests for now.
Using a .EIGS member to diagonalize a dense Hamiltonian, instead of an interface function, is more costly than necessary in both time and allocations. A (very rough) test with a 4-qubit system and a few different schedules suggests up to a ~25% improvement in the performance of the AME:
[master 937a3236f1b9578bc223b21817dec2e7a8512ee2]
Warming up ...
7.523293 seconds (19.89 M allocations: 1.046 GiB, 4.44% gc time, 99.45% compilation time)
0.048794 seconds (134.67 k allocations: 22.224 MiB, 68.96% compilation time)
0.056558 seconds (136.75 k allocations: 22.732 MiB, 73.10% compilation time)
Running ...3.162
0.014409 seconds (69.40 k allocations: 16.847 MiB)
0.014739 seconds (77.84 k allocations: 18.921 MiB)
0.043694 seconds (79.92 k allocations: 19.429 MiB, 59.29% gc time)
Running ...316.230
0.398527 seconds (2.71 M allocations: 659.962 MiB, 26.77% gc time)
0.404000 seconds (2.79 M allocations: 679.788 MiB, 26.13% gc time)
0.435406 seconds (2.82 M allocations: 686.737 MiB, 26.39% gc time)
[commit 30e75db8bb439475cfa2757a695291f0ecd7f76a]
Warming up ...
8.250285 seconds (21.26 M allocations: 1.123 GiB, 4.19% gc time, 99.57% compilation time)
0.035314 seconds (82.01 k allocations: 12.020 MiB, 77.95% compilation time)
0.037924 seconds (82.71 k allocations: 12.263 MiB, 79.54% compilation time)
Running ...3.162
0.006695 seconds (22.20 k allocations: 7.703 MiB)
0.009089 seconds (25.18 k allocations: 8.717 MiB)
0.012318 seconds (25.89 k allocations: 8.960 MiB)
Running ...316.230
0.296447 seconds (864.41 k allocations: 301.489 MiB, 22.15% gc time)
0.299073 seconds (899.39 k allocations: 312.833 MiB, 13.81% gc time)
0.320938 seconds (908.75 k allocations: 316.072 MiB, 19.29% gc time)
This is simply with the following replacement in diffeq_liouvillian.jl:
function (Op::DiffEqLiouvillian{true,false})(du, u, p, t)
    s = p(t)
    # w, v = Op.H.EIGS(Op.H, s, Op.lvl)
    w, v = haml_eigs(Op.H, s, lvl = Op.lvl)
    # ....
end
Some additional optimizations also look possible by making DiffEqLiouvillian and other objects type-generic, e.g.
struct DiffEqLiouvillian{diagonalization,adiabatic_frame,Htype<:AbstractHamiltonian}
    "Hamiltonian"
    H::Htype
    # ...
end
Change check_positivity such that it only returns false if its argument is negative.
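A minimal sketch of the requested behavior, assuming the check is eigenvalue-based; the name check_positivity_sketch, the Hermitian wrapper, and the tolerance are illustrative assumptions, not the package's actual implementation:

```julia
using LinearAlgebra

# Return false only when the matrix has a genuinely negative eigenvalue,
# i.e. one below a small tolerance; tiny numerical negatives still pass.
function check_positivity_sketch(m::AbstractMatrix; tol = 1e-10)
    minimum(eigvals(Hermitian(m))) >= -tol
end
```

With this, a density matrix whose smallest eigenvalue is, say, -1e-14 due to floating-point noise is no longer flagged as non-positive.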
I find that occasionally, when using the AME trajectories solver of HOQST, ame_jump fails with an error such as:
ERROR: LoadError: InexactError: Float64(8.106524306056477e-5 + 8.561864539050956e-22im)
Stacktrace:
[1] Real
@ ./complex.jl:44 [inlined]
[2] convert
@ ./number.jl:7 [inlined]
[3] setindex!(A::Vector{Float64}, x::ComplexF64, i1::Int64)
@ Base ./array.jl:966
[4] ame_jump(D::OpenQuantumBase.DaviesGenerator, u::Vector{ComplexF64}, gap_idx::OpenQuantumBase.GapIndices, v::Matrix{ComplexF64}, s::Float64)
@ OpenQuantumBase ~/.julia/packages/OpenQuantumBase/O2ct0/src/opensys/trajectory_jump.jl:44
[5] (::OpenQuantumBase.var"#271#272"{Vector{ComplexF64}, OpenQuantumBase.GapIndices, Matrix{ComplexF64}, Float64})(x::OpenQuantumBase.DaviesGenerator)
@ OpenQuantumBase ./none:0
The line in question is line 44 (prob[idx] = g0 * (ϕ' * ϕ)) in the file src/opensys/trajectory_jump.jl. I suggest wrapping the RHS (as on line 30) in real or a simple real_if_close function (i.e., one which returns the real part if it is close enough to a real number within some tolerance). I made the change locally and it seems to have fixed things for me. I can make a PR, but it's quite an easy fix really :)
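A sketch of such a real_if_close helper; the relative-tolerance choice here is an assumption, and any reasonable cutoff would do:

```julia
# Return the real part when the imaginary component is negligible
# relative to a tolerance; otherwise return the value unchanged.
real_if_close(x::Complex; tol = 1e-8) =
    abs(imag(x)) <= tol * max(abs(real(x)), one(abs(real(x)))) ? real(x) : x
real_if_close(x::Real; tol = 1e-8) = x
```

Applied to the failing value above, real_if_close(8.106524306056477e-5 + 8.561864539050956e-22im) returns a plain Float64, so the assignment into the Vector{Float64} prob no longer throws an InexactError.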
Internal APIs do not need to be exported.
Eq. (23) in the ULE paper.
More user-friendly features
Add an optional Lamb shift to the coarse-grained ME. The form of the Lamb shift is given by Eq. (108) in the reference.
Use the package: IterativeSolvers.jl.
Convert Eq. (23) of Completely positive master equation for arbitrary driving and small level spacing into frequency form (probably using adiabatic approximation).
This is to close issue #16 in the main repo. To support sparse operators, two objects need to be rewritten.
To get CUDA to work in the DiffEq solvers of OpenQuantumTools with minimal changes, I had to add support for the CuArray type in OpenQuantumBase. In particular, the initial state u0 supplied alongside an A::AbstractHamiltonian is now allowed to be a CuArray.
OpenQuantumBase.jl/src/OpenQuantumBase.jl
Line 11 in e6778bc
OpenQuantumBase.jl/src/OpenQuantumBase.jl
Line 48 in e6778bc
This brings up several issues. The two most important are
(I) How should we integrate CUDA with OpenQuantumBase.jl?
(II) Is there a way to make CUDA an optional dependency?
My proposed solution to (I) is to add CuHamiltonian and CuAnnealing constructors which inherit from the abstract versions. When one is passed to a solver in OpenQuantumTools, we just use multiple dispatch on the "Cu" types to run GPU-accelerated solvers.
Pros:
(1) CuHamiltonian/CuAnnealing data can be optimized for the GPU (i.e. Float32 and whatever else is necessary)
(2) Solvers will have GPU support via multiple dispatch (no additional arguments or separate "GPU solvers")
(3) If partially solved in one GPU run, the final state uf is a CuArray, so it natively supports future runs as u0
Cons:
(1) Users have to define separate CuH/CuA types if they want to run on the GPU.
(2) CUDA becomes a native dependency.
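To make proposal (I) concrete, here is a CUDA-free dispatch sketch. The *Sketch types are placeholders invented for illustration; in the real package, CuHamiltonian would wrap CuArrays and subtype the existing AbstractHamiltonian:

```julia
abstract type AbstractHamiltonianSketch end

struct DenseHamiltonianSketch <: AbstractHamiltonianSketch
    m::Matrix{ComplexF64}
end

# Placeholder for a GPU-backed type; its field would really be a CuArray.
struct CuHamiltonianSketch <: AbstractHamiltonianSketch
    m::Matrix{ComplexF32}
end

# Solvers select the backend purely through multiple dispatch; no extra
# keyword arguments or separate "GPU solver" entry points are needed.
solver_backend(::AbstractHamiltonianSketch) = :cpu
solver_backend(::CuHamiltonianSketch) = :gpu
```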
Define the S function for the HybridOhmic bath.
Use the interface provided in SciMLOperators.jl.
Related to this issue.
When using the Hamiltonian interface, if the elements of the second argument do not have the same type, the constructor will get stuck in an infinite loop.
When a jump event occurs, the jump operators from each Davies generator should be summed together.
In a standard anneal, a user will use the standard_driver function
OpenQuantumBase.jl/src/matrix_util.jl
Lines 104 to 110 in c567d61
which returns an Array{Complex{Float64},2}. While we've shown that casting this as a CuArray, i.e. cu(standard_driver(n)), is sufficient for a speed-up, it is not optimal. Ideally, the GPU should only deal with Float32s, and perhaps even better, with real numbers only.
Furthermore, the DenseHamiltonian constructor performs "scalar operations" by indexing the m array (see
OpenQuantumBase.jl/src/hamiltonian/dense_hamiltonian.jl
Lines 31 to 48 in c567d61).
Questions/things to resolve:
1.) Does converting matrices to Array{Complex{Float32},2} before casting as a CuArray help GPU performance? If so, add this support.
2.) Is there any speed to be gained by converting complex numbers to two real numbers instead of the Complex type? Does CUDA handle that for us?
3.) Does CUDA.allowscalar(false) actually help us? If not, is there a way to remove scalar operations from the DenseHamiltonian constructor in the first place, so that scalar operations don't occur on the GPU?
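For question 1, a small CUDA-free sketch of the eltype-narrowing step. Note that CUDA.jl's cu already converts Float64 to Float32 by default; doing the conversion explicitly on the CPU just makes the intent visible and keeps it off the GPU. The name narrow32 is a hypothetical helper:

```julia
# Narrow a complex Float64 matrix to ComplexF32 before handing it to the
# GPU, e.g. CuArray(narrow32(standard_driver(n))) instead of cu(...).
narrow32(m::AbstractMatrix{<:Complex}) = Matrix{ComplexF32}(m)

H = ComplexF64[0 1; 1 0]   # stand-in for standard_driver(n)
println(eltype(narrow32(H)))  # ComplexF32
```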
Maybe we should use Evolution as a different dispatch for quantities defined using physical time.
Add a function bloch_to_state, which converts the single-qubit Bloch sphere angles to the state vector.
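A minimal sketch under the conventional parametrization |ψ⟩ = cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩; the signature (and the default ϕ = 0) is an assumption:

```julia
# θ is the polar angle, ϕ the azimuthal angle on the Bloch sphere.
bloch_to_state(θ::Real, ϕ::Real = 0.0) =
    [cos(θ / 2), exp(im * ϕ) * sin(θ / 2)]
```

For example, bloch_to_state(0) gives the |0⟩ state, and bloch_to_state(π/2) the equal superposition (|0⟩ + |1⟩)/√2.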
Optimize AME solver for constant Hamiltonian.
Add an object to interpolate time-dependent Hamiltonian on gridded points.
Related to this issue: USCqserver/HOQSTTutorials.jl#19
We will test if CUDA conditional usage is an issue when the code is finished.
OpenQuantumBase.jl/src/hamiltonian/dense_hamiltonian.jl
Lines 65 to 70 in ac578ee
Functions like update_cache! take p as an argument but never use it. It seems like a possibly deprecated argument that arose when writing wrappers for the DiffEq update functions; see, for example, the solve_schrodinger function of OpenQuantumTools.jl.