
OpenQuantumBase.jl's People

Contributors

araujoms, github-actions[bot], hmunozb, neversakura


OpenQuantumBase.jl's Issues

add SuperOhmicBath object

  • Add the two-point correlation function
  • Add the spectral function
  • Add polaron frame correlation function
  • Add polaron frame spectral function
  • Add test suite for AME and Redfield
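The checklist above might start from a sketch like the following. This is illustrative only: the struct name comes from the issue title, but the field names and the exact spectral form are assumptions that must be matched to the existing Ohmic bath conventions in OpenQuantumBase.

```julia
# Illustrative sketch only: field names and the spectral form are
# assumptions, to be aligned with OpenQuantumBase's Ohmic bath interface.
struct SuperOhmicBath
    η::Float64   # coupling strength
    ωc::Float64  # cutoff frequency
    s::Float64   # super-Ohmic exponent, s > 1
    β::Float64   # inverse temperature
end

# Assumed super-Ohmic spectral density with an exponential cutoff:
# J(ω) = 2π η ω^s / ωc^(s-1) · exp(-ω/ωc) for ω ≥ 0
spectral_density(b::SuperOhmicBath, ω) =
    2π * b.η * ω^b.s / b.ωc^(b.s - 1) * exp(-ω / b.ωc)
```

The correlation functions and their polaron-frame counterparts would then be built on top of this spectral density.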

Travis-CI negative credit balance

It seems that while the repo was private, I did not stop the CI service. Now Travis-CI does not work even though the repo is public, because I have a negative credit balance. This needs to be fixed so that CI can run properly.

Add test suite for displays

I will add more tests for the display strings of various objects.

The REPL string can be extracted by:

    replstr(x, kv::Pair...) = sprint((io, x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), MIME("text/plain"), x), x)

The show string can be extracted by:

    showstr(x, kv::Pair...) = sprint((io, x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), x), x)
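A minimal sketch of how such display tests might look. The expected substrings below are for base Julia types and illustrate the pattern only; the actual tests would assert on the display strings of OpenQuantumBase objects.

```julia
# REPL display helper from the issue above.
replstr(x, kv::Pair...) = sprint((io, x) -> show(IOContext(io, :limit => true, :displaysize => (24, 80), kv...), MIME("text/plain"), x), x)

# Display tests can then assert on stable substrings of the output:
@assert occursin("3-element", replstr([1, 2, 3]))
@assert occursin("2×2", replstr([1 0; 0 1]))
```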

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

GPU test suite

Travis CI does not have GPU instances. We should figure out a way to separate out the GPU-specific tests for now.

Reduce allocations and run times

Using the .EIGS member to diagonalize a dense Hamiltonian, instead of an interface function, costs more than necessary in both time and allocations.

A (very rough) test of this with a 4-qubit system and a few different schedules suggests up to a ~25% improvement in the performance of the AME:

[master 937a3236f1b9578bc223b21817dec2e7a8512ee2]
Warming up ...
  7.523293 seconds (19.89 M allocations: 1.046 GiB, 4.44% gc time, 99.45% compilation time)
  0.048794 seconds (134.67 k allocations: 22.224 MiB, 68.96% compilation time)
  0.056558 seconds (136.75 k allocations: 22.732 MiB, 73.10% compilation time)
Running ...3.162
  0.014409 seconds (69.40 k allocations: 16.847 MiB)
  0.014739 seconds (77.84 k allocations: 18.921 MiB)
  0.043694 seconds (79.92 k allocations: 19.429 MiB, 59.29% gc time)
Running ...316.230
  0.398527 seconds (2.71 M allocations: 659.962 MiB, 26.77% gc time)
  0.404000 seconds (2.79 M allocations: 679.788 MiB, 26.13% gc time)
  0.435406 seconds (2.82 M allocations: 686.737 MiB, 26.39% gc time)

[commit 30e75db8bb439475cfa2757a695291f0ecd7f76a]
Warming up ...
  8.250285 seconds (21.26 M allocations: 1.123 GiB, 4.19% gc time, 99.57% compilation time)
  0.035314 seconds (82.01 k allocations: 12.020 MiB, 77.95% compilation time)
  0.037924 seconds (82.71 k allocations: 12.263 MiB, 79.54% compilation time)
Running ...3.162
  0.006695 seconds (22.20 k allocations: 7.703 MiB)
  0.009089 seconds (25.18 k allocations: 8.717 MiB)
  0.012318 seconds (25.89 k allocations: 8.960 MiB)
Running ...316.230
  0.296447 seconds (864.41 k allocations: 301.489 MiB, 22.15% gc time)
  0.299073 seconds (899.39 k allocations: 312.833 MiB, 13.81% gc time)
  0.320938 seconds (908.75 k allocations: 316.072 MiB, 19.29% gc time)

This comes simply from the following replacement in diffeq_liouvillian.jl:

function (Op::DiffEqLiouvillian{true,false})(du, u, p, t)
    s = p(t)
    #w, v = Op.H.EIGS(Op.H, s, Op.lvl)
    w, v = haml_eigs(Op.H, s, lvl=Op.lvl)
# ....
end

Some additional optimizations also look possible by making DiffEqLiouvillian and other objects type-generic, e.g.

struct DiffEqLiouvillian{diagonalization,adiabatic_frame, Htype <: AbstractHamiltonian}
    "Hamiltonian"
    H::Htype
    # ...
end
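A minimal, self-contained illustration of why the parametric Htype field should help (the types below are stand-ins, not the actual OpenQuantumBase types): with an abstractly-typed field the compiler cannot infer accesses to Op.H concretely, while the parametric version specializes per instance.

```julia
abstract type AbstractH end
struct DenseH <: AbstractH end

struct AbstractlyTyped      # current style: field type is abstract
    H::AbstractH
end

struct Parametric{Htype<:AbstractH}  # proposed style: concrete per instance
    H::Htype
end

# Only the parametric struct carries a concrete field type:
@assert !isconcretetype(fieldtype(AbstractlyTyped, :H))
@assert isconcretetype(fieldtype(Parametric{DenseH}, :H))
```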

ame_jump with InexactError due to rounding

I find that occasionally, when using the AME trajectories solver of HOQST, ame_jump fails with an error such as:

ERROR: LoadError: InexactError: Float64(8.106524306056477e-5 + 8.561864539050956e-22im)
Stacktrace:
[1] Real
@ ./complex.jl:44 [inlined]
[2] convert
@ ./number.jl:7 [inlined]
[3] setindex!(A::Vector{Float64}, x::ComplexF64, i1::Int64)
@ Base ./array.jl:966
[4] ame_jump(D::OpenQuantumBase.DaviesGenerator, u::Vector{ComplexF64}, gap_idx::OpenQuantumBase.GapIndices, v::Matrix{ComplexF64}, s::Float64)
@ OpenQuantumBase ~/.julia/packages/OpenQuantumBase/O2ct0/src/opensys/trajectory_jump.jl:44
[5] (::OpenQuantumBase.var"#271#272"{Vector{ComplexF64}, OpenQuantumBase.GapIndices, Matrix{ComplexF64}, Float64})(x::OpenQuantumBase.DaviesGenerator)
@ OpenQuantumBase ./none:0

The line in question is line 44 (prob[idx] = g0 * (ϕ' * ϕ)) in the file src/opensys/trajectory_jump.jl. I suggest wrapping the RHS in real (as on line 30) or in a simple real_if_close function, i.e. one that returns the real part if the value is close enough to a real number to within some tolerance. I made the change locally and it seems to have fixed things for me. I can make a PR, but it's quite an easy fix really :)
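A sketch of such a helper (the name real_if_close comes from the suggestion above; the tolerance and its scaling are assumptions):

```julia
# Hypothetical helper: return the real part when the imaginary part is
# negligible relative to the real part, otherwise leave the value alone.
real_if_close(x::Real; tol = 1e-10) = x
function real_if_close(x::Complex; tol = 1e-10)
    abs(imag(x)) <= tol * max(abs(real(x)), one(real(x))) ? real(x) : x
end
```

Applied to the failing value from the stack trace, this yields a Float64 that can be stored in prob without an InexactError.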

AME with Sparse Operators

This is to close issue #16 in the main repo. To support sparse operators, two objects

  • DaviesGenerator
  • AMEDiffEqOperator

need to be rewritten.

Integrating CUDA properly in OpenQuantumBase

To get CUDA to work in the DiffEq solvers of OpenQuantumTools with minimal changes, I had to add support for the CuArray type in OpenQuantumBase. In particular, the initial state u0 passed to an annealing constructor (e.g. Annealing(H, u0)) is now allowed to be a CuArray.

import CUDA.CuArray

abstract type AbstractAnnealing{hType <: AbstractHamiltonian, uType <: Union{Vector, Matrix, CuArray}} end

This brings up several issues. The two most important are
(I) How should we integrate CUDA with OpenQuantumBase.jl?
(II) Is there a way to make CUDA an optional dependency?

My proposed solution to (I) is to define CuHamiltonian and CuAnnealing types that subtype the abstract versions. When these are passed to a solver in OpenQuantumTools, we just use multiple dispatch on the "Cu" types to select GPU-accelerated solvers.

Pros:
(1) CuHamiltonian/CuAnnealing data can be optimized for the GPU (i.e. Float32 and whatever else is necessary).
(2) Solvers get GPU support via multiple dispatch (no additional arguments or "separate GPU solvers").
(3) If a problem is partially solved in one GPU run, the final state uf is a CuArray, so it natively supports future runs as u0.

Cons:
(1) Users have to construct separate CuH/CuA types if they want to run on the GPU.
(2) CUDA becomes a hard dependency.
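A minimal dispatch sketch for proposal (I). The types here are stand-ins (the real ones would subtype OpenQuantumBase's abstract hierarchy and carry CuArray data); only the dispatch pattern is the point.

```julia
abstract type AbstractHam end
struct DenseHam <: AbstractHam end
struct CuHam    <: AbstractHam end   # hypothetical GPU-backed type

# The generic method handles all CPU types; one extra method per "Cu"
# type routes to a GPU-accelerated implementation.
solver_path(::AbstractHam) = "cpu path"
solver_path(::CuHam)       = "gpu path"
```

Because dispatch happens on the type, no extra solver arguments or separate GPU entry points are needed, which is exactly pro (2) above.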

`Hamiltonian` interface promotion error

When using the `Hamiltonian` interface, if the elements of the second argument do not all have the same type, the constructor will get stuck in an infinite loop.
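A hypothetical fix sketch (the function name is illustrative, not from the codebase): promote all matrices to a common element type up front, so mixed eltypes such as Int and Float64 cannot send the constructor into an endless round of conversion and re-dispatch.

```julia
# Hypothetical promotion step for the Hamiltonian constructor: unify
# the element types of all basis matrices before any eltype-dependent
# dispatch takes place.
function promote_mats(mats)
    T = promote_type(map(eltype, mats)...)
    return [convert(Matrix{T}, m) for m in mats]
end
```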

Optimizing Hamiltonian constructor for GPU acceleration

In a standard anneal, a user will use the standard_driver function:

function standard_driver(num_qubit; sp = false)
    res = ""
    for idx = 1:num_qubit
        res = res * "I"^(idx - 1) * "X" * "I"^(num_qubit - idx) * "+"
    end
    q_translate(res[1:end-1], sp = sp)
end

This generates a matrix of type Array{Complex{Float64},2}. While we've shown that casting this as a CuArray, i.e. cu(standard_driver(n)), is sufficient for a speed-up, it is not optimal. Ideally, the GPU should only deal with Float32s, and perhaps even better, with real numbers only.
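The conversion step itself does not require CUDA to illustrate; it would happen just before the cu(...) cast:

```julia
# Down-convert the element type before moving data to the GPU.
H64 = ComplexF64[0 1; 1 0]   # eltype produced by standard_driver
H32 = ComplexF32.(H64)       # halve the per-element storage
@assert eltype(H32) == ComplexF32
@assert sizeof(H32) == sizeof(H64) ÷ 2
```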

Furthermore, the DenseHamiltonian constructor performs "scalar operations" by indexing the m array; see:

function DenseHamiltonian(funcs, mats; unit = :h, EIGS = EIGEN_DEFAULT)
    if any((x) -> size(x) != size(mats[1]), mats)
        throw(ArgumentError("Matrices in the list do not have the same size."))
    end
    if is_complex(funcs, mats)
        mats = complex.(mats)
    end
    hsize = size(mats[1])
    # use static array for size smaller than 100
    if hsize[1] <= 10
        mats = [SMatrix{hsize[1],hsize[2]}(unit_scale(unit) * m) for m in mats]
    else
        mats = unit_scale(unit) * mats
    end
    cache = similar(mats[1])
    EIGS = EIGS(cache)
    DenseHamiltonian{eltype(mats[1])}(funcs, mats, cache, hsize, EIGS)
end

Scalar indexing can be disallowed with CUDA.allowscalar(false) (or something similar), so that such operations raise an error instead of silently running on the GPU.

Questions/ things to resolve:
1.) Does converting matrices to Array{Complex{Float32},2} before casting as a CuArray help GPU performance? If so, add this support.
2.) Is there any speed to be gained by storing complex numbers as two real numbers instead of a Complex type? Does CUDA handle that for us?
3.) Does CUDA.allowscalar(false) actually help us? If not, is there a way to remove scalar operations from the DenseHamiltonian constructor in the first place, so that scalar operations don't occur on the GPU?

`p` is a non-essential argument

function update_cache!(cache, H::DenseHamiltonian, p, s::Real)
    fill!(cache, 0.0)
    for i = 1:length(H.m)
        @inbounds axpy!(-1.0im * H.f[i](s), H.m[i], cache)
    end
end

Functions like update_cache! take p as an argument but never use it.

It seems like a vestigial argument that arose when writing wrappers for DiffEq update functions, for example in the solve_schrodinger function of OpenQuantumTools.jl.
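A simplified, self-contained sketch of the pattern (H replaced by a bare function f for illustration): if the p slot must be kept for DiffEq's f!(du, u, p, t) calling convention, it can at least be left unnamed to document that it is unused.

```julia
# The third positional slot mirrors DiffEq's f!(du, u, p, t) convention;
# naming it `_` signals that it is intentionally ignored.
function update_cache!(cache, f, _, s::Real)
    fill!(cache, zero(eltype(cache)))
    cache .+= -1.0im * f(s)
    return cache
end
```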
