ITensors.jl Issues

Mistake in Spin 1/2 operator Sy

elseif opname == "iSʸ" || opname == "iSy"
Op[Up, DnP] = -0.5
Op[Dn, UpP] = 0.5
elseif opname == "Sʸ" || opname == "Sy"
Op = complex(Op)
Op[Up, DnP] = 0.5*im
Op[Dn, UpP] = -0.5*im

There is a non-canonical definition of the Pauli matrix \sigma^y here; it has a sign error with respect to the canonical definition [1,2]:

S^y_{\rm impl} = \frac{1}{2}\begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} = -\frac{1}{2}\,\sigma^y, \qquad \text{whereas canonically} \qquad \sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}

Obviously, this is only noticeable in operators which are odd in \sigma^y, so this might be a mild bug. However, I wish to argue that we should fix the sign in order not to throw the commutation relations

[\sigma^a, \sigma^b] = 2i\,\epsilon_{abc}\,\sigma^c

overboard, as they indeed define the Pauli matrices in a basis-independent way. NB: the anti-commutation relations are still valid under \sigma^y \to -\sigma^y, since they are even in \sigma^y.
Currently, the commutation relations do not hold.
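
A quick check with plain 2×2 matrices (Sx and Sz in the standard convention, Sy as currently implemented) illustrates this:

Sx = [0 0.5; 0.5 0]
Sy = [0 0.5im; -0.5im 0]   # current ITensors definition
Sz = [0.5 0; 0 -0.5]

Sx*Sy - Sy*Sx ≈ im*Sz      # false: the commutator evaluates to -im*Sz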

Moreover, \sigma^+ and \sigma^- would have to swap roles, since

\sigma^\pm = \frac{1}{2}\left(\sigma^x \pm i\,\sigma^y\right):

if opname == "S⁺" || opname == "Splus" || opname == "S+"
Op[Dn, UpP] = 1.
elseif opname == "S⁻" || opname == "Sminus" || opname == "S-"
Op[Up, DnP] = 1.

Cheers,
Jan

[1] W. Pauli, "Zur Quantenmechanik des magnetischen Elektrons", Zeitschrift für Physik 43, 601 (1927). DOI: 10.1007/BF01397326
[2] "Pauli matrices", Wikipedia (English), visited 05/12/2019 at 17:00 UTC+1

Inaccurate svd results sometimes

In the following code (as minimal as I could make it, because it depends on the tensor being SVD'd) I find that sometimes, like maybe every third or fourth time, the result of SVD'ing the tensor R gives low accuracy, meaning the relative error (truncation error) is in practice around 1E-8.

This problem goes away when a cutoff is specified, like cutoff=1E-12, which makes me suspect the bug is due to an interplay of how the truncate! function works and the default cutoff of exactly 0.0.

The important part is the last few lines below; the code at the top is necessary to prepare a tensor which has some nontrivial structure. When I input a totally random tensor the svd gives very accurate results every time. This makes me suspect the bug happens because R is approximately low-rank (which it is): it has one nearly-zero singular value.

using ITensors

let
  D = 8
  d = 2
  s1 = Index(D,"s1")
  s2 = Index(D,"s2")
  t1 = Index(d,"t1")
  t2 = Index(d,"t2")
  l1 = Index(d,"l1")

  MM = randomITensor(t1,s1,t2,s2)

  Q,R = qr(MM,(s1,s2))
  ii = commonindex(Q,R)
  u1 = Index(d,"u1")
  u2 = Index(d,"u2")
  split = ITensor(ii,u1,u2)
  for i=1:d,j=1:d
    split[ii(i+d*(j-1)),u1(i),u2(j)] = 1
  end
  R *= split

  #R1,S,R2 = svd(R,(u1,t1),cutoff=1E-12) #works well
  R1,S,R2 = svd(R,(u1,t1)) #sometimes leads to large differences
  R1 *= S
  @show norm(R)
  @show norm(R1*R2-R)
  @show norm(R1*R2-R)/norm(R) #should be < 1E-13 say
end

position! does not preserve link tag information

The latest implementation of position! (mps/mpo.jl) does not propagate index tag information, such as which number bond the index is for. This is not a hard bug, but would be a good improvement to make, as the C++ version does a better job preserving tags in position!

`randomITensor` can throw OOM error

julia> using ITensors

julia> randomITensor(2)
julia-1.1(94967,0x7fffb6654380) malloc: *** mach_vm_map(size=811454977087225856) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ERROR: OutOfMemoryError()
Stacktrace:
 [1] Type at ./boot.jl:402 [inlined]
 [2] Type at ./boot.jl:411 [inlined]
 [3] fill at ./array.jl:403 [inlined]
 [4] fill at ./array.jl:401 [inlined]
 [5] Type at /Users/nick/juliacon/ITensors.jl/src/storage/dense.jl:9 [inlined]
 [6] Type at /Users/nick/juliacon/ITensors.jl/src/itensor.jl:31 [inlined]
 [7] randomITensor at /Users/nick/juliacon/ITensors.jl/src/itensor.jl:187 [inlined]
 [8] randomITensor(::Int64) at /Users/nick/juliacon/ITensors.jl/src/itensor.jl:192
 [9] top-level scope at none:0

julia> VERSION
v"1.1.1"

Remove InitState type

Let's remove the InitState type in favor of just a Vector{String} or Vector{Int}, since these are easy to make and work with in Julia (versus C++, where, for example, printing a vector of strings isn't defined for you by default, so you can't view the initial state you're making).
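
For example, the replacement is just (a sketch of the intended usage):

state = ["Up", "Dn", "Up", "Dn"]
println(state)   # prints fine out of the box, unlike a vector of strings in C++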

Consider refactoring into a few smaller packages

So ITensors.jl is turning into quite a large dependency and will only get bigger. Certain users may only be interested in the tensor parts whereas others will be interested in the MPS parts. Perhaps the ITensor type and all the low-level stuff can live in one package, and the MPS / DMRG algorithms that live on top can be in a separate package that depends on the package that exports ITensor.

Just a thought, might cause some developer inconveniences while things are moving fast and breaking, but it seems like it might be a good long term thing.

One immediate benefit is that it would really cut down on CI test times...

Davidson for MPO with complex entries: inner product lambda has small complex part

I'm doing DMRG on a 3-state Potts model. (Code and error message below). The Hamiltonian (hence the effective nsite Hamiltonian) has complex entries, so in davidson() the inner product

  lambda::Float64 = dot(V[1],AV[1])

(iterativesolvers.jl:90) has a small complex part. Two issues here, one straightforward to fix and one not:

  1. We're trying to assign a Complex{Float64} to a Float64. This is easily fixable: write a function that drops the imaginary part if it is below some threshold (say 1e-10), and errors if it is above that threshold (see the sketch after this list).
  2. If I do that, I see that the thing runs fine for a little while, but then the imaginary part grows beyond the threshold. (I think this is over the course of one call to davidson(), i.e. it's an instability towards imaginary parts in the Davidson algorithm as opposed to a matter of increasing bond dimension hence MPS matrix size, but I actually haven't checked that.)
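
A minimal sketch of the helper from point 1 (the name realpart_checked is hypothetical):

function realpart_checked(z::Number; threshold=1e-10)
  abs(imag(z)) > threshold &&
    error("imaginary part $(imag(z)) exceeds threshold $threshold")
  return real(z)
end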

What's the right way to deal with this?

Code:


################################################################
# Potts siteset

function pottsSites(N :: Int; q :: Int = 3)
  return [Index(q, "Site,Potts,n=$n") for n = 1:N]
end

const PottsSite = makeTagType("Potts")

# 1-index my Potts states
# so diagonal elements of Z are
#    e^{2πi/q}
#    e^{2πi*2/q}
#    e^{2πi*3/q}
#    ...
#    e^{2πi*(q-1)/q}
#    e^{2πi*q/q} = 1
# this seems like the least bad thing
function state(::PottsSite,
               st::AbstractString)
  return parse(Int64, st)
end

function op(::PottsSite,
            s :: Index,
            opname :: AbstractString)::ITensor
  sP = prime(s)
  q = dim(s)

  Op = ITensor(Complex{Float64},dag(s), s')

  if opname == "Z"
    for j in 1:q
      Op[j,j] = exp(2*π*im*j/q)
    end
  elseif opname == "ZH"
    for j in 1:q
      Op[j,j] = exp(-2*π*im*j/q)
    end
  elseif opname == "X"
    for j in 1:q
      Op[j % q + 1,j] = 1
    end
  elseif opname == "XH"
    for j in 1:q
      Op[j,j % q + 1] = 1
    end
  elseif opname == "X+XH"
    for j in 1:q
      Op[j,j % q + 1] = 1
      Op[j % q + 1,j] = 1
    end
  else
    throw(ArgumentError("Operator name '$opname' not recognized for PottsSite"))
  end
  return Op
end

################################################################
# DMRG


#-----------
# setup
#-----------

θ = π/8

N = 100
sites = pottsSites(N)
psi0 = randomMPS(sites)

ampo = AutoMPO()
for j = 1:N-1
    add!(ampo, -sin(θ), "Z", j,"ZH",j+1)
    add!(ampo, -sin(θ), "ZH",j,"Z", j+1)
end
   
for j = 1:N
    add!(ampo, -cos(θ), "X+XH", j)
end

H = toMPO(ampo, sites);

#-----------
# DMRG
#-----------

sweeps = Sweeps(15)
maxdim!(sweeps, 10,20,100,100,200)
cutoff!(sweeps, 1E-10)

#energy, psi = dmrg(H,psi0,sweeps,observer=observer)
energy, psi = dmrg(H,psi0,sweeps)

Error:

InexactError: Float64(3.7091034021132 + 5.051406153050126e-18im)

Stacktrace:
 [1] Type at ./complex.jl:37 [inlined]
 [2] convert(::Type{Float64}, ::Complex{Float64}) at ./number.jl:7
 [3] #davidson#123(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::ProjMPO, ::ITensor{3}) at /home/christopher/work/src/ITensors.jl/src/iterativesolvers.jl:90
 [4] davidson(::ProjMPO, ::ITensor{3}) at /home/christopher/work/src/ITensors.jl/src/iterativesolvers.jl:59
 [5] macro expansion at /home/christopher/work/src/ITensors.jl/src/mps/dmrg.jl:34 [inlined]
 [6] macro expansion at ./util.jl:213 [inlined]
 [7] #dmrg#159(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::MPO, ::MPS, ::Sweeps) at /home/christopher/work/src/ITensors.jl/src/mps/dmrg.jl:21
 [8] dmrg(::MPO, ::MPS, ::Sweeps) at /home/christopher/work/src/ITensors.jl/src/mps/dmrg.jl:9
 [9] top-level scope at In[9]:5

Operator tracker for DMRG

In the C++ code, you can include operators you want DMRG to measure as it sweeps back and forth, to see how the entropy/magnetization/whatever are doing. It would be nice to have this feature in the Julia code too.
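
A rough sketch of what such an interface could look like (all names here are hypothetical, loosely mirroring the C++ Observer):

abstract type AbstractObserver end

struct EnergyTracker <: AbstractObserver
  energies::Vector{Float64}
end
EnergyTracker() = EnergyTracker(Float64[])

# dmrg would call a hook like this after each sweep:
measure!(obs::EnergyTracker, energy::Real) = push!(obs.energies, energy)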

Use Symbols for tags

Is there any reason to use Strings for tags instead of Symbols?

Index(2, :m) 

feels much more julian to me than

Index(2, "m") 

Consider renaming In and Out

Both IJulia.jl and ITensors.jl export the names In and Out. This doesn't necessarily need to be fixed, but it means that anyone working in a Jupyter notebook needs to write ITensors.In and ITensors.Out instead of In or Out.

The easiest solution would be to rename them to in and out, which would make some sense since they aren't types but fancy integers and usually only types have upper cased names in Julia.

Another option would be contravariant and covariant, but that's pretty verbose.

sum(::MPS/MPO,::MPS/MPO)

I noticed here:

orthogonalize!(A, 1; kwargs...)

that sum(::MPS/MPO,::MPS/MPO) is modifying the inputs. I think we should copy the inputs (or call an out-of-place orthogonalize) to make sure the inputs don't get modified, since that would be surprising behavior.
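
A sketch of the proposed fix, assuming the current in-place logic is factored into a hypothetical internal _sum!:

# copy the inputs so the caller's MPS/MPO are left untouched
Base.sum(A::MPS, B::MPS; kwargs...) = _sum!(copy(A), copy(B); kwargs...)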

factorization with `eigen` does not respect `tags` keyword

Hi,
It seems that the issue of bond tags not being conserved by replaceBond! is not fully fixed by #81, because the factorization functions using eigen don't respect the tags keyword. My tests were unfortunately such that only SVD was used.

I think a small change in storage_eigen fixes that (see orialb@9151552) .
I wanted to ask if this is how you meant it to be handled, because I saw that in the case of svd factorization the tags replacement is done in the _factorize... function and not in the storage_svd function.

combiner

We need to implement combiner from the C++ code, documented here. It would be nice to keep the (combiner tensor, combined index) return layout as well.
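
For reference, a sketch of the intended usage, assuming the C++ (combiner tensor, combined index) return layout carries over:

using ITensors

i = Index(2, "i")
j = Index(3, "j")
T = randomITensor(i, j)

# hypothetical call mirroring the C++ interface
C, c = combiner(i, j; tags="c")
Tc = T * C   # i and j are merged into the single combined index c, with dim(c) == 6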

Finish coverage testing for storage/contract.jl

The only remaining non-covered part of contract.jl is related to not having tests where one requests the indices of the result (C) in a different order from the "natural" matrix-matrix one. (Either that, or the coverage site is buggy, which it seems to be for other files!)

using KrylovKit.exponentiate with ITensors

Hi,

As a continuation of the discussion in #98 regarding using KrylovKit, I just wanted to update that I implemented TDVP using ITensors + KrylovKit.exponentiate (here) and it seems to work nicely.

It works almost smoothly out of the box; the only thing missing is an implementation of similar(A::ITensor, T) when T is a different type than the storage type of A (to create a similar ComplexF64 ITensor from a Float64 ITensor). In the meantime I just make sure the MPS is complex before starting time evolution.

I did some benchmarks vs. the recently added applyExp function in the C++ version of ITensor, and as far as I can tell exponentiate is actually significantly faster.
Using the Julia code below and the C++ code given here, I get:

  • ITensors.jl + KrylovKit - 22.2ms
  • ITensor C++ - 32ms

Both ITensor and Julia (v1.2) were compiled with MKL, and I used 1 MKL thread.
I wouldn't expect to see such a big difference between the implementations, so I hope I benchmarked it correctly (otherwise maybe there is some problem in applyExp).

(Note that I also compared my actual TDVP code against the code from the ITensor/tdvp branch, and there too I see a similar performance advantage for Julia.)

P.S. Just wanted to add that in general it was super easy to implement TDVP with ITensors.jl; it took me something like 20 mins to write. So, great job with ITensors.jl, and also great job by @Jutho with KrylovKit!

using ITensors, BenchmarkTools, KrylovKit

struct LocalOp
    A::ITensor
    B::ITensor
end

(L::LocalOp)(t::ITensor) = noprime(L.A*t*L.B)

m = 100
i, j = Index(m, "i"), Index(m, "j")
v = randomITensor(i, j)
A = randomITensor(i', i)
B = randomITensor(j, j')
M = LocalOp(A/norm(A), B/norm(B))

@btime exponentiate($M,0.1,$v,tol=1e-12)

Question about how to mute tensor order warning

Hi,
Is there a way to mute the tensor order warning outside the ITensors module? I am working with some tensors of high order (larger than the default warnTensorOrder = 14) and want to stop the warnings.

Thanks,
Yiqing

Printing and showing ITensor in non-transposed format

In view of closed issue #152, I propose a change in the printing and showing mechanism of an ITensor.
The issue is briefly summarised with the following code example:

julia> using ITensors

julia> sites = ITensors.siteinds("S=1/2", 1)
1-element Array{Index,1}:
 (2|id=90|S=1/2,Site,n=1)

julia> Sy = op(sites[1], "Sy");

julia> @show Sy
Sy = ITensor ord=2 (2|id=90|S=1/2,Site,n=1) (2|id=90|S=1/2,Site,n=1)'
Dense{Complex{Float64},Array{Complex{Float64},1}}
2×2 Tensor{Complex{Float64},2,Dense{Complex{Float64},Array{Complex{Float64},1}},IndexSet{2}}:
  0.0+0.0im  0.0+0.5im
 -0.0-0.5im  0.0+0.0im
ITensor ord=2 (2|id=90|S=1/2,Site,n=1) (2|id=90|S=1/2,Site,n=1)'
Dense{Complex{Float64},Array{Complex{Float64},1}}

julia> matrix(Sy)
2×2 Array{Complex{Float64},2}:
  0.0+0.0im  0.0+0.5im
 -0.0-0.5im  0.0+0.0im

@show on an ITensor and matrix(::ITensor) print and return, respectively, the transpose of the conventional matrix, as the index order begins with the unprimed (ket) index and then the primed (bra) index, rather than the other way around. Indeed, thinking of Sy as a matrix mapping (a vector to its left and a dual vector to its right) to the complex numbers, one would expect the indices to be permuted.
This has proven to be confusing in at least one case 🙋‍♂️.

What are your thoughts?

Cheers,
Jan

Return truncation error and singular values from replaceBond and factorize

Currently replaceBond, factorize, svd and eigen don't return the truncation error which we would like to track in many cases. It is also necessary in some cases to have the actual singular values from replaceBond and factorize.
It would be simple to just propagate this through all these functions, but might make the code a little more cumbersome in cases when one doesn't need this information (although not much more).
Another option is to have a factorization object returned from factorize, svd and eigen functions which will contain all the information.
A third option is to return this information from factorize, svd and eigen only when it is asked for by the caller with a keyword argument.
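
For concreteness, the returned object in the second option could be as simple as this sketch (the name Spectrum and its fields are hypothetical):

struct Spectrum
  singular_values::Vector{Float64}
  truncation_error::Float64
end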

Any thoughts?

Slightly better `makeTagType` syntax

Instead of using Val{...}, we can make our own simple TagType{...} struct:

struct TagType{N} end

Then instead of makeTagType we can just define a constructor for this type:

TagType(s) = TagType{Tag(s)}
# We can alternatively make it return an instance of the type,
# I am not sure which is better
TagType(s) = TagType{Tag(s)}()

This is just to avoid having to use Val, which might appear a bit mysterious.

Some improvements to priming/tagging

Various people have asked for functions like primeExcept(A,"i") (at least in the C++ version). I was hesitant to add it in order to avoid a proliferation of priming/tagging functions. I think a nice solution could be syntax like the following:

i = Index(2,"i")
j = Index(2,"j")
k = Index(2,"k")

A = randomITensor(i,j,k)

Ap = prime(A,not("i"))

This is useful in cases where you want to contract an ITensor over only one specified Index, so is quite a common operation.

not could just create a simple type that wraps the input (so creates an object Not{TagSet} that just stores the TagSet "i"), and then the function indexpositions could dispatch on Not{T} objects. One could of course make a not(i) or not(i,j) to avoid priming an index or set of indices.
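
One possible shape for that wrapper (a minimal sketch; the indexpositions dispatch is omitted):

using ITensors

struct Not{T}
  pattern::T
end

not(ts::AbstractString) = Not(TagSet(ts))
not(is::Index...) = Not(is)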

This could of course all work in the C++ version too. This syntax could even be extended to more complicated logic, like prime(A,or("i","j")). Just some food for thought.

`trg` test error on master

Hey! I just cloned the repo, ran julia --project=@. -e 'using Pkg; Pkg.instantiate(); Pkg.test()', and see that one test errors:

trg: Error During Test at /Users/nick/juliacon/ITensors.jl/test/test_trg.jl:79
  Got exception outside of a @test
  BoundsError: attempt to access 8-element StaticArrays.MArray{Tuple{8},UInt8,1,8} at index [9]
  Stacktrace:
   [1] throw_boundserror(::StaticArrays.MArray{Tuple{8},UInt8,1,8}, ::Tuple{Int64}) at ./abstractarray.jl:484
   [2] checkbounds at ./abstractarray.jl:449 [inlined]
   [3] setindex! at /Users/nick/.julia/packages/StaticArrays/mcf7t/src/MArray.jl:126 [inlined]
   [4] TagSet(::String) at /Users/nick/juliacon/ITensors.jl/src/tagset.jl:95
   [5] replacetags(::TagSet, ::String, ::String) at /Users/nick/juliacon/ITensors.jl/src/tagset.jl:205
   [6] replacetags at /Users/nick/juliacon/ITensors.jl/src/index.jl:95 [inlined]
   [7] replacetags!(::IndexSet, ::String, ::String, ::String) at /Users/nick/juliacon/ITensors.jl/src/indexset.jl:335
   [8] replacetags at /Users/nick/juliacon/ITensors.jl/src/indexset.jl:339 [inlined]
   [9] replacetags(::ITensor, ::String, ::String, ::String) at /Users/nick/juliacon/ITensors.jl/src/itensor.jl:151
   [10] #trg#7(::Int64, ::Int64, ::Function, ::ITensor, ::Tuple{Index,Index}, ::Tuple{Index,Index}) at /Users/nick/juliacon/ITensors.jl/test/test_trg.jl:56
   [11] (::getfield(Main, Symbol("#kw##trg")))(::NamedTuple{(:χmax, :nsteps),Tuple{Int64,Int64}}, ::typeof(trg), ::ITensor, ::Tuple{Index,Index}, ::Tuple{Index,Index}) at ./none:0
   [12] top-level scope at /Users/nick/juliacon/ITensors.jl/test/test_trg.jl:92
   [13] top-level scope at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.1/Test/src/Test.jl:1083
   [14] top-level scope at /Users/nick/juliacon/ITensors.jl/test/test_trg.jl:81
   [15] include at ./boot.jl:326 [inlined]
   [16] include_relative(::Module, ::String) at ./loading.jl:1038
   [17] include(::Module, ::String) at ./sysimg.jl:29
   [18] include(::String) at ./client.jl:403
   [19] top-level scope at none:0
   [20] include at ./boot.jl:326 [inlined]
   [21] include_relative(::Module, ::String) at ./loading.jl:1038
   [22] include(::Module, ::String) at ./sysimg.jl:29
   [23] include(::String) at ./client.jl:403
   [24] top-level scope at none:0
   [25] eval(::Module, ::Any) at ./boot.jl:328
   [26] exec_options(::Base.JLOptions) at ./client.jl:243
   [27] _start() at ./client.jl:436
Test Summary: | Error  Total
trg           |     1      1
ERROR: LoadError: LoadError: Some tests did not pass: 0 passed, 0 failed, 1 errored, 0 broken.
in expression starting at /Users/nick/juliacon/ITensors.jl/test/test_trg.jl:79
in expression starting at /Users/nick/juliacon/ITensors.jl/test/runtests.jl:9
ERROR: Package ITensors errored during testing

Version info:

(ITensors) pkg> st
Project ITensors v0.1.0
    Status `~/juliacon/ITensors.jl/Project.toml`
  [861a8166] Combinatorics v0.7.0
  [2ae35dd2] Permutations v0.3.2
  [1fd47b50] QuadGK v2.0.3
  [90137ffa] StaticArrays v0.10.2
  [37e2e46d] LinearAlgebra
  [de0858da] Printf
  [9a3f8284] Random
  [8dfed614] Test

julia> VERSION
v"1.1.1"

Roadmap says Combinatorics requires imagemagick, but I don't think that is true

- Combinatorics (requires ImageMagick(!); let's move to test-only dependency or remove)

Combinatorics.jl has one dependency, Polynomials, which has one dependency, RecipesBase, which has no dependencies.

https://github.com/JuliaMath/Combinatorics.jl/blob/master/REQUIRE
https://github.com/JuliaMath/Polynomials.jl/blob/master/REQUIRE
https://github.com/JuliaPlots/RecipesBase.jl/blob/master/Project.toml

bug (?) in toMPO: multiple onsite terms

If I do

ampo = AutoMPO()
sites = spinOneSites(10)
add!(ampo, 0.5, "Sx",1)
add!(ampo, 0.5, "Sy",1)
toMPO(ampo, sites)

(e.g. for a TFIM with longitudinal field), I get a voluminous error (below). The code works (runs without error) with either one of Sx, Sy, but not both; I see the same behavior with a handful of different site types. This sort of thing doesn't appear to be tested by the test/test_autompo.jl testset Multiple Onsite Ops.
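
Judging from the stack trace below, remove_dups! sorts MatElem{MPOTerm} values, which ultimately requires an ordering on SiteOp that doesn't exist. A one-line definition along these lines (field names assumed) might be all that's missing:

Base.isless(a::SiteOp, b::SiteOp) = (a.site, a.name) < (b.site, b.name)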

Error:

MethodError: no method matching isless(::SiteOp, ::SiteOp)
Closest candidates are:
  isless(!Matched::Missing, ::Any) at missing.jl:70
  isless(::Any, !Matched::Missing) at missing.jl:71

Stacktrace:

 [1] cmp(::Array{SiteOp,1}, ::Array{SiteOp,1}) at ./abstractarray.jl:1708
 [2] isless at ./abstractarray.jl:1714 [inlined]
 [3] < at ./operators.jl:260 [inlined]
 [4] isless(::MatElem{MPOTerm}, ::MatElem{MPOTerm}) at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:215
 [5] lt at ./ordering.jl:49 [inlined]
 [6] sort!(::Array{MatElem{MPOTerm},1}, ::Int64, ::Int64, ::Base.Sort.InsertionSortAlg, ::Base.Order.ForwardOrdering) at ./sort.jl:455
 [7] sort!(::Array{MatElem{MPOTerm},1}, ::Int64, ::Int64, ::Base.Sort.MergeSortAlg, ::Base.Order.ForwardOrdering, ::Array{MatElem{MPOTerm},1}) at ./sort.jl:540
 [8] sort! at ./sort.jl:539 [inlined]
 [9] sort! at ./sort.jl:634 [inlined]
 [10] #sort!#7 at ./sort.jl:694 [inlined]
 [11] sort! at ./sort.jl:682 [inlined]
 [12] remove_dups!(::Array{MatElem{MPOTerm},1}) at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:251
 [13] #svdMPO#184(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::AutoMPO, ::Array{Index,1}) at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:322
 [14] svdMPO at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:272 [inlined]
 [15] #toMPO#200(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::AutoMPO, ::Array{Index,1}) at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:431
 [16] toMPO(::AutoMPO, ::Array{Index,1}) at /home/christopher/work/src/ITensors.jl/src/physics/autompo.jl:430
 [17] top-level scope at In[6]:7

Rename `array(::ITensor,::Index...)` -> `Array(::ITensor,::Index...)`

This is to emphasize that a copy of the data is made, and to distinguish it from array(::ITensor), which is a version that makes a view when possible, as an optimization to avoid copying Dense ITensors.

Also add a version Array{T}(::ITensor,::Index...) to help with type inference, when the desired element type is known.
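
A sketch of what the proposed methods might look like, assuming the existing permute and array functions (not a final implementation):

Base.Array(T::ITensor, is::Index...) = copy(array(permute(T, is...)))
Base.Array{ElT}(T::ITensor, is::Index...) where {ElT} =
  convert(Array{ElT}, Array(T, is...))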

Left-over print statement

There is a print statement println("identity") on line 42 of src/tensor/combiner.jl which should probably be commented out. Or was it a placeholder for some code to go there?

Factorize interface

The factorize function doesn't always seem to generate a unitary U as the first return value. Hopefully I am not doing something wrong here, but here is some code to reproduce what I found:

l = Index(4,"Link")
s = Index(2,"Site")
T = randomITensor(l,s)
U,R = factorize(T,s)

@show U*prime(U,"Link")  # not equal to the identity

Make `plussers` a Tensor function

plussers has different implementations for different storage types (Dense, CuDense, BlockSparse, etc.) so it should be a Tensor function.

bug in `mul!(w::ITensor, a, v::ITensor)` when `w= similar(v)` and v is complex

The current implementation of mul!(w,a,v) returns add!(w,0,v,a), which is a problem when w contains NaN values, as it means that the result of mul! will also contain NaN values.
This situation can occur for example when v is an ITensor with complex storage and w = similar(v) (see code below).
Due to this behavior it is currently not possible to use KrylovKit functions with ITensors (which is how I found out about this problem).

A simple fix would be to just use apply! directly in the definition of mul! instead of relying on add!. I can make a quick PR if you approve this solution.

here is a minimal code to see the problem (at least on my machine):

using ITensors, LinearAlgebra
i = Index(1000,"i")
v = randomITensor(ComplexF64,i)
w = similar(v)
mul!(w,0.1,v)
any(x->isnan(x),data(store(w)))

Bug in orthogonalize! when used with MPS with Dense{ComplexF64} storage

Hi,

While working on my time evolution code I encountered a bug in orthogonalize! when the MPS has complex coefficients. Basically it seems that it is not bringing the MPS into a proper canonical form: after calling orthogonalize!(psi,b), psi[b-1] is not left-orthogonal and psi[b+1] is not right-orthogonal.

Here is a minimal example:

using ITensors

basicRandomMPS(sites::SiteSet; kwargs...) = basicRandomMPS(Float64,sites; kwargs...)
function basicRandomMPS(T::Type, sites::SiteSet; dim=4)
  N = length(sites)
  M = MPS(sites)
  links = [Index(dim,"n=$(n-1),Link") for n=1:N+1]
  for n=1:N
    M[n] = randomITensor(T,links[n],sites[n],links[n+1])
  end
  M[1] *= delta(links[1])
  M[N] *= delta(links[N+1])
  M[1] /= sqrt(inner(M,M))
  return M
end

function MWE(T,b)
    sites = spinHalfSites(10)
    psi = basicRandomMPS(T,sites)
    orthogonalize!(psi,b)

    #check that psi[b-1] is left orthogonal
    L = psi[b-1]*prime(psi[b-1], b-1==1 ? "link" : commonindex(psi[b-1],psi[b]))
    l = linkindex(psi,b-1)
    err = norm(L-delta(l,l'))
    @assert err < 1e-12 "err = $err is not < 1e-12"
end

MWE(Float64,3) # no problem

MWE(ComplexF64,3) # assertion fails

Note also that most of the tests in the "MPS gauging and truncation" test set (in test_mps.jl) fail if I use a random MPS with complex coefficients.

I could try to dig into this, but I wonder whether it still occurs with the new Tensor design.

possible bug in `position!`

Hi,

It seems to me that position! is not shifting the orthogonality center properly; in the code below, the last test fails. Am I misunderstanding the functionality of position! or missing something else here?
I can try to find out the issue if it is indeed a bug.

using ITensors,LinearAlgebra, Test
N = 20
sites = spinOneSites(N)
ampo = AutoMPO(sites)
for j=1:N-1
    add!(ampo,"Sz",j,"Sz",j+1)
    add!(ampo,0.5,"S+",j,"S-",j+1)
    add!(ampo,0.5,"S-",j,"S+",j+1)
end
H = toMPO(ampo)
psi0 = randomMPS(sites)
sweeps = Sweeps(1)
maxdim!(sweeps, 10)
cutoff!(sweeps, 1E-10)
energy, psi = dmrg(H,psi0, sweeps)

position!(psi,5)
@test inner(psi,psi) ≈ 1.
@test dot(psi[5],psi[5]) ≈ 1.

memory issue

Hi guys,

It is really nice to see the use of Julia for DMRG. I've tried the code a little bit and observed that the memory usage is currently large for a relatively 'small' (am I right?) test (examples/dmrg/1d_Heisenberg_dmrg.jl):

17.823825 seconds (3.94 M allocations: 26.225 GiB, 13.91% gc time)

My question is: is it because the current code has not been carefully written to avoid unnecessary memory allocation, or is it the Julia compilation that takes a large amount of memory? If the latter is the case, then it might be a problem.

PS: Another question is about the speed compared with the C++ version of ITensor. Have any benchmarks been performed? Any comment or insight about this would be very helpful for my understanding. Thanks!

Best,
Zhendong

julia> @time include("1d_Heisenberg_dmrg.jl")
sweeps = Sweeps
1 cutoff=1.0E-10, maxdim=10, mindim=1
2 cutoff=1.0E-10, maxdim=20, mindim=1
3 cutoff=1.0E-10, maxdim=100, mindim=1
4 cutoff=1.0E-10, maxdim=100, mindim=1
5 cutoff=1.0E-10, maxdim=200, mindim=1

After sweep 1 energy=-137.331806906147 maxDim=9 time=0.099
After sweep 2 energy=-138.933671055011 maxDim=20 time=0.386
After sweep 3 energy=-138.940063572147 maxDim=85 time=2.773
After sweep 4 energy=-138.940085657916 maxDim=100 time=6.140
After sweep 5 energy=-138.940086065305 maxDim=99 time=7.619
Final energy = -138.94008606530534
17.823825 seconds (3.94 M allocations: 26.225 GiB, 13.91% gc time)

RFC: Use Symbols instead of SmallStrings?

Symbol is a builtin and String is heap-allocated, which I guess is why you implemented SmallString. I'm wondering if it would be better to make use of the existing infrastructure.

Should there be an AbstractMPS type?

Hi,

I think it could be useful to have an AbstractMPS type and have some of the MPS functions accept AbstractMPS.
The use case I have in mind is that it will allow users to define MPS wrappers that could represent, for example, a purification or a vectorized density matrix, for which different algorithms and operations are exactly the same as for a regular MPS, but measurement of observables is done differently. Another use case could be an iMPS.

As a starting point, it would already be useful to just define AbstractMPS and make MPS <: AbstractMPS without making any other changes to the code.
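
That starting point, plus a wrapper of the kind described above, might look like the following sketch (PurifiedMPS is hypothetical):

using ITensors

abstract type AbstractMPS end
# MPS would then be declared as `mutable struct MPS <: AbstractMPS ... end`

struct PurifiedMPS <: AbstractMPS
  psi::MPS   # underlying MPS on the doubled (system + ancilla) sites
end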

The question is what is the most minimal but still useful interface that defines an AbstractMPS. I'll have to think about it a bit more, possibly after trying to actually implement some MPS wrapper.

`dmrg` throws error when `quiet=false`

The cause seems to be that maxDim was renamed to maxLinkDim, but not updated in dmrg.
I guess all the DMRG tests run with quiet=true so this was not seen.

Possible bug in MPOTerm, proposal for fix

Hi,

I noticed that when I run

sites = spinHalfSites(4)
ampo = AutoMPO()
add!(ampo, "S+", 1, "S-", 2, "S+", 3, "S-", 4)
H = MPO(ampo, sites)

the error "BoundsError: attempt to access 4-element Array{SiteOp,1} at index [5]" is thrown.
It seems to me like this should not happen and that it's due to lines 85-87 in autompo.jl:

for n=1:2:length(ops); vop[2+n] = SiteOp(ops[n],ops[n+1]); end

as n grows to length(ops) while length(vop) = length(ops) / 2.
A possible fix might be:
for n = 1:div(length(ops),2); vop[2+n] = SiteOp(ops[2*n-1],ops[2*n]); end

I'll make a pull request with this fix and a four site Ising test that catches the issue. If I've misunderstood something and this is all expected behaviour then please ignore!

name choice of `maxLinkDim`

Hi,
first let me say that I am really excited about ITensor coming to Julia. I used the C++ version a couple of years ago and this new port looks really promising. Thanks for your work!

I was wondering regarding the naming of maxLinkDim, for two reasons:

  1. According to the Julia style guide functions should be in lowercase, so it seems that maxlinkdim would be more julian (this is also true for maxDim).
  2. Wouldn't calling it maxbonddim be more in line with the tensor networks literature? I never saw it referred to as "link dimension" (but I am aware the core authors of the library have much greater expertise in tensor networks than me, so please correct me if I'm wrong). I guess this is related to the fact that the bond indices of the MPS are tagged as "Link".

Probably point 2 is less important, although it might make it more intuitive and discoverable for new users who are familiar with the literature.

Possible bug in `delta`

Hi,
let's suppose I have an ITensor A with indices (i, i', m) of dimensions, e.g., (3, 3, 2). If I contract that ITensor with delta(i, i'), the contraction returns only the sum of the elements A[1,1,1] + A[2,2,1] and not the full trace A[1,1,1] + A[2,2,1] + A[3,3,1].

A MWE showing this

using LinearAlgebra, Test, ITensors

idim = 3
mdim = 2 # Test will fail whenever mdim < idim
    
i = Index(idim, "i")
m = Index(mdim, "m")
    
A = randomITensor(i, i', m)
    
# Using delta.
O = A*δ(i, i')
    
# Manual trace.
P = map(k -> tr(Array(A)[:, :, k]), 1:mdim)

@test Array(O) ≈ P

Is that the expected behavior of delta?

time evolution

Hi,
I am interested in adding some time evolution functionality to the package (since this is my main use case at the moment). Would you be interested in such a contribution, or is it something you prefer to postpone to a later stage / not delegate to external contributors?

Implement writing of types to disk

A key question is what format to use. Probably HDF5, using a simple subset of its features, is best. I just tried installing HDF5.jl on a Mac and it was quite fast and easy; in the past, I gather, the installation would take a long time and fail often, so it seems they've improved it!

Another alternative format is BSON. Finally we could offer a straight binary format and/or Julia serialize, which would not require installing any libraries. Perhaps we can make a simple wrapper that allows us to offer whichever one a user prefers without us needing to code each one for every type in ITensors.jl.
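
To make the HDF5 option concrete, here is a minimal sketch of writing a dense, real ITensor with HDF5.jl (the group layout and the name writeitensor are invented for illustration):

using HDF5, ITensors

function writeitensor(path::String, name::String, T::ITensor)
  h5open(path, "w") do f
    write(f, "$name/dims", [dim(i) for i in inds(T)])           # index dimensions
    write(f, "$name/tags", [string(tags(i)) for i in inds(T)])  # index tags as strings
    write(f, "$name/data", array(T))                            # dense data (real storage assumed)
  end
end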

Small change to AutoMPO syntax

An alternative syntax we could use for AutoMPO is the following:

ampo = AutoMPO(sites)
for j=1:N-1
  ampo += ("Sz",j,"Sz",j+1)
  ampo += (0.5,"S+",j,"S-",j+1)
  ampo += (0.5,"S-",j,"S+",j+1)
end

as opposed to:

ampo = AutoMPO(sites)
for j=1:N-1
  add!(ampo,"Sz",j,"Sz",j+1)
  add!(ampo,0.5,"S+",j,"S-",j+1)
  add!(ampo,0.5,"S-",j,"S+",j+1)
end

This would get us very close to the C++ syntax. It should just require overloading Base.:+(::AutoMPO,::Tuple).
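
The overload could be as simple as this sketch:

Base.:+(ampo::AutoMPO, term::Tuple) = (add!(ampo, term...); ampo)

Since ampo += t lowers to ampo = ampo + t, returning the mutated ampo is all that's needed.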

@emstoudenmire, let me know what you think.
