sparseir.jl's People

Contributors: github-actions[bot], mwallerb, samuel3008, shinaoka, timholy
sparseir.jl's Issues

Conflicting symbol names (`beta`)

@sakurairihito

We (R. Sakurai and I) found this a bit annoying:

using SparseIR
lambda_ = 100.0
beta = 10.0
wmax = lambda_/beta
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
println(beta(basis))

Many Julia users seem to prefer `using` over explicit imports, so the exported accessor `beta` clashes with the user's own `beta` variable.
We should rename the accessor to something like `getbeta`.
The same applies to `wmax`...
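A minimal standalone sketch of the proposed rename (the struct and accessor names here are hypothetical, not SparseIR's actual API): an accessor called `getbeta` cannot be shadowed by a user's local `beta` variable.

```julia
# Hypothetical sketch: a dummy basis type with a `getbeta` accessor.
# Unlike an exported function named `beta`, this name does not clash
# with the common user-side variable `beta`.
struct DummyBasis
    β::Float64
end

getbeta(b::DummyBasis) = b.β    # accessor with a non-clashing name

beta = 10.0                     # user-defined variable, no conflict
basis = DummyBasis(beta)
println(getbeta(basis))         # prints 10.0
```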

Performance of evaluating `uhat`

This performance problem becomes serious when evaluating the basis functions on a dense frequency mesh.

Julia:

using SparseIR
beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
nvec = 2 .* collect(1:10000) .+ 1
basis.uhat(nvec)
@time basis.uhat(nvec)
22.467327 seconds (72.36 M allocations: 22.350 GiB, 6.70% gc time)

Python:

from sparse_ir import FiniteTempBasis
import numpy as np
import time
beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis("F", beta, wmax, 1e-7)
nvec = 2 * np.arange(10000) + 1
t1 = time.time()
basis.uhat(nvec)
time.time() - t1
0.49092721939086914
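The allocation count (72 M allocations for 10,000 frequencies) suggests the Julia path allocates per point, while the Python version evaluates the whole mesh in one vectorized pass. A toy sketch of the batched pattern (the kernel `1/(i n + l)` here is hypothetical, not SparseIR's actual `uhat`):

```julia
# Per-point: each call builds a fresh length-L vector (many small allocations).
eval_point(L, n) = [1 / (im * n + l) for l in 1:L]
eval_pointwise(L, nvec) = reduce(hcat, eval_point.(Ref(L), nvec))

# Batched: one pass writing into a preallocated L × length(nvec) buffer.
function eval_batched!(out, L, nvec)
    @inbounds for (j, n) in pairs(nvec), l in 1:L
        out[l, j] = 1 / (im * n + l)
    end
    return out
end

nvec = 2 .* (1:100) .+ 1
out = Matrix{ComplexF64}(undef, 4, length(nvec))
eval_batched!(out, 4, nvec)
```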

Further improvement of `evaluate!`

When `dim=end`, `evaluate!` still allocates a lot of memory.

Benchmark result for commit bebe186.

using Revise
using SparseIR
using BenchmarkTools

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
smpl = MatsubaraSampling(basis)

N = 1000000
in = zeros(ComplexF64, length(basis), N)
out = zeros(ComplexF64, length(smpl.sampling_points), N)
@benchmark evaluate!(out, smpl, in; dim=1)

[benchmark screenshot for dim=1]

N = 1000000
in = zeros(ComplexF64, N, length(basis))
out = zeros(ComplexF64, N, length(smpl.sampling_points))
@benchmark evaluate!(out, smpl, in; dim=2)

[benchmark screenshot for dim=2]
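If the coefficients run along the last dimension, evaluation reduces to `out = in * transpose(A)`, which `mul!` can perform fully in place with no temporaries. A standalone sketch, where `A` stands in for the sampling matrix:

```julia
using LinearAlgebra

# A: (n_points × n_basis) stand-in for the Matsubara sampling matrix.
A = rand(ComplexF64, 6, 4)
N = 8
in_ = rand(ComplexF64, N, 4)             # coefficients along the last dim
out = Matrix{ComplexF64}(undef, N, 6)

mul!(out, in_, transpose(A))             # single in-place GEMM, zero allocations
@assert out ≈ in_ * transpose(A)
```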

Typing of interface

Related to #12.

FiniteTempBasis and sampling classes are parametric types.

https://github.com/SpM-lab/SparseIR.jl/blob/main/src/basis.jl#L168
https://github.com/SpM-lab/SparseIR.jl/blob/main/src/sampling.jl#L34

I like the type stability of this design but I feel that we expose too many internal type parameters to the user. As a result, FiniteTempBasis objects for different kernels have different types even though they have the same interface. Is it possible to reduce the number of exposed internal type parameters while keeping the type stability of the interface?

In my opinion, the user would expect basis and sampling classes to depend on only one type parameter T <: AbstractFloat, describing the precision used to represent the basis functions (with T defaulting to Float64).

We could make the type of the kernel attribute abstract to prevent the working floating-point type of the SVD, T_work, from being propagated to the user, at the price of some type instability.

Any ideas?

struct FiniteTempBasis{T<:AbstractFloat} <: AbstractBasis
    kernel::AbstractKernel
    sve_result::Tuple{
        PiecewiseLegendrePolyVector{T},Vector{T},PiecewiseLegendrePolyVector{T}
    }
    statistics::Statistics
    β::T
    u::PiecewiseLegendrePolyVector{T}
    v::PiecewiseLegendrePolyVector{T}
    s::Vector{T}
    uhat::PiecewiseLegendreFTArray{T}
end
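If `kernel` is stored behind an abstract type as suggested, the resulting instability can be confined with a function barrier: the field lookup is dynamic, but Julia compiles a specialized method for the concrete kernel type, so everything downstream of the barrier stays type-stable. A standalone sketch (the type names mirror SparseIR's but the code is self-contained and hypothetical):

```julia
abstract type AbstractKernel end

struct LogisticKernelSketch <: AbstractKernel
    Λ::Float64
end

struct BasisSketch
    kernel::AbstractKernel   # abstract field: reading it is one dynamic dispatch...
end

# ...but this inner method is compiled for the concrete kernel type, so the
# hot loop inside it is fully type-stable.
_evaluate(k::LogisticKernelSketch, x) = exp(-k.Λ * x)

evaluate(b::BasisSketch, x) = _evaluate(b.kernel, x)   # function barrier

b = BasisSketch(LogisticKernelSketch(100.0))
evaluate(b, 0.01)   # one dynamic dispatch per call, not per inner operation
```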

Long precompilation times

SparseIR is now super fast to load. However, it takes quite a long time to precompile.

I don't understand why this should be, since we mostly do bog-standard linear algebra ... or is the quad-precision linear algebra just much slower?

Quite a small issue, but it is a little annoying :)

Adjust `T_work` for small ε

lambda_ = 100.0
beta = 10.0
wmax = lambda_/beta
basis = FiniteTempBasis(fermion, beta, wmax, 1e-10)

yields

[screenshot of the error output]

Fix accuracy in kernels

As witnessed by non-uniform error in the SVE's singular values, we have precision issues in computing the matrices to be SVD'd:
[screenshot: non-uniform error in the singular values]

Fix by introducing `x_forward`/`x_backward`.

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

fermion/boson

The symbols boson and fermion are now gone. Can we reintroduce them?

Composite/augmented basis issues

The composite/augmented basis things are in a pretty sorry state right now.

Problems:

  • there is essentially no documentation on how and why to use this
  • the interface is not type-stable at all, it stores a bunch of Vector{Any} and Union{...} instead of doing this properly
  • the sampling points are incorrect! composite basis does not actually augment the sampling points in any way, which completely wrecks any fitting procedure.

Bug for SVD with Double64

A minimal example:

using SparseIR
FiniteTempBasis(fermion, 1.0, 1.0, 1e-10)

Error message:
[screenshot of the stack trace]

I think the kernel must be initialized with Double64 when the cutoff is small. Alternatively, we could always use Double64 for the kernel, for safety.
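A sketch of how the working type could be chosen from the requested cutoff; the function name is hypothetical, and BigFloat is used as a dependency-free stand-in for Double64 from DoubleFloats.jl:

```julia
# Hypothetical sketch: pick the working precision from the requested cutoff ε.
# A Float64 SVD loses roughly half the significand, so fall back to extended
# precision once ε drops below √eps(Float64) ≈ 1.5e-8.
# (The real fix would use Double64; BigFloat keeps this example self-contained.)
function choose_worktype(ε::Float64)
    return ε < sqrt(eps(Float64)) ? BigFloat : Float64
end

choose_worktype(1e-7)    # Float64 suffices
choose_worktype(1e-10)   # needs extended precision
```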

Add `evaluate!` and `fit!`

Just a memo: implementing these would be useful to avoid allocating a new output array.
However, if `dim != 1` and `dim != end`, we still need to allocate temporary arrays for permuting dimensions.
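For inner dimensions, one workaround is to reshape the array into a 3-tensor and apply the matrix slice by slice through views, which needs no permutedims at all. A standalone sketch of applying a generic matrix `A` along dimension `dim` (this is an illustration of the dim-handling pattern, not SparseIR's actual `evaluate!`):

```julia
using LinearAlgebra

# Apply matrix A (m × d) along dimension `dim` of X into preallocated `out`.
# dim == 1 and dim == ndims(X) reduce to a single GEMM via reshape; inner
# dims loop over slices of a reshaped 3-tensor using views (no permutedims).
function apply_along!(out, A, X, dim)
    if dim == 1
        mul!(reshape(out, size(A, 1), :), A, reshape(X, size(A, 2), :))
    elseif dim == ndims(X)
        mul!(reshape(out, :, size(A, 1)), reshape(X, :, size(A, 2)), transpose(A))
    else
        pre = prod(size(X)[1:dim-1])
        Xr = reshape(X, pre, size(X, dim), :)
        Or = reshape(out, pre, size(A, 1), :)
        for k in axes(Xr, 3)
            mul!(view(Or, :, :, k), view(Xr, :, :, k), transpose(A))
        end
    end
    return out
end
```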

Performance improvement of `fit!`

Does it make sense to switch from LU to SVD and allow the user to optionally pass a preallocated work array to fit!?

using Revise
using SparseIR
using BenchmarkTools

beta = 1.0
wmax = 1000.0
basis = FiniteTempBasis(fermion, beta, wmax, 1e-7)
smpl = MatsubaraSampling(basis)

N = 100000
in = zeros(ComplexF64, length(basis), N)
out = zeros(ComplexF64, length(smpl.sampling_points), N)
#@benchmark evaluate!(out, smpl, in; dim=1)
@benchmark fit!(in, smpl, out; dim=1)

[benchmark screenshot]
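One way the SVD variant could look: factor the sampling matrix once, then each fit is two in-place GEMMs plus a row scale into a caller-supplied work array. A standalone least-squares sketch (the shapes and the `fit_svd!` name are assumptions, not SparseIR's API):

```julia
using LinearAlgebra

# Least-squares fit of coefficients from sampled values via a prefactored SVD:
#   x = V * (S \ (Uᴴ y)),  batched over the columns of y.
# `work` is a caller-supplied (n_sv × N) buffer, so the call allocates nothing.
function fit_svd!(x, F::SVD, y, work)
    mul!(work, F.U', y)      # work = Uᴴ y
    work ./= F.S             # scale row i by 1/σ_i (broadcast over columns)
    mul!(x, F.V, work)       # x = V * work
    return x
end

A = rand(6, 4)               # sampling matrix (n_points × n_basis)
F = svd(A)
y = A * rand(4, 3)           # exact data, so the fit recovers the coefficients
x = zeros(4, 3)
work = zeros(4, 3)
fit_svd!(x, F, y, work)
@assert A * x ≈ y
```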

Types of TauSampling/MatsubaraSampling

These types depend on the SVD type:
https://github.com/SpM-lab/SparseIR.jl/blob/main/src/sampling.jl#L69

This is not so convenient, because changes in the implementation of the fitting can affect user code. For instance, the code shown below no longer works.
https://github.com/SpM-lab/sparse-ir-tutorial/blob/main/src/ipt_jl.md

[screenshot of the failing code]

Does it make sense to define type aliases?

const TauSampling64 = TauSampling{Float64,Float64,SVD}
const MatsubaraSampling64 = MatsubaraSampling{Int64,ComplexF64,SVD}

Speeding up evaluating overlap

Respecting the segments of a PiecewiseLegendrePoly object when evaluating overlap did NOT improve performance. It would be better to port the Python code to Julia.
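For reference, the segment-respecting approach integrates each piece with a fixed-order Gauss–Legendre rule. A minimal self-contained sketch with a hardcoded 2-point rule (the `overlap` signature here is illustrative, not SparseIR's):

```julia
# Minimal sketch of a piecewise overlap ∫ f(x) g(x) dx over given segment
# boundaries, using a hardcoded 2-point Gauss–Legendre rule per segment
# (exact for polynomials up to degree 3 on each piece).
const GL2_NODES = (-1 / sqrt(3), 1 / sqrt(3))   # nodes on [-1, 1]; weights are 1

function overlap(f, g, segments)
    acc = 0.0
    for (a, b) in zip(segments[1:end-1], segments[2:end])
        h = (b - a) / 2                  # half-width maps [-1, 1] → [a, b]
        m = (a + b) / 2
        for ξ in GL2_NODES
            x = m + h * ξ
            acc += h * f(x) * g(x)
        end
    end
    return acc
end

overlap(x -> x, x -> x, [0.0, 0.5, 1.0])   # ∫₀¹ x² dx = 1/3
```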
