juliaapproximation / singularintegralequations.jl
Julia package for solving singular integral equations. License: Other
Was going through the unicode math symbols and realized they include ⨍ and ⨎. Surely we can use these somehow for writing `hilbert`? This would be cool if it worked:
x=Fun(identity)
⨍(exp(x)*sqrt(1-x^2)/(x-0.1))
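A minimal sketch of how this could be wired up, assuming the package's `hilbert(f,x)` evaluation with the 1/π convention; the `⨍` method below is hypothetical, and it folds the pole location in at evaluation time rather than parsing it out of the integrand:

using ApproxFun, SingularIntegralEquations

# Hypothetical alias: ⨍(f) returns x₀ -> PV ∫ f(t)/(t-x₀) dt, assuming
# hilbert(f,x₀) computes (1/π) PV ∫ f(t)/(t-x₀) dt.
⨍(f::Fun) = x₀ -> π*hilbert(f,x₀)

x = Fun(identity)
⨍(exp(x)*sqrt(1-x^2))(0.1)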
We have a `woodburysolve` method for `HierarchicalMatrix{S<:AbstractMatrix,T<:LowRankMatrix}` objects, and we have a `hierarchicalGreensFun` method with hierarchically off-diagonal `LowRankFun`s. Ideally, the recursive `woodburysolve` should work on all hierarchically off-diagonal low-rank forms; it's just a matter of assigning the right meanings to the algebraic symbols.
What happens when we `getindex` a `DefiniteLineIntegral` with a `HierarchicalMatrix{GreensFun,GreensFun{LowRankFun}}`? Should there be a `LowRankOperator`? Then one possible result would be a `HierarchicalMatrix{Operator,LowRankOperator}`.
There should be:
- `HierarchicalVector`
- `HierarchicalDomain`
- `HierarchicalFunctionSpace`

A `HierarchicalVector` should have fast multiplication by a `HierarchicalMatrix`; see the sketch below. A `HierarchicalDomain` should have a graph-partitioning constructor via recursive bisection of a non-integer-weighted adjacency matrix, possibly something like this.
We should be able to look at Helmholtz and Laplace's equation with homogeneous regions with different material properties: permeability & permittivity, and different wave numbers. I believe this would only change the Dirichlet/Neumann traces and how the solutions are evaluated, but the problem structure is the same.
Perhaps these could be used for mixed boundary conditions, as they have different square-root endpoint singularities and they are each other's Hilbert transforms.
Should `Base.getindex(::Operator,::LowRankFun)` return a `LowRankOperator`?
Perhaps a better formula for `LowRankOperator` leaves the types of U and V (and possibly C) more free. Maybe V could be more than just a list of functionals. Yet, if they had a common operator, it would be more efficient to store it as C rather than copy it into each element of V as some kind of `FunctionalWrapper`.
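A hedged sketch of the representation under discussion, with hypothetical field and type names; applying it to f reduces to rank-many functional evaluations combined through C:

# Hypothetical sketch: L*f = Σ_{j,k} U[j] * C[j,k] * V[k](f), with U the Funs
# spanning the range, V a list of functionals acting on the argument, and C
# the coupling coefficients (or a common operator factored out of V).
immutable LowRankOperator{UU,CC,VV}
    U::Vector{UU}   # range elements
    C::CC           # coupling matrix / common operator
    V::Vector{VV}   # functionals
end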
Where is the overhead?
From the README:
using ApproxFun, SingularIntegralEquations
k = 50 # Set wavenumber and fundamental solution for Helmholtz equation
g1(x,y) = -besselj0(k*abs(y-x))/2
g2(x,y) = x == y ? -(log(k/2)+γ)/2/π + im/4 : im/4*hankelh1(0,k*abs(y-x)) - g1(x,y).*logabs(y-x)/π
ui(x,y) = exp(im*k*(x-y)/sqrt(2)) # Incident plane wave at 45°
dom = Interval() # Set the domain
sp = Space(dom) # Canonical space on the domain
⨍ = DefiniteLineIntegral(dom) # Line integration functional
uiΓ = Fun(t->ui(real(t),imag(t)),sp) # Incident wave on Γ
# Instantiate the fundamental solution
G = GreensFun(g1,CauchyWeight(sp⊗sp,0)) + GreensFun(g2,sp⊗sp)
It's quicker to do a dense QR factorization on the entries (note: no adaptivity):
julia> @time ∂u∂n = ⨍[G]\uiΓ;
2.659190 seconds (575.29 k allocations: 401.028 MB, 2.05% gc time)
julia> @time begin
QR = qrfact(full(⨍[G][1:length(∂u∂n),1:length(∂u∂n)]))
sol = QR\pad(coefficients(uiΓ),length(∂u∂n))
end;
0.418481 seconds (572.62 k allocations: 152.076 MB, 6.08% gc time)
Trying to get Neumann boundary conditions to work with the Helmholtz equation has been tricky, but this is something I didn't expect. If an incident wave is normal to the scatterer (d=(0,-1)), then for certain wave numbers (specifically k = 2, 10, 12, 20) the density may not be even. Why? The GreensFun seems to be symmetric to 14 or so digits, and the input data is obviously symmetric in this case.
Right now, it goes row by row, with the operator in between multiplications by Funs. Maybe this could be simplified by assembling all the coefficients as a `BandedMatrix`.
I.e., the hypersingular integral is missing. Normal derivatives on the Circle are outward; this should help. We should be able to plug in the normal derivatives of the Helmholtz fundamental solution (in HelmholtzNeumann.jl), as they seem to be coordinate-independent.
If the Hilbert transform of the second partial derivative is diagonal in some basis, then time-evolution could be taken care of using a fourth-order time-stepper for stiff PDEs, like this.
For `Hilbert(d,0)`, the singular term is log|y-x|. In `cauchyintegral`, it seems this is log(y-x), and the following crashes:
using ApproxFun, SIE
ds = JacobiWeight(.5,.5,Chebyshev())
f = Fun(x->exp(x)*sqrt(1-x^2),ds)
cauchyintegral(f,2.)
cauchyintegral(f,-2.)
It seems to me this will be required if we want to do scattering by a screen, so how to proceed? What is the correct analytic continuation, similar to `sqrtx2`?
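(A standard branch fact that may be the crux here: with the principal branch and real y, log(y-x) = log|y-x| for y > 1, but log(y-x) = log|y-x| + iπ for y < -1 and x ∈ (-1,1), so the two kernels genuinely differ to the left of the interval.)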
I'm happy to do this one, but would like to put some ideas on the table first. The solver is fast enough now that you cannot see a lag in @manipulate, but for SIEs with non-trivial kernels, creating the LowRankFuns is now a bottleneck.
I think I've worked this out correctly: `Hilbert` acting on a `PiecewiseSpace` consisting of `JacobiWeight` spaces should have the Cauchy rangespaces be Ultraspherical (i.e. the space of the alternate JacobiWeight spaces). In this way:
sp = ApproxFun.PiecewiseSpace([JacobiWeight(-.5,-.5,Interval([-2.,-.5])),JacobiWeight(-.5,-.5,Interval([.5,2.]))])
H = Hilbert(sp)
domainspace(H)
rangespace(H)
actually instantiates. Is this correct?
I don't know if there are plans for these in Julia, but in terms of numerical evaluation, the SIE operators' functions can be used in conjunction, which means that in-place variants would be more efficient.
The first time `woodburysolve` or backslash is called, (block) QR/LU factorizations of all the pivot matrices should be stored at each level in the hierarchy of a `HierarchicalMatrix`. Solving again would then only call the less expensive phase of the algorithm.
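A minimal sketch of the caching, with hypothetical field names (`pivot`, `factorization`) and the Julia 0.4-era `lufact`; it assumes the HierarchicalMatrix type is mutable so the field can be set:

# Hypothetical sketch: factor the pivot matrix once, reuse it thereafter.
function woodburysolve(H,f)
    if !isdefined(H,:factorization)
        H.factorization = lufact(H.pivot)   # expensive phase: once per matrix
    end
    H.factorization\f                       # cheap phase: every subsequent solve
end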
Getting a weird message when running tests. I put the examples in the tests, and the test breaks at Fracture_b.jl, but when I run any example on its own it executes just fine.
ERROR: type cannot be constructed
in + at operators.jl:82
in include at ./boot.jl:246
in include_from_node1 at ./loading.jl:128
in include at ./boot.jl:246
in include_from_node1 at ./loading.jl:128
in include at ./boot.jl:246
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
in _start at /Users/Mikael/Desktop/julia/usr/lib/julia/sys.dylib
while loading /Users/Mikael/.julia/v0.4/SIE/test/../examples/Fracture_b.jl, in expression starting on line 24
while loading /Users/Mikael/.julia/v0.4/SIE/test/ExamplesTest.jl, in expression starting on line 5
while loading /Users/Mikael/.julia/v0.4/SIE/test/runtests.jl, in expression starting on line 66
Currently, we have `Hilbert` inheriting its core structure & function from the `@calculus_operator` macro (which is awesome, by the way). So I was thinking: could the same be done for `Cauchy` operators? One way to do it would be to create two new operators, `CauchyPlus` and `CauchyMinus`, and their abstract supertypes and wrappers. Another way would be to modify the macro and have the `immutable` types declared beforehand. This would allow a single `Cauchy` operator to look like:
immutable Cauchy{S<:FunctionSpace,T<:Number} <: AbstractCauchy{T}
    space::S
    order::Int
    orientation::Vector{Bool}
end
The `@calculus_operator` macro would set most things as usual. Then one simple outer function, say `orientation(s::Union(Int,Bool),C::Cauchy)`, assigns a value to a Cauchy instance's `orientation[1]`. Note that storing orientation as a `Vector{Bool}` allows its entries to be changed even though the operator is immutable.
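A toy illustration of that last point (nothing package-specific): fields of an `immutable` cannot be rebound, but a field that is itself a mutable container can still have its contents changed.

immutable Flagged
    flag::Vector{Bool}
end

F = Flagged([true])
F.flag[1] = false    # fine: mutates the vector's contents in place
# F.flag = [false]   # error: cannot reassign a field of an immutable type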
`1/sqrt(x+im)`
I think the following timing suggests that we would perform significantly better using truncation and a banded matrix:
#HelmholtzDirichlet.jl example
@time ∂u∂n = ⨍[G]\uiΓ # 6.882111 seconds (20.04 M allocations: 4.365 GB, 10.21% gc time)
cfs=pad(coefficients(uiΓ,rangespace(⨍[G])),100)
@time ⨍[G][1:100,1:100]\cfs # 0.786749 seconds (2.67 M allocations: 525.196 MB, 10.94% gc time)
It's based on LU, so there may be inaccuracies introduced, but in this case they match to 8.9e-12.
This would be useful for fractal screens of constant-coefficient self-adjoint PDOs. (Only half the inner products are needed, and a simpler factorization is possible because the Schur complements are symmetric.)
Basically, what we need to get working again is:
using ApproxFun, SingularIntegralEquations

x = Fun(identity)
w = 1/sqrt(1-x^2)      # inverse square-root weight on [-1,1]
H = Hilbert(space(w))
H[w]
I think the code you showed me (which I then added to the tests file) for the known SIE works because of some new features you added in linsolve: the `Ad = promotedomainspace(A)` line allows any number of operators to be instantiated in `AnySpace`s, and all this can be carried through up to the solution. But this can't be a good programming solution, because we can't even check the entries of L (try `L[1:10,1:10]`). Now, I don't know exactly why `H[w]` as above is crashing (or, more precisely, how to fix it without breaking anything else), but I would like that last line to make sense. This will restore `H[f::BivariateFun]`.
Saw that you mentioned this at BIRS. Can this be done currently?
julia /home/xxx/.julia/v0.4/SingularIntegralEquations/examples/erfc.jl
WARNING: module ApproxFun should explicitly import getindex from Base
... bunch of warnings from ApproxFun deleted to save some space ...
WARNING: module ApproxFun should explicitly import transpose from Base
ERROR: LoadError: LoadError: LoadError: LoadError: TypeError: CauchyWeight: in parameter, expected Type{T}, got Tuple{TypeVar,TypeVar}
while loading /home/xxx/.julia/v0.4/SingularIntegralEquations/src/./GreensFun/CauchyWeight.jl, in expression starting on line 15
while loading /home/xxx/.julia/v0.4/SingularIntegralEquations/src/./GreensFun/GreensFun.jl, in expression starting on line 1
while loading /home/xxx/.julia/v0.4/SingularIntegralEquations/src/SingularIntegralEquations.jl, in expression starting on line 51
while loading /home/xxx/.julia/v0.4/SingularIntegralEquations/examples/erfc.jl, in expression starting on line 8
Version information:
julia> versioninfo()
Julia Version 0.4.0-pre+7303
Commit 404b414* (2015-09-05 05:01 UTC)
Platform Info:
System: Linux (x86_64-redhat-linux)
CPU: Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz
WORD_SIZE: 64
BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Nehalem)
LAPACK: libopenblasp.so.0
LIBM: libopenlibm
LLVM: libLLVM-3.3
Known bug.
Right now, s=+1 => s=true and s=-1 => s=false. I think it should work via s=+ => true and s=- => false. This avoids the issue that true == 1 and false == 0.
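A hedged sketch of the proposal (hypothetical helper; in Julia 0.4 individual functions cannot be dispatched on by type, so identity comparison is used):

# Map the functions + and - themselves to orientations, sidestepping the
# Bool <: Integer pun where true == 1 and false == 0.
orientation(s::Function) = s === (+) ? true :
                           s === (-) ? false : error("orientation must be + or -")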
As it is, a `ProductFun` in the `CauchyWeight{O}` space is created without any use of the order of the CauchyWeight: basically, the smooth part is created and enriched. This is inconsistent with other enriched spaces such as `JacobiWeight`, and I think it could actually be changed: the 1D Fun along the anti-diagonal can be used to extract the diagonal limit, so numerically everything is doable. However, I don't see how much this matters, because we would be hard-pressed to come up with a way to detect the order of the CauchyWeight, log mixes poorly with integral powers, and most likely the smooth parts of the kernels will be known anyway. I'm happy with it either way.
After JuliaApproximation/ApproxFun.jl@8b31c1f, these need updating to `bilinearform` and `innerproduct`.
Are these lines typos?
https://github.com/ApproxFun/SIE.jl/blob/master/src/Cauchy.jl#L81
https://github.com/ApproxFun/SIE.jl/blob/master/src/Cauchy.jl#L156
https://github.com/ApproxFun/SIE.jl/blob/master/src/singfuncauchy.jl#L202
It seems like they should look like
https://github.com/ApproxFun/SIE.jl/blob/master/src/Cauchy.jl#L114
or else the `newret` serves no purpose.
Banded linear algebra is arguably inapplicable to these operators, setting them on the path to being amenable to hierarchical solvers.
Should we also combine them with Hilbert/SingularIntegral and have them generated by a variation of the @calculus_operator macro?
Currently, the signature is `hierarchicalsolve{U,V<:LowRank...}(H::HierarchicalMatrix{U,V},f::AbstractVecOrMat)`, but it may make more sense to have `f::HierarchicalVector`, especially considering that
# Partition the right-hand side
(f1,f2) = (f[1:size(H11,1),:],f[1+size(H11,1):end,:])
could be written
# Partition the right-hand side
f1,f2 = partitionvector(f)
The problem is what to do with the columns and rows of the off-diagonal low-rank matrices. These combine naturally as a `Matrix`, but they could also be a `Vector{HierarchicalVector}`. In the ideal setting, the hierarchical solver has very few allocations.
It's currently covered by `logkernel(Fun(u,Fourier),z)`, but it would be better to have both formulae.
In scattering problems, I'm trying to get things separated as follows:
but I'm getting NaNs as the solution coefficients when I try to do this. I will investigate.
The factorization phase of an `H::HierarchicalMatrix` with off-diagonal low-rank matrices/operators requires computation of `H.A`:
H.A = [I B; C I]
where B and C are dense and the identities are of conformable sizes. To simplify the solution of later linear systems involving `H.A`, a factorization is stored in `H.factorization`. Should this be the block LDU factorization
[I B; C I] = [I 0; C I]*[I 0; 0 I-C*B]*[I B; 0 I]
or something else?
The advantage of QR is backward stability, while LU is simplest and block LDU is fastest; but both LU and block LDU may amplify ill-conditioning coming from, in particular, `I-C*B`. Note that it's likely that `C = B^T`.
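A minimal sketch of the block-LDU option (hypothetical helper; Julia 0.4-era `lufact`): only the Schur complement I-C*B needs factoring, and when C = B^T it is symmetric, so a symmetric factorization could halve the work.

# Solve [I B; C I]*[x1;x2] = [f1;f2] via the block LDU factorization above.
function blockldusolve(B,C,f1,f2)
    S  = I - C*B          # Schur complement; symmetric when C == B'
    g2 = f2 - C*f1        # forward substitution with [I 0; C I]
    x2 = lufact(S)\g2     # middle block solve
    x1 = f1 - B*x2        # back substitution with [I B; 0 I]
    x1, x2
end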
To me, this seems problematic, because unravelling the `dirichlettransform` is Toeplitz, and this will make it hard to write these as banded operators in the original basis (`JacobiWeight{Chebyshev}`).
Does it make sense to add `Hilbert`, `Cauchy` and `Stieltjes` in `JacobiWeight{ChebyshevDirichlet}` space for inverse-square-root singularities?
Many pass-through functions would be easier to implement via, e.g.,
cauchy(f,z,s...)=stieltjes(f,z,s...)/(-2π*im)
Right now we use definitions like
u(z)=1+cauchy(u,z)
but this would be better encapsulated as a space defined on ComplexPlane()\Γ. Then some of the singular integral operators could be phrased directly in terms of how they relate to analyticity. For example:
sp=AnalyticSpace(ComplexPlane()\Γ)
Restriction(true,sp,Γ) # is equivalent to Cauchy(true,Space(Γ))
ImagOperator(sp,Γ) # gives the imaginary part, can be expressed in terms of Hilbert I think
and it would have its own variant of linsolve.
'nuff sed.
It should check whether the domains of a 2D fun are the same: if so, it should call Hilbert; if not, it should call OffHilbert.
Should be a sum-of-products of `ProductFun`s with Log and Cauchy weights.
For kernels of convolution type, i.e. K(x,y) = K(y-x), there should be a cross-over point between the Chebyshev addition formula, with O(n) function evaluations and O(n^3) arithmetic, and the original `ProductFun` constructor, with O(n^2) function evaluations and O(n^2 log n) arithmetic via fast transforms. The cross-over point is TBD, but a 1D anti-diagonal expansion may help determine the size of the `ProductFun` via the original construction instead of calling an adaptive routine. The periodic cases, with O(n) function evaluations and O(n log n) arithmetic, are clear winners.
I opened up the Manipulate SIEs.ipynb and I'm getting this now:
Warning: New definition
zeros(Type{T<:String},Any...) at /Users/Mikael/.julia/v0.3/DataFrames/src/dataframe/dataframe.jl:805
is ambiguous with:
zeros(Type{T},Union(MultivariateDomain,MultivariateFunctionSpace)) at /Users/Mikael/.julia/v0.3/ApproxFun/src/Multivariate/Multivariate.jl:50.
To fix, define
zeros(Type{_<:String},Union(MultivariateDomain,MultivariateFunctionSpace))
before the new definition.
Higher-order singular integral operators arise by taking normal derivatives of fundamental solutions. A judicious combination of automatic or symbolic differentiation and differentiation of `Fun` representations may be useful in their accurate construction.
One could then resolve the fundamental solution as A*log|y-x| + B, take the normal derivative with respect to y, and obtain ∂A*log|y-x| + A*∂(log|y-x|) + ∂B. If CauchyWeight(1) were ∂(log|y-x|), this would be simplest. (Otherwise some other combinations and re-sampling would be required.)
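For reference (a standard fact, not package-specific): the normal derivative of the log kernel is ∂log|y-x|/∂n_y = n_y·(y-x)/|y-x|^2, the kernel of the double-layer potential, which is at worst weakly singular on smooth curves.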
This would be a good example for systems, and the fundamental solutions are known:
Should OffHilbert be multiplied by -π and renamed Stieltjes? This would match the function call stieltjes, which is now overwritten.