BlockArrays for Julia
Home Page: http://juliaarrays.github.io/BlockArrays.jl/
License: Other
Is there support for BlockArrays where each block is either stored as a dense Array or is zero (and a significant fraction of blocks are zero)?
I have not seen any examples, and it is not obvious to me from the documentation whether this is possible. It is quite natural, though.
If we happen to have an upper/lower block triangular matrix, one would of course like matrix multiply and inversion / solving to respect this structure.
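One way to picture the requested structure is to store only the nonzero dense blocks and treat absent blocks as structural zeros. A minimal Base-only sketch (ZeroBlockStore and getblock are hypothetical names, not BlockArrays API):

```julia
# Hypothetical sketch (not BlockArrays API): keep only the nonzero dense
# blocks in a Dict and treat absent (i, j) keys as structural zero blocks.
struct ZeroBlockStore
    blocks::Dict{Tuple{Int,Int},Matrix{Float64}}  # only the nonzero blocks
    rowsz::Vector{Int}                            # row block sizes
    colsz::Vector{Int}                            # column block sizes
end

# Fetch block (i, j), materializing a zero block when it is absent.
getblock(S::ZeroBlockStore, i::Int, j::Int) =
    get(() -> zeros(S.rowsz[i], S.colsz[j]), S.blocks, (i, j))
```

A block triangular matrix would then simply never store blocks on one side of the diagonal, and multiplication/solves could skip absent blocks.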
I've always felt the design for block sizes is not very elegant, and the terminology is completely unintuitive (even after acting as primary maintainer for this package for several years). I believe a better design would move this information into the axes of an array. That is, we would have the following behaviour:
bs = [1,5,3] # block sizes
n = sum(bs) # total length of vector
x = BlockVector(randn(n), bs) # returns a Block vector with block-sizes [1,5,3]
a = axes(x,1)
a === BlockSlice(Base.OneTo(n), bs) # BlockSlice plays same role as `Slice`
length(a[Block(2)]) === 5 # returns the size of the second block
a[Block(2)][3] === 4 # returns the "global" index into `x` of the 3rd entry of the second block.
# This replaces blockindex2global:
x[a[Block(2)][3]] === x[4]
Are you planning to allow heterogeneous offsets along each block axis? That is to say, is this supposed to work or throw an error?
julia> mortar([[fill!(OffsetArray{Int}(undef, 0:2, 0:2), 0)] [fill!(OffsetArray{Int}(undef, 2:4, 0:2), 2)]])
1×2-blocked 3×6 BlockArray{Int64,2,Array{OffsetArray{Int64,2,Array{Int64,2}},2},Tuple{BlockedUnitRange{Array{Int64,1}},BlockedUnitRange{Array{Int64,1}}}}:
0 0 #undef │ #undef #undef #undef
0 0 #undef │ 2 2 #undef
#undef #undef #undef │ 2 2 #undef
(Note: there are #undefs due to #96 ATM.)
I'm wondering this as I have code assuming that offsets are uniform along the corresponding block axis. Also, I think our block style broadcasting assumes this.
I actually don't know a real usage of OffsetArrays as blocks. Maybe just leave it like this and wait until someone wants to use it?
Is there any reason not to put this package in JuliaArrays/BlockArrays.jl? It might help raise exposure. (There was a recent Gitter discussion in the JuliaDiffEq board where people were asking for block matrices, and hadn't thought to look for BlockArrays.jl. So there definitely is a desire for a maintained block array storage package.)
It would be very helpful if views of block arrays carried the block structure from the parent. The following is natural:
julia> A = BlockArray(randn(6),[2,2,2])
6-element BlockArray{Float64,1,Array{Float64,1}}:
0.25985848144089707
-1.7782541436619197
────────────────────
-0.38268612220791354
-1.3832603336446299
────────────────────
-0.5004743004082205
0.699585741966363
julia> view(A, 2:5) # should have block sizes [1,2,1]
4-element view(::BlockArray{Float64,1,Array{Float64,1}}, 2:5) with eltype Float64:
-1.7782541436619197
-0.38268612220791354
-1.3832603336446299
-0.5004743004082205
What's less clear is what to do if we drop blocks:
julia> view(A, 3:5) # block sizes [0,2,1] or [2,1] ?
3-element view(::BlockArray{Float64,1,Array{Float64,1}}, 3:5) with eltype Float64:
-0.38268612220791354
-1.3832603336446299
-0.5004743004082205
I would argue having 0 block-sizes is more useful, so that the blocks are consistent with the parent.
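The zero-block-size convention can be computed directly from the parent's block sizes. A Base-only sketch (view_blocksizes is a hypothetical helper, not part of the package):

```julia
# Hypothetical helper: block sizes of a contiguous view x[r] of a vector
# whose parent has block sizes bs, keeping empty (length-0) blocks so the
# view's blocks stay aligned with the parent's.
function view_blocksizes(bs::Vector{Int}, r::UnitRange{Int})
    ends = cumsum(bs)             # last global index of each block
    starts = ends .- bs .+ 1      # first global index of each block
    [max(0, min(last(r), e) - max(first(r), s) + 1) for (s, e) in zip(starts, ends)]
end
```

With the example above, view_blocksizes([2,2,2], 2:5) gives [1, 2, 1] and view_blocksizes([2,2,2], 3:5) gives [0, 2, 1].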
Not that I need offset arrays as blocks but I found this while playing with it:
julia> mortar([fill!(OffsetArray{Int}(undef, 0:2), 0), fill!(OffsetArray{Int}(undef, 0:2), 0)])
2-blocked 6-element BlockArray{Int64,1,Array{OffsetArray{Int64,1,Array{Int64,1}},1},Tuple{BlockedUnitRange{Array{Int64,1}}}}:
0
0
#undef
──────
0
0
#undef
Is it supposed to work?
It would be nice to be able to construct block arrays that are zeros.
A convention that could be copied from GPUArrays.jl is to override zeros, ones, and fill:
julia> zeros(CLArray{Float32}, 4, 4)
GPU: 4×4 Array{Float32,2}:
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
That is, support zeros(BlockArray{Float32}, 1:4, 1:4) to construct a 10 × 10 BlockMatrix.
I'm not that big a fan of this syntax, as it is semantically the same as asking for a Matrix{CLArray{Float32}} (any comments @SimonDanisch?).
An alternative would be to use zero instead of zeros, though then it would be inconsistent with GPUArrays.jl.
I'm enjoying using this package for data matrices that are naturally divided into blocks/groups, and found myself wanting to write things like this MWE:
A = BlockArray(rand(4, 4), [2,2], [1,1,2])
[sum(block) for block in eachblock(A)]
I expect the output to be a 2x3 matrix of the corresponding block sums. Is there a convenient syntax for this already?
If not, and if there is interest, I'd be happy to work on contributing this functionality! To start, here is a slightly-hacky version I'm using for myself:
eachblock(A::AbstractBlockArray) = (view(A, Block(Tuple(I))) for I in CartesianIndices(blocksize(A)))
but I think this won't work for non-standard block axes.
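For comparison, the same iteration can be written against a plain matrix plus explicit block sizes. A Base-only sketch (eachblock_views is a hypothetical name, not package API):

```julia
# Hypothetical sketch: a matrix of views of the blocks of a plain matrix,
# given explicit row and column block sizes.
function eachblock_views(A::AbstractMatrix, rowsz::Vector{Int}, colsz::Vector{Int})
    rends, cends = cumsum(rowsz), cumsum(colsz)
    rstarts, cstarts = rends .- rowsz .+ 1, cends .- colsz .+ 1
    [view(A, rstarts[i]:rends[i], cstarts[j]:cends[j])
     for i in eachindex(rowsz), j in eachindex(colsz)]
end

# Block sums come out as a 2×3 matrix, as in the MWE above.
blocksums = [sum(b) for b in eachblock_views(ones(4, 4), [2, 2], [1, 1, 2])]
```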
e.g. BlockDiagonal, BlockTridiagonal, BlockSymTridiagonal, BlockUpperTriangular, etc.
This came up at JuliaArrays/BlockDiagonals.jl#8 -- that package no longer extends BlockArrays, but it'd be good to move similar functionality here if there are efficiencies to be gained.
using BlockArrays
zeros() # error
Full error message and stacktrace:
0-dimensional Array{Float64,0}:
Error showing value of type Array{Float64,0}:
ERROR: MethodError: BlockIndex(::Tuple{}, ::Tuple{}) is ambiguous. Candidates:
(::Type{BlockIndex})(I::Tuple{Vararg{Int64,N}}, α::Tuple{Vararg{Int64,N}}) where N in BlockArrays at /home/cossio/.julia/packages/BlockArrays/s3c4b/src/blockindices.jl:108
(::Type{BlockIndex})(a::Tuple{Vararg{Block{1,T} where T,N}}, b::Tuple) where N in BlockArrays at /home/cossio/.julia/packages/BlockArrays/s3c4b/src/blockindices.jl:112
(::Type{BlockIndex})(I::Tuple{Vararg{Int64,N}}, α::Tuple{Vararg{Int64,M}}) where {M, N} in BlockArrays at /home/cossio/.julia/packages/BlockArrays/s3c4b/src/blockindices.jl:122
Possible fix, define
BlockIndex(::Tuple{}, ::Tuple{})
Stacktrace:
[1] BlockIndex(::Tuple{}) at /home/cossio/.julia/packages/BlockArrays/s3c4b/src/blockindices.jl:134
[2] getindex at /home/cossio/.julia/packages/BlockArrays/s3c4b/src/blockaxis.jl:7 [inlined]
[3] print_array at ./arrayshow.jl:308 [inlined]
[4] show(::IOContext{REPL.Terminals.TTYTerminal}, ::MIME{Symbol("text/plain")}, ::Array{Float64,0}) at ./arrayshow.jl:346
[5] display(::REPL.REPLDisplay, ::MIME{Symbol("text/plain")}, ::Any) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:137
[6] display(::REPL.REPLDisplay, ::Any) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:141
[7] display(::Any) at ./multimedia.jl:323
[8] #invokelatest#1 at ./essentials.jl:712 [inlined]
[9] invokelatest at ./essentials.jl:711 [inlined]
[10] print_response(::IO, ::Any, ::Bool, ::Bool, ::Any) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:161
[11] print_response(::REPL.AbstractREPL, ::Any, ::Bool, ::Bool) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:146
[12] (::REPL.var"#do_respond#38"{Bool,REPL.var"#48#57"{REPL.LineEditREPL,REPL.REPLHistoryProvider},REPL.LineEditREPL,REPL.LineEdit.Prompt})(::Any, ::Any, ::Any) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:729
[13] #invokelatest#1 at ./essentials.jl:712 [inlined]
[14] invokelatest at ./essentials.jl:711 [inlined]
[15] run_interface(::REPL.Terminals.TextTerminal, ::REPL.LineEdit.ModalInterface, ::REPL.LineEdit.MIState) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/LineEdit.jl:2354
[16] run_frontend(::REPL.LineEditREPL, ::REPL.REPLBackendRef) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:1055
[17] run_repl(::REPL.AbstractREPL, ::Any) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:206
[18] (::Base.var"#764#766"{Bool,Bool,Bool,Bool})(::Module) at ./client.jl:383
[19] #invokelatest#1 at ./essentials.jl:712 [inlined]
[20] invokelatest at ./essentials.jl:711 [inlined]
[21] run_main_repl(::Bool, ::Bool, ::Bool, ::Bool, ::Bool) at ./client.jl:367
[22] exec_options(::Base.JLOptions) at ./client.jl:305
[23] _start() at ./client.jl:484
In #61 (comment) I wrote
using BlockArrays
function blocks_to_vector(blocks)
    R = eltype(blocks)
    T = eltype(R)
    @assert R <: AbstractVector
    ba = BlockArray{T, 1, R}(undef_blocks, size.(blocks, 1))
    ba.blocks .= blocks
    return ba
end
using FillArrays
x = blocks_to_vector([Fill(111.0, 4), Fill(222.0, 3)])
to create a BlockVector from a vector of blocks.
I first thought this was awkward and I was missing an API to do this but after finding #30 (comment) I started to think that there was no API to do this. Is it correct? If so, does it make sense to add an API for this?
(Note: #40 looks somewhat relevant)
The following is a bug:
julia> using BlockArrays
julia> k = 2; N = 20;
julia> a = BlockArray(fill(zeros(2,2),N,1), k*ones(Int64,N), [k]);
julia> a[Block(1,1)] .= [0 1; 1 0];
julia> a
20×1-blocked 40×2 BlockArrays.BlockArray{Float64,2,Array{Float64,2}}:
 0.0  1.0
 1.0  0.0
 ────────
 0.0  1.0
 1.0  0.0
 ────────
 ⋮          (the same 2×2 block repeats for all 20 row blocks)
 ────────
 0.0  1.0
 1.0  0.0
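One likely cause (an assumption, not confirmed from the package internals) is aliasing: fill(zeros(2,2), N, 1) puts one and the same matrix in every cell, so if the constructor reuses those blocks, mutating one block mutates all of them. A Base-only demonstration of the aliasing itself:

```julia
# fill evaluates zeros(2, 2) once and reuses the result in every cell.
shared = fill(zeros(2, 2), 3)
shared[1][1, 1] = 9.0
# shared[2][1, 1] is now also 9.0: all three cells alias one matrix.

# A comprehension allocates an independent matrix per cell.
independent = [zeros(2, 2) for _ in 1:3]
independent[1][1, 1] = 9.0
# independent[2] is untouched.
```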
I was wondering whether there is interest in someone (me?) implementing operations under which block matrices are potentially closed (i.e. which perhaps return another block matrix), such as multiplication, inversion, transposition and certain factorisations, at the level of AbstractBlockMatrix and / or AbstractBlockVector. For example, it would be really nice if in the following
B = BlockArray(Matrix{Float64}, [2, 2], [2, 2])
# fill in the blocks with something.
A = B.'B
both A and B.' were also block matrices. At the moment these operations appear to return regular dense matrices.
This would be very useful for block matrices whose blocks have special structure, where the same structure pops out the other side of such operations (I'm really interested in being able to operate on block matrices whose blocks are circulant and have sensible things happen).
Can BlockArrays (currently) be used to define BlockDiagonalArrays in a straightforward way similar to Diagonal in LinearAlgebra?
Thanks!
Lots of places (e.g. nblocks, blocksize) require arguments of type Int (which is a concrete type) but they could all be Integer (an abstract type).
As suggested by the style guide:
don't declare an argument to be of type Int or Int32 if it really could be any integer, expressed with the abstract type Integer.
In practice this matters for this package in particular, because downstream packages implementing the AbstractBlockArray interface currently need equally specific argument types like Int to avoid method ambiguities.
Is there a way to have blocks of different types in versions >0.7? Something like,
1.0 1.0 1.0 │ 2 2
──────────────────┼──────────────
3//3 3//3 3//3 │ 4.0f0 4.0f0
3//3 3//3 3//3 │ 4.0f0 4.0f0
such that it may be used in dmbates/MixedModels.jl
This issue is the second step in a larger issue that could be called "Unify with ApproxFun.BlockBandedMatrix / ApproxFun.BandedBlockBandedMatrix".
In ApproxFun, we can do:
julia> A = ApproxFun.bbrand(Float64,1,1,[1,2,2],[2,2,1])
5×5 ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}}:
0.473387 0.415699 0.198946 0.423287 0.0
0.309483 0.894543 0.167148 0.997813 0.661132
0.51849 0.66597 0.272743 0.736918 0.866483
0.0 0.0 0.891839 0.523435 0.837514
0.0 0.0 0.670812 0.305726 0.14034
julia> A[Block.(1:2),Block.(1:2)]
3×4 Array{Float64,2}:
0.473387 0.415699 0.198946 0.423287
0.309483 0.894543 0.167148 0.997813
0.51849 0.66597 0.272743 0.736918
I'm not even sure how this would be done in BlockArrays, maybe A[Block(1:2,1:2)]?
BlockArrays can be initialized with zeros to store complex-valued sub-blocks, but with the second approach below the complex-valued sub-matrix is not placed correctly. Is there a better way to initialize a BlockArray with zeros that can store complex-valued blocks?
First approach (working):
M = BlockArray(spzeros(3*nbus,3*nbus)+spzeros(3*nbus,3*nbus)*im, repeat([3],nbus), repeat([3],nbus) )
setblock!(M, subM, 1, 2)
where subM is complex valued matrix.
Second approach:
M = BlockArray(fill(0+0*im,3*nbus,3*nbus), repeat([3],nbus), repeat([3],nbus) )
setblock!(M, subM, 1, 2)
The second approach gives the error below:
InexactError: Int64(Int64, -0.6983630623801634)
Stacktrace:
[1] Type at ./float.jl:703 [inlined]
[2] convert at ./number.jl:7 [inlined]
[3] Type at ./complex.jl:12 [inlined]
[4] Type at ./complex.jl:36 [inlined]
[5] convert at ./number.jl:7 [inlined]
[6] setindex! at ./array.jl:769 [inlined]
[7] copyto!(::IndexLinear, ::Array{Complex{Int64},2}, ::IndexLinear, ::Array{Complex{Float64},2}) at ./abstractarray.jl:731
[8] copyto! at ./abstractarray.jl:723 [inlined]
[9] Type at ./array.jl:497 [inlined]
[10] convert at ./array.jl:489 [inlined]
[11] setindex! at ./array.jl:771 [inlined]
[12] setblock!(::BlockArray{Complex{Int64},2,Array{Array{Complex{Int64},2},2},BlockArrays.BlockSizes{2,Tuple{Array{Int64,1},Array{Int64,1}}}}, ::Array{Complex{Float64},2}, ::Int64, ::Int64) at
[13] top-level scope at In[33]:1
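The error is consistent with the element type of the second approach: 0+0*im is a Complex{Int64} literal, so the resulting array cannot store non-integer complex values, and assigning subM triggers the InexactError. A Base-only illustration (independent of BlockArrays):

```julia
# 0 + 0im has integer components, so fill produces Complex{Int64} storage;
# assigning a non-integer complex value to it would throw an InexactError.
int_storage = fill(0 + 0im, 2, 2)

# A float literal gives Complex{Float64} storage, which can hold any value.
float_storage = fill(0.0 + 0.0im, 2, 2)
float_storage[1, 1] = -0.69 + 0.1im   # fine here
```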
For example:
julia> Array(BlockArray(spzeros(5), [1, 4]))
5-element SparseVector{Float64,Int64} with 0 stored entries
This also applies to SparseMatrixCSC.
Maybe rename what the current Array(::BlockArray) does to unblock or something, and then define Array(A::BlockArray) = Array(unblock(A))?
At the moment nblocks plays the block-wise role of size, but there is no analogue of length. Perhaps the following would be reasonable:
Rename nblocks to blocksize and have it work exactly like size: blocksize(A) returns a tuple while blocksize(A,k) is equivalent to blocksize(A)[k]. Note that blocksizes(A) would be left as is.
Add blocklength, which is equivalent to prod(blocksize(A)).
If A isa BlockMatrix, linear block indexing A[Block(5)] would work provided 5 ≤ blocklength(A).
They can be removed with @inbounds, but we should see if they can be made less expensive.
The scalar elements on the diagonals of arrays can be accessed using diagind (https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/#LinearAlgebra.diagind) to get the AbstractRange of an entire diagonal and operate on it (e.g. doing D[diagind( D, -1 )] .= 1 to set all values on the diagonal 1 below the main to 1). That also works on a BlockArray for accessing the scalar elements on a diagonal. It would be nice to have a similar function for accessing the block diagonal elements (e.g. accessing entire blocks on a diagonal) instead of just the scalar elements.
Example of desired behavior:
julia> D = BlockArray( randn(4,4), [2, 2], [2, 2] )
2×2-blocked 4×4 BlockArray{Float64,2}:
0.456552 0.508835 │ -1.36404 0.0981592
0.579405 1.1095 │ -0.171994 1.02463
──────────────────────┼───────────────────────
0.189181 -0.593364 │ 0.931672 -0.648214
-0.414329 1.02497 │ -0.990573 0.154431
julia> D[diagind(D,-1)]
3-element Array{Float64,1}:
0.5794046395673269
-0.5933636927794533
-0.9905727125760019
julia> D[blockdiagind(D,-1)]
2×2 Array{Float64,2}:
0.189181 -0.593364
-0.414329 1.02497
Having this available will simplify some operations such as:
# Create a NxN block matrix with identity matrices of size nx x nx on the diagonal
otherDiag = kron( I(N), to_matrix( mt, I, nx ) )
D = zeros( mt, N, N )
D[diagind( D, -1 )] .= 1
# Populate the block diagonal 1 below the main block diagonal with the nx x nx matrix sys.A
otherDiag = otherDiag + kron( D, -sys.A )
could become
# Create a NxN block matrix with identity matrices of size nx x nx on the diagonal
otherDiag = kron( I(N), to_matrix( mt, I, nx ) )
# Populate the block diagonal 1 below the main block diagonal with the nx x nx matrix sys.A
otherDiag[blockdiagind( otherDiag, -1 )] .= -sys.A
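The requested blockdiagind can be prototyped from the block sizes alone. A Base-only sketch (blockdiag_ranges is a hypothetical name) returning the (row-range, column-range) pair of each block on the k-th block diagonal, with k's sign matching diagind (k = 0 main, k = -1 one below):

```julia
# Hypothetical sketch: global index ranges of the blocks on the k-th
# block diagonal of a matrix with the given row/column block sizes.
function blockdiag_ranges(rowsz::Vector{Int}, colsz::Vector{Int}, k::Int)
    rends, cends = cumsum(rowsz), cumsum(colsz)
    rstarts, cstarts = rends .- rowsz .+ 1, cends .- colsz .+ 1
    [(rstarts[i]:rends[i], cstarts[i + k]:cends[i + k])
     for i in eachindex(rowsz) if 1 <= i + k <= length(colsz)]
end

# The motivating example could then loop (assuming names from above):
#   for (r, c) in blockdiag_ranges(rowsz, colsz, -1)
#       otherDiag[r, c] .= -sys.A
#   end
```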
Does BlockArrays currently support constructing e.g. a BlockVector from a vector of AbstractVectors? i.e.
blocks = [randn(3), randn(4)]
BlockVector(blocks)
These have in common that they use ntuple with a function.
globalrange is allocating:
julia> import BlockArrays: BlockSizes, globalrange
julia> block_size = BlockSizes([1,2,3], [2, 3])
[1,2,3]×[2,3]
julia> f(block_size, i, j) = for q in 1:10^5 globalrange(block_size, i,j) end
f (generic function with 1 method)
# Compile
julia> @time f(block_size, 1, 1)
0.007925 seconds (200.00 k allocations: 7.630 MB, 59.43% gc time)
global2blockindex is allocating:
julia> import BlockArrays: BlockSizes, global2blockindex
julia> block_size = BlockSizes([1,2,3], [2, 3])
[1,2,3]×[2,3]
julia> f(block_size, i, j) = for q in 1:10^5 global2blockindex(block_size, i, j) end
f (generic function with 1 method)
# Compile
julia> @time f(block_size, 1, 1)
0.026259 seconds (300.00 k allocations: 10.681 MB, 18.26% gc time)
blockindex2global is allocating:
julia> import BlockArrays: BlockSizes, BlockIndex, blockindex2global
julia> block_size = BlockSizes([1,2,3], [2, 3])
[1,2,3]×[2,3]
julia> block_index = BlockIndex((2,1), (2,1))
BlockArrays.BlockIndex{2}((2,1),(2,1))
julia> f(block_size, block_index) = for q in 1:10^5 blockindex2global(block_size, block_index) end
f (generic function with 1 method)
# Compile
julia> @time f(block_size, block_index)
0.002784 seconds (100.00 k allocations: 4.578 MB)
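The allocations are consistent with calling ntuple with a runtime length, in which case the result's length (and thus its concrete tuple type) is not inferable; the usual fix is the Val form. A minimal illustration (not the package's actual code):

```julia
# With a runtime n, the tuple length is unknown to the compiler and the
# result is boxed; with Val(N), the length is part of the type and the
# call can be fully inferred.
runtime_tuple(n) = ntuple(i -> i^2, n)
static_tuple(::Val{N}) where {N} = ntuple(i -> i^2, Val(N))
```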
Hi,
I noticed the following is quite slow
using LinearAlgebra, SparseArrays, BlockArrays
M = 30
N = 500
J = BlockArray(spzeros(M * N, M * N), N * ones(Int64,M), N * ones(Int64,M))
for ii=2:M
    setblock!(J, sprand(N,N,1/N), ii, ii)
    setblock!(J, sprand(N,N,1/N), ii, ii-1)
end
J1 = @time sparse(J) # 5 seconds
whereas
function createSP(J::BlockArray)
    nl, nc = size(J.blocks)
    N = size(J[Block(1,1)])[1]
    res = spzeros(N, N*nc)
    for i=1:nl
        line = J[Block(i,1)]
        for j=2:nc
            line = hcat(line, J[Block(i,j)])
        end
        res = vcat(res, line)
    end
    return res[N+1:end, :]
end
Jb = @time createSP(J)
takes 0.23s.
Would it make sense to include something like createSP in your package?
Thank you,
Best regards
The problem is not resolved by defining Base.convert(::Type{Symmetric}, x::Array{Float64, 2}).
I believe the problem is the non-commutativity of Symmetric and BlockArray methods.
Specifically, I think the sub-block types are being defined as Symmetric in this line, and then setindex! complains when it tries to assign an Array{T, 2} to the blocks.
At least that's my educated guess. Folks more familiar with what those functions do will know more.
julia> using BlockArrays, LinearAlgebra
julia> Symmetric(BlockArray(rand(5, 5), [3, 2], [2, 3]))
5×5 Symmetric{Float64,BlockArray{Float64,2,Array{Array{Float64,2},2},Tuple{BlockedUnitRange{Array{Int64,1}},BlockedUnitRange{Array{Int64,1}}}}} with indices 1:1:5×1:1:5:
0.80611 0.980725 │ 0.396233 0.218845 0.126326
0.980725 0.519793 │ 0.370195 0.88799 0.996251
────────────────────┼──────────────────────────────
0.396233 0.370195 │ 0.344366 0.171352 0.478706
0.218845 0.88799 │ 0.171352 0.789426 0.58824
0.126326 0.996251 │ 0.478706 0.58824 0.565382
julia> BlockArray(Symmetric(rand(5, 5)), [3, 2], [2, 3])
ERROR: MethodError: Cannot `convert` an object of type Array{Float64,2} to an object of type Symmetric{Float64,Array{Float64,2}}
Closest candidates are:
convert(::Type{#s664} where #s664<:Symmetric, ::Union{Hermitian, Symmetric}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/LinearAlgebra/src/symmetric.jl:193
convert(::Type{T}, ::T) where T<:AbstractArray at abstractarray.jl:14
convert(::Type{T}, ::Factorization) where T<:AbstractArray at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/LinearAlgebra/src/factorization.jl:55
...
Stacktrace:
[1] setindex! at ./array.jl:828 [inlined]
[2] setblock! at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:400 [inlined]
[3] setindex! at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/abstractblockarray.jl:203 [inlined]
[4] macro expansion at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:184 [inlined]
[5] macro expansion at ./cartesian.jl:64 [inlined]
[6] macro expansion at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:181 [inlined]
[7] BlockArray{Float64,N,R,BS} where BS<:Tuple{Vararg{AbstractUnitRange{Int64},N}} where R<:(AbstractArray{#s13,N} where #s13<:AbstractArray{Float64,N}) where N(::Symmetric{Float64,Array{Float64,2}}, ::Tuple{BlockedUnitRange{Array{Int64,1}},BlockedUnitRange{Array{Int64,1}}}) at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:179
[8] BlockArray{Float64,N,R,BS} where BS<:Tuple{Vararg{AbstractUnitRange{Int64},N}} where R<:(AbstractArray{#s13,N} where #s13<:AbstractArray{Float64,N}) where N(::Symmetric{Float64,Array{Float64,2}}, ::Array{Int64,1}, ::Array{Int64,1}) at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:172
[9] BlockArray(::Symmetric{Float64,Array{Float64,2}}, ::Array{Int64,1}, ::Array{Int64,1}) at /Users/glenn/.julia/packages/BlockArrays/Iqf2f/src/blockarray.jl:175
[10] top-level scope at REPL[199]:1
[11] eval(::Module, ::Any) at ./boot.jl:331
[12] eval_user_input(::Any, ::REPL.REPLBackend) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:86
[13] run_backend(::REPL.REPLBackend) at /Users/glenn/.julia/packages/Revise/tV8FE/src/Revise.jl:1165
[14] top-level scope at none:0
Most of the computation in fitting models with the MixedModels package occurs in two block matrices, A and L. Given a value of a parameter vector, the array L is updated from information stored in A and then converted to the lower Cholesky factor, from which the objective is evaluated. During the process of optimizing the objective with respect to the parameters, this process can be repeated thousands of times.
The block structure and block sizes of A and L are the same. Both the blocking structure and the overall sizes of these block matrices are square. A is symmetric and L is lower triangular.
The efficiency of the method derives from the fact that the [1, 1] block in both matrices is always the largest block and is either diagonal or, at most, block diagonal with small, square, similarly-sized diagonal blocks.
At present I am using a funky, home-baked block matrix representation and I would like to use the capabilities of BlockArrays instead. It seems that if I define these as
A = BlockArray(AbstractMatrix{T}, sz, sz)
L = similar(A)
for j in 1:nt, i in j:nt
    Aij = densify(trms[i]'trms[j])
    setblock!(A, Aij, i, j)
    setblock!(L, deepcopy(Aij), i, j)
end
where sz is a vector of block sizes and nt is the number of blocks in each dimension, the [1, 1] block ends up being converted from Diagonal to Matrix.
Is this a requirement of the BlockMatrix type? That is, are the blocks in a BlockMatrix required to be homogeneous in actual type?
If not, could someone give me a hint of how I could avoid the conversion of blocks that are Diagonal or SparseMatrixCSC to full storage?
The alternative is to define my own block matrix type as a subtype of AbstractBlockMatrix. I haven't yet been successful doing that.
For consistency with Array, the following variants should also work:
julia> A = BlockArray{Float64}(randn(6,6), 1:3, 1:3)
ERROR: MethodError: no method matching BlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::Array{Float64,2}, ::UnitRange{Int64}, ::UnitRange{Int64})
Closest candidates are:
BlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::BlockArrays.UndefBlocksInitializer, ::AbstractArray{Int64,1}...) where {T, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:120
BlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::UndefInitializer, ::AbstractArray{Int64,1}...) where {T, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:163
BlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::AbstractArray{T2,N}) where {T1, T2, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:204
Stacktrace:
[1] top-level scope at none:0
julia> A = BlockArray(randn(6,6), (1:3,1:3))
ERROR: MethodError: no method matching BlockArray(::Array{Float64,2}, ::Tuple{UnitRange{Int64},UnitRange{Int64}})
Closest candidates are:
BlockArray(::AbstractArray{T,N}) where {T, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:205
BlockArray(::AbstractArray{T,N}, ::AbstractArray{Int64,1}...) where {T, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:175
BlockArray(::AbstractArray{T,N}, ::BlockArrays.BlockSizes{N}) where {T, N} at /Users/sheehanolver/Documents/Coding/lazybandedmatrices/dev/BlockArrays/src/blockarray.jl:184
Stacktrace:
[1] top-level scope at none:0
I just answered a question on StackOverflow where the OP was not aware that this package supports other block structures than 2x2. I think that indicates that using more diverse examples would be good.
In convolutional neural networks the block moves in step sizes (overlaps). It would be a useful feature to have the option to split a matrix into overlapping blocks.
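As a sketch of the requested behaviour on a plain vector (overlapping_windows and hop are hypothetical names; hop is the step between consecutive block starts):

```julia
# Hypothetical sketch: length-w views of x whose start indices advance by
# `hop`, so consecutive blocks overlap when hop < w (as in a convolution).
overlapping_windows(x::AbstractVector, w::Int, hop::Int) =
    [view(x, i:i + w - 1) for i in 1:hop:length(x) - w + 1]
```

For example, overlapping_windows(collect(1:6), 4, 2) gives the windows [1,2,3,4] and [3,4,5,6]; the 2-D analogue would slide a block over both axes.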
For a matrix M, we have M[1, 1] == M[1]. However, similar indexing with Block seems to be broken:
julia> A = PseudoBlockArray(rand(2,3), [1,1], [2,1])
2×2-blocked 2×3 PseudoBlockArray{Float64,2}:
0.969606 0.569278 │ 0.767362
────────────────────┼──────────
0.331377 0.40404 │ 0.549861
julia> A[Block(1)]
1-element Array{Float64,1}:
0.9696062405354238
julia> A[Block(1), Block(1)]
1×2 view(::Array{Float64,2}, 1:1, 1:2) with eltype Float64:
0.969606 0.569278
I expect A[Block(1)] and A[Block(1), Block(1)] above to return equivalent matrices. (Or throwing with A[Block(1)] might be OK.)
This is also true with BlockArray:
julia> bs = permutedims(reshape([
1ones(1, 3), 2ones(1, 2),
3ones(2, 3), 4ones(2, 2),
], (2, 2)))
2×2 Array{Array{Float64,2},2}:
[1.0 1.0 1.0] [2.0 2.0]
[3.0 3.0 3.0; 3.0 3.0 3.0] [4.0 4.0; 4.0 4.0]
julia> a = mortar(bs)
2×2-blocked 3×5 BlockArray{Float64,2}:
1.0 1.0 1.0 │ 2.0 2.0
───────────────┼──────────
3.0 3.0 3.0 │ 4.0 4.0
3.0 3.0 3.0 │ 4.0 4.0
julia> a[Block(1), Block(1)]
1×3 Array{Float64,2}:
1.0 1.0 1.0
julia> a[Block(1)]
1-element Array{Float64,1}:
1.0
julia> using BlockArrays
julia> Base.OneTo.(1:5)
5-element BlockRange{1,Tuple{UnitRange{Int64}}}:
Block{1,Int64}((1,))
Block{1,Int64}((2,))
Block{1,Int64}((3,))
Block{1,Int64}((4,))
Block{1,Int64}((5,))
The offending line is BlockArrays.jl/src/blockindices.jl line 288 (in 3addf8f), as typeof(Block) === UnionAll.
Given some BlockArray X, is there a publicly-supported way of getting the dth dimension of the size of each of the blocks? i.e.
julia> X = BlockArray(rand(4, 4), [2,2], [1,1,2])
2×3-blocked 4×4 BlockArray{Float64,2}:
0.673816 │ 0.530268 │ 0.893348 0.858635
0.92715 │ 0.400304 │ 0.373281 0.0963528
──────────┼────────────┼─────────────────────
0.339071 │ 0.189917 │ 0.403385 0.174391
0.440597 │ 0.547694 │ 0.163256 0.565782
# Example of the kind of behaviour I'm after, d = 1.
julia> block_sizes(X, 1)
2-element Array{Int64,1}:
2
2
# Example of the kind of behaviour I'm after, d = 2.
julia> block_sizes(X, 2)
3-element Array{Int64,1}:
1
1
2
This issue stems from the cumulsizes function disappearing at either version 0.11 or 0.12, which is preventing me from allowing Stheno.jl to upgrade from 0.10. Ideally I would refactor my code to depend on BlockArrays' new API, but I've only got time to patch for now, and the above functionality would suffice.
At the moment BlockArray(arr::AbstractArray{T,N},..) creates a BlockArray{T,N,typeof(arr)}. However, for many types subarrays are not of the same type (e.g., Diagonal). I think it is safer to always convert to Array.
The one argument I can see is that it might be nice for BlockArray(::SparseVector) to automatically use sparse data storage.
Another option is to look into whether the type of sub-blocks can be inferred automatically.
I think the fact that block_size[k] gives the cumulative block size is confusing: semantically, I think it should return the actual block size.
Perhaps blockstart(block_size, k) is a reasonable name for the current block_size[k]?
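For discussion, the relation between the two quantities is simple. A Base-only sketch over a vector of per-block (non-cumulative) sizes bs (both helper names are hypothetical):

```julia
# Hypothetical helpers over a vector of per-block sizes bs:
# blockstart(bs, k) is the first global index of block k,
# blockstop(bs, k) the last (i.e. the cumulative size up to block k).
blockstart(bs::Vector{Int}, k::Int) = k == 1 ? 1 : cumsum(bs)[k - 1] + 1
blockstop(bs::Vector{Int}, k::Int) = cumsum(bs)[k]
```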
I think these should work:
julia> BlockArray{Float64}(1:10,1:10)
ERROR: MethodError: no method matching BlockArrays.BlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::UnitRange{Int64}, ::UnitRange{Int64})
julia> PseudoBlockArray{Float64}(1:10,1:10)
ERROR: MethodError: no method matching BlockArrays.PseudoBlockArray{Float64,N,R} where R<:AbstractArray{Float64,N} where N(::UnitRange{Int64}, ::UnitRange{Int64})
In the first case, I suppose it should allocate the blocks.
It's sometimes useful to have multiple layers of blocks, see conversation in
I think one way to do this would be to support some sort of BlockView that converts an AbstractBlockVector{T} into an AbstractVector{Vector{T}}:
struct BlockView{AT, p, BA <: AbstractBlockArray} <: AbstractArray{AT, p}
    data::BA
end
BlockView(data::BA) where {T, p, BA <: AbstractBlockArray{T, p}} = BlockView{Array{T, p}, p, BA}(data)
Base.getindex(A::BlockView, i...) = A.data[Block(i...)]  # delegate to the parent's blocks
Base.size(A::BlockView) = nblocks(A.data)
Then one can create a hierarchical block array view (here we have sub-blocks of length 10 and super-blocks of length 5):
A_sub = PseudoBlockArray(randn(10*5*n), fill(10,5n))
A = PseudoBlockArray(BlockView(A_sub), fill(5,n))
A[Block(2)][3] # returns the 3rd sub-block of the 2nd super-block
I'm still working out how to get it to work with A[Block(2)][Block(3)]
...
I'd propose that standard Arrays act like block arrays with one big block.
I believe there just needs to be another Base.reindex
override, but the block structure of the views should be preserved as much as possible. (I need this in BlockBandedMatrices.jl to dispatch correctly to BLAS calls.)
julia> A = BlockArray(randn(6,6), 1:3, 1:3);
julia> V = view(A, Block(2,2));
julia> view(V, 1:1,1:2) === view(A, Block(2)[1:1], Block(2)[1:2]) # should return true
false
I don't think there is need for a new type, just a Bool indicating whether the sizes are the same. Scalar indexing could then be made faster, since you wouldn't have to search the way it is done now.
Hi there, when I ran the following code:
design = zeros(Int16,6,9);
BlockArray(design,[6],[4,5]);
I have an error: ERROR: BoundsError: attempt to access 2-element Array{Int64,1} at index [3].
Could you please have a look at it? My guess is that the current package does not support a single-row block? Thanks in advance!
This error seems unnecessary:
julia> mortar([[1,2] [3,4]])
ERROR: All blocks must have ndims consistent with ndims = 2 of `blocks` array.
Stacktrace:
[1] sizes_from_blocks(::Array{Int64,2}) at /Users/solver/Projects/BlockArrays.jl/src/blockarray.jl:241
[2] mortar(::Array{Int64,2}) at /Users/solver/Projects/BlockArrays.jl/src/blockarray.jl:234
[3] top-level scope at none:0
The following should work. I think it just needs another reindex method.
julia> using BlockArrays
julia> A = BlockArray{Float64}(uninitialized, 1:3, 1:3);
julia> V = view(A, Block(1):Block(2), Block(1):Block(2));
julia> view(V, Block(1), Block(1))
ERROR: type SubArray has no field block_sizes
Stacktrace:
[1] to_indices at /Users/solver/.julia/v0.6/BlockArrays/src/views.jl:82 [inlined]
[2] view(::SubArray{Float64,2,BlockArrays.BlockArray{Float64,2,Array{Float64,2}},Tuple{BlockArrays.BlockSlice{BlockArrays.BlockRange{1,Tuple{UnitRange{Int64}}}},BlockArrays.BlockSlice{BlockArrays.BlockRange{1,Tuple{UnitRange{Int64}}}}},false}, ::BlockArrays.Block{1,Int64}, ::Vararg{BlockArrays.Block{1,Int64},N} where N) at ./subarray.jl:112
These would be useful to convert between the two formats.
@SimonDanisch @andyferris @mbauman Is it possible to increase my privileges on JuliaArrays?
At the moment I'm not able to add attobot support or delete tags on this package or FillArrays.jl (for the latter I just tagged a new release thinking attobot was working, and I can't delete it).
Currently, the Kronecker product (the kron function) of a BlockMatrix does not take the block structure of the matrix into account.
For a BlockMatrix, the classical definition of the Kronecker product can be replaced by the Khatri–Rao product or the Tracy–Singh product (https://en.wikipedia.org/wiki/Kronecker_product).
I would find it very interesting to add these functions to your very useful library.
My implementation of the Khatri–Rao product:
function kron_khatri_rao(A::AbstractBlockMatrix, B::AbstractBlockMatrix)
    # Khatri–Rao product: blockwise Kronecker product of two
    # conformally partitioned block matrices.
    # https://en.wikipedia.org/wiki/Kronecker_product
    Ablksize = blocksize(A)
    Bblksize = blocksize(B)  # was blocksize(A); A and B must share the block partition
    @assert Ablksize == Bblksize
    kblk = []
    for iblk in 1:Ablksize[1]
        kblk_j = []
        for jblk in 1:Ablksize[2]
            Ablk = A[Block(iblk, jblk)]
            Bblk = B[Block(iblk, jblk)]
            push!(kblk_j, kron(Ablk, Bblk))
        end
        push!(kblk, tuple(kblk_j...))
    end
    mortar(kblk...)
end
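The blockwise definition can be checked without BlockArrays by storing the blocks in a plain Matrix of Matrix and assembling the result with hvcat; khatri_rao_dense is a purely illustrative name:

```julia
using LinearAlgebra  # for kron

# Blockwise Kronecker (Khatri–Rao) product on a plain matrix-of-blocks.
function khatri_rao_dense(Ab::Matrix{<:Matrix}, Bb::Matrix{<:Matrix})
    @assert size(Ab) == size(Bb)  # same block partition
    Kb = [kron(Ab[i, j], Bb[i, j]) for i in 1:size(Ab, 1), j in 1:size(Ab, 2)]
    # hvcat wants blocks in row-major order; permutedims handles that.
    return hvcat(Tuple(fill(size(Kb, 2), size(Kb, 1))), permutedims(Kb)...)
end

rows = [1, 2]; cols = [1, 2]
Ab = [ones(rows[i], cols[j]) for i in 1:2, j in 1:2]
Bb = [fill(2.0, rows[i], cols[j]) for i in 1:2, j in 1:2]
K = khatri_rao_dense(Ab, Bb)
# block sizes get squared: result is (1+4) × (1+4), every entry 1*2 = 2
```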
This issue is the first step in a larger issue that could be called "Unify with ApproxFun.BlockBandedMatrix / ApproxFun.BandedBlockBandedMatrix".
In ApproxFun I need to work with views of blocks, and in particular exploit the fact that these views are equivalent to StridedMatrix, that is, they are compatible with LAPACK routines. I also need to work with slices of blocks.
Below I've attached some of the behaviour of ApproxFun.BlockBandedMatrix. Note that it is closer in nature to PseudoBlockArray in that memory is stored contiguously (though the current ordering of the memory is likely to change).
julia> using ApproxFun; import ApproxFun: Block
julia> A = ApproxFun.bbrand(Float64,1,1,[2,3],[4,1]) # random block banded matrix, the "1"s indicate the bandwidth, while the [2,3] indicates the row block sizes, and the [4,1] indicates the column block sizes
5×5 ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}}:
0.266809 0.677622 0.229719 0.947088 0.970988
0.128803 0.125624 0.107945 0.226405 0.0620217
0.384811 0.246323 0.410468 0.577572 0.147354
0.0765648 0.474993 0.199023 0.486716 0.14736
0.949332 0.37327 0.0650809 0.0692731 0.223807
julia> A[Block(1),Block(1)] # Analogous to your A[Block(1,1)]
2×4 Array{Float64,2}:
0.266809 0.677622 0.229719 0.947088
0.128803 0.125624 0.107945 0.226405
julia> S = view(A,Block(1),Block(2)) # create a view without copying. The implementation is a bit of a hack, but mimics the behaviour of Base.view with Colon
2×1 SubArray{Float64,2,ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}},Tuple{ApproxFun.Block,ApproxFun.Block},false}:
0.970988
0.0620217
julia> S[1] = 6
6
julia> A
5×5 ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}}:
0.266809 0.677622 0.229719 0.947088 6.0
0.128803 0.125624 0.107945 0.226405 0.0620217
0.384811 0.246323 0.410468 0.577572 0.147354
0.0765648 0.474993 0.199023 0.486716 0.14736
0.949332 0.37327 0.0650809 0.0692731 0.223807
julia> S_sub = view(A,Block(1)[1:2],Block(1)[2:3]) # create a view of a sub-block of a block, that is, take the 1:2 x 2:3 entries of Block(1,1).
2×2 SubArray{Float64,2,ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}},Tuple{ApproxFun.SubBlock{UnitRange{Int64}},ApproxFun.SubBlock{UnitRange{Int64}}},false}:
0.677622 0.229719
0.125624 0.107945
julia> S_sub[1,1] = 2
2
julia> A
5×5 ApproxFun.BlockBandedMatrix{Float64,Array{Int64,1},Array{Int64,1}}:
0.266809 2.0 0.229719 0.947088 6.0
0.128803 0.125624 0.107945 0.226405 0.0620217
0.384811 0.246323 0.410468 0.577572 0.147354
0.0765648 0.474993 0.199023 0.486716 0.14736
0.949332 0.37327 0.0650809 0.0692731 0.223807
For normal arrays, fill! (and other operations accessing single values, of course) works on undef-initialized values:
julia> A = Array{Some}(undef, 3, 3)
3×3 Array{Some,2}:
#undef #undef #undef
#undef #undef #undef
#undef #undef #undef
julia> fill!(A, Some(1))
3×3 Array{Some,2}:
Some(1) Some(1) Some(1)
Some(1) Some(1) Some(1)
Some(1) Some(1) Some(1)
While for a BlockArray initialized with undef_blocks, it fails:
julia> B = BlockMatrix{Some{Int}}(undef_blocks, [2,1], [2,1])
2×2-blocked 3×3 BlockArray{Some{Int64},2}:
#undef #undef │ #undef
#undef #undef │ #undef
────────────────┼────────
#undef #undef │ #undef
julia> fill!(B, Some(1))
ERROR: UndefRefError: access to undefined reference
Stacktrace:
[1] getindex at ./array.jl:731 [inlined]
[2] iterate at ./array.jl:707 [inlined] (repeats 2 times)
[3] fill!(::BlockArray{Some{Int64},2,Array{Array{Some{Int64},2},2},BlockArrays.BlockSizes{2,Array{Int64,1}}}, ::Some{Int64}) at /home/philipp/.julia/packages/BlockArrays/UcMIS/src/blockarray.jl:402
[4] top-level scope at none:0
I can see where that's coming from, but it's counterintuitive when viewing the block array as the complete array of scalars, I find. Maybe something could be done about this; at the very least it should be documented better.
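One possible fix would be for fill! to assign whole blocks instead of iterating scalars. A sketch using a plain Matrix of Matrix as a stand-in for the undef_blocks storage (the real fix would live inside BlockArrays):

```julia
# Blockwise fill on undef block storage: allocating each block outright
# never touches the undefined references that scalar iteration hits.
rows = [2, 1]; cols = [2, 1]
blocks = Matrix{Matrix{Int}}(undef, 2, 2)     # all entries #undef, like undef_blocks
for j in 1:2, i in 1:2
    blocks[i, j] = fill(1, rows[i], cols[j])  # assign a whole block at once
end
```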
It is sometimes convenient to do arithmetic with Blocks, so I would propose that at least some of the following work:
Block(1) < Block(2) # returns true
Block(1) + Block(2) # returns Block(3)
Block(1) + 2 # returns Block(3), or maybe undefined?
Block(1,1) + Block(2,2) # returns Block(3,3)
@KristofferC Any objections / thoughts?
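The proposal is small enough to sketch on a hypothetical stand-in type Blk (the actual change would define these methods on BlockArrays.Block):

```julia
# Minimal stand-in for Block, just to show the proposed methods.
struct Blk{N}
    n::NTuple{N, Int}
end
Blk(n::Int...) = Blk(n)

Base.:+(a::Blk{N}, b::Blk{N}) where N = Blk(a.n .+ b.n)
Base.:+(a::Blk{1}, k::Int) = Blk(a.n[1] + k)   # the "maybe undefined?" case
Base.isless(a::Blk{1}, b::Blk{1}) = a.n[1] < b.n[1]

@assert Blk(1) < Blk(2)
@assert Blk(1) + Blk(2) == Blk(3)
@assert Blk(1) + 2 == Blk(3)
@assert Blk(1, 1) + Blk(2, 2) == Blk(3, 3)
```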