vetschn / bsparse

It'd better bsparse.
Home Page: https://vetschn.github.io/bsparse/
License: MIT License
To achieve better parallelization of our applications, it would be nice to include contiguous tensors in bsparse:
The first two dimensions allow any sparsity format (CSR, COO, DIA). The sparsity in the higher dimensions is constant with respect to the first two dimensions, which can be understood as multiple sparse matrices sharing the same sparsity pattern. As a result, the data array of CSR/COO becomes n-dimensional, while only one set of index pointers and indices is stored. Similarly, BSR would hold tensors instead of single 2D blocks. Access along the higher dimensions could be handled as A[i, j, k, ...], where i, j are the indices in the 2D sparse space and k is the index in the first higher dimension.
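The stacked-sparsity idea above can be sketched with plain scipy CSR index arrays. This is only an illustration of the proposal, not bsparse code; the names get and slice_k are hypothetical:

```python
import numpy as np
import scipy.sparse as sp

# Sketch: a "stacked" CSR where the 2D sparsity pattern is shared by all
# slices along a third dimension. indptr/indices are stored once; the
# data array becomes 2D with shape (nnz, k).
rng = np.random.default_rng(42)
template = sp.random(5, 5, density=0.3, format="csr", random_state=0)

k = 4  # number of stacked slices sharing the sparsity pattern
data = rng.random((template.nnz, k))  # one value per nonzero per slice

def get(i, j, kk):
    # A[i, j, kk]: locate (i, j) in the shared pattern, then index the
    # extra dimension of the data array.
    start, stop = template.indptr[i], template.indptr[i + 1]
    cols = template.indices[start:stop]
    hits = np.nonzero(cols == j)[0]
    if hits.size == 0:
        return 0.0  # structural zero
    return data[start + hits[0], kk]

def slice_k(kk):
    # The kk-th slice as an ordinary CSR matrix, reusing the shared
    # index arrays.
    return sp.csr_matrix((data[:, kk], template.indices, template.indptr),
                         shape=template.shape)
```

Because indptr and indices are shared, each additional slice only costs one extra column in data, which is what makes the layout contiguous and cheap to parallelize over.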
The individual blocks of the FEM matrices are ~99% sparse. After a quick and rough evaluation of the speed of solve and inv, as well as sparse --> dense conversions, the idea should be to use scipy.sparse.csr_array.
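As a minimal illustration of the trade-off being evaluated (this is not the bsparse benchmark itself, and the sizes/densities are made up), a sparse direct solve on a csr_array can be checked against a dense reference:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Illustrative only: a very sparse, well-conditioned system stored as
# scipy.sparse.csr_array, solved both sparsely and densely.
n = 200
rng = np.random.default_rng(0)
A = sp.random(n, n, density=0.01, format="csr", random_state=1)
A = sp.csr_array(A + 10 * sp.eye(n))  # shift makes it diagonally dominant
b = rng.random(n)

x_sparse = spsolve(A, b)                    # sparse direct solve
x_dense = np.linalg.solve(A.toarray(), b)   # dense reference
```

For blocks that are ~99% sparse, the sparse path also avoids materializing the dense block at all, which matters as much as the solve time itself.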
Right now there are a few things in here that require some refactoring. There is a bunch of duplicated code that could probably be moved, at least in part, to the base classes. Also, the tests are not structured as well as they could be.
There are a few mistakes/inconsistencies in the docstrings of various classes and methods. It would be nice to have some pictures demonstrating some of the algorithms, etc. It would also be worthwhile to include some examples (and check them with doctest!).
In bdia:~543, np.find_common_type() is deprecated as of NumPy 1.25.
See np.result_type() for the recommended replacement.
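The replacement is a near drop-in for the common case of promoting a pair of dtypes:

```python
import numpy as np

# np.find_common_type was deprecated in NumPy 1.25; np.result_type is the
# documented replacement for dtype promotion.
dt = np.result_type(np.float32, np.int64)
print(dt)  # float64
```

np.result_type also accepts arrays and scalars directly, so call sites that previously collected dtypes for find_common_type can often pass the operands themselves.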
Currently, the input sparray is converted to LIL format as a first step to allow slicing. The resulting data arrays are thus also in LIL format.
This is unexpected behavior and should be adjusted so that the sub-array slices are converted back to the original sparse format for all matrices.
It would be convenient to have a method on the BSparse types that converts all saved matrix blocks to dense matrices.
Add transpose (.T) and Hermitian conjugate (.H) to bsparse.
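For a block sparse matrix, both operations act on two levels: the block coordinates are swapped, and each block is itself transposed (and conjugated for .H). A minimal sketch using a plain dict of blocks as a stand-in for bsparse's internal storage (all names here are hypothetical):

```python
import numpy as np

# Stand-in storage: {(block_row, block_col): dense block}.
blocks = {
    (0, 0): np.array([[1 + 1j, 2], [3, 4]]),
    (0, 1): np.array([[5j, 6], [7, 8]]),
}

def transpose(blocks):
    # .T: swap block coordinates and transpose each block.
    return {(j, i): b.T for (i, j), b in blocks.items()}

def hermitian(blocks):
    # .H: swap block coordinates and conjugate-transpose each block.
    return {(j, i): b.conj().T for (i, j), b in blocks.items()}
```

Note that transposing only the block grid, or only the blocks, would be wrong: both swaps are needed for (A.T)[i, j] to equal A[j, i] elementwise.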
import numpy as np
from quasi import vbsr, vbdia

# Blocks overlap their neighbours by OFFSET rows/columns.
OFFSET = 2
BLOCKSIZES = [3, 2, 1, 2, 3]
BLOCKSIZES = [size + OFFSET for size in BLOCKSIZES]

vbdia.diag([np.random.rand(size, size) for size in BLOCKSIZES], overlap=OFFSET).show()

This gives unexpected behaviour. The problem is with the .from_spmatrix constructor.
Since transferring this issue here, almost everything has changed. The overlap functionality needs to be reimplemented.
The block sparse matrices we deal with usually exhibit some sort of symmetry across the main diagonal, i.e. they are symmetric/Hermitian or skew-symmetric/skew-Hermitian. By implementing a proper data structure, we can save a bunch of memory.
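One way such a data structure could work (a hedged sketch with hypothetical names, not a bsparse design): store only the blocks on or above the main diagonal and derive the lower ones on access, roughly halving the memory for off-diagonal blocks of a Hermitian matrix:

```python
import numpy as np

class HermitianBlocks:
    """Stores only upper-triangular blocks of a Hermitian block matrix."""

    def __init__(self, upper_blocks):
        # upper_blocks: dict {(i, j): block} with i <= j only.
        assert all(i <= j for (i, j) in upper_blocks)
        self._upper = upper_blocks

    def __getitem__(self, key):
        i, j = key
        if i <= j:
            return self._upper[(i, j)]
        # Lower block is the conjugate transpose of its mirror block.
        return self._upper[(j, i)].conj().T

B = HermitianBlocks({(0, 1): np.array([[1 + 2j, 0], [3, 4]])})
```

The skew variants would only change the sign in the mirror rule (return the negated conjugate transpose), so a single symmetry flag could cover all four cases.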
After a bit of evaluation it seems like C++ bindings using nanobind with a CMake build system are a sensible/straightforward option. The idea should be to realize the most demanding and repetitive computations (e.g. matrix multiplications and index computations) in the C++ back-end.
The bsparse interface should support writing matrices to disk in an efficient binary format.
The bsparse.diag constructor is pretty confusing. You should have the option to specify the row_sizes and col_sizes, or a uniform block_shape, or something.
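A sketch of what such a signature could look like, using a dense NumPy array as a stand-in for the sparse result (the keyword names mirror the issue; the implementation is purely illustrative, not bsparse's):

```python
import numpy as np

def diag(blocks, row_sizes=None, col_sizes=None, block_shape=None):
    # Callers pass either explicit per-block sizes or one uniform shape.
    if block_shape is not None:
        row_sizes = [block_shape[0]] * len(blocks)
        col_sizes = [block_shape[1]] * len(blocks)
    if row_sizes is None or col_sizes is None:
        raise ValueError("specify row_sizes/col_sizes or block_shape")
    # Place each block on the diagonal of the (dense stand-in) result.
    out = np.zeros((sum(row_sizes), sum(col_sizes)))
    r = c = 0
    for b, rs, cs in zip(blocks, row_sizes, col_sizes):
        out[r:r + rs, c:c + cs] = b
        r += rs
        c += cs
    return out
```

Making the two options mutually exclusive keyword arguments keeps the common uniform case a one-liner while still allowing ragged block sizes explicitly.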