bwohlberg / sporco

257 stars · 16 watchers · 37 forks · 13.72 MB

Sparse Optimisation Research Code

Home Page: http://brendt.wohlberg.net/software/SPORCO/

License: BSD 3-Clause "New" or "Revised" License

Python 99.71% Shell 0.29%
sparsity sparse-coding dictionary-learning convolutional-sparse-coding convolutional-dictionary-learning optimization optimization-algorithms admm fista python

sporco's People

Contributors: bwohlberg, cwitkowitz

sporco's Issues

Non-negative dictionary constraint

Hello,

Thank you for providing open access to this excellent implementation of convolutional dictionary learning.

I would like to know whether sporco.dictlrn.cbpdndl supports a non-negative dictionary constraint.
If it does not, could you please provide any insights on how to impose this constraint?

Best regards,
Jingxiao
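
I am not sure whether cbpdndl supports this directly. As a generic fallback, one can clip negative dictionary entries and renormalise the atoms after each dictionary-update step. The sketch below is plain NumPy; the projection and its placement in the update loop are assumptions, not sporco API:

import numpy as np

def project_nonneg_unitnorm(D, axes=(0, 1)):
    """Clip negative entries, then renormalise each atom to unit norm.
    Assumed layout: spatial dimensions first, filter index last."""
    D = np.clip(D, 0, None)
    nrm = np.sqrt(np.sum(D**2, axis=axes, keepdims=True))
    nrm[nrm == 0.0] = 1.0  # leave all-zero atoms untouched
    return D / nrm

Interleaving such a projection with the standard constraint set changes the optimisation problem being solved, so convergence would need to be checked empirically.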

tikhonov_filter very slow on arm64 arch

If I run the attached test.ipynb notebook file (remove .txt) on a Google Colab system, or the counterpart test.py on a local AMD/Intel system with a GeForce GTX 1660 installed, all calls to the tikhonov_filter function complete in less than a second. However, if I run the same test.py on a Jetson TX2, the first call to tikhonov_filter takes more than 90 seconds for the _Xfftn function to return. Surprisingly, subsequent calls complete in under a second. Does anyone know why this happens only on the arm64 architecture?

I have built pyFFTW 0.12.0 from source as well as installing it via pip3, with the same result either way.
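
For what it's worth, a slow first call followed by fast subsequent calls is the classic signature of FFTW plan creation, which sporco performs through pyFFTW. A speculative workaround, assuming the time really is spent planning (and that sporco does not override these settings internally), is to lower the planner effort and enable the plan cache before the first call:

import pyfftw

# FFTW_ESTIMATE chooses a plan heuristically instead of timing many
# candidate plans, which is the usual cause of a very slow first call.
pyfftw.config.PLANNER_EFFORT = 'FFTW_ESTIMATE'

# Cache plans so repeated transforms of the same shape reuse them.
pyfftw.interfaces.cache.enable()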

profile_trace.txt
test.ipynb.txt
test.py.txt

test_output
lscpu

Documentation of sporco.admm.tvl2

The documentation seems to be geared towards the 2D case, whereas the code can handle an arbitrary number of dimensions.

Furthermore, many terms in the equation are not defined.
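
For reference, my reading of the 2D case of the problem solved by tvl2.TVL2Denoise (weighting terms omitted; a paraphrase, not a verified transcription of the docs) is

$$\min_{\mathbf{x}} \; \frac{1}{2} \left\| \mathbf{x} - \mathbf{s} \right\|_2^2 + \lambda \left\| \sqrt{(G_r \mathbf{x})^2 + (G_c \mathbf{x})^2} \right\|_1 \;,$$

where $\mathbf{s}$ is the input image, $G_r$ and $G_c$ are row and column gradient operators, and the squaring and square root are applied elementwise.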

Problem with single-channel dictionaries and multi-channel multi-signal data

Hello, I think I've stumbled on a bug in your code, but before getting to it, could you please clarify a few things. Thank you.

These questions are about the OnlineConvBPDNDictLearn algorithm using the cupy extension.
Sporco version - 0.1.11 (conda-forge)
Sporco cuda version - 0.0.3 (pip)
Python version - 3.7.3 (conda-forge)

From your code I understand that if the dictionary does not have a channel dimension, then every dimension after the spatial ones is treated as a signal (K) dimension (channel and signal dimensions are reshaped into the signal dimension)?
If so, then even if my input data is multi-channel but my dictionary is single-channel, the data will be treated as multi-signal, and in such a configuration I must supply dimK=1?
And if both my dictionary and data are multi-channel, then dimK=0?
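
A quick way to check how the dimensions will be interpreted, assuming the plain-NumPy cnvrep module (whose indexing object the solvers build internally) behaves the same as the cupy version:

import numpy as np
from sporco import cnvrep

D = np.random.rand(16, 16, 8)    # single-channel dictionary
S = np.random.rand(256, 256, 4)  # is axis 2 channels or signals?

cri = cnvrep.CSC_ConvRepIndexing(D, S, dimK=1, dimN=2)
print(cri.C, cri.K, cri.M)       # inferred channels, signals, filters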

Those are the questions; now, about the bug.

If you supply a MULTI-CHANNEL MULTI-SIGNAL image to a SINGLE-CHANNEL dictionary, the algorithm fails. But it does not fail if you provide the same data to a MULTI-CHANNEL dictionary.

It fails in OnlineConvBPDNDictLearn.setcoef(self, Z) at line 301.
I think the problem is in the statement on lines 288-289, which reshapes Z: it reshapes only the input Z, but not the initial Z.

import numpy as np
from sporco.cupy import np2cp
from sporco.cupy.dictlrn.onlinecdl import OnlineConvBPDNDictLearn, OnlineConvBPDNDictLearnOptions

opt = OnlineConvBPDNDictLearnOptions({'CBPDN': {'MaxMainIter': 1}})

try:
    D = np2cp(np.random.rand(16,16,8))
    S = np.random.rand(256,256,4)
    print('single-channel dict, multi-channel img')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=1)
except ValueError as e:
    print(e)

try:
    D = np2cp(np.random.rand(16,16,4,8))
    S = np.random.rand(256,256,4)
    print('multi-channel dict, multi-signal img')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=0)
except ValueError as e:
    print(e)

try:
    D = np2cp(np.random.rand(16,16,8))
    S = np.random.rand(256,256,4,2)
    print('single-channel dict, multi-channel multi-signal img, dimK=0')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=0)
except ValueError as e:
    print(e)
    
try:
    D = np2cp(np.random.rand(16,16,8))
    S = np.random.rand(256,256,4,2)
    print('single-channel dict, multi-channel multi-signal img, dimK=1')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=1)
except ValueError as e:
    print(e)
    
try:
    D = np2cp(np.random.rand(16,16,8))
    S = np.random.rand(256,256,4,2)
    print('single-channel dict, multi-channel multi-signal img, dimK=None')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=None)
except ValueError as e:
    print(e)
    
try:
    D = np2cp(np.random.rand(16,16,4,8))
    S = np.random.rand(256,256,4,2)
    print('multi-channel dict, multi-channel multi-signal img, dimK=None')
    solver = OnlineConvBPDNDictLearn(D, 0.01, opt)
    result = solver.solve(S, dimK=None)
except ValueError as e:
    print(e)

Memory consumption issue/question

I would like to compute a sparse dictionary using ConvBPDNDictLearn over a collection of images, each of 720x480 resolution. Following your example 'demo_cbpdndl_gry_2.py', I was able to run it on a small collection of images (about 10 720x480 images work). However, if I try 100 images, my computer freezes up.

I have two questions. First, is there a way to handle large numbers of images efficiently? Second, does it make sense to call ConvBPDNDictLearn in a loop, where I hand it 10 images at a time?
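
One direction worth trying, rather than batching ConvBPDNDictLearn by hand, is the online dictionary learning solver, which consumes one image at a time instead of holding the whole training set in memory. A minimal sketch (dictionary size, regularisation parameter, and iteration counts below are placeholders, and the images would normally be highpass-filtered as in the demo scripts):

import numpy as np
from sporco.dictlrn import onlinecdl

# Stand-ins for the 100 highpass-filtered 720x480 training images.
images = [np.random.randn(480, 720) for _ in range(100)]

D0 = np.random.randn(8, 8, 32)  # initial dictionary (placeholder size)
lmbda = 0.2                     # placeholder regularisation parameter

opt = onlinecdl.OnlineConvBPDNDictLearn.Options(
    {'Verbose': False, 'CBPDN': {'MaxMainIter': 50}})
d = onlinecdl.OnlineConvBPDNDictLearn(D0, lmbda, opt)

for img in images:              # one image in memory at a time
    d.solve(img)

D = d.getdict()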

Do you have a cuda accelerated version of `bpdndl`?

From @CasperN on November 6, 2018 19:27

Hello, I'm trying out sporco but am having a hard time with the documentation. There really should be better names; they're quite a headache to decode.

I'm trying to do dictionary learning on a few million points of a few hundred dimensions. Is there a GPU accelerated way to do this in your library? Is my best option bpdndl?

Copied from original issue: bwohlberg/sporco-cuda#2

complex data type issue with implsden_grd_clr.py (windows)

error log shows:

PyDev console: starting.
Python 3.8.12 (default, Oct 12 2021, 03:01:40) [MSC v.1916 64 bit (AMD64)] on win32
runfile('C:/Users/junzh/OneDrive/Desktop/test/test.py', wdir='C:/Users/junzh/OneDrive/Desktop/test')
Running on GPU 0 (NVIDIA GeForce GTX 1660 Ti)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Program Files\JetBrains\PyCharm 2021.2.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2021.2.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "C:/Users/junzh/OneDrive/Desktop/test/test.py", line 64, in <module>
    b = cbpdn.ConvBPDNJoint(np2cp(D), np2cp(pad(imgnh)), lmbda, mu, opt, dimK=0)
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\common.py", line 110, in __call__
    instance = super(IterSolver_Meta, cls).__call__(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\admm\cbpdn.py", line 778, in __init__
    super(ConvBPDNJoint, self).__init__(D, S, lmbda, opt, dimK=dimK,
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\admm\cbpdn.py", line 570, in __init__
    super(ConvBPDN, self).__init__(D, S, opt, dimK, dimN)
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\admm\cbpdn.py", line 235, in __init__
    self.Xf = self.empty_aligned(self.Y.shape, self.cri.axisN,
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\cupy\__init__.py", line 262, in _rfftn_empty_aligned
    cdtype = _complex_dtype(dtype)
  File "C:\ProgramData\Anaconda3\envs\dev\lib\site-packages\sporco\cupy\__init__.py", line 235, in _complex_dtype
    if dt == cp.dtype('float128'):
TypeError: data type 'float128' not understood

Generalised sparse coding problem

Dear Mr. Wohlberg,

thank you for the great library. I wonder if it might be possible to use SPORCO to solve a generalised sparse coding problem. If so, could you please advise on the best way to do it?

To provide more context, I am looking for a Python solver to address Eq. (12) from the paper Learning from Weak and Noisy Labels for Semantic Segmentation. The problem appears to be a generalisation of the Basis Pursuit DeNoising (BPDN) problem with weighted L1-norm sparsity regularisation (different elements have different weights). Thank you in advance for your answer.

Sincerely,
Raman
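
On the weighted-L1 part specifically: the BPDN options include an 'L1Weight' entry (per-element weighting of the L1 term), which appears to cover this. A sketch with placeholder sizes and parameter values, where w stands in for the weights of Eq. (12):

import numpy as np
from sporco.admm import bpdn

D = np.random.randn(64, 128)  # dictionary (placeholder)
s = np.random.randn(64, 1)    # signal (placeholder)
w = np.random.rand(128, 1)    # per-coefficient weights

opt = bpdn.BPDN.Options({'Verbose': False, 'MaxMainIter': 200,
                         'L1Weight': w})
b = bpdn.BPDN(D, s, lmbda=0.1, opt=opt)
x = b.solve()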

Cannot import sporco

Under Python 3.8, after pip install sporco, when I try to import sporco.dictlrn I get the following error:

      4 from sporco.dictlrn import dictlrn
      5 from sporco.admm import cbpdn, ccmod
      6 from sporco import cnvrep

File ~/anaconda3/envs/cdl/lib/python3.8/site-packages/sporco/__init__.py:8
      3 __version__ = '0.2.1'
      6 # This is a temporary solution to the circular imports resulting from the
      7 # use of the renamed_function decorator
----> 8 import sporco.linalg

File ~/anaconda3/envs/cdl/lib/python3.8/site-packages/sporco/linalg.py:26
     23 have_numexpr = True
     25 from sporco._util import renamed_function
---> 26 from sporco.array import zdivide, subsample_array

     29 __author__ = """Brendt Wohlberg [email protected]"""
     33 __all__ = ['inner', 'dot', 'valid_adjoint', 'solvedbi_sm', 'solvedbi_sm_c',
     34            'solvedbd_sm', 'solvedbd_sm_c', 'solvemdbi_ism', 'solvemdbi_rsm',
     35            'solvemdbi_cg', 'lu_factor', 'lu_solve_ATAI', 'lu_solve_AATI',
     36            'cho_factor', 'cho_solve_ATAI', 'cho_solve_AATI',
     37            'solve_symmetric_sylvester', 'block_circulant', 'rrs', 'pca',
...
--> 299 res = zeros((n, n), v.dtype)
    300 if k >= 0:
    301     i = k

MemoryError: Unable to allocate 512. GiB for an array with shape (262144, 262144) and data type float64...

I could import sporco a day earlier, but now the same code fails.

TVL2Deconv solution shifted by half kernel size

As a toy example, I am trying to use TVL2Deconv (sporco version 0.1.11) to restore an image convolved with a 2D Gaussian. Interestingly, the deconvolved image ends up circularly shifted by half the size of the kernel. Here is a MWE.

I am out of ideas as to what could introduce such a phase ramp. Am I misusing something?
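
A plausible cause (an assumption, not a confirmed diagnosis): FFT-based deconvolution treats index (0, 0) of the kernel array as its origin, so a Gaussian centred in its own array acts as a shifted impulse response, producing exactly a half-kernel circular shift. If that is what is happening, the shift can be compensated after the fact, where kn is the (hypothetical) kernel side length:

import numpy as np

kn = 11                        # hypothetical odd kernel size
x = np.random.randn(256, 256)  # stand-in for the deconvolved image

# Undo the circular shift of half the kernel size introduced by a
# centred kernel under circular convolution.
x_fixed = np.roll(x, -(kn // 2), axis=(0, 1))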

Introducing priors to convolutional dictionary learning

I'm using convolutional dictionary learning on noisy vibrational spectroscopy signals. I was wondering whether it might be possible to modify ConvBPDNDictLearn with a regularisation term to enforce certain priors on the atoms. For example, we might be interested in setting the boundaries of the atoms to zero (to learn peak-like features), or in having non-negative atoms. If so, how would I go about doing that? Thanks!
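
There is no built-in option for this as far as I can tell, but a generic approach is to add an extra projection to the dictionary update, alternating it with the usual unit-norm constraint (the same idea as the non-negativity question above). A minimal sketch of the zero-boundary projection, assuming atoms of shape (h, w) stacked along the last axis:

import numpy as np

def zero_boundary(D, width=1):
    """Zero a border of the given width around each atom; D has
    assumed shape (h, w, M), one atom per final index."""
    mask = np.zeros(D.shape[:2] + (1,))
    mask[width:-width, width:-width] = 1.0
    return D * mask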

Problem using inhibition for sparse coding of a batched dataset

Hi,

I am trying to get cbpdnin.ConvBPDNInhib to work instead of cbpdn.ConvBPDN. However, there seems to be an issue with array sizes for batched datasets.

To be specific, I have no problem using:
cbpdn.ConvBPDN(D.reshape(12,12,1,1,72), sh.reshape(128,128,1,5), lmbda, opt, dimK=1,dimN = 2)
for learning a sparse code on 5 images of 128x128 using the dictionary D.

But when I run:
Wg = np.append(np.eye(36), np.eye(36), axis=-1)
b = cbpdnin.ConvBPDNInhib(D.reshape(12,12,1,1,72), sh.reshape(128,128,1,5), Wg, 12, ('boxcar'), lmbda, mu, None, opt, dimK=1,dimN = 2)

I get an error:

ValueError: cannot reshape array of size 169 into shape (13,13,1,5,1)

However, 169 = 13*13, but I don't understand why 5, the number of images, is used to compute the window of the inhibition convolution.

Any idea what I am doing wrong?

Thanks a lot!

import issue on mac M1

This error arises when doing a simple import, such as from sporco import signal, with PyCharm:

    from __future__ import print_function
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: from __future__ imports must occur at the beginning of the file

The image drawn by sporco.plot is not displayed

from __future__ import print_function
from builtins import input
from builtins import range

import pyfftw # See pyFFTW/pyFFTW#40
import numpy as np

from sporco import util
from sporco import plot
plot.config_notebook_plotting()
import sporco.metric as sm
from sporco.admm import cbpdn

img = util.ExampleImages().image('kodim23.png', scaled=True, gray=True,
                                 idxexp=np.s_[160:416, 60:316])

npd = 16
fltlmbd = 10
sl, sh = util.tikhonov_filter(img, fltlmbd, npd)

D = util.convdicts()['G:12x12x36']
plot.imview(util.tiledict(D), fgsz=(7, 7))

The image that should appear below is not displayed. How should I solve this?
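
A guess, assuming this code is run as a plain script rather than in a Jupyter notebook: plot.config_notebook_plotting() configures notebook-specific display, and outside a notebook the figure needs the usual matplotlib event loop to stay on screen. The sporco example scripts block on input() for this purpose:

from sporco import plot, util

D = util.convdicts()['G:12x12x36']
plot.imview(util.tiledict(D), fgsz=(7, 7))

# Keep the figure window open until enter is pressed; remove the
# config_notebook_plotting() call when running outside a notebook.
input()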

Cannot find reference 'tikhonov_filter' in 'util.py'

Hi, when I open vggfusion.py with PyCharm (I have already installed sporco 0.2.1 and torch 1.12.0), it reports the message "Cannot find reference 'tikhonov_filter' in 'util.py'". How can I solve this problem?
Looking forward to your reply, thanks!
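
If I remember the 0.2.x reorganisation correctly (worth verifying against the changelog), tikhonov_filter was moved out of sporco.util into sporco.signal, with only a dynamic alias left behind that PyCharm's static analysis cannot see. Importing from the new location should silence the warning:

import numpy as np
from sporco.signal import tikhonov_filter  # new location in sporco 0.2.x

img = np.random.rand(256, 256)          # stand-in for the input image
sl, sh = tikhonov_filter(img, 5.0, 16)  # placeholder lmbda and npd values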

Getting type casting error of "Cannot cast ufunc 'add'" while running CSC for custom dataset

Dear Sir,
I want to run convolutional sparse coding on the CAVE dataset for hyperspectral image analysis. I load three .png files, each an image at a specific wavelength, concatenate them on axis 2, and then send the result to cbpdn_clr_cd.py for multi-channel CSC.
I am getting an error from the ustep function:
UFuncTypeError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('uint16') with casting rule 'same_kind'

If I do the type casting by converting to an integer in the ustep function,
self.U += int(self.rsdl_r(self.AX, self.Y))
then I am getting:
File ~/anaconda3/envs/env_sporco/lib/python3.10/site-packages/sporco-0.2.2a2-py3.10.egg/sporco/admm/admm.py:204 in __new__
    instance = super(ADMM, cls).__new__(cls)

TypeError: super(type, obj): obj must be an instance or subtype of type

Please resolve this issue.
I have attached screenshots of the errors.
Custom_Dataset
After_Int_Conversion
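
The root cause looks like integer input data: the loaded .png files are presumably uint16, and the ADMM u-step adds a float64 residual to the array in place. A likely fix, rather than casting inside ustep, is to convert the stacked images to floating point before constructing the solver:

import numpy as np

# Stand-in for the (H, W, 3) uint16 array built from the three .png files.
imgs = np.random.randint(0, 2**16, size=(512, 512, 3), dtype=np.uint16)

# Convert to float in [0, 1]; sporco's in-place ADMM updates require a
# floating point signal array.
S = imgs.astype(np.float32) / np.iinfo(np.uint16).max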
