
nitime's Introduction


NIPY

Neuroimaging tools for Python.

The aim of NIPY is to produce a platform-independent Python environment for the analysis of functional brain imaging data using an open development model.

In NIPY we aim to:

  1. Provide an open source, mixed language scientific programming environment suitable for rapid development.
  2. Create software components in this environment to make it easy to develop tools for MRI, EEG, PET and other modalities.
  3. Create and maintain a wide base of developers to contribute to this platform.
  4. Maintain and develop this framework as a single, easily installable bundle.

NIPY is the work of many people. We list the main authors in the file AUTHOR in the NIPY distribution, and other contributions in THANKS.

Website

Current information can always be found at the NIPY project website.

Mailing Lists

For questions on how to use nipy or on making code contributions, please see the neuroimaging mailing list:

https://mail.python.org/mailman/listinfo/neuroimaging

Please report bugs via GitHub issues:

https://github.com/nipy/nipy/issues

You can see the list of current proposed changes at:

https://github.com/nipy/nipy/pulls

Code

You can find our sources and single-click downloads:

Tests

To run nipy's tests, you will need to install the pytest Python testing package:

pip install pytest

Then:

pytest nipy

You can run the doctests along with the other tests with:

pip install pytest-doctestplus

Then:

pytest --doctest-plus nipy

Installation

See the latest installation instructions.

License

We use the 3-clause BSD license; the full license is in the file LICENSE in the nipy distribution.

nitime's People

Contributors

agramfort, arokem, effigies, emollier, endolith, fperez, ignatenkobrain, ivanov, jarrodmillman, koepsell, larsoner, luzpaz, mitya57, mwaskom, nileshpatra, sergeyk, sleep-volunteer, tabe, temisap, tomdlt, tsalo, xunius, yarikoptic


nitime's Issues

nosetest w/o exit=False funks up in ipython

In commit e8fb5fb (Working on slow tests and testlib) we removed 'exit=False' from the call to TestProgram, so running nitime.test() in an interactive environment (ipython) now results in this traceback:

/tmp/ in ()

/usr/lib/pymodules/python2.6/nitime/testlib.pyc in test(doctests)
     48     # Now nose can run
     49     try:
---> 50         TestProgram(argv=argv)#, exit=False)
     51     finally:
     52         np.set_printoptions(**opt_dict)

/usr/lib/pymodules/python2.6/nose/core.pyc in __init__(self, module, defaultTest, argv, testRunner, testLoader, env, config, suite, exit, plugins, addplugins)
    116         self, module=module, defaultTest=defaultTest,
    117         argv=argv, testRunner=testRunner, testLoader=testLoader,
--> 118         **extra_args)
    119
    120     def makeConfig(self, env, plugins=None):

/usr/lib/python2.6/unittest.pyc in __init__(self, module, defaultTest, argv, testRunner, testLoader)
    815         self.progName = os.path.basename(argv[0])
    816         self.parseArgs(argv)
--> 817         self.runTests()
    818
    819     def usageExit(self, msg=None):

/usr/lib/pymodules/python2.6/nose/core.pyc in runTests(self)
    198         self.success = result.wasSuccessful()
    199         if self.exit:
--> 200             sys.exit(not self.success)
    201         return self.success
    202

SystemExit: False
Type %exit or %quit to exit IPython (%Exit or %Quit do so unconditionally).

Originally, this argument was commented out because nose was ignoring the input 'argv' when it was provided. Maybe there is some way around this?
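Until nose honours exit=False, one workaround is to trap the SystemExit ourselves when running interactively. A minimal sketch (call_without_exiting is a hypothetical helper, not nitime code):

```python
# Sketch: call a test runner that may raise SystemExit, so an interactive
# session (e.g. IPython) survives the call.  `runner` stands in for
# nose's TestProgram, which calls sys.exit(not success) on completion.
def call_without_exiting(runner, *args, **kwargs):
    """Run `runner`, swallowing SystemExit; return success as a bool."""
    try:
        runner(*args, **kwargs)
    except SystemExit as e:
        # nose exits with `not success`, so invert the code to recover it
        return not bool(e.code)
    return True
```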

Make default behavior for fmri.io.time_series_from_file

The default behavior should be to create a time-series with the dimensions of the original fMRI file, reading the data (and timing information) from the NIfTI header. Maybe it should also output something that tells you what shape you are looking at?

Sphinx auto-summary instead of api-gen

If possible, it would be good to replace the api-gen script which is currently in tools and instead use the (relatively) new sphinx auto-summary in order to generate the automatic API documentation.

Note on examples

This is somehow not on our website anymore (got kicked out of the documentation generation process...)

tools/make_examples.py runs all the examples every time

Make it smarter: for example, compare the time-stamps of the .py file and the .rst file, and only regenerate the .rst file if the .py file has a later time-stamp (i.e. has been changed since the .rst was last generated).
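The time-stamp check itself is only a few lines; a sketch (needs_rebuild is a hypothetical helper for tools/make_examples.py):

```python
import os

def needs_rebuild(py_file, rst_file):
    """Regenerate the .rst only if it is missing or older than the .py
    source -- the time-stamp comparison suggested above."""
    if not os.path.exists(rst_file):
        return True
    return os.path.getmtime(py_file) > os.path.getmtime(rst_file)
```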

dpss test fails unless run from the tests/ directory

The failure is

======================================================================
ERROR: Do the dpss windows resemble the equivalent matlab result
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.6/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/home/fperez/usr/local/lib/python2.6/site-packages/nitime/tests/test_algorithms.py", line 234, in test_DPSS_matlab
    b = np.loadtxt('dpss_matlab.txt')
  File "/home/fperez/usr/opt/lib/python2.6/site-packages/numpy/lib/npyio.py", line 599, in loadtxt
    fh = open(fname, 'U')
IOError: [Errno 2] No such file or directory: 'dpss_matlab.txt'

----------------------------------------------------------------------
Ran 387 tests in 2.883s

It's just a matter of putting some path logic in the test. I may get to it this weekend while I work on the coherence talk for Monday.
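The usual fix is to resolve the data file relative to the test module rather than the current working directory; a sketch (data_path is a hypothetical helper):

```python
import os

def data_path(fname):
    """Absolute path of a data file that lives next to this test module,
    independent of the directory the tests are launched from."""
    return os.path.join(os.path.dirname(os.path.abspath(__file__)), fname)

# the failing test would then read, e.g.:
#   b = np.loadtxt(data_path('dpss_matlab.txt'))
```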

Jack-knifing

Make a class which receives an analyzer together with its data and then produces the jack-knifing product: the methods of the original analyzer, together with their jack-knife confidence intervals.
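The core of such a class is a leave-one-out loop over the data; a minimal, analyzer-agnostic sketch (jackknife is hypothetical, taking any scalar-valued estimator in place of an analyzer method):

```python
import numpy as np

def jackknife(data, estimator):
    """Return the full-sample estimate and its jackknife standard error,
    from which a confidence interval can be formed."""
    data = np.asarray(data)
    n = len(data)
    theta_full = estimator(data)
    # leave-one-out estimates of the same quantity
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return theta_full, se
```

For the mean, this standard error reduces to the familiar s/sqrt(n), which makes a handy sanity check.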

tril_indices not available in fairly recent numpy versions

Reported by Yarick:

neurodebian@head2:~/deb/builds/nitime/0.3~dev$ grep Error.*tril_indices *build
nitime_0.2.9-1~nd09.10+1_amd64.build:AttributeError: 'module' object has no attribute 'tril_indices'
nitime_0.2.9-1~nd10.04+1_amd64.build:AttributeError: 'module' object has no attribute 'tril_indices'
nitime_0.2.9-1~nd10.10+1_amd64.build:AttributeError: 'module' object has no attribute 'tril_indices'

We should definitely fix this before 0.3 is out officially.
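One way to fix it is a small compatibility shim: np.tril_indices can be emulated on older numpy with np.tri plus np.nonzero. A sketch (tril_indices_compat is a hypothetical name):

```python
import numpy as np

def tril_indices_compat(n, k=0):
    """Indices of the lower triangle of an (n, n) array, falling back to
    np.tri + np.nonzero on numpy versions that lack np.tril_indices."""
    if hasattr(np, 'tril_indices'):
        return np.tril_indices(n, k)
    return np.nonzero(np.tri(n, n, k))
```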

Race condition provoked in TimeArray

I'm seeing this race condition with regard to "_conversion_factor" in TimeArray. It has popped up under the nose tests (output pasted below), and when trying to build the examples in the docs.

I recently updated MPL, so I'll see if reverting makes a difference.

nosetest output:

....................................................Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <type 'exceptions.AttributeError'> ignored
ERROR: nitime.tests.test_viz.test_plot_xcorr


Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/mike/workywork/nitime/nitime/tests/test_viz.py", line 41, in test_plot_xcorr
    line_labels=['a', 'b'])
  File "/Users/mike/workywork/nitime/nitime/viz.py", line 462, in plot_xcorr
    label=this_labels.pop())
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/axes.py", line 4137, in plot
    for line in self._get_lines(*args, **kwargs):
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/axes.py", line 317, in _grab_next_args
    for seg in self._plot_args(remaining, kwargs):
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/axes.py", line 295, in _plot_args
    x, y = self._xy_from_xy(x, y)
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/axes.py", line 213, in _xy_from_xy
    bx = self.axes.xaxis.update_units(x)
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/axis.py", line 1336, in update_units
    converter = munits.registry.get_converter(data)
  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/units.py", line 148, in get_converter
    converter = self.get_converter(xravel[0])

<<< This repeats about ~100x >>>

  File "/Users/mike/usr/lib/python2.6/site-packages/matplotlib-1.3.1rc2-py2.6-macosx-10.6-x86_64.egg/matplotlib/units.py", line 148, in get_converter
    converter = self.get_converter(xravel[0])
  File "/Users/mike/workywork/nitime/nitime/timeseries.py", line 233, in __getitem__
    return self[[key]].reshape(())
  File "/Users/mike/workywork/nitime/nitime/timeseries.py", line 239, in __getitem__
    return np.ndarray.__getitem__(self, key)
  File "/Users/mike/workywork/nitime/nitime/timeseries.py", line 207, in __array_finalize__
    if not hasattr(self, '_conversion_factor'):
AttributeError: 'TimeArray' object has no attribute '_conversion_factor'

ImportError: cannot import name factorial

Running nitime on EPD 7.2-2, Windows x86:

Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\nose\loader.py", line 390, in loadTestsFromName
    addr.filename, addr.module)
  File "C:\Python27\lib\site-packages\nose\importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "C:\Python27\lib\site-packages\nose\importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "c:\users\sargun\documents\wildsoet lab files\nitime\nitime\fmri\__init__.py", line 18, in <module>
    import hrf
  File "c:\users\sargun\documents\wildsoet lab files\nitime\nitime\fmri\hrf.py", line 2, in <module>
    from scipy import factorial
ImportError: cannot import name factorial
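scipy moved factorial out of its top-level namespace (it lives in scipy.special; older releases had it in scipy.misc), so the import in hrf.py can be made robust with a fallback chain, sketched here:

```python
# Try the known historical locations of scipy's factorial, falling back
# to the standard library for the scalar-only case.
try:
    from scipy.special import factorial
except ImportError:
    try:
        from scipy.misc import factorial
    except ImportError:
        from math import factorial  # scalar-only last resort
```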

sphinx docs won't build (related to lazyimports?)

Running Sphinx v1.1.2
/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/matplotlib/__init__.py:908: UserWarning: This call to matplotlib.use() has no effect
because the backend has already been chosen;
matplotlib.use() must be called before pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.

  if warn: warnings.warn(_use_error_msg)
WARNING: extension 'ipython_console_highlighting' has no setup() function; is it really a Sphinx extension module?
loading pickled environment... not yet created
building [html]: targets for 71 source files that are out of date
updating environment: 71 added, 0 changed, 0 removed
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Note
warn("Unknown section %s" % key)
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Warning
warn("Unknown section %s" % key)
/Users/arokem/projects/nitime/doc/sphinxext/docscrape.py:117: UserWarning: Unknown section Example
warn("Unknown section %s" % key)
reading sources... [ 29%] api/generated/nitime.lazyimports
Exception occurred:
File "/Library/Frameworks/EPD64.framework/Versions/7.2/lib/python2.7/site-packages/sphinx/environment.py", line 828, in read_doc
pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
PicklingError: Can't pickle <type 'module'>: attribute lookup __builtin__.module failed
The full traceback has been saved in /var/folders/sf/3b6q6p1d7518rpb4882pzsxw0000gn/T/sphinx-err-9UpBlB.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at http://groups.google.com/group/sphinx-dev/,
or report them in the tracker at http://bitbucket.org/birkenfeld/sphinx/issues/. Thanks!
make: *** [htmlonly] Error 1

Memory error of GrangerAnalyzer

Dear all,
When I run a script like this:

>>>sampling_rate=1000
>>>freq_idx_G
Out[7]: array([40, 41])
>>>G.frequencies.shape[0]
Out[8]: 513
>>>g1 = np.mean(G.causality_xy[:, :, freq_idx_G], -1)

it fails with the following memory error (freq_idx_G=):

---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-6-b3dd332ebe13> in <module>()
----> 1 g1 = np.mean(G.causality_xy[:, :, freq_idx_G], -1)

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/descriptors.pyc in __get__(self, obj, type)
    138         # Errors in the following line are errors in setting a
    139         # OneTimeProperty
--> 140         val = self.getter(obj)
    141 
    142         setattr(obj, self.name, val)

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/analysis/granger.pyc in causality_xy(self)
    202     @desc.setattr_on_read
    203     def causality_xy(self):
--> 204         return self._dict2arr('gc_xy')
    205 
    206     @desc.setattr_on_read

/home/qdong/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/nitime/analysis/granger.pyc in _dict2arr(self, key)
    191         arr = np.empty((self._n_process,
    192                         self._n_process,
--> 193                         self.frequencies.shape[0]))
    194 
    195         arr.fill(np.nan)

MemoryError: 

Can anyone give me some tips? Thanks!

HilbertAnalyzer test failing

FAIL: Testing the HilbertAnalyzer (analytic signal)

....
Arrays are not almost equal

(mismatch 99.4140625%)
x: array([[ -7.09068221e-17, 5.00000000e-01, 5.00000000e-01, ...,
5.83775176e-17, 5.00000000e-01, 5.00000000e-01],
[ -8.56663207e-18, 4.34684287e-17, -2.60671132e-17, ...,...
y: array([[ 0. , 0.01227154, 0.02454123, ..., -0.03680722,
-0.02454123, -0.01227154],
[ 1. , 0.9999247 , 0.99969882, ..., 0.99932238,...


Ran 381 tests in 44.490s

nitime.viz import error, ImportError: 'No module named mpl_toolkits.axes_grid'

I'm having an import issue; looking closer at mpl_toolkits\axes_grid\__init__.py
indicates something may have changed with the import for make_axes_locatable.

  1. Versions
    Python 2.7
    matplotlib 1.4.3
    nitime 0.5

import nitime.viz
FILE: mpl_toolkits\axes_grid\__init__.py
from .axes_divider import Divider, SubplotDivider, LocatableAxes, make_axes_locatable
from .axes_grid import Grid, ImageGrid, AxesGrid

from axes_divider import make_axes_locatable

= ERROR
lib\matplotlib\cbook.py:137: MatplotlibDeprecationWarning: The matplotlib.mpl module was deprecated in version 1.3. Use import matplotlib as mpl instead.
warnings.warn(message, mplDeprecation, stacklevel=1)
ImportError: 'No module named mpl_toolkits.axes_grid'

timeseries attributes like t.sampling_rate should be more tightly integrated

Right now, sampling_rate and sampling_interval don't know about each other (so you have to set both if you're changing them), and they also don't know about their underlying data.

In [21]: ts.UniformTime(0,10,10)
Out[21]: UniformTime([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.], time_unit='s')

In [22]: ut = _

In [23]: ut*10
Out[23]: UniformTime([  0.,  10.,  20.,  30.,  40.,  50.,  60.,  70.,  80.,  90.], time_unit='s')

In [24]: ut2 = ut*10

In [25]: ut2.sampling_rate
Out[25]: 1.0 Hz
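One direction for tighter integration is to store a single underlying quantity and expose both attributes as linked properties, so setting either updates the other. An illustrative sketch (SampledAxis is hypothetical, not nitime's implementation):

```python
class SampledAxis:
    """Keep sampling_rate and sampling_interval as two views of one
    underlying interval, so they can never disagree."""
    def __init__(self, sampling_interval=1.0):
        self._dt = float(sampling_interval)

    @property
    def sampling_interval(self):
        return self._dt

    @sampling_interval.setter
    def sampling_interval(self, dt):
        self._dt = float(dt)

    @property
    def sampling_rate(self):
        # derived on the fly, so it always reflects the current interval
        return 1.0 / self._dt

    @sampling_rate.setter
    def sampling_rate(self, fs):
        self._dt = 1.0 / float(fs)
```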

Dynamical Granger Causality Analysis

Dear all,

I have two 500 ms MEG time series. Over time, the directional information between the corresponding cortical fields changes dynamically. If I want to model this changing directionality, how should I do it? Any suggestions are welcome, thanks!

Symbolic representations, distance measures, and sliding windows.

Hi all,

I'd be interested in making contributions towards adding symbolic aggregate approximation representations (along with the other corresponding flavors), a collection of distance measures to calculate similarity between TimeSeries objects, and sliding window functionality to Nitime. I also have a couple of visualizations tools that I'd like to contribute as well.

To get the ball rolling, I threw in a sax method in the TimeSeries object to output the string representation of the TimeSeries object: stefan-pdx/nitime@be33b5c. Is there a preference where you want representational methods to be implemented? I'd prefer to have it as part of the TimeSeries object, since that would make it a bit more compatible with some of the work that I'll be doing with sliding window code, however if it would be more suitable for the utils.py file, that's fine.

Also, for implementing a variety of distance measures between TimeSeries objects, where would you recommend is the best place to place it?

Thanks!
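For reference, the core of a SAX transform is small; a minimal sketch, assuming the series length is divisible by the number of segments (the breakpoints shown are the N(0, 1) quartiles for a 4-letter alphabet):

```python
import numpy as np

def sax(series, n_segments, alphabet='abcd'):
    """Symbolic aggregate approximation: z-normalize, reduce with
    piecewise aggregate approximation (PAA), then map each segment mean
    to a symbol via Gaussian breakpoints."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()
    # PAA: mean of each equal-length segment (assumes divisible length)
    paa = x.reshape(n_segments, -1).mean(axis=1)
    breakpoints = np.array([-0.6745, 0.0, 0.6745])  # quartiles of N(0, 1)
    return ''.join(alphabet[i] for i in np.searchsorted(breakpoints, paa))
```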

Doctests are failing

Many doc-tests fail when running nosetests --with-doctest. All of these need to be fixed.

TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind''

======================================================================
ERROR: Testing the initialization of the uniform time series object
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/builddir/build/BUILD/nitime-rel-0.5/nitime/tests/test_timeseries.py", line 545, in test_TimeSeries
    tseries2 /= 2
  File "/builddir/build/BUILD/nitime-rel-0.5/nitime/timeseries.py", line 1021, in __idiv__
    self.data.__itruediv__(other)
TypeError: ufunc 'true_divide' output (typecode 'd') could not be coerced to provided output parameter (typecode 'l') according to the casting rule ''same_kind''
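The failure is in-place true division on an integer array: the float result cannot be cast back into the integer buffer under the 'same_kind' rule. The two usual fixes, sketched:

```python
import numpy as np

a = np.array([1, 2, 3])    # integer dtype
# a /= 2                   # raises the TypeError above on modern numpy

b = a / 2                  # out-of-place: result is a new float array
c = a.astype(float)
c /= 2                     # in-place is fine once the dtype is float
```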

function naming convention

What are the naming conventions for functions?

  • It seems weird that get_spectra is the only name that starts with get_*. Why not spectra?
  • Functions that return similar quantities (e.g. PSD estimates) should have a common name prefix (or maybe suffix).
  • freq_response should make explicit that it works on IR models.
  • I don't like get_spectra_bi, as it reminds me of bispectrum estimates, which it is not.
  • Auxiliary functions (i.e. those that should not be used by the average user) should start with _ to be explicitly marked as private (e.g. it seems in coherence.py that all foo_compute functions are auxiliary to the foo functions, and hence should rather be named _foo). The point is that the user should quickly see which functions to use.
  • Why sometimes mtm_ and sometimes multi_taper_ to refer to the multi-taper method?

Below is the list of functions in spectral.py:

algorithms/spectral.py:
20 : def get_spectra(time_series, method=None):
144 : def get_spectra_bi(x, y, method=None):
186 : def periodogram(s, Fs=2 * np.pi, Sk=None, N=None, sides='default',
262 : def periodogram_csd(s, Fs=2 * np.pi, Sk=None, NFFT=None, sides='default',
369 : def dpss_windows(N, NW, Kmax):
449 : def mtm_cross_spectrum(tx, ty, weights, sides='twosided'):
557 : def multi_taper_psd(s, Fs=2 * np.pi, BW=None, adaptive=False,
710 : def multi_taper_csd(s, Fs=2 * np.pi, BW=None, low_bias=True,
834 : def freq_response(b, a=1., n_freqs=1024, sides='onesided'):

Test failure on newer versions of scipy

ERROR: Testing the FilterAnalyzer

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/nose/case.py", line 187, in runTest
    self.test(*self.arg)
  File "/Users/alex/work/src/nitime/nitime/tests/test_analysis.py", line 220, in test_FilterAnalyzer
    npt.assert_almost_equal(f_slow.fir.data.mean(),
  File "/Users/alex/work/src/nitime/nitime/descriptors.py", line 140, in __get__
    val = self.getter(obj)
  File "/Users/alex/work/src/nitime/nitime/analysis/spectral.py", line 357, in fir
    b = signal.firwin(n_taps, ub_frac, self._win)
  File "/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/site-packages/scipy/signal/fir_filter_design.py", line 255, in firwin
    atten = kaiser_atten(numtaps, float(width)/nyq)
ValueError: could not convert string to float: hamming
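The trigger is that firwin's third positional argument is width (for Kaiser designs), not the window name, so the call above sends 'hamming' into kaiser_atten. Passing the window by keyword avoids this; a sketch (the n_taps and cutoff values are arbitrary examples):

```python
from scipy import signal

n_taps, cutoff = 65, 0.3   # arbitrary example values (cutoff in Nyquist units)
# signal.firwin(n_taps, cutoff, 'hamming')  # wrong: 'hamming' lands in `width`
taps = signal.firwin(n_taps, cutoff, window='hamming')
```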


Multi-taper coherence estimation with two signals of different length?

Is there a common method for using multi-taper coherence estimation with signals of different lengths? Let's say I have signal S1 and signal S2, and S2 is 0.1% of the length of S1. Is it possible or practical to use multi-taper coherence estimation in this situation to see whether S2 is similar to a subsection of S1?
Thank you.

Work on tutorial

Make it run together with the examples, in order to make sure that it actually demonstrates what the code does.

Choose a license

Some places state that we have the "simplified BSD" license, but the LICENSE file has the full BSD license. Choose one of those.

Warning in analysis.coherence might be a bug

The following warning is issued in testing:

/Users/alex/local/lib/python2.7/site-packages/nitime/analysis/coherence.py:152:
ComplexWarning: Casting complex values to real discards the imaginary
part
self.spectrum[j][j])

since self.spectrum is complex-valued and coherence is initialized as float. If coherence is expected to be between 0 and 1, then this might be a bug in coherence_calculate.
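If the auto-spectra are only complex through numerical noise, the fix is to take the real part explicitly rather than let the assignment silently discard it; a sketch of the idea:

```python
import numpy as np

# A nominally real auto-spectrum carrying numerical-noise imaginary parts:
sxx = np.array([2.0 + 1e-18j, 3.0 - 1e-18j])

denom = sxx.real   # explicit real part: no ComplexWarning on assignment
```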

Scale of sigma in algorithms.multi_taper_psd

When doing:

f, psd, sigma = nia.multi_taper_psd(data[0], Fs=raw.info['sfreq'], jackknife=True)

or

f, psd, sigma = nia.multi_taper_psd(5e10*data[0], Fs=raw.info['sfreq'], jackknife=True)

you get the same values for sigma. That is, sigma is blind to the scale of the data.
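This is what you would expect if the jackknife variance is computed on the log of the spectral estimates: scaling the data by c multiplies every PSD estimate by c^2, which only shifts log-PSD by a constant and leaves the spread untouched. A toy demonstration of that invariance (the uniform samples stand in for spectral estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
psd = rng.uniform(1.0, 2.0, size=100)          # stand-in PSD estimates
sigma1 = np.std(np.log(psd))
sigma2 = np.std(np.log((5e10 ** 2) * psd))     # rescaled data -> scaled PSD
# sigma1 and sigma2 agree to floating-point precision
```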

Filtering

The current filtering methods are not very good (I can say that; I wrote them). Integrate filtering methods from scipy into the analysis module.
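A sketch of what the scipy-based replacement could look like: a zero-phase Butterworth band-pass via filtfilt (the band edges and sampling rate are arbitrary example values):

```python
import numpy as np
from scipy import signal

fs = 1000.0                                         # sampling rate, Hz
# 4th-order Butterworth band-pass between 1 and 40 Hz
b, a = signal.butter(4, [1.0, 40.0], btype='bandpass', fs=fs)

t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 200 * t)
y = signal.filtfilt(b, a, x)                        # forward-backward: zero phase
```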
