neurom's Introduction

NeuroM Logo

NeuroM

NeuroM is a Python toolkit for the analysis and processing of neuron morphologies.


Documentation

NeuroM documentation is built and hosted on readthedocs.

Migration to v2 or v3 versions

Refer to the doc page on this topic.

Reporting issues

Issues should be reported to the NeuroM github repository issue tracker. The ability and speed with which issues can be resolved depend on how complete and succinct the report is. For this reason, it is recommended that reports be accompanied by

  • A minimal but self-contained code sample that reproduces the issue. Minimal means no code that is irrelevant to the issue should be included. Self-contained means it should be possible to run the code without modifications and reproduce the problem.
  • The observed and expected output and/or behaviour. If the issue is an error, the python error stack trace is extremely useful.
  • The commit ID of the version used. This is particularly important if reporting an error from an older version of NeuroM.
  • If reporting a regression, the commit ID of the change that introduced the problem.
  • If the issue depends on data, a data sample which reproduces the problem should be uploaded. But check first whether the error can be reproduced with any of the data samples available in the tests/data directory.

Citation

When you use the NeuroM software, we ask you to cite the following (this includes poster presentations): DOI

Funding & Acknowledgements

This work has been partially funded by the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 720270, 785907 (Human Brain Project SGA1/SGA2) and by the EBRAINS research infrastructure, funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).

The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.

For license and authors, see LICENSE.txt and AUTHORS.md respectively.

Copyright (c) 2015-2022 Blue Brain Project/EPFL

neurom's People

Contributors

adrien-berchet, alex4200, arnaudon, arsenius7, asanin-epfl, asvg, bbpgithubaudit, eleftherioszisis, haleepfl, jdcourcol, juanchopanza, lidakanari, liesbethvanherpe, mgeplf, musicinmybrain, orena1, pgetta, stefanoantonel, tomdele, wizmer

neurom's Issues

Write function to unpack all trees in faulty SWC file

Neurons require well-formed component tree structures as input. Some experimental files have disconnected trees. Although these are not SWC-conformant, it is of interest to unpack all available trees into a collection so that morphometrics can be obtained from them.

trees = load_trees('some_file.swc')

Disconnected neurite fragment passes morph_check

The test file test_data/swc/Neuron_missing_ids.swc has a disconnected axon, i.e. a point in the axon has a parent that does not exist. This should fail morph_check, but it passes:

$ morph_check test_data/swc/Neuron_missing_ids.swc
INFO: ========================================
INFO: Check file test_data/swc/Neuron_missing_ids.swc...
INFO:                     Has valid soma? PASS
INFO:                 Has basal dendrite? PASS
INFO:                           Has axon? PASS
INFO:                Has apical dendrite? PASS
INFO:            Nonzero segment lengths? PASS
INFO:            Nonzero section lengths? PASS
INFO:              Nonzero neurite radii? PASS
INFO:                       Check result: PASS
INFO: ========================================

z-jump check

Another check that is important for morphologies is the z-jump check.

In Neurolucida, the reconstruction is of course in 3 dimensions, but the user can only see on his/her screen a 2D slice of the 3D space. In order to navigate along the z-axis he/she has to roll the mouse scroll wheel. A z-jump takes place when the reconstructor thinks that he/she is at the correct height while tracing the neurite, but in reality has gone deeper. Here's an example:
z_jump
This usually manifests at bifurcations, where the spatial transition from parent to children might be tricky, but it can also happen in other places, as can be seen in the image.

Currently there is an algorithm in the repair tool that detects z-jumps, so we can check that one first and either modify it to our needs or create a new one suitable for our data structures.
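A minimal sketch of what such a check could look like, operating on a plain (N, 3) point array rather than NeuroM's tree structures (the function name and the 10 µm threshold are placeholders, not existing API):

```python
import numpy as np

def find_z_jumps(points, max_z_jump=10.0):
    """Return indices of segments whose |dz| exceeds max_z_jump.

    `points` is an (N, 3) array of consecutive points along one
    neurite; `max_z_jump` is an arbitrary threshold in microns.
    """
    points = np.asarray(points, dtype=float)
    dz = np.abs(np.diff(points[:, 2]))        # z difference per segment
    return np.flatnonzero(dz > max_z_jump)
```

For example, `find_z_jumps([[0, 0, 0], [1, 0, 0.2], [2, 0, 30.0]])` flags the second segment.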

Application for the validation of a neuron against a population

We need an application similar to morph_check for the automatic validation of a morphology. An example use-case is:

morph_validate neuron1 population

which should result in:

Section lengths (test:ks-test, threshold:10%) : PASS
Section path distances (test:ks-test, threshold:10%) : FAIL
ALL: FAIL
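A sketch of how one such PASS/FAIL line could be computed with scipy's two-sample KS test (the function name, and applying the 10% threshold to the KS statistic, are assumptions about the proposed tool):

```python
from scipy.stats import ks_2samp

def validate_feature(neuron_values, population_values, threshold=0.1):
    """PASS (True) when the two-sample KS statistic is below threshold."""
    statistic, _pvalue = ks_2samp(neuron_values, population_values)
    return bool(statistic < threshold)
```

morph_validate would then run this per feature (section lengths, path distances, ...) and report ALL: FAIL if any feature fails.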

Extract bifurcation angles from an L5TTPC1 population

Hi,

I created 100 L5TTPC1 morphologies with morphsyn in L5TTPC1_nancy. When I run the script extract_feature.py (see below), I get the following problem. The interesting bit is that the problem exists only for TreeType.all and NOT for TreeType.basal_dendrite. For the first morphology, isolated in the directory L5TTPC1_one, everything works fine, too!

nancy

In [26]: data_dir
Out[26]: '/Users/chalimou/Projects/morphsyn/build/L5TTPC1_nancy/'

In [27]: feature
Out[27]: 'local_bifurcation_angles'

In [28]: res = extract_feature(data_dir,feature,TreeType.all)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-28-a98154e61e39> in <module>()
----> 1 res = extract_feature(data_dir,feature,TreeType.all)

/Users/chalimou/Projects/NEUROM/NeuroM/apps/extract_feature.py in extract_feature(data_dir, feature, neurite_type)
      6     population = load_neurons(data_dir)
      7
----> 8     feature_data = [getattr(n, 'get_' + feature)(neurite_type=neurite_type) for n in population]
      9
     10     return feature_data

/Users/chalimou/Projects/NEUROM/NeuroM/neurom/ezy/neuron.pyc in get_local_bifurcation_angles(self, neurite_type)
    150         '''
    151         return self.neurite_loop(i_local_bifurcation_angle,
--> 152                                  neurite_type=neurite_type)
    153
    154     def get_remote_bifurcation_angles(self, neurite_type=TreeType.all):

/Users/chalimou/Projects/NEUROM/NeuroM/neurom/ezy/neuron.pyc in neurite_loop(self, iterator_type, mapping, neurite_type)
    303             Iterable containing the iteration targets after mapping.
    304         '''
--> 305         return self._pkg(self.iter_neurites(iterator_type, mapping, neurite_type))
    306
    307     def _pkg(self, iterator):

/Users/chalimou/Projects/NEUROM/NeuroM/neurom/ezy/neuron.pyc in _pkg(self, iterator)
    307     def _pkg(self, iterator):
    308         '''Create an iterable from an iterator'''
--> 309         return self._iterable_type([i for i in iterator])
    310
    311     def _compare_neurites(self, other, neurite_type, comp_function=compare_trees):

/Users/chalimou/Projects/NEUROM/NeuroM/neurom/analysis/morphtree.pyc in local_bifurcation_angle(bifurcation_point)
     55     return mm.angle_3points(bifurcation_point.value,
     56                             bifurcation_point.children[0].value,
---> 57                             bifurcation_point.children[1].value)
     58
     59

/Users/chalimou/Projects/NEUROM/NeuroM/neurom/analysis/morphmath.pyc in angle_3points(p0, p1, p2)
     89     vec2 = vector(p2, p0)
     90     return math.acos(np.dot(vec1, vec2) /
---> 91                      (np.linalg.norm(vec1) * np.linalg.norm(vec2)))
     92
     93

ValueError: math domain error

In [29]: res = extract_feature(data_dir,feature,TreeType.basal_dendrite)

In [30]: data_dir = '/Users/chalimou/Projects/morphsyn/build/L5TTPC1_one/'

In [31]: res = extract_feature(data_dir,feature,TreeType.all)

In [32]: res = extract_feature(data_dir,feature,TreeType.basal_dendrite)

extract_feature.py:

from neurom.ezy import load_neurons
from neurom.core.types import TreeType

def extract_feature(data_dir, feature, neurite_type=TreeType.all):

    population = load_neurons(data_dir)

    feature_data = [getattr(n, 'get_' + feature)(neurite_type=neurite_type) for n in population]

    return feature_data


def flatten(feature_data):
    from itertools import chain
    return list(chain(*feature_data))

Errors in examples/soma_radius_fit

There are various errors in examples/soma_radius_fit. These were not caught because the filename doesn't end with .py. Some of these are styling issues, but others are real errors.

Pep8

examples/soma_radius_fit.py:43:101: E501 line too long (114 > 100 characters)
examples/soma_radius_fit.py:68:64: W291 trailing whitespace
examples/soma_radius_fit.py:89:46: E231 missing whitespace after ','
examples/soma_radius_fit.py:92:46: E231 missing whitespace after ','
examples/soma_radius_fit.py:92:49: E203 whitespace before ':'
examples/soma_radius_fit.py:111:101: E501 line too long (101 > 100 characters)
examples/soma_radius_fit.py:112:1: W391 blank line at end of file

Pylint

************* Module neurom.view.common
I: 44, 0: Locally disabling unused-import (W0611) (locally-disabled)
************* Module soma_radius_fit
C: 43, 0: Line too long (114/100) (line-too-long)
C: 68, 0: Trailing whitespace (trailing-whitespace)
C: 89, 0: Exactly one space required after comma
fit_data = {d: distribution_fit(soma_size,d) for d in distr_to_check}
^ (bad-whitespace)
C: 92, 0: Exactly one space required after comma
fit_error = {distribution_error(soma_size,d) : d for d in distr_to_check}
^ (bad-whitespace)
C:111, 0: Line too long (101/100) (line-too-long)
W: 71, 4: Unused variable 'params' (unused-variable)
E:107, 8: Undefined variable 'L' (undefined-variable)
E:108, 8: Undefined variable 'sys' (undefined-variable)

To reproduce: change the filename to examples/soma_radius_fit.py, then run `make lint`.

Dendrogram example remarks

I have a couple of remarks regarding the dendrogram example.
(https://github.com/BlueBrain/NeuroM/blob/master/examples/dendrogram.py)
I've attached an example output.

  • The triangles near the soma need to be fixed in some way.
  • The diameters have to show up much thicker. Ideally, different scaling on the x- and y-axes should be used (with an option to enable/disable it?).
  • Will this function become part of the NeuroM API? At the moment it's a bit difficult to use because I have to manually download the dendrogram.py file. It would be nice if it were a function of the NeuroM Neuron class.

Some details:

  • Maybe only show the colors that are actually used in the legend?
  • Try to use a white instead of a grey figure background in matplotlib; it looks much better.
  • We need units on the y-axis (maybe we don't even need an axis if we had x/y scale bars).
  • After calling dendrogram() the figure window closes immediately if I don't put a pylab.show(). Maybe you can add an argument that enables/disables pylab.show()?

screen shot 2015-11-20 at 15 32 57

Find apical point in a tree

This property is needed for many tasks (synthesizer, morphology release, classification of neurons, validation). The best approximation so far is the function available in Pneumatk, but it still does not give 100% correct results. Could we include this functionality in NeuroM?

Non consecutive IDs

I get an error "Non consecutive IDs found in raw data" for SWC files created by the Pneumatk software. The files are correct: the SWC standard requires only increasing IDs, i.e. ID_parent < ID_child, but NeuroM raises an exception if there is a gap between the parent ID and its children. For example, in the file

1 1 0.0 0.0 0.0 1.0 -1 
3 3 1.0 0.0 0.0 0.1 1 
4 3 2.0 0.0 0.0 0.1 3

NeuroM wants the dendrite to start with ID=2, not 3.
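As a workaround, gapped IDs can be renumbered into the consecutive form NeuroM expects before loading; a sketch (the helper name is mine, not NeuroM API):

```python
def renumber_swc(rows):
    """Remap gapped SWC IDs onto consecutive 1-based IDs.

    rows: list of (id, type, x, y, z, radius, parent_id) tuples; a
    parent_id of -1 marks a root and is preserved.
    """
    id_map = {-1: -1}
    for new_id, row in enumerate(rows, start=1):
        id_map[row[0]] = new_id
    # rewrite both the point ID and the parent reference
    return [(id_map[r[0]],) + tuple(r[1:6]) + (id_map[r[6]],) for r in rows]
```

Applied to the example above, IDs 1, 3, 4 become 1, 2, 3 with parent links preserved.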

Adopt unified feature calculation output format (numpy arrays)

In order to avoid complex handling of the results of the feature functions, I would like to suggest that we adopt a "duck" format for their output, even for functions such as "get_n_segments".

For example, if all the outputs were numpy arrays (bonus if they are flattened as well), then special handling would not be necessary and the stat extraction would apply the same for any feature function.
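One possible shape of that duck format (the coercion helper is mine, not NeuroM API): a scalar like the result of get_n_segments would become a length-1 array, and sequence features would become flat 1-D arrays.

```python
import numpy as np

def as_feature_array(value):
    """Coerce any feature result (scalar, list, nested rectangular list)
    into a flat 1-D float array, so no caller needs special cases."""
    return np.atleast_1d(np.asarray(value, dtype=float)).ravel()
```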

Do you guys think that this would cause any kind of conflict?

Error when generating dendrogram for specific SWC morphology

When loading the attached morphology with:

from neurom.ezy import load_neuron
from neurom.view import view
n = load_neuron('dendrogram_error.swc')
view.dendrogram(n, show_diameters=True)

I get the following error:

/usr/local/lib/python2.7/site-packages/neurom/analysis/dendrogram.pyc in _generate_dendro(self, current_node, spacing, offsets)
    244 
    245             # create and store vertical segment
--> 246             self._rectangles[self._n] = _vertical_segment(offsets, new_offsets, spacing, radii)
    247 
    248             # assign segment id to color array

IndexError: index 1532 is out of bounds for axis 0 with size 1532

dendrogram_error.swc.zip

Provide uniform morphology check behaviour

The morphology checkers in neurom.check.morphology arbitrarily return True, False or collections of information. For validation tools, it would be useful to have a simple pass/fail interface for each checker and for an ensemble of checkers.

Viewing dendrogram requires pylab.show() call

After calling dendrogram() the figure window closes immediately if I don't put a pylab.show(). Maybe you can add an argument that enables/disables pylab.show()? (And a white background would also be nice, but I leave that up to you.)

Soma.center as a class property

Currently, the center of the soma is set when the soma is created and its value is reflected in the list of the soma points. If its value is changed, the respective point in the point list is not changed as well, which leads to having to take both the points and the soma center into account for rotations and translations.

Wouldn't it be more convenient if center were a soma property (using the @property decorator) and not a variable that is set upon initialisation? This way any change in the soma points would affect soma.center as well.

Moreover, the soma center is a tuple while everything else is a numpy array, which forces me to explicitly check for a tuple type in the transformations.
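A sketch of the proposal (class and attribute names are illustrative; rows of `points` are assumed to be [x, y, z, r], with the first point defining the center as for contour somas):

```python
import numpy as np

class ContourSoma:
    """Sketch: derive center from the point list instead of storing it."""

    def __init__(self, points):
        self.points = np.asarray(points, dtype=float)

    @property
    def center(self):
        # A numpy array view: always consistent with the point list,
        # so translations of the points move the center automatically.
        return self.points[0, :3]
```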

Getting test failures with matplotlib 1.5.1

Seems 'add_collection3d' is no longer available on 'AxesSubplot':

======================================================================
ERROR: test_view.test_neuron3d
----------------------------------------------------------------------
Traceback (most recent call last):
  File "*******************/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/*****/src/NeuroM-bbp/neurom/view/tests/test_view.py", line 108, in test_neuron3d
    fig, ax = view.neuron3d(neuron0)
  File "/home/*****/src/NeuroM-bbp/neurom/view/view.py", line 592, in neuron3d
    tree3d(temp_tree, **kwargs)
  File "/home/*****/src/NeuroM-bbp/neurom/view/view.py", line 448, in tree3d
    ax.add_collection3d(collection)
AttributeError: 'AxesSubplot' object has no attribute 'add_collection3d'

----------------------------------------------------------------------

matplotlib version:

$ python -c 'import matplotlib; print matplotlib.__version__' 
1.5.1

Neurolucida Reader Error

Hello,
I was playing around with the new reader when a wild soma error exception appeared in an ascii file which has a soma. (OH141120_A0_idA.ASC.zip)

I was curious, thus I put a print in the extract section function:

def _extract_section(section):
    '''Find top level sections, and get their flat contents, and append them all

    Returns a numpy array with the row format:
        [X, Y, Z, R, TYPE, ID, PARENT_ID]

    Note: PARENT_ID starts at 0
    '''
    # try and detect type
    print section[0:4]
    _type = WANTED_SECTIONS.get(section[0][0], None)

The output when loading the file is the following:

['"NewContour"', ['71.37', '-680.50', '0.00', '1.85'], ['167.31', '-717.42', '0.00', '1.85'], ['222.66', '-739.57', '0.00', '1.85']]
['"Cell', 'Body"', ['CellBody'], ['-8.99', '5.26', '0.00', '0.15']]
[['Axon'], ['-11.86', '2.87', '0.31', '4.00'], ['-12.58', '3.70', '0.31', '3.17'], ['-12.95', '4.87', '1.00', '3.02']]
[['Dendrite'], ['-10.59', '-1.71', '0.13', '6.00'], ['-13.03', '-2.98', '1.25', '4.05'], ['-15.02', '-3.56', '3.56', '3.76']]
[['Dendrite'], ['-1.76', '9.34', '-13.25', '5.89'], ['-2.95', '11.11', '-13.25', '4.49'], ['-3.61', '12.57', '-13.19', '5.45']]
[['Dendrite'], ['10.98', '-4.09', '0.00', '4.72'], ['12.97', '-5.41', '0.00', '3.98'], ['14.96', '-6.36', '1.63', '3.09']]
[['Dendrite'], ['-11.91', '2.75', '0.31', '3.09'], ['-11.91', '2.75', '0.31', '3.09'], 'Generated']

This file has its first soma token as "Cell Body", with a space in between, which I think messes up the parsing of the lines. @mgeplf

NeuroM documentation link on front page

NeuroM has a very nice documentation on readthedocs.org. It would be useful to put a link to this documentation on the github front page (readme). (I couldn't find a link on the main page)

neurom.ezy.load_neurons to work with list of filenames

Currently we have

  • neurom.ezy.load_neuron(filename): loads from a file name
  • neurom.ezy.load_neurons(directory_name): loads set of neurons from all morphology files in a directory.

It would be very useful to have neurom.ezy.load_neurons work with a list of filenames.
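A sketch of the dispatch such a function could do (the helper name, the `load_one` parameter, and the set of recognised extensions are assumptions; `load_one` would be e.g. neurom.ezy.load_neuron):

```python
import os

def load_any(source, load_one):
    """Load from a list/tuple of filenames, a directory, or one filename."""
    if isinstance(source, (list, tuple)):
        return [load_one(f) for f in source]
    if os.path.isdir(source):
        files = sorted(os.path.join(source, f) for f in os.listdir(source)
                       if f.lower().endswith(('.swc', '.h5', '.asc')))
        return [load_one(f) for f in files]
    return [load_one(source)]       # single filename
```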

Is ezy module necessary ?

I don't completely understand the rationale behind the ezy module. Is it really necessary? It creates some overhead when creating neurom scripts (and I try to avoid 'from ... import ...' statements to avoid namespace pollution). It's also a pity it's not automatically imported together with neurom.

I know that I could

from neurom import ezy as neurom

but that's quite hacky.

SomaC center

Hello,
The documentation states:

    Type C: multiple points soma
    Represented by a contour.
    Reference: neuromorpho.org
    The first point constitutes the center of the soma.

But I'm having problems confirming that the first point is the center of the soma on neuromorpho.org, could you please provide a pointer to this?

load_trees() tree-action function is not applied

In load_trees(filename, tree_action=None), the tree action is not applied. The following should work:

def post_action(t):
    t.foo = 'bar'

trees = utils.load_trees(filepath, post_action)
for t in trees:
    assert(hasattr(t, 'foo') and t.foo == 'bar')

This tree_action hook would typically be used to plug in, e.g., tree-type calculation and setting logic.

Yet Another Organization Discussion

Hello all,

once more I would like to propose a few things for the organization of NeuroM in a more intuitive way. I will just write down my ideas and we can discuss them over time. I would appreciate it if we could have a fruitful discussion on these ideas.

First of all, I think it would be easier and tidier if we had a dedicated module for features, where we can develop them without having to modify the neuron/population ezy objects. Regardless of the design of the model, I think it would be better to have the minimum possible amount of code inside the core objects (the way it is right now) and then remove the ezy module entirely. This way it will be clear to the user what the main structures in the software are and what they are for.

The connection between that module and the neuron/population objects can vary. It can be object oriented, i.e. Neuron.features, which is set as an attribute upon initialization:

class Neuron:
    def __init__(self,...):
        self.features = NeuronFeatures(self)

segment_lengths = nrn.features.segment_lengths(*args, **kwargs)

or in a semi-functional way (maybe it is considered object-oriented, dunno):

fs = NeuronFeatures(nrn)
fs.section_lengths(*args, **kwargs)

which can be alternatively implemented in a more general way and create feature objects regardless of the abstraction level through a factory class:

fs = Features(object_with_features)

or in a completely functional way:

fs = features(object_with_features, Features.segment_lengths)

It is important to stress here that in the last block of code one would not pass the function itself (i.e. segments.foo) to the features function, but an Enum for the feature. This is mainly so that users have a consistent interface for seeing all the available features, and do not have to search for the specific function in the low or high levels of the code to extract what they need.

What do you think?

make lint does not check app files

Python application scripts stored in apps do not appear in the list of files that are checked with pep8 and pylint using the make lint command. The reason is that currently only files with the .py extension are considered. This means apps can make it into the repository with unchecked errors.

Trouble reading a large set of morphologies

I have trouble reading a large set of morphologies in .h5 format.

from neurom import ezy
nrns = ezy.load_neurons('/folder_with_2000_morphologies_h5/')

results in "IOError: unable to open file (File accessibilty: Unable to open file)"

I tried reading the files one by one with

for i in range(2000):
    print i
    nrn = ezy.load_neurons('/folder_with_2000_morphologies_h5/morphology' + str(i))

and it seems I can read about 1016 files. The problem is not with a specific file (I checked); reading them in a different order stops at the same number of files.

Is it possible that files are opened but not closed?
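Stopping after ~1016 files is consistent with hitting the default per-process open-file limit (commonly 1024 on Linux/macOS, minus descriptors already in use), which happens when each morphology's HDF5 handle stays open. The fix is to copy the needed data out and close the handle per file. The pattern, sketched with the stdlib `open` standing in for h5py (with h5py it would be `with h5py.File(filename, 'r') as f:` and `f[name][:]` to copy datasets out of the file):

```python
def read_raw_data(filename):
    """Read a morphology file and close the handle before returning."""
    with open(filename, 'rb') as f:   # closed on exit, even on error
        return f.read()
```

With this pattern, reading 2000 files in a loop never keeps more than one descriptor open at a time.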

Loading of Allen Brain swc fails

When trying to load a morphology available on the Allen Brain website, I get an error.
The cell is:
http://celltypes.brain-map.org/mouse/experiment/morphology/313861608
direct download:
http://celltypes.brain-map.org/api/v2/well_known_file_download/464113244

import neurom.ezy                                                                   
morph = neurom.ezy.Neuron('Pvalb-IRES-Cre_Ai14_IVSCC_-165874.04.02.01_464113242_m.swc')

generates:

Traceback (most recent call last):
  File "test.py", line 3, in <module>
    morph = neurom.ezy.Neuron('Pvalb-IRES-Cre_Ai14_IVSCC_-165874.04.02.01_464113242_m.swc')
  File "/Users/werner/src/NeuroM/neurom/ezy/neuron.py", line 108, in __init__
    super(Neuron, self).__init__(neuron.soma, neuron.neurites, neuron.name)
AttributeError: 'str' object has no attribute 'soma'

Result of morph_check on that cell (having no apical dendrite should be no reason for failure, this is an interneuron):

INFO: ========================================
INFO: Check file Pvalb-IRES-Cre_Ai14_IVSCC_-165874.04.02.01_464113242_m.swc...
INFO: Has valid soma? PASS
INFO: Has Axon? PASS
ERROR: Has Apical Dendrite? FAIL
INFO: Has Basal Dendrite? PASS
INFO: All neurites have non-zero radius? PASS
INFO: All segments have non-zero length? PASS
INFO: All sections have non-zero length? PASS
ERROR: Check result: FAIL
INFO: ========================================

duplicate point removal not applicable to all file types

Duplicate point removal is only applied to HDF5 files. The reason for this is that by construction this format has duplicate points between adjoining sections. However, it would make sense to have a uniform treatment of data, where duplicate points can be removed optionally regardless of data format.

See related issue #177.

Access the individual neurites of a specific tree type (ex. basal dendrite)

Hi,
for the validation of the basal ganglia I have the following case: I need to examine a feature for each neurite depending on the TreeType. According to the results I keep a subset of the neurites for further analysis. I would like an easy way to access the basal trees one by one to analyse them separately.

Example:
Eleftherios did what I wanted the following way, but it seems too much for my python knowledge. Can this be done more easily?

def f(n):
    yield n

subset = nrn.neurite_loop(f, neurite_type=TreeType.basal_dendrite)
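Assuming each neurite tree carries a type attribute (the `neurites` and `type` attribute names here are assumptions about the object layout), a plain list comprehension would be enough:

```python
def neurites_of_type(neuron, tree_type):
    """Return the individual neurite trees of the given type, so each
    one (e.g. each basal dendrite) can be analysed separately."""
    return [tree for tree in neuron.neurites if tree.type == tree_type]
```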

load_trees fails to correctly load swc files without soma

Original file:

1 2 796.109 453.797 16.031 9.000 -1
2 2 790.355 442.043 14.531 3.750 1
3 2 800.000 439.000 17.000 3.750 1
4 2 788.879 440.075 14.394 3.750 2
5 2 797.654 438.400 16.378 3.750 3
6 2 786.831 438.750 14.736 3.750 4
7 2 784.310 437.161 14.105 3.923 6
8 2 782.608 435.383 14.059 3.750 7
9 2 781.405 433.240 13.742 3.750 8
10 2 780.525 431.296 12.576 3.750 9

So when I try:

trs = load_trees('150.v3dpbd/150.v3dpbd_Advantra.swc')

I have as an output:

[]

If now I add a fake soma, e.g:

0 1 0 0 0 10 -1
1 2 796.109 453.797 16.031 9.000 0
2 2 790.355 442.043 14.531 3.750 1
3 2 800.000 439.000 17.000 3.750 1
4 2 788.879 440.075 14.394 3.750 2
5 2 797.654 438.400 16.378 3.750 3
6 2 786.831 438.750 14.736 3.750 4
7 2 784.310 437.161 14.105 3.923 6
8 2 782.608 435.383 14.059 3.750 7
9 2 781.405 433.240 13.742 3.750 8
10 2 780.525 431.296 12.576 3.750 9

It correctly loads the neurite:

[<neurom.core.tree.Tree at 0x111f65390>]

Sholl diagram needed

For compatibility with experimental data, a Sholl diagram is needed. Something like this:

    radial_distances = [array of distances from 0 to max radial distance]
    intersections = nrn.get_intersections(radial_distances, neurite_type=ttype)
    plt.plot(radial_distances, intersections)
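The intersection counts themselves are simple geometry; a sketch operating on raw segment coordinates rather than the proposed nrn.get_intersections API (function name and array layout are mine):

```python
import numpy as np

def sholl_intersections(segments, center, radii):
    """Count, per radius, the segments that cross the sphere of that
    radius around `center`.  `segments` is an (N, 2, 3) array of
    segment start/end points."""
    seg = np.asarray(segments, dtype=float)
    d0 = np.linalg.norm(seg[:, 0] - center, axis=1)
    d1 = np.linalg.norm(seg[:, 1] - center, axis=1)
    lo, hi = np.minimum(d0, d1), np.maximum(d0, d1)
    # a segment crosses the sphere when its endpoints straddle radius r
    return np.array([np.count_nonzero((lo < r) & (r <= hi)) for r in radii])
```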

Units in dendrogram plots

The dendrogram functionality of NeuroM should generate plots with units on the axes (or scale bars)

Summary of kstest scores from multiple distributions

I need to get a summarizing score from a set of statistical tests between pairs of distributions that represent different features:

score1 = score(f1_dataset1, f1_dataset2)
score2 = score(f2_dataset1, f2_dataset2)
...

total_score = norm(scores, p) # scores: all previous || p: p-norm
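A sketch of this aggregation using scipy's two-sample KS test (the function name and the pairing of datasets are mine):

```python
import numpy as np
from scipy.stats import ks_2samp

def total_score(feature_pairs, p=2):
    """p-norm over the per-feature KS statistics; `feature_pairs` is a
    sequence of (dataset1_values, dataset2_values) pairs."""
    scores = [ks_2samp(a, b).statistic for a, b in feature_pairs]
    return float(np.linalg.norm(scores, ord=p))
```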

Re-factor morph_check into module + thin script

Currently morph_check does a lot of work. It would be better to move most of the logic into a module and let morph_check delegate to it. The module would export a function taking a path and an optional YAML config file, and returning a dictionary of results.

no __version__ in neurom

Hi,
there is no __version__ in neurom

In [30]: import neurom

In [31]: neurom.__version__
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-31-0c91c2a03994> in <module>()
----> 1 neurom.__version__

AttributeError: 'module' object has no attribute '__version__'

Make unpack_data function publicly available

The unpack_data function has the role of unpacking an input file into a "raw data block" (RDB, a 2-dimensional numpy array) containing all the points in a neuron. It dispatches to parsers based on the input file type. Currently, it is used as an implementation helper in neurom.io.readers.load_data, but it would be useful to be able to call it directly in client code that only requires the RDB.

Disable h5py loading when instantiating swc morphology ?

Would it be possible to only load h5py when loading .h5 morphologies? I was trying to load an SWC morphology, but I got an error that h5py is not installed. I guess h5py is not necessary in code that only uses SWC morphologies?

h5py is also not set as a dependency for pip, so it generated an error after installing NeuroM using pip.

Names remove_duplicates and H5.remove_duplicate_points are misleading

The names remove_duplicates and H5.remove_duplicate_points suggest all duplicate points are removed. However the action taken is to remove identical consecutive points that are on adjacent section boundaries. We need names that reflect that. Suggestions:

  • no_section_boundary_duplicates and H5.remove_section_boundary_duplicates

get_features function

Hi
In pneumatk there was a function named get_features, which:

Extract features from set of morphologies represented by PopulationSet and
output results inside .csv file.

The feature set can be set with the parameter 'featurefile',
otherwise it will be the default list of features described here.

Is it possible to add this function to NeuroM? (If it already exists, I did not manage to find it.)

Thanks.

duplicate point removal isn't really optional

The removal of duplicate points in HDF5 files is only optional deep inside the implementation code. It should be optional at a high level. It isn't obvious how to implement this, but we could give the neuron loading functions some optional keyword arguments that are passed on to the readers.

See related issue #178: point removal should be applicable to all file types, not just HDF5.

Compute histogram of dendrite/axon density

Dan reports that the user needs a function to get the density (length or volume) of a tree along a selected axis (given by a user-defined vector). The correct fraction of each segment should be represented in each bin. This requires a function to locate each segment in space, so that we can compute intersections between a segment and a plane.
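A geometric sketch of the binning on plain arrays (function name and array layout are mine), assigning to each bin the fraction of each segment's length whose projection falls inside it:

```python
import numpy as np

def length_density_histogram(segments, axis, bin_edges):
    """Histogram of total neurite length along a user-defined axis.

    `segments`: (N, 2, 3) start/end points; `axis`: direction vector;
    `bin_edges`: monotonic edges along the axis."""
    u = np.asarray(axis, dtype=float)
    u /= np.linalg.norm(u)
    hist = np.zeros(len(bin_edges) - 1)
    for p0, p1 in np.asarray(segments, dtype=float):
        length = np.linalg.norm(p1 - p0)
        a, b = sorted((np.dot(p0, u), np.dot(p1, u)))
        if a == b:  # segment perpendicular to the axis: all in one bin
            hist[np.searchsorted(bin_edges, a, side='right') - 1] += length
            continue
        for i in range(len(hist)):
            overlap = min(b, bin_edges[i + 1]) - max(a, bin_edges[i])
            if overlap > 0.0:
                hist[i] += length * overlap / (b - a)
    return hist
```

Volume density would follow the same scheme with per-segment volume in place of length.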
