loli / medpy
Medical image processing in Python
Home Page: http://loli.github.io/medpy/
License: GNU General Public License v3.0
Since Python 3, the compiled extensions are no longer named by their bare module name (e.g., maxflow.so); instead a system identifier is appended (e.g., maxflow.cpython-35m-x86_64-linux-gnu.so). This in itself is not a problem, as the system dynamically loads the correct version, but the fact should be reflected somewhere in the documentation.
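The mechanism can be seen from the standard library: the interpreter accepts a list of suffixes for compiled extensions, and the versioned name matches one of them, so the import still resolves. A minimal check:

```python
import importlib.machinery

# The extension suffixes the running interpreter accepts for compiled
# modules; a file like maxflow.cpython-35m-x86_64-linux-gnu.so matches the
# versioned suffix, so "import maxflow" resolves it just like maxflow.so.
print(importlib.machinery.EXTENSION_SUFFIXES)
```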
Now that #60 is merged, can we get a new release of medpy to PyPI?
I've been seeing this term everywhere but couldn't figure it out, nor find anything on it.
I'm rewriting the Tamura textural coarseness feature (for my own repo, but I would like to merge it here) and would like to support whatever you understand under the term voxelspacing.
Any ideas?
In the current ordering, with setuptools 15.00, the installation of scipy fails.
It seems that the packages are installed in the order
medpy
scipy
numpy
But scipy requires numpy and throws an error during install. Hence we are left with medpy and numpy installed, but no scipy.
This might be related to pypa/pip#2478.
This should be fixed soon!
When I try to load an image, the error troubles me a lot.
I run the following lines:
from medpy.io import load
image_data, image_header = load(test_data)
and the results are:
ImageLoadingError Traceback (most recent call last)
in ()
----> 1 image_data, image_header = load(test_data)
/home/lx/anaconda2/lib/python2.7/site-packages/medpy/io/load.pyc in load(image)
199 logger.debug('Module {} signaled error: {}.'.format(loader, e))
200
--> 201 raise err
202
203 def __load_nibabel(image):
ImageLoadingError: Failes to load image /home/lx/Personal/source code/lasc-master/data/mri/testing/b002/image.mhd as Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'LazyITKModule' object has no attribute 'AnalyzeImageIO'
Add a third and fourth option to medpy.filter.smoothing.anisotropic_diffusion. See Black, 1997 and this:
Biweight:
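A sketch of what the biweight option could look like (a hypothetical implementation of the Tukey biweight conduction function from Black et al., not medpy code; the per-dimension signature f(delta, spacing, dim_idx) and the kappa default are assumptions):

```python
import numpy as np

def tukey_biweight(delta, spacing, dim_idx, kappa=50.0):
    # Tukey biweight conduction: flux is exactly zero beyond kappa, which
    # preserves edges more aggressively than the two exponential/rational
    # Perona-Malik options currently offered.
    scaled = delta / (kappa * spacing[dim_idx])
    cond = np.where(np.abs(scaled) <= 1.0, 0.5 * (1.0 - scaled ** 2) ** 2, 0.0)
    return cond * delta
```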
from medpy.io import load
import SimpleITK
import vtk
image_data, image_header = load('/Users/N01-T2.mha')
print image_data.shape
Traceback (most recent call last):
File "", line 1, in
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 880, in runfile
execfile(filename, namespace)
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/Users/wuzhenglin/Python_nice/SAL_LUNG/test.py", line 140, in
changeage()
File "/Users/wuzhenglin/Python_nice/SAL_LUNG/test.py", line 42, in changeage
image_data, image_header = load('/Users/wuzhenglin/Python_nice/SAL_BRAIN/brain_healthy_dataset/Normal001-T2.mha')
File "/Users/wuzhenglin/anaconda/lib/python2.7/site-packages/medpy/io/load.py", line 201, in load
raise err
medpy.core.exceptions.ImageLoadingError: Failes to load image /Users/wuzhenglin/Python_nice/SAL_BRAIN/brain_healthy_dataset/Normal001-T2.mha as
Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module:
'LazyITKModule' object has no attribute 'AnalyzeImageIO'
If nosetest was installed globally, it used the globally set Python version instead of the local one. This can be circumvented by running the tests with python3 -m "nose".
Adapt the README in tests/ to replace the old recommendation to call the nosetest command with python3 -m "nose".
Hi Oskar,
I am using Python 2.7 (Python(x,y)).
import medpy.filter
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\filter__
File "build\bdist.win32\egg\medpy\filter\bi
File "C:\Python27\lib\site-packages\scipy\n
module>
from .filters import *
File "C:\Python27\lib\site-packages\scipy\n
dule>
from . import _ni_support
ImportError: cannot import name _ni_support
import medpy.metric
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\metric\__init__.py", line 104, in <module>
File "build\bdist.win32\egg\medpy\metric\binary.py", line 25, in <module>
File "C:\Python27\lib\site-packages\scipy\ndimage\__init__.py", line 172, in <module>
from .filters import *
File "C:\Python27\lib\site-packages\scipy\ndimage\filters.py", line 37, in <module>
from scipy.misc import doccer
File "C:\Python27\lib\site-packages\scipy\misc\__init__.py", line 47, in <module>
from scipy.special import comb, factorial, factorial2, factorialk
File "C:\Python27\lib\site-packages\scipy\special\__init__.py", line 546, in <module>
from ._ufuncs import *
ImportError: DLL load failed: Die angegebene Prozedur wurde nicht gefunden. (The specified procedure could not be found.)
import medpy.features
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\features\__init__.py", line 155, in <module>
File "build\bdist.win32\egg\medpy\features\histogram.py", line 25, in <module>
File "C:\Python27\lib\site-packages\scipy\stats\__init__.py", line 334, in <module>
from .stats import *
File "C:\Python27\lib\site-packages\scipy\stats\stats.py", line 181, in <module>
import scipy.special as special
File "C:\Python27\lib\site-packages\scipy\special\__init__.py", line 546, in <module>
from ._ufuncs import *
ImportError: DLL load failed: Die angegebene Prozedur wurde nicht gefunden. (The specified procedure could not be found.)
import medpy.graphcut
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named graphcut
import medpy.itkvtk
Traceback (most recent call last):
File "", line 1, in
File "build\bdist.win32\egg\medpy\itkvtk\__init__.py", line 71, in <module>
File "build\bdist.win32\egg\medpy\itkvtk\filter\__init__.py", line 17, in <module>
File "build\bdist.win32\egg\medpy\itkvtk\filter\image.py", line 24, in <module>
# See the README file for information on usage and redistribution.
ImportError: No module named itk
The import of the following submodules works:
import medpy.io
import medpy.core
import medpy.utilities
I was using the MICCAI BRATS 2015 training dataset. I took a list of images and their respective masks.
import numpy as np
import SimpleITK as sitk
from medpy.filter import IntensityRangeStandardization

# f = list of filenames for the t2 sequence
im_ar = [sitk.GetArrayFromImage(sitk.ReadImage(i)) for i in f]
im_ar = np.array(im_ar)
im_msk = im_ar > 0
n = IntensityRangeStandardization()
# train and transform
r, out = n.train_transform([i[m] for i, m in zip(im_ar, im_msk)])
for i, m, o in zip(im_ar, im_msk, out):
    i[m] = o
When I checked some generic output o:
[array([-276.6272986 , -264.94980174, -266.89605121, ..., -266.89605121,
-272.73479964, -272.73479964]),
[-256.55019873, -145.10180764, -40.20920427, ..., 18.69724008,
-53.32077969, -276.21756186],
array([-242.07590339, -180.49569523, -160.90199263, ..., -256.07140525,
-261.66960599, -278.46420821]),
array([-263.61638514, -270.51243865, -261.89237176, ..., -138.62541528,
-142.93544873, -268.78842527]),
array([-289.90758914, -190.99232162, -271.58994701, ..., -125.04880994,
-2.28223993, -125.04880994]),
array([-271.58994701, -244.11348381, -264.26289016, ..., -262.43112594,
-247.77701223, -267.92641858]),
array([-255.26526434, -237.32862916, -247.29342648, ..., -251.27934541,
-217.39903452, -257.2582238 ]),
array([-266.97390119, -261.99150253, -266.97390119, ..., -247.04430655,
-197.22031995, -222.13231325]),
array([-267.50772962, -262.16944534, -272.8460139 , ..., -239.03688013,
-247.9340206 , -265.72830153]),
array([-260.52822525, -241.17910424, -233.92318386, ..., -262.94686538,
-279.87734627, -282.29598639]),
array([-268.32900412, -265.59142244, -264.2226316 , ..., 22.02947516,
-257.3786774 , -265.59142244]),
array([-278.91700386, -267.92641858, -267.92641858, ..., -289.90758914,
-286.24406072, -286.24406072]),
array([-273.54950873, -276.44625213, -279.34299554, ..., 180.66066588,
121.10979817, -288.03322576]),
array([-281.79748804, -201.92697517, -146.77828771, ..., -281.79748804,
-283.69916692, -283.69916692]),
array([-274.22631793, -253.27230487, -248.61585753, ..., -269.56987059,
-269.56987059, -255.60052855]),
array([-263.2371022 , -258.25470354, -248.28990621, ..., -243.30750755,
-263.2371022 , -248.28990621]),
array([-260.39001725, -253.27230487, -256.83116106, ..., -260.39001725,
-256.83116106, -260.39001725]),
array([-268.21950086, -273.20189952, -273.20189952, ..., -263.2371022 ,
-270.71070019, -270.71070019]),
array([-272.64829967, -220.05631381, -214.52031529, ..., -275.41629892,
-275.41629892, -278.18429818]),
array([-271.95629985, -271.95629985, -271.95629985, ..., -271.95629985,
-268.84230069, -271.95629985])]
The relative bin deviation implemented in medpy.metric.histogram.relative_bin_deviation is a real metric, not a semi-metric, as it fulfils the triangle inequality.
To ensure compatibility over all tests. Affects only the file
https://github.com/loli/medpy/blob/master/tests/metric_/histogram.py
Line 1169 in 7e9e332
When I used these two pieces of code:
evaluate.py: https://paste.ubuntu.com/p/87yBYMhctC/
surface.py: https://paste.ubuntu.com/p/Q72rb2PH7j/
I got an error:
Traceback (most recent call last):
File "evaluate.py", line 79, in <module>
outpath='117_baseline.csv')
File "evaluate.py", line 42, in evaluate
loaded_label.header.get_zooms()[:3])
File "evaluate.py", line 26, in get_scores
volscores['msd'] = metric.hd(label, pred, voxelspacing=vxlspacing)
File "/xwd/envs/python27/lib/python2.7/site-packages/medpy/metric/binary.py", line 348, in hd
hd1 = __surface_distances(result, reference, voxelspacing, connectivity).max()
File "/xwd/envs/python27/lib/python2.7/site-packages/medpy/metric/binary.py", line 1169, in __surface_distances
result_border = result - binary_erosion(result, structure=footprint, iterations=1)
TypeError: numpy boolean subtract, the - operator, is deprecated, use the bitwise_xor, the ^ operator, or the logical_xor function instead.
I noticed they use medpy. The bug occurs in these two lines:
result_border = result - binary_erosion(result, structure=footprint, iterations=1)
reference_border = reference - binary_erosion(reference, structure=footprint, iterations=1)
I changed these two lines to:
result_border = result ^ binary_erosion(result, structure=footprint, iterations=1)
reference_border = reference ^ binary_erosion(reference, structure=footprint, iterations=1)
and it works. Could you please tell me why? Thanks.
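The deprecation can be reproduced in a few lines. For boolean masks where the erosion result is a subset of the original mask (always true for binary_erosion), `-` and `^` select exactly the same border voxels, which is why the substitution is safe:

```python
import numpy as np

mask = np.array([False, True, True, True, False])
eroded = np.array([False, False, True, False, False])  # stand-in for binary_erosion output

# mask - eroded  # raises TypeError on NumPy >= 1.13: boolean subtract is removed
border = mask ^ eroded  # XOR keeps exactly the voxels removed by the erosion
print(border)
```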
Hi, I am trying to use the IntensityRangeStandardization function, however I am running into the following issue:
medpy.filter.IntensityRangeStandardization.SingleIntensityAccumulationError: Image no.0 shows an unusual single-intensity accumulation that leads to a situation where two percentile values are equal. This situation is usually caused, when the background has not been removed from the image. Another possibility would be to reduce the number of landmark percentiles landmarkp or to change their distribution.
I am trying to understand what exactly you mean by "the background has not been removed from the image".
For instance in the following code I get the same error:
import numpy
from medpy.filter import IntensityRangeStandardization
base_image = numpy.asarray([[0,0,0],[3,5,4],[7,8,9],[2,4,8]])
good_trainingset = [base_image + x for x in range(10)]
print base_image.dtype
print type(good_trainingset)
print good_trainingset
irs = IntensityRangeStandardization(cutoffp=(1, 99), landmarkp=[10, 20, 30, 40, 50, 60, 70, 80, 90], stdrange='auto')
irs.train_transform(good_trainingset,surpress_mapping_check=True)
print irs
Output
int64
<type 'list'>
[array([[0, 0, 0],
[3, 5, 4],
[7, 8, 9],
[2, 4, 8]]), array([[ 1, 1, 1],
[ 4, 6, 5],
[ 8, 9, 10],
[ 3, 5, 9]]), array([[ 2, 2, 2],
[ 5, 7, 6],
[ 9, 10, 11],
[ 4, 6, 10]]), array([[ 3, 3, 3],
[ 6, 8, 7],
[10, 11, 12],
[ 5, 7, 11]]), array([[ 4, 4, 4],
[ 7, 9, 8],
[11, 12, 13],
[ 6, 8, 12]]), array([[ 5, 5, 5],
[ 8, 10, 9],
[12, 13, 14],
[ 7, 9, 13]]), array([[ 6, 6, 6],
[ 9, 11, 10],
[13, 14, 15],
[ 8, 10, 14]]), array([[ 7, 7, 7],
[10, 12, 11],
[14, 15, 16],
[ 9, 11, 15]]), array([[ 8, 8, 8],
[11, 13, 12],
[15, 16, 17],
[10, 12, 16]]), array([[ 9, 9, 9],
[12, 14, 13],
[16, 17, 18],
[11, 13, 17]])]
Traceback (most recent call last):
File "testdelet.py", line 32, in <module>
irs.train_transform(good_trainingset,surpress_mapping_check=True)
File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 345, in train_transform
File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 260, in train
File "build/bdist.linux-x86_64/egg/medpy/filter/IntensityRangeStandardization.py", line 436, in __compute_stdrange
medpy.filter.IntensityRangeStandardization.SingleIntensityAccumulationError: Image no.0 shows an unusual single-intensity accumulation that leads to a situation where two percentile values are equal. This situation is usually caused, when the background has not been removed from the image. Another possibility would be to reduce the number of landmark percentiles landmarkp or to change their distribution.
The same happens when I set:
base_image = numpy.asarray([[1,1,1],[3,5,4],[7,8,9],[2,4,8]])
Thank you.
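The collapse can be demonstrated directly (a sketch with plain numpy percentiles, not medpy internals — the exact percentile pair medpy compares is an assumption): the repeated zero "background" voxels make two low percentiles identical, and masking them out avoids it.

```python
import numpy as np

# The three zero-valued "background" voxels make the 1st and 10th
# percentile identical, which is the equal-percentile situation the
# exception complains about.
base_image = np.asarray([[0, 0, 0], [3, 5, 4], [7, 8, 9], [2, 4, 8]])
p1, p10 = np.percentile(base_image, [1, 10])
print(p1 == p10)  # True: both are 0.0

# Training on foreground voxels only (via a boolean mask) separates them:
foreground = base_image[base_image > 0]
q1, q10 = np.percentile(foreground, [1, 10])
print(q1 == q10)  # False
```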
First of all, great work on producing a nice and useful package. I am trying to use it to build one of my own packages: hiwenet.
While writing unit tests for my own package, I looked into the tests for this package, and tests for medpy.metric.histogram seem to be missing. Not sure if they were misplaced or you are yet to get to them, so this is just to learn how much testing has already been done on the functions implemented in medpy.metric.histogram.
I forked this package and am going to try writing a few tests myself, and will send a PR when I am done. Let me know if you have any good resources (tests in other packages, great implementations elsewhere, etc.).
When using 'medpy.metric.binary.hd' or 'medpy.metric.binary.asd', I get the following error:
TypeError: numpy boolean subtract, the - operator, is deprecated, use the bitwise_xor, the ^ operator, or the logical_xor function instead.
With stack trace pointing to a line in medpy/metric/binary.pyc
1169 result_border = result - binary_erosion(result, structure=footprint, iterations=1)
Does the operator need any update? (I'm using MedPy 0.3.0)
/share/data_bert1/mwilms/Projects/RTUKE/Patient01/4DCT$ medpy_join_xd_to_xplus1d.py ~/combined.nii.gz 01.dcm 02.dcm 03.dcm 04.dcm 05.dcm 06.dcm 07.dcm 08.dcm 09.dcm 10.dcm -s0.2 -v
26.03.2014 13:48:48 [INFO ] Loading image 01.dcm...
26.03.2014 13:48:50 [INFO ] Loading image 02.dcm...
26.03.2014 13:48:50 [INFO ] Loading image 03.dcm...
26.03.2014 13:48:51 [INFO ] Loading image 04.dcm...
26.03.2014 13:48:52 [INFO ] Loading image 05.dcm...
26.03.2014 13:48:52 [INFO ] Loading image 06.dcm...
26.03.2014 13:48:53 [INFO ] Loading image 07.dcm...
26.03.2014 13:48:54 [INFO ] Loading image 08.dcm...
26.03.2014 13:48:54 [INFO ] Loading image 09.dcm...
26.03.2014 13:48:55 [INFO ] Loading image 10.dcm...
Traceback (most recent call last):
File "/usr/local/bin/medpy_join_xd_to_xplus1d.py", line 7, in
execfile(file)
File "/data_kruemel1/mastmeyer/medpy/bin/medpy_join_xd_to_xplus1d.py", line 126, in
main()
File "/data_kruemel1/mastmeyer/medpy/bin/medpy_join_xd_to_xplus1d.py", line 100, in main
update_header_from_array_nibabel(example_header, output_data)
File "/data_kruemel1/mastmeyer/medpy/medpy/io/header.py", line 305, in __update_header_from_array_nibabel
hdr.get_header().set_data_shape(arr.shape)
File "/usr/lib/python2.7/dist-packages/dicom/dataset.py", line 253, in __getattr
raise AttributeError, "Dataset does not have attribute '%s'." % name
AttributeError: Dataset does not have attribute 'get_header'.
I've installed medpy from the master branch, however I'm still getting a Python 3 incompatibility: the xrange function was removed in Python 3 and replaced by range. (For short lists, it makes no difference in Python 2.)
[...]\src\medpy\medpy\filter\smoothing.py", line 135, in anisotropic_diffusion
deltas = [numpy.zeros_like(out) for _ in xrange(out.ndim)]
NameError: name 'xrange' is not defined
I can make the changes and a PR if you're interested, let me know
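The one-line fix can be checked in isolation (the array shape here is arbitrary; only the failing line from smoothing.py is reproduced):

```python
import numpy as np

# The failing line from medpy/filter/smoothing.py, with xrange replaced by
# range; range in Python 3 is lazy, so behaviour and memory use match
# Python 2's xrange.
out = np.zeros((5, 6, 7))
deltas = [np.zeros_like(out) for _ in range(out.ndim)]
print(len(deltas))  # one delta array per image dimension
```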
To ease image support after default installation.
I installed MedPy in development mode. Also, libboost-python-dev built with no errors.
In Python, I can do
from medpy.io import load
But when I try from medpy.graphcut import graphcut_from_voxels I get
ImportError: /anaconda3/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0)
Do you have any idea about this error?
Thank you!
The lines
print method_str
print h1
print h2
do not follow Python 3 syntax.
The current release on PyPI doesn't work for Python 3 due to a single syntax error (merged pull request) raising a runtime error. Since it's been 3 years, it's time to release another version with all the bug fixes so far. Thanks.
This issue is to track adding Python 3 support to master, i.e. merging the Python 3 branch in.
Fix.
medpy/medpy/features/intensity.py
I have installed using pip3
from medpy.io import load
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.4/dist-packages/medpy/io/__init__.py", line 59, in <module>
from .load import load
File "/usr/local/lib/python3.4/dist-packages/medpy/io/load.py", line 28, in <module>
from . import header
File "/usr/local/lib/python3.4/dist-packages/medpy/io/header.py", line 27, in <module>
from ..core import Logger
File "/usr/local/lib/python3.4/dist-packages/medpy/core/__init__.py", line 55, in <module>
from .logger import Logger
File "/usr/local/lib/python3.4/dist-packages/medpy/core/logger.py", line 94
raise RuntimeError, 'Only one instance of Logger is allowed!'
For a description, see comments in #6 .
Error:
medpy.core.exceptions.ImageLoadingError: Failes to load image 1/1_Prim.mha as Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'LazyITKModule' object has no attribute 'AnalyzeImageIO'.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.449.948&rep=rep1&type=pdf offers four different ways of setting the kappa automatically.
I see that there is an old branch dedicated to Python 3 support. Do you still plan to support Python 3 in the future?
I can contribute to it if you need a hand. If you set up a Windows-based CI like AppVeyor, it should be pretty straightforward to achieve.
EDIT: The first thing to fix is the import issues that make testing impossible for now.
I just finished installing medpy (Release-0.3.0) in my Python library, but I'm running into a few issues.
What works:
medpy's io module (load, header)
What gives me problems:
medpy.graphcut
When I try and call medpy.graphcut, I run into the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<myusername>/medpy/medpy/graphcut/__init__.py", line 200, in <module>
from .maxflow import GraphDouble, GraphFloat, GraphInt # this always triggers an error in Eclipse, but is right
ImportError: No module named 'medpy.graphcut.maxflow'
However, there's one major thing wrong with this error, which is that the directory it's referencing is where I cloned the package using git, NOT the package library where it was supposed to install the package. If I drop the 'maxflow.py' file into that folder (along with the .so file, which it subsequently requests after an error from running the same import function again), I get another error saying the following:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<myusername>/medpy/medpy/graphcut/__init__.py", line 206, in <module>
import energy_label
ImportError: No module named 'energy_label'
Could you please assist me in this matter?
Thanks!
Right now the anisotropic diffusion filtering takes an option {1, 2, 3}, which limits the set of functions that can be used. I suggest allowing the user to pass in a function.
The signature should be: f(delta, spacing, dim_idx).
This will allow:
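A minimal sketch of the proposed interface (names are hypothetical; the call at the end is the suggested API, not current medpy):

```python
import numpy as np

def my_conduction(delta, spacing, dim_idx, kappa=30.0):
    # Perona-Malik exponential conduction, expressed in the proposed
    # f(delta, spacing, dim_idx) form; any user-defined function with this
    # signature could be swapped in.
    return np.exp(-(delta / (kappa * spacing[dim_idx])) ** 2.0) * delta

# hypothetical usage: anisotropic_diffusion(img, option=my_conduction)
```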
Right now the extras_require is:
extras_require = {
'Additional image formats' : ["itk >= 3.16.0"]
},
which contains spaces in the key, something I have never seen before. This causes problems when trying to pip install. Perhaps there is a way to escape it, but this seems non-intuitive:
$ pip install "medpy[Additional image formats]"
...
pip._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'[Additio'"
I suggest offering a key without spaces.
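A space-free key parses cleanly as an extra name ("itk" here is a suggested name, not the package's current key):

```python
# setup.py fragment: an extra named without spaces, so the requirement
# string "medpy[itk]" is valid for pip.
extras_require = {
    'itk': ["itk >= 3.16.0"],
}
# usage: pip install "medpy[itk]"
```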
Occurs when saving an image as .mhd (therefore probably when using the ITK bindings) at a location where an image of the same name already exists. Correct behaviour would be an error message stating that the target image already exists.
Does not appear when the -f flag is set.
Using nibabel as the third-party lib, the error did not occur.
Observed while calling:
medpy_intensity_range_standardization.py adc.mhd --load-model model.pkl --save-images=tmp/ -vd
The issue could be repeated with medpy_convert.py, but this script, like most, does a previous internal check whether the target image exists and is therefore usually not affected by the bug (except in race conditions).
.idea folder in the root dir was added by an insufficiently monitored merge. Remove it and review the whole causing merge.
I want to extract 3D patches of shape 32x32x32 from a 3D input. I have included an example that loads images from a directory and gets the shapes of the image axes. Please let me know how to extract 3D patch samples from this input.
from medpy.io import load
import numpy as np
import os
import h5py

data_path = ../....
for i in range(10):
    subject_name = 'subject-%d-' % i
    f = os.path.join(data_path, subject_name + 'C.hdr')
    img, header = load(f)
    inputs = img.astype(np.float32)
    A = inputs.shape[0]  # 142
    B = inputs.shape[1]  # 176
    C = inputs.shape[2]  # 181
    D = np.arange(A * B * C).reshape(A, B, C)
    print(D.shape)
How could I use a function to create patches of size 32x32x32 from this input? Please reply. Thanks!
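One way to get non-overlapping 32x32x32 patches is a plain NumPy sketch (not a medpy function; the helper name is made up): crop each axis down to a multiple of 32, then reshape and transpose into blocks.

```python
import numpy as np

def extract_patches(volume, size=32):
    # crop each axis to a multiple of `size`, then carve into cubic blocks
    a, b, c = (d - d % size for d in volume.shape)
    v = volume[:a, :b, :c]
    v = v.reshape(a // size, size, b // size, size, c // size, size)
    return v.transpose(0, 2, 4, 1, 3, 5).reshape(-1, size, size, size)

# with the shapes from the question (142, 176, 181): 4*5*5 = 100 patches
vol = np.zeros((142, 176, 181), dtype=np.float32)
print(extract_patches(vol).shape)  # (100, 32, 32, 32)
```

The trailing voxels that do not fill a whole patch are discarded here; padding the volume first would keep them.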
Splitting an image of shape (100, 100, 50) with
medpy_reslice_3d_to_4d.py in.nii.gz out.nii.gz 1 10
results in (50, 100, 10, 10) instead of the expected (100, 10, 50, 10). The voxel spacing, on the other hand, is set as expected, so an inconsistency results.
Look up how and add them to the package.
medpy_join_xd_to_xplus1d.py out.nii.gz in1.mhd in2.mhd -s0.5
results in
Segmentation fault (core dumped)
as calling
__is_header_nibabel(example_header)
with an ITK header as argument causes a segmentation fault.
Should be fixed or circumvented!
Hi, I think you should get a DOI for this package, so others like me can cite this package properly. Let me know once you get it from places like Zenodo.
It seems that header.get_offset() produces wrong offset values when supplied with a NifTi header. I assume that the signs of the main diagonal elements of the qform resp. sform matrices have to be taken into account, i.e.
[1 0 0 10]
[0 1 0 10]
[0 0 1 10]
should produce the offset (10, 10, 10), while
[-1 0 0 10]
[ 0 1 0 10]
[ 0 0 1 10]
has to result in (-10, 10, 10).
But I am not sure about it yet.
What about cases where the matrix looks like this:
[-0.5 0.5 0 10]
[ 0 1 0 10]
[ 0 0 1 10]
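The proposed sign rule can be sketched as follows (hypothetical helper, not medpy code; note the sheared matrix in the last example is not covered by this simple diagonal rule):

```python
import numpy as np

def signed_offset(affine):
    # Scale each translation component by the sign of the matching main
    # diagonal element of the qform/sform matrix.
    return tuple(float(np.sign(affine[i, i]) * affine[i, 3]) for i in range(3))

m = np.array([[-1., 0., 0., 10.],
              [ 0., 1., 0., 10.],
              [ 0., 0., 1., 10.],
              [ 0., 0., 0., 1.]])
print(signed_offset(m))  # (-10.0, 10.0, 10.0)
```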
Using the Anaconda distro on Windows with Python 3.5.2,
I'm getting the error "ValueError: Unknown MS Compiler version 1900"
when running pip install medpy or
pip install git+https://github.com/loli/medpy
(Installation into a venv with Python 2.7 went OK.)
I need to convert an old project (not mine) to Python 3.5, so sticking to 2.7 is not really an option.
The complete error log is in the attached file:
import medpy error.txt
When the pad function is called with mode='mirror' and size >> image.shape, the returned array starts with lines of zeros.
Example:
In [1]: import numpy
In [2]: from medpy.filter.utilities import pad
In [3]: test = numpy.ones([3,3])
In [4]: pad(test, size=3,mode='mirror')
Out[4]:
array([[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.],
[ 1., 1., 1., 1., 1.]])
In [5]: pad(test, size=7,mode='mirror')
Out[5]:
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 1., 1., 1., 1., 1., 1., 1., 1.]])
In [6]: pad(test, size=9,mode='mirror')
Out[6]:
array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[ 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
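For comparison, numpy's own reflect padding handles pad widths larger than the array. Assuming medpy's size=N corresponds to a pad width of N//2 per side (which the 5x5 output for size=3 suggests), the size=7 case should come out all ones:

```python
import numpy as np

test = np.ones([3, 3])
# pad width 3 per side corresponds to the size=7 case above; reflect
# padding of an all-ones array must stay all ones, with no zero rows.
padded = np.pad(test, 3, mode='reflect')
print(padded.shape)               # (9, 9)
print(bool((padded == 1).all()))  # True
```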
"connectivity : int
The neighbourhood/connectivity considered when determining the surface of the binary objects. This value is passed to scipy.ndimage.morphology.generate_binary_structure and should usually be > 1. Presumably does not influence the result in the case of the Hausdorff distance."
I have tested this on 3D objects, and it actually has a big influence on the result.
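The influence is plausible, because the connectivity changes which structuring element the surface extraction uses (a quick check of the two extremes in 3D):

```python
from scipy.ndimage import generate_binary_structure

# connectivity selects the neighbourhood used when eroding to find the
# surface: in 3D, 1 keeps only the 6 face neighbours (7-voxel cross),
# while 3 uses the full 3x3x3 cube, so the extracted border voxels (and
# hence the distances) can differ.
s1 = generate_binary_structure(3, 1)
s3 = generate_binary_structure(3, 3)
print(int(s1.sum()), int(s3.sum()))  # 7 27
```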
Hello,
I have installed all modules needed to read and save mhd files (medpy, itkwrapper, itkbridgenmpy and vtk). For installing the ITK wrapper I followed the instructions on this webpage on Ubuntu 14: http://pythonhosted.org/MedPy/installation/itkwrapper4.7.html
I just did it for itkwrapper4.10.
Now, I can read an mhd file and convert it to numpy, but when I want to save the numpy array as mhd I get the following error:
"medpy.io.save(image_orig, directory + str(n+1).zfill(3) + "/image_resize.mhd", hdr=image_header_orig)
File "build/bdist.linux-x86_64/egg/medpy/io/save.py", line 192, in save
medpy.core.exceptions.ImageSavingError: Failed to save image /home/user/Downloads/Datasets/LA-challenge2013/train-mri/a001/image_resize.mhd as type Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: Cannot get an instance of NumPy array."
Also, this is the piece of code used to read and resize:
image_orig, image_header_orig = medpy.io.load(lstFiles_orig[n])
image_seg, image_header_seg = medpy.io.load(lstFiles_gt[n])
image_orig = transform.resize(image_orig, (320, 320, 110), order=3, mode='constant', cval=0)
I would highly appreciate it if anybody could help me with this issue. Thanks.
Thanks for sharing your great work!
I have 20 training subjects with image and label size 112x128x256. I stored the images and their labels in lists such as imgs and masks. After performing the histogram normalization as a preprocessing step, I got
irs = IntensityRangeStandardization()
trained_model, transformed_images = irs.train_transform([i[m] for i, m in zip(imgs, masks)], surpress_mapping_check="ignore")
I want to extract the results of the 20 images after transforming. How can I obtain them? I tried the code below, but it only gives me the last image's result (the 20th image):
for image, m, o in zip(imgs, masks, transformed_images):
    image[m] = o
print(image.shape)  # 112x128x256
Thanks!
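With toy arrays (not the BRATS-sized data), one can check that the writeback loop does update every list entry in place, so imgs itself holds all transformed volumes afterwards; only the final print outside the loop shows the last image:

```python
import numpy as np

# three small stand-in volumes instead of 20 subjects of 112x128x256
imgs = [np.zeros((4, 4)) for _ in range(3)]
masks = [np.ones((4, 4), dtype=bool) for _ in range(3)]
transformed = [np.full(16, i + 1.0) for i in range(3)]

for image, m, o in zip(imgs, masks, transformed):
    image[m] = o  # in-place: modifies the arrays stored inside imgs

print([float(img.mean()) for img in imgs])  # [1.0, 2.0, 3.0]
```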
tests/graphcut_/energy_label.py and cut.py are still trying to call the graph-cut functions according to the old system.
ImportError: No module named vtk
Change the code to not use vtk? How can this be resolved?
I want to save a numpy array as a .mha or .mhd file.
Here is the dummy code I tried:
data, header = medpy.io.load('/home/sumathipalaya/Desktop/ERCNoERCNoBG/T2WI/39812763482/0.dcm')
medpy.io.save(data, 'xxx.mhd', header)
It gives this error:
medpy.io.save(data, 'xxx.mhd')
Traceback (most recent call last):
File "", line 1, in
medpy.io.save(data, 'xxx.mhd')
File "/home/sumathipalaya/anaconda2/lib/python2.7/site-packages/medpy/io/save.py", line 192, in save
raise ImageSavingError('Failed to save image {} as type {}. Reason signaled by third-party module: {}'.format(filename, type_to_string[image_type], e))
ImageSavingError: Failed to save image xxx.mhd as type Itk/Vtk MetaImage (.mhd, .mha/.raw). Reason signaled by third-party module: 'module' object has no attribute 'swig'
I am using medpy version 0.3.0 installed via pip on Python 2; there were no errors or warnings during installation. The dependencies were installed using Anaconda2, as I use this package manager. What am I doing wrong?
As an aside, is the header
argument to medpy.io.save truly optional? If so, what does medpy impute for the spacing, origin, etc.?
Wow, really nice collection of handy stuff in here.
I just noted in
medpy/medpy/features/intensity.py
Line 341 in a3e62a2