
voxelmorph's Introduction

VoxelMorph: learning-based image registration

VoxelMorph is a general-purpose library of learning-based tools for image alignment/registration and, more generally, for modeling with deformations.

Tutorial

We have several VoxelMorph tutorials:

Instructions

To use the VoxelMorph library, either clone this repository and install the requirements listed in setup.py, or install directly with pip:

pip install voxelmorph

Pre-trained models

See the list of pre-trained models available here.

Training

If you would like to train your own model, you will likely need to customize some of the data-loading code in voxelmorph/generators.py for your own datasets and data formats. However, it is possible to run many of the example scripts out-of-the-box, assuming that you provide a list of filenames in the training dataset. Training data can be in NIfTI, MGZ, or npz (NumPy) format. It's assumed that each npz file in your data list has a vol parameter, which points to the image data to be registered, and an optional seg variable, which points to a corresponding discrete segmentation (for semi-supervised learning). It's also assumed that the shape of all training image data is consistent, although this can, of course, be handled in a customized generator if desired.
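As a concrete sketch of this layout (the file path and shapes are hypothetical), an npz training file can be written with NumPy as follows:

import numpy as np

vol = np.random.rand(160, 192, 224).astype('float32')  # image to be registered
seg = np.zeros((160, 192, 224), dtype='int32')         # optional discrete segmentation

# the generators look up these exact keys
np.savez_compressed('/images/scan01.npz', vol=vol, seg=seg)

with np.load('/images/scan01.npz') as f:
    print(f['vol'].shape, f['seg'].dtype)  # (160, 192, 224) int32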

For a given image list file /images/list.txt and output directory /models/output, the following script will train an image-to-image registration network (described in MICCAI 2018 by default) with an unsupervised loss. Model weights will be saved to a path specified by the --model-dir flag.

./scripts/tf/train.py --img-list /images/list.txt --model-dir /models/output --gpu 0

The --img-prefix and --img-suffix flags can be used to add a consistent prefix or suffix to each path specified in the image list. Image-to-atlas registration can be enabled by providing an atlas file, e.g. --atlas atlas.npz. If you'd like to train using the original dense CVPR network (no diffeomorphism), use the --int-steps 0 flag to specify no flow integration steps. Use the --help flag to inspect all of the command-line options that can be used to fine-tune the network architecture and training.
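For illustration, with a hypothetical list file /images/list.txt containing bare names,

scan01
scan02
scan03

the prefix/suffix flags expand each line into a full path, e.g.

./scripts/tf/train.py --img-list /images/list.txt --img-prefix /images/ --img-suffix .npz --model-dir /models/output --gpu 0

resolves scan01 to /images/scan01.npz.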

Registration

If you simply want to register two images, you can use the register.py script with the desired model file. For example, if we have a model model.h5 trained to register a subject (moving) to an atlas (fixed), we could run:

./scripts/tf/register.py --moving moving.nii.gz --fixed atlas.nii.gz --moved warped.nii.gz --model model.h5 --gpu 0

This will save the moved image to warped.nii.gz. To also save the predicted deformation field, use the --save-warp flag. Both npz and NIfTI files can be used as input/output with this script.
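Once a warp is saved, here is a minimal sketch for applying it to another volume with voxelmorph.layers.SpatialTransformer (TensorFlow backend assumed; the file names and the npz key are hypothetical):

import numpy as np
import voxelmorph as vxm

moving = np.load('moving.npz')['vol']  # (H, W, D) image
warp = np.load('warp.npz')['vol']      # (H, W, D, 3) dense displacement field

# add the batch and channel dimensions the layer expects
moved = vxm.layers.SpatialTransformer(interp_method='linear')(
    [moving[np.newaxis, ..., np.newaxis], warp[np.newaxis]])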

Testing (measuring Dice scores)

To test the quality of a model by computing Dice overlap between an atlas segmentation and warped test scan segmentations, run:

./scripts/tf/test.py --model model.h5 --atlas atlas.npz --scans scan01.npz scan02.npz scan03.npz --labels labels.npz

Just like the training data, the atlas and test npz files include vol and seg parameters, and the labels.npz file contains a list of the corresponding anatomical labels to include in the computed Dice score.
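For reference, a minimal sketch of the per-label Dice overlap such a test computes, written from the standard definition rather than copied from test.py:

import numpy as np

def dice(seg1, seg2, labels):
    # Dice = 2 * |A intersect B| / (|A| + |B|), for each anatomical label
    scores = []
    for label in labels:
        a, b = (seg1 == label), (seg2 == label)
        denom = a.sum() + b.sum()
        scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0)
    return scores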

Parameter choices

CVPR version

For the CC loss function, we found a regularization parameter of 1 to work best. For the MSE loss function, we found 0.01 to work best.
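As a sketch, and assuming the --image-loss and --lambda flags of the current scripts/tf/train.py (check --help for your version), these choices translate to:

./scripts/tf/train.py --img-list /images/list.txt --model-dir /models/output --image-loss ncc --lambda 1
./scripts/tf/train.py --img-list /images/list.txt --model-dir /models/output --image-loss mse --lambda 0.01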

MICCAI version

For our data, we found image_sigma=0.01 and prior_lambda=25 to work best.

In the original MICCAI code, the parameters were applied after the scaling of the velocity field. With the newest code, this has been "fixed", with different default parameters reflecting the change. We recommend running the updated code. However, if you'd like to run the very original MICCAI 2018 model, please use xy indexing and the use_miccai_int network option, with the MICCAI 2018 parameters.

Spatial transforms and integration

  • The spatial transform code, found at voxelmorph.layers.SpatialTransformer, accepts N-dimensional affine and dense transforms, including linear and nearest neighbor interpolation options. Note that original development of VoxelMorph used xy indexing, whereas we are now emphasizing ij indexing.

  • For the MICCAI2018 version, we integrate the velocity field using voxelmorph.layers.VecInt. By default we integrate using scaling and squaring, which we found efficient. A usage sketch of both layers follows.
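A minimal sketch composing the two layers (TensorFlow backend assumed; shapes are illustrative, and the VecInt arguments mirror its default scaling-and-squaring configuration):

import numpy as np
import voxelmorph as vxm

vol = np.random.rand(1, 160, 192, 224, 1).astype('float32')            # batch of one image
vel = (np.random.randn(1, 160, 192, 224, 3) * 0.5).astype('float32')   # stationary velocity field

warp = vxm.layers.VecInt(method='ss', int_steps=7)(vel)                # integrate to a diffeomorphic warp
moved = vxm.layers.SpatialTransformer(interp_method='linear')([vol, warp])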

VoxelMorph papers

If you use VoxelMorph or some part of the code, please cite (see bibtex):

Notes

  • keywords: machine learning, convolutional neural networks, alignment, mapping, registration
  • data in papers: In our initial papers, we used publicly available data, but unfortunately we cannot redistribute it (due to the constraints of those datasets). We do a certain amount of pre-processing for the brain images we work with, to eliminate sources of variation and to compare algorithms on a level playing field. In particular, we perform FreeSurfer recon-all steps up to skull stripping and affine normalization to Talairach space, and crop the images via ((48, 48), (31, 33), (3, 29)); see the sketch below.
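A sketch of that crop in NumPy, under the assumption that each pair gives the number of voxels trimmed from the start and end of the corresponding axis of a 256^3 FreeSurfer-conformed volume (the input shape is an assumption, not stated above):

import numpy as np

vol = np.random.rand(256, 256, 256)  # placeholder conformed volume
crop = ((48, 48), (31, 33), (3, 29))
slices = tuple(slice(lo, dim - hi) for dim, (lo, hi) in zip(vol.shape, crop))
cropped = vol[slices]  # yields shape (160, 192, 224)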

We encourage users to download and process their own data. See a list of medical imaging datasets here. Note that you likely do not need to perform all of the preprocessing steps, and indeed VoxelMorph has been used in other work with other data.

Creation of deformable templates

To experiment with this method, please use train_template.py for unconditional templates and train_cond_template.py for conditional templates, which use the same conventions as VoxelMorph (please note that these files are less polished than the rest of the VoxelMorph library).

We've also provided an unconditional atlas in data/generated_uncond_atlas.npz.npy.

Model weights in h5 format are provided for the unconditional atlas here, and for the conditional atlas here.

Explore the atlases interactively here with tipiX!

SynthMorph

SynthMorph is a strategy for learning registration without acquired imaging data, producing powerful networks that are agnostic to MRI contrast (eprint arXiv:2004.10282). For a video and a demo showcasing the steps of generating random label maps from noise distributions and using these to train a network, visit synthmorph.voxelmorph.net.

We provide model files for a "shapes" variant of SynthMorph, which we train using images synthesized from random shapes only, and a "brains" variant, which we train using images synthesized from brain label maps. We train the brains variant by optimizing a loss term that measures volume overlap for a selection of brain labels. For registration with either model, please use the register.py script with the respective model weights.

Accurate registration requires the input images to be min-max normalized, such that voxel intensities range from 0 to 1, and to be resampled in the affine space of a reference image. The affine registration can be performed with a variety of packages, and we choose FreeSurfer. First, we skull-strip the images with SAMSEG, keeping brain labels only. Second, we run mri_robust_register:

mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale
mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale --ixform transform.lta --affine

where we replace --satit --iscale with --cost NMI for registration across MRI contrasts.
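A minimal sketch of the min-max normalization step with nibabel (file names hypothetical):

import nibabel as nib
import numpy as np

im = nib.load('in.nii.gz')
vol = im.get_fdata().astype('float32')

# rescale voxel intensities to [0, 1]
vol = (vol - vol.min()) / max(vol.max() - vol.min(), 1e-8)

nib.save(nib.Nifti1Image(vol, im.affine), 'in_norm.nii.gz')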

Data

While we cannot release most of the data used in the VoxelMorph papers, as those datasets prohibit redistribution, we thoroughly processed and re-released OASIS-1 while developing HyperMorph. We now include a quick VoxelMorph tutorial for training VoxelMorph on the neurite-oasis data.

Contact

For any code-related problems or questions, please open an issue; for general registration/VoxelMorph topics, please start a discussion.

voxelmorph's People

Contributors

adalca, ahoopes, avnishks, balakg, brf2, dyhan316, katiemlewis, mariannerakic, mu40, neel-dey, raisingbits, steffenczolbe, voxelmorph


voxelmorph's Issues

About model miccai2018_10_02_init1.h5

Dear author, I have a question about the model miccai2018_10_02_init1.h5. When training with the train_miccai2018.py file, there is a parameter 'load_model_file', which you describe as 'optional h5 model file to initialize with'. However, when I load the model miccai2018_10_02_init1.h5, I find that the trained model performs worse as the training epochs increase.
I want to know what the init model miccai2018_10_02_init1.h5 is and why it can be used to initialize the network.

how to visualize deformation field?

Hi,

I am reading the paper entitled "VoxelMorph: A Learning Framework for Deformable Medical Image Registration". In Fig. 6, columns 4-5 represent the deformation fields. I am wondering how you visualize the deformation field like that. Is there code in the repository capable of producing these figures?

Thanks for your work! Looking forward to your response.

code about the network structure

I have some doubts about the code implementation of the network structure, which seems to differ from the version in the CVPR paper.
In the code, the order of downsampling is different, and the feature maps are not the same. For example, the third layer is concatenated with the first layer (the original fixed and moving images).

flow_loss increase

Dear author,
I am training cvpr2018_net with lung volumes. During training, loss and spatial_transformer_1_loss decreased, but flow_loss increased, and I think all of the values are large. Is there anything wrong?
[screenshot]

Here is my train command:
python3 "/content/drive/My Drive/Colab Notebooks/voxelmorph/src/train.py" "/content/drive/My Drive/Colab Notebooks/voxelmorph/data/rigid_train0"
--atlas_file "/content/drive/My Drive/Colab Notebooks/voxelmorph/data/rigid_case1_00.nii" --model_dir "/content/drive/My Drive/Colab Notebooks/voxelmorph/model0"
--epochs 60 --steps_per_epoch 50
#--load_model_file ""

The data was rigidly registered and normalized to [0, 1], and all volumes were resized to 256x256x96.

How to solve the blurred warped image issue?

Dear Dr. Dalca,
I am sorry that I haven't yet solved my problem with the blurred warped images. I use Colin27 volumes as my atlas and ADNI volumes as the subject images. After intensity normalization to [0, 1] by simply using img = img/255, the maximum intensity of the Colin27 volumes is about 0.62 and that of the ADNI volumes is about 0.86; the minimum intensity of both is 0.

The model I am using is your MICCAI voxelmorph, with prior_lambda 15, image_sigma 0.02, batch size 1, and the rest of the parameters at their defaults. I find that no matter how I change the sigma and lambda parameters, the loss values are no less than 8 during training. I use the trained model to generate a deformation field and then use it to warp the Colin volume, and the warped images are all blurred; I don't know the specific reason for this. I would really appreciate it if you could help me, thank you very much.

Best wishes,
-Shuo

problem on register.py

When running register.py on my training result /data/afei/voxelmorph/models/models_1/1500.h5, the error is:

Traceback (most recent call last):
  File "/data/afei/voxelmorph/src/register.py", line 165, in <module>
    register(**vars(args))
  File "/data/afei/voxelmorph/src/register.py", line 89, in register
    net = keras.models.load_model(model_file, custom_objects=custom_layers)
  File "/data/home/amaxcuda8/anaconda2/envs/afei_py35/lib/python3.5/site-packages/keras/models.py", line 301, in load_model
    sample_weight_mode=sample_weight_mode)
  File "/data/home/amaxcuda8/anaconda2/envs/afei_py35/lib/python3.5/site-packages/keras/engine/training.py", line 841, in compile
    sample_weight, mask)
  File "/data/home/amaxcuda8/anaconda2/envs/afei_py35/lib/python3.5/site-packages/keras/engine/training.py", line 434, in weighted
    score_array = fn(y_true, y_pred)
TypeError: recon_loss() missing 1 required positional argument: 'y_pred'

After adding recon_loss and kl_loss():

# parameters used in recon_loss and kl_loss
vol_size = [160, 192, 224]
nf_enc = [16, 32, 32, 32]
nf_dec = [32, 32, 32, 32, 16, 3]
model = networks.miccai2018_net(vol_size, nf_enc, nf_dec, bidir=bidir)
flow_vol_shape = model.outputs[-1].shape[1:-1]
loss_class = losses.Miccai2018(image_sigma, prior_lambda, flow_vol_shape=flow_vol_shape)
# losses imported from src/losses
custom_layers = {'SpatialTransformer': nrn_layers.SpatialTransformer,
                 'VecInt': nrn_layers.VecInt,
                 'Sample': networks.Sample,
                 'Rescale': networks.RescaleDouble,
                 'Resize': networks.ResizeDouble,
                 'Negate': networks.Negate,
                 # afei
                 # 'recon_loss': losses.Miccai2018.recon_loss,
                 # 'kl_loss': losses.Miccai2018.kl_loss}
                 'recon_loss': loss_class.recon_loss,
                 'kl_loss': loss_class.kl_loss}

And in another terminal (running ps a), at the beginning there is:

 PID TTY      STAT   TIME COMMAND
24654 pts/42   Rl+    1:29 python /data/afei/voxelmorph/src/register.py --gpu 3 /data/afei/voxelm...

and finally the terminal shows:

Using TensorFlow backend.
[some GPU-related output omitted]
<module 'keras.losses' from '/data/home/amaxcuda8/anaconda2/envs/afei_py35/lib/python3.5/site-packages/keras/losses.py'>

and in another terminal:

 PID TTY      STAT   TIME COMMAND
24654 pts/42   Sl+    6:16 python /data/afei/voxelmorph/src/register.py --gpu 3 /data/afei/voxelm...

Now I want to know: can you offer another register.py or give some advice?

Batch Size = 1?

In the paper you state that each training batch consists of one pair of volumes, which appears to be the case in your code:

voxelmorph/src/train.py

Lines 73 to 75 in 568af9e

X = train_example_gen.__next__()[0]
train_loss = model.train_on_batch(
    [X, atlas_vol], [atlas_vol, zero_flow])

Is there a reason (GPU memory probably a big one) why you didn't try larger batch sizes? Intuitively, it would seem that larger batch sizes would lead to a more stable gradient descent.

about the jacobian determinant

The MICCAI model is based on the integration of a stationary velocity field, so the obtained deformation field is expected to be diffeomorphic. However, the paper's results, or good Jacobian determinant performance, cannot be reproduced with the supplied code. I just want to know: is the supplied code consistent with the paper? Could you suggest potential bugs behind the failed reproduction? I mean that we can obtain a higher Dice ratio, but with many negative Jacobian determinant points.

an approximate solution to compute mutual information

@adalca @balakg
Hi, I found an approximate way to compute mutual information in TensorFlow, but I cannot test its correctness as I do not have enough data to train the model. Can anyone help? The code is here:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API (tf.log, tf.is_finite, Dimension.value)

def nmi_gaussian(R, T, win=20, eps=1e-5):
    '''
        Parzen window approximation of mutual information
    Params:
        R : Reference(Fixed) Image, shape should be N * H * W * Z * 1
        T: Test (Moving) Image, shape should be the same as R
        win: number of bins used in histogram counting
    '''
    N, H, W, Z, C = R.shape
    assert C == 1, 'image should be only one channel'
    im_size = N.value * H.value * W.value * Z.value

    R_min = tf.reduce_min(R, keep_dims=False)
    R_max = tf.reduce_max(R, keep_dims=False)
    T_min = tf.reduce_min(T, keep_dims=False)
    T_max = tf.reduce_max(T, keep_dims=False)

    R_bin_size = (R_max - R_min) / win
    T_bin_size = (T_max - T_min) / win

    # compute bins
    R_bin_window = tf.range(R_min + 0.5 * R_bin_size, R_min + 0.5 * R_bin_size + R_bin_size * win - eps, delta=R_bin_size)
    T_bin_window = tf.range(T_min + 0.5 * T_bin_size, T_min + 0.5 * T_bin_size + T_bin_size * win - eps, delta=T_bin_size)

    R_mesh = tf.tile(tf.reshape(R_bin_window, (-1, 1)), multiples=[1, win])
    T_mesh = tf.tile(tf.reshape(T_bin_window, (1, -1)), multiples=[win, 1])
    R_T_mesh = tf.concat([tf.reshape(R_mesh, (-1, 1)), tf.reshape(T_mesh, (-1, 1))], axis=-1)
    R_T_mesh = R_T_mesh[tf.newaxis, tf.newaxis, tf.newaxis, :, :]

    p_l_k = 1/(np.sqrt(2 * np.pi)) * tf.exp(-0.5 * (tf.square((R - R_T_mesh[..., 0])/R_bin_size) + tf.square((T - R_T_mesh[..., 1])/T_bin_size)))
    
    p_l_k = tf.reduce_sum(p_l_k, axis=(0, 1, 2, 3)) / im_size
    p_l_k = p_l_k / tf.reduce_sum(p_l_k)
    p_l_k = tf.reshape(p_l_k, (win, win))
    p_l = tf.reduce_sum(p_l_k, axis=0)
    p_k = tf.reduce_sum(p_l_k, axis=1)

    pl_pk = p_l[:, tf.newaxis] * p_k[tf.newaxis, :]

    mi = p_l_k * tf.log(p_l_k / pl_pk)

    mi = tf.where(tf.is_finite(mi), mi, tf.zeros_like(mi))
    mi = -tf.reduce_sum(mi)
    return mi

The idea is to use a Parzen-window estimation of mutual information, as in MattesMutualInformation in ITK, but replacing the B-spline window function with a Gaussian window function.

Originally posted by @argman in #25 (comment)
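A minimal usage sketch of the function above, under the same TensorFlow 1.x assumptions (placeholder shapes are illustrative):

R = tf.placeholder(tf.float32, shape=(1, 64, 64, 64, 1))  # fixed image
T = tf.placeholder(tf.float32, shape=(1, 64, 64, 64, 1))  # moving image
neg_mi = nmi_gaussian(R, T, win=20)  # returns negative MI, suitable as a loss to minimize

with tf.Session() as sess:
    print(sess.run(neg_mi, feed_dict={R: np.random.rand(1, 64, 64, 64, 1),
                                      T: np.random.rand(1, 64, 64, 64, 1)}))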

Subject-to-Subject Registration

Hi, I want to do the same work as in the paper (VoxelMorph, Section V-F). What dataset did you use in this experiment? Do I need to modify the data generator so that the two images in a training pair are input in the same way as the training data of train_miccai2018.py?
This is my modified code.
in datagenerator.py:

def miccai2018_gen(gen, batch_size=1, bidir=False):
    while True:
        atlas_vol_bs = next(gen)[1]
        volshape = atlas_vol_bs.shape[1:-1]
        zeros = np.zeros((batch_size, *volshape, len(volshape)))
        X = next(gen)[0]
        if bidir:
            yield ([X, atlas_vol_bs], [atlas_vol_bs, X, zeros])
        else:
            yield ([X, atlas_vol_bs], [atlas_vol_bs, zeros])

def example_gen(train_vol_names, atlas_vol_names, batch_size=1, return_segs=False, seg_dir=None):
    while True:
        idxes = np.random.randint(len(train_vol_names), size=batch_size)
        X_data = []
        Y_data = []
        for idx in idxes:
            X = load_volfile(train_vol_names[idx])
            X = X[np.newaxis, ..., np.newaxis]
            X_data.append(X)
            Y = load_volfile(atlas_vol_names[idx])
            Y = Y[np.newaxis, ..., np.newaxis]
            Y_data.append(Y)

in train_miccai2018.py:

    train_example_gen = datagenerators.example_gen(train_vol_names,atlas_vol_names, batch_size=batch_size)
    miccai2018_gen = datagenerators.miccai2018_gen(train_example_gen,
                                                   batch_size=batch_size,
                                                   bidir=bidir)

Does it work?

training net with landmark annotation

Hello, I am very curious about your method; it looks very promising. In the readme you mention that it is unsupervised ("Unsupervised Learning with CNNs for Image Registration"), yet later there is a training phase (after setup), so I am confused. If training with annotation is needed, would it work with landmark annotations such as ANHIR?

data loaded error

I used the data provided. When I run train.py, the code X = np.load(vol_names[idx])['vol_data'] in datagenerators.py raises KeyError: 'vol_data is not a file in the archive'.

code for the MICCAI paper

Can you publish the code for the MICCAI paper
"Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration"?

some questions about the loss

Hi, I trained on my dataset according to train.py (CVPR paper), and my loss is negative.

The cross-correlation defined in (5) of "An Unsupervised Learning Model for Deformable Medical Image Registration" can be positive, which causes the loss to be negative.

I am wondering what I should do when the loss is negative.

test progress of ANTs

Dear expert: your paper reports a Dice score for 29 anatomical structures of the brain, and one of the comparison methods is ANTs. I would like to know the ANTs procedure used to obtain the experimental results (avg Dice 0.749). I have used ANTs to do the registration and got the registered output. How do I warp the seg data to carry out the evaluation?

I got the following output with the SyN algorithm: [screenshot]

The following is what I think should be modified.

  • When running python /data/afei/voxelmorph/src/train_miccai2018.py /data/afei/voxelmorph/data --gpu 3 --model_dir /data/afei/voxelmorph/models/models_1 --atlas_file /data/afei/voxelmorph/data/atlas_norm.npz --load_model_file /data/afei/voxelmorph/models/miccai2018_10_02_init1.h5, the line X = np.load(datafile)['vol_data'] in datagenerator.py fails, whereas X = np.load(datafile)['vol'] works.
  • And the following are my code and the contents of ../data/atlas_norm.npz:
import numpy as np
test = np.load('/data/afei/voxelmorph/data/atlas_norm.npz')
print(test.files)
# ['vol', 'seg', 'train_avg']

filename               contents
meanstats_T1_WARP.npz  ['init_mu', 'init_std']
test_vol.npz           ['vol_data']

So I think you should modify your code.

Registering mammograms

Hi @adalca, I am trying to register mammogram images (2D) using voxelmorph CVPR version.

  • Images are resized (512x448), normalized to [0, 1], and affinely aligned.
  • Registration (training and testing) happens between two images of the same breast taken at different times.

[screenshot]

1- The breast tissues in the warped/moved image are not smooth (they look like flakes). Is this normal, or can something be done to improve it?

2- Some calcifications that are visible in the moving image (and have matches in the fixed image) become invisible after registration. Is this normal?

3- I would like to measure the target registration error; the landmarks would be the calcifications. With the Elastix tool, a transform parameters file is generated after registration, which I use to transform the landmark coordinates, obtain the new coordinates, and measure the distance to the landmarks in the fixed image. How can I achieve something similar using voxelmorph?
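For what it's worth, a minimal sketch of one way to do this with a dense displacement field predicted by voxelmorph (2D case; ij indexing and the file name are assumptions). VoxelMorph warps the moving image by sampling it at fixed-grid locations plus the displacement, so a landmark at fixed-image coordinates p corresponds to moving-image coordinates p + u(p):

import numpy as np
from scipy.ndimage import map_coordinates

warp = np.load('warp.npz')['vol']       # (H, W, 2) displacement field, hypothetical file
landmarks = np.array([[120.0, 200.0]])  # (N, 2) landmark coordinates in the fixed image

# linearly interpolate each displacement channel at the landmark locations
disp = np.stack([map_coordinates(warp[..., d], landmarks.T, order=1)
                 for d in range(2)], axis=-1)
moved_landmarks = landmarks + disp      # corresponding coordinates in the moving image

The target registration error is then the distance between moved_landmarks and the landmarks annotated in the moving image.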

A request of atlas.nii

Hello, I am a newcomer to medical image processing, and I don't understand the preprocessing very well. Could you provide a copy of the original atlas.nii file rather than the .npz format? I want to use it for some tests. My email address is: [email protected]. Thank you very much.

About the affine transformation between moved image and the fixed image

Hello, I have read your paper and your work is very nice.
I would appreciate it if you could answer my question. There is no affine transformation in the test images in your paper; I don't know whether the code can handle the situation where there is a large affine transformation between the fixed and moving images.

What is the role of vm1.h5 and vm2.h5?

Dear adalca,
I have some questions from training my model with your code. I used my trained model to register images, but the results were unsatisfactory. I guess this may be because I used one of the default pre-trained models (voxelmorph/models/cvpr2018_vm1_cc.h5, voxelmorph/models/cvpr2018_vm1_l2.h5, ...) in your code. Is it advisable to train my network from the default pre-trained models on my own dataset? And what are the roles of vm1 and vm2?
Thanks!

about the code

I have read your code and have some questions. First, does the "flow" in the code mean the dense registration field? Second, how do you resample the moving image? I can understand the measure in your paper, but I cannot understand your code. Could you explain your code in detail? Thank you very much!

Some questions about test_seg.npz data

Dear author, I'm reading your paper published at CVPR. There are some points I am confused about; can you give me some help?

  1. I saw that you used the test_seg.npz data when testing. I am confused about what it is used for.
  2. What kind of processing or software did you use to get this data? FreeSurfer or ANTs?
  3. If I don't have the data for these test segmentations, can I complete the test?

Looking forward to your answer. Thank you.

Duplicate of 'Data Loaded Error'

Hi,
I am trying to run the command:

train_miccai2018.py C:/Users/schakr01/Desktop/NRG/voxelmorph/voxelmorph/data --gpu 0 --model_dir C:/Users/schakr01/Desktop/NRG/voxelmorph/voxelmorph/models

but I am getting the same error, which I am copy-pasting below:

Using TensorFlow backend.
2018-11-16 14:32:06.593595: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Epoch 1/1500
Traceback (most recent call last):
  File "C:\Users\schakr01\Desktop\NRG\voxelmorph\voxelmorph\src\train_miccai2018.py", line 191, in <module>
    train(**vars(args))
  File "C:\Users\schakr01\Desktop\NRG\voxelmorph\voxelmorph\src\train_miccai2018.py", line 146, in train
    verbose=1)
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training.py", line 1415, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\engine\training_generator.py", line 177, in fit_generator
    generator_output = next(output_generator)
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\utils\data_utils.py", line 793, in get
    six.reraise(value.__class__, value, value.__traceback__)
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\keras\utils\data_utils.py", line 658, in _data_generator_task
    generator_output = next(self._generator)
  File "C:\Users\schakr01\Desktop\NRG\voxelmorph\voxelmorph\src\datagenerators.py", line 41, in miccai2018_gen
    X = next(gen)[0]
  File "C:\Users\schakr01\Desktop\NRG\voxelmorph\voxelmorph\src\datagenerators.py", line 66, in example_gen
    X = load_volfile(vol_names[idx])
  File "C:\Users\schakr01\Desktop\NRG\voxelmorph\voxelmorph\src\datagenerators.py", line 127, in load_volfile
    X = np.load(datafile)['vol_data']
  File "C:\Users\schakr01\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\lib\npyio.py", line 239, in __getitem__
    raise KeyError("%s is not a file in the archive" % key)
KeyError: 'vol_data is not a file in the archive'

I have a TensorFlow version of 1.12.0 running on my computer.

about using test.py

Thank you, sir. I am trying to implement registration for my visible and infrared pictures. I have made my own dataset in .npy format, but I have no label information; how can I use test.py to evaluate the results?

How to import my own images for training and export models

Dear author, I'm a beginner in medical image registration and Linux, so I have some questions about voxelmorph.
How could I use a public dataset to train my own model?
How could I convert .nii files into .npz files?
Please forgive the low-level questions.
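For the conversion question, a minimal sketch with nibabel (file names hypothetical; the vol key follows the training-data convention described in the readme above):

import nibabel as nib
import numpy as np

vol = nib.load('scan.nii.gz').get_fdata().astype('float32')
np.savez_compressed('scan.npz', vol=vol)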

questions about the overall registration

Hi, I have two questions about the architecture:

  1. What is "scaling and squaring"? Is it a common operation in image registration?
  2. Why do you use a variational encoder in the registration?

Dice score on your test_vol

I am getting these scores for your test volume. I just want to be sure: am I getting the expected scores?

cvpr2018_vm2_cc.h5 (test, atlas): 0.6399471668139538, 0.15196420552382797
cvpr2018_vm2_l2.h5 (test, atlas): 0.642888676106308, 0.14746905652473066
cvpr2018_vm1_cc.h5 (test, atlas): 0.6428134803084593, 0.15237654435358472
cvpr2018_vm1_l2.h5 (test, atlas): 0.645497306543245, 0.14790431835110063

thanks

problem training my own model

Hi,

I'm trying to train my own model on a Mac.

First, I used test_vol.npz and atlas_norm.npz as a test, but I got the error below.

Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node diffflow/map/while/strided_slice. Error: Pack node (diffflow/map/while/stack_3) axis attribute is out of bounds: 3
F ./tensorflow/core/util/mkl_util.h:607] Check failed: dims == sizes.size() (5 vs. 4)

There are also two warnings:

I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.

Then I also tried to use my own data:

python train_miccai2018.py /Users/xiao/Desktop/lab/software/voxelmorph-master/train --atlas_file /Users/xiao/Desktop/lab/software/voxelmorph-master/data/atlas.npz --gpu 0 --model_dir /Users/xiao/Desktop/lab/software/voxelmorph-master/mymodel

Both atlas.npz and moving.npz are (58, 62, 54) MRI images,

but I got this error:

ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 8, 8, 8, 32), (None, 8, 8, 7, 32)]

Is my problem due to the environment?

Could you please help me out?

Thank you!

No training configuration found ... when running register.py

I am trying to use an already trained model using:
python register.py --gpu 0 /path/to/test_vol.nii.gz /path/to/atlas_norm.nii.gz --out_img /path/to/out.nii.gz --model_file ../models/cvpr2018_vm2_cc.h5

I am getting the following error

envs/tensorflow3.6/lib/python3.6/site-packages/keras/engine/saving.py:269: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '

Should I retrain the model, or is there a way to test your trained model?

Thanks

test_seg.npz

Hi, your paper says that the data is processed with FreeSurfer. Can you elaborate on how test_seg.npz is produced with FreeSurfer?

Duplicate of #2 'Data loaded error'

Hi @rongrongxiangxin, I am still having this issue (duplicate of #2): I used the data provided. When I run train.py, the code X = np.load(vol_names[idx])['vol_data'] in datagenerators.py raises KeyError: 'vol_data is not a file in the archive'.

Questions about ANTs

I have some questions about your paper.
In your paper, you provide the performance of affine-only alignment and of ANTs; could you share the scripts used to obtain those results?

Multi GPU version of Voxelmorph

Hi,
In your paper you mention that multiple GPUs were used during training. I can run the single-GPU version, but I can't make it work for multiple GPUs. Could you please give any direction for running this network with multiple GPUs? What I tried is as follows (networks.py file):
with tf.device('/gpu:0'):
    src = Input(shape=vol_size + (1,))
    tgt = Input(shape=vol_size + (1,))

    x_in = concatenate([src, tgt])
    x0 = myConv(x_in, enc_nf[0], 2)  # 80x96x112
    x1 = myConv(x0, enc_nf[1], 2)  # 40x48x56
    x2 = myConv(x1, enc_nf[2], 2)  # 20x24x28
    x3 = myConv(x2, enc_nf[3], 2)  # 10x12x14

    x = myConv(x3, dec_nf[0])
    x = UpSampling3D()(x)
    x = concatenate([x, x2])
    x = myConv(x, dec_nf[1])
    x = UpSampling3D()(x)
    x = concatenate([x, x1])
    x = myConv(x, dec_nf[2])
    x = UpSampling3D()(x)
    x = concatenate([x, x0])
    x = myConv(x, dec_nf[3])
    x = myConv(x, dec_nf[4])

with tf.device('/gpu:1'):
    x = UpSampling3D()(x)
    x = concatenate([x, x_in])
    x = myConv(x, dec_nf[5])
    if len(dec_nf) == 8:
        x = myConv(x, dec_nf[6])
    flow = Conv3D(dec_nf[-1], kernel_size=3, padding='same',
                  kernel_initializer=RandomNormal(mean=0.0, stddev=1e-5), name='flow')(x)

    y = Dense3DSpatialTransformer()([src, flow])

How to prepare the training data

Hi, I have read your paper and want to run this code with my own data. Could you please tell me the general steps for preparing the training data? I currently only have some NIfTI-format T1 MRI data. Do I need to follow the steps in the paper strictly, such as resampling, preprocessing with FreeSurfer, cropping to 160x192x224, etc.?

Register the lung volume

Dear author, I am reading your papers published at CVPR and MICCAI. My task is to register lung volumes, and I want to use voxelmorph to do it. I have a lot of data and want to train my own model. What should I do next? Can you tell me the next steps?
Looking forward to your answer. Thank you.

about data input

I have some problems with the input parameters of train_miccai2018.py:

1. What do data_dir and atlas_file mean?
2. If I want to register my nii dataset, how should these two parameters be set? "You might need to make a small change in the training file where train_vol_names is set, but that should be it." This is quoted from your answer; I don't know how to change train_vol_names. It would be better if you could explain it in detail.
3. Do I have to do any processing other than normalizing and cropping the input images?
Thank you very much

Questions about groundtruth mentioned in paper

Thanks for your wonderful work and code!
You have applied this method to many datasets, such as ADNI, OASIS, ABIDE, and so on, and I know these datasets can be obtained from public websites. But I want to know whether you have released the ground truth (anatomical structures) for registration on these datasets.
If I want to compare my method's results with yours, what should I do?
I hope you can answer my questions, thank you very much!

ValueError: Error when checking input

Hi,
I tried to use my own data but got this error:

ValueError: Error when checking input: expected input_1 to have shape (160, 192, 224, 1) but got array with shape (58, 62, 27, 1)

Does the shape of the data have to be (160, 192, 224, 1)?

about dataset

Hi, when I downloaded the ADNI (1, 2, 3, GO) dataset, there were more than 10,000 MRI images, but your paper reports 7,829 scans in total. So which ADNI subset did you use? And how do you generate the atlas image? Thank you very much.

Code work environment

Hello, I want to run this program. Are there any requirements for the software environment, for example, the Python version, the operating system, or the deep learning framework?

ValueError: Unknown loss function:recon_loss

Dear author, I trained my model with train_miccai2018.py, and it output a lot of files: [screenshot]
I want to use my model to register a pair of images, so I ran python register.py --gpu 0 ../data/test_vol.nii.gz ../data/atlas_norm.nii.gz --out_img ../data/out.nii.gz --model_file ../models/15.h5, but I got an error: [screenshot]
Please forgive me for the low-level question.
