abdomenatlas's People

Contributors

aaekay, chongyu1117, mrgiovanni, ollie-ztz, wenxuanchelsea

abdomenatlas's Issues

Error while loading state dict from swinunetr

I am getting the following error while loading the checkpoint:

RuntimeError: Error(s) in loading state_dict for Universal_model:
Unexpected key(s) in state_dict: "swinViT.patch_embed.proj.weight", "swinViT.patch_embed.proj.bias", "swinViT.layers1.0.blocks.0.norm1.weight", "swinViT.layers1.0.blocks.0.norm1.bias", "swinViT.layers1.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers1.0.blocks.0.attn.relative_position_index", "swinViT.layers1.0.blocks.0.attn.qkv.weight", "swinViT.layers1.0.blocks.0.attn.qkv.bias", "swinViT.layers1.0.blocks.0.attn.proj.weight", "swinViT.layers1.0.blocks.0.attn.proj.bias", "swinViT.layers1.0.blocks.0.norm2.weight", "swinViT.layers1.0.blocks.0.norm2.bias", "swinViT.layers1.0.blocks.0.mlp.linear1.weight", "swinViT.layers1.0.blocks.0.mlp.linear1.bias", "swinViT.layers1.0.blocks.0.mlp.linear2.weight", "swinViT.layers1.0.blocks.0.mlp.linear2.bias", "swinViT.layers1.0.blocks.1.norm1.weight", "swinViT.layers1.0.blocks.1.norm1.bias", "swinViT.layers1.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers1.0.blocks.1.attn.relative_position_index", "swinViT.layers1.0.blocks.1.attn.qkv.weight", "swinViT.layers1.0.blocks.1.attn.qkv.bias", "swinViT.layers1.0.blocks.1.attn.proj.weight", "swinViT.layers1.0.blocks.1.attn.proj.bias", "swinViT.layers1.0.blocks.1.norm2.weight", "swinViT.layers1.0.blocks.1.norm2.bias", "swinViT.layers1.0.blocks.1.mlp.linear1.weight", "swinViT.layers1.0.blocks.1.mlp.linear1.bias", "swinViT.layers1.0.blocks.1.mlp.linear2.weight", "swinViT.layers1.0.blocks.1.mlp.linear2.bias", "swinViT.layers1.0.downsample.reduction.weight", "swinViT.layers1.0.downsample.norm.weight", "swinViT.layers1.0.downsample.norm.bias", "swinViT.layers2.0.blocks.0.norm1.weight", "swinViT.layers2.0.blocks.0.norm1.bias", "swinViT.layers2.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers2.0.blocks.0.attn.relative_position_index", "swinViT.layers2.0.blocks.0.attn.qkv.weight", "swinViT.layers2.0.blocks.0.attn.qkv.bias", "swinViT.layers2.0.blocks.0.attn.proj.weight", "swinViT.layers2.0.blocks.0.attn.proj.bias", "swinViT.layers2.0.blocks.0.norm2.weight", "swinViT.layers2.0.blocks.0.norm2.bias", "swinViT.layers2.0.blocks.0.mlp.linear1.weight", "swinViT.layers2.0.blocks.0.mlp.linear1.bias", "swinViT.layers2.0.blocks.0.mlp.linear2.weight", "swinViT.layers2.0.blocks.0.mlp.linear2.bias", "swinViT.layers2.0.blocks.1.norm1.weight", "swinViT.layers2.0.blocks.1.norm1.bias", "swinViT.layers2.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers2.0.blocks.1.attn.relative_position_index", "swinViT.layers2.0.blocks.1.attn.qkv.weight", "swinViT.layers2.0.blocks.1.attn.qkv.bias", "swinViT.layers2.0.blocks.1.attn.proj.weight", "swinViT.layers2.0.blocks.1.attn.proj.bias", "swinViT.layers2.0.blocks.1.norm2.weight", "swinViT.layers2.0.blocks.1.norm2.bias", "swinViT.layers2.0.blocks.1.mlp.linear1.weight", "swinViT.layers2.0.blocks.1.mlp.linear1.bias", "swinViT.layers2.0.blocks.1.mlp.linear2.weight", "swinViT.layers2.0.blocks.1.mlp.linear2.bias", "swinViT.layers2.0.downsample.reduction.weight", "swinViT.layers2.0.downsample.norm.weight", "swinViT.layers2.0.downsample.norm.bias", "swinViT.layers3.0.blocks.0.norm1.weight", "swinViT.layers3.0.blocks.0.norm1.bias", "swinViT.layers3.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers3.0.blocks.0.attn.relative_position_index", "swinViT.layers3.0.blocks.0.attn.qkv.weight", "swinViT.layers3.0.blocks.0.attn.qkv.bias", "swinViT.layers3.0.blocks.0.attn.proj.weight", "swinViT.layers3.0.blocks.0.attn.proj.bias", "swinViT.layers3.0.blocks.0.norm2.weight", "swinViT.layers3.0.blocks.0.norm2.bias", "swinViT.layers3.0.blocks.0.mlp.linear1.weight", 
"swinViT.layers3.0.blocks.0.mlp.linear1.bias", "swinViT.layers3.0.blocks.0.mlp.linear2.weight", "swinViT.layers3.0.blocks.0.mlp.linear2.bias", "swinViT.layers3.0.blocks.1.norm1.weight", "swinViT.layers3.0.blocks.1.norm1.bias", "swinViT.layers3.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers3.0.blocks.1.attn.relative_position_index", "swinViT.layers3.0.blocks.1.attn.qkv.weight", "swinViT.layers3.0.blocks.1.attn.qkv.bias", "swinViT.layers3.0.blocks.1.attn.proj.weight", "swinViT.layers3.0.blocks.1.attn.proj.bias", "swinViT.layers3.0.blocks.1.norm2.weight", "swinViT.layers3.0.blocks.1.norm2.bias", "swinViT.layers3.0.blocks.1.mlp.linear1.weight", "swinViT.layers3.0.blocks.1.mlp.linear1.bias", "swinViT.layers3.0.blocks.1.mlp.linear2.weight", "swinViT.layers3.0.blocks.1.mlp.linear2.bias", "swinViT.layers3.0.downsample.reduction.weight", "swinViT.layers3.0.downsample.norm.weight", "swinViT.layers3.0.downsample.norm.bias", "swinViT.layers4.0.blocks.0.norm1.weight", "swinViT.layers4.0.blocks.0.norm1.bias", "swinViT.layers4.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers4.0.blocks.0.attn.relative_position_index", "swinViT.layers4.0.blocks.0.attn.qkv.weight", "swinViT.layers4.0.blocks.0.attn.qkv.bias", "swinViT.layers4.0.blocks.0.attn.proj.weight", "swinViT.layers4.0.blocks.0.attn.proj.bias", "swinViT.layers4.0.blocks.0.norm2.weight", "swinViT.layers4.0.blocks.0.norm2.bias", "swinViT.layers4.0.blocks.0.mlp.linear1.weight", "swinViT.layers4.0.blocks.0.mlp.linear1.bias", "swinViT.layers4.0.blocks.0.mlp.linear2.weight", "swinViT.layers4.0.blocks.0.mlp.linear2.bias", "swinViT.layers4.0.blocks.1.norm1.weight", "swinViT.layers4.0.blocks.1.norm1.bias", "swinViT.layers4.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers4.0.blocks.1.attn.relative_position_index", "swinViT.layers4.0.blocks.1.attn.qkv.weight", "swinViT.layers4.0.blocks.1.attn.qkv.bias", "swinViT.layers4.0.blocks.1.attn.proj.weight", "swinViT.layers4.0.blocks.1.attn.proj.bias", "swinViT.layers4.0.blocks.1.norm2.weight", "swinViT.layers4.0.blocks.1.norm2.bias", "swinViT.layers4.0.blocks.1.mlp.linear1.weight", "swinViT.layers4.0.blocks.1.mlp.linear1.bias", "swinViT.layers4.0.blocks.1.mlp.linear2.weight", "swinViT.layers4.0.blocks.1.mlp.linear2.bias", "swinViT.layers4.0.downsample.reduction.weight", "swinViT.layers4.0.downsample.norm.weight", "swinViT.layers4.0.downsample.norm.bias", "encoder1.layer.conv1.conv.weight", "encoder1.layer.conv2.conv.weight", "encoder1.layer.conv3.conv.weight", "encoder2.layer.conv1.conv.weight", "encoder2.layer.conv2.conv.weight", "encoder3.layer.conv1.conv.weight", "encoder3.layer.conv2.conv.weight", "encoder4.layer.conv1.conv.weight", "encoder4.layer.conv2.conv.weight", "encoder10.layer.conv1.conv.weight", "encoder10.layer.conv2.conv.weight", "decoder5.transp_conv.conv.weight", "decoder5.conv_block.conv1.conv.weight", "decoder5.conv_block.conv2.conv.weight", "decoder5.conv_block.conv3.conv.weight", "decoder4.transp_conv.conv.weight", "decoder4.conv_block.conv1.conv.weight", "decoder4.conv_block.conv2.conv.weight", "decoder4.conv_block.conv3.conv.weight", "decoder3.transp_conv.conv.weight", "decoder3.conv_block.conv1.conv.weight", "decoder3.conv_block.conv2.conv.weight", "decoder3.conv_block.conv3.conv.weight", "decoder2.transp_conv.conv.weight", "decoder2.conv_block.conv1.conv.weight", "decoder2.conv_block.conv2.conv.weight", "decoder2.conv_block.conv3.conv.weight", "decoder1.transp_conv.conv.weight", "decoder1.conv_block.conv1.conv.weight", 
"decoder1.conv_block.conv2.conv.weight", "decoder1.conv_block.conv3.conv.weight".

The shape of the generated label does not match the original image.

I encountered a minor issue when running your code: the shape of the generated label does not match the original image. Here are the details:
I used the epoch_450.pth file provided by you (seems like a beta version :>), and tested it on a private dataset.

The shape of my original image is [64, 368, 576], while the shape of the generated label is [127, 171, 267].
They appear like this in 3D Slicer:
[3D Slicer screenshot]

I'm wondering whether this is caused by something I did or whether it's a bug.
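
In case it helps others who hit the same mismatch: if the pipeline resamples the CT to a fixed spacing before inference, the prediction ends up on that resampled grid and has to be mapped back to the original image geometry. A minimal sketch with nibabel (file names are placeholders; I have not checked whether the released test.py already does this internally):

    import nibabel as nib
    from nibabel.processing import resample_from_to

    original = nib.load('case_0001_ct.nii.gz')     # placeholder path to the original CT
    predicted = nib.load('case_0001_pred.nii.gz')  # placeholder path to the generated label

    # Nearest-neighbour interpolation (order=0) keeps the label values intact while
    # mapping the prediction back onto the original grid (shape, spacing, orientation).
    aligned = resample_from_to(predicted, original, order=0)
    nib.save(aligned, 'case_0001_pred_in_original_space.nii.gz')

After this step, the label volume should have the same shape as the original image and overlay correctly in 3D Slicer.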


In addition, regarding lines 233-241 in test.py:

    #Load pre-trained weights
    store_dict = model.state_dict()
    checkpoint = torch.load(args.resume)
    load_dict = checkpoint['net']
    # args.epoch = checkpoint['epoch']

    for key, value in load_dict.items():
        name = '.'.join(key.split('.')[1:])
        store_dict[name] = value

The keys in store_dict have an additional 'backbone.' prefix, while the keys in load_dict seem to have an additional 'module.' prefix (because nn.DataParallel was used during training) and no 'backbone.' prefix.

In this case, the above code may not work properly.

You can directly use:

    # Load pre-trained weights by matching keys positionally
    store_dict = model.state_dict()
    store_dict_keys = [key for key, value in store_dict.items()]
    checkpoint = torch.load(args.resume)
    load_dict = checkpoint['net']
    load_dict_value = [value for key, value in load_dict.items()]
    # args.epoch = checkpoint['epoch']

    # Assumes both state dicts contain the same parameters in the same order
    for i in range(len(store_dict)):
        store_dict[store_dict_keys[i]] = load_dict_value[i]
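
If the positional match above feels fragile (it assumes both state dicts list the same parameters in the same order), a name-based remapping is another option. The sketch below simply strips the 'module.' prefix left by nn.DataParallel and adds the 'backbone.' prefix the model expects, which is the mismatch described above; the exact prefixes are taken from this thread and not verified against the released code:

    # Load pre-trained weights by rewriting key names instead of relying on order
    store_dict = model.state_dict()
    checkpoint = torch.load(args.resume)
    load_dict = checkpoint['net']

    for key, value in load_dict.items():
        if key.startswith('module.'):
            key = key[len('module.'):]    # drop the nn.DataParallel prefix
        new_key = 'backbone.' + key       # add the prefix Universal_model expects
        if new_key in store_dict:
            store_dict[new_key] = value

    model.load_state_dict(store_dict)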

Dataset Release

Are you going to release the datasets, or do we have to run the code to obtain the annotations? I am a little confused. You mentioned in the FAQ that annotations for ~5,000 volumes will be released, but I don't see any link where I can download them.

About nnUnet pre-trained model

Hi, thanks for sharing! I noticed that you provided two pre-trained models for download. Will a pre-trained model also be provided for nnUnet? I saw in your article that you used three models, including nnUnet.

Dataset

Nice work. When will you release the dataset?
