
anatomynet-for-anatomical-segmentation's People

Contributors

wentaozhu

anatomynet-for-anatomical-segmentation's Issues

It seems the baseline outperforms AnatomyNet?

After training baselineSERes18Conc.py, the results are:

epoch 49 TRAIN loss 1.5370
test loss 0.8655, 0.3843, 0.9132, 0.6634, 0.6670, 0.8777, 0.8685, 0.7767, 0.7779
best test loss 0.8659, 0.5447, 0.9156, 0.6816, 0.6765, 0.8784, 0.8686, 0.7906, 0.7903

After training AnatomyNet.py, the results are:

epoch 49 TRAIN loss 1.2705
test loss 0.8529, 0.3434, 0.9224, 0.6685, 0.6795, 0.8765, 0.8700, 0.7762, 0.7727
best test loss 0.8660, 0.4094, 0.9224, 0.6848, 0.6882, 0.8819, 0.8739, 0.7870, 0.7925

Zero Dice during training

Dear author:
During the training of baselineSERes18Conc.py, I found the Dice is almost always 0 for all the classes. Is it normal? I used the data you provided on Google Drive and all the original hyper-parameters.

epoch 47 TRAIN loss 8.3984
test loss 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000
best test loss 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000

Error about TypeError: slice indices must be integers or None or have an __index__ method

Hello, Dr. Zhu, many thanks for your code. Recently I have been following your AnatomyNet paper, but when I run your code I get the error below; I only changed the data path:
0it [00:00, ?it/s]
Traceback (most recent call last):
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 160, in
train_data, test_data = process('C:/Coco_file/dataset/pddca18/')
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 146, in process
return getdatamask(train_data+train_dataopt+test_data, train_masks_data+train_masks_dataopt+test_masks_data,debug=debug), getdatamask(test_dataoff, test_masks_dataoff,debug=debug)
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 99, in getdatamask
img = imfit(img, int(tnz), int(tny), int(tnx)) #zoom(img, (tnz/nz,tny/ny,tnx/nx), order=2, mode='nearest')
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 90, in imfit
retimg[bz:ez, by:ey, bx:ex] = img

It seems to be an error in resizing the image, but I have no idea how to fix it. Can you help me? Thanks!
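
For context, this TypeError usually means the slice bounds became floats: under Python 3 the / operator returns a float, so offsets like bz and ez must be cast to int (or computed with //) before slicing. A minimal sketch of the likely fix, assuming imfit pads the volume into a centered target shape (the actual helper in AnatomyNet.py may differ):

import numpy as np

def imfit(img, tnz, tny, tnx):
    # Pad img to (tnz, tny, tnx), centered; illustrative reconstruction
    # that assumes the target shape is at least as large as img.
    nz, ny, nx = img.shape
    retimg = np.zeros((tnz, tny, tnx), dtype=img.dtype)
    # Integer floor division avoids the float slice indices that raise
    # "slice indices must be integers" under Python 3.
    bz, by, bx = (tnz - nz) // 2, (tny - ny) // 2, (tnx - nx) // 2
    ez, ey, ex = bz + nz, by + ny, bx + nx
    retimg[bz:ez, by:ey, bx:ex] = img
    return retimg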

difference between baselineSERes18Conc and anatomy net

Hi, thank you for sharing your nice work.

I have some questions when running your code.

  1. What's the difference between baselineSERes18Conc and AnatomyNet? I checked the two *.py files line by line, and the only difference I found is the ReLU function. If so, is there such a big gap between the performance of the two models that we must fine-tune AnatomyNet from the pretrained baselineSERes18Conc model?
  2. Do we need all three databases to reproduce your result? I used only the PDDCA dataset and cropped the images myself to exclude the non-head-and-neck area. The result is weird, as shown below. The model converged after 287 epochs; I used Dice loss + 0.5 * focal loss, and Adam instead of your RMS+SGD, since the latter didn't work on my machine and I don't know why.
epoch 287: train_loss 0.1593; train_acc: BrainStem 0.8321, Chiasm 0.3684, OPL 0.4403, OPR 0.4324, Parotid_L 0.7937, Parotid_R 0.7971
val_loss 0.0456; val_acc: BrainStem 0.8348, Chiasm 0.3465, OPL 0.4139, OPR 0.4339, Parotid_L 0.7905, Parotid_R 0.7945

Currently, I'm not sure which step went wrong. I tried Dice loss, focal loss, and three other kinds of loss. The best Dice coefficient for the chiasm is less than 0.4, and training takes around 20 hours for the model to converge.

Would you please give me some suggestions? Thank you.

hybrid loss of AnatomyNet

Only the Dice loss can be seen in AnatomyNet.py. Where is the focal loss? Does it mean AnatomyNet is fine-tuned from 'baselineSERes18Conc' using the Dice loss only?
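
For reference, a minimal sketch of the Dice + focal hybrid loss described in the paper, not the repo's exact implementation; the weighting lam, the focusing parameter gamma, and one-hot targets are assumptions:

import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, lam=0.5, gamma=2.0, eps=1e-5):
    # logits: (N, C, D, H, W) raw scores; target: one-hot with the same shape.
    prob = torch.softmax(logits, dim=1)
    dims = (0, 2, 3, 4)  # reduce over batch and spatial dimensions
    inter = (prob * target).sum(dims)
    dice = (2 * inter + eps) / (prob.sum(dims) + target.sum(dims) + eps)
    dice_loss = (1 - dice).mean()  # averaged over classes
    logp = F.log_softmax(logits, dim=1)
    focal_loss = -(((1 - prob) ** gamma) * target * logp).mean()
    return dice_loss + lam * focal_loss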

Training

I have a question: how much GPU memory did you use for training?

Model is not training for the 3rd organ (Mandible)

loss: 0%| | 0/88 [00:00<?, ?it/s]

epoch 37 TRAIN loss 2.8312
test loss 0.8429, 0.4998, 0.0000, 0.6772, 0.6740, 0.8516, 0.8249, 0.7296, 0.7656
best test loss 0.8437, 0.5580, 0.0000, 0.7028, 0.7022, 0.8647, 0.8469, 0.7699, 0.7822

As you can see, the test score is 0 for the 3rd organ.
Any idea why this is happening?
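
A Dice of exactly 0 for a single class often means the ground-truth mask for that structure is empty or missing in the cached data. A hedged sanity check, assuming the .pth caches hold (image, mask, name) tuples with one mask channel per organ (the repo's actual layout may differ):

import torch

data = torch.load('./data/trainpddca15_crp_v2_pool1.pth')
for img, mask, name in data:  # assumed tuple layout
    if mask[2].sum() == 0:  # assumed: channel index 2 = 3rd organ (Mandible)
        print(name, 'has an empty Mandible mask')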

about model file

Hi! In the files ./src/*.py, there are 4 paths, like:

TRAIN_PATH = './data/trainpddca15_crp_v2_pool1.pth'
TEST_PATH = './data/testpddca15_crp_v2_pool1.pth'
CET_PATH = './data/trainpddca15_cet_crp_v2_pool1.pth'
PET_PATH = './data/trainpddca15_pet_crp_v2_pool1.pth'

How can I generate these 4 files? Thanks.
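
A hedged sketch of how such caches are typically produced: run the repo's preprocessing (preprocess_crop.ipynb plus the process() function visible in the traceback above) and serialize the results with torch.save. The CET and PET caches presumably come from the two additional datasets used in the paper and would need their own preprocessing runs; the file names below mirror the defaults:

import torch
from AnatomyNet import process  # preprocessing entry point in src/AnatomyNet.py

# process() returns preprocessed (train, test) lists, as in the traceback above.
train_data, test_data = process('./data/pddca18/')
torch.save(train_data, './data/trainpddca15_crp_v2_pool1.pth')
torch.save(test_data, './data/testpddca15_crp_v2_pool1.pth')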

About the loss

Hello, I want to apply the hybrid loss function (focal and Dice loss) you mentioned in the paper to my tampering-detection model. I'm not sure which part of the code I should look at; what I downloaded is "baselineDiceFocalLoss.py". Could you confirm that this is the right file to reference? Thank you very much!

Why do you manually set the crop boundary?

Hi, in preprocess_crop.ipynb,
why do you manually set the crop boundary as
"minz, maxz, miny, maxy, minx, maxx = 35, 90, 90, 300, 170, 350" instead of using the calculated result?
You then only compare these two results to ensure that the manual setting is right.
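
For comparison, a minimal sketch of deriving the crop box from the organ masks instead of hard-coding it; the masks array and margin here are illustrative assumptions, not the notebook's code:

import numpy as np

def crop_box(masks, margin=5):
    # masks: (C, Z, Y, X) binary array holding all organ masks of one case.
    union = masks.any(axis=0)
    zs, ys, xs = np.nonzero(union)
    # Bounding box of all labeled voxels, padded by a safety margin.
    return (max(int(zs.min()) - margin, 0), int(zs.max()) + margin,
            max(int(ys.min()) - margin, 0), int(ys.max()) + margin,
            max(int(xs.min()) - margin, 0), int(xs.max()) + margin)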

About the finetuning

Hello, sir, nice work on your hybrid loss function!
I just want to figure out whether the hybrid loss function (Dice + focal loss) is used both in pre-training (using baselineSERes18Conc.py to initialize the weights) and in the fine-tuning stage (loading the pretrained model and using AnatomyNet.py). I am looking forward to your reply, thanks a lot.

About PyTorch version

Hi,
I am very interested in your AnatomyNet and want to test it, but I see that the required PyTorch version is 0.3/0.4, which is a little old. I don't know if it can be used in my PyTorch 1.3.1 environment, or do you have plans to port it to the latest version?
Thank you for your attention.
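
For what it's worth, most 0.3/0.4-era breakage under PyTorch 1.x comes from two idioms; a generic illustration, not specific to this repo:

import torch

# 1.x: tensors carry requires_grad directly; Variable() wrapping is obsolete.
t = torch.randn(2, 3, requires_grad=True)
loss = (t ** 2).mean()
loss.backward()
# 1.x: scalars are read with .item() instead of the 0.3-era loss.data[0].
print(loss.item())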

A small issue encountered during preprocessing

RuntimeError: Exception thrown in SimpleITK ReadImage: /opt/miniconda2/conda-bld/simpleitk_1491574810448/work/Code/IO/src/sitkImageReaderBase.cxx:82:
sitk::ERROR: Unable to determine ImageIO reader for "/mnt/cc7fd727-39d5-4b8f-90d3-c033854aba68/wxy/数据集/pddca18/0522c0330/structures/BrainStem_crp.npy"
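
For context, SimpleITK has no ImageIO for NumPy .npy files; the *_crp.npy masks written by the preprocessing need to be loaded with NumPy. A minimal sketch (load_volume is an illustrative helper, not from the repo):

import numpy as np
import SimpleITK as sitk

def load_volume(path):
    # .npy arrays come from the repo's own preprocessing; other formats
    # (e.g. the original PDDCA NRRD files) go through SimpleITK.
    if path.endswith('.npy'):
        return np.load(path)
    return sitk.GetArrayFromImage(sitk.ReadImage(path))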

about lung nodule/tumor segmentation

Dear Dr. Zhu, sorry to disturb you. My master's advisor asked me to segment lung nodules/tumors on the LIDC-IDRI database; can you give me some suggestions on data pre-processing and training? Many thanks!

How to remove CT scanner artifacts from the images

Hi,

I was using the same dataset, and I wanted to know how you dealt with the artifacts of the CT scanner in the scans. For example:

[image: cropped CT slice with scanner remnants visible on the right]

This is a cropped image, but even then, there are some remnants of the CT scanner itself towards the right of the image.

Did you apply any pre-processing to deal with these (apart from cropping)? If not, were the models robust enough to not use them as potential shortcuts?
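
One common way to suppress such remnants, offered as a general technique rather than this repo's confirmed method, is to clip CT intensities to a soft-tissue HU window before normalization, so bright non-anatomical structures saturate:

import numpy as np

def window_ct(vol_hu, lo=-100.0, hi=300.0):
    # Illustrative head-and-neck soft-tissue window in Hounsfield units;
    # values outside [lo, hi] are clipped, then rescaled to [0, 1].
    vol = np.clip(vol_hu.astype(np.float32), lo, hi)
    return (vol - lo) / (hi - lo)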

Loss function issue

Hi,
I saw that you use the Tversky loss in the scripts.
Is it the same as the hybrid loss mentioned in the paper?
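
For context, a minimal sketch of the Tversky loss; with alpha = beta = 0.5 it reduces exactly to the Dice loss, so a Tversky implementation can double as the Dice term of the hybrid loss (parameter names are illustrative):

import torch

def tversky_loss(prob, target, alpha=0.5, beta=0.5, eps=1e-5):
    # prob and target: (N, C, D, H, W); statistics pooled per class.
    dims = (0, 2, 3, 4)
    tp = (prob * target).sum(dims)        # true positives
    fp = (prob * (1 - target)).sum(dims)  # false positives
    fn = ((1 - prob) * target).sum(dims)  # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()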
