wentaozhu / AnatomyNet-for-anatomical-segmentation
AnatomyNet: Deep 3D Squeeze-and-excitation U-Nets for fast and fully automated whole-volume anatomical segmentation
License: Apache License 2.0
It seems the baseline outperforms AnatomyNet?
After training baselineSERes18Conc.py, the results are:
epoch 49 TRAIN loss 1.5370
test loss 0.8655, 0.3843, 0.9132, 0.6634, 0.6670, 0.8777, 0.8685, 0.7767, 0.7779
best test loss 0.8659, 0.5447, 0.9156, 0.6816, 0.6765, 0.8784, 0.8686, 0.7906, 0.7903
After training AnatomyNet.py, the results are:
epoch 49 TRAIN loss 1.2705
test loss 0.8529, 0.3434, 0.9224, 0.6685, 0.6795, 0.8765, 0.8700, 0.7762, 0.7727
best test loss 0.8660, 0.4094, 0.9224, 0.6848, 0.6882, 0.8819, 0.8739, 0.7870, 0.7925
Dear Author:
During the training of baselineSERes18Conc.py, I found the Dice is almost always 0 for all classes. Is it normal? I used the data you provided on Google Drive and all the original hyper-parameters.
epoch 47 TRAIN loss 8.3984
test loss 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000
best test loss 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000
Is the code for calculating the various metrics (mentioned in the research paper) available?
There is no code for generating *_crp_v2.npy. I would rather not directly use the files downloaded from Google Drive.
Hello Dr. Zhu, many thanks for your code. Recently I have been following your AnatomyNet paper, but when I run your code I get the error below; I only changed the data path:
0it [00:00, ?it/s]
Traceback (most recent call last):
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 160, in
train_data, test_data = process('C:/Coco_file/dataset/pddca18/')
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 146, in process
return getdatamask(train_data+train_dataopt+test_data, train_masks_data+train_masks_dataopt+test_masks_data,debug=debug), getdatamask(test_dataoff, test_masks_dataoff,debug=debug)
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 99, in getdatamask
img = imfit(img, int(tnz), int(tny), int(tnx)) #zoom(img, (tnz/nz,tny/ny,tnx/nx), order=2, mode='nearest')
File "C:/Coco_file/AnatomyNet-for-anatomical-segmentation-master/AnatomyNet-for-anatomical-segmentation-master/src/AnatomyNet.py", line 90, in imfit
retimg[bz:ez, by:ey, bx:ex] = img
It seems to be an error while resizing the image, but I have no idea why. Can you help me? Thanks!
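The line `retimg[bz:ez, by:ey, bx:ex] = img` typically fails when the input volume is larger than the target along some axis. Below is a minimal crop-then-pad sketch of a shape-safe fit function; the name `imfit_safe` and the centering policy are illustrative, not the repo's actual code:

```python
import numpy as np

def imfit_safe(img, tnz, tny, tnx):
    """Center img inside a (tnz, tny, tnx) volume, cropping any axis
    that is larger than the target instead of raising. Hypothetical
    replacement for the repo's imfit; names are illustrative."""
    # First crop each axis down to the target size if needed.
    nz, ny, nx = img.shape
    cz, cy, cx = min(nz, tnz), min(ny, tny), min(nx, tnx)
    oz, oy, ox = (nz - cz) // 2, (ny - cy) // 2, (nx - cx) // 2
    img = img[oz:oz + cz, oy:oy + cy, ox:ox + cx]
    # Then center-pad the cropped volume into the target shape.
    retimg = np.zeros((tnz, tny, tnx), dtype=img.dtype)
    bz, by, bx = (tnz - cz) // 2, (tny - cy) // 2, (tnx - cx) // 2
    retimg[bz:bz + cz, by:by + cy, bx:bx + cx] = img
    return retimg
```

Printing `img.shape` alongside the target shape just before the failing assignment would confirm whether this mismatch is the cause.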
Hi, thank you for sharing your nice work.
I have some questions when running your code.
epoch | train_loss | train_acc_BrainStem | train_acc_Chiasm | train_acc_OPL | train_acc_OPR | train_acc_Parotid_L | train_acc_Parotid_R |
---|---|---|---|---|---|---|---|
287 | 0.159289687224056 | 0.832054673673114 | 0.368384247710526 | 0.440287856253575 | 0.432384147017327 | 0.793738396989537 | 0.797057225879442 |
val_loss | val_acc_BrainStem | val_acc_Chiasm | val_acc_OPL | val_acc_OPR | val_acc_Parotid_L | val_acc_Parotid_R |
---|---|---|---|---|---|---|
0.045590034552983 | 0.834816692158485 | 0.346497982045023 | 0.41390753766761 | 0.433888376484056 | 0.790529341529381 | 0.794535598142614 |
Currently, I'm not sure which step went wrong. I tried Dice loss, focal loss, and three other kinds of loss. The best Dice coefficient for the chiasm is less than 0.4, and training takes around 20 hours for the model to converge.
Would you please give me any suggestions? Thank you.
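For reference, the per-organ accuracy in such logs is usually a soft Dice coefficient over each organ channel. A minimal sketch (not the repo's exact implementation; the smoothing constant `eps` is an assumption):

```python
import torch

def soft_dice(pred, target, eps=1.0):
    """Soft Dice coefficient over one organ channel.
    pred, target: (N, D, H, W) tensors; pred holds probabilities in [0, 1].
    eps smooths the ratio so empty masks do not divide by zero."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return ((2 * inter + eps) / (denom + eps)).mean()
```

Small structures like the chiasm have few foreground voxels, so their Dice is noisy and typically lower than that of large organs, which matches the gap between the chiasm and parotid scores above.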
Visualizations for showing actual contour vs predicted contour, like the ones provided in readme
Only Dice loss can be seen in AnatomyNet.py. Where is the focal loss? Does it mean AnatomyNet is fine-tuned from 'baselineSERes18Conc' using Dice loss only?
I have a question. How much GPU memory do you have for training?
loss: 0%| | 0/88 [00:00<?, ?it/s]
epoch 37 TRAIN loss 2.8312
test loss 0.8429, 0.4998, 0.0000, 0.6772, 0.6740, 0.8516, 0.8249, 0.7296, 0.7656
best test loss 0.8437, 0.5580, 0.0000, 0.7028, 0.7022, 0.8647, 0.8469, 0.7699, 0.7822
As you can see, the value is 0 for the 3rd organ.
Any idea why this is happening?
Currently, training has 150 epochs with RMSprop optimizer and 50 epochs with SGD optimizer.
What is the reason behind using 2 different optimizers?
Why not use only one? -- or -- why not use more than 2?
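The two-phase schedule described above can be sketched as two loops that reuse the same model parameters but hand them to a fresh optimizer; the epoch counts, learning rates, and MSE loss here are illustrative stand-ins, not the repo's exact values:

```python
import torch

def two_phase_train(model, data, target, rms_epochs=3, sgd_epochs=2):
    """Sketch of a two-phase schedule (150 RMSprop + 50 SGD epochs in
    the scripts; shortened here). Hyper-parameters are illustrative."""
    loss_fn = torch.nn.MSELoss()
    # Phase 1: RMSprop adapts per-parameter step sizes early on.
    opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
    for _ in range(rms_epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    # Phase 2: plain SGD with momentum for a final, lower-noise refinement.
    opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    for _ in range(sgd_epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        opt.step()
    return loss.item()
```

A common motivation for such schedules is that an adaptive optimizer converges quickly at first, while switching to SGD at the end can settle into a flatter minimum; whether that was the author's reason here is exactly the open question.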
Hi! In the files under ./src/*.py, there are 4 paths, like:
TRAIN_PATH = './data/trainpddca15_crp_v2_pool1.pth'
TEST_PATH = './data/testpddca15_crp_v2_pool1.pth'
CET_PATH = './data/trainpddca15_cet_crp_v2_pool1.pth'
PET_PATH = './data/trainpddca15_pet_crp_v2_pool1.pth'
How can I generate the above 4 files? Thanks.
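The `.pth` files appear to be `torch.save` dumps of the preprocessed volumes. A hypothetical sketch of how such a cache could be built from the cropped arrays; the tuple layout `(image, mask, name)` is an assumption, not confirmed by the repo:

```python
import numpy as np
import torch

def build_cache(pairs, out_path):
    """Serialize preprocessed (image, mask, name) tuples to a .pth file.
    `pairs` would come from the cropping step (e.g. preprocess_crop.ipynb);
    the tuple layout here is an illustrative guess."""
    data = []
    for img, mask, name in pairs:
        data.append((img.astype(np.float32), mask.astype(np.uint8), name))
    torch.save(data, out_path)
    return out_path
```

The training scripts could then `torch.load` such a file once instead of re-reading and re-cropping the raw PDDCA volumes on every run.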
No such file or directory: './model/unet10pool3e2e_seres18_conc_pet_wmask_2_rmsp_1
Hello, I want to use the hybrid loss function (focal + Dice loss) you mentioned in the paper in my own model. I don't know which part of the code I should look at; what I downloaded is "baselineDiceFocalLoss.py". Could you confirm that this is the corresponding code? Thank you very much!
Hi, in preprocess_crop.ipynb,
why did you manually set the crop boundary as
"minz, maxz, miny, maxy, minx, maxx = 35, 90, 90, 300, 170, 350" instead of using the calculated result?
And then you only compare the two results to check that the manual setting is right.
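For comparison, crop bounds can also be derived automatically from the union of the organ masks. A sketch of what such a "calculated result" could look like; the `margin` parameter is an assumption:

```python
import numpy as np

def crop_bounds(masks, margin=5):
    """Compute (minz, maxz, miny, maxy, minx, maxx) from the union of
    all organ masks, plus a safety margin clipped to the volume."""
    union = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        union |= m > 0
    zs, ys, xs = np.nonzero(union)
    nz, ny, nx = union.shape
    minz, maxz = max(int(zs.min()) - margin, 0), min(int(zs.max()) + margin, nz)
    miny, maxy = max(int(ys.min()) - margin, 0), min(int(ys.max()) + margin, ny)
    minx, maxx = max(int(xs.min()) - margin, 0), min(int(xs.max()) + margin, nx)
    return minz, maxz, miny, maxy, minx, maxx
```

One plausible reason for fixed manual bounds is that every case is cropped to the same region, which keeps volume shapes comparable across the dataset, whereas per-case computed bounds vary.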
Hello, sir, nice work on your hybrid loss function!
I just want to figure out whether the hybrid loss function (Dice + focal loss) is used both in pre-training (using baselineSERes18Conc.py to initialize weights) and in the fine-tuning stage (loading the pretrained model with AnatomyNet.py). I am looking forward to your reply, thanks a lot.
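For context, a Dice-plus-focal hybrid loss combines a soft Dice term with a focal term that down-weights easy voxels. A minimal sketch; `lam`, `gamma`, and `eps` are illustrative hyper-parameters, and the paper's exact weighting may differ:

```python
import torch

def dice_focal_loss(pred, target, lam=0.5, gamma=2.0, eps=1.0):
    """Sketch of a hybrid loss: (1 - soft Dice) + lam * focal.
    pred holds probabilities in [0, 1]; target is a binary mask."""
    # Soft Dice term over all voxels.
    inter = (pred * target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    # Focal term: down-weights easy voxels by (1 - p_t)^gamma.
    pt = torch.where(target > 0, pred, 1 - pred).clamp(min=1e-6)
    focal = (-(1 - pt) ** gamma * pt.log()).mean()
    return (1 - dice) + lam * focal
```

Whether this combined form is used in both the pre-training and fine-tuning stages, or only in one of them, is exactly what the question above asks the author to confirm.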
Just curious about which techniques you used...
Hi,
I am very interested in your AnatomyNet and want to test it. But I see that the PyTorch version is 0.3/0.4, which is a little old. I don't know if it can be used in my PyTorch 1.3.1 environment. Or do you have plans to update it to the latest version?
Thank you for your attention.
I want to run the model on my own data, but the training is taking too long (even using a Google Colab GPU). Can you please share the trained model?
RuntimeError: Exception thrown in SimpleITK ReadImage: /opt/miniconda2/conda-bld/simpleitk_1491574810448/work/Code/IO/src/sitkImageReaderBase.cxx:82:
sitk::ERROR: Unable to determine ImageIO reader for "/mnt/cc7fd727-39d5-4b8f-90d3-c033854aba68/wxy/数据集/pddca18/0522c0330/structures/BrainStem_crp.npy"
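This error arises because `sitk.ReadImage` only understands registered medical image formats (`.nrrd`, `.nii`, `.mha`, ...), while the `*_crp.npy` caches are plain NumPy arrays and must be loaded with `np.load`. A small loader sketch that dispatches on the extension (the function name is illustrative):

```python
import numpy as np

def load_structure(path):
    """Load a structure mask. NumPy caches (.npy) need np.load;
    SimpleITK handles the registered medical formats."""
    if path.endswith('.npy'):
        return np.load(path)
    import SimpleITK as sitk  # only needed for the medical formats
    return sitk.GetArrayFromImage(sitk.ReadImage(path))
```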
Dear Dr. Zhu, sorry to disturb you. My master's advisor asked me to segment lung nodules/tumors based on the LIDC-IDRI database. Can you give me some suggestions on pre-processing the data and training? Many thanks!
I'm trying to use this loss in my segmentation task. Could you describe what the structure of inputs, targets, and flagvec should be? Many thanks!
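In the absence of an authoritative answer, here is a guess at the tensor layout based on the multi-organ setup; the shapes, the 9-organ count, and the background channel are all assumptions:

```python
import torch

# Illustrative shapes only: batch 1, 9 organ channels + background,
# 32^3 volume. The repo's exact conventions may differ.
N, C, D, H, W = 1, 10, 32, 32, 32
inputs = torch.rand(N, C, D, H, W)                       # per-channel probabilities
targets = torch.randint(0, 2, (N, C, D, H, W)).float()   # one-hot organ masks
# flagvec marks which organs are annotated in this case, so organs
# missing from a scan (common in PDDCA) contribute no loss.
flagvec = torch.tensor([1.0] * 9 + [0.0])
```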
Hi,
I was using the same dataset, and I wanted to know how you dealt with the artifacts of the CT scanner in the scans. For example:
This is a cropped image, but even then, there are some remnants of the CT scanner itself towards the right of the image.
Did you apply any pre-processing to deal with these (apart from cropping)? If not, were the models robust enough to not use them as potential shortcuts?
Hi.
I saw that you use the Tversky loss in the scripts.
Is it the same as the hybrid loss mentioned in the paper?
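For reference, the Tversky index generalizes Dice: `alpha` weights false positives and `beta` false negatives, and with alpha = beta = 0.5 it reduces to soft Dice. A minimal sketch; the `eps` smoothing and default weights are assumptions, and the repo's scripts may use different values:

```python
import torch

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1.0):
    """1 - Tversky index. With alpha = beta = 0.5 this recovers the
    soft Dice loss; skewing alpha/beta trades precision for recall."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

So a Tversky loss on its own is not the same thing as the paper's hybrid loss, which additionally combines a focal term with the Dice term.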