
segdgm_cnn's People

Contributors

zl376


segdgm_cnn's Issues

QSM dataset

Hi there,
My QSM NIfTI images have dimensions 174 × 192 × 160 and a voxel size of 1 × 1 × 1. However, the script 'segDGM_3DCNN.py' fails to work for me and produces an empty (all-zero) image as the label output.
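
For reference, this is roughly how I checked that the output is empty (the file name is a placeholder for my own path):

import nibabel as nib
import numpy as np

# Placeholder path for the label volume written by segDGM_3DCNN.py
label = nib.load('output_label.nii.gz').get_fdata()

# An empty ("null") segmentation contains no non-zero voxels
print(np.unique(label))         # only 0 for an all-zero output
print(np.count_nonzero(label))  # 0 for an all-zero output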

Could you provide a link to your dataset? I would like to reproduce your results.

Thank you,
Best,
JMC

More information about the methodology

I am trying to use your model to segment a QSM image, and I have a few questions:

  1. Is the data used for training the model available somewhere? That would really help me reorient the data for inference.
  2. I get no output labels when running on chi_cosmos from the 2016 QSM Challenge data (intensities in the range [-0.264, 0.39]). I tracked the problem down to the scale function, which maps every value to either 127 or 128: with a window of (-250, 250), (n + 250) / 500 = 0.5 + n/500, so multiplying by 256 gives 128 + 0.512·n, which truncates to 127 or 128 for these intensities. The comment, however, implies it should map the data uniformly to 0–255 (or 256?). A quick numerical check follows the function below.
def scale(img, window):
    val_min, val_max = window

    # Clip intensities to the window
    res = np.copy(img)
    res[res < val_min] = val_min
    res[res > val_max] = val_max

    # Normalize to [0, 1] relative to the window
    res = (res - val_min) / (val_max - val_min)

    # Map to integer grey levels 0..255
    res = (res * 256).astype(int)
    res[res == 256] = 255

    return res
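
To illustrate, here is a quick check of that collapse. The window of (-250, 250) is the one implied by the arithmetic above, and the sample values are just points from the chi_cosmos intensity range:

import numpy as np

# Sample QSM intensities within the reported range [-0.264, 0.39]
vals = np.array([-0.264, -0.1, 0.0, 0.1, 0.39])

# With window = (-250, 250): (n + 250) / 500 = 0.5 + n/500,
# so res * 256 = 128 + 0.512*n, which truncates to 127 or 128.
print(scale(vals, (-250, 250)))               # here: 127, 127, 128, 128, 128

# A window matched to the data range uses the whole 0-255 scale instead
print(scale(vals, (vals.min(), vals.max())))  # here: 0, 64, 103, 142, 255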

What is scale supposed to do, and what does window represent? I tried changing the function to do a true linear mapping:

def scale(img):
    # Linearly rescale the image's own min/max to [0, 255]
    res = img.copy().astype(np.float64)   # np.float is removed in recent NumPy
    val_min, val_max = img.min(), img.max()
    res = (res - val_min) / (val_max - val_min) * 255.0
    return res.astype(np.uint8)

which normalizes the data to the range [0, 255]. That gave me at least some output (which looks half-decent).
  3. Is there a reason the built-in Keras convolution layers wouldn't work with the raw NIfTI image as input? I noticed you wrote a lot of code to split the image into slices for analysis, and I was wondering whether a Conv3D layer applied to the whole volume could achieve the same effect; a minimal sketch of what I have in mind follows below.
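
This is only an illustration of the idea, not your actual network; the input shape, filter counts and class count below are placeholders:

from tensorflow.keras import layers, models

# Placeholder class count (e.g. deep grey matter structures + background)
num_classes = 11

# One whole QSM volume as input, single channel
inputs = layers.Input(shape=(174, 192, 160, 1))
x = layers.Conv3D(16, kernel_size=3, padding='same', activation='relu')(inputs)
x = layers.Conv3D(16, kernel_size=3, padding='same', activation='relu')(x)
# 1x1x1 convolution producing a per-voxel class probability map
outputs = layers.Conv3D(num_classes, kernel_size=1, activation='softmax')(x)

model = models.Model(inputs, outputs)
model.summary()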

Thank you for making your work public.
