
Comments (4)

mathiasunberath commented on June 6, 2024

Hi,
I have some questions and suggestions.

  1. Which segmentation algorithm are you using, V-net or threshold based? Have you investigated whether the segmentation into bone, soft tissue, and air gives you the expected result? If not, look at that first, and then I would suggest playing with the threshold parameters (see the sketch after this list). It might be that your dataset has some global linear intensity transformation such that the segmentation is wrong (and consequently also the densities, materials, etc.).
  2. If the segmentation is correct, then it may be a difference in geometry: patient prone vs. supine.
  3. Are you using scatter generation? If yes, try switching it off. The image a system gives you is never the raw acquired image; it has passed through many pre-processing steps that bring the image quality to what you see there.
  4. Your CT seems to have somewhat low resolution; maybe you want to try a higher-resolution CT to get a better outcome. Low resolution in 3D will be amplified by the magnification during projection.
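
For point 1, here is a minimal sketch of this kind of threshold check. The HU cutoffs and the helper function are illustrative assumptions for this sketch, not DeepDRR's actual implementation or defaults:

```python
import numpy as np

# Illustrative HU cutoffs -- assumptions for this sketch, not DeepDRR defaults.
AIR_MAX_HU = -800.0
BONE_MIN_HU = 350.0

def segment_materials(hu: np.ndarray) -> dict:
    """Split a CT volume in Hounsfield units into air / soft tissue / bone masks."""
    air = hu <= AIR_MAX_HU
    bone = hu >= BONE_MIN_HU
    soft_tissue = ~air & ~bone
    return {"air": air, "soft_tissue": soft_tissue, "bone": bone}

# Sanity check for a global linear intensity transformation: air outside the
# patient should sit near -1000 HU. If it does not, fixed thresholds (and
# hence densities and materials) will be wrong for this volume.
# assert hu.min() < -900, "air is not near -1000 HU -- check intensity scaling"
```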

@Jan-Nico anything to add?

Let us know how it goes.
Mathias


pengchengu commented on June 6, 2024

Hi,
Thanks for the suggestions.

  1. The segmentation algorithm is threshold based, and I have checked that the segmentation results are correct with the help of 3D Slicer.
  2. Both the DRR and the real X-ray are patient prone.
  3. The former results used scatter generation; I have now switched it off, and the result is shown in Figure 4.
  4. Figure 1 is the result of a CT with (0.488 mm, 0.488 mm, 1.5 mm) voxel size. Figures 2, 3, and 4 are from a CT with (0.869 mm, 0.869 mm, 1.000 mm). I believe the resolutions of both CTs are adequate for DRR.

From the results, I believe switching off the scatter generation does help reduce the brightness of the lung area, but it is still quite different from the real X-ray. I'm trying to put a scale weight on the attenuation coefficients of air to reduce the brightness of the lungs; is that a feasible method? Besides, what is the meaning of the pixel values in the DRR result, and how can I normalize them to the same distribution as the real X-ray for the downstream deep learning model?
[Figure 4]


mathiasunberath commented on June 6, 2024

Hi.
It is difficult to debug this remotely; again, some more comments.
I personally think we should be more interested in achieving the same image appearance than in the fact that "the lungs are bright". This may be due to different proportions in the anatomy, so it does not seem to be a good indicator to me.

  1. I assume that you have gotten the above "real" X-ray directly from a scanner, which (as mentioned previously) means it will have gone through a pre-processing pipeline. As a consequence, a direct comparison will be difficult if you do not know what processing was applied to the real images.
  2. That being said, some other properties may be different (I assume that the segmentation is indeed correct): Which spectrum are you using? Is it comparable (kVp and filtration) to the spectrum used to generate the real images?
  3. It seems to me that the DRRs are not, in fact, from the same prone position? When I compare the heart's outline, the apex points towards the right in your real X-ray and towards the left in your generated images.
  4. Regarding your question about normalization: Off the top of my head, we compute the deposited energy per pixel. However, for processing in a NN you may want to normalize to [-1, 1] or so anyway, so maybe that does not make a big difference (see the sketch after this list).
  5. Regarding the image resolution: You are considering a rather high-magnification view onto your anatomy, with magnification factors likely around 2 for the spine. This will increase the artifacts due to low resolution on your detector, which will also contribute to the less natural image appearance.
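
For point 4, here is a minimal sketch of such a normalization; `normalize_drr` is an illustrative helper, and `drr` is assumed to be a 2D float array of deposited energy per pixel as produced by the projector:

```python
import numpy as np

def normalize_drr(drr: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Min-max scale a DRR to [-1, 1] for network input."""
    lo, hi = float(drr.min()), float(drr.max())
    return 2.0 * (drr - lo) / (hi - lo + eps) - 1.0
```

Note that real radiographs are usually displayed after a negative-log transform of the measured intensity, so depending on the domain of your real images you may want to apply `-np.log` to the (positive-valued) intensity image before scaling. And on point 5: with magnification m = SID/SOD around 2, an in-plane voxel of 0.869 mm projects to roughly 1.7 mm on the detector, which is why the low CT resolution becomes visible.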

As a side remark, you are obviously free to tune any parameter that you like, but I would discourage the heuristic adjustment of attenuation coefficients etc. before having identified the source of error. It currently seems to me that the largest difference in appearance between the images is the bone contour contrast, which could be due to a different source spectrum and processing.


mathiasunberath commented on June 6, 2024

I will close this issue for now since there has been no further response. Let us know if you have more questions.

