

mathiasunberath commented on May 26, 2024

OK, so this looks like you have figured it out. I'll go ahead and close this issue then.

from deepdrr.

mathiasunberath commented on May 26, 2024

DeepDRR allows you to simulate digital radiography, as acquired by a C-arm or digital radiography system, from CT scans (that in your case you got from Kaggle).

Thus, in order to use DeepDRR, you will need to specify the camera parameters of the radiography system which follows approximately a pinhole camera model. To generate images you will need a 3 x 4 projection matrix that will map homogeneous 3D coordinates from your volume to 2D pixel coordinates in the detector, your resulting digitally reconstructed radiograph.

I'd suggest you read up on those concepts to get started, and we can then see where you stand. There are some other common issues you may run into that have been answered here before.
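As an illustration of the pinhole model described above, here is a minimal sketch in plain NumPy. The intrinsics (focal length, principal point) and extrinsics (pose) below are made-up example values, not DeepDRR's API; this only shows how a 3 x 4 matrix maps homogeneous 3D points to 2D pixels:

```python
import numpy as np

# Illustrative pinhole-camera projection (not DeepDRR's API): P = K @ [R | t].
# K holds the intrinsics (focal length in pixels, principal point);
# [R | t] maps world coordinates into the camera frame.
f = 1200.0           # source-to-detector distance in pixel units (example value)
cx, cy = 256, 256    # principal point at the detector center (example value)
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])
R = np.eye(3)                          # camera aligned with world axes
t = np.array([[0.0], [0.0], [600.0]])  # world origin 600 mm in front of the source
P = K @ np.hstack([R, t])              # the 3 x 4 projection matrix

# Project a homogeneous 3D point to 2D pixel coordinates.
X = np.array([0.0, 0.0, 0.0, 1.0])     # a point at the world origin
u, v, w = P @ X
print(u / w, v / w)                    # -> 256.0 256.0 (lands on the principal point)
```

Dividing by the third homogeneous coordinate `w` is what makes distant points project closer to the principal point, i.e. perspective.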

minjoony commented on May 26, 2024

Thanks for your comment.
I read up on all those concepts and appreciate your suggestion. It was really helpful.

The data I got from Kaggle has dimensions of (512, 512, 54) and a voxel size of (0.92, 0.92, 9.00) mm.
I can't find any other information.
How can I get a projection matrix matching this data?
Also, which parts of the code need to be changed?
Sorry that the question is still vague.
Thanks.

mathiasunberath commented on May 26, 2024

Can you confirm that you have attempted to run our usage example in the readme by loading your data and projecting it?
You will probably need to adjust the center point (which should be set to the center of your CT, in most cases) and then you can check in greater detail the other components that go into the projection part of the code.
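To sketch what the center-point adjustment means, here is the voxel-to-world mapping written out as a plain 4 x 4 affine. The spacing and origin below are made-up example values, and `world_from_ijk` is only assumed to behave like this standard transform:

```python
import numpy as np

# Sketch of what `volume.world_from_ijk` does conceptually: a 4 x 4 affine
# maps voxel indices (i, j, k) to world coordinates in mm. The spacing and
# origin below are example values, not read from any particular file.
spacing = np.array([0.92, 0.92, 9.0])      # voxel size in mm
origin = np.array([-235.0, -235.0, 0.0])   # world position of voxel (0, 0, 0)
affine = np.eye(4)
affine[:3, :3] = np.diag(spacing)
affine[:3, 3] = origin

# Center voxel of a (512, 512, 54) volume -> world-space center point.
ijk_center = np.array([256, 256, 27, 1.0])
world_center = (affine @ ijk_center)[:3]
print(world_center)  # world coordinates of the volume center, in mm
```

The same idea is what `center = volume.world_from_ijk @ deepdrr.geo.point(...)` computes: half of each dimension, pushed through the volume's affine.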

minjoony commented on May 26, 2024

I appreciate your comment. I attempted to run the example code with the center set as below.
center = volume.world_from_ijk @ deepdrr.geo.point(256, 256, 27)
But I got this Error.

Traceback (most recent call last):
File "/home/minjoon/Desktop/JupyterLab/deepDRR/reconstruction.py", line 18, in
carm = deepdrr.CArm(isocenter=center)
TypeError: __init__() missing 1 required positional argument: 'isocenter_distance'

I can fix it with carm = deepdrr.CArm(isocenter=center, isocenter_distance=?).
But I don't know what value to put for '?'.
The picture below is my result when I set isocenter_distance to '0'.
image

And this is the result when I set it to '100'.
image

mathiasunberath commented on May 26, 2024

OK, this looks like you are making progress. 0 as the isocenter distance is not a great choice, because it essentially means you are rotating around the X-ray source. It may work, but it will make geometry debugging hard. Set it to about half the source-to-detector distance, in this case ~600.
Then you can play around a bit with the center. Think about what you would expect your images to look like; that will help you see whether things are being read in properly.
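The role of the isocenter distance can be illustrated with a small geometry sketch (plain NumPy, not DeepDRR's API); the orbit plane and angle convention below are arbitrary choices for illustration:

```python
import numpy as np

# Geometry sketch: the source sits `distance` away from the isocenter and
# orbits it in the x-z plane. With distance 0 the source *is* the isocenter,
# so rotating spins the camera in place instead of orbiting the volume.
def source_position(isocenter, distance, theta_deg):
    """Source position orbiting the isocenter in the x-z plane."""
    theta = np.radians(theta_deg)
    offset = distance * np.array([np.sin(theta), 0.0, -np.cos(theta)])
    return isocenter + offset

iso = np.array([0.0, 0.0, 0.0])
print(source_position(iso, 600.0, 0.0))  # 600 mm from the isocenter along -z
print(source_position(iso, 0.0, 45.0))   # degenerate: always at the isocenter
```

With `isocenter_distance = 600` and a 1200 mm source-to-detector distance, the isocenter sits at the midpoint between source and detector, which is the usual C-arm configuration.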

minjoony commented on May 26, 2024

Hi, I'm sorry for the poor question of setting isocenter_distance to '0'. I have fixed it to 600.

I've tried to get reasonable results, but I'm still stuck on the same issue.

First, I guess my setting of the center point is wrong.
Here is my CT data.
image
As you can see, the dimensions of the CT are [512, 512, 36], so I set the center like this:
center = volume.world_from_ijk @ deepdrr.geo.point(256, 256, 18)
Did I misunderstand what you mentioned about adjusting the center?

Here are my code and the results, respectively.
image
image

When I set the center to (256, 256, 240), I got this result.
image

What should I do to get past this issue?

mathiasunberath commented on May 26, 2024

Can you generate a couple more images by rotating your camera source around the isocenter in both angular directions? This will help you understand better from which direction you are looking at your volume.
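The sweep can be sketched as a loop over two angles; the spherical-angle convention below is an arbitrary illustrative choice, not DeepDRR's:

```python
import numpy as np

# Sketch of an angular sweep: move the source around the isocenter in two
# angular directions (theta, phi) and record the viewing direction of each
# pose. Rendering one DRR per pose makes it obvious from which side you are
# looking at the volume.
def view_direction(theta_deg, phi_deg):
    """Unit vector from source toward isocenter for spherical angles."""
    t, p = np.radians([theta_deg, phi_deg])
    return np.array([np.cos(p) * np.sin(t), np.sin(p), np.cos(p) * np.cos(t)])

# A small grid of candidate poses around the current view.
poses = [(theta, phi) for theta in (-30, 0, 30) for phi in (-30, 0, 30)]
directions = [view_direction(theta, phi) for theta, phi in poses]
print(len(directions))  # 9 candidate views to render and compare
```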

minjoony commented on May 26, 2024

Thanks for your comment.
Following your advice, I got this result when I set the center to (256, 256, 200), the source-to-detector distance to 1200, and the isocenter distance to 600.

image

I think it's a projection of the CT from the axial view.
The following figures show some slices of the input.

image
image

It seems that summing along the red lines produces the first figure.
But I can't find any bone-like structures, only a black shadow.
I guess it's because the intensity range is small, so the contrast isn't visible.
If I'm right, can I adjust the range of the results?

Here is the code that gave me this result.

image

mathiasunberath commented on May 26, 2024

So the problem I see is that you are looking at your volume from the head-foot direction. This is not usually a view that is possible with X-ray machines. You should rotate by 90 degrees so that you get an anterior-posterior view of your CT scan. Once you have that geometric configuration, you should start seeing some structure.

minjoony commented on May 26, 2024

I followed your advice, and now I can see some structure.

image
image

But I would still like better results.

Here are my data and code.
(I changed the input data. The edited comment shows the results for the previous data (study_1109.nii).
"study_1109" isn't isotropic ([0.7, 0.7, 8.0] mm), so I switched to "dcmtonii.nii", which has a voxel size of [0.7, 0.7, 1.0] mm.)

image
image

What more can I do to produce a proper DRR?

(I have attached the contents of the DICOM header in case it helps.)
0010,1010,Patient Age=
0010,21D0,?=20000101
0012,0062,?=542328153
0012,0063,?=DCM.113100/113105/113107/113108/113109/113111
0013,0010,?=542135363
0013,1010,?=LIDC-IDRI
0013,1013,?=62796001
0018,0015,Body Part Examined=CHEST
0018,0022,Scan Options=HELICAL MODE
0018,0050,Slice Thickness=0.625
0018,0060,KVP [Peak KV]=120
0018,0090,Data collection diameter=500.000000
0018,1020,Software Version=06MW03.5
0018,1100,Reconstruction Diameter=349.000000
0018,1110,Distance Source to Detector [mm]=949.075012
0018,1111,Distance Source to Patient [mm]=541.000000
0018,1120,Gantry/Detector Tilt=0.000000
0018,1130,Table Height=137.000000
0018,1140,Rotation Direction=22339
0018,1150,Exposure Time [ms]=478
0018,1151,X-ray Tube Current [mA]=441
0018,1152,Acquisition Device Processing Description=12337
0018,1160,Filter Type=BODY FILTER
0018,1170,Generator Power=52800
0018,1190,Focal Spot[s]=1.200000
0018,1210,Convolution Kernel=STANDARD
0018,5100,Patient Position=FFS
0020,000D,Study Instance UID=1.3.6.1.4.1.14519.5.2.1.6279.6001.10125320060604644062
0020,000E,Series Instance UID=1.3.6.1.4.1.14519.5.2.1.6279.6001.5727422057329558140.
0020,0010,Study ID=
0020,0011,Series Number=1003
0020,0013,Image Number=0
0020,0032,Image Position Patient=-175.500000 -174.500000 0.0
0020,0037,Image Orientation (Patient)=1.000000\0.000000\0.000000\0.000000\1.000000\0.000000
0020,0052,Frame of Reference UID=1.3.6.1.4.1.14519.5.2.1.6279.6001.317612173882565644891105935364
0020,1040,Position Reference=SN
0020,1041,Slice Location=0
0028,0002,Samples Per Pixel=1
0028,0004,Photometric Interpretation=MONOCHROME2
0028,0010,Rows=512
0028,0011,Columns=512
0028,0030,Pixel Spacing=0.703125 0.703125
0028,0100,Bits Allocated=16
0028,0101,Bits Stored=16
0028,0102,High Bit=15
0028,0103,Pixel Representation=1
0028,0120,Pixel Padding Value=63536
0028,0303,?=MODIFIED
0028,1050,Window Center=40
0028,1051,Window Width=350
0028,1052,Rescale Intercept=0.0
0028,1053,Rescale Slope=1
0038,0020,?=20000101
0040,0002,?=20000101
0040,0004,?=20000101
0040,0244,?=20000101
0040,2016,?=
0040,2017,?=
0040,A075,?=Removed by CTP
0040,A123,?=Removed by CTP
0040,A124,?=1.3.6.1.4.1.14519.5.2.1.6279.6001.3618968344312659762.
0070,0084,?=
0088,0140,?=1.3.6.1.4.1.14519.5.2.1.6279.6001.141585249950369106214894210758
7FE0,0010,Pixel Data=524288

mathiasunberath commented on May 26, 2024

I believe I can see lung structures in your first images, which is good. This suggests that right now your images are simply too dark. This can have multiple causes:

  1. your primary (the number of photons) is not sufficiently high, so you don't get enough signal at the detector.
  2. your DICOMs are not properly segmented into the different materials. This can have several causes, for example the intensity values not being read properly or the V-net segmentation not working. Here I would recommend using threshold-based material decomposition and manually checking the result.
  3. your DICOM is not read properly, so all intensity values are too high.
  4. some combination of the above.
  5. something else.
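Point 2 can be checked with a rough threshold-based decomposition like the sketch below. The HU cutoffs are common rules of thumb, not the values DeepDRR's trained segmentation uses:

```python
import numpy as np

# Rough threshold-based material decomposition of a HU volume, as a manual
# check. The cutoffs below are common rules of thumb, not DeepDRR's learned
# segmentation: air < -800 HU, bone > 350 HU, soft tissue in between.
def decompose_materials(hu, air_max=-800.0, bone_min=350.0):
    """Return boolean masks for air, soft tissue, and bone."""
    air = hu < air_max
    bone = hu > bone_min
    soft_tissue = ~air & ~bone
    return {"air": air, "soft tissue": soft_tissue, "bone": bone}

# Tiny synthetic example: one air voxel, one soft-tissue voxel, one bone voxel.
hu = np.array([-1000.0, 40.0, 700.0])
masks = decompose_materials(hu)
for name, mask in masks.items():
    print(name, mask.sum())  # each material should claim exactly one voxel
```

Visualizing each mask slice by slice (as done later in this thread) quickly reveals whether the segmentation, rather than the projector, is the problem.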

minjoony commented on May 26, 2024

I do appreciate your comment again.

I checked the results step by step.
First, I plotted the results of segmentation and the result of mass_attenuation.
These are the results.
(From left to right: bone, soft tissue, air, attenuation)

  1. Coronal view
    image

  2. Sagittal view
    image

I think something is wrong with the attenuation part.

I am wondering if I could get the data you worked with. I just want to see a proper result. :(

mathiasunberath commented on May 26, 2024

The individual images look good. We had a version-specific issue that @benjamindkilleen fixed; you may be running into it if you are on that version. Maybe you can update, and @benjamindkilleen can provide some guidance on that specific bug?

benjamindkilleen commented on May 26, 2024

@artiiicy can you try installing the v1.0.1-alpha branch from source? This is an unstable branch, but it is the most up to date as of now and may have a fix to your problem.

minjoony commented on May 26, 2024

Thanks for your help.
Unfortunately, v1.0.1-alpha shows the same results.

I checked my input data's intensity, and it has a range of about [-1024, 3071].
Is there any possibility that my input data is invalid?

I used the nii data obtained by converting the data below:
https://data.idoimaging.com/dicom/1020_abdomen_ct/1020_abdomen_ct_510_jpg.zip

benjamindkilleen commented on May 26, 2024

By the data's intensity, do you mean the CT data? From that range, you need to convert from Hounsfield units to density. Can you post a code snippet for how you are instantiating the Volume object? You may need to use the from_hu or another classmethod rather than the Volume() class constructor directly.
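The HU-to-density step can be sanity-checked with the crude water-equivalent approximation below. This is not the exact mapping in DeepDRR's load_dicom module, only an order-of-magnitude reference:

```python
import numpy as np

# Crude HU -> density conversion as a sanity check. This is the simple
# water-equivalent approximation density = (HU / 1000 + 1) g/cm^3, clipped
# at zero; DeepDRR's load_dicom module implements its own mapping, so treat
# this only as a rough reference for what the value range should look like.
def hu_to_density(hu):
    density = np.asarray(hu, dtype=float) / 1000.0 + 1.0
    return np.clip(density, 0.0, None)

hu = np.array([-1024.0, 0.0, 1000.0])  # air, water, dense bone
print(hu_to_density(hu))  # -> [0. 1. 2.]
```

If a volume with values in roughly [-1024, 3071] is passed to the projector without any such conversion, every voxel looks extremely dense, which matches the "too dark" images seen earlier in this thread.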

minjoony commented on May 26, 2024

Yes, the range of about [-1024, 3071] is the intensity of the CT data.

Here is my code snippet for loading the Volume object.
As you can see, I just used the Volume() class constructor.

image

Is there any code in your repository for converting from HU to density?
(I can't find the from_hu classmethod.)

benjamindkilleen commented on May 26, 2024

On the alpha branch, from_hu is a classmethod like from_nifti that takes in the HU CT volume as a numpy array, rather than a path to a file. The load_dicom module contains the code for converting HU to density. However, from your filename, is your nifti file already converted to density? Or is that referring to the conversion from DICOM to NifTi? If you examine the values of the NifTi file directly, what are they?

minjoony commented on May 26, 2024

Oh, sorry. I searched on master, not the branch.

My filename refers to the conversion from DICOM to NIfTI.
As I mentioned, I used this data (https://data.idoimaging.com/dicom/1020_abdomen_ct/1020_abdomen_ct_510_jpg.zip),
which is DICOM.
So I converted it to NIfTI and used that as input.

The values of the NIfTI file are in [-1024, 3071].
Sorry for the confusion.

Additionally, I made a critical mistake: I was using the code from master, not the branch.
After using the v1.0.1-alpha code, I got this DRR result.
image

I appreciate your help so much.
This is my code snippet. (I don't use from_hu; from_nifti in v1.0.1-alpha works well.)

image
