naronet's People

Contributors

djimenezsanchez

naronet's Issues

Low final contrast accuracy value

Hello!

I am working with NaroNet on my dissertation thesis under the supervision of @miselico.
I have tried to run NaroNet on the following datasets: example_POLE, a halved example_POLE, and Endometrial POLE (the full dataset).
Each time I got a low final contrast accuracy value (under 2%), even though I followed the steps from the README file. The only change I made was limiting the model to one GPU, by modifying the second line of NaroNet.py -> class NaroNet -> __init__ to self.device = 'cuda:0'; the default created a runtime error because the computer I am using has only one GPU available.
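
For reference, this is the kind of single-GPU selection I mean (a minimal sketch; the CPU fallback is my own addition, not from the repository):

    import torch

    # Pick the single available GPU, falling back to CPU if none is visible
    # (the fallback is my own addition; the original code hard-codes the device).
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')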

As mentioned in the README, the accuracy should be much higher before continuing with the other model components:
"To check whether the CNN has been trained successfully, check the 'Model_training_xxxx' folder and open the 'Contrast_accuracy_plot.png', where you should expect a final contrast accuracy value over 50%."

I attached the plot generated in my case. How can this issue be solved?
Thanks in advance!

[Attachment: Contrast_accuracy_plot.png]

CUDA out of memory error

I'm trying the 'Endometrial_POLE' example. I have changed PCL_batch_size from 80 to 8 and it is still not working, even though my GPU has about 16 GB of memory.

[Screenshots]
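
For reference, the change I made looks like this (a sketch; PCL_batch_size is the parameter name from the example script, and its exact location may differ):

    # Lower the patch-contrastive-learning batch size to reduce GPU memory use
    # (was 80 in the example; still runs out of memory on my 16 GB GPU).
    args['PCL_batch_size'] = 8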

Your documentation says that I can change the args['PCL_N_Crops_per_Image'] parameter, but I cannot find this parameter in the code:
[Screenshot: documentation excerpt]

Please advise what to do and which other parameters I should change to make it work, given that I'm emptying my GPU RAM before each run and making sure there are no "zombie" processes :)
Thanks

CUDA out of memory

I am experimenting with the Endometrial_POLE dataset with an added "Patients_per_Image" file, created according to the instructions in the README file on GitHub, with the aim of using multiple images per patient. I have attached the files in case the issue is caused by their content.

Patient_to_Image.xlsx
Image_Labels.xlsx

The first issue concerns the classes used to train NaroNet. In this context, each patient (from the 12 selected in the cohort) has 4 classes assigned to his/her knowledge graph, one for each classification task.
However, the following two lines in NaroNet.py select only the second label. Is this correct? If so, why?

    self.Train_indices = [self.IndexAndClass[i][1] for i in self.Train_indices]
    self.Test_indices = [self.IndexAndClass[i][1] for i in self.Test_indices]

I also observed that the sets of training and test indices are always the same, as shown in the attached image. This causes the following problem: the class assigned to the patients selected for the training set in the second classification task, saved in the y_trainn variable, is always 1, which produces this error:

File "/home/carol/NaroNet-main/NaroNet-main/src/NaroNet/NaroNet.py", line 204, in initialize_fold
self.Train_indices, _ = ros.fit_resample(x_trainn, y_trainn)

ValueError: The target 'y' needs to have more than 1 class. I got 1 class instead
[Screenshot: outputs]

    x_trainn = np.expand_dims(np.array(self.Train_indices),1)
    y_trainn = [self.labels[i][0] for i in self.Train_indices]

Here are x_trainn and y_trainn for context. I added these two variables for clarity; the functionality is exactly the same.
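
For context, the failure is reproducible outside NaroNet with a minimal sketch, assuming ros is imbalanced-learn's RandomOverSampler (which matches the error message):

    import numpy as np
    from imblearn.over_sampling import RandomOverSampler

    x = np.expand_dims(np.arange(6), 1)   # dummy training indices
    y = [1, 1, 1, 1, 1, 1]                # every training patient has class 1

    ros = RandomOverSampler()
    ros.fit_resample(x, y)  # raises ValueError: y needs more than 1 class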

To get past this issue, I set the training and test indices by hand, so that both sets contain at least one instance of class 0.

    self.Train_indices = [0, 1, 2, 3, 4, 6]
    self.Test_indices = [5, 7, 8, 9, 10, 11]
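
A less manual alternative would be a stratified split, which guarantees that every class appears on both sides. A sketch using scikit-learn, with a hypothetical per-patient label vector:

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hypothetical per-patient labels for the second classification task.
    labels = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
    indices = np.arange(len(labels))

    # stratify=labels preserves class proportions in both splits, so each
    # side is guaranteed to contain at least one patient of every class.
    train_idx, test_idx = train_test_split(
        indices, test_size=0.5, stratify=labels, random_state=0)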

This leads me to the second issue, shown below. For this experiment, I am using a server with two GPUs. I tried each of them in two separate runs but received the same error. Both GPUs are NVIDIA RTX A6000 cards with 48 GB of memory, more than the 11 GB of the hardware mentioned in the paper. As far as I can tell from the code, no data-parallelization method is implemented.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 618.00 MiB (GPU 1; 47.54 GiB total capacity; 45.74 GiB already allocated; 189.12 MiB free; 45.97 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This first appeared on the following line:

File "/home/carol/NaroNet-main/NaroNet-main/src/NaroNet/NaroNet_model/GNN.py", line 440, in MLPintoFeatures
x = F.relu(conv0(x))

I added these lines at the beginning of GNN.py:

import os
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:512'
import torch
torch.cuda.empty_cache()
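
As far as I understand, PYTORCH_CUDA_ALLOC_CONF is only read when the CUDA caching allocator is first initialized, so if anything touches CUDA before GNN.py is imported the setting is silently ignored. A sketch of the safer placement, at the very top of the entry script:

    # Set the allocator configuration before anything can initialize CUDA.
    import os
    os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:512'

    import torch  # imported only after the environment variable is set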

Now the error appears at:

File "/home/carol/NaroNet-main/NaroNet-main/src/NaroNet/NaroNet_model/GNN.py", line 443, in MLPintoFeatures
x = F.relu(conv0(x))

Empty cluster to phenotype arrays

Hi,

I am interested in applying NaroNet to a multiplexed immunofluorescence imaging dataset. I like the idea of your self-supervised embeddings and spatial neighborhood graphs. However, I cannot get the code to run all the way through. Below I describe what seems to work for me and where the problem starts:

The patch-contrastive pre-training seems to work:
[Screenshot]
And the patch-level embeddings seem to make sense: if I apply a simple k-means clustering to them, the result resembles some expected structures in my images:
[Screenshot]
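
For reference, the clustering itself is nothing special (a sketch assuming the per-patch embeddings are available as an (N, D) NumPy array; the file name is a placeholder):

    import numpy as np
    from sklearn.cluster import KMeans

    # Per-patch representations from the patch-contrastive pre-training
    # (loading path is hypothetical).
    embeddings = np.load('patch_embeddings.npy')

    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
    patch_labels = kmeans.fit_predict(embeddings)  # one cluster id per patch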

The NaroNet training seems to work as well, and the cross-validation confusion matrix shows that it distinguishes my two groups quite well:
[Screenshot]

The BioInsights module also provides some possibly reasonable output for areas and neighborhoods:
[Screenshot]

However, the phenotype composition of neighborhoods is already empty for all neighborhoods:
[Screenshot]
I can confirm that the respective entries in the previously saved cluster-assignment NumPy arrays seem to be empty, since line 489 of Pheno_Neigh_Info.py yields an array that consists only of zeros:

patch_to_pheno_assignment = np.load(osp.join(dataset.processed_dir_cell_types,'cluster_assignmentPerPatch_Index_{}_0_ClustLvl_{}.npy'.format(idxclster[1], clusters[-3])))

So when I do patch_to_pheno_assignment.max() I get 0.
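
To check whether this happens for every image, I scan all the saved assignment files with a small sketch (the glob pattern mirrors the file-name format in the quoted line, and dataset.processed_dir_cell_types is the same directory):

    import glob
    import os.path as osp
    import numpy as np

    pattern = osp.join(dataset.processed_dir_cell_types,
                       'cluster_assignmentPerPatch_Index_*_ClustLvl_*.npy')
    for path in sorted(glob.glob(pattern)):
        arr = np.load(path)
        # An all-zero array means no patch was assigned to any phenotype.
        print(osp.basename(path), 'max =', arr.max())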

And then the code crashes, as shown in the screenshot below, because the respective entries in CropConf are just an empty list.
[Screenshot: traceback]

Do you have any ideas about what to look into, or which intermediate results I could check to see where things go wrong?

Any help is much appreciated. Thank you!
