
sharifamit / vtgan


[ICCV'21] [Tensorflow] Semi-supervised Retinal Image Synthesis and Disease Prediction using Vision Transformers

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%
fluorescein-angiography generative-adversarial-network fundus-image-analysis vision-transformers vision-transformer semi-supervised-learning disease-prediction diabetic-retinopathy-detection


vtgan's Issues

train problem: Incompatible shapes: [2,64,64,1] vs. [2,32,32,1]

Sorry to ask so many questions. When I run train.py there is a problem. My tf==2.5.0 and keras==2.5.0. I appreciate your help, best wishes :D
Traceback (most recent call last):
  File "train.py", line 220, in <module>
    train(d_model1, d_model2, g_model_coarse, g_model_fine, vt_model, dataset, n_epochs=args.epochs,
  File "train.py", line 73, in train
    d_loss4 = d_model2.train_on_batch([X_realA_half,X_fakeB_half],[y1_coarse,y2,d_feat2_fake[2]])[0] # [,X_fakeB_half]
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/training.py", line 1800, in train_on_batch
    logs = self.train_function(iterator)
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/training.py", line 830, in train_function
    return step_function(self, iterator)
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/training.py", line 820, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1285, in run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 2833, in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 3608, in _call_for_each_replica
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 597, in wrapper
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/training.py", line 813, in run_step
    outputs = model.train_step(data)
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/training.py", line 771, in train_step
    loss = self.compiled_loss(
  File "/opt/conda/lib/python3.8/site-packages/keras/engine/compile_utils.py", line 201, in __call__
    loss_value = loss_obj(y_t, y_p, sample_weight=sw)
  File "/opt/conda/lib/python3.8/site-packages/keras/losses.py", line 142, in __call__
    losses = call_fn(y_true, y_pred)
  File "/opt/conda/lib/python3.8/site-packages/keras/losses.py", line 246, in call
    return ag_fn(y_true, y_pred, **self._fn_kwargs)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/keras/losses.py", line 1202, in mean_squared_error
    return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 10405, in squared_difference
    _ops.raise_from_not_ok_status(e, name)
  File "/opt/conda/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 6897, in raise_from_not_ok_status
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [2,64,64,1] vs. [2,32,32,1] [Op:SquaredDifference]
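For reference, a minimal diagnostic sketch (an assumption about the cause, not a confirmed fix): the MSE targets passed to train_on_batch have to match the coarse discriminator's own output shapes, so building the label arrays from d_model2.output_shape instead of hard-coding 64 or 32 avoids this kind of mismatch. The helper name below is hypothetical.

import numpy as np

# Hypothetical helper: build one target array per discriminator output,
# shaped exactly like that output, so the loss never sees mismatched shapes.
def patch_labels_like(d_model, batch_size, value=-1.0):
    out_shapes = d_model.output_shape          # list of tuples for multi-output models
    if not isinstance(out_shapes, list):
        out_shapes = [out_shapes]
    return [np.full((batch_size,) + tuple(s[1:]), value) for s in out_shapes]

# Usage sketch: print the shapes and compare them with y1_coarse / y2.
# for t in patch_labels_like(d_model2, batch_size=2):
#     print(t.shape)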

The link for the dataset has become inactive

I have noticed that the download link for the dataset (Hajeb et al.) has become inactive. Furthermore, when I attempted to locate the official dataset, I found that the link for the "normal" dataset has also expired. Would it be possible for you to share the dataset with me? My email address is: [email protected].

I would be immensely grateful.

Reproducing the results of the paper

Hello, dear professor! I've been working with your Fundus2Angio and VTGAN projects recently. I'm very interested in your work and am trying to reproduce the results in the paper.

However, I have run into some difficulties and would like to ask you a question.

Which images did you use for testing, and how many images did you use to calculate the FID score?

In addition, I set the batch size to 4 and the number of epochs to 100, then selected the model from the last epoch and tested it on 850 images (the training set is also these 850 images). The FID score I calculated is as high as 130, and I'm stuck, so I would like to ask for your help.

If you can find time to reply, I will be very grateful!

Problem with randomcrop.py

I want to ask about the purpose of this part. In my opinion, this operation should be applied to all pairs.

[screenshot of the relevant randomcrop.py code attached in the original issue]
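As an illustration of what "applied to all pairs" would mean in practice, here is a minimal paired-crop sketch (an assumption, not the repository's randomcrop.py): one crop window is sampled and reused for both images of a pair so that the fundus photo and its angiogram stay aligned.

import numpy as np

def paired_random_crop(fundus, angio, crop_size=512):
    # Sample one window and apply it to both images so the pair stays aligned.
    h, w = fundus.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return (fundus[top:top + crop_size, left:left + crop_size],
            angio[top:top + crop_size, left:left + crop_size])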

need some help

Hi, I love this application. I rewrote it in PyTorch and uploaded it to GitHub; I would appreciate it if you could have a look at my GitHub account.
By the way, my code doesn't work well even though it runs. Maybe you can give me some advice on how to improve it. I have no idea why the network doesn't learn anything about the blood vessels, but only learns the style and the optic disc area.
Thank you for your time.

Pretrained model

Thanks a lot for your help! I would like to ask if I can get the pretrained model, because training from scratch takes a long time. Also, how much time did you spend on training? Best wishes.

Ask For Details

Hello, dear professor! I'm glad to receive your reply, and it is a great honor to have this opportunity to communicate with you. I've thought about your reply and have some new questions for you.

The first point is that I am currently reproducing a PyTorch version based on your TensorFlow version of Fundus2Angio. My current FID is 130, so there is probably a problem with my implementation, and I will carefully revise it according to your suggestion. I am also going to run the TensorFlow version of Fundus2Angio to compare the results. In addition, I am running the PyTorch version of VTGAN that you point to on the homepage. I still have a question about model selection for these two works: how do you choose the best model? Specifically, the model from which epoch is used for testing?

The second point is that the dataset link you provided contains a total of 68 pairs of images. Are these images all taken from the 850 pairs obtained after the random-crop operation on the original dataset, or do they come from somewhere else?

The third point is that I would like to discuss the GAN image-generation task of converting fundus to angiography images. Do you think there is still room for improvement and further research? I ask because I want to take this direction as my future research topic.

If you can reply to each point, I would really appreciate it! Thanks for taking the time to respond to me. I wish you success in your work, Professor!

Help

Could you update the dataset link? I found it is no longer valid. Best wishes.

Convert data problem

Hi, thanks for your contribution! There is a problem when I run convert_npz.py:

Loaded:  (850, 512, 512, 3) (850, 512, 512, 1)
Traceback (most recent call last):
  File "convert_npz.py", line 53, in <module>
    savez_compressed(filename, src_images, tar_images.label)
AttributeError: 'numpy.ndarray' object has no attribute 'label'

The error goes away if I remove label, i.e. savez_compressed(filename, src_images, tar_images),
but then there is another problem when I run train.py:

File "train.py", line 179, in <module>
    dataset = load_real_data(args.npz_file+'.npz')
  File "/home/VTGAN-main/src/dataloader.py", line 9, in load_real_data
    X1, X2, y = data['arr_0'], data['arr_1'], data['arr_2']
  File "/opt/conda/lib/python3.8/site-packages/numpy/lib/npyio.py", line 260, in __getitem__
    raise KeyError("%s is not a file in the archive" % key)
KeyError: 'arr_2 is not a file in the archive'

So I think the main problem is in convert_npz.py. By the way, my TensorFlow version is 2.4.1 and Keras is 2.4.3.
Thanks for your time, appreciate it!
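For what it's worth, a minimal sketch of the save/load round trip that dataloader.py appears to expect (the .label in the quoted line looks like a typo for a comma plus a separate label array; the array names and shapes below are made up for illustration):

import numpy as np

# Hypothetical stand-ins for the arrays built in convert_npz.py.
src_images = np.zeros((4, 512, 512, 3), dtype=np.uint8)   # fundus photographs
tar_images = np.zeros((4, 512, 512, 1), dtype=np.uint8)   # angiography targets
labels     = np.zeros((4, 1), dtype=np.uint8)             # per-image class labels (assumed)

filename = 'vtgan_data.npz'

# Passing three positional arrays stores them as arr_0, arr_1 and arr_2 ...
np.savez_compressed(filename, src_images, tar_images, labels)

# ... which is exactly how load_real_data unpacks them:
data = np.load(filename)
X1, X2, y = data['arr_0'], data['arr_1'], data['arr_2']
print(X1.shape, X2.shape, y.shape)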

Issue about data

Hello, I failed to download the data from the link. Could you provide another link, such as Google Drive?
Thank you!

resuming training

Why don't you load the discriminator weights when resuming training?
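A minimal sketch of what resuming with the discriminator included could look like (assumed function and file names, not the repository's checkpointing code); without restoring the discriminator, a resumed run restarts adversarial training against a freshly initialized critic.

import os

def save_checkpoint(g_model, d_model, step, ckpt_dir='checkpoints'):
    # Save both networks so adversarial training can resume consistently.
    os.makedirs(ckpt_dir, exist_ok=True)
    g_model.save_weights(os.path.join(ckpt_dir, f'generator_{step:06d}.h5'))
    d_model.save_weights(os.path.join(ckpt_dir, f'discriminator_{step:06d}.h5'))

def load_checkpoint(g_model, d_model, step, ckpt_dir='checkpoints'):
    # Restore both networks from the same step before continuing training.
    g_model.load_weights(os.path.join(ckpt_dir, f'generator_{step:06d}.h5'))
    d_model.load_weights(os.path.join(ckpt_dir, f'discriminator_{step:06d}.h5'))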

change input size from 512 to 256

Thanks for your nice code!
I am trying to run your code on a dataset with somewhat lower resolution, and I want to crop images to 256x256 instead of 512x512.
I have modified input_dim in the preprocessing files, but some errors occur when I start training.
Therefore, I want to ask what I should change if the input size is 256x256 (i.e., input_dim, n_patch, patch_size, or any network structures?).
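A rough rule of thumb (an assumption about the architecture, not a statement of the repository's exact layer counts): a PatchGAN-style discriminator with k stride-2 downsampling layers turns an N x N input into an (N / 2^k) x (N / 2^k) patch map, so halving the input resolution also halves the patch-label size that n_patch / patch_size must describe.

def patch_output_size(input_size, num_stride2_layers):
    """Spatial size of a PatchGAN output for a square input."""
    return input_size // (2 ** num_stride2_layers)

# With 4 stride-2 layers (a made-up count for illustration):
print(patch_output_size(512, 4))   # 32  -> labels shaped (batch, 32, 32, 1)
print(patch_output_size(256, 4))   # 16  -> labels shaped (batch, 16, 16, 1)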

dataloader.py

I think there are some problems in the dataloader.py file. As an example:
In the generate_real_data function, the function returns a y2 value, but nothing is defined as y2 inside it.
Could you please update this .py file to the latest version?
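A guess at what generate_real_data likely intends (hedged; this mirrors the y1 line quoted in the "Questions about the label" issue below and assumes patch_shape holds one patch size per discriminator scale):

import numpy as np

def generate_real_labels(random_samples, patch_shape):
    # One all -1 "real" patch-label array per discriminator scale (assumption).
    y1 = -np.ones((random_samples, patch_shape[0], patch_shape[0], 1))
    y2 = -np.ones((random_samples, patch_shape[1], patch_shape[1], 1))
    return y1, y2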

Questions about the label

I have noticed that you use -1*np.ones, in other words an all -1 patch, to represent the "real" label (1). You have marked this in your code in the function "generate_real_data":

# generate 'real' class labels (1)
y1 = -np.ones((random_samples, patch_shape[0], patch_shape[0], 1))

Why do you use -1 to represent "real"? Is it because the activation you use is tanh?
Since using tanh in a discriminator is rare (sigmoid seems more common), I couldn't understand using -1 patches to represent "real" but 1 patches to represent "fake".
Could you give a short explanation?
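For context, a small sketch contrasting the two label conventions the question compares (an illustration only, not the authors' stated reasoning): with a sigmoid output the patch targets live in [0, 1], while with a tanh output they live in [-1, 1], and which end of that range means "real" is purely a convention as long as it is used consistently in the loss.

import numpy as np

n, p = 2, 32                          # batch size and patch-map size (made-up numbers)

# Sigmoid-output PatchGAN: targets in [0, 1]
real_sigmoid = np.ones((n, p, p, 1))
fake_sigmoid = np.zeros((n, p, p, 1))

# Tanh-output PatchGAN: targets in [-1, 1]; here -1 marks "real",
# matching the generate_real_data snippet quoted above.
real_tanh = -np.ones((n, p, p, 1))
fake_tanh = np.ones((n, p, p, 1))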
