
sharifamit / rvgan

85 stars · 2 watchers · 20 forks · 249 KB

[MICCAI'21] [Tensorflow] Retinal Vessel Segmentation using a Novel Multi-scale Generative Adversarial Network

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%
keras tensorflow retinal-vessel-segmentation vessel-segmentation gan generative-adversarial-network generative-adversarial-networks conditional-gan medical-imaging medical-image-analysis

rvgan's Introduction

Deep learning researcher working on biomedical imaging

Selected Publications

I can be reached at

https://twitter.com/dopplerganger12 https://www.linkedin.com/in/sharif-k-b15004105/

rvgan's People

Contributors

sharifamit


rvgan's Issues

Question about loss decline

May I ask what value range your losses eventually stabilize in, including d1, d2, etc.?
The g_local loss has remained steady at about 3 when I train on the DRIVE dataset, from epoch 18 through epoch 200.
I wonder what the reason for that is.

Warning while training

Hi,
When I train the model with the newer TensorFlow 2.6 version, I get this warning message:
[screenshot: 2021-10-22 14-03-18]
But my training seems to go well; I don't know if this is a bug.

Question

When I run train.py, the loss becomes 'nan' after about 2 epochs. Did you have this problem when training? I want to know why it happens and how to solve it.
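For reference, a hedged sketch of generic GAN stabilisers that often tame early NaN losses — a smaller learning rate and gradient clipping on the optimiser. These are assumptions and common practice, not the authors' fix or the settings in train.py:

```python
from tensorflow.keras.optimizers import Adam

# Generic stabilisers for NaN losses in GAN training: lower the learning rate
# and clip gradient norms. Illustrative values, not the repo's configuration.
stable_opt = Adam(learning_rate=1e-4, beta_1=0.5, clipnorm=1.0)
# e.g. rvgan_model.compile(loss=..., loss_weights=..., optimizer=stable_opt)
```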

Are there pretrained model weights available?

Hi, great stuff here, really interesting approach!

Are there pre-trained model weights available somewhere? I have an application in mind that I would love to use them for, and I would be citing your paper.

length error!

Hi, I got this error running train.py on the DRIVE dataset in Colab:

Traceback (most recent call last):
  File "/content/drive/MyDrive/RVGAN-master/train.py", line 215, in <module>
    train(d_model1, d_model2, g_model_coarse, g_model_fine, rvgan_model, dataset, n_epochs=args.epochs, n_batch=args.batch_size, n_patch=[128,64], savedir=args.savedir)
  File "/content/drive/MyDrive/RVGAN-master/train.py", line 100, in train
    g_global_loss,_ = g_global_model.train_on_batch([X_realA_half,X_realB_half], [X_realC_half])

ValueError: The two structures don't have the same sequence length. Input structure has length 1, while shallow structure has length 2.

Would you help me to fix it?
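For context, in tf.keras the `y` argument of `train_on_batch` must mirror the model's output structure, so a "length 1 vs length 2" ValueError usually means the model has two outputs but only one target was supplied (this can surface after a TensorFlow version change). A hedged illustration of the structure check — the duplicated second target below is only a placeholder for whatever the model actually expects:

```python
# How many targets the call expects (one per model output):
print(len(g_global_model.outputs))

# Supply one target per output; duplicating X_realC_half here only
# illustrates the structure, not the correct second target for this model.
g_global_loss, _ = g_global_model.train_on_batch(
    [X_realA_half, X_realB_half],
    [X_realC_half, X_realC_half],
)
```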

pretrained model of chase can't be loaded

Hello, the pretrained model for CHASE can't be loaded. The error message is:
ValueError: Shapes (7, 7, 4, 128) and (64, 4, 7, 7) are incompatible.

In addition, could you provide the trained models?
I have tried many times, but I can't get an F1 score close to the one described in the paper.
So far, the best F1 score trained on DRIVE is 0.78, with Se of 0.74. The results on STARE are better (F1 = 0.8030, Se = 0.8191). The training process is time-consuming (I use a single 2080 Ti).
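A small debugging sketch (not from the repo) for comparing saved weight shapes against the freshly built model before calling load_weights; the filename below is a placeholder. A kernel shape like (64, 4, 7, 7) next to TensorFlow's (kh, kw, in, out) layout of (7, 7, 4, 128) often points to a weight file produced with a different Keras version or image data format, or to an architecture mismatch:

```python
import h5py

# Placeholder filename; substitute the actual CHASE weight file.
with h5py.File("chase_fine_generator_weights.h5", "r") as f:
    def show(name, obj):
        # Datasets carry a shape; groups do not.
        if hasattr(obj, "shape"):
            print(name, obj.shape)
    f.visititems(show)
```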

Killed error

Hi,

I got this error. I tried reducing the batch size, but it didn't work.

Screenshot from 2022-03-08 09-38-25

My system specs are shown below:
Screenshot from 2022-03-08 09-40-00

Thank you

need help!

Hello! Can you send me your generated dataset .npz file? I can't generate it using your code.
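For anyone stuck at the same point, a minimal sketch of how an .npz patch file can be produced with NumPy; the array names, file name, and shapes are assumptions, and the repo's own preprocessing may organise things differently:

```python
import numpy as np

# Placeholder arrays standing in for extracted image/mask patches.
img_patches = np.zeros((100, 128, 128, 3), dtype=np.float32)
mask_patches = np.zeros((100, 128, 128, 1), dtype=np.float32)

# Save and reload; the keys 'x' and 'y' are illustrative, not the repo's keys.
np.savez_compressed("DRIVE_patches.npz", x=img_patches, y=mask_patches)
data = np.load("DRIVE_patches.npz")
print(data["x"].shape, data["y"].shape)
```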

libtiff error

Hi,
When I run infer.py, an error occurs even though I have installed libtiff:
Traceback (most recent call last):
  File "infer.py", line 7, in <module>
    from libtiff import TIFF
  File "/home/miniconda3/envs/gwc_test/lib/python3.6/site-packages/libtiff/__init__.py", line 23, in <module>
    from .libtiff_ctypes import libtiff, TIFF, TIFF3D  # noqa: F401
  File "/home/miniconda3/envs/gwc_test/lib/python3.6/site-packages/libtiff/libtiff_ctypes.py", line 128, in <module>
    value = eval(value)
  File "<string>", line 1
    \

Looking forward to your reply.
Best wishes.
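This failure happens inside pylibtiff's ctypes wrapper at import time. One possible workaround (swapping the library, not a fix for the repo's code) is to read the TIFF with tifffile instead, assuming infer.py only needs the pixel array; the path below is a placeholder:

```python
import tifffile

# Placeholder path to a DRIVE test image; adjust to your data layout.
img = tifffile.imread("DRIVE/test/images/01_test.tif")
print(img.shape, img.dtype)
```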

local_plot predictions always blank

I am trying to train on the DRIVE data with RVGAN-tf-2.6. The issue I'm finding is that the predictions in local_plot are always blank, even after 53 epochs of training, which took over 20 hours:

local_plot_000053.png

The global_plot, though, is visible:

global_plot_000053.png

I am trying to reproduce the eval.py IoU values, but the predictions are always blank, and it looks like this is because the predictions from g_local_model are always blank while those from g_global_model are not.
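One thing worth ruling out (an assumption about the generator's final activation, not a confirmed cause): if g_local_model ends in a tanh, its raw output lies in [-1, 1], and saving that directly can look blank. A hedged rescaling sketch, where x_patch is a placeholder input batch:

```python
import numpy as np
from matplotlib import pyplot as plt

pred = g_local_model.predict(x_patch)        # x_patch: placeholder input batch
img = (np.squeeze(pred[0]) + 1.0) / 2.0      # map [-1, 1] -> [0, 1] before saving
plt.imsave("local_pred_rescaled.png", img, cmap="gray")
```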

Inference time

Hi,
I noticed that in your paper the inference time per image is 0.025 seconds; is that for a patch or for the whole image?

How to determine the best model

Hi! Thank you for your work!
I'm a little confused: you said we should loop over all 100 saved weights to find the best-performing coarse and fine generator pair and train again. Which data do we use to determine the best model?
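One way to read that instruction (a sketch under assumptions, not the authors' script): loop over the saved weight files, evaluate each generator on a held-out validation split, and keep the checkpoint with the best F1 or IoU. The checkpoint naming pattern and val_x/val_y below are placeholders; g_model_fine is the fine generator from train.py:

```python
import glob
import numpy as np

def f1_score(pred, gt, thresh=0.5):
    """Pixel-wise F1 between a thresholded prediction and a binary mask."""
    p = (pred >= thresh).astype(np.uint8).ravel()
    g = (gt >= 0.5).astype(np.uint8).ravel()
    tp = float(np.sum(p * g))
    return 2.0 * tp / (p.sum() + g.sum() + 1e-8)

best_path, best_f1 = None, -1.0
for path in sorted(glob.glob("savedir/g_model_fine_*.h5")):   # hypothetical naming
    g_model_fine.load_weights(path)
    preds = g_model_fine.predict(val_x)                        # val_x/val_y: placeholders
    score = np.mean([f1_score(p, y) for p, y in zip(preds, val_y)])
    if score > best_f1:
        best_path, best_f1 = path, score
print("best fine-generator checkpoint:", best_path, best_f1)
```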

Some questions about the paper and code

I have read your paper and run your code on my computer. I found some issues.

  1. The SFA block written in the code is different from the one described in the paper: it is missing the block input being added to the terminal output (a sketch of that residual pattern follows this issue).
  2. The Discriminator Residual Block described in the paper is not used in your code; in the code, the Discriminator Residual Block just outputs two times the input.
    And I have a question:
    Is the network in the code, without modification, identical to the one in the paper?
    Looking forward to your reply!
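To make the first point concrete, here is a generic Keras residual pattern in which the block input is added to the terminal output. The layer choices are illustrative only and are not claimed to match either the paper's SFA block or the repo's implementation:

```python
from tensorflow.keras import layers

def block_with_terminal_residual(x, filters=64):
    """Generic residual pattern: the block input is added to the final output.
    Assumes x already has `filters` channels so the Add() shapes match."""
    y = layers.SeparableConv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU(0.2)(y)
    y = layers.SeparableConv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    out = layers.Add()([x, y])        # the input fused into the terminal output
    return layers.LeakyReLU(0.2)(out)
```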

Hello!

I have met a problem. I don't understand how you deal with the small amount of data, and how you get the numbers 4320 for STARE, 15120 for CHASE-DB1, and 4200 for DRIVE. Can you explain it, or send me your preprocessed datasets? Thanks!!!
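For reference, such counts typically come from overlapping patch extraction: the number of sliding-window positions per image times the number of training images (plus any augmented copies). The sketch below is a generic formula with assumed patch size, stride, and image count; with these particular values it happens to give 4200 for 20 DRIVE-sized images, but the repo's actual preprocessing may differ:

```python
def patches_per_image(h, w, patch=128, stride=32):
    """Number of sliding-window positions for a patch of size `patch`
    moved with step `stride` over an h-by-w image (no padding)."""
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    return rows * cols

# Assumed values: DRIVE images are 584x565 and the training set has 20 images.
print(patches_per_image(584, 565, patch=128, stride=32) * 20)   # -> 4200
```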

datasets

Could you provide the dataset in .npz format, or provide the preprocessing code?

Hi~

I tried to reproduce your work, but I met a bug.
[error screenshots]
My TensorFlow version is 2.7.4 and Keras is 2.7.0. I don't know what I should do.

the generation of the final output image

The paper describes the model and training procedure very well and in detail, but it seems to omit how the final output image is generated. How do you use the results from Gf and Gc to predict an image?
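For readers with the same question, one common way to turn patch-wise generator outputs into a full-size prediction (a sketch of standard patch stitching, not a confirmed description of the authors' procedure) is to tile the image, run the fine generator on each patch, and average any overlapping regions:

```python
import numpy as np

def stitch_patches(patch_preds, image_hw, patch=128, stride=128):
    """Reassemble per-patch predictions (in row-major tiling order) into a
    full image, averaging wherever patches overlap."""
    H, W = image_hw
    out = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)
    idx = 0
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            out[y:y + patch, x:x + patch] += np.squeeze(patch_preds[idx])
            count[y:y + patch, x:x + patch] += 1
            idx += 1
    return out / np.maximum(count, 1.0)
```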
