jcs's Introduction

Hi there 👋

jcs's People

Contributors

yuhuan-wu


jcs's Issues

Implementation detail of "Feature combination" module

Hello Mr. @yuhuan-wu,

I recently read your paper and am particularly interested in the "Feature combination" module. In the paper, you state: "In merging the features of stage k, we have two feature maps A^k, M_E^k for the merge. We first resize the smaller one A^k, making it the same size as the larger one M_E^k, and concatenate them together. Then, we apply a simple 1 × 1 convolution layer for the feature channel reduction, making the output feature maps the same number of channels as M_E^k. Such 1 × 1 convolution layer is followed by a SE block with a reduction rate of 4. At last we use a 3 × 3 convolution layer of the same number of input and output channels as the transition layer."

I thoroughly examined the code and I found that FuseNet was responsible for this. The implementation is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

# ConvBNReLU (a 3 x 3 convolution followed by BatchNorm and ReLU) is defined elsewhere in the repository.

class FuseNet(nn.Module):
    def __init__(self, c1=[1, 2, 3, 4, 5], c2=[1, 2, 3, 4, 5], out_channels=[1, 2, 3, 4, 5]):
        super(FuseNet, self).__init__()
        self.cat_modules = nn.ModuleList()
        self.se_modules = nn.ModuleList()
        self.fuse_modules = nn.ModuleList()
        for i in range(len(c1)):
            # reduces the concatenated channels to out_channels[i]
            self.cat_modules.append(ConvBNReLU(c1[i] + c2[i], out_channels[i]))
            # despite the name, this is another ConvBNReLU, not an SE block
            self.se_modules.append(ConvBNReLU(out_channels[i], out_channels[i]))
            self.fuse_modules.append(ConvBNReLU(out_channels[i], out_channels[i]))

    def forward(self, x1, x2):
        x_new = []
        for i in range(5):
            # resize x1[i] to the spatial size of x2[i], then concatenate along the channel dimension
            x1[i] = F.interpolate(x1[i], x2[i].shape[2:], mode='bilinear', align_corners=False)
            m = self.cat_modules[i](torch.cat([x1[i], x2[i]], dim=1))
            m = self.se_modules[i](m)
            m = self.fuse_modules[i](m)
            x_new.append(m)
        return x_new[0], x_new[1], x_new[2], x_new[3], x_new[4]
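
For context, this is how I exercised FuseNet to check its behaviour. The channel counts and input sizes below are made up for illustration and are not the repository's defaults, and it assumes the repository's ConvBNReLU is in scope:

import torch

# Hypothetical channel counts; not taken from the repository's configuration.
chs = [64, 128, 256, 512, 512]
fuse = FuseNet(c1=chs, c2=chs, out_channels=chs)

# x1 plays the role of the smaller maps A^k, x2 the larger maps M_E^k.
x1 = [torch.randn(1, c, 8, 8) for c in chs]
x2 = [torch.randn(1, c, 16, 16) for c in chs]

outs = fuse(x1, x2)
for o in outs:
    print(o.shape)  # each output has the spatial size of the corresponding x2[i] and out_channels[i] channels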

These two lines correspond to "We first resize the smaller one A^k, making it the same size as the larger one M_E^k, and concatenate them together.":

x1[i] = F.interpolate(x1[i], x2[i].shape[2:], mode='bilinear', align_corners=False)
m = self.cat_modules[i](torch.cat([x1[i], x2[i]], dim=1))

However, after this operation, you did not use a 1 × 1 convolution as described in the paper; instead, you used a regular 3 × 3 convolution, as defined in class ConvBNReLU. In addition, self.se_modules does not contain an SE block. Could you please elaborate on your implementation?
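
For clarity, this is roughly what I expected the merge step to look like based on the quoted description. It is a hypothetical sketch of my reading, not code from your repository; the SE block here is the standard Squeeze-and-Excitation design with a reduction rate of 4:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    # Standard Squeeze-and-Excitation block with a configurable reduction rate.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)  # squeeze: global average pooling
        w = self.fc(w).view(x.size(0), -1, 1, 1)    # excitation: per-channel weights
        return x * w                                # re-scale the input features

class MergeBlockAsDescribed(nn.Module):
    # Merge step for one stage k, as I read the paper: resize + concat, 1 x 1 conv, SE block, 3 x 3 transition conv.
    def __init__(self, a_channels, m_channels):
        super().__init__()
        self.reduce = nn.Conv2d(a_channels + m_channels, m_channels, kernel_size=1)    # channel reduction to match M_E^k
        self.se = SEBlock(m_channels, reduction=4)
        self.transition = nn.Conv2d(m_channels, m_channels, kernel_size=3, padding=1)  # same in/out channels

    def forward(self, a_k, m_k):
        a_k = F.interpolate(a_k, m_k.shape[2:], mode='bilinear', align_corners=False)
        x = torch.cat([a_k, m_k], dim=1)
        x = self.reduce(x)
        x = self.se(x)
        return self.transition(x)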

Thank you.

Best regards,
Louis

Why do you perform both AM and Segmentation?

In your paper, why do you perform both activation mapping (AM) and segmentation? What is the use of the activation map if the segmented lesions can be obtained directly? Thank you.

Training process

Hi, thanks for sharing the JCS model. How can I access the training code?

Training issue

Thank you for providing the code; you did great work. However, I am still a bit confused after reading the README, as it only describes the training procedure for segmentation. In the code, are both the classification branch and the segmentation branch trained with train_single.py? train_single.py appears to handle only segmentation training, so how should the classification branch be trained if I use my own dataset?
