
class-balanced-loss-pytorch's People

Contributors

vandit15

class-balanced-loss-pytorch's Issues

Is this the same as applying pos_weight?

Instead of calculating a weight for each batch, what about applying per-class weights via the pos_weight argument, as in torch.nn.BCEWithLogitsLoss(pos_weight=weights)?

Simply, https://github.com/vandit15/Class-balanced-loss-pytorch/blob/master/class_balanced_loss.py#L71-L82

Are those lines of code the same as

effective_num = 1.0 - np.power(beta, samples_per_cls)
weights = (1.0 - beta) / np.array(effective_num)
weights = weights / np.sum(weights) * no_of_classes
# pos_weight belongs to BCEWithLogitsLoss (not BCELoss) and must be a tensor
loss = torch.nn.BCEWithLogitsLoss(reduction='mean',
                                  pos_weight=torch.tensor(weights).float())

this code?
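
For comparison, a minimal runnable sketch (toy values assumed) suggests the two are not numerically identical: pos_weight scales only the positive term of the binary cross-entropy, while the repository multiplies the entire per-sample loss by the class weight.

import numpy as np
import torch

# Hypothetical toy setup mirroring the snippet above.
beta, no_of_classes = 0.9999, 5
samples_per_cls = [2, 3, 1, 2, 2]

effective_num = 1.0 - np.power(beta, samples_per_cls)
weights = (1.0 - beta) / effective_num
weights = weights / np.sum(weights) * no_of_classes
w = torch.tensor(weights, dtype=torch.float32)

logits = torch.randn(4, no_of_classes)
targets = torch.randint(0, 2, (4, no_of_classes)).float()

# pos_weight scales only the positive (target == 1) term of the loss ...
loss_pos = torch.nn.BCEWithLogitsLoss(pos_weight=w)(logits, targets)
# ... whereas `weight` scales every element of the loss, which is closer
# to what class_balanced_loss.py does.
loss_w = torch.nn.functional.binary_cross_entropy_with_logits(
    logits, targets, weight=w.expand_as(logits))
print(loss_pos.item(), loss_w.item())  # generally not equal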

focal_loss with softmax

Has someone implemented focal_loss with softmax correctly? Could you recommend a link? Thank you very much.
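
In case it helps, here is a minimal sketch of a softmax (multi-class) focal loss; the function name and default gamma are assumptions, not taken from this repository.

import torch
import torch.nn.functional as F

def softmax_focal_loss(logits, labels, gamma=2.0):
    # logits: (N, C) raw scores; labels: (N,) integer class indices.
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    # Focal modulation (1 - p_t)**gamma down-weights easy examples.
    return (-((1.0 - pt) ** gamma) * log_pt).mean()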

What is samples_per_cls?

Should I compute samples_per_cls over the whole dataset or per batch? If a class count in samples_per_cls is 0 for a batch, the loss becomes NaN.
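
Counting once over the whole training set avoids the zero-count problem: with a zero count, effective_num = 1 - beta**0 = 0, so the class weight (1 - beta) / effective_num is infinite and the loss turns NaN. A minimal sketch (helper name assumed):

import numpy as np

def count_samples_per_cls(all_labels, no_of_classes):
    # Count over the full training set, not per batch.
    counts = np.bincount(all_labels, minlength=no_of_classes)
    # Clamp classes absent from the data to avoid division by zero
    # in (1 - beta) / (1 - beta**n).
    return np.maximum(counts, 1)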

Something about samples_per_cls

Hi, thanks for sharing your code!

I am trying to understand your implementation and have something I want to discuss.

In your main function you provide a toy example, and I noticed that you set samples_per_cls to [2, 3, 1, 2, 2], which indicates that label 0 has 2 samples; am I right?
But the labels are random, so should I count the samples of each class every time?

I would appreciate it if you could answer my question as quickly as possible. Have a good day!

basic information about CB loss

Thanks for sharing the code. I have modified it to compute multi-class classification, as follows, but it always raises errors. Have you tried this?

pred = logits.log_softmax(dim=1)
# F.nll_loss expects log-probabilities; F.cross_entropy would apply
# log_softmax a second time, and the keyword is `weight`, not `weights`.
cb_loss = F.nll_loss(input=pred, target=labels, weight=weights)
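
For reference, a self-contained sketch of the corrected multi-class call (the function name and beta value are assumptions, mirroring the weighting in class_balanced_loss.py):

import numpy as np
import torch
import torch.nn.functional as F

def cb_cross_entropy(logits, labels, samples_per_cls, beta=0.9999):
    no_of_classes = logits.shape[1]
    effective_num = 1.0 - np.power(beta, samples_per_cls)
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * no_of_classes
    weight = torch.tensor(weights, dtype=logits.dtype, device=logits.device)
    # F.cross_entropy applies log_softmax internally, so pass raw logits.
    return F.cross_entropy(logits, labels, weight=weight)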

Thanks.

Object detection and Semantic segmentation

@vandit15 Thanks for open-sourcing the code. Is it possible to use CB loss in an object detection or segmentation architecture? Did you experiment with any standard architecture such as YOLO, RetinaNet, or DeepLab? I am planning to use it in our custom object detection architecture.

why modulator?

Hi, I'm interested in your work! I have a question: why does your implementation of focal loss use "modulator = torch.exp(-gamma * labels * logits - gamma * torch.log(1 + torch.exp(-1.0 * logits)))"?
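
For context, this appears to be the focal term (1 - p_t)**gamma computed in log space for numerical stability: with one-hot labels y and logits z, log(1 - p_t) = -y*z - log(1 + exp(-z)). A small sketch checking that identity (toy shapes assumed):

import torch

z = torch.randn(4, 5)                     # logits
y = torch.randint(0, 2, (4, 5)).float()   # one-hot-style labels in {0, 1}
gamma = 2.0

p = torch.sigmoid(z)
pt = y * p + (1 - y) * (1 - p)            # probability of the true label
direct = (1 - pt) ** gamma
stable = torch.exp(-gamma * y * z - gamma * torch.log1p(torch.exp(-z)))
print(torch.allclose(direct, stable, atol=1e-5))  # True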

Is this implementation correct?

Hi, I wonder if this implementation is correct?

When I run the code with softmax, I get weights like this:

tensor([[0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [1.7646, 1.7646, 1.7646, 1.7646, 1.7646],
        [1.7646, 1.7646, 1.7646, 1.7646, 1.7646],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824],
        [0.8824, 0.8824, 0.8824, 0.8824, 0.8824]])

where each sample is given the same weight for every class.

This is weird, right? According to the original paper, we should have a different weight for each class (column), right?
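
For what it's worth, the constant rows seem to come from the broadcasting in class_balanced_loss.py: each sample's scalar class weight is tiled across all columns, so the class information determines which value a whole row gets. A small sketch reconstructing the printed tensor (the labels below are assumed for illustration):

import torch
import torch.nn.functional as F

weights = torch.tensor([0.8824, 0.8824, 0.8824, 0.8824, 1.7646])  # per class
labels = torch.tensor([0, 1, 2, 3, 4, 4, 0, 1, 2, 3])             # assumed
labels_one_hot = F.one_hot(labels, 5).float()

w = weights.unsqueeze(0) * labels_one_hot  # pick each sample's class weight
w = w.sum(1, keepdim=True)                 # one scalar weight per sample
w = w.repeat(1, 5)                         # tiled across the class columns
print(w)                                   # rows of 0.8824 or 1.7646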

Line 87 typo

original:
cb_loss = F.binary_cross_entropy_with_logits(input=logits, target=labels_one_hot, weights=weights)

update:
cb_loss = F.binary_cross_entropy_with_logits(input=logits, target=labels_one_hot, weight=weights)

  • F.binary_cross_entropy_with_logits takes the keyword 'weight', not 'weights'

Thanks for the resource, it worked great!!
