
alfs's People

Contributors

hitcszx


alfs's Issues

Request for citation of our two papers, as they are highly relevant.

Dear authors,

How are you?

I read your great work on asymmetric loss functions: Asymmetric Loss Functions for Learning with Noisy Labels, ICML 2021.

However, we have two earlier pieces of work on this same aspect, which I believe to be highly relevant.

(1) In Derivative Manipulation for General Example Weighting (https://arxiv.org/pdf/1905.11233.pdf), we mentioned:
[screenshot of the quoted passage omitted]

(2) In IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude’s Variance Matters, we mentioned:
[screenshot of the quoted passage omitted]

Both papers have been included in my PhD thesis: Example Weighting for Deep Representation Learning, Xinshao Wang, 2020.

Therefore, could I kindly ask you to cite these two papers? If you could also update your arXiv version to include them, I would appreciate it a lot.

Thanks very much.
Kind regards,

Adjustment of hyper-parameters

Hi,
Thanks for this great implementation.
The proposed ALFs have several hyper-parameters, and the paper reports the test accuracy corresponding to each hyper-parameter over a range of values. However, there is no information on how the hyper-parameters are adjusted during training, so I wonder how you chose their values. Is there a validation set with noisy labels? If so, how do you evaluate performance on such a noisy validation set? A sketch of the kind of protocol I have in mind follows below. Looking forward to your reply.
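For concreteness, this is the kind of protocol I am imagining; it is purely my assumption, not something from the paper, and `train_set` and `train_and_eval` are hypothetical placeholders rather than parts of this repository:

```python
import torch
from torch.utils.data import random_split

# Hypothetical tuning protocol (my assumption, not the authors' method):
# hold out part of the noisy training data as a noisy validation split
# and grid-search a loss hyper-parameter on it.
val_size = int(0.1 * len(train_set))            # train_set: placeholder dataset
train_subset, noisy_val = random_split(
    train_set,
    [len(train_set) - val_size, val_size],
    generator=torch.Generator().manual_seed(0),  # reproducible split
)

best_param, best_acc = None, -1.0
for p in [0.5, 1.0, 2.0, 5.0]:                  # candidate hyper-parameter values
    # train_and_eval: hypothetical helper that trains with the given loss
    # hyper-parameter and returns accuracy on the held-out noisy split.
    acc = train_and_eval(train_subset, noisy_val, loss_param=p)
    if acc > best_acc:
        best_param, best_acc = p, acc
```

Is a protocol like this what you used, or something else?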

Best Regards,

Implementation problem in the NLNL loss

Hi,

Thanks for this great implementation. When I ran the code, I found that the implementation of the NLNL loss may have a problem that causes the loss to become NaN. It seems to be caused by this line:

ALFs/losses.py, line 233 at commit f9c5bf3:

```python
labels = labels * 0 - 100
```

This line sets every label to -100. Since -100 is PyTorch's default ignore_index for cross-entropy, every sample would then be masked out of the loss, which would explain the NaN. Would you mind checking this issue?
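A minimal sketch of the failure mechanism, for reference (my reproduction, assuming the loss is reduced with PyTorch's default ignore_index of -100; the tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                # 4 samples, 10 classes
labels = torch.zeros(4, dtype=torch.long)  # some integer class labels
labels = labels * 0 - 100                  # the line in question: every label becomes -100

# -100 is PyTorch's default ignore_index, so every sample is masked out;
# the 'mean' reduction then divides 0 by 0 and the loss comes out NaN.
loss = F.cross_entropy(logits, labels)
print(loss)  # tensor(nan)
```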

Best Regards,
Hongxin
