bnm's Issues

Supplementary materials

Hi, I would like to know where I can find the supplementary material for the article. I would like to understand how the theory in your paper is derived. Thanks!

About Rank

Thank you for your excellent work.
I have a question: could we replace the nuclear norm by directly taking the rank of the matrix, and then pushing its rank as close to the number of classes as possible?
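
For context, the nuclear norm is the usual convex surrogate for the rank: the rank is an integer count of non-zero singular values, so it is piecewise constant and provides no usable gradient, whereas the nuclear norm (the sum of singular values) is differentiable. A minimal sketch, assuming a B × C softmax prediction matrix (the shapes here are illustrative):

```python
import torch

# Hypothetical prediction matrix: B softmax outputs over C classes.
B, C = 36, 31
A = torch.softmax(torch.randn(B, C), dim=1)

# Rank: an integer count of non-zero singular values. It is piecewise constant,
# so its gradient is zero almost everywhere and SGD cannot optimize it directly.
rank = torch.linalg.matrix_rank(A)

# Nuclear norm: the sum of singular values, a convex surrogate of the rank that
# does provide a gradient with respect to every entry of A.
A = A.clone().requires_grad_(True)
nuclear = torch.linalg.svdvals(A).sum()
nuclear.backward()

print(rank.item(), nuclear.item(), A.grad.abs().mean().item())
```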

In BNM, loss value is negative

Hi, first of all, thanks for the code and your paper; it's really excellent work.
During debugging, I found that the loss value is negative. Is that right?
I debugged BNM in DA on the Office-31 dataset, with Amazon as the source and DSLR as the target.
The "transfer loss" reaches about -0.8 and the classifier loss about 0.02 after 2000 iterations.

I also found that if we simply train with the classifier loss alone, the target accuracy reaches 100% around iteration 1800, which suggests that dropping the BNM loss does no harm to transferring the network from source to target.

Am I missing something?
By the way, we are using PyTorch 1.9.0.

We look forward to your reply. Thanks again.
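
Regarding the sign: BNM maximizes the nuclear norm of the batch prediction matrix, and (as quoted in a later issue in this thread) the code computes transfer_loss = -torch.mean(s_tgt). Since singular values are non-negative, the minimized loss is negative by construction. A minimal sketch of that structure, not the repository's exact code:

```python
import torch

def bnm_loss(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of the BNM transfer loss: the negated nuclear norm (here the mean
    of singular values) of the softmax prediction matrix. Singular values are
    non-negative, so the returned loss is always <= 0."""
    probs = torch.softmax(logits, dim=1)      # B x C prediction matrix
    s = torch.linalg.svdvals(probs)           # singular values, all >= 0
    return -torch.mean(s)                     # negative by construction

logits = torch.randn(36, 31)                  # e.g. Office-31 has 31 classes
print(bnm_loss(logits))                       # prints a value <= 0
```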

Computing the nuclear norm

In the code, the transfer loss is computed as transfer_loss = -torch.mean(s_tgt), i.e. using the mean rather than the sum. Should this be the sum here, or the mean?
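
For what it's worth, the two choices differ only by a constant factor, namely the number of singular values returned (min(B, C)), so the difference can be absorbed into the weight placed on the transfer loss. A quick check under that assumption:

```python
import torch

B, C = 36, 31
probs = torch.softmax(torch.randn(B, C), dim=1)
s = torch.linalg.svdvals(probs)          # min(B, C) singular values

loss_mean = -torch.mean(s)
loss_sum = -torch.sum(s)

# mean and sum differ only by the constant factor len(s) = min(B, C),
# which can be folded into the trade-off weight on the transfer loss.
assert torch.allclose(loss_sum, loss_mean * s.numel())
```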

use nuclear-norm instead of F-norm

Hi @cuishuhao, thanks for the code release.
I have some confusion about a detail of the paper.
According to inequality (5) in the original paper, the two norms bound each other. Does that mean optimizing the F-norm is equivalent to optimizing the nuclear norm?
Thanks
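
As a point of reference, the standard two-sided relation is ||A||_F <= ||A||_* <= sqrt(rank(A)) · ||A||_F, and rank(A) <= min(B, C), so the two norms bound each other only up to a factor of sqrt(min(B, C)); whether that gap makes the two objectives interchangeable in practice is exactly the question for the author. A quick numerical check of the bound (shapes are illustrative):

```python
import torch

# Check the standard bound ||A||_F <= ||A||_* <= sqrt(min(B, C)) * ||A||_F
# (since rank(A) <= min(B, C)) on a random softmax prediction matrix.
B, C = 36, 31
A = torch.softmax(torch.randn(B, C), dim=1)

fro = A.pow(2).sum().sqrt()                  # Frobenius norm
nuc = torch.linalg.svdvals(A).sum()          # nuclear norm

assert fro <= nuc
assert nuc <= (min(B, C) ** 0.5) * fro + 1e-6
```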

where could find the Supplementary?

Thank you very much for the excellent work and for generously sharing the code!

In Section 3.1 you mention that, as shown in the Supplementary, the maximum of the F-norm and the minimum of entropy are achieved at the same value. Where can I find the Supplementary? I couldn't find it on the arXiv page.

Sorry for the bother, and thanks again.

cannot reproduce the results in the paper

Hi @cuishuhao, thanks for your code implementation. When I tried to reproduce the results in the paper on the Office-31 dataset, specifically the DSLR → Amazon task, the final accuracy was around 69% (CDAN+BNM). Could you try to reproduce the results with this version of the code and post them here? Did you change any hyperparameters for this transfer scenario?

Thanks,

convergence problem with torch.svd

Hi,
I'm impressed by your paper and am trying to apply the BNM loss to a domain adaptation problem. However, the torch.svd() function frequently fails to converge; my batch size and vector size are 128 and 1000. How did you work around the torch.svd() problem?
Best wishes!
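
A workaround that is commonly used for SVD convergence failures (not necessarily the authors' own fix) is to catch the error and retry after perturbing the matrix with a small amount of noise, or to compute only the singular values. A sketch under that assumption:

```python
import torch

def robust_svdvals(mat: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Return singular values, retrying once with a small noise perturbation if
    the SVD fails to converge (a common workaround, not the official fix)."""
    try:
        return torch.linalg.svdvals(mat)
    except RuntimeError:
        noise = eps * mat.detach().abs().mean() * torch.randn_like(mat)
        return torch.linalg.svdvals(mat + noise)

probs = torch.softmax(torch.randn(128, 1000), dim=1)   # the 128 x 1000 case above
loss = -torch.mean(robust_svdvals(probs))
```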

Is it right to use `torch.mean(s_tgt)` when C < B?

Hi, I am studying your approach through your implementation. In the paper you compute the BNM loss with Equation 12, where the divisor is the batch size B. But in BNM/DA/BNM/train_image.py L#164 this is done with torch.mean(). If the number of classes C is smaller than the batch size B, the SVD produces an s_tgt of length C instead of B, so the mean divides by C. Wouldn't that be inconsistent with the original equation? Why not divide explicitly by the batch size?
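
If one wants to match Equation 12 literally, the division by the batch size can be made explicit instead of relying on torch.mean over the min(B, C) singular values; a sketch under that reading (only the author can confirm which normalization was intended):

```python
import torch

def bnm_loss_div_by_batch(logits: torch.Tensor) -> torch.Tensor:
    """Sketch of the BNM loss with the divisor written explicitly as the batch
    size B, as in Equation 12, instead of torch.mean over min(B, C) values."""
    B = logits.size(0)
    probs = torch.softmax(logits, dim=1)
    s = torch.linalg.svdvals(probs)       # length min(B, C)
    return -s.sum() / B                   # divide by B, not by len(s)
```

When C < B, the two versions differ only by the constant factor C / B.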

Tensor

Hello, if I have a tensor of size [8, 3, 512, 512], how can I compute BNM on it?
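
One way this is commonly handled for dense 4D outputs (assuming the second dimension holds class scores rather than raw image channels, to which BNM would not apply directly) is to flatten the batch and spatial dimensions into rows of the prediction matrix and treat the channels as classes, optionally subsampling rows to keep the SVD cheap. A sketch under those assumptions:

```python
import torch

def bnm_loss_4d(logits: torch.Tensor, max_rows: int = 4096) -> torch.Tensor:
    """Sketch of BNM on a [B, C, H, W] tensor: channels are treated as classes
    and every (batch, pixel) location becomes one row of the prediction matrix.
    Rows are randomly subsampled so the SVD stays tractable for large H x W."""
    B, C, H, W = logits.shape
    probs = torch.softmax(logits, dim=1)                # per-pixel class probabilities
    mat = probs.permute(0, 2, 3, 1).reshape(-1, C)      # (B*H*W) x C prediction matrix
    if mat.size(0) > max_rows:
        idx = torch.randperm(mat.size(0), device=mat.device)[:max_rows]
        mat = mat[idx]
    return -torch.mean(torch.linalg.svdvals(mat))

x = torch.randn(8, 3, 512, 512)                         # the shape in the question
loss = bnm_loss_4d(x)
```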

diversity ratio code

Hello, the diversity ratio is measured as the mean number of predicted categories divided by the mean number of ground-truth categories. Is there code to calculate the diversity ratio? Thanks.
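
Based only on that description (the official script may differ), a minimal sketch would count the distinct predicted classes and the distinct ground-truth classes per batch, average each over batches, and take the ratio; the function and argument names here are made up:

```python
import torch

def diversity_ratio(logits_per_batch, labels_per_batch) -> float:
    """Sketch: mean number of distinct predicted categories per batch divided by
    the mean number of distinct ground-truth categories per batch."""
    pred_counts, gt_counts = [], []
    for logits, labels in zip(logits_per_batch, labels_per_batch):
        preds = logits.argmax(dim=1)
        pred_counts.append(preds.unique().numel())
        gt_counts.append(labels.unique().numel())
    return (sum(pred_counts) / len(pred_counts)) / (sum(gt_counts) / len(gt_counts))
```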

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.