albert0147 / aad_sfda
Code for our NeurIPS 2022 (spotlight) paper 'Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation'
Hi, I really appreciate you sharing your code here. I have a question for you: where in your code is it reflected that the value of beta is automatically selected using the SND method? Thank you!
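For reference, my understanding of SND (from "Tune it the Right Way", Saito et al., ICCV 2021) is roughly the sketch below. This is my own sketch with my own names, not code from this repo: run adaptation with several betas and keep the one with the highest SND on the unlabeled target data.

import torch
import torch.nn.functional as F

# Sketch of SND as I understand it (not code from this repo): average the
# entropy of each target sample's soft neighborhood distribution.
@torch.no_grad()
def soft_neighborhood_density(feats, tau=0.05):
    feats = F.normalize(feats, dim=1)    # [N, D] target features, L2-normalized
    sim = feats @ feats.T / tau          # pairwise similarity with temperature
    p = F.softmax(sim, dim=1)            # soft neighborhood distribution per row
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()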
Hi Shiqi,
Thank you very much for providing the code, and congratulations on your paper's acceptance!
After reading the loss function part, I could not find how the code reflects Eq. 5 of the paper. Please correct me if I have missed anything:
First, the first term of Eq. 5 is the consistency between local neighbours. I understand you use a memory bank to save the softmax outputs and then retrieve the K nearest neighbours.
Line 316 in 9a4c8bf
However, I cannot find the connection between the KLD loss and the first term of Eq. 5:
Line 320 in 9a4c8bf
This line of code is not equivalent to the first term of Eq. 5. I am confused by this; please help me resolve it.
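To pin down my question: a literal implementation of the first term, as I read Eq. 5, would be a plain dot product between each prediction and the banked predictions of its K neighbours, roughly like this sketch (score_near is my assumed name for the neighbour predictions, not necessarily the repo's variable):

import torch

# Sketch only: the first (attraction) term of Eq. 5 taken literally.
# softmax_out: [B, C] batch predictions; score_near: [B, K, C] banked
# predictions of the K nearest neighbours (assumed names).
def attraction_term(softmax_out, score_near):
    dot = torch.bmm(score_near, softmax_out.unsqueeze(-1)).squeeze(-1)  # [B, K]
    return -dot.sum(dim=1).mean()  # negated, since the similarity is maximized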
Second, the second term is meant to disperse the predictions of potentially dissimilar features. However, I cannot see this reflected in the code:
Line 330 in 9a4c8bf
Given a test sample, this line of code treats the remaining samples in the batch as the background set. This also does not match the definition of the background set in the paper.
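Again, to make my reading concrete: the second term, as defined in the paper, would sum p_i · p_m over an explicit background set B_i, something like this sketch (bg_mask is my assumed [B, B] indicator of B_i, not the repo's code):

import torch

# Sketch only: the second (dispersion) term of Eq. 5 with an explicit
# background set. bg_mask[i, m] = 1 iff sample m is in B_i, i.e. neither
# sample i itself nor one of its K nearest neighbours (assumed name).
def dispersion_term(softmax_out, bg_mask):
    dot = softmax_out @ softmax_out.T   # pairwise p_i · p_m, [B, B]
    return (dot * bg_mask).sum(-1).mean()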
This work claims to "provide a surprisingly simple solution for source-free domain adaptation, which is an upper bound of the proposed clustering objective". Therefore, I would expect the code to correspond to the equation.
Please help me address this concern, and correct me if I have misunderstood anything.
Thanks to the author for contributing the code! Can you provide your pre-trained source models?
Could you provide the DomainNet experiment code for the ODA setting?
Hi there!
I notice that in the test dataloader, the shuffle parameter of the DataLoader init is set to False in Visda17, whereas in Office-Home and Office-31 it is set to True. Is this setting on purpose?
i.e. in line 120 of tar_adaptation.py, line 154 of office-home/train_tar.py, and line 197 of office-home/office31_tar.py
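To make the difference concrete, this is a minimal sketch of the two settings I mean (the dataset and batch size below are placeholders, not the repo's values):

import torch
from torch.utils.data import DataLoader, TensorDataset

test_set = TensorDataset(torch.randn(8, 3, 224, 224))  # placeholder dataset

# Visda17 (tar_adaptation.py): the test loader is built with shuffle=False
visda_test_loader = DataLoader(test_set, batch_size=4, shuffle=False)

# Office-Home / Office-31 (train_tar.py, office31_tar.py): shuffle=True
office_test_loader = DataLoader(test_set, batch_size=4, shuffle=True)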
Dear author,
I'm an undergraduate student who is quite interested in SFDA, and I have been following your work from G-SFDA and NRC to AaD. I really appreciate your work and the wonderful performance it achieves.
Recently, I have been trying to understand AaD. However, I have become a little confused about some implementation details while combining your paper with the source code.
From my understanding, the B_i in the div term contains all the other items in a mini-batch except those in C_i. In other words, items that are among the k nearest neighbors should be excluded from B_i, as presented in the paper. However, in your code, copied below:
mask = torch.ones((inputs_target.shape[0], inputs_target.shape[0]))
diag_num = torch.diag(mask)
mask_diag = torch.diag_embed(diag_num)
mask = mask - mask_diag
if args.noGRAD:
    copy = softmax_out.T.detach().clone()
else:
    copy = softmax_out.T  # .detach().clone() #
dot_neg = softmax_out @ copy # batch x batch
dot_neg = (dot_neg * mask.cuda()).sum(-1) # batch
neg_pred = torch.mean(dot_neg)
loss += neg_pred * alpha
it seems that only the diagonal entries in mask are set to 0, not those of the k nearest neighbors.
I suppose I must have some misunderstanding, so I created this issue hoping to get your answer. I would appreciate it if you could answer my question. Looking forward to your reply.
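In case it clarifies what I mean, this is how I imagined the mask would be built so that B_i excludes C_i (a sketch only; idx_near is my assumed [B, K] tensor of nearest-neighbor indices, not the repo's variable):

import torch

# Sketch only: a dispersion mask that zeroes out both the diagonal and
# the K nearest neighbors, so the background set excludes C_i as in the
# paper. idx_near: [B, K] long tensor of neighbor indices (assumed name).
def background_mask(batch_size, idx_near):
    mask = torch.ones(batch_size, batch_size)
    mask.fill_diagonal_(0)          # exclude the sample itself
    mask.scatter_(1, idx_near, 0)   # exclude the K nearest neighbors
    return mask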
Hi, Professor. I'm very interested in your paper. Could you provide the Office-Home experiment settings file for the partial-set DA setting, i.e., the hyper-parameters K and β? Thank you very much; I look forward to your early reply.
Thanks to the author for contributing the code!
Based on the hyperparameters provided in the paper, I found in my experiments that the accuracy on the Office-Home dataset is very low (only 1.58% on the A->P task) and the model seems difficult to fit.
The source-domain model parameters I used were provided by SHOT.