albert0147 / g-sfda
Code for our ICCV 2021 paper 'Generalized Source-free Domain Adaptation'
License: MIT License
Hi. Thanks for sharing your research.
I was wondering whether the test data from the target domain are the same as the data used for adaptation on the target domain. It seems your source code gives the same path to both datasets (https://github.com/Albert0147/G-SFDA/blob/main/train_tar_visda.py#L392-L395). In my opinion, test data should not be accessed before evaluation, even if they are unlabeled. Am I missing something, or is this intended behavior?
Best regards.
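For reference, one way to avoid the overlap this issue describes is to split the target image list into disjoint adaptation and held-out test subsets before building the two loaders. This is a minimal sketch, not the repository's code; the function name and the 'path label' line format are assumptions:

```python
import random

def split_image_list(list_lines, test_fraction=0.1, seed=0):
    """Split an image list (lines of 'path label') into disjoint
    adaptation and held-out test subsets, so the adaptation loader
    never sees the evaluation samples."""
    lines = list(list_lines)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(lines)
    n_test = max(1, int(len(lines) * test_fraction))
    # held-out test lines first, remainder used for adaptation
    return lines[n_test:], lines[:n_test]
```

The two returned lists can then be passed to separate dataset objects, so evaluation data is never touched during adaptation.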
Good work! I have two questions:
The generation of the sparse domain attention (SDA) vector in the code is different from that in the paper. Why? In your paper, there is an embedding layer. But in the code, the sparse domain attention (SDA) vector is initialized and regularized by their norm.
Why can the target domain attention be generalized using only the source data? It seems that At is learned by feeding source data through a different path with a differently initialized mask. Why does this work?
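To make the question concrete, here is a toy version of what the issue describes: a learnable attention vector over feature channels, kept in (0, 1) by a sigmoid and pushed toward sparsity by an L1 penalty on its activations. The class name and all details are illustrative assumptions, not the paper's or the repository's code:

```python
import math
import random

class SparseDomainAttention:
    """Toy sparse domain-attention vector: a per-channel mask
    initialized randomly and regularized toward sparsity via the
    L1 norm of its activations (illustrative, not the paper's code)."""

    def __init__(self, dim, seed=0):
        rng = random.Random(seed)
        # each domain (source / target) would hold its own
        # differently initialized copy of these logits
        self.logits = [rng.gauss(0.0, 1.0) for _ in range(dim)]

    def mask(self):
        # sigmoid keeps every attention weight strictly in (0, 1)
        return [1.0 / (1.0 + math.exp(-z)) for z in self.logits]

    def sparsity_penalty(self):
        # L1 norm of the mask encourages many near-zero channels
        return sum(self.mask())

    def apply(self, feats):
        # element-wise gating of a feature vector by the mask
        return [f * w for f, w in zip(feats, self.mask())]
```

Under this reading, "initialized and regularized by their norm" would mean the logits replace the embedding layer from the paper, with the norm penalty providing the sparsity.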
Thanks for your ingenious method. I am really impressed by it.
However, I'd like to ask a few questions about training. I first downloaded the VisDA dataset, but its structure seems a bit complicated, so I used the Office-Home dataset first instead.
Is it right to modify the Art, Clipart, ... .txt files to fit my environment?
Also, after the ResNet baseline model downloads at the start of the training run, there is no output for about an hour. May I ask how long it takes to train on the Office-Home dataset?
Thanks for reading this issue. I really appreciate it.
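On adapting the image-list files to a local environment: a common approach is to rewrite the directory prefix in each 'path label' line so the paths point at the local dataset root. This is a sketch under those assumptions; the prefix values and function name are examples, not the repo's defaults:

```python
def rewrite_list_prefix(lines, old_prefix, new_prefix):
    """Rewrite the directory prefix in each 'path label' line of an
    image list so the paths match a local dataset root."""
    out = []
    for line in lines:
        # each line is '<image path> <integer label>'
        path, label = line.rsplit(" ", 1)
        if path.startswith(old_prefix):
            path = new_prefix + path[len(old_prefix):]
        out.append(f"{path} {label}")
    return out
```

Reading a .txt list, passing its lines through this, and writing the result back gives lists that fit the local directory layout without touching the labels.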
Thanks for your interesting work. I'm impressed with your method, but a little confused.
In the paper, "As and At are both trained on the source domain and are fixed during the adaptation to the target domain." How do you train At without target domain data?
Hi,
I have some doubts about the source-free assumption you do in the paper.
According to Algorithm 1, the pretrained Source model also employs A_t.
In my understanding this is not a source-pretraining stage; in fact, adaptation is already happening, since the model is exposed to both source and target. This also means source and target data are available at the same time, i.e. the source-free assumption drops.
I would be happy if you could clarify this.
Best,
P.
Good work!
We know VisDA2017 has three parts: a train set, a validation set, and a test set.
In the code (train_src_visda.py and train_tar_visda.py):
for training the source model, G-SFDA uses VisDA's train set (90% train, 10% test);
for adapting the target model, G-SFDA uses VisDA's validation set (shuffle=True, batch size batchSize);
for testing the target model, G-SFDA also uses VisDA's validation set (shuffle=False, batch size batchSize*3).
One question: why not use VisDA's test set?
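The split usage this issue describes can be summarized in one place. The values below are transcribed from the issue text itself (batch_size is symbolic); this is a plain-data sketch, not the scripts' actual configuration code:

```python
def visda_loader_config(batch_size):
    """Summary of how the issue reports G-SFDA's VisDA scripts use
    the dataset splits (values taken from the issue, not an official
    spec)."""
    return {
        "source_training": {
            "split": "train",
            "portion": "90% train / 10% test",
            "batch_size": batch_size,
        },
        "target_adaptation": {
            "split": "validation",
            "shuffle": True,
            "batch_size": batch_size,
        },
        "target_testing": {
            "split": "validation",
            "shuffle": False,
            "batch_size": batch_size * 3,
        },
    }
```

Laid out this way, the question becomes clear: the "test" split of VisDA2017 never appears in any of the three roles.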
@Albert0147
Albert, may I ask whether this method can be used on medical image datasets with several domains?
Thanks for your great contribution to the SFDA task. I am really impressed by the method in your G-SFDA paper.
However, we found some differences in the results between Tables 1-4. Specifically, for the VisDA-C dataset, the average result in Table 1 is 85.4, but under the same conditions the average result in Table 3 is 85.0.
A similar difference appears for the Office-Home dataset: in Table 2 the average result is 71.3, but under the same conditions the average result in Table 4 is 70.8.
Could you explain the reasons for these differences?
It seems that we need the train/test splits of the source/target datasets for Office-Home in utils.py. Could you provide the txt files?