g-sfda's People

Contributors

albert0147

g-sfda's Issues

Are test data the same as data used for adaptation?

Hi. Thanks for sharing your research.

I was wondering whether the test data from the target domain are the same as the data used for adaptation on the target domain, because your source code seems to give the same path to both datasets (https://github.com/Albert0147/G-SFDA/blob/main/train_tar_visda.py#L392-L395). In my opinion, test data should not be accessed before evaluation, even if they are unlabeled. Am I missing something, or is this intended behavior?

Best regards.
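
For illustration, the separation I had in mind looks roughly like the sketch below (generic PyTorch stand-ins, not your actual dataset classes or paths):

```python
# A rough illustration of the split I had in mind: adapt on one part of the
# target data and evaluate on a disjoint held-out part. The TensorDataset
# below is a stand-in for the target-domain image list, not your class.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

target_data = TensorDataset(torch.randn(100, 3, 224, 224),
                            torch.randint(0, 12, (100,)))

n_eval = len(target_data) // 10  # hold out 10% for evaluation
adapt_set, eval_set = random_split(
    target_data, [len(target_data) - n_eval, n_eval],
    generator=torch.Generator().manual_seed(0))

adapt_loader = DataLoader(adapt_set, batch_size=64, shuffle=True)
eval_loader = DataLoader(eval_set, batch_size=64, shuffle=False)
```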

About At

Good work! I have two questions:

  1. The generation of the sparse domain attention (SDA) vector in the code is different from that in the paper. Why? In the paper there is an embedding layer, but in the code the SDA vector is initialized directly and regularized by its norm (my reading of the code is sketched after these questions).

  2. Why can the target domain attention be learned using only the source data? It seems that At is obtained by feeding source data through a different path whose mask is initialized differently. Why does this work?
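
For reference, my reading of the code is roughly the following sketch. It is illustrative only: the shapes, initialization choices, and regularization coefficient are placeholders, not your exact implementation.

```python
# Two learnable attention vectors (A_s, A_t) with different initializations,
# both applied to features from the same source data, plus a norm-based
# regularizer that encourages sparse masks. All values are illustrative.
import torch
import torch.nn as nn

class SparseDomainAttention(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Different initializations give the two paths different masks.
        self.A_s = nn.Parameter(torch.ones(feat_dim))
        self.A_t = nn.Parameter(torch.rand(feat_dim))

    def forward(self, feat, domain):
        mask = self.A_s if domain == "source" else self.A_t
        return feat * mask  # element-wise channel gating

    def norm_penalty(self):
        # L1 norms of the masks encourage sparsity in both paths.
        return self.A_s.abs().sum() + self.A_t.abs().sum()

sda = SparseDomainAttention()
feat = torch.randn(8, 256)                 # source-domain features
loss = sda(feat, "source").pow(2).mean()   # stand-in task loss
loss = loss + 1e-4 * sda.norm_penalty()    # norm regularization
loss.backward()
```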

About training office home dataset

Thanks for your ingenious method. I am really impressed by it 👍
However, I'd like to ask a few questions about training. I first downloaded the VisDA dataset, but its structure seems a bit complicated, so I started with the Office-Home dataset instead.
I am a little unsure: is it right to modify the Art, Clipart, ... .txt files to fit my environment? I rewrote them roughly as follows.
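
This assumes each line of the list files has the common "<image_path> <label>" format; the prefixes below are placeholders for my own paths.

```python
# A rough sketch of rewriting the image-list paths for a local setup.
# Assumes each line is "<image_path> <label>"; old_prefix/new_prefix are
# hypothetical placeholders, not paths from the repository.
old_prefix = "/path/in/original/lists/"    # prefix found in the shipped lists
new_prefix = "/home/me/data/office-home/"  # my local dataset root

with open("Art.txt") as f:
    lines = f.readlines()

with open("Art_local.txt", "w") as f:
    for line in lines:
        path, label = line.rsplit(" ", 1)  # label is the last field
        f.write(path.replace(old_prefix, new_prefix) + " " + label)
```
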
Also, after the ResNet baseline model was downloaded at the start of training, there was no output for about an hour. May I ask how long it takes to train on the Office-Home dataset?
Thanks for reading this issue. I really appreciate it.

How to train At?

Thanks for your interesting work. I'm impressed with your method, but a little confused.
The paper states that "As and At are both trained on the source domain and are fixed during the adaptation to the target domain." How do you train At without target-domain data?

Concern about the Source-Free assumption

Hi,
I have some doubts about the source-free assumption you make in the paper.
According to Algorithm 1, the pretrained source model also employs A_t.
In my understanding, this is not purely a source-pretraining stage; adaptation is in fact already happening, since the model is exposed to both source and target. Moreover, this means source and target data are available at the same time, i.e. the source-free assumption no longer holds.

I would be happy if you could clarify this.
Best,
P.

Question about VisDA2017

Good work!

We know VisDA-2017 has three parts: a train set, a validation set, and a test set.

In the code (train_src_visda.py and train_tar_visda.py), the loaders are set up as follows (sketched below):

  • for training the source model, G-SFDA uses VisDA's train set (split 90% train / 10% test);
  • for training the target model, G-SFDA uses VisDA's validation set (shuffle=True, batch size);
  • for testing the target model, G-SFDA also uses VisDA's validation set (shuffle=False, batch size × 3).

One question: why not use VisDA's test set?
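
For reference, the setup described above corresponds roughly to this sketch (generic PyTorch; the dataset stand-in and batch size are placeholders, not the repository's actual objects):

```python
# Illustrative sketch of the loader setup described above; the TensorDataset
# is a stand-in for VisDA's validation split and batch_size is a placeholder.
import torch
from torch.utils.data import DataLoader, TensorDataset

visda_validation = TensorDataset(torch.randn(100, 3, 224, 224),
                                 torch.randint(0, 12, (100,)))
batch_size = 64

# Adaptation: validation split, shuffled.
target_loader = DataLoader(visda_validation, batch_size=batch_size,
                           shuffle=True)
# Evaluation: the same validation split, unshuffled, three times the batch size.
test_loader = DataLoader(visda_validation, batch_size=batch_size * 3,
                         shuffle=False)
```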

About the differences in results between tables 1-4

Thanks for your great contribution to the SFDA task. I am really impressed by the method in your G-SFDA paper.
However, we found some differences in the results between Tables 1-4. Specifically, for the VisDA-C dataset, the average result in Table 1 is 85.4, but under the same conditions the average in Table 3 is 85.0.
The same holds for the Office-Home dataset: the average in Table 2 is 71.3, but under the same conditions the average in Table 4 is 70.8.
Could you explain the reasons for these differences?
