jihanyang / afn
(ICCV'19 Best Paper Nomination) Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation
Hi,
Did you use a model pretrained on ImageNet, as other domain adaptation methods do?
Hi~ Where can I get the "IAFN/result" folder and the "/data/da" folder? They are needed by eval.py.
Thank you.
Hi,
May I get the code for producing Figure 1 of your paper, please?
Hi, thanks for sharing the code!
As shown in AFN/vanilla/Visda2017/SAFN/code/model/net.py, the "ResClassifier" contains three FC layers with 2048 × 1000, 1000 × 1000, and 1000 × class_num neurons, respectively, while in AFN/vanilla/Office31/SAFN/code/model/net.py, the "ResClassifier" contains two FC layers with 2048 × 1000 and 1000 × class_num neurons, respectively. Why do the two benchmarks use different classifier architectures?
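For concreteness, the two classifier variants described above can be sketched as follows. This is a minimal reconstruction from the layer sizes quoted in the question only; the actual repository code may include dropout, batch norm, or other details.

```python
import torch
import torch.nn as nn

class ResClassifierVisda(nn.Module):
    # Three FC layers, as described for Visda2017: 2048 -> 1000 -> 1000 -> class_num
    def __init__(self, class_num=12):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2048, 1000), nn.ReLU(inplace=True),
            nn.Linear(1000, 1000), nn.ReLU(inplace=True),
            nn.Linear(1000, class_num),
        )

    def forward(self, x):
        return self.fc(x)

class ResClassifierOffice31(nn.Module):
    # Two FC layers, as described for Office31: 2048 -> 1000 -> class_num
    def __init__(self, class_num=31):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2048, 1000), nn.ReLU(inplace=True),
            nn.Linear(1000, class_num),
        )

    def forward(self, x):
        return self.fc(x)
```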
Dear Yang,
Thank you so much for sharing your code.
I have a question regarding the proposed get_L2norm_loss_self_driven loss. In your code, for example in Office31:

def get_L2norm_loss_self_driven(x):
    radius = x.norm(p=2, dim=1).detach()
    assert radius.requires_grad == False
    radius = radius + 1.0
    l = ((x.norm(p=2, dim=1) - radius) ** 2).mean()
    return args.weight_L2norm * l
x.norm cancels out when you compute l: as a result, l is always 1 and the loss is always args.weight_L2norm, which is 0.05, throughout training. I also printed l while training your model, and it is indeed always one.
Could you please explain this to me so that I can understand your paper better?
Thank you so much in advance for your help.
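For context, here is a minimal PyTorch sketch (with hypothetical random features) illustrating what the question observes: the loss *value* is constant, but its *gradient* with respect to x is not, because radius is detached from the graph.

```python
import torch

def get_L2norm_loss_self_driven(x, weight_L2norm=0.05):
    # radius is detached, so it acts as a constant target equal to ||x|| + 1
    radius = x.norm(p=2, dim=1).detach() + 1.0
    l = ((x.norm(p=2, dim=1) - radius) ** 2).mean()
    return weight_L2norm * l

x = torch.randn(4, 8, requires_grad=True)
loss = get_L2norm_loss_self_driven(x)
# Value-wise, (||x|| - (||x|| + 1))^2 == 1 for every sample,
# so the loss always evaluates to weight_L2norm.
loss.backward()
# Gradient-wise, d/dx (||x|| - radius)^2 = 2 (||x|| - radius) * x / ||x||
# = -2 x / ||x||, which is nonzero: a gradient-descent step moves each
# feature in the direction that enlarges its norm by the step size Delta r.
```

So the constant value does not mean the term is inert: the non-detached x.norm still carries a gradient that pushes every feature norm upward each step.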
Hi, the paper says you used 10-crop during evaluation, but it does not appear in the code. In addition, I cannot reproduce the results on either Office-31 or VisDA:
training A->W epoch : 20
25it [00:08, 3.00it/s]
training A->W epoch : 21
25it [00:08, 2.99it/s]
training A->W epoch : 22
25it [00:08, 2.90it/s]
training A->W epoch : 23
25it [00:08, 3.00it/s]
training A->W epoch : 24
25it [00:08, 2.93it/s]
training A->W epoch : 25
25it [00:08, 2.95it/s]
training A->W epoch : 26
25it [00:08, 2.96it/s]
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fe467713128>>
Traceback (most recent call last):
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
self._shutdown_workers()
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
self.worker_result_queue.get()
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/connection.py", line 732, in answer_challenge
message = connection.recv_bytes(256) # reject large message
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/v-lew/anaconda3/envs/PyTorch0.4Python3.6/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError:
training A->W epoch : 27
25it [00:08, 2.99it/s]
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fe46c1bc4e0>>
(same EOFError traceback as above)
training A->W epoch : 28
25it [00:08, 2.91it/s]
training A->W epoch : 29
25it [00:08, 2.99it/s]
Hi there. I wonder why the feature 'x' should be L2-normalized first?
After L2 normalization, the norm of feature 'x' is always 1, and its true norm is lost.
Hello, thank you for sharing your code. It is very useful.
Could you publish the code for the visualizations in Figure 1 and Figure 4?
Thank you very much.
You have done great work.
Could you publish the code for the visualizations in Figure 1 and Figure 4?
Thank you very much. Looking forward to your reply.
Hi,
May I know how we can test the results with an AlexNet model? It would be really nice if you could compare your results with an AlexNet backbone.
Thanks and regards,
Hi,
Thanks for releasing the code for your work.
One quick question: in Section 3.2 of your ICCV paper, you describe an L2-preserved dropout operation (to meet the adaptive L2 feature-norm goal). However, I can't seem to find it in this repository. For example, the model in the vanilla/VisDA code seems to use ordinary nn.Dropout, which, if I recall correctly, is the L1-preserved dropout you mention in the paper. Is the L2-preserved dropout not used, or is it simply omitted in this release of the code?
Thanks
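For background, one plausible way to implement such an operation: standard inverted dropout scales kept units by 1/(1-p), which preserves the expected (L1-style) activation sum, whereas scaling by 1/sqrt(1-p) preserves the expected squared L2 norm instead. The helper below is a hypothetical sketch of that idea, not code from this repository.

```python
import torch

def l2_preserved_dropout(x, p=0.5, training=True):
    # Hypothetical L2-preserving dropout: like inverted dropout, but the
    # kept units are scaled by 1/sqrt(1-p), so E[||output||_2^2] equals
    # ||x||_2^2 (nn.Dropout's 1/(1-p) scaling preserves E[sum(output)]
    # instead, and inflates the expected squared norm).
    if not training or p == 0.0:
        return x
    keep = 1.0 - p
    mask = torch.bernoulli(torch.full_like(x, keep))
    return x * mask / keep ** 0.5
```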
Hi Jihan,
This is great work, and I am currently reproducing the results.
I am a little confused about the result of the source-only ResNet-50. I believe this result can be obtained by simply setting the weights of the L2-norm loss and the entropy loss to zero. However, the results turned out to be much better: for example, on Office-31 A→W the accuracy is about 0.79 (0.68 reported). That said, I have seen the same source-only results reported in other papers.
Looking forward to your reply.
Hi,
I have a quick question about get_L2norm_loss_self_driven in the Office-31 example. As you mentioned in #4, the loss value depends only on Δr, so it is constant throughout training.
If that is the case, then the total loss is essentially (classification loss + entropy loss + a constant produced by get_L2norm_loss_self_driven). Since that constant depends only on Δr and has nothing to do with the L2 norm, I am wondering what the role of the L2 norm is in this case?
Looking forward to hearing from you soon!
Thank you,