ypxie / pytorch-neucom
PyTorch implementation of DeepMind's Differentiable Neural Computer paper.
mldl@mldlUB1604:~/ub16_prj/pytorch-NeuCom/tasks/Copy$ python train.py
Using CUDA.
Iteration 0/100000
Traceback (most recent call last):
  File "train.py", line 177, in <module>
    output, _ = ncomputer(input_data)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "../../neucom/dnc.py", line 108, in forward
    interface['erase_vector']
  File "../../neucom/memory.py", line 384, in write
    lookup_weight = self.get_content_address(memory_matrix, key, strength)
  File "../../neucom/memory.py", line 78, in get_content_address
    cos_dist = cosine_distance(memory_matrix, query_keys)
  File "../../neucom/utils.py", line 154, in cosine_distance
    memory_norm = torch.norm(memory_matrix, 2, 2, keepdim=True)
TypeError: norm() got an unexpected keyword argument 'keepdim'
mldl@mldlUB1604:
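This TypeError comes from running the repo on a PyTorch build that predates the keepdim keyword (it was added to the reduction ops around version 0.2). A minimal version-compatible sketch, assuming the shapes used in neucom/utils.py (the helper name norm_keepdim is mine):

import torch

def norm_keepdim(tensor, p, dim):
    # torch.norm only gained the `keepdim` kwarg around PyTorch 0.2;
    # older builds keep the reduced axis (size 1) by default. Try the
    # new signature first and fall back for old versions.
    try:
        return torch.norm(tensor, p, dim, keepdim=True)
    except TypeError:
        out = torch.norm(tensor, p, dim)
        if out.dim() < tensor.dim():   # newer semantics squeezed the axis
            out = out.unsqueeze(dim)
        return out

cosine_distance() could then call norm_keepdim(memory_matrix, 2, 2) in place of the failing line.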
Hey,
In the function get_allocation_weight() in memory.py, cumprod is used, but as far as I know this operation does not have autograd support yet and is still in PR: https://github.com/pytorch/pytorch/pull/1439. So I'm really confused: how does the backward pass go through this operation for you? Are the resulting variables not in the computation graph?
Thanks in advance! Really nice repository :)
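For what it's worth, a common workaround from before that PR landed is to route the cumulative product through log-space, since log, cumsum, and exp all have autograd support. A minimal sketch (the function name is mine), valid here because the sorted usage entries lie in (0, 1):

import torch

def cumprod_via_logs(x, dim=1, eps=1e-20):
    # exp(cumsum(log(x))) equals cumprod(x) for strictly positive x,
    # and every op in the chain has a backward pass; eps guards log(0).
    return torch.exp(torch.cumsum(torch.log(x + eps), dim))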
Using CPU.
Iteration 0/100000
Traceback (most recent call last):
  File "train.py", line 177, in <module>
    output, _ = ncomputer(input_data)
  File "/home/jramapuram/.venv/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "../../neucom/dnc.py", line 108, in forward
    interface['erase_vector']
  File "../../neucom/memory.py", line 398, in write
    allocation_weight = self.get_allocation_weight(sorted_usage, free_list)
  File "../../neucom/memory.py", line 144, in get_allocation_weight
    flat_unordered_allocation_weight.cpu()
  File "/home/jramapuram/.venv/lib/python2.7/site-packages/torch/autograd/variable.py", line 629, in scatter_
    return Scatter(dim, True)(self, index, source)
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
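This is autograd's in-place restriction: scatter_ mutates its destination, and a leaf Variable that requires grad may not be mutated in place. A minimal sketch of the usual fix, cloning the destination first (names and shapes are stand-ins for those in get_allocation_weight()):

import torch
from torch.autograd import Variable

n = 16                                         # memory slots (stand-in)
sorted_weights = Variable(torch.rand(n), requires_grad=True)
free_list = Variable(torch.randperm(n))        # unsort permutation

dest = Variable(torch.zeros(n), requires_grad=True)
# dest.scatter_(0, free_list, sorted_weights)  # RuntimeError: in-place on a leaf
dest = dest.clone()                            # clone() is differentiable and
dest.scatter_(0, free_list, sorted_weights)    # returns a non-leaf, so this is legal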
(VENVpytorch2) mldl@mldlUB1604:~/ub16_prj/pytorch-NeuCom/tasks/Copy$ python train.py
Using CUDA.
Iteration 0/100000
Traceback (most recent call last):
  File "train.py", line 177, in <module>
    output, _ = ncomputer(input_data)
  File "/home/mldl/ub16_prj/VENV_host/VENV_pytorch2/VENVpytorch2/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "../../neucom/dnc.py", line 108, in forward
    interface['erase_vector']
  File "../../neucom/memory.py", line 384, in write
    lookup_weight = self.get_content_address(memory_matrix, key, strength)
  File "../../neucom/memory.py", line 78, in get_content_address
    cos_dist = cosine_distance(memory_matrix, query_keys)
  File "../../neucom/utils.py", line 154, in cosine_distance
    memory_norm = torch.norm(memory_matrix, 2, dim=True)
  File "/home/mldl/ub16_prj/VENV_host/VENV_pytorch2/VENVpytorch2/local/lib/python2.7/site-packages/torch/autograd/variable.py", line 596, in norm
    return Norm(p, dim)(self)
  File "/home/mldl/ub16_prj/VENV_host/VENV_pytorch2/VENVpytorch2/local/lib/python2.7/site-packages/torch/autograd/_functions/reduce.py", line 192, in forward
    output = input.norm(self.norm_type, self.dim)
TypeError: norm received an invalid combination of arguments - got (int, bool), but expected one of:
(VENVpytorch2) mldl@mldlUB1604:~/ub16_prj/pytorch-NeuCom/tasks/Copy$
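The call that fails here passes dim=True, but the third argument must be the integer axis to reduce, hence the "(int, bool)" in the TypeError. A sketch of the corrected line in cosine_distance(), assuming axis 2 is the word dimension:

# dim is the axis index, not a boolean:
memory_norm = torch.norm(memory_matrix, 2, 2)
# pre-0.2 reductions keep the reduced axis with size 1; on newer
# builds, restore it by hand:
if memory_norm.dim() < memory_matrix.dim():
    memory_norm = memory_norm.unsqueeze(2)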
Hi,
I think the training progresses well, but there doesn't seem to be any improvement after 10,000 iterations. Sometimes I get nans very early on and have to restart, and I also got nans after 40,000 iterations.
May I ask how to serialise and save the DNC at checkpoints? I set a checkpoint after 100 iterations and got the following error:
Using CUDA.
Iteration 0/50000
Avg. Logistic Loss: 0.6931
Iteration 50/50000
Avg. Logistic Loss: 0.6674
Iteration 100/50000
Avg. Logistic Loss: 0.4560
Saving Checkpoint ...
Traceback (most recent call last):
  File "train.py", line 183, in <module>
    torch.save(ncomputer, f)
  File "/home/ajay/anaconda3/envs/rllab3/lib/python3.5/site-packages/torch/serialization.py", line 120, in save
    return _save(obj, f, pickle_module, pickle_protocol)
  File "/home/ajay/anaconda3/envs/rllab3/lib/python3.5/site-packages/torch/serialization.py", line 186, in _save
    pickler.dump(obj)
_pickle.PicklingError: Can't pickle <class 'memory.mem_tuple'>: attribute lookup mem_tuple on memory failed
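The failure is in pickle rather than in the DNC itself: torch.save(ncomputer, f) pickles the whole module, and pickle can only resolve a namedtuple whose typename matches a module-level attribute in memory.py. A sketch of two possible fixes ('checkpoint.pth' and the constructor call are placeholders):

import torch

# Option 1: in memory.py, define the namedtuple at module top level
# under its own name, so `from memory import mem_tuple` resolves:
#   from collections import namedtuple
#   mem_tuple = namedtuple('mem_tuple', [...])   # fields elided

# Option 2 (usually preferable): checkpoint only the parameters.
torch.save(ncomputer.state_dict(), 'checkpoint.pth')

# Restore later into a freshly constructed model:
# ncomputer = DNC(...)   # same constructor arguments as in train.py
# ncomputer.load_state_dict(torch.load('checkpoint.pth'))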
PS - Sometimes I get nans very early on and have to restart, and I also got nans after 40,000 iterations. I've seen this very often with Neural Turing Machines too, so I guess it's inherent to this type of model?
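For what it's worth, nans in these models usually trace back to exploding gradients through the addressing ops or to a division/log of zero inside the memory module. A minimal sketch of the common mitigation, gradient clipping in the update step (the names mirror train.py; the threshold of 10.0 is a guess):

import torch

def training_step(ncomputer, optimizer, loss):
    # Clip the global gradient norm before stepping so one spiky batch
    # cannot blow up the memory addressing. (The function was renamed
    # clip_grad_norm_ in PyTorch >= 0.4.)
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm(ncomputer.parameters(), 10.0)
    optimizer.step()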
Hi @ypxie,
I just saw that DeepMind released their TensorFlow/Sonnet implementation of the DNC:
https://github.com/deepmind/dnc
I was wondering if you're interested in revisiting this project to see if we can compare it with DeepMind's code? I spent quite a lot of time previously on your implementation; it would be a lot of fun to start working on it again :)
Kind regards,
Ajay
Hi, thanks for sharing this; it's really cool that you managed to get it working :)
I got an error, though, when trying to run the repo:
Iteration 0/100000
Traceback (most recent call last):
  File "train.py", line 155, in <module>
    loss.backward()
  File "/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/variable.py", line 158, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
  File "/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/torch/autograd/variable.py", line 201, in _do_backward
    "leaf variable was used in an inplace operation"
AssertionError: leaf variable was used in an inplace operation
I've seen this before, but I can't remember how to fix it. Can you suggest anything, please?
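For reference, the usual culprits are in-place ops (x += y, x.fill_(), x.scatter_()) applied to a leaf Variable that requires grad. A toy sketch of the two standard fixes (the variable w is just an illustration, not code from the repo):

import torch
from torch.autograd import Variable

w = Variable(torch.zeros(5), requires_grad=True)
# w += 1                  # AssertionError: in-place op on a leaf
v = w + 1                 # fix 1: use the out-of-place equivalent
u = w.clone()             # fix 2: clone() first; the copy is a
u += 1                    # non-leaf, and gradients still reach w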
Also, maybe there's a typo in the "how to run" part of the repo's cover page; I think it should be:
python tasks/Copy/train.py
Thanks a lot for your help :)