element-research / dpnn
deep extensions to nn
License: BSD 3-Clause "New" or "Revised" License
Hi, I'm getting this error when requiring dpnn:
/home/lighton/torch/install/share/lua/5.1/trepl/init.lua:389:
/home/lighton/torch/install/share/lua/5.1/trepl/init.lua:389:
/home/lighton/torch/install/share/lua/5.1/torch/init.lua:102:
bad argument #2 (invalid parent class name nn.Decorator)
stack traceback:
[C]: in function 'error'
/home/lighton/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
[string "_RESULT={require 'dpnn'}"]:1: in main chunk
[C]: in function 'xpcall'
/home/lighton/torch/install/share/lua/5.1/trepl/init.lua:661: in function 'repl'
...hton/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:204: in main chunk
[C]: at 0x00405d50
I tried
luarocks remove dpnn
luarocks install dpnn
But I still get it. Maybe it is related to this?
Hi nicholas,
Did you try the semi-supervised mode with the ladder network in this repo?
Best,
Jake
A module to convert between different data formats: nn.Convert('bchw', 'bf').
It does the work of nn.Transpose and nn.View.
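For example, a minimal sketch of the intended usage (the sizes here are made up):
require 'dpnn'
local c = nn.Convert('bchw', 'bf')   -- batch x channel x height x width -> batch x feature
local x = torch.randn(8, 3, 4, 4)    -- a batch of 8 images, 3 channels, 4x4
local y = c:forward(x)               -- y is 8 x 48 (3*4*4 features per sample)
print(y:size())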
Hi, I'm interested in using inception layers, some without the 1x1 path, and some without the reduction after pooling.
I committed my quick solution at https://github.com/bamos/dpnn/commit/69f7caa40378cc6b97fe4b1d6c4998ec98235a4b, which doesn't add these layers when the reduceSize value is nil.
I'm happy to discuss alternatives and help add implementations/tests to merge this and #11 into the master branch.
Hi,
I'm looking for a version of ZipTable with the following functionality: given a table {vec, tab} of a vector vec and a table of vectors tab, produce { {vec, tab[1]}, {vec, tab[2]}, ..., {vec, tab[n]} }, where tab[i] is the i-th element of tab.
Is there a way to achieve this with nn/dpnn modules?
I wrote a module to do that based on ZipTable, but am not sure I'm getting the derivatives correctly. Here it is:
local MyZipTable, parent = torch.class('nn.MyZipTable', 'nn.Container')
-- based on ZipTable in dpnn
-- input  : { v, {a, b, c} }
-- output : { {v,a}, {v,b}, {v,c} }
function MyZipTable:__init()
   parent.__init(self)
   self.output = {}
   self.gradInput = {}
end

function MyZipTable:updateOutput(input)
   assert(#input == 2, "input must be a table of an element and a table")
   local inputEl, inputTable = input[1], input[2]  -- 'local' to avoid leaking globals
   self.output = {}
   for i,v in ipairs(inputTable) do
      self.output[i] = {inputEl:clone(), v}
   end
   return self.output
end

function MyZipTable:updateGradInput(input, gradOutput)
   assert(#input == 2, "input must be a table of an element and a table")
   local inputEl, inputTable = input[1], input[2]
   -- accumulate the gradient w.r.t. the shared element across all pairs;
   -- use the input's own tensor type (torch.zeros always returns a DoubleTensor)
   local gradInputEl = inputEl.new():resizeAs(inputEl):zero()
   local gradInputTable = {}
   for i,gradV in ipairs(gradOutput) do
      gradInputEl:add(gradV[1])
      gradInputTable[i] = gradV[2]:clone()
   end
   self.gradInput = {gradInputEl, gradInputTable}
   return self.gradInput
end
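For what it's worth, a finite-difference sanity check along these lines (only a sketch, using the MyZipTable above with made-up sizes) can verify the derivatives:
-- sum every entry of the output, so d(loss)/d(output) is all ones
local m = nn.MyZipTable()
local v, tab = torch.randn(3), {torch.randn(3), torch.randn(3)}
local function loss(input)
   local out = m:forward(input)
   local s = 0
   for _, pair in ipairs(out) do s = s + pair[1]:sum() + pair[2]:sum() end
   return s
end
local gradOutput = {}
for i = 1, #tab do gradOutput[i] = {torch.ones(3), torch.ones(3)} end
loss({v, tab})
local gradInput = m:backward({v, tab}, gradOutput)
-- finite-difference estimate for the first coordinate of v
local eps = 1e-4
v[1] = v[1] + eps; local lp = loss({v, tab})
v[1] = v[1] - 2*eps; local lm = loss({v, tab})
v[1] = v[1] + eps
print((lp - lm) / (2*eps), gradInput[1][1])  -- both should print 2 here (one per pair)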
The broader context is that I actually need to produce a table of the concatenations of vec with the elements of tab, i.e. { [vec tab[1]], [vec tab[2]], ..., [vec tab[n]] }, but given the functionality described above I can achieve this by using nn.JoinTable inside rnn.Sequencer, as sketched below. Note that I do not know the size of the table tab at the time the network is constructed, so I cannot use nn.Replicate.
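Concretely, a sketch of that combination, assuming the MyZipTable above and the rnn package:
require 'rnn'
local net = nn.Sequential()
net:add(nn.MyZipTable())                -- { v, {a,b,c} } -> { {v,a}, {v,b}, {v,c} }
net:add(nn.Sequencer(nn.JoinTable(1)))  -- applies JoinTable to each pair: { [v a], [v b], [v c] }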
Any help is appreciated!
P.S. If this is not the appropriate place for such an issue, please advise where else I should post. Thank you.
Hi, it looks like when I apply sharedClone to a nn.BatchNormalization layer, the running mean and running variance stored in this module are not shared across the clones.
I can see some rationale for this, but I am not completely sure this is intentional behavior. A minimal repro sketch follows.
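This is only a sketch of what I'm seeing (the sizes are hypothetical):
require 'dpnn'
local bn = nn.BatchNormalization(10)
local clone = bn:sharedClone()
bn:forward(torch.randn(4, 10))  -- training-mode forward updates bn.running_mean
-- the clone's running statistics are unchanged, i.e. not shared:
print(bn.running_mean:eq(clone.running_mean):all())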
Module:ioShapes() returns inputShape, outputShape. dp will use this to extrapolate module and criterion shapes.
Hi, I'd like to use inception layers with l2 pooling, like in FaceNet, but Inception.lua only uses max pooling.
As a quick workaround for myself, I've modified the Inception layer in the master branch of my fork to accept a generic pool argument instead of just the size and stride of max pooling:
https://github.com/bamos/dpnn/commit/c91d263add5832d7209ab5e3d6fd20e73d6e8c97
This works well for me, but the downside is that I've changed the API.
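For reference, a hypothetical call under the fork's modified API; the pool key is the new argument, and every size below is made up:
local inception = nn.Inception{
   inputSize = 192,
   kernelSize = {3, 5},
   kernelStride = {1, 1},
   outputSize = {128, 32},
   reduceSize = {96, 16, 32, 64},
   pool = nn.SpatialLPPooling(192, 2, 3, 3, 1, 1)  -- l2 pooling, as in FaceNet
}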
A method to generate read/write methods for each module (which ignore certain attributes by default).
gradParameters becomes nan.
Of course, if I just use the constituent layers and build my own FireModule function like:
function FireModule(nInputPlane, s1x1, e1x1, e3x3)
   local module = nn.Sequential()
   module:add(nn.SpatialConvolution(nInputPlane, s1x1, 1, 1)):add(nn.ReLU(true))
   local expand = nn.Concat(2)
   expand:add(nn.SpatialConvolution(s1x1, e1x1, 1, 1))
   expand:add(nn.SpatialConvolution(s1x1, e3x3, 3, 3, 1, 1, 1, 1))
   module:add(expand):add(nn.ReLU(true))
   return module
end
model:add(FireModule(a,b,c,d))
then everything works fine, but then I don't get the neat __tostring__.
Is there any way of fixing this from your side? Thanks.
Here, I find that self:sparseParameters will never return scales for nn.Module, so why do you use a scales variable here? Can you explain?
Here, I believe this line of code should be replaced by the following line:
local params, gradParams = self:parameters()
I cloned torch, followed by
luarocks install torch
luarocks install nn
luarocks install cutorch
luarocks install cunn
luarocks install torchx
luarocks install dpnn
However, the command below (require 'dpnn') results in an error:
th> require 'dpnn'
<torch-dir>/install/share/lua/5.1/trepl/init.lua:389: <torch-dir>/install/share/lua/5.1/trepl/init.lua:389: <torch-dir>/install/share/lua/5.1/torch/init.lua:102: class nn.Constant has been already assigned a parent class
stack traceback:
[C]: in function 'error'
<torch-dir>/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
[string "_RESULT={require 'dpnn'}"]:1: in main chunk
[C]: in function 'xpcall'
<torch-dir>/install/share/lua/5.1/trepl/init.lua:661: in function 'repl'
...rath/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:204: in main chunk
[C]: at 0x004064f0
Note: <torch-dir> is the torch home directory.
When I run th> require 'dpnn', or when I try to run th main.lua --help from feedback networks, I get the following error:
/home/mobeen/torch/install/bin/luajit: /home/mobeen/torch/install/share/lua/5.1/trepl/init.lua:389: /home/mobeen/torch/install/share/lua/5.1/trepl/init.lua:389: /home/mobeen/torch/install/share/lua/5.1/trepl/init.lua:389: /home/mobeen/torch/install/share/lua/5.1/trepl/init.lua:389: /home/mobeen/torch/install/share/lua/5.1/torch/init.lua:102: class nn.SpatialGlimpse has been already assigned a parent class
stack traceback:
[C]: in function 'error'
/home/mobeen/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
main.lua:14: in main chunk
[C]: in function 'dofile'
...been/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406500
I have installed
luarocks install rnn
luarocks install dpnn
luarocks install nn
luarocks install cutorch
but it is still not working.
I encountered this error message:
Cloning into 'dpnn'...
fatal: unable to connect to github.com:
github.com[0: 192.30.252.130]: errno=Connection timed out
Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/dpnn-scm-1.rockspec - Failed cloning git repository.
Hi guys,
I am able to install dpnn but get an error when requiring the library. Below is the output from the installation:
root@570ccca0a1d2:/digits/dpnn# luarocks make rocks/dpnn-scm-1.rockspec
cmake -E make_directory build && cd build && cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/usr/local/torch/install/" -DCMAKE_INSTALL_PREFIX="/usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1" && make
-- The C compiler identification is GNU 4.9.4
-- The CXX compiler identification is GNU 4.9.4
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found Torch7 in /usr/local/torch/install
-- Configuring done
-- Generating done
-- Build files have been written to: /digits/dpnn/build
cd build && make install
Install the project...
-- Install configuration: "Release"
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialBinaryLogisticRegression.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/BinaryClassReward.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/BinaryLogisticRegression.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/OneHot.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/FireModule.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/NCECriterion.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialRegionDropout.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/CAddTensorTable.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Constant.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ModuleCriterion.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Convert.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/VRClassReward.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Serial.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialMaxPooling.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Collapse.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/LookupTable.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ReinforceBernoulli.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ReverseTable.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Module.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ReinforceCategorical.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Inception.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Kmeans.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/init.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/BatchNormalization.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Sequential.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialConvolutionMM.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ReinforceNormal.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Clip.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/PrintSize.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ReinforceGamma.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Criterion.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialBinaryConvolution.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialBatchNormalization.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ZipTable.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ParallelTable.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/WhiteNoise.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialFeatNormalization.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/CategoricalEntropy.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ArgMax.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialUniformCrop.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialConvolution.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SpatialGlimpse.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/NCEModule.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/ZipTableOneToMany.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Dictionary.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Reinforce.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/Container.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/TotalDropout.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/SimpleColorTransform.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/PCAColorTransform.lua
-- Installing: /usr/local/torch/install/lib/luarocks/rocks/dpnn/scm-1/lua/dpnn/test.lua
Updating manifest for /usr/local/torch/install/lib/luarocks/rocks
dpnn scm-1 is now built and installed in /usr/local/torch/install/ (license: BSD)
Now I get the error below when I require it:
root@570ccca0a1d2:/digits/dpnn# th
  ______             __   |  Torch7
 /_  __/__  ________/ /   |  Scientific computing for Lua.
  / / / _ \/ __/ __/ _ \  |  Type ? for help
 /_/  \___/_/  \__/_//_/  |  https://github.com/torch
                          |  http://torch.ch
th> require 'dpnn'
/usr/local/torch/install/share/lua/5.1/trepl/init.lua:384: /usr/local/torch/install/share/lua/5.1/trepl/init.lua:384: /usr/local/torch/install/share/lua/5.1/torch/init.lua:102: bad argument #2 (invalid parent class name nn.Decorator)
stack traceback:
[C]: in function 'error'
/usr/local/torch/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
[string "_RESULT={require 'dpnn'}"]:1: in main chunk
[C]: in function 'xpcall'
/usr/local/torch/install/share/lua/5.1/trepl/init.lua:652: in function 'repl'
...ocal/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:199: in main chunk
[C]: at 0x00406670
[0.0344s]
th>
Running dontBackward() on FB's ResNet-18 model, from layers 8 and below, using
model.modules[1]:get(i):dontBackward()
results in the following error when I run model:backward(inputs, dE_dy):
In 1 module of nn.Sequential:
In 2 module of nn.Sequential:
/home/faratin/torch/install/share/lua/5.1/cudnn/init.lua:118: Error in CuDNN: CUDNN_STATUS_BAD_PARAM (cudnnBatchNormalizationBackward)
stack traceback:
[C]: in function 'error'
/home/faratin/torch/install/share/lua/5.1/cudnn/init.lua:118: in function 'errcheck'
...torch/install/share/lua/5.1/cudnn/BatchNormalization.lua:95: in function <...torch/install/share/lua/5.1/cudnn/BatchNormalization.lua:83>
[C]: in function 'xpcall'
/home/faratin/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/faratin/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function </home/faratin/torch/install/share/lua/5.1/nn/Sequential.lua:78>
[C]: in function 'xpcall'
/home/faratin/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/faratin/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
dev.lua:125: in function 'dryRun'
[string "_RESULT={dryRun(model,loss,10)}"]:1: in main chunk
[C]: in function 'xpcall'
/home/faratin/torch/install/share/lua/5.1/trepl/init.lua:652: in function 'repl'
...atin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:199: in main chunk
[C]: at 0x004064f0
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/faratin/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/faratin/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
dev.lua:125: in function 'dryRun'
[string "_RESULT={dryRun(model,loss,10)}"]:1: in main chunk
[C]: in function 'xpcall'
/home/faratin/torch/install/share/lua/5.1/trepl/init.lua:652: in function 'repl'
...atin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:199: in main chunk
[C]: at 0x004064f0
If I do not run `dontBackward()`, then model:backward() works fine.
dpnn adds methods to nn (those used in dp) such as sharedClone, sharedType, and outside.
Constant.lua has an error at line 28:
self.output:resize(self.size):copy(self.value)
self.size is not initialized.
4 module of nn.Sequential: ...orch/install/share/lua/5.1/dpnn/ReinforceCategorical.lua:20: invalid arguments: CudaTensor CudaTensor number expected arguments: *CudaLongTensor* CudaTensor int [boolean]
Why does this error occur?
My model is still failing to learn, so to be sure that my ReinforceGamma module is doing backprop correctly, I'd like to build a simple test that proves it can learn. To make sure the test itself works, I wanted to run it on the ReinforceNormal module first. I was thinking of a regression setup in which a Linear module placed before the Reinforce module learns to scale the input by 100:
net = nn.ConcatTable()
net:add( nn.Sequential():add( nn.Linear(1,1) ):add( nn.ReinforceNormal(30) ) )  -- closing ')' was missing
net:add( nn.Sequential():add( nn.Constant(1,1) ):add( nn.Add(1) ) )             -- closing ')' was missing
crit = nn.VRClassReward(net, 1, nil, reward)
function gradUpdate(asd, x, y, criterion, learningRate)
   local pred = asd:forward(x)
   local err = criterion:forward(pred, y)
   local gradCriterion = criterion:backward(pred, y)
   asd:zeroGradParameters()
   asd:backward(x, gradCriterion)
   asd:updateParameters(learningRate)
   return err
end
for i = 1, 10 do
   local x = torch.rand(1)
   local y = torch.mul(x, 100)
   err = gradUpdate(net, x, y, crit, 1.0)  -- was gradUpdate(q,x,y,c,1.0): q and c were undefined
end
where the VRClassReward module is modified to accept a function on initialization, which is used to calculate the reward given an input and a target. Specifically, this is my modified VRClassReward, and this is my reward function:
-- this function should return 1 where x and y are within 10 of each other, and 0 otherwise
function reward(x, y)
   local z = torch.abs(torch.add(x:float(), -1, y:float()))
   z:apply(function(q) if q < 10 then return 1 else return 0 end end)
   return z
end
Ideally, I expected this to allow net.modules[1].modules[1].weight to converge to 100, or somewhere close, but I'm seeing large negative values and I can't figure out why. Any suggestions?
The interface would be something like nn.TimedModule(module, print_interval, name); after print_interval calls to the module, it would print name and the average time for a forward and a backward pass. If this seems useful, I'm happy to implement it here; if there are other ways of timing specific modules, or technical issues I'm overlooking, that would be useful to know too. A rough sketch of what I have in mind follows.
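This is only a sketch (nothing below exists in dpnn yet); it assumes dpnn's nn.Decorator keeps the wrapped module in self.modules[1], and uses sys.clock from the sys package for timing:
require 'sys'

local TimedModule, parent = torch.class('nn.TimedModule', 'nn.Decorator')

function TimedModule:__init(module, print_interval, name)
   parent.__init(self, module)
   self.print_interval = print_interval
   self.name = name
   self.calls, self.fwdTime, self.bwdTime = 0, 0, 0
end

function TimedModule:updateOutput(input)
   local t = sys.clock()
   self.output = self.modules[1]:updateOutput(input)
   self.fwdTime = self.fwdTime + (sys.clock() - t)
   self.calls = self.calls + 1
   if self.calls % self.print_interval == 0 then
      -- report the running average forward/backward time for this module
      print(string.format('%s: avg fwd %.4fs, avg bwd %.4fs over %d calls',
         self.name, self.fwdTime / self.calls, self.bwdTime / self.calls, self.calls))
   end
   return self.output
end

function TimedModule:updateGradInput(input, gradOutput)
   local t = sys.clock()
   self.gradInput = self.modules[1]:updateGradInput(input, gradOutput)
   self.bwdTime = self.bwdTime + (sys.clock() - t)
   return self.gradInput
end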
Take a module and a table mapping attributes (previous node, outsize:1) to configuration attributes: 1, 2, 3, 4, or vice versa (just pass a list of arguments):
nn.PostInit(module, {'~outsize:1',...})
Use the special character '~' to access post-init variables: '~outsize:1'. The first word must be among the accepted keywords; we can add more as we go. This should make initialization easier by removing the need to calculate the input size of the next module.
I'm a little confused about the meaning of the variable target, which is the argument of the following two functions:
VRClassReward:updateOutput(input, target)
VRClassReward:updateGradInput(inputTable, target)
For reinforcement learning agents, the correct target for a given input is not always available. In fact, a reward is computed based on the model's input and the model's output only. Why, then, do we need this target?
I am running the languagemodel.lua tutorial script and getting:
torch/install/share/lua/5.1/dpnn/Dictionary.lua:5: DEPRECATED Jan 14, 2016
If Dictionary is DEPRECATED, shouldn't the corresponding docs mention this as well?
In https://github.com/Element-Research/dpnn/blob/master/Module.lua#L43, you just return the model itself for a recurrent module in an RNN. However, can this cause the problem of each element of the table self.outputs in the recurrent module referencing the same memory chunk, i.e. self.outputs[1] being identical to self.outputs[2], etc.? How do you manage this? Could you give me a hint? Many thanks.
Hello,
Thank you for the great packages. I am using dpnn together with the rnn package, and both are amazing.
Currently I have an issue using the OneHot layer with the MaskZero decorator from rnn, as OneHot (more specifically, scatter) does not accept zeros. I understand this works as intended, but it would be nice if the two worked together out of the box, probably with zeros as the output. A possible workaround is sketched below.
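In the meantime, a hedged workaround sketch: shift the indices so 0 becomes a valid class, then drop that class's column, which yields an all-zero row for masked entries (nClass is a hypothetical vocabulary size):
local nClass = 10
local oneHot = nn.Sequential()
oneHot:add(nn.AddConstant(1))         -- 0 -> 1, i -> i+1
oneHot:add(nn.OneHot(nClass + 1))
oneHot:add(nn.Narrow(2, 2, nClass))   -- drop column 1, so a zero index maps to all zeros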
I wonder why updateGradInput returns self.gradInput, since all of your computations are on gradMean and gradStdev?
I've been considering switching to Torch because I need an embedding class that only updates the rows actually used in a batch. I see there's one in this library, but it's been deprecated. Why is that? Thanks.
Hi,
I just reinstalled this package and got this error message when executing a Lua script:
/home/XXX/torch/install/share/lua/5.1/trepl/init.lua:384: cannot open /home/XXX/torch/install/share/lua/5.1/dpnn/NaN.lua: No such file or directory
stack traceback:
[C]: in function 'error'
/home/XXX/torch/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
XXX_cnn.lua:1: in main chunk
[C]: in function 'dofile'
...XXX/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
It seems NaN.lua is missing.
Best regards. Nice package, btw.
I installed this package via luarocks, and it seems it overloads its tables into the nn table; for example, there is nn.Convert. However, ?nn.Convert is undocumented. Is the documentation a feature still to be implemented, or is my installation missing something?
This code:
require 'cutorch'
require 'dpnn'
local v = torch.CudaTensor.zeros(torch.CudaTensor.new(), 5)
v[3] = 1
local one_hot = nn.OneHot(5):cuda()
one_hot:forward(v)
Gives me this error:
THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-75/cutorch/lib/THC/generic/THCStorage.c line=147 error=77 : an illegal memory access was encountered
Am I missing something really, really obvious or is this a really weird bug?
When I run the demo recurrent-visual-attention.lua with the SpatialGlimpse layer, my GPU is only at 20% utilization.
I find that SpatialGlimpse.lua processes the samples one by one. Is there some way to improve the speed or make this layer process data in batches?
I use nn.Serial and lightSerial to save my module with less disk space, but I cannot reload my module and train it. Is there any way I can do that?
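For reference, a sketch of the save/reload cycle in question (buildModel and the file path are hypothetical placeholders):
require 'dpnn'
local model = nn.Serial(buildModel())   -- buildModel() stands in for the actual network
model:lightSerial()                     -- keep only light-weight state when serializing
torch.save('model.t7', model)
-- later:
local reloaded = torch.load('model.t7')
-- resuming training from here is where the problem appears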
Hello
I'm running the text-classification example in NVIDIA DIGITS.
I already did luarocks install dpnn
and I get the following error:
ERROR: /usr/share/lua/5.1/trepl/init.lua:384: .../share/digits/digits/jobs/20160919-102150-25c4/model.lua:1: dpnn module required: luarocks install dpnn
My luarocks list:
luarocks list
Warning: Failed loading manifest for /home/gonzalo/.luarocks/lib/luarocks/rocks: /home/gonzalo/.luarocks/lib/luarocks/rocks/manifest: No such file or directory
Installed rocks:
argcheck
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
cudnn
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
cunn
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
cutorch
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
cwrap
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
dok
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
dpnn
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
env
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
gnuplot
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
graph
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
hdf5
0-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
image
1.1.alpha-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
lightningmdb
0.9.18.2-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
lpeg
1.0.0-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
lua-cjson
2.1devel-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
lua-pb
scm-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
luaffi
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
luafilesystem
1.6.3-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
moses
1.4.0-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
nccl
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
nn
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
nngraph
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
nnx
0.1-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
optim
1.0.5-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
paths
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
penlight
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
qtlua
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
qttorch
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
struct
1.4-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
sundown
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
sys
1.1-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
tds
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
threads
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
torch
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
totem
0-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
trepl
scm-1 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
xlua
1.0-0 (installed) - /home/gonzalo/torch/install/lib/luarocks/rocks
I have changed the permissions of /usr/share/lua and /home/gonzalo/torch where the luarocks modules are installed to allow access to them.
How can I solve this?
I see that nn.ArgMax is not yet compatible with CudaTensor?
If this is so, I'd be willing to help (I really need this layer on CUDA right now). Let me know what I can do; I'm not a CUDA developer, but I'm fairly experienced with Lua/Torch.
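As a stopgap, one hedged workaround sketch is to hop to the CPU just for the argmax (nn.Copy is from nn; the dimension argument below is hypothetical):
local argmax = nn.Sequential()
argmax:add(nn.Copy('torch.CudaTensor', 'torch.FloatTensor'))  -- move to host memory
argmax:add(nn.ArgMax(2))                                      -- argmax over dim 2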