
rnn's Introduction

THIS REPOSITORY IS DEPRECATED.

Please use https://github.com/torch/torch7

For install scripts, please look at: https://github.com/torch/ezinstall

Torch7 Library.

Torch7 provides a Matlab-like environment for state-of-the-art machine learning algorithms. It is easy to use and provides a very efficient implementation, thanks to an easy and fast scripting language (Lua) and an underlying C implementation.

In order to install Torch7 you can follow these simple instructions, but we suggest reading the detailed manual at http://www.torch.ch/manual/install/index

Requirements

  • C/C++ compiler
  • cmake
  • gnuplot
  • git

Optional

  • Readline
  • Qt (Qt 4.8 is now supported)
  • CBLAS
  • LAPACK

Installation

$ git clone git://github.com/andresy/torch.git
$ cd torch
$ mkdir build
$ cd build

$ cmake .. 
OR
$ cmake .. -DCMAKE_INSTALL_PREFIX=/my/install/path

$ make install

Running

$ torch
Type help() for more info
Torch 7.0  Copyright (C) 2001-2011 Idiap, NEC Labs, NYU
Lua 5.1  Copyright (C) 1994-2008 Lua.org, PUC-Rio
t7> 

3rd Party Packages

Torch7 comes with a package manager based on LuaRocks, which makes it easy to install new packages:

$ torch-rocks install image
$ torch-rocks list
$ torch-rocks search --all

Documentation

The full documentation is installed in /my/install/path/share/torch/html/index.html

Also, http://www.torch.ch/manual/index points to the latest documentation of Torch7.

rnn's People

Contributors

amartya18x, anoidgit, bartvm, blackyang, boknilev, cheng6076, ethanabrooks, guillitte, hughperkins, iamalbert, ivendrov, jnhwkim, joostvdoorn, manojelement, mirandaconrado, nhynes, nicholas-leonard, pavanky, robotsorcerer, rohanpadhye, sahiliitm, sennendoko, soumith, ssampang, suryabhupa, temerick, vgire, vzhong, yenchenlin, ywelement


rnn's Issues

AbstractRecurrent simplifications

  • deprecate rho
  • deprecate self.recurrentModule
  • _updateOutput
  • In getHiddenState(step, input), refactor the inline expression step = step == nil and (self.step - 1) or (step < 0) and (self.step - step - 1) or step into a named function
  • migrate userPrev* to setHiddenState (code + doc)
  • cleanup SeqLSTM
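The getHiddenState bullet above could be sketched as a small helper on AbstractRecurrent (a sketch only; the method name normalizeStep is an assumption, not the actual API):

```lua
-- Hypothetical helper factoring the inline step-normalization expression
-- out of getHiddenState(step, input). The semantics are preserved verbatim
-- from the original and/or ternary chain.
function AbstractRecurrent:normalizeStep(step)
   if step == nil then
      return self.step - 1             -- nil: default to the previous step
   elseif step < 0 then
      return self.step - step - 1      -- negative: index relative to self.step
   end
   return step                         -- non-negative: absolute step index
end
```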

Deprecated TrimZero

@jnhwkim Heads up, I am going to deprecate TrimZero in torch/rnn. The main reason is that I want to simplify the zero-masking interface. However, I want to bring back the fundamental idea behind TrimZero at a later time. I am not sure exactly how to implement this, but the basic idea is to copy cudnn's way of handling variable-length sequences. They do this by ordering the sequences from longest to shortest. The forward/backward of each step is then done with a narrow instead of a torch.index; the narrowed range gets shorter and shorter as time increases. Nevertheless, TrimZero will continue to be supported via Element-Research/rnn. I hope this is okay with you.
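The cudnn-style scheme described above can be roughly sketched as follows (a sketch under stated assumptions: input is a seqlen x batchsize x inputsize tensor whose sequences are already sorted longest-first, batchSizes[t] counts the sequences still active at step t, and forwardSorted is a hypothetical name):

```lua
-- Per-step narrowing over a batch sorted by decreasing sequence length.
-- The narrowed batch shrinks as t increases, so finished sequences simply
-- drop out of the computation without any zero-masking.
local function forwardSorted(stepmodule, input, batchSizes)
   local outputs = {}
   for t = 1, input:size(1) do
      -- keep only the sequences that are still active at this step
      local xt = input[t]:narrow(1, 1, batchSizes[t])
      outputs[t] = stepmodule:forward(xt)
   end
   return outputs
end
```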

attempt to call field 'Recurrent' (a nil value)

I have updated rnn and all the other dependencies to the latest version, but it still gives this error:

./buildModel.lua:65: attempt to call field 'Recurrent' (a nil value)
stack traceback:
./buildModel.lua:65: in function 'buildModel_MeanPool_RNN'
videoReid.lua:100: in main chunk
[C]: in function 'dofile'
...hiyx/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

nn.Recursor expecting nn.Module instance at arg 1 when using newer rnn package

So, I am trying to use the newer RecGRU layer to replace the older GRU, but it does not work, and the docs say nothing that could give you a hint. So I took a look at the recurrent-language-model.lua implementation and made sure to replicate it. There are basically no changes on the surface, but something has obviously changed under the hood.

Here is the error:

Building vocab from:	
data/train.txt	
Vocab size:	8488	

Preprocessing training data	
data/train.txt	
data/valid.txt	
Training set size:	197835	
Validation set size:	26605	
	
/home/pavel/torch/install/bin/luajit: ...el/torch/install/share/lua/5.1/rnn/AbstractRecurrent.lua:9: nn.Recursor expecting nn.Module instance at arg 1
stack traceback:
	[C]: in function 'assert'
	...el/torch/install/share/lua/5.1/rnn/AbstractRecurrent.lua:9: in function '__init'
	/home/pavel/torch/install/share/lua/5.1/rnn/Recursor.lua:10: in function '__init'
	/home/pavel/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/pavel/torch/install/share/lua/5.1/torch/init.lua:87>
	[C]: in function 'Recursor'
	/home/pavel/torch/install/share/lua/5.1/rnn/Sequencer.lua:22: in function '__init'
	/home/pavel/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/pavel/torch/install/share/lua/5.1/torch/init.lua:87>
	[C]: in function 'Sequencer'
	./networks/network_sum.lua:60: in function 'build'
	main.lua:132: in main chunk
	[C]: in function 'dofile'
	...avel/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00405c80

The model code is:

function network.build(ivocab,opt)
  local net = nn.Sequential() -- main network container

  -------------------- input layer --------------------
  local par = nn.ParallelTable() -- container for word and feat vectors

  local seq1 = nn.Sequential()
  seq1:add(nn.SplitTable(1)):add(nn.MapTable():add(nn.Linear(#ivocab,opt.inputsize))) -- apply linear to word vector batch

  local seq2 = nn.Sequential()
  seq2:add(nn.SplitTable(1)):add(nn.MapTable():add(nn.Linear(opt.feats,opt.inputsize))) -- apply linear to feat vector batch

  par:add(seq1):add(seq2)

  net:add(par):add(nn.ZipTable())
  net:add(nn.MapTable():add(nn.CAddTable())) -- sum up word and feature vectors into one

  local stepmodule = nn.Sequential()
  local rnn = nn.RecGRU(opt.inputsize,opt.hiddensize[1])
  stepmodule:add(rnn)

  if opt.dropout > 0 then
    stepmodule:add(nn.Dropout(opt.dropout))
  end
  stepmodule:add(nn.Linear(opt.hiddensize[1],1))
  stepmodule:add(nn.Sigmoid())

  -- adding recurrency
  net:add(nn.Sequencer(stepmodule))  -- <-- this line throws the error

  -- remember previous state between batches
  net:remember((opt.rnn and "eval") or "both")

  return net
end

gradOutput Ignored

function ReinforceCategorical:updateGradInput(input, gradOutput)

Hi,
I am trying to understand the logic in the reinforce implementation. I am new to this, so please bear with my basic questions.
Why is gradOutput being ignored? If we multiply gradOutput by the rewards, would that be wrong?
Also, what is happening here: self.gradInput:copy(self.output)? The output is a probability distribution, right?

Thanks,
Parag
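For context, ReinforceCategorical implements the REINFORCE estimator for a categorical distribution: the gradient of the log-likelihood is 1/p[i] at the sampled index and 0 elsewhere, and the learning signal is the reward rather than the downstream gradient, which is why gradOutput is ignored. A paraphrased sketch (from memory, not the exact dpnn source; rewardAs is assumed to come from the Reinforce base class):

```lua
-- self.output holds the sampled one-hot vector(s); input holds the
-- probability vector p. Gradient of -reward * ln p[x] w.r.t. p:
--   -reward / p[i]  if i == x (the sampled index), else 0
function ReinforceCategorical:updateGradInput(input, gradOutput)
   -- gradOutput is ignored: the signal comes from the reward instead
   self.gradInput:resizeAs(input)
   self.gradInput:copy(self.output)           -- 1 at the sampled index
   self.gradInput:cdiv(input)                 -- becomes 1/p[x] at the sample
   self.gradInput:cmul(self:rewardAs(input))  -- scale by the broadcast reward
   self.gradInput:mul(-1)                     -- negate for gradient descent
   return self.gradInput
end
```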

Remove un-necessary dpnn files

Move to nn:

  • Constant
  • WhiteNoise
  • OneHot
  • Padding (rnn)
  • PrintSize
  • ZeroGrad (rnn)
  • ZipTable*
  • Collapse
  • Convert
  • Clip
  • CAddTensorTable
  • Kmeans
  • ModuleCriterion

Move to dpnn:

  • FireModule
  • Dictionary
  • Inception
  • LinearNoBias
  • PCAColorTransform
  • SimpleColorTransform
  • SpatialBinaryConv
  • SpatialFeatNormalization
  • SpatialRegionDropout
  • Serial
  • SpatialBinaryLR
  • BLR
  • SpatialUniformCrop

Test:
require 'rnn'; require 'dpnn'

"memcpy" was not declared in this scope

When I try to install rnn with LuaRocks using the command luarocks install rnn, I get the following error.

/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;

All of the output from the command is below:

Installing https://raw.githubusercontent.com/torch/rocks/master/rnn-scm-1.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/rnn-scm-1.rockspec... switching to 'build' mode
Cloning into 'rnn'...
remote: Counting objects: 183, done.
remote: Compressing objects: 100% (155/155), done.
remote: Total 183 (delta 28), reused 98 (delta 19), pack-reused 0
Receiving objects: 100% (183/183), 903.53 KiB | 0 bytes/s, done.
Resolving deltas: 100% (28/28), done.
Checking connectivity... done.
cmake -E make_directory build;
cd build;
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/home/jacomus/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/jacomus/torch/install/lib/luarocks/rocks/rnn/scm-1" -DCMAKE_C_FLAGS=-fPIC -DCMAKE_CXX_FLAGS=-fPIC;
make

-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Torch7 in /home/jacomus/torch/install
-- Found CUDA: /usr (found suitable version "7.5", minimum required is "6.5")
-- Automatic GPU detection failed. Building for common architectures.
-- Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;5.2+PTX
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/luarocks_rnn-scm-1-7762/rnn/build
[ 14%] Building NVCC (Device) object CMakeFiles/rnn.dir/src/rnn_generated_rnn.cu.o
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
^
CMake Error at rnn_generated_rnn.cu.o.cmake:267 (message):
Error generating file
/tmp/luarocks_rnn-scm-1-7762/rnn/build/CMakeFiles/rnn.dir/src/./rnn_generated_rnn.cu.o

CMakeFiles/rnn.dir/build.make:63: recipe for target 'CMakeFiles/rnn.dir/src/rnn_generated_rnn.cu.o' failed
make[2]: *** [CMakeFiles/rnn.dir/src/rnn_generated_rnn.cu.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/rnn.dir/all' failed
make[1]: *** [CMakeFiles/rnn.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
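This memcpy error is a known incompatibility between CUDA 7.5's nvcc and newer glibc (e.g. on Ubuntu 16.04). A common workaround, assuming the rnn build appends the TORCH_NVCC_FLAGS environment variable to the nvcc flags the way cutorch's CMake build does, is to define _FORCE_INLINES before reinstalling:

```shell
# Workaround sketch: -D_FORCE_INLINES stops glibc's string.h from emitting
# the inline __mempcpy_inline wrapper that nvcc 7.5 cannot compile.
# (Assumption: the package's CMake build reads $TORCH_NVCC_FLAGS, as cutorch does.)
export TORCH_NVCC_FLAGS="-D_FORCE_INLINES"
# then re-run: luarocks install rnn
```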

Fix SeqBRNN

  • support zero-masking version 2
  • self.modules[1]
  • support GRU

seqlstm.batchfirst = true doesn't actually do anything

This line is in the SeqLSTM docs:

Note that if you prefer to transpose the first two dimension (that is, batchsize x seqlen
instead of the default seqlen x batchsize) you can set seqlstm.batchfirst = true
following initialization.

However, there is no such value anywhere in the code. The SeqLSTM.lua code always assumes that seqlen comes first and batchsize second.
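Until the documented flag exists, one workaround is to transpose around the module yourself (a sketch; inputsize and hiddensize are placeholders):

```lua
-- Wrap SeqLSTM with transposes so callers can pass batchsize x seqlen input,
-- emulating the documented-but-missing batchfirst flag.
local seqlstm = nn.SeqLSTM(inputsize, hiddensize)
local batchfirst = nn.Sequential()
   :add(nn.Transpose({1, 2}))  -- batchsize x seqlen -> seqlen x batchsize
   :add(seqlstm)
   :add(nn.Transpose({1, 2}))  -- back to batch-first output
```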

Cannot run nn.Recurrent(8,nn.LookupTable(10000,10),nn.Linear(10,10),nn.Sigmoid(),5) recently

Before I reinstalled torch and the rnn module, everything was OK. But recently I reinstalled torch and the rnn module for some reason, and now the following error is raised when I run this code:
require 'rnn'
r=nn.Recurrent(8,nn.LookupTable(10000,10),nn.Linear(10,10),nn.Sigmoid(),5)

error message:
[string "r = nn.Recurrent(8,nn.LookupTable(10000,10),n..."]:1: attempt to call field 'Recurrent' (a nil value)
stack traceback:
[string "r = nn.Recurrent(8,nn.LookupTable(10000,10),n..."]:1: in main chunk
[C]: in function 'xpcall'
/home/xxx/torch/install/share/lua/5.1/trepl/init.lua:679: in function 'repl'
...ngli/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:204: in main chunk
[C]: at 0x00405d50

Can anybody tell me how to fix it?

torch rnn multiple files for training

I am trying to get torch-rnn to generate multiple different levels for a 2D game I created. The levels are represented in text format with 2 different characters, '#' and '+': '#' is ground that the character can jump and walk on; '+' is nothing. I can create infinite levels with a random function, and each level can be infinitely long. Right now the levels are 20 characters in height and 300 characters in length. I am aiming for the rnn to generate levels of those sizes.

My question is: how should I feed my training text files into torch-rnn? Can torch even handle multiple input files for preprocessing/training somehow? Should I combine them into one big file (with an empty line as a separator)? Should I create one single very long level?

I am very thankful for any kind of advice. I am very new to machine learning and this is my first project with it.
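For the "one big file" route, assuming the preprocessor expects a single input file, one simple option is to concatenate the generated levels with a blank-line separator so the model can learn where one level ends and the next begins. A sketch (the levels/ directory and file contents are illustrative):

```shell
# Write two tiny illustrative levels, then join every level file into one
# training corpus, inserting a blank line between levels as a separator.
mkdir -p levels
printf '#++#\n' > levels/level1.txt
printf '++##\n' > levels/level2.txt
for f in levels/*.txt; do
  cat "$f"
  echo    # blank line marks the end of a level
done > all_levels.txt
```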

Kind regards

Current torch/rnn breaks older Element-Research based code

Seriously guys.

Looks like the current torch/rnn introduces changes compared to Element-Research/rnn and is not compatible with older code. Once Torch users update, reinstall, or install the framework and try to run code that uses rnn, it will crash, and they will flood the issue tracker with miscellaneous error messages, depending on their model implementations, that are all related to the same underlying issue. What's even worse, rebuilding a Docker image now pulls all the new stuff and crashes in production because the older models are not compatible.

I am using a modified GRU.lua layer in 6 of my models, and all of them stopped working after the update. And there is still no changelog and no tips on how to fix issues like GRU_mod.lua expecting nn.Module instance at arg 1. What is the status of stepmodule, which is now used in the recurrent language model (recurrent-language-model.lua) as an nn.Sequential container and then inside something called RecGRU.lua, which was a simple GRU layer before? Why did you decide to replace the simple LSTM and GRU layers with recurrent versions, deprecating the former in the process? Sigh.

The latest update is a good way to force people to start using PyTorch.

unknown Torch class <nn.LinearNoBias>

I've updated the rnn package with luarocks install rnn.
When I test my pre-trained model, trained with Element-Research/rnn, I get unknown Torch class <nn.LinearNoBias>.
Please give me some suggestions. Thank you very much.

AbstractRecurrent assert checks incorrect nn.Module class

I have recreated exactly the same recurrent language model code, and it still crashes on the last line with AbstractRecurrent.lua:9: nn.Recursor expecting nn.Module instance at arg 1:

  local net = nn.Sequential() -- main network container

  -------------------- input layer --------------------
  local lookup = nn.LookupTable(#trainset.ivocab, opt.inputsize)
  net:add(lookup)

  if opt.dropout > 0 then
        net:add(nn.Dropout(opt.dropout))
  end

  -------------------- Recurrent layer --------------------
  local stepmodule = nn.Sequential()

  local rnn = nn.RecGRU(opt.inputsize, opt.hiddensize[1])
  stepmodule:add(rnn)

  -------------------- Output layer --------------------

  if opt.dropout > 0 then
    stepmodule:add(nn.Dropout(opt.dropout))
  end

  stepmodule:add(nn.Linear(opt.hiddensize[1],1))
  stepmodule:add(nn.Sigmoid())

  -- adding recurrency
  net:add(nn.Sequencer(stepmodule))  -- <-- crash!
