

License: BSD 3-Clause "New" or "Revised" License

rnn crf deep-learning machine-learning c-sharp dotnet sequence-labeling rnn-model recurrent-neural-networks nlp

rnnsharp's Introduction

Donate a beverage to help me to keep Seq2SeqSharp up to date :) Support via PayPal

[Note: RNNSharp is in maintenance mode and won't get new features anymore. For a newer neural network framework, please try Seq2SeqSharp (https://github.com/zhongkaifu/Seq2SeqSharp)]

RNNSharp

RNNSharp is a toolkit of deep recurrent neural networks, which are widely used for many different kinds of tasks, such as sequence labeling and sequence-to-sequence modeling. It's written in C# and based on .NET Framework 4.6 or above.

This page introduces what RNNSharp is, how it works and how to use it. To get the demo package, visit the release page.

Overview

RNNSharp supports many different types of deep recurrent neural network (aka DeepRNN) structures.

For network structure, it supports forward RNN and bi-directional RNN. A forward RNN only considers historical information before the current token, whereas a bi-directional RNN considers both historical information and future information.

For hidden layer structure, it supports LSTM and Dropout layers. Compared to a simple BPTT-trained recurrent layer, LSTM is much better at keeping long-term memory, since it has gates to control the information flow. Dropout adds noise during training in order to avoid overfitting.
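
As a rough illustration of the dropout idea (this is not RNNSharp's implementation; the names below are made up for the example), a dropout layer randomly zeroes a fraction of its activations during training:

using System;

class DropoutSketch
{
    static readonly Random Rnd = new Random();

    // Randomly zero out activations during training and scale the survivors
    // (inverted dropout), so the expected activation stays unchanged at test time.
    static double[] ApplyDropout(double[] activations, double dropRatio, bool isTraining)
    {
        if (!isTraining) return activations;
        var output = new double[activations.Length];
        for (int i = 0; i < activations.Length; i++)
            output[i] = Rnd.NextDouble() < dropRatio ? 0.0 : activations[i] / (1.0 - dropRatio);
        return output;
    }

    static void Main()
    {
        var hidden = new[] { 0.3, -1.2, 0.8, 0.05 };
        Console.WriteLine(string.Join(", ", ApplyDropout(hidden, 0.5, isTraining: true)));
    }
}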

In terms of output layer structure, simple, softmax, sampled softmax and recurrent CRF [1] layers are supported. Softmax is the traditional type widely used in many kinds of tasks. Sampled softmax is intended for tasks with a large output vocabulary, such as sequence generation (sequence-to-sequence models). The simple type is usually used together with the recurrent CRF: based on the simple outputs and the tag transitions, the recurrent CRF computes a CRF output for the entire sequence. For offline sequence labeling tasks, such as word segmentation, named entity recognition and so on, the recurrent CRF performs better than softmax, sampled softmax and a linear CRF.
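
To make the relationship between the "Simple" and "Softmax" output types concrete, here is a minimal self-contained sketch (illustrative only, not RNNSharp's code): softmax simply turns the raw scores of the simple output into a probability distribution.

using System;
using System.Linq;

class SoftmaxSketch
{
    // Convert raw ("Simple") output scores into a probability distribution.
    static double[] Softmax(double[] rawScores)
    {
        double max = rawScores.Max();                           // subtract the max for numerical stability
        double[] exps = rawScores.Select(v => Math.Exp(v - max)).ToArray();
        double sum = exps.Sum();
        return exps.Select(v => v / sum).ToArray();
    }

    static void Main()
    {
        double[] simple = { 2.0, 0.5, -1.0 };                   // raw scores from the output layer
        Console.WriteLine(string.Join(", ", Softmax(simple).Select(p => p.ToString("F3"))));
    }
}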

Here is an example of a deep bi-directional RNN-CRF network. It contains 3 hidden layers, 1 native RNN output layer and 1 CRF output layer.

Here is the inner structure of one bi-directional hidden layer.

Here is the neural network for a sequence-to-sequence task. The "TokenN" nodes come from the source sequence, and "ELayerX-Y" are the auto-encoder's hidden layers. The auto-encoder is defined in the feature configuration file. <s> is always the beginning of the target sentence, and "DLayerX-Y" are the decoder's hidden layers. The decoder generates one token at a time until </s> is generated.

Supported Feature Types

RNNSharp supports many different feature types; the following paragraphs describe how these features work.

Template Features

Template features are generated from templates. Given templates and a corpus, these features can be generated automatically. In RNNSharp, template features are sparse features: if a feature exists for the current token, its value is 1 (or the feature frequency), otherwise it is 0. This is similar to CRFSharp features. In RNNSharp, TFeatureBin.exe is the console tool that generates this type of feature.

In a template file, each line describes one template, which consists of a prefix, an id and a rule-string. The prefix indicates the template type. So far, RNNSharp supports U-type features, so the prefix is always "U". The id is used to distinguish different templates, and the rule-string is the feature body.

# Unigram
U01:%x[-1,0]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[-1,0]/%x[0,0]
U05:%x[0,0]/%x[1,0]
U06:%x[-1,0]/%x[1,0]
U07:%x[-1,1]
U08:%x[0,1]
U09:%x[1,1]
U10:%x[-1,1]/%x[0,1]
U11:%x[0,1]/%x[1,1]
U12:%x[-1,1]/%x[1,1]
U13:C%x[-1,0]/%x[-1,1]
U14:C%x[0,0]/%x[0,1]
U15:C%x[1,0]/%x[1,1]

The rule-string has two parts: constant strings and variables. The simplest variable format is "%x[row,col]". Row specifies the row offset between the current token and the token used to generate the feature, and col specifies the absolute column position in the corpus. Variable combinations are also supported, for example "%x[row1,col1]/%x[row2,col2]". When the feature set is built, each variable is expanded into a specific string. Here is an example from training data for a named entity task.

Word Pos Tag
! PUN S
Tokyo NNP S_LOCATION
and CC S
New NNP B_LOCATION
York NNP E_LOCATION
are VBP S
major JJ S
financial JJ S
centers NNS S
. PUN S
---empty line---
! PUN S
p FW S
' PUN S
y NN S
h FW S
44 CD S
University NNP B_ORGANIZATION
of IN M_ORGANIZATION
Texas NNP M_ORGANIZATION
Austin NNP E_ORGANIZATION

According to the above templates, assuming the current token is "York NNP E_LOCATION", the following features are generated:

U01:New
U02:York
U03:are
U04:New/York
U05:York/are
U06:New/are
U07:NNP
U08:NNP
U09:VBP
U10:NNP/NNP
U11:NNP/VBP
U12:NNP/VBP
U13:CNew/NNP
U14:CYork/NNP
U15:Care/VBP

Although U07 and U08 (and likewise U11 and U12) expand to the same strings, we can still distinguish them by their id strings.
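
For readers who want to see the expansion mechanics in code, here is a minimal sketch of how a rule-string such as %x[-1,0]/%x[0,0] could be expanded for a focus token (an illustration only; the actual generator in RNNSharp is TFeatureBin.exe, and the "<PAD>" placeholder for out-of-range rows is made up for this example):

using System;
using System.Text.RegularExpressions;

class TemplateExpansionSketch
{
    // Expand one template rule-string for the token at position 'focus'.
    // corpus[row][col] is the training matrix; rows outside the sentence map to "<PAD>".
    static string Expand(string id, string rule, string[][] corpus, int focus)
    {
        string body = Regex.Replace(rule, @"%x\[(-?\d+),(\d+)\]", m =>
        {
            int row = focus + int.Parse(m.Groups[1].Value);   // relative row offset
            int col = int.Parse(m.Groups[2].Value);           // absolute column index
            return (row >= 0 && row < corpus.Length) ? corpus[row][col] : "<PAD>";
        });
        return id + ":" + body;
    }

    static void Main()
    {
        string[][] corpus =
        {
            new[] { "New",  "NNP" },
            new[] { "York", "NNP" },
            new[] { "are",  "VBP" },
        };
        // Focusing on "York", U04:%x[-1,0]/%x[0,0] expands to "U04:New/York".
        Console.WriteLine(Expand("U04", "%x[-1,0]/%x[0,0]", corpus, 1));
    }
}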

Context Template Features

Context template features are template features combined with context. For example, if the context setting is "-1,0,1", the feature for the current token combines the features of the previous token, the current token and the next token. For instance, if the sentence is "how are you", the generated feature set will be {Feature("how"), Feature("are"), Feature("you")}.

Pretrained Features

RNNSharp supports two types of pretrained features: embedding features and auto-encoder features. Both represent a given token as a fixed-length vector. These are dense features in RNNSharp.

Embedding features are trained from an unlabeled corpus by the Txt2Vec project, and RNNSharp uses them as static features for each given token. Auto-encoder features, on the other hand, are trained by RNNSharp itself and can then be used as dense features for other trainings. Note that the token granularity of the pretrained features should be consistent with the training corpus of the main training; otherwise, some tokens will fail to match a pretrained feature.

Like template features, embedding features also support context: all features in the given context window are combined into a single embedding feature. Auto-encoder features do not support context yet.
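
Conceptually, combining the embedding context means concatenating the embedding vectors of the tokens at the configured offsets into one dense vector. Here is a small sketch of that idea (the helper names are hypothetical; this is not RNNSharp's API):

using System;
using System.Collections.Generic;

class EmbeddingContextSketch
{
    // Concatenate the embedding vectors of the tokens at the given context offsets
    // (e.g. -1, 0, 1) into one dense feature vector for the focus token.
    static float[] BuildContextFeature(string[] tokens, int focus, int[] context,
                                       Dictionary<string, float[]> embeddings, int dim)
    {
        var feature = new float[context.Length * dim];
        for (int i = 0; i < context.Length; i++)
        {
            int pos = focus + context[i];
            float[] vec = (pos >= 0 && pos < tokens.Length && embeddings.TryGetValue(tokens[pos], out var e))
                ? e
                : new float[dim];                              // unknown or out-of-range tokens get a zero vector
            Array.Copy(vec, 0, feature, i * dim, dim);
        }
        return feature;
    }

    static void Main()
    {
        var embeddings = new Dictionary<string, float[]>
        {
            ["how"] = new float[] { 0.1f, 0.2f },
            ["are"] = new float[] { 0.3f, 0.4f },
            ["you"] = new float[] { 0.5f, 0.6f },
        };
        var feature = BuildContextFeature(new[] { "how", "are", "you" }, 1, new[] { -1, 0, 1 }, embeddings, 2);
        Console.WriteLine(string.Join(", ", feature));         // 0.1, 0.2, 0.3, 0.4, 0.5, 0.6
    }
}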

Run Time Features

Unlike other features, which are generated offline, this feature is generated at runtime. It uses the output of previous tokens as a runtime feature for the current token. This feature is only available for forward RNNs; bi-directional RNNs do not support it.

Source Sequence Encoding Feature

This feature is only for sequence-to-sequence tasks. In a sequence-to-sequence task, RNNSharp encodes the given source sequence into a fixed-length vector and then passes it as a dense feature to the decoder that generates the target sequence.

Configuration File

The configuration file describes the model structure and features. In the console tool, use the -cfgfile parameter to specify this file. Here is an example for a sequence labeling task:

#Working directory. It is the parent directory of the relative paths below.
CURRENT_DIRECTORY = .

#Network type. Four types are supported:
#For sequence labeling tasks, we can use: Forward, BiDirectional, BiDirectionalAverage
#For sequence-to-sequence tasks, we can use: ForwardSeq2Seq
#The BiDirectional type concatenates the outputs of the forward and backward layers as the final output
#The BiDirectionalAverage type averages the outputs of the forward and backward layers as the final output
NETWORK_TYPE = BiDirectional

#Model file path
MODEL_FILEPATH = Data\Models\ParseORG_CHS\model.bin

#Hidden layer settings. LSTM and Dropout layers are supported. Here are examples of these layer types.
#Dropout: Dropout:0.5 -- the dropout ratio is 0.5 and the layer size is the same as the previous layer.
#If the model has more than one hidden layer, the settings for each layer are separated by commas. For example:
#"LSTM:300, LSTM:200" means the model has two LSTM layers; the first layer size is 300, and the second layer size is 200.
HIDDEN_LAYER = LSTM:200

#Output layer settings. Simple, Softmax and sampled softmax are supported. Here is an example of sampled softmax:
#"SampledSoftmax:20" means the output layer is a sampled softmax layer and its negative sample size is 20.
#"Simple" means the output is the raw result from the output layer. "Softmax" means the result is the "Simple" result run through softmax.
OUTPUT_LAYER = Simple

#CRF layer settings
#If this option is true, output layer type must be "Simple" type.
CRF_LAYER = True

#The file name for template feature set
TFEATURE_FILENAME = Data\Models\ParseORG_CHS\tfeatures
#The context range for the template feature set. In the example below, the context is the current token, the next token and the token after next
TFEATURE_CONTEXT = 0,1,2
#The feature weight type. Binary and Freq are supported
TFEATURE_WEIGHT_TYPE = Binary

#Pretrained features type: 'Embedding' and 'Autoencoder' are supported.
#For 'Embedding', the pretrained model is trained by Txt2Vec and is essentially a word embedding model.
#For 'Autoencoder', the pretrained model is trained by RNNSharp itself. For sequence-to-sequence tasks, 'Autoencoder' is required, since the source sequence needs to be encoded by this model first, and then the target sequence is generated by the decoder.
PRETRAIN_TYPE = Embedding

#The following settings are for pretrained model in 'Embedding' type.
#The embedding model generated by Txt2Vec (https://github.com/zhongkaifu/Txt2Vec). If it is raw text format, we should use WORDEMBEDDING_RAW_FILENAME instead of WORDEMBEDDING_FILENAME as keyword
WORDEMBEDDING_FILENAME = Data\WordEmbedding\wordvec_chs.bin
#The context range of the word embedding. In the example below, the context is the previous token, the current token and the next token
#If more than one token is combined, this feature will use a large amount of memory.
WORDEMBEDDING_CONTEXT = -1,0,1
#The column index the word embedding feature is applied to
WORDEMBEDDING_COLUMN = 0

#The following setting is for pretrained model in 'Autoencoder' type.
#The feature configuration file for pretrained model.
AUTOENCODER_CONFIG = D:\RNNSharpDemoPackage\config_autoencoder.txt

#The following setting is the configuration file for the source sequence encoder, which is only used for sequence-to-sequence tasks (MODEL_TYPE = SEQ2SEQ).
#In this example, MODEL_TYPE is SEQLABEL, so we comment it out.
#SEQ2SEQ_AUTOENCODER_CONFIG = D:\RNNSharpDemoPackage\config_seq2seq_autoencoder.txt

#The context range of the runtime feature. In the example below, RNNSharp would use the output of the previous token as a runtime feature for the current token
#Note that bi-directional models do not support runtime features, so we comment it out.
#RTFEATURE_CONTEXT = -1

Training file format

In a training file, each sequence is represented as a feature matrix and ends with an empty line. In the matrix, each row corresponds to one token of the sequence and its features, and each column corresponds to one feature type. Across the entire training corpus, the number of columns must be fixed.

Sequence labeling tasks and sequence-to-sequence tasks have different training corpus formats.

Sequence labeling corpus

For sequence labeling tasks, the first N-1 columns are input features for training, and the Nth column (the last column) is the answer for the current token. Here is an example for a named entity recognition task (the full training file is available in the release section):

Word Pos Tag
! PUN S
Tokyo NNP S_LOCATION
and CC S
New NNP B_LOCATION
York NNP E_LOCATION
are VBP S
major JJ S
financial JJ S
centers NNS S
. PUN S
---empty line---
! PUN S
p FW S
' PUN S
y NN S
h FW S
44 CD S
University NNP B_ORGANIZATION
of IN M_ORGANIZATION
Texas NNP M_ORGANIZATION
Austin NNP E_ORGANIZATION

It has two records, separated by an empty line. Each token has three columns. The first two columns are the input feature set: the word and its pos-tag. The third column is the ideal output of the model: the named entity type for the token.

The named entity type has the form "Position_NamedEntityType". "Position" is the word's position within the named entity, and "NamedEntityType" is the type of the entity. If "NamedEntityType" is empty, the token is a common word, not part of a named entity. In this example, "Position" has four values:
S : the single word of the named entity
B : the first word of the named entity
M : the word is in the middle of the named entity
E : the last word of the named entity

"NamedEntityType" has two values:
ORGANIZATION : the name of one organization
LOCATION : the name of one location
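
Since the corpus is just token rows separated by empty lines, reading it programmatically is straightforward. Here is a small illustrative reader (not part of RNNSharp) that splits such a file into sequences:

using System;
using System.Collections.Generic;
using System.IO;

class CorpusReaderSketch
{
    // Read a whitespace-separated corpus file into sequences: each sequence is a
    // list of token rows (one string[] per token), and sequences are separated by empty lines.
    static List<List<string[]>> ReadSequences(string path)
    {
        var sequences = new List<List<string[]>>();
        var current = new List<string[]>();
        foreach (string line in File.ReadLines(path))
        {
            if (string.IsNullOrWhiteSpace(line))
            {
                if (current.Count > 0) { sequences.Add(current); current = new List<string[]>(); }
            }
            else
            {
                current.Add(line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries));
            }
        }
        if (current.Count > 0) sequences.Add(current);
        return sequences;
    }

    static void Main()
    {
        foreach (var sequence in ReadSequences("train.txt"))
            Console.WriteLine($"Sequence with {sequence.Count} tokens and {sequence[0].Length} columns");
    }
}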

Sequence-to-sequence corpus

For sequence-to-sequence tasks, the training corpus format is different. Each sequence pair has two sections: the source sequence and the target sequence. Here is an example:

Word
What
is
your
name
?
---empty line---
I
am
Zhongkai
Fu

In above example, "What is your name ?" is the source sentence, and "I am Zhongkai Fu" is the target sentence generated by RNNSharp seq-to-seq model. In source sentence, beside word features, other feautes can also be applied for training, such as postag feature in sequence labeling task in above.

Test file format

The test file has a similar format to the training file. For sequence labeling tasks, the only difference is the last column: in the test file, all columns are features for model decoding. For sequence-to-sequence tasks, the test file only contains the source sequence; the target sentence will be generated by the model.

Tag (Output Vocabulary) File

For sequence labeling tasks, this file contains the output tag set. For sequence-to-sequence tasks, it is the output vocabulary file.

Console Tool

RNNSharpConsole

RNNSharpConsole.exe is a console tool for recurrent neural network encoding and decoding. The tool has two running modes: "train" mode for model training and "test" mode for predicting output tags on a test corpus with a trained model.

Encode Model

In this mode, the console tool encodes an RNN model given a feature set and training/validation corpora. The usage is as follows:

RNNSharpConsole.exe -mode train
Parameters for training an RNN-based model:
 -trainfile : Training corpus file
 -validfile : Validation corpus for training
 -cfgfile : Configuration file
 -tagfile : Output tag or vocabulary file
 -inctrain : Incremental training, starting from the model specified in the configuration file. Default is false
 -alpha : Learning rate. Default is 0.1
 -maxiter : Maximum number of training iterations. 0 means no limit. Default is 20
 -savestep : Save a temporary model after every N training sentences. Default is 0
 -vq : Model vector quantization, 0 is disabled, 1 is enabled. Default is 0
 -minibatch : Update weights after every N sequences. Default is 1

Example: RNNSharpConsole.exe -mode train -trainfile train.txt -validfile valid.txt -cfgfile config.txt -tagfile tags.txt -alpha 0.1 -maxiter 20 -savestep 200K -vq 0 -grad 15.0 -minibatch 128

Decode Model

In this mode, given a test corpus file, RNNSharp predicts output tags for a sequence labeling task or generates a target sequence for a sequence-to-sequence task.

RNNSharpConsole.exe -mode test
Parameters for predicting output tags from a given corpus:
 -testfile : Test corpus file
 -tagfile : Output tag or vocabulary file
 -cfgfile : Configuration file
 -outfile : Result output file

Example: RNNSharpConsole.exe -mode test -testfile test.txt -tagfile tags.txt -cfgfile config.txt -outfile result.txt

TFeatureBin

TFeatureBin.exe is used to generate a template feature set from given template and corpus files. For high-performance access and low memory cost, the indexed feature set is built as a float array in a trie tree by AdvUtils. The tool supports three modes, as follows:

TFeatureBin.exe
The tool generates template features from a corpus and indexes them into a file
 -mode : supports extract, index and build modes
   extract : extract features from the corpus and save them as a raw text feature list
   index : build an indexed feature set from a raw text feature list
   build : extract features from the corpus and generate an indexed feature set

Build mode

This mode extracts features from the given corpus according to the templates and then builds an indexed feature set. The usage of this mode is as follows:

TFeatureBin.exe -mode build
This mode extracts features from the corpus and generates an indexed feature set
 -template : feature template file
 -inputfile : file used to generate features
 -ftrfile : generated indexed feature file
 -minfreq : minimum frequency of a feature

Example: TFeatureBin.exe -mode build -template template.txt -inputfile train.txt -ftrfile tfeature -minfreq 3

In the above example, the feature set is extracted from train.txt and built into the tfeature file as an indexed feature set.

Extract mode

This mode only extracts features from the given corpus and saves them into a raw text file. The difference between build mode and extract mode is that extract mode saves the feature set in raw text format, not indexed binary format. The usage of extract mode is as follows:

TFeatureBin.exe -mode extract
This mode extracts features from the corpus and saves them as a text feature list
 -template : feature template file
 -inputfile : file used to generate features
 -ftrfile : generated feature list file in raw text format
 -minfreq : minimum frequency of a feature

Example: TFeatureBin.exe -mode extract -template template.txt -inputfile train.txt -ftrfile features.txt -minfreq 3

In the above example, according to the templates, the feature set is extracted from train.txt and saved into features.txt in raw text format. The format of the output raw text file is "feature string \t frequency in corpus". Here are a few examples:

U01:仲恺 \t 123
U01:仲文 \t 10
U01:仲秋 \t 12

"U01:仲恺" is the feature string and 123 is the frequency of this feature in the corpus.
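
As a rough illustration of that file format (not TFeatureBin's actual code), the raw feature list can be loaded and filtered by a minimum frequency, which is the same idea as the -minfreq option:

using System;
using System.Collections.Generic;
using System.IO;

class FeatureListSketch
{
    // Load "feature string \t frequency" lines and keep only features
    // whose frequency is at least minFreq.
    static Dictionary<string, int> LoadFeatures(string path, int minFreq)
    {
        var features = new Dictionary<string, int>();
        foreach (string line in File.ReadLines(path))
        {
            string[] parts = line.Split('\t');
            if (parts.Length < 2) continue;
            int freq = int.Parse(parts[1].Trim());
            if (freq >= minFreq) features[parts[0].Trim()] = freq;
        }
        return features;
    }

    static void Main()
    {
        Console.WriteLine(LoadFeatures("features.txt", 3).Count + " features kept");
    }
}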

Index mode

This mode only builds an indexed feature set from given templates and a feature set in raw text format. The usage of this mode is as follows:

TFeatureBin.exe -mode index
This mode builds an indexed feature set from a raw text feature list
 -template : feature template file
 -inputfile : feature list in raw text format
 -ftrfile : indexed feature set

Example: TFeatureBin.exe -mode index -template template.txt -inputfile features.txt -ftrfile features.bin

In the above example, according to the templates, the raw text feature set in features.txt will be indexed into the features.bin file in binary format.

Performance

Here are quality results on a Chinese named entity recognition task. The corpus, configuration and parameter files are available in the RNNSharp demo package in the release section. The results are based on a bi-directional model; the first hidden layer size is 200 and the second hidden layer size is 100. Here are the test results:

Parameter Token Error Sentence Error
1-hidden layer 5.53% 15.46%
1-hidden layer-CRF 5.51% 13.60%
2-hidden layers 5.47% 14.23%
2-hidden layers-CRF 5.40% 12.93%
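
For reference, the two error metrics in the table can be computed from predicted and gold tag sequences as sketched below (an illustration, not the evaluation script used to produce these numbers):

using System;
using System.Collections.Generic;

class ErrorRateSketch
{
    // Token error: fraction of tokens whose predicted tag differs from the gold tag.
    // Sentence error: fraction of sentences containing at least one wrong token.
    static (double tokenError, double sentenceError) Evaluate(List<string[]> gold, List<string[]> predicted)
    {
        int tokens = 0, tokenErrors = 0, sentenceErrors = 0;
        for (int s = 0; s < gold.Count; s++)
        {
            bool anyError = false;
            for (int t = 0; t < gold[s].Length; t++)
            {
                tokens++;
                if (gold[s][t] != predicted[s][t]) { tokenErrors++; anyError = true; }
            }
            if (anyError) sentenceErrors++;
        }
        return ((double)tokenErrors / tokens, (double)sentenceErrors / gold.Count);
    }

    static void Main()
    {
        var gold = new List<string[]> { new[] { "S", "B_LOCATION", "E_LOCATION" } };
        var predicted = new List<string[]> { new[] { "S", "B_LOCATION", "S" } };
        var (tokenError, sentenceError) = Evaluate(gold, predicted);
        Console.WriteLine($"Token error: {tokenError:P2}, Sentence error: {sentenceError:P2}");
    }
}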

Run on Linux/Mac

RNNSharp is a pure C# project, so it can be compiled with .NET Core and Mono and runs on Linux/Mac without modification.

APIs

RNNSharp also provides APIs for developers to use it in their own projects. By downloading the source code package and opening the RNNSharpConsole project, you can see how to use the APIs to encode and decode RNN models. Note that before using the RNNSharp APIs, you should add RNNSharp.dll as a reference to your project.

RNNSharp is referenced by the following published papers:

  1. Project-Team IntuiDoc: Intuitive user interaction for document
  2. A New Pre-training Method for Training Deep Learning Models with Application to Spoken Language Understanding
  3. Long Short-Term Memory
  4. Deep Learning

rnnsharp's People

Contributors

dmit25, oapenner, zhongkaifu


rnnsharp's Issues

Not converging due to learning rate alpha

Hello,
Testing RNNSharp, I was unable to make the model converge, no matter what settings I used.
I changed two places and finally got the model to converge:

if (ppl >= lastPPL && lastAlpha != rnn.LearningRate)
 {
  //Although we reduce alpha value, we still cannot get better result.

I changed it to break only after we have tried to lower alpha 8 times and failed to get an improvement.

I also changed this fragment:
rnn.LearningRate = rnn.LearningRate / 2.0f;

To a slower decrease; in my case, dividing by 1.4 (roughly as sketched below).
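
In pseudo-form, the change I'm describing is roughly the following (a self-contained sketch with made-up names, not the actual RNNSharp training loop):

using System;

class AnnealingSketch
{
    // Stand-in for one training pass; returns validation perplexity.
    static double TrainOneIteration(double learningRate) => new Random().NextDouble() + 1.0;

    static void Main()
    {
        double learningRate = 0.1, lastPPL = double.MaxValue;
        int failedReductions = 0;
        const int maxIter = 20, maxFailedReductions = 8;
        const double decayFactor = 1.4;

        for (int iter = 0; iter < maxIter; iter++)
        {
            double ppl = TrainOneIteration(learningRate);
            if (ppl >= lastPPL)
            {
                learningRate /= decayFactor;                  // gentler decay than halving
                if (++failedReductions >= maxFailedReductions) break;
            }
            else
            {
                failedReductions = 0;
                lastPPL = ppl;
            }
        }
        Console.WriteLine("Final learning rate: " + learningRate);
    }
}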

Do any of those changes make sense ?

Do you think that smarter learning rate annealing could improve RNNSharp, or am I looking in the wrong place?

Thanks !!

(Request) Make the code easier to use

Hello zhongkaifu,

Your framework is great, but there are a few things I think you could change to make your code easier to use. Can you change your code so that it can directly receive a matrix or a jagged array as input, and have train and predict methods like here? (The example)

http://accord-framework.net/docs/html/T_Accord_MachineLearning_VectorMachines_MulticlassSupportVectorMachine.htm

With this change, I think you will reach a big milestone.

Regards

Time Complexity

Hello Zhong,
I would be very thankful if you could kindly share its exact time complexity.

IndexOutOfRangeException when using RNN

Hello again!
Using today's version (4fad1b6), it can't converge using LSTM compared to yesterday's version, and it crashes when using the RNN.

The invoked command line was:

-mode train -trainfile wnnsharp-data-rsysyi.txt -modelfile rnnsharp.model -validfile wnnsharp-data-rsysyi.txt -ftrfile wnnsharp-config-qxnigh.txt -tagfile avaliable-tags.txt -modeltype 0 -layersize 50 -alpha 0.1 -crf 0 -maxiter 30 -savestep 200K -dir 0 -dropout 0

The error is:

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at RNNSharp.RNN.<>c__DisplayClass107_0.<matrixXvectorADD>b__0(Int32 i) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNN.cs:line 660
   at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()
   at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
   at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object )
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
   at System.Threading.Tasks.Parallel.For(Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body)
   at RNNSharp.RNN.matrixXvectorADD(SimpleLayer dest, SimpleLayer srcvec, Matrix`1 srcmatrix, Int32 DestSize, Int32 SrcSize, Int32 type) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNN.cs:line 685
   at RNNSharp.SimpleRNN.computeHiddenLayer(State state, Boolean isTrain) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\SimpleRNN.cs:line 157
   at RNNSharp.RNN.PredictSentence(Sequence pSequence, RunningMode runningMode) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNN.cs:line 267
   at RNNSharp.RNN.TrainNet(DataSet trainingSet, Int32 iter) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNN.cs:line 536
   at RNNSharp.RNNEncoder.Train() in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNNEncoder.cs:line 133
   at RNNSharpConsole.Program.Main(String[] args) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharpConsole\Program.cs:line 284

Null reference exception with ForwardRNN

Hello!
When I try to run RNNSharpConsole with the parameters:

MODEL_TYPE = SEQLABEL

MODEL_DIRECTION = Forward

it throws exception "NullReferenceException"

Line 220 of the code: er += srcvec[j] * srcmatrix[j][i];
in file RNNHelpers.cs

I tried to solve it, but I could not understand why DenseWeights are not initialized.

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at RNNSharp.RNNHelper.<>c__DisplayClass27_0.b__0(Int32 i) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\RNNHelper.cs:line 220
   at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()
   at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
   at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object )
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
   at System.Threading.Tasks.Parallel.For(Int32 fromInclusive, Int32 toExclusive, Action`1 body)
   at RNNSharp.RNNHelper.matrixXvectorADDErr(Single[] dest, Single[] srcvec, Matrix`1 srcmatrix, Int32 DestSize, Int32 SrcSize) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\RNNHelper.cs:line 215
   at RNNSharp.SimpleLayer.ComputeLayerErr(SimpleLayer nextLayer) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\SimpleLayer.cs:line 287
   at RNNSharp.DropoutLayer.ComputeLayerErr(SimpleLayer nextLayer) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\DropoutLayer.cs:line 124
   at RNNSharp.ForwardRNN`1.ProcessSequenceCRF(Sequence pSequence, RunningMode runningMode) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\FowardRNN.cs:line 436
   at RNNSharp.RNN`1.TrainNet(DataSet`1 trainingSet, Int32 iter) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\RNN.cs:line 317
   at RNNSharp.RNNEncoder`1.Train() in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharp\RNNEncoder.cs:line 96
   at RNNSharpConsole.Program.Train() in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharpConsole\Program.cs:line 542
   at RNNSharpConsole.Program.Main(String[] args) in C:\Users\vanin\Documents\Visual Studio 2015\Projects\RNNSharpnew\RNNSharpConsole\Program.cs:line 292

Choose when to stop the learning

Hi, I am really impressed by your readable code. A simple question about the code which may not be worth mentioning: I think it should stop the learning when the PPL on the development set no longer improves, or when the error rate meets our requirement, rather than based on the training set. If the current PPL is larger than the previous one, we should increase the learning rate or make some other decisions.

IndexOutOfRangeException

Hi, Zhong! Thank you for your lib; your work looks really cool. I've successfully run the demo, but when I switch to my own corpus for NER, the following errors occur:


Unhandled Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at RNNSharp.RNN.ForwardBackward(Int32 numStates, Double[][] m_RawOutput)
   at RNNSharp.RNN.learnSentenceForRNNCRF(Sequence pSequence)
   at RNNSharp.RNN.TrainNet()
   at RNNSharp.RNNEncoder.Train()
   at RNNSharpConsole.Program.Main(String[] args)


Is there any limit to the length of sentences? (P.S. the sentences in my task are very long, since each sentence contains all the words in a document.) Have you handled the vanishing gradient problem by some technique such as mini-batching, like this code: http://deeplearning.net/tutorial/rnnslu.html#dataset

Crash in UpdateLearningRate

Hello !

Testing the last commit, I get a crash in UpdateLearningRate:

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.AggregateException: One or more errors occurred. ---> System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at RNNSharp.RNN.UpdateLearningRate(Matrix`1 m, Int32 i, Int32 j, Double delta) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\RNN.cs:line 101
at RNNSharp.LSTMRNN.<LearnOutputWeight>b__25_0(Int32 i) in C:\sbuild\mine\backend_broka\Candidatos\RNNSharp-master\RNNSharp\LSTMRNN.cs:line 609

In LSTMRNN, line 601 does this:


Parallel.For(0, L1, parallelOption, i =>
            {
                double cellOutput = neuHidden[i].cellOutput;
                for (int k = 0; k < L2; k++)
                {
                    double delta = NormalizeGradient(cellOutput * OutputLayer.er[k]);
                    double newLearningRate = UpdateLearningRate(Hidden2OutputWeightLearningRate, i, k, delta);

                    Hidden2OutputWeight[k][i] += newLearningRate * delta;
                }
            });

But at line 517, it is initialized as:
Hidden2OutputWeightLearningRate = new Matrix<float>(L2, L1);

Model structure

Hello ,
Can I represent a DRNN with the structure below for a named entity task? Any suggestions for improvement will be highly appreciated. Also, is it based on the Elman architecture described in (Kaisheng Yao et al., 2013)?
image

Is it possible to perform an OCR task using RNNSharp?

Hi there,

Recently I want to do some tests on sequence labelling (OCR without segmentation) via RNN.

I googled and found this project; thanks for your efforts.

I have hundreds of handwritten word images and the corresponding words.

Would you like to give me some instruction on this problem?

Any advice will be welcomed. Thanks in advance!

Best regards,

maybe it's time to support CUDA !

Hi,
RNNSharp is great for .NET developers, but sometimes the training time is quite long.
I found an interesting project, managedCuda, which is a complete .NET wrapper for the CUDA Driver API, so maybe it's a good idea to integrate it into RNNSharp.
best regards!
Adesun

Crash during Tagging when using word embedding

Hello again @zhongkaifu ,

Testing the latest version, after training, if I try to tag a file, I get the following crash:

Without WORDEMBEDDING_FILENAME, it works fine. The embedding was created with Txt2Vec using the following command line:
Txt2VecConsole.exe -mode train -trainfile corpus.txt -modelfile vector.bin -vocabfile vocab.txt -vector-size 200 -window 5 -min-count 1 -cbow 1 -threads 1 -iter 15

info,4/8/2016 12:58:28 AM Loading template feature set...
info,4/8/2016 12:58:28 AM Template feature size: 86693
info,4/8/2016 12:58:28 AM Template feature context size: 86693
info,4/8/2016 12:58:28 AM Get model type LSTM and direction FORWARD
info,4/8/2016 12:58:28 AM Model Structure: LSTM-RNN
info,4/8/2016 12:58:28 AM Loading LSTM-RNN model: E:\rnnsharp.model
info,4/8/2016 12:58:28 AM Loading input2hidden weights...
info,4/8/2016 12:58:28 AM Loading LSTM-Weight: width:200, height:86693, vqSize:0...
info,4/8/2016 12:58:30 AM Loading feature2hidden weights...
info,4/8/2016 12:58:30 AM Loading LSTM-Weight: width:200, height:200, vqSize:0...
info,4/8/2016 12:58:30 AM Loading hidden2output weights...
info,4/8/2016 12:58:30 AM Loading matrix. width: 200, height: 22, vqSize: 0
info,4/8/2016 12:58:30 AM CRF Model: False
Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at RNNSharp.SingleVector.get_Item(Int32 i) in C:\RNNSharp-master\RNNSharp\Vector.cs:line 107
   at RNNSharp.LSTMRNN.<>c__DisplayClass37_0.<computeHiddenLayer>b__0(Int32 j) in C:\RNNSharp-master\RNNSharp\LSTMRNN.cs:line 820
   at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()
   at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
   at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object )
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
   at System.Threading.Tasks.Parallel.For(Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body)
   at RNNSharp.LSTMRNN.computeHiddenLayer(State state, Boolean isTrain) in C:\RNNSharp-master\RNNSharp\LSTMRNN.cs:line 869
   at RNNSharp.RNN.PredictSentence(Sequence pSequence, RunningMode runningMode) in C:\RNNSharp-master\RNNSharp\RNN.cs:line 277
   at RNNSharp.RNNDecoder.Process(Sentence sent) in C:\RNNSharp-master\RNNSharp\RNNDecoder.cs:line 77
   at RNNSharpConsole.Program.Test() in C:\RNNSharp-master\RNNSharpConsole\Program.cs:line 372
   at RNNSharpConsole.Program.Main(String[] args) in C:\RNNSharp-master\RNNSharpConsole\Program.cs:line 289


Process finished with exit code -1

Help with Converting Spatio-Temporal Dataset for Consumption

Hello,

I have a spatio-temporal dataset that I have compiled. It's in a TSV format, and I'd like your RNNSharp to consume the input for classification as well as recognition. My features are continuous values in the range [0, 1]. My TSV file looks like the following:

ID1 0.923 0.223 0.573 0.235 0.111
ID1 0.920 0.228 0.353 0.213 0.098
ID1 0.901 0.677 0.235 0.551 0.121
...
ID1 0.853 0.383 0.301 0.618 0.132

ID1 0.918 0.733 0.622 0.222 0.238
ID1 0.985 0.682 0.793 0.221 0.465
...
ID1 0.953 0.788 0.912 0.228 0.539

ID2 0.918 0.733 0.622 0.222 0.238
ID2 0.985 0.682 0.793 0.221 0.465
...
ID2 0.953 0.788 0.912 0.228 0.539

Each line in my TSV is a snapshot at a specific moment in time. When all snapshots are combined, they describe the spatio-temporal entity. These entities are separated by an EMPTY LINE. Therefore, the first instance of ID1 is all the lines until you reach the empty line, the second instance of ID1 is the next set of contiguous lines, and so on. Note that the first TSV value is just a class label and is not a feature. Also, I have 6 class labels for this spatio-temporal dataset.

1.) First, how can I transform my data into an "embedded feature" that is in the correct model format? I assume this is the Txt2Vec?

2.) Additionally, I will have to create a corpus. Will the following work for the corpus?

ID1 ClassLabel1
ID2 ClassLabel2
ID3 ClassLabel3
ID4 ClassLabel4
ID5 ClassLabel5
ID6 ClassLabel6

3.) Additional steps or a walkthrough would be greatly appreciated. I hope this information helps all others who are trying to consume RNNSharp. When I finish, I hope to compile a walkthrough for others, so they can easily consume this great technology.

Thank you.

Poor performance in Sequential Tagging

Hello !

Thanks again for this awesome project !

From my understanding, the performance for text sequence tagging should be equal to or better than CRFSharp or CRF++, right?
My problem is to semantically tag a big single, continuous text (hundreds of pages).

In CRF++ I get a token error ratio of about 0.5%. In RNNSharp I can't get it better than 40%, a gigantic difference. I tried LSTM and BPTT with CRF on or off. No luck.

Is this expected for my use case, or am I doing something wrong?

Plan to Use LSTM for Time Series

I would like to use the LSTM for sequence prediction: I enter a series of numbers, and the network outputs which number is most likely to come next, with a probability. Is this possible with this library? Do you have a tutorial/example of how something similar could be achieved?

Nuget package

Hi, Zhong!
Thank you for such an amazing tooling!

What do you think about publishing the core libraries to NuGet? That would make life easier when it comes to updates and could be easily achieved using the free AppVeyor CI. I am talking about AdvUtils, Txt2Vec, CRFSharp and RNNSharp.

If you're interested, I can help you with it.

Some kind of cold start

Hi! Thank you for your lib, it looks really cool. But I've got problems understanding some concepts and approaches. For example, what is feature templating for? What do all those POS parameters mean (I don't get the meaning of those abbreviations)?
I want to test your tool on document classification; what do I need to do in general? Or is there any tutorial to start with?
Sorry if my questions sound really stupid.

A question regarding usage or RNNSharp for a particular problem

Hello,

I'm wondering whether RNNSharp can be used for the following problem:

I have a sequence of 3-number vectors {[a1, a2, a3], [b1, b2, b3], ..., [n1, n2, n3]} and a matching output, which is a vector of 5 numbers [A1, A2, A3, A4, A5]. My training input/output would be a list of such pairs, where the output always has the same size (a 5-number vector) while the input size varies (the number of 3-number vectors differs).

So, for example:

[1, 2, 3], [4, 5, 6] -> [1, 2, 3, 4, 5]
[1, 2, 3], [4, 5, 6], [7, 8, 9,] -> [5, 6, 7, 8, 9]
[1, 2, 3], [4, 5, 6], [7, 8, 9,], [10, 11, 12], [13, 14, 15] -> [10, 11, 12, 13, 14]
....

and I would use the samples above to train the network so that it can output a 5-number vector for a random-size input (random number of 3-number vectors)

e.g.

[1, 2, 3], [4, 5, 6], [7, 8, 9,], [10, 11, 12] -> desired 5-number output

Can RNNSharp deal with such a problem? I'm not a novice with NNs, but I've never used an RNN before, so I'm not sure whether it can be used with (relative) success here. Also, if RNNSharp is capable of dealing with such a problem, any help on how to start / set up the network would be greatly appreciated :) (as I would be using the API in my own project)

Thanks in advance

A few questions regarding how to exploit RNNSharp

Hi,

I highly appreciate your work and thank you very much for providing this model in C#. To add a bit of context, I will be using your model during my research, and I would like to know as much as possible about how to use RNNSharp for NER. I would greatly appreciate it if you could answer some of my doubts and questions that I haven't seen explained in the README or in issues written by other users:

  1. What is the validated corpus in the process of encoding a model? How does it differ from the training corpus file?

  2. Is there any quicker way of preparing training files in the format required for the encoding process? Say I would like to use a CoNLL training corpus but, as the format is different, manually formatting it requires a huge amount of time. How should I go about it?

  3. How to do gazetteer matching? Say I have a list of location named entities from GeoNames and I want the algorithm to integrate them and do named entity matching in the decoding process. Is there any guide on how to do so?

  4. I would like to know how to feed features into the algorithm, such as handcrafted linguistic features (capitalization, linguistic pattern combinations and the like) to build a hybrid linguistic and deep-learning-based NER system.

  5. How to perform evaluation metrics on our test dataset such as recall, precision and f1-score?

Thank you very much in advance, and have a nice day.

Example file

Is there any runnable example? I've changed the code a little bit and tried to run it, but the error rate is unreasonably high.

Impact of run-time feature

Hello Sir,
hope you are doing well.
Would you please let me know the impact of run-time features on results; simply put, why do we use run-time features?

ArgumentOutOfRangeException during training

Hello !

First of all, thank you for publishing such a high quality project. It is the absolute state of the art! Thank you!!!

I was evaluating this on a small dataset; however, I get this ArgumentOutOfRangeException.

The error is in SimpleRNN.cs, at this part:

Logger.WriteLine("Saving feature2hidden weights...");
saveMatrixBin(mat_feature2hidden, fo);

This matrix, mat_feature2hidden, has a Height of 200; however, the Width is 0.

The command:
.\Bin\RNNSharpConsole.exe -mode train -trainfile bruno-data.txt -modelfile .\bruno-model.bin -validfile bruno-valid.txt -ftrfile .\config_bruno.txt -tagfile .\bruno-tags.txt -modeltype 0 -layersize 200 -alpha 0.1 -crf 1 -maxiter 0 -savestep 200K -dir 1 -dropout 0

The config:
#The file name for template feature set
TFEATURE_FILENAME:.\bruno-features
#The context range for template feature set. In below, the context is current token, next token and next after next token
TFEATURE_CONTEXT: 0

And the error:

info,11/02/2016 14:25:38 Distortion: 0,0873744581246124, vqSize: 256
info,11/02/2016 14:25:38 Saving feature2hidden weights...
info,11/02/2016 14:25:38 Saving matrix with VQ 256...
info,11/02/2016 14:25:38 Sorting data set (size: 0)...

Unhandled Exception: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
   at System.ThrowHelper.ThrowArgumentOutOfRangeException(ExceptionArgument argument, ExceptionResource resource)
   at AdvUtils.VarBigArray`1.get_Item(Int64 offset)
   at AdvUtils.VectorQuantization.BuildCodebook(Int32 vqSize)
   at RNNSharp.RNN.saveMatrixBin(Matrix`1 mat, BinaryWriter fo, Boolean BuildVQ) in C:\github\RNNSharp\RNNSharp\RNN.cs:line 138
   at RNNSharp.SimpleRNN.saveNetBin(String filename) in C:\github\RNNSharp\RNNSharp\SimpleRNN.cs:line 517
   at RNNSharp.BiRNN.saveNetBin(String filename) in C:\github\RNNSharp\RNNSharp\BiRNN.cs:line 499
   at RNNSharp.RNNEncoder.Train() in C:\github\RNNSharp\RNNSharp\RNNEncoder.cs:line 107
   at RNNSharpConsole.Program.Main(String[] args) in C:\github\RNNSharp\RNNSharpConsole\Program.cs:line 279

Maybe it is because I'm not using a word embedding?

Thank you again for this great project !

Gazetteer list as a feature

Though RNNSharp supports the word embedding feature, which is a big plus point of RNN compared to the competitor CRF, does it have the capability to support external gazetteer lists and dictionaries as features?

Possible combinations of various RNNs types

Hello,
I am a little bit confused about the various combinations of model types and possible directions, e.g. forward and bidirectional. I am new to the RNN architecture, so please guide me.
Are the following combinations valid? If not, then please explain a little bit. Thanks in advance.
With a single hidden layer, e.g. -layersize 200
Bi-Directional LSTM-RNN without CRF
Bi-Directional LSTM-RNN with CRF
Bi-Directional BPTT-RNN without CRF
Bi-Directional BPTT-RNN with CRF
Forward LSTM-RNN without CRF
Forward LSTM-RNN with CRF
Forward BPTT-RNN without CRF
Forward BPTT-RNN with CRF
With two hidden layers (deep layers), e.g. -layersize 200,100
Bi-Directional LSTM-RNN without CRF
Bi-Directional LSTM-RNN with CRF
Bi-Directional BPTT-RNN without CRF
Bi-Directional BPTT-RNN with CRF
Forward LSTM-RNN without CRF
Forward LSTM-RNN with CRF
Forward BPTT-RNN without CRF
Forward BPTT-RNN with CRF

GPU Training

Hi, does this library support training on GPU?

An unhandled exception of type 'System.NotSupportedException' occurred in System.Numerics.Vectors.dll

Hello, first I really appreciate your work. My programming skills in C# are not good, therefore apologies in advance if the question seems stupid.
When I compile the source I get this error in ModelSetting.cs:
System.NotSupportedException was unhandled
HResult=-2146233067
Message=Vector.Count cannot be called via reflection when intrinsics are enabled.
Source=System.Numerics.Vectors
StackTrace:
at System.Numerics.Vector`1.get_Count()
at RNNSharp.ModelSetting.DumpSetting() in D:\RNNSharp-master\RNNSharp\ModelSetting.cs:line 51
at RNNSharpConsole.Program.Train() in D:\RNNSharp-master\RNNSharpConsole\Program.cs:line 444
at RNNSharpConsole.Program.Main(String[] args) in D:\RNNSharp-master\RNNSharpConsole\Program.cs:line 277
at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)
at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
InnerException:
here is my screen shoot:
image

So please help; I am new to C# and my work has been pending for two days due to this bug. Thanks.

Training takes too long!!

Hello, I was wondering what the training times for the demonstrations are.
I just tried the English sequence labeler, and it took 1 hour to process 10% of the corpus! (Is this normal?)
It's known that deep learning is CPU hungry; I have only 2 cores and 8GB RAM (sorry).
Do I need to change the PC, or acquire a CUDA device to help with the computing?
Is there a way to stop the learning manually, or programmatically after reaching a certain error rate?

I am wondering if you ever tried sequence labelling on highly inflectional languages (like Spanish), which have a lot of inflectional complexity: the words as whole strings are nearly useless, the vocabulary explodes into >300M words, and the examples found in text become too sparse. Even with negative sampling you never see certain combinations, because most verbs have over 200 inflected forms, including tense, person, gender, plurality, mode, etc. So there is a need to train on higher-level features without losing the semantic sense. Do you think this could be possible, for example by decomposing the words (by means of controlled, independent lemmatization) into parts/chunks (prefix, root, suffix, as well as modal information and semantic features of the parts)? My intuition is that this might lower the training cost and may improve generalization with a less extensive corpus, like capturing higher-level syntax rules and, along the way, generating semantic content constraints (maybe even some common sense)...

It's just a question, on theory!

Older version?

Hi,

Do you have a version available for VS2010 and .NET 4.0?

Supported Features

Hello
Hope you are doing well.
Can you briefly explain the impact of each feature type, especially the embedding feature and the runtime feature, on the final results of the RNN compared to the traditional CRF, which uses only one feature type, e.g. the template feature?
thanks in advance

AdvUtils.dll Issue

'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_64\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\assembly\GAC_MSIL\Microsoft.VisualStudio.HostingProcess.Utilities\14.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.HostingProcess.Utilities.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Windows.Forms\v4.0_4.0.0.0__b77a5c561934e089\System.Windows.Forms.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System\v4.0_4.0.0.0__b77a5c561934e089\System.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Drawing\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Drawing.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\assembly\GAC_MSIL\Microsoft.VisualStudio.HostingProcess.Utilities.Sync\14.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.HostingProcess.Utilities.Sync.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\assembly\GAC_MSIL\Microsoft.VisualStudio.Debugger.Runtime\14.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.Debugger.Runtime.dll'.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\Users\Aguilar\Desktop\RNNSharp-master\RNNSharpConsole\bin\Debug\RNNSharpConsole.vshost.exe'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Core\v4.0_4.0.0.0__b77a5c561934e089\System.Core.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Xml.Linq\v4.0_4.0.0.0__b77a5c561934e089\System.Xml.Linq.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Data.DataSetExtensions\v4.0_4.0.0.0__b77a5c561934e089\System.Data.DataSetExtensions.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\Microsoft.CSharp\v4.0_4.0.0.0__b03f5f7f11d50a3a\Microsoft.CSharp.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_64\System.Data\v4.0_4.0.0.0__b77a5c561934e089\System.Data.dll'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\WINDOWS\Microsoft.Net\assembly\GAC_MSIL\System.Xml\v4.0_4.0.0.0__b77a5c561934e089\System.Xml.dll'. Symbols loaded.
The thread 0x4a7a4 has exited with code 0 (0x0).
The thread 0x4a570 has exited with code 0 (0x0).
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\Users\Aguilar\Desktop\RNNSharp-master\RNNSharpConsole\bin\Debug\RNNSharpConsole.exe'. Symbols loaded.
'RNNSharpConsole.vshost.exe' (CLR v4.0.30319: RNNSharpConsole.vshost.exe): Loaded 'C:\Users\Aguilar\Desktop\RNNSharp-master\RNNSharpConsole\bin\Debug\AdvUtils.dll'. Cannot find or open the PDB file.
The thread 0x4a464 has exited with code 0 (0x0).
The thread 0x19050 has exited with code 0 (0x0).
The program '[292464] RNNSharpConsole.vshost.exe' has exited with code 0 (0x0).

AdvUtils.dll is in that folder, but it says it cannot find or open it. Any suggestions?

Incremental training NullReferenceException after last update

Hello!
After the last update, I get this error:

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.NullReferenceException: Object reference not set to an instance of an object.
   at RNNSharp.SimpleLayer.BackwardPass() in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharp\Layers\SimpleLayer.cs:line 288
   at RNNSharp.Networks.ForwardRNN`1.ProcessSequenceCRF(Sequence pSequence, RunningMode runningMode) in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharp\Networks\FowardRNN.cs:line 483
   at RNNSharp.RNNEncoder`1.Process(RNN`1 rnn, DataSet`1 trainingSet, RunningMode runningMode) in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharp\RNNEncoder.cs:line 81
   at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()
   at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
   at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object )
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Parallel.ForWorker[TLocal](Int32 fromInclusive, Int32 toExclusive, ParallelOptions parallelOptions, Action`1 body, Action`2 bodyWithState, Func`4 bodyWithLocal, Func`1 localInit, Action`1 localFinally)
   at System.Threading.Tasks.Parallel.For(Int32 fromInclusive, Int32 toExclusive, Action`1 body)
   at RNNSharp.RNNEncoder`1.Train() in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharp\RNNEncoder.cs:line 227
   at RNNSharpConsole.Program.Train() in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharpConsole\Program.cs:line 550
   at RNNSharpConsole.Program.Main(String[] args) in C:\Users\vanin\Documents\GitHub\RNNSharp\RNNSharpConsole\Program.cs:line 310

Before the update it was working without problems.

My config is :

CURRENT_DIRECTORY = .\model2cfg

MODEL_TYPE = SEQLABEL

#Forward and BiDirectional are supported
NETWORK_TYPE = Forward

MODEL_FILEPATH = model.bin

HIDDEN_LAYER = LSTM:200, LSTM:100

OUTPUT_LAYER = Simple

CRF_LAYER = True

TFEATURE_FILENAME = tfeatures

#Binary and Freq are supported
TFEATURE_WEIGHT_TYPE = Freq

TFEATURE_CONTEXT = -7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5

PRETRAIN_TYPE = Embedding

WORDEMBEDDING_FILENAME = vector_big.bin

WORDEMBEDDING_CONTEXT = -1,0,1

WORDEMBEDDING_COLUMN = 0

AUTOENCODER_CONFIG = D:\RNNSharpDemoPackage\config_autoencoder.txt

#SEQ2SEQ_AUTOENCODER_CONFIG: D:\RNNSharpDemoPackage\config_seq2seq_autoencoder.txt

RTFEATURE_CONTEXT = -2,-1

LSTM RNN

When I train a POS tagger using RNN, CRF, RNN+CRF and LSTM+CRF, the results are very good, but LSTM is not better than the others. I use the same training, development and test corpora; what is my problem?

Saving better model crash.

Hello.

info,20.02.2016 21:08:27 [TRACE] In training: log probability = -6539,9872918264
9, cross-entropy = 0,0073893168866162, perplexity = 1,00513502343797
info,20.02.2016 21:08:27
info,20.02.2016 21:08:27 Saving better model into file model.bin...
info,20.02.2016 21:08:27 Saving input2hidden weights...
info,20.02.2016 21:08:28 Saving matrix with VQ 256...
info,20.02.2016 21:10:28 Sorting data set (size: 1707219900)...

Process is terminated due to StackOverflowException.

Process is terminated due to StackOverflowException.
Process is terminated due to StackOverflowException.

rnn error

Seems an error occurs in AdvUtils.dll

OutofMemoryException

Thanks for this great job!
I have got a problem when I start to use the new version (I used the old version on CodePlex before):
it throws an OutOfMemory exception when I use a vector.bin file generated by the old version of the Txt2Vec tool. Is there a way to convert it to the new version?

In the WordEMWrapFeaturizer function,
model.LoadBinaryModel(filename);
I found that there is no
bw.Write(0); // no VQ
in the old version; maybe this raises the exception. Is there any solution for this? I do not want to retrain word_vector.bin.

Need Help With PRETRAIN_TYPE=Autoencoder

Hello!
You wrote:
"You need to train an auto encoder-decoder model by RNNSharp at first, and then use this pretrained model for your task"

But I do not see how to do it.
Finally, what do you mean by "the pretrained model is trained by RNNSharp itself"?

NullReferenceException in DropoutLayer

Decoding crashes when using a dropout layer in the current code version, although it works in the latest demo package release version.

Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at RNNSharp.DropoutLayer.Load(BinaryReader br, LayerType layerType, Boolean forTraining) in D:\RNNSharp-master\RNNSharp\Layers\DropoutLayer.cs:line 174
at RNNSharp.Networks.RNN1.Load(LayerType layerType, BinaryReader br, Boolean forTraining) in D:\RNNSharp-master\RNNSharp\Networks\RNN.cs:line 509 at RNNSharp.Networks.BiRNN1.LoadModel(String filename, Boolean bTrain) in D:\RNNSharp-master\RNNSharp\Networks\BiRNN.cs:line 598
at RNNSharp.RNNDecoder..ctor(Config config) in D:\RNNSharp-master\RNNSharp\RNNDecoder.cs:line 24
at RNNSharpConsole.Program.Test() in D:\RNNSharp-master\RNNSharpConsole\Program.cs:line 371
at RNNSharpConsole.Program.Main(String[] args) in D:\RNNSharp-master\RNNSharpConsole\Program.cs:line 305

Using the library from .Net Core

I am struggling with porting the library to .NET Core.
Is there anything to keep in mind while doing so (adding some dependencies, etc.)?
Thanks for the help!

System Total Fail due to double-number conversion problems for non "en-US" cultures

Hi, I trained the sample for the English BIO NER labeler, and the training was OK.
But when running the TEST batch, it fails TOTALLY!
The labels I got were absolutely nuts!

So I started to dig inside the algorithms and found a BIG PROBLEM:
when you write out the training and model files and use a "string" as a double, in the "en-US" culture the decimal point is a ".", while in the Spanish culture it is a ",". So when writing the files and reading them again, there is an inconsistency (if used as a string). There are two ways to solve it:

  1. Use InvariantCulture when parsing (ModelReader.cs):

             // read cost_factor
             strLine = sr.ReadLine();
             cost_factor_ = double.Parse(strLine.Split(':')[1].Trim(), CultureInfo.InvariantCulture);
    
  2. Add this at the beginning of Main() in console apps (requires using System.Globalization; and using System.Threading;):

    static void Main(string[] args)
    {
        // Force the en-US culture so "." is always used as the decimal separator
        CultureInfo culture = CultureInfo.CreateSpecificCulture("en-US");
        CultureInfo.DefaultThreadCurrentCulture = culture;
        CultureInfo.DefaultThreadCurrentUICulture = culture;
        Thread.CurrentThread.CurrentCulture = culture;
        Thread.CurrentThread.CurrentUICulture = culture;
        .....
    

And now it works!
