maluuba / mctest-model
I followed the instructions and ended up with the following errors:
Using Theano backend.
init dataset with h5 file.
(u'input_answer_shape:', (600, 4, 24))
(u'input_dep_shape:', (600, 51, 4))
(u'input_dep_2gram_shape:', (600, 50, 4))
(u'input_dep_3gram_shape:', (600, 49, 4))
(u'input_negation_questions_shape:', (600, 4))
(u'input_question_shape:', (600, 19))
(u'input_question_answer_shape:', (600, 4, 26))
(u'input_reordered_story_shape:', (600, 51, 83))
(u'input_reordered_story_2gram_shape:', (600, 50, 98))
(u'input_reordered_story_3gram_shape:', (600, 49, 141))
(u'input_reordered_story_attentive_shape:', (600, 632))
(u'input_story_shape:', (600, 51, 83))
(u'input_story_2gram_shape:', (600, 50, 98))
(u'input_story_3gram_shape:', (600, 49, 141))
(u'input_story_attentive_shape:', (600, 632))
(u'y_hat_shape:', (600, 4))
INFO:__main__:finish init dataset with data.h5
INFO:embeddings:load mode=in-memory, embedding data type=<type 'numpy.ndarray'>, shape=(3000000, 300)
loading word embedding.
Traceback (most recent call last):
File "run.py", line 92, in <module>
train_option(EPOCHS=args.epoch)
File "run.py", line 72, in train_option
graph = model.build()
File "/Users/allanjie/Documents/workspace/mctest-model/model.py", line 624, in build
self._add_model()
File "/Users/allanjie/Documents/workspace/mctest-model/model.py", line 417, in _add_model
trainable=config['trainable_sentence_ensemble'])(layers_s_input)
File "/Users/allanjie/Documents/workspace/mctest-model/layers.py", line 590, in __init__
self.weights = weights if len(weights) == layers_len else [1.] * layers_len
AttributeError: can't set attribute
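The `can't set attribute` comes from assigning to a read-only property: in Keras 1.x, `Layer.weights` is a property without a setter, so `self.weights = ...` in a layer's `__init__` raises this error. A minimal stand-alone reproduction (the `Ensemble` class below is a stand-in for illustration, not the repo's actual layer):

```python
# Minimal reproduction: in Keras 1.x, Layer.weights is a read-only property
# (no setter), so assigning to self.weights in a subclass's __init__ raises
# AttributeError. Ensemble is a stand-in, not the repo's layer.
class Layer(object):
    @property
    def weights(self):  # read-only: property without a setter
        return []

class Ensemble(Layer):
    def __init__(self, layers_len, weights=None):
        weights = weights or []
        # The failing line in layers.py does: self.weights = ...
        # One workaround is to store under a non-conflicting name:
        self.ensemble_weights = (weights if len(weights) == layers_len
                                 else [1.] * layers_len)

e = Ensemble(3)
print(e.ensemble_weights)  # [1.0, 1.0, 1.0]
try:
    e.weights = [1.0]      # the same kind of assignment the traceback shows
except AttributeError as exc:
    print('AttributeError:', exc)
```

Renaming the attribute (or adding a setter) in `layers.py` avoids the clash with the base class property.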
The test data in "data.h5" seems to come from MCTest-500.
Where is the test data for MCTest-160?
Do you have code that can convert these datasets to the h5 format?
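The repo does not ship a converter, but a hedged sketch of what an MCTest-to-h5 conversion might look like with h5py follows. Dataset names and shapes are taken from the log above; the zero arrays are placeholders where real code would write tokenized MCTest features:

```python
import numpy as np
import h5py

# Hypothetical converter sketch (not from the repo): write each feature
# array as a named HDF5 dataset; names/shapes mirror the log above.
# Real data would come from tokenizing the MCTest .tsv files.
with h5py.File('data_sketch.h5', 'w') as f:
    f.create_dataset('input_question', data=np.zeros((600, 19), dtype='int32'))
    f.create_dataset('input_answer', data=np.zeros((600, 4, 24), dtype='int32'))
    f.create_dataset('y_hat', data=np.zeros((600, 4), dtype='int32'))

with h5py.File('data_sketch.h5', 'r') as f:
    print(f['input_question'].shape)  # (600, 19)
```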
python2.7 run.py
Using TensorFlow backend.
init dataset with h5 file.
(u'input_answer_shape:', (600, 4, 24))
(u'input_dep_shape:', (600, 51, 4))
(u'input_dep_2gram_shape:', (600, 50, 4))
(u'input_dep_3gram_shape:', (600, 49, 4))
(u'input_negation_questions_shape:', (600, 4))
(u'input_question_shape:', (600, 19))
(u'input_question_answer_shape:', (600, 4, 26))
(u'input_reordered_story_shape:', (600, 51, 83))
(u'input_reordered_story_2gram_shape:', (600, 50, 98))
(u'input_reordered_story_3gram_shape:', (600, 49, 141))
(u'input_reordered_story_attentive_shape:', (600, 632))
(u'input_story_shape:', (600, 51, 83))
(u'input_story_2gram_shape:', (600, 50, 98))
(u'input_story_3gram_shape:', (600, 49, 141))
(u'input_story_attentive_shape:', (600, 632))
(u'y_hat_shape:', (600, 4))
INFO:__main__:finish init dataset with data.h5
INFO:embeddings:load mode=in-memory, embedding data type=<type 'numpy.ndarray'>, shape=(3000000, 300)
loading word embedding.
2018-11-20 15:09:13.013095: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-11-20 15:09:13.013120: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-11-20 15:09:13.013126: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-11-20 15:09:13.013130: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-11-20 15:09:13.013135: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
File "run.py", line 92, in <module>
train_option(EPOCHS=args.epoch)
File "run.py", line 72, in train_option
graph = model.build()
File "/u/scratch1/rpujari/parallel_hierarchical/model.py", line 623, in build
self._add_sentence_encode(encode=self.model_config['layers']['sentence_encode']['type'])
File "/u/scratch1/rpujari/parallel_hierarchical/model.py", line 246, in _add_sentence_encode
nodes['__story_encoding1'] = weighting_layer([nodes['story_word_embedding1'], nodes['__w_story1']])
File "/homes/rpujari/anaconda3/envs/flower/lib/python2.7/site-packages/keras/engine/topology.py", line 485, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/homes/rpujari/anaconda3/envs/flower/lib/python2.7/site-packages/keras/engine/topology.py", line 543, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/homes/rpujari/anaconda3/envs/flower/lib/python2.7/site-packages/keras/engine/topology.py", line 153, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors, mask=input_masks))
File "/u/scratch1/rpujari/parallel_hierarchical/layers.py", line 282, in call
lay1 = T.addbroadcast(lay1, lay1.ndim - 1)
AttributeError: 'Tensor' object has no attribute 'ndim'
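This failure is a backend mismatch: `layers.py` calls the Theano-only op `theano.tensor.addbroadcast`, which reads `.ndim` from its argument, but under the TensorFlow backend Keras passes a `tf.Tensor`, which in TF 1.x does not expose `.ndim`. A stand-alone sketch of the mismatch (the `TheanoVar` and `TFTensor` classes are stand-ins for illustration; no Theano or TensorFlow required):

```python
# Why 'Tensor' object has no attribute 'ndim': theano.tensor.addbroadcast
# reads .ndim from a Theano variable, but under the TensorFlow backend the
# layer receives a tf.Tensor, which in TF 1.x has no .ndim attribute.
class TheanoVar(object):
    def __init__(self, ndim):
        self.ndim = ndim

class TFTensor(object):
    def __init__(self, shape):
        self._shape = shape
    def get_shape(self):     # TF 1.x exposes get_shape(), not .ndim
        return self._shape

def last_axis(x):
    # mimics the failing line: T.addbroadcast(lay1, lay1.ndim - 1)
    return x.ndim - 1

print(last_axis(TheanoVar(3)))        # 2
try:
    last_axis(TFTensor((600, 51, 83)))
except AttributeError as exc:
    print('AttributeError:', exc)
```

In short, this model's custom layers only work with the Theano backend; running under TensorFlow requires setting `"backend": "theano"` in `~/.keras/keras.json`.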
Hi,
Could you please publish the scripts to produce the input feature vectors directly from the MCTest data set?
In particular, there are features that are hard to reconcile with the paper (sorry if I missed something big). For example, the input_dep?
Following the instructions, using a Python 2.7 Anaconda distribution with Keras 1.1.1 and Theano 0.8.2, I get the following failure.
(python2) user@localhost:maluuba/mctest-model$ time python run.py
Using Theano backend.
init dataset with h5 file.
(u'input_answer_shape:', (600, 4, 24))
(u'input_dep_shape:', (600, 51, 4))
(u'input_dep_2gram_shape:', (600, 50, 4))
(u'input_dep_3gram_shape:', (600, 49, 4))
(u'input_negation_questions_shape:', (600, 4))
(u'input_question_shape:', (600, 19))
(u'input_question_answer_shape:', (600, 4, 26))
(u'input_reordered_story_shape:', (600, 51, 83))
(u'input_reordered_story_2gram_shape:', (600, 50, 98))
(u'input_reordered_story_3gram_shape:', (600, 49, 141))
(u'input_reordered_story_attentive_shape:', (600, 632))
(u'input_story_shape:', (600, 51, 83))
(u'input_story_2gram_shape:', (600, 50, 98))
(u'input_story_3gram_shape:', (600, 49, 141))
(u'input_story_attentive_shape:', (600, 632))
(u'y_hat_shape:', (600, 4))
INFO:__main__:finish init dataset with data.h5
INFO:embeddings:load mode=in-memory, embedding data type=<type 'numpy.ndarray'>, shape=(3000000, 300)
loading word embedding.
Traceback (most recent call last):
File "run.py", line 92, in <module>
train_option(EPOCHS=args.epoch)
File "run.py", line 72, in train_option
graph = model.build()
File "mctest-model/model.py", line 624, in build
self._add_model()
File "mctest-model/model.py", line 417, in _add_model
trainable=config['trainable_sentence_ensemble'])(layers_s_input)
File "mctest-model/layers.py", line 594, in __init__
self.weights = weights if len(weights) == layers_len else [1.] * layers_len
AttributeError: can't set attribute