neural_citation's Issues

notebook for training results error

I tried ncn_training.ipynb and it shows one warning and one error message.

Warning

/lib/python3.7/site-packages/torch/nn/modules/rnn.py:51: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  "num_layers={}".format(dropout, num_layers))

Error

Running on: cuda
Number of model parameters: 24,341,796
Encoders: # Filters = 256, Context filter length = [4, 4, 5, 6, 7],  Context filter length = [1, 2]
Embeddings: Dimension = 128, Pad index = 1, Context vocab = 20002, Author vocab = 20002, Title vocab = 20004
Decoder: # GRU cells = 1, Hidden size = 256
Parameters: Dropout = 0.2, Show attention = False
-------------------------------------------------
TRAINING SETTINGS
Seed = 34, # Epochs = 20, Batch size = 64, Initial lr = 0.001
HBox(children=(IntProgress(value=0, description='Epochs', max=20, style=ProgressStyle(description_width='initial')), HTML(value='')))
HBox(children=(IntProgress(value=0, description='Training batches', max=6280, style=ProgressStyle(description_width='initial')), HTML(value='')))

Traceback (most recent call last):
  File "/tmp/pycharm_project_813/ncn_training.py", line 54, in <module>
    model_name = "embed_128_hid_256_1_GRU")
  File "/tmp/pycharm_project_813/{PRJT_NAME}/training.py", line 225, in train_model
    train_loss = train(model, train_iterator, optimizer, criterion, clip)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/training.py", line 106, in train
    output = model(context = cntxt, title = ttl, authors_citing = citing, authors_cited = cited)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 486, in forward
    encoder_outputs = self.encoder(context, authors_citing, authors_cited)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 185, in forward
    context = self.context_encoder(context)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 105, in forward
    x = [encoder(x) for encoder in self.encoder]
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 105, in <listcomp>
    x = [encoder(x) for encoder in self.encoder]
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/tmp/pycharm_project_813/{PRJT_NAME}/model.py", line 61, in forward
    x = F.max_pool2d(x, kernel_size=pool_size)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/_jit_internal.py", line 134, in fn
    return if_false(*args, **kwargs)
  File "/home/{USER_ID}/.conda/envs/{ENV_NAME}/lib/python3.7/site-packages/torch/nn/functional.py", line 487, in _max_pool2d
    input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (256x1x38). Calculated output size: (256x0x1). Output size is too small

Process finished with exit code 1

The warning can easily be fixed by changing the number of layers, but not the error, which seems to be a padding-size problem.
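For the pooling error, one possible workaround (a hedged sketch, not code from this repo — `safe_pool_size` is a hypothetical helper) is to clamp the pooling kernel along the sequence axis so it never exceeds the sequence length left after convolution:

```python
def safe_pool_size(seq_len: int, requested: int) -> int:
    """Clamp a max-pool kernel so the pooled output keeps at least one element.

    The RuntimeError above (calculated output size 256x0x1) occurs when the
    kernel along the sequence axis is larger than the number of positions
    remaining after convolution (here 38).
    """
    return min(requested, seq_len)

# With 38 positions left after convolution, a kernel of 50 would produce a
# zero-length output; clamping keeps the pooling valid.
print(safe_pool_size(38, 50))  # -> 38
print(safe_pool_size(38, 4))   # -> 4
```

In model.py this would mean clamping `pool_size` before the `F.max_pool2d` call; padding short contexts up to the largest filter length is the other common fix.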

Error about the evaluation process

Thank you for sharing this code! I am very grateful to be able to reproduce the results of the experiment.
There are some errors when I try to execute the evaluation.
I converted NCN_evaluation.ipynb into a .py file and used NCN_9_4_4_128_filters.pt, but it produces the errors below:

Traceback (most recent call last):
  File "NCN_evaluation.py", line 23, in <module>
    evaluator = Evaluator([4,4,5,6,7], [1,2], 128, 128, 2, path_to_weights, data, evaluate=True, show_attention=False)
  File "/home2/yy/dmcproject/neural_citation/ncn/evaluation.py", line 64, in __init__
    self.model.load_state_dict(torch.load(path_to_weights, map_location=DEVICE))
  File "/home2/yy/port22/anaconda2/envs/KD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for NeuralCitationNetwork:
	size mismatch for encoder.context_embedding.weight: copying a param with shape torch.Size([20002, 128]) from checkpoint, the shape in current model is torch.Size([2, 128]).
	size mismatch for decoder.embedding.weight: copying a param with shape torch.Size([20004, 128]) from checkpoint, the shape in current model is torch.Size([4, 128]).
	size mismatch for decoder.out.weight: copying a param with shape torch.Size([20004, 384]) from checkpoint, the shape in current model is torch.Size([4, 384]).
	size mismatch for decoder.out.bias: copying a param with shape torch.Size([20004]) from checkpoint, the shape in current model is torch.Size([4]).

Can you help me? Thank you very much.
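For what it's worth (an observation, not a confirmed fix): the mismatched shapes ([2, 128] and [4, 128] where the checkpoint has [20002, 128] and [20004, 128]) suggest the model is being constructed with vocabulary sizes of 2 and 4 rather than the ~20k vocabularies the checkpoint was trained with, i.e. the `data` object passed to `Evaluator` may not have its vocabularies built. A small, hypothetical helper to list such mismatches before calling `load_state_dict`:

```python
def shape_mismatches(ckpt_shapes, model_shapes):
    """Return parameters whose shapes differ between checkpoint and model.

    Both arguments map parameter names to shape tuples, e.g. built with
    {k: tuple(v.shape) for k, v in state_dict.items()}.
    """
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    }

# Illustration with the first shape pair from the traceback above:
ckpt = {"encoder.context_embedding.weight": (20002, 128)}
model = {"encoder.context_embedding.weight": (2, 128)}
print(shape_mismatches(ckpt, model))
# -> {'encoder.context_embedding.weight': ((20002, 128), (2, 128))}
```

If the mismatched parameters are all embedding/output layers, checking the vocabulary sizes in the `data` object against the checkpoint is the first thing to try.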

Evaluation on the entire candidate papers

First of all, thank you so much for sharing this code! I am very grateful to be able to reproduce the results of the experiment.

Just to clarify, I'd like to ask a question.
In your paper, the dataset appears to be split into training, validation, and test sets, and recommendation performance is scored within each set.
But if I'd like to use the entire set of candidate papers during evaluation, I guess what I need to do is modify evaluation.py like this...

         if self.eval:
-            self.examples = data.test.examples
+            self.examples = data.train.examples + data.valid.examples + data.test.examples
             logger.info(f"Number of samples in BM25 corpus: {len(self.examples)}")

Am I right?
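If it helps, the intent of the diff above can be sketched as a small helper (hypothetical name `full_corpus`; the `train`/`valid`/`test` attributes and `.examples` lists follow the torchtext-style data object used in evaluation.py):

```python
def full_corpus(data):
    """Pool all three splits so BM25 ranks against every candidate paper."""
    return data.train.examples + data.valid.examples + data.test.examples

# Minimal illustration with stand-in split objects:
class Split:
    def __init__(self, examples):
        self.examples = examples

class Data:
    def __init__(self):
        self.train = Split(["a", "b"])
        self.valid = Split(["c"])
        self.test = Split(["d"])

print(full_corpus(Data()))  # -> ['a', 'b', 'c', 'd']
```

Note that pooling the splits enlarges the BM25 candidate pool, so scores will not be directly comparable to the per-split numbers reported in the paper.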
