bert-pytorch's People

Contributors: dreamgonfly

bert-pytorch's Issues

Wrong tensor shape during pretrain

[INFO] 2020-05-04 11:56:22 > Run name : BERT-BERT-{phase}-layers_count={layers_count}-hidden_size={hidden_size}-heads_count={heads_count}-{timestamp}-layers_count=1-hidden_size=128-heads_count=2-2020_05_04_11_56_22
[INFO] 2020-05-04 11:56:22 > {'config_path': None, 'data_dir': None, 'train_path': '/home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/train.txt', 'val_path': '/home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/val.txt', 'dictionary_path': '/home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/dict.txt', 'checkpoint_dir': '/home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/checkpoints/', 'log_output': None, 'dataset_limit': None, 'epochs': 100, 'batch_size': 16, 'print_every': 1, 'save_every': 10, 'vocabulary_size': 60000, 'max_len': 512, 'lr': 0.001, 'clip_grads': False, 'layers_count': 1, 'hidden_size': 128, 'heads_count': 2, 'd_ff': 128, 'dropout_prob': 0.1, 'device': 'cuda:0', 'function': <function pretrain at 0x7f942c367b70>}
[INFO] 2020-05-04 11:56:22 > Constructing dictionaries...
[INFO] 2020-05-04 11:56:23 > dictionary vocabulary : 60000 tokens
[INFO] 2020-05-04 11:56:23 > Loading datasets...
1374it [00:11, 115.92it/s]
344it [00:05, 68.72it/s]
[INFO] 2020-05-04 11:56:40 > Train dataset size : 1828898
[INFO] 2020-05-04 11:56:40 > Building model...
[INFO] 2020-05-04 11:56:40 > BERT(
  (encoder): TransformerEncoder(
    (encoder_layers): ModuleList(
      (0): TransformerEncoderLayer(
        (self_attention_layer): Sublayer(
          (sublayer): MultiHeadAttention(
            (query_projection): Linear(in_features=128, out_features=128, bias=True)
            (key_projection): Linear(in_features=128, out_features=128, bias=True)
            (value_projection): Linear(in_features=128, out_features=128, bias=True)
            (final_projection): Linear(in_features=128, out_features=128, bias=True)
            (dropout): Dropout(p=0.1)
            (softmax): Softmax()
          )
          (layer_normalization): LayerNormalization()
        )
        (pointwise_feedforward_layer): Sublayer(
          (sublayer): PointwiseFeedForwardNetwork(
            (feed_forward): Sequential(
              (0): Linear(in_features=128, out_features=128, bias=True)
              (1): Dropout(p=0.1)
              (2): GELU()
              (3): Linear(in_features=128, out_features=128, bias=True)
              (4): Dropout(p=0.1)
            )
          )
          (layer_normalization): LayerNormalization()
        )
        (dropout): Dropout(p=0.1)
      )
    )
  )
  (token_embedding): Embedding(60000, 128)
  (positional_embedding): PositionalEmbedding(
    (positional_embedding): Embedding(512, 128)
  )
  (segment_embedding): SegmentEmbedding(
    (segment_embedding): Embedding(2, 128)
  )
  (token_prediction_layer): Linear(in_features=128, out_features=60000, bias=True)
  (classification_layer): Linear(in_features=128, out_features=2, bias=True)
)
[INFO] 2020-05-04 11:56:40 > 15585634 parameters
[INFO] 2020-05-04 11:56:40 > Start training...
0%| | 0/114307 [00:00<?, ?it/s]
0%| | 0/52472 [00:00<?, ?it/s]
[INFO] 2020-05-04 11:56:47 > Epoch: 0 Progress: 0.0% Elapsed: 0:00:03 Examples/second: 5e+05 Train Loss: inf Val Loss: inf Train Metrics: [inf] Val Metrics: [inf] Learning rate: 1.768e-07
[INFO] 2020-05-04 11:56:48 > Saved model to /home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/checkpoints/epoch=000-val_loss=inf-val_metrics=inf.pth
[INFO] 2020-05-04 11:56:48 > Current best model is /home/ubuntu/ALEX/BERT-pytorch/data/rusbiomed/checkpoints/epoch=000-val_loss=inf-val_metrics=inf.pth
  5%|███▊      | 5364/114307 [02:28<52:08, 34.82it/s]
Traceback (most recent call last):
  File "main.py", line 34, in <module>
    main()
  File "main.py", line 30, in main
    args.function(**config, config=config)
  File "/home/ubuntu/ALEX/BERT-pytorch/bert/train/train.py", line 104, in pretrain
    trainer.run(epochs=epochs)
  File "/home/ubuntu/ALEX/BERT-pytorch/bert/train/trainer.py", line 98, in run
    train_epoch_loss, train_epoch_metrics = self.run_epoch(self.train_dataloader, mode='train')
  File "/home/ubuntu/ALEX/BERT-pytorch/bert/train/trainer.py", line 64, in run_epoch
    predictions, batch_losses = self.loss_model(inputs, targets)
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/ALEX/BERT-pytorch/bert/train/loss_models.py", line 17, in forward
    outputs = self.model(inputs)
  File "/home/ubuntu/.virtualenvs/ml/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/ALEX/BERT-pytorch/bert/train/model/bert.py", line 64, in forward
    embedded_sources = token_embedded + positional_embedded + segment_embedded
RuntimeError: The size of tensor a (515) must match the size of tensor b (512) at non-singleton dimension 1
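
Note on the trace: the failing line sums the token, positional, and segment embeddings, and this batch contains 515-token sequences while the positional table printed above is Embedding(512, 128), i.e. max_len=512. A minimal pre-embedding guard that truncates over-long examples might look like the sketch below; the clip_to_max_len helper is hypothetical, not part of this repo's API.

import torch

MAX_LEN = 512  # must match the positional Embedding(512, 128) printed above

def clip_to_max_len(token_ids, segment_ids, max_len=MAX_LEN):
    # Truncate so no position index reaches past the 512-entry positional table.
    # Hypothetical helper for illustration, not part of BERT-pytorch.
    return token_ids[:, :max_len], segment_ids[:, :max_len]

# A batch holding 515-token sequences (the length in the traceback) is clipped to 512.
tokens = torch.randint(0, 60000, (16, 515))
segments = torch.zeros(16, 515, dtype=torch.long)
tokens, segments = clip_to_max_len(tokens, segments)
assert tokens.shape[1] == 512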

ONNX error: forward() takes 2 positional arguments but 3 were given

I'm hoping to try out your model with my custom data, but I'll eventually need to convert it to ONNX, so I thought I'd first try converting the simple examples as a test.
As a quick training pass, I'm just running:

python main.py pretrain --train_path data/example/train.txt --val_path data/example/val.txt

Then I try to load/convert the checkpoint with:

import torch
from collections import OrderedDict

from bert.train.model.bert import build_model

trained_pth = 'checkpoints/BERT-BERT-{phase}-layers_count={layers_count}-hidden_size={hidden_size}-heads_count={heads_count}-{timestamp}-layers_count=1-hidden_size=128-heads_count=2-2019_07_14_17_44_38/epoch=010-val_loss=6.14-val_metrics=0.0-0.331.pth'
state_dict = torch.load(trained_pth, map_location='cpu')['state_dict']  # NOTE: This is an OrderedDict()

# Strip the first 13 characters of every key -- presumably the 'module.model.'
# prefix added by DataParallel and the loss wrapper during training.
ordered_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[13:]
    ordered_dict[name] = v

model = build_model(1, 128, 1, 128, 0.1, 512, 151)
model.load_state_dict(ordered_dict)

# Two dummy 128-token inputs: a token-id sequence and its segment ids.
dummy_input = (torch.randn(1, 128).long(), torch.randn(1, 128).long())
input_names = ["input_sequence", "segment"]
output_names = ["predictions"]
torch.onnx.export(model, dummy_input, "bert.onnx", verbose=True, input_names=input_names, output_names=output_names)

Obviously something's wrong, because I'm hitting the following error:

File "/home/james/src/BERT-pytorch/basic_test_to_onnx.py", line 24, in <module>
  torch.onnx.export(model, dummy_input,"bert.onnx", verbose=True, input_names=input_names, output_names=output_names)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/__init__.py", line 25, in export
  return utils.export(*args, **kwargs)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/utils.py", line 131, in export
  strip_doc_string=strip_doc_string)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/utils.py", line 363, in _export
  _retain_param_name, do_constant_folding)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/utils.py", line 266, in _model_to_graph
  graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/onnx/utils.py", line 225, in _trace_and_get_graph_from_model
  trace, torch_out = torch.jit.get_trace_graph(model, args, _force_outplace=True)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/jit/__init__.py", line 231, in get_trace_graph
  return LegacyTracedModule(f, _force_outplace, return_inputs)(*args, **kwargs)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
  result = self.forward(*input, **kwargs)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/jit/__init__.py", line 294, in forward
  out = self.inner(*trace_inputs)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
  result = self._slow_forward(*input, **kwargs)
File "/home/james/anaconda3/envs/fastai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 481, in _slow_forward
  result = self.forward(*input, **kwargs)

builtins.TypeError: forward() takes 2 positional arguments but 3 were given

But I don't seem to have direct access to the caller here, so I'm not sure where the extra argument is coming from or how I might fix it. Do you know, offhand, whether this model can be converted to ONNX successfully?
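
For what it's worth, the extra argument typically comes from the export call itself: torch.onnx.export traces by invoking model(*args), so a two-element dummy_input tuple is unpacked into two positional arguments, while this model's forward() appears to accept a single argument. Under that assumption, a thin adapter that re-packs the inputs is one possible workaround; ONNXWrapper below is a hypothetical sketch, not the repo's API.

import torch
import torch.nn as nn

class ONNXWrapper(nn.Module):
    # Adapter: torch.onnx.export calls model(*args), unpacking dummy_input
    # into separate positional arguments. Re-pack them into the single
    # tuple the wrapped forward() seems to expect (assumption).
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, token_ids, segment_ids):
        return self.model((token_ids, segment_ids))

# Usage sketch:
# wrapped = ONNXWrapper(model)
# torch.onnx.export(wrapped, dummy_input, "bert.onnx", verbose=True,
#                   input_names=input_names, output_names=output_names)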

Using for supervised parallel corpora

Hey, you did an awesome job. Can your code be used for training on parallel corpora? Or could you point me to other resources, such as seq2seq or fairseq, where parallel corpora can be used with a BERT implementation for supervised training?

Thanks
