
spring's Introduction

SPRING


This is the repo for SPRING (Symmetric ParsIng aNd Generation), a novel approach to semantic parsing and generation, presented at AAAI 2021.

With SPRING you can perform both state-of-the-art Text-to-AMR parsing and AMR-to-Text generation without many cumbersome external components. If you use the code, please reference this work in your paper:

@inproceedings{bevilacqua-etal-2021-one,
    title = {One {SPRING} to Rule Them Both: {S}ymmetric {AMR} Semantic Parsing and Generation without a Complex Pipeline},
    author = {Bevilacqua, Michele and Blloshmi, Rexhina and Navigli, Roberto},
    booktitle = {Proceedings of AAAI},
    year = {2021}
}

Pretrained Checkpoints

Here we release our best SPRING models, which are based on the DFS linearization.

Text-to-AMR Parsing

AMR-to-Text Generation

If you need the checkpoints of other experiments in the paper, please send us an email.

Installation

cd spring
pip install -r requirements.txt
pip install -e .

The code only works with transformers < 3.0 because of a breaking change in positional embeddings. The code works fine with torch 1.5. We recommend using a fresh conda environment.
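
For reference, a pinned setup along these lines should satisfy the constraints above. This is a sketch, not an official recipe: Python 3.7 and the package versions are taken from the environment reports in the issues further down this page, and the environment name is arbitrary.

conda create -n spring python=3.7
conda activate spring
cd spring
pip install torch==1.5.0
pip install -r requirements.txt    # reportedly pins transformers==2.11.0, which pulls in tokenizers==0.7.0
pip install -e .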

Train

Modify config.yaml in configs/. Instructions are given in comments within the file. Also see the appendix of the paper.

Text-to-AMR

python bin/train.py --config configs/config.yaml --direction amr

Results in runs/

AMR-to-Text

python bin/train.py --config configs/config.yaml --direction text

Results in runs/

Evaluate

Text-to-AMR

python bin/predict_amrs.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.amr.txt \
    --pred-path data/tmp/amr2.0/pred.amr.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

gold.amr.txt and pred.amr.txt will contain, respectively, the concatenated gold AMRs and the predictions.

To reproduce our paper's results, you will also need to run the BLINK entity linking system on the prediction file (data/tmp/amr2.0/pred.amr.txt in the previous code snippet). To do so, you will need to install BLINK and download their models:

git clone https://github.com/facebookresearch/BLINK.git
cd BLINK
pip install -r requirements.txt
sh download_blink_models.sh
cd models
wget http://dl.fbaipublicfiles.com/BLINK//faiss_flat_index.pkl
cd ../..

Then, you will be able to launch the blinkify.py script:

python bin/blinkify.py \
    --datasets data/tmp/amr2.0/pred.amr.txt \
    --out data/tmp/amr2.0/pred.amr.blinkified.txt \
    --device cuda \
    --blink-models-dir BLINK/models

To obtain comparable Smatch scores you will also need to use the scripts available at https://github.com/mdtux89/amr-evaluation, which produce results around 0.3 Smatch points lower than those returned by bin/predict_amrs.py.
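
A sketch of how that comparison is typically run (check the amr-evaluation README for the exact interface; the paths reuse the files produced above):

git clone https://github.com/mdtux89/amr-evaluation.git
cd amr-evaluation
./evaluation.sh ../data/tmp/amr2.0/pred.amr.blinkified.txt ../data/tmp/amr2.0/gold.amr.txt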

AMR-to-Text

python bin/predict_sentences.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.text.txt \
    --pred-path data/tmp/amr2.0/pred.text.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

gold.text.txt and pred.text.txt will contain, respectively, the concatenated gold sentences and the predictions. For BLEU, chrF++, and METEOR scores to be comparable, you will need to tokenize both gold and predictions with the JAMR tokenizer. To compute BLEU and chrF++, please use bin/eval_bleu.py. For METEOR, use https://www.cs.cmu.edu/~alavie/METEOR/. For BLEURT, do not tokenize, and run the evaluation with https://github.com/google-research/bleurt. Also see the appendix.
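
For a quick cross-check of BLEU and chrF++, the sacrebleu CLI can also score the tokenized files. This is a sketch, not the paper's pipeline: the *.tok.txt names are hypothetical stand-ins for your JAMR-tokenized outputs, --chrf-word-order 2 selects chrF++, and -tok none disables sacrebleu's internal tokenization since the inputs are pre-tokenized.

sacrebleu data/tmp/amr2.0/gold.text.tok.txt \
    -i data/tmp/amr2.0/pred.text.tok.txt \
    -m bleu chrf --chrf-word-order 2 \
    -tok none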

Linearizations

The previously shown commands assume the use of the DFS-based linearization. To use BFS or PENMAN, uncomment the relevant lines in configs/config.yaml (for training). As for the evaluation scripts, replace the --penman-linearization --use-pointer-tokens flags with --use-pointer-tokens for BFS, or with --penman-linearization for PENMAN, as shown below.
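
For example, in the bin/predict_amrs.py invocation from the Evaluate section, the final line would become one of:

    --use-pointer-tokens      # BFS
    --penman-linearization    # PENMAN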

License

This project is released under the CC-BY-NC-SA 4.0 license (see LICENSE). If you use SPRING, please put a link to this repo.

Acknowledgements

The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the ELEXIS project No. 731015 under the European Union’s Horizon 2020 research and innovation programme.

This work was supported in part by the MIUR under the grant "Dipartimenti di eccellenza 2018-2022" of the Department of Computer Science of the Sapienza University of Rome.

spring's People

Contributors

mbevila, navigli, rexhinab


spring's Issues

Experiment Config

Hi, thanks for your nice work.
I see there are several settings in the config:
[screenshot: experiment settings in config.yaml]
It seems that the BFS, PENMAN, and BART-baseline settings are the same.
I wonder if I have missed any details.

In addition,
following the script in the README.md, I trained the model and evaluated it on the test dataset.
Without using BLINK, the model achieves an 84.4 Smatch score.

I see that in the config use_recategorization is False, so this result should correspond to SPRING^{DFS} + silver in the paper, right? But I cannot find where the silver data is. It is strange.

Error when loading checkpoint.

RuntimeError: Error(s) in loading state_dict for AMRBartForConditionalGeneration:
Unexpected key(s) in state_dict: "model.encoder.embed_backreferences.weight", "model.encoder.embed_backreferences.transform.weight", "model.encoder.embed_backreferences.transform.bias", "model.decoder.embed_backreferences.weight", "model.decoder.embed_backreferences.transform.weight", "model.decoder.embed_backreferences.transform.bias".

When running:

python bin/predict_amrs.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.amr.txt \
    --pred-path data/tmp/amr2.0/pred.amr.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

With the http://nlp.uniroma1.it/AMR/AMR2.parsing-1.0.tar.bz2 checkpoint (AMR2.amr-lin3.pt).

Can those keys be ignored from the checkpoint?

Question about the changes in AMRBartForConditionalGeneration

I see that you have a custom modeling_bart.py file with AMRBartForConditionalGeneration. It looks like most of the changes revolve around adding "backreferences", but it's difficult to tell, since I'm not sure which version of transformers that file originally came from (do you happen to know?).

  • Is there anywhere that explains, conceptually, what is being changed in this class? I didn't see these modifications explained in your paper "One SPRING to Rule Them Both". Are they detailed anywhere else?

My goal is to upgrade to the latest transformers library for compatibility reasons. This repo trains correctly if I switch to the newest BartForConditionalGeneration from the transformers library, but I'm getting Smatch scores about 2 points lower than in your paper. Is this consistent with what you've seen (i.e., does the custom AMRBartForConditionalGeneration improve scores by about 2 points)?

Hyperparameters not the same in paper and config

Thank you for open-sourcing your repo! I am trying to reproduce your results but have had difficulty reaching the same scores. I then found that the hyperparameters in the config are not the same as those discussed in the paper's appendix. Specifically, you mention a beam size of 5 in the paper, but the config has 1. Could you please clarify which of these is correct?

I also noticed a warmup_steps value of 1, which seems out of place and is a very uncommon value. Can you confirm that this is indeed correct?

Size mismatch when loading state dict

Thanks for this amazing work!

I tried running the predict_amrs_from_plaintext.py script but came across a runtime error. It occurred when loading the state_dict of the checkpoint you released for AMR 3.0 into AMRBartForConditionalGeneration. I saw that you suggest using a transformers version < 3; I tried version 2.11.0, as suggested in your requirements.txt file, and also version 2.8.0, but the problem persisted. The tokenizers version is 2.7.0. I was wondering if you have any idea what the reason might be and how I can fix the problem? Many thanks!

I've attached the full error message below:

RuntimeError: Error(s) in loading state_dict for AMRBartForConditionalGeneration:
size mismatch for final_logits_bias: copying a param with shape torch.Size([1, 53587]) from checkpoint, the shape in current model is torch.Size([1, 53075]).
size mismatch for model.shared.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024]).
size mismatch for model.encoder.embed_tokens.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024]).
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024]).

loaded state dict contains a parameter group that doesn't match the size of optimizer's group

When trying to run the following command (the checkpoint is the AMR3.parsing-1.0.tar.bz2 one):

python bin/train.py --config configs/config.yaml --direction amr --checkpoint /content/drive/MyDrive/AMRS/SPRING_AMR3_parser.pt

I get the error

Traceback (most recent call last):
  File "bin/train.py", line 430, in <module>
    fp16=args.fp16,
  File "bin/train.py", line 76, in do_train
    optimizer.load_state_dict(torch.load(checkpoint)['optimizer'])
  File "/usr/local/envs/myenv/lib/python3.7/site-packages/torch/optim/optimizer.py", line 116, in load_state_dict
    raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

Environment

# packages in environment at /usr/local/envs/myenv:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             5.1                       1_gnu  
appdirs                   1.4.4                    pypi_0    pypi
ca-certificates           2023.01.10           h06a4308_0  
cached-property           1.5.2                    pypi_0    pypi
certifi                   2022.12.7        py37h06a4308_0  
charset-normalizer        3.1.0                    pypi_0    pypi
click                     8.1.3                    pypi_0    pypi
colorama                  0.4.6                    pypi_0    pypi
docker-pycreds            0.4.0                    pypi_0    pypi
filelock                  3.12.0                   pypi_0    pypi
future                    0.18.3                   pypi_0    pypi
gitdb                     4.0.10                   pypi_0    pypi
gitpython                 3.1.31                   pypi_0    pypi
idna                      3.4                      pypi_0    pypi
importlib-metadata        6.6.0                    pypi_0    pypi
joblib                    1.2.0                    pypi_0    pypi
ld_impl_linux-64          2.38                 h1181459_1  
libffi                    3.4.2                h6a678d5_6  
libgcc-ng                 11.2.0               h1234567_1  
libgomp                   11.2.0               h1234567_1  
libstdcxx-ng              11.2.0               h1234567_1  
lxml                      4.9.2                    pypi_0    pypi
ncurses                   6.4                  h6a678d5_0  
networkx                  2.6.3                    pypi_0    pypi
numpy                     1.21.6                   pypi_0    pypi
openssl                   1.1.1t               h7f8727e_0  
packaging                 23.1                     pypi_0    pypi
pathtools                 0.1.2                    pypi_0    pypi
penman                    1.2.2                    pypi_0    pypi
pip                       22.3.1           py37h06a4308_0  
portalocker               2.7.0                    pypi_0    pypi
protobuf                  4.22.3                   pypi_0    pypi
psutil                    5.9.5                    pypi_0    pypi
python                    3.7.16               h7a1cb2a_0  
pytorch-ignite            0.4.11                   pypi_0    pypi
pyyaml                    6.0                      pypi_0    pypi
readline                  8.2                  h5eee18b_0  
regex                     2022.10.31               pypi_0    pypi
requests                  2.29.0                   pypi_0    pypi
sacrebleu                 2.3.1                    pypi_0    pypi
sacremoses                0.0.53                   pypi_0    pypi
sentencepiece             0.1.98                   pypi_0    pypi
sentry-sdk                1.21.1                   pypi_0    pypi
setproctitle              1.3.2                    pypi_0    pypi
setuptools                65.6.3           py37h06a4308_0  
six                       1.16.0                   pypi_0    pypi
smatch                    1.0.4                    pypi_0    pypi
smmap                     5.0.0                    pypi_0    pypi
spring-amr                1.0                       dev_0    <develop>
sqlite                    3.41.2               h5eee18b_0  
tabulate                  0.9.0                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0  
tokenizers                0.7.0                    pypi_0    pypi
torch                     1.5.0                    pypi_0    pypi
tqdm                      4.65.0                   pypi_0    pypi
transformers              2.11.0                   pypi_0    pypi
typing-extensions         4.5.0                    pypi_0    pypi
urllib3                   1.26.15                  pypi_0    pypi
wandb                     0.15.0                   pypi_0    pypi
wheel                     0.38.4           py37h06a4308_0  
xz                        5.2.10               h5eee18b_1  
zipp                      3.15.0                   pypi_0    pypi
zlib                      1.2.13               h5eee18b_0 

Any ideas as to what I'm doing wrong?
Thanks in advance.

Question about missing dev-gold.txt during training

Hi,
Thank you for your work.
I have set up a new environment, installed all the libraries, and the dataset I used to train this parser is LDC2017T10 (AMR 2.0).
However, when I run the command (python bin/train.py --config configs/config.yaml --direction amr) to train the AMR parser, I get this issue:
FileNotFoundError: [Errno 2] No such file or directory: '.../spring/data/tmp/dev-gold.txt'
I don't see any information about this file in the README either.
Did you forget to upload it, or did I miss a step?

Thanks.

Further fine-tuning SPRING checkpoints

Hello, is there a way to use your scripts to further fine-tune one of your pretrained checkpoints (e.g., with a new source of AMRs that is not in your training data)? Thanks.
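
A plausible, untested sketch: bin/train.py accepts a --checkpoint flag, as used in the "loaded state dict" issue above. Note, however, that that issue reports an optimizer parameter-group mismatch when resuming this way.

python bin/train.py --config configs/config.yaml --direction amr \
    --checkpoint runs/<pretrained-checkpoint>.pt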

Reproduce AMR2Text results

Thanks for your nice work!
I ran into a few problems when trying to reproduce the AMR-to-Text results on AMR 2.0.

  1. I tried to run the following command using the default config (DFS):
python bin/train.py --config configs/config.yaml --direction text

but got a BLEU score of 41.78, which is lower than the result (45.3) reported in your paper.

  2. I also tried predicting with the released checkpoint AMR2.generation.pt, as follows:
python bin/predict_sentences.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.text.txt \
    --pred-path data/tmp/amr2.0/pred.text.txt \
    --checkpoint runs/AMR2.generation.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

but I only got a BLEU score of 42.3.

I have no idea what is going wrong; could anyone give me some suggestions?
My virtual environment is available here.

Parameter mismatch in predict_amrs_from_plaintext.py

size mismatch for final_logits_bias: copying a param with shape torch.Size([1, 53587]) from checkpoint, the shape in current model is torch.Size([1, 53075]).
size mismatch for model.shared.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024]).
size mismatch for model.encoder.embed_tokens.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024]).
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([53587, 1024]) from checkpoint, the shape in current model is torch.Size([53075, 1024])

When I used predict_amrs_from_plaintext.py on a custom text file, I got a mismatch between the model config and the model checkpoint. Specifically, I used AMR3.parsing.pt as the model checkpoint and bart-large as the model config, in the text-to-graph direction. Can you help me resolve this issue?

Using Pre-Trained Checkpoints

Hi @mbevila ,

Really cool work. I would like to use your model for a downstream task.

I have followed the instructions for installing the repo and the pre-trained checkpoints. Do you have any examples of how I can use the model on external text?

Best,
Zoher

Problems with the evaluation and the released Text-to-AMR 2.0 model AMR2.parsing-1.0.tar.bz2

Hi, I have a question about the evaluation process.

As you mentioned in the paper, the Smatch and fine-grained scores are generated with the script provided by Damonte, Cohen, and Satta (2017) (amr-evaluation).
In the README.md, you say that the score from bin/predict_amrs.py is higher than the one from amr-evaluation. So, in your paper, are the Smatch scores from bin/predict_amrs.py or from amr-evaluation?

About the Text-to-AMR 2.0 model AMR2.parsing-1.0.tar.bz2:
I downloaded the model and used bin/predict_amrs.py to parse the AMR 2.0 test data, but I only get 83.6 Smatch (as reported by predict_amrs.py).
After I use bin/blinkify.py, I get a score of 36.8, which I think corresponds to the score of SPRING^{DFS} in Table 2. Do you plan to release the model with the best performance, SPRING^{DFS}+recat?

Thank you very much for your reply.

Cannot regenerate the same output

Hi,

I am doing the Text-to-AMR task: the input is text and the output is an AMR. The beam size I used is 1, but when I run generation twice, the output AMRs are not the same. How can I make the output AMR the same for each run? Can I set something like a seed to control that? Many thanks.

Which version of transformers is compatible with the code?

We're having problems installing transformers 2.11 because of tokenizers. I have tried multiple transformers versions below 3.0, but I still get issues; for example, with transformers 2.8 I get AttributeError: 'BartConfig' object has no attribute 'static_position_embeddings'. Can you please tell us which other versions of transformers work with the codebase, or update the codebase to work with newer versions of transformers?

Links to models cannot be reached

It seems the links to the models are no longer valid: I cannot open any of them, no matter how I change my network environment.

Preprocessing for Spanish?

Hi, I am trying to fine-tune SPRING on a small set of annotated Spanish AMRs to see how its cross-lingual parsing performance is. The pre-processing does not seem to work for Spanish (understandably), but we can't find the place in the code where e.g. spaCy is called for English preprocessing, which we could replace with something for Spanish. Can you help guide us to where this might be, or offer general clarification? Thanks!

code question

Hi, thanks for your nice work.
I read the code and have a naive question:
[screenshot: code computing po_logits]
What is the role of po_logits? Why not just use lm_logits, as in the original BART?

Cannot download weights

Thank you for sharing the code and weights. Nice work!

However, I came across a download issue when reproducing your work: I cannot download any of the weights. Specifically, the download always gets stuck at about 90-150 KB out of 4.4 GB and then times out.

I suspect the weights cannot be accessed from the outside world. It would be greatly appreciated if you could share the weights via Google Cloud or another online service.

Solutions tried:

  1. downloading with browsers
  2. downloading with Free Download Manager
  3. downloading on a local network
  4. downloading in Google Colab using wget http://nlp.uniroma1.it/AMR/AMR2.parsing-1.0.tar.bz2
  5. downloading with different proxies from different areas, e.g. Japan, Hong Kong, Russia

Example:

I am sharing a Google Colab notebook where you can double-check whether the weights are accessible from the Internet. In the screenshot, the download starts but soon times out.

colab

Questions regarding backpointers

Hi, Michele, Rexhina and Roberto,

Thank you for your nice paper. I'm reading your code and having some trouble understanding it. Could you elaborate a bit on the role of backpointers?

I know you are using pointers to represent variables in AMR graphs. What's not clear to me is this: since you have use_pointer_tokens=True for each of your linearization methods, pointers will have their own embeddings and extract_backreferences will never return any backreferences, right? Then the code for calculating pointer_q and pointer_k, and for optimizing this "copy mechanism", is no longer needed. Am I right?

What was the disruptive change in terms of positional embeddings?

It was a pain to install the requested version of transformers (well, actually its tokenizers==0.7.0 dependency) on our cluster, so I am hoping to contribute a fix that makes the library compatible with recent transformers versions. Can you give a bit more information about the issues you experienced and what the problem is?

Pre-trained checkpoints of both BFS and PENMAN versions for AMR parsing from plain text

Could you provide pre-trained checkpoints of both the BFS and PENMAN versions for predicting AMR parses from plain text?
When I substitute --penman-linearization --use-pointer-tokens with --use-pointer-tokens (for BFS) or with --penman-linearization (for PENMAN) and use the pre-trained checkpoints from the README, I don't get good AMR graphs.

Thanks! :)

Bug in evaluate

Hi, when I run
python bin/predict_amrs.py \
    --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
    --gold-path data/tmp/amr2.0/gold.amr.txt \
    --pred-path data/tmp/amr2.0/pred.amr.txt \
    --checkpoint runs/<checkpoint>.pt \
    --beam-size 5 \
    --batch-size 500 \
    --device cuda \
    --penman-linearization --use-pointer-tokens

I run into a problem:

RuntimeError: Error(s) in loading state_dict for AMRBartForConditionalGeneration:
size mismatch for final_logits_bias: copying a param with shape torch.Size([1, 53587]) from checkpoint, the shape in current model is torch.Size([1, 53075])

Could you help me?

Mistake in smart init

I think there is a small mistake that leads to a big difference in the smart initialization.

s_ = s + tokenizer.INIT

This line will never match, because the special INIT token comes before the token in BART, not after. So the match only triggers for these special tokens (the ones listed after "match" below), but more by accident than by intent, I think.

match -00 Ġ
match -01 Ġ
match -02 Ġ
match -03 Ġ
match -04 Ġ
match -05 Ġ
match -06 Ġ
match -07 Ġ
match -08 Ġ
match -09 Ġ
match -10 Ġ
match -11 Ġ
match -12 Ġ
match -13 Ġ
match -14 Ġ
match -15 Ġ
match -16 Ġ
match -17 Ġ
match -18 Ġ
match -19 Ġ
match -21 Ġ
match -22 Ġ
match -23 Ġ
match -24 Ġ
match -25 Ġ
match -26 Ġ
match -27 Ġ
match -28 Ġ
match -29 Ġ
match -20 Ġ
match -31 Ġ
match -32 Ġ
match -33 Ġ
match -34 Ġ
match -35 Ġ
match -36 Ġ
match -37 Ġ
match -38 Ġ
match -39 Ġ
match -40 Ġ
match -41 Ġ
match -42 Ġ
match -43 Ġ
match -44 Ġ
match -45 Ġ
match -46 Ġ
match -47 Ġ
match -48 Ġ
match -49 Ġ
match -50 Ġ
match -51 Ġ
match -52 Ġ
match -53 Ġ
match -54 Ġ
match -55 Ġ
match -56 Ġ
match -57 Ġ
match -58 Ġ
match -59 Ġ
match -60 Ġ
match -61 Ġ
match -62 Ġ
match -63 Ġ
match -64 Ġ
match -65 Ġ
match -66 Ġ
match -67 Ġ
match -68 Ġ
match -69 Ġ
match -70 Ġ
match -71 Ġ
match -72 Ġ
match -73 Ġ
match -74 Ġ
match -75 Ġ
match -76 Ġ
match -77 Ġ
match -78 Ġ
match -79 Ġ
match -80 Ġ
match -81 Ġ
match -82 Ġ
match -83 Ġ
match -84 Ġ
match -85 Ġ
match -86 Ġ
match -87 Ġ
match -88 Ġ
match -89 Ġ
match -90 Ġ
match -91 Ġ
match -92 Ġ
match -93 Ġ
match -94 Ġ
match -95 Ġ
match -96 Ġ
match -97 Ġ
match -98 Ġ
match -of Ġ

They match because of an empty token: when looking for subcomponents to use, the if-else statement before this loop creates components by splitting on -. But when you split -01 on -, the result is ["", "01"], and as a consequence the empty string leads to this accidental match.

The fix would be to add the INIT token before the token (i.e., s_ = tokenizer.INIT + s), and probably also to filter empty strings out of tok_split.

What is the input format of a text file?

Hi, thanks for the work. I would like to use the existing Text-to-AMR tool like this:

    python bin/predict_amrs.py \
        --datasets <AMR-ROOT>/data/amrs/split/test/*.txt \
        --gold-path data/tmp/amr2.0/gold.amr.txt \
        --pred-path data/tmp/amr2.0/pred.amr.txt \
        --checkpoint runs/<checkpoint>.pt \
        --beam-size 5 \
        --batch-size 500 \
        --device cuda \
        --penman-linearization --use-pointer-tokens

Could you give me an example of the <AMR-ROOT>/data/amrs/split/test/*.txt files?

I have tried a plain text file, but it reported an AssertionError here.

I do not have an LDC license and just want to use this tool for my project. Thank you.
