
jamr's Introduction

JAMR - AMR Parser

This is the JAMR Parser, updated for SemEval 2016 Task 8.

JAMR is a semantic parser, generator, and aligner for the Abstract Meaning Representation. The parser and aligner have been updated to include improvements from SemEval 2016 Task 8.

For the generator, see the branch Generator.

We have released hand-alignments for 200 sentences of the AMR corpus.

For the performance of the parser (including for the parser from SemEval 2016), see docs/Parser_Performance.

Building

First, check out the GitHub repository (or download the latest release):

git clone https://github.com/jflanigan/jamr.git
cd jamr
git checkout Semeval-2016

JAMR depends on Scala, Illinois NER system v2.7, tokenization scripts in cdec, and WordNet for the aligner. To download these dependencies into the subdirectory tools, cd to the jamr repository and run (requires wget to be installed):

./setup

You should agree to the terms and conditions of the software dependencies before running this script. If you download them yourself, you will need to change the relevant environment variables in scripts/config.sh. You may need to edit the Java memory options in the scripts run and sbt, and in build.sbt, if you get out-of-memory errors.

Source the config script - you will need to do this before running any of the scripts below:

. scripts/config.sh

Run ./compile to build an uberjar, which will be output to target/scala-{scala_version}/jamr-assembly-{jamr_version}.jar (the setup script does this for you).

Running the Parser

Download and extract model weights models-2016.09.18.tgz into the directory $JAMR_HOME/models. To parse a file (cased, untokenized, with one sentence per line, no blank lines) with the model trained on LDC2015E86 data do:

. scripts/config.sh
scripts/PARSE.sh < input_file > output_file 2> output_file.err

The output is AMR format, with some extra fields described in docs/Nodes and Edges Format and docs/Alignment Format. To run the parser trained on other datasets (such as LDC2014T12, or the freely downloadable Little Prince data) source the config scripts config_Semeval-2016_LDC2014T12.sh or config_Semeval-2016_Little_Prince.sh instead.
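As a concrete illustration (the sentences below are made up), the input file can be prepared and checked against the expected format like this:

```shell
# Hypothetical example of preparing an input file in the format PARSE.sh expects:
# cased, untokenized, one sentence per line, no blank lines.
printf '%s\n' \
  "The boy wants to go to New York." \
  "It did not rain yesterday." > input_file
# Sanity check: refuse to continue if any blank lines slipped in.
if grep -q '^$' input_file; then echo "blank lines found" >&2; exit 1; fi
echo "input ok"
# scripts/PARSE.sh < input_file > output_file 2> output_file.err
```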

Running the Aligner

To run the rule-based aligner:

. scripts/config.sh
scripts/ALIGN.sh < amr_input_file > output_file

The output of the aligner is described in docs/Alignment Format. The aligner has been updated for SemEval 2016.
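For reference, a minimal aligner run might look like the following; the AMR below is a toy example in the usual AMR corpus format, and the commented commands assume a built JAMR checkout:

```shell
# Write a one-sentence AMR corpus file in the format the aligner reads.
cat > amr_input_file <<'EOF'
# ::snt The boy wants to go.
(w / want-01
      :ARG0 (b / boy)
      :ARG1 (g / go-01
            :ARG0 b))
EOF
# Then, from a built JAMR checkout:
# . scripts/config.sh
# scripts/ALIGN.sh < amr_input_file > output_file
```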

Hand Alignments

To create the hand alignments file, see docs/Hand Alignments.

Experimental Pipeline

The following describes how to train and evaluate the parser. There are scripts to train the parser on various datasets, as well as a general train script to train the parser on any AMR dataset. More detailed instructions for training the parser are in docs/Step by Step Training.

To train the parser on LDC data or public AMR Bank data, download the data .tgz file into $JAMR_HOME/data/ and run one of the train scripts. The data file and the train script to run for each dataset are listed in the following table:

| Dataset | Date released | Size (# sents) | Script to run | File to move to data/ |
| --- | --- | --- | --- | --- |
| LDC2015E86 (SemEval 2016 Task 8 data) | August 31, 2015 | 19,572 | scripts/train_LDC2015E86.sh | LDC2015E86_DEFT_Phase_2_AMR_Annotation_R1.tgz |
| LDC2014T12 | June 16, 2014 | 13,051 | scripts/train_LDC2014T12.sh | amr_anno_1.0_LDC2014T12.tgz |
| LDC2014E41 | May 30, 2014 | 18,779 | scripts/train_LDC2014E41.sh | LDC2014E41_DEFT_Phase_1_AMR_Annotation_R4.tgz |
| LDC2013E117 (Proxy only) | October 14, 2013 | 8,219 | scripts/train_LDC2013E117.sh | LDC2013E117.tgz |
| AMR Bank v1.4 | November 14, 2014 | 1,562 | scripts/train_Little_Prince.sh | (automatically downloaded) |

For LDC2013E117, LDC2014E41, or LDC2015E86, you will need a license for LDC DEFT project data. The trained model will go into a subdirectory of models/ and the evaluation results will be printed and saved to models/directory/RESULTS.txt. The performance of the parser on the various datasets is in docs/Parser Performance.

To train the parser on another dataset, create a config file in scripts/ and then do:

. scripts/my_config_file.sh
scripts/TRAIN.sh

The trained model will be saved into the $MODEL_DIR specified in the config script, and the results saved in $MODEL_DIR/RESULTS.txt. To run the parser with your trained model, source my_config_file.sh before running PARSE.sh.
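As a sketch, a custom config might look like the one below. Only $MODEL_DIR is documented here; reusing scripts/config.sh and the exact variables to set are assumptions you should verify against the existing train scripts in your checkout.

```shell
# Write a hypothetical custom config; verify variable names against scripts/config.sh.
mkdir -p scripts
cat > scripts/my_config_file.sh <<'EOF'
. "$JAMR_HOME/scripts/config.sh"               # assumption: reuse the base settings
export MODEL_DIR="$JAMR_HOME/models/my_model"  # where TRAIN.sh saves the model
EOF
# . scripts/my_config_file.sh
# scripts/TRAIN.sh
```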

Evaluating

To evaluate a trained model against a gold standard AMR file, do:

. scripts/my_config_file.sh
scripts/EVAL.sh gold_amr_file optional_iteration

The optional_iteration specifies which weight file iteration to use, otherwise stage2-weights is used. The predicted output will be in models/my_directory/gold_amr_file.parsed-gold-concepts for the parser with oracle concept ID, models/my_directory/gold_amr_file.parsed for the full pipeline, and the results saved in models/my_directory/gold_amr_file.results.

jamr's People

Contributors

akornilo, jflanigan, jonmay, sammthomson


jamr's Issues

Use of LDC2014T12 returns an empty graph for words that are present in the model

I have successfully installed the JAMR parser in Google Colab and I am using the LDC2014T12 pretrained model. I was trying to run some sample sentences to understand where the AMR parser fails. When I try sentences with words like "doctor", "sandwich", or "medicine", I get an empty graph, even though these words are present in the pretrained model.
I am not sure why JAMR fails when a particular word is in the model. I don't think I am making any mistake running the parser, because it works fine for a few other sentences. Could you please let me know the possible reasons why this may happen, or what I can do to rectify it?

Thank you.

"Adding another span warning"

Hi, this is more of a question than an issue. The aligner gives me the message "WARNING ADDING ANOTHER SPAN TO NODE" for a couple of sentences. What does it mean?
Regards,
Marco

Retraining the model, but the program terminates after a few minutes

I'm trying to retrain JAMR with a larger AMR dataset, but the program terminates automatically after several minutes. I have tried modifying the Java memory settings in some of the config files, but that didn't work.
The log file shows below:

~/jamr/scripts/preprocessing ~/jamr

  • ./cmd.snt
  • ./cmd.snt.tok
  • ./cmd.tok
  • ./cmd.aligned

It seems to have terminated during the preprocessing step.
Appreciate your help.

Instructions for Successfully Installing JAMR by Updating Packages

Reason for Failure

The original JAMR was installed on the server in 2015 or 2016, so some packages were broken or outdated. The default setup script in the JAMR repo points to versions that are no longer available.
I have installed JAMR again on my MacBook (Mojave 10.14.6), uploaded a new version, and share the details below.

Install Pipeline

First, please ensure the following environment:
Java 8 (download from the Oracle website)
sbt 1.0.2, Scala 2.11.8 (use SDKMAN! to install, e.g. sdk install scala 2.11.8)

Then, modify these files:

  • jamr/project/build.properties:

sbt.version=1.0.2

  • jamr/build.sbt:

// import AssemblyKeys._
// assemblySettings
name := "jamr"
version := "0.1-SNAPSHOT"
organization := "edu.cmu.lti.nlp"
scalaVersion := "2.11.8"
crossScalaVersions := Seq("2.11.8","2.11.12", "2.12.4","2.10.6")
libraryDependencies ++= Seq(
  "com.jsuereth" %% "scala-arm" % "2.0",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.4.1",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.4.1" classifier "models",
  "org.scala-lang.modules" %% "scala-parser-combinators" % "1.1.2",
  "org.scala-lang.modules" %% "scala-pickling" % "0.10.1"
  // "org.scala-lang" % "scala-swing" % "2.10.3"
)

  • jamr/project/plugins.sbt:

resolvers += "Sonatype releases" at "https://oss.sonatype.org/content/repositories/releases"
resolvers += "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
// addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.6.0")
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")

PS: The sbt-idea plugin has been unsupported since it was merged into the official Scala plugin about two years ago, and project files generated by outdated versions of the plugin are not compatible with IntelliJ IDEA starting with version 14, if I recall correctly.

Finally, update the Scala version in the CLASSPATH environment variable in scripts/config.sh.
(Thanks @danielhers and @zhangzx-sjtu for pointing this out.)

export CLASSPATH=".:${JAMR_HOME}/target/scala-2.11/jamr-assembly-0.1-SNAPSHOT.jar"

After doing the above, install JAMR by running:

. scripts/config.sh
./setup

Troubleshooting

  1. Java may be inconsistent with Scala, as mentioned in the issue. Just install Scala 2.11.12 and modify the corresponding lines in jamr/build.sbt and jamr/scripts/config.sh.
  2. Run sbt before running ./setup in the JAMR directory if the problem arises.

Acknowledgement

Thanks @LeonardoEmili for giving us a good summary in #43 (comment) on a successful setup and for creating a PR for this issue.
This issue originated from an issue under the CoNLL 2019 shared task, and @danielhers gave many suggestions.

Hand alignment for LDC2014T12

Hi Jeff,

According to https://github.com/jflanigan/jamr/blob/Semeval-2016/docs/Hand_Alignments.md, hand alignments are annotated for LDC2013E117. We would like to work on the alignment problem, but we don't have access to the LDC2013E117 data; we do have LDC2014T12. When I look into scripts/hand_alignments/LDC2013E117/snt.ids, it seems the alignments were created on the proxy portion, so I'm wondering if it's possible to adapt the LDC2013E117 alignments to LDC2014T12.

I tried to replace tar -xzOf "$JAMR_HOME/data/LDC2013E117.tgz" ./LDC2013E117_DEFT_Phase_1_AMR_Annotation_R3/data/deft-amr-release-r3-proxy.txt with cat $JAMR_HOME/data/amr_anno_1.0/data/unsplit/amr-release-1.0-proxy.txt but lots of the patches in scripts/hand_alignments/LDC2013E117/patch.hand_align were rejected.

Is there anything else I should pay attention to in order to get it working on LDC2014T12? Thanks!

Regards,

Something goes wrong when executing ./setup: connection to github-cloud.s3.amazonaws.com refused

While running ./setup, the connection to github-cloud.s3.amazonaws.com was refused, like this:
Resolving github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)... 52.216.81.88
Connecting to github-cloud.s3.amazonaws.com (github-cloud.s3.amazonaws.com)|52.216.81.88|:443... failed: Connection refused.

If I manually visit "github-cloud.s3.amazonaws.com" from browser, it says:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FD0C2818226FFC6E</RequestId><HostId>xgNMiNLCyhvwoiva6l30OhOtKUONyQPykNljIVGQcjJkL9rfe45gmo7lCXOFWd/kpJGbnSsWUcc=</HostId></Error>

Is this normal?

No output for PARSE.sh

Hi, I was trying to run scripts/PARSE.sh < input_file > output_file 2> output_file.err with just one test sentence in the input file, but there was nothing in the output file, and in the log file it had the following:

 ### Tokenizing input ###
Unicode character 0xfdd3 is illegal at /home/nahgnaw/jamr/tools/cdec/corpus/support/quote-norm.pl line 56.
 ### Running NER system ###
~/jamr/tools/IllinoisNerExtended ~/jamr
Adding feature: Forms
Adding feature: Capitalization
Adding feature: WordTypeInformation
Adding feature: Affixes
Adding feature: PreviousTag1
Adding feature: PreviousTag2
Adding feature: PreviousTagPatternLevel1
Adding feature: PreviousTagPatternLevel2
Adding feature: PrevTagsForContext
Adding feature: PredictionsLevel1
Adding feature: GazetteersFeatures
Adding feature: BrownClusterPaths
Loading gazetteers....
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    loading gazetteer:....ner-ext/KnownLists/WikiPeople.lst
    loading gazetteer:....ner-ext/KnownLists/ordinalNumber.txt
    loading gazetteer:....ner-ext/KnownLists/WikiSongs.lst
    loading gazetteer:....ner-ext/KnownLists/WikiManMadeObjectNames.lst
    loading gazetteer:....ner-ext/KnownLists/WikiArtWorkRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/known_name.lst
    loading gazetteer:....ner-ext/KnownLists/Occupations.txt
    loading gazetteer:....ner-ext/KnownLists/WikiLocations.lst
    loading gazetteer:....ner-ext/KnownLists/known_state.lst
    loading gazetteer:....ner-ext/KnownLists/WikiCompetitionsBattlesEventsRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/WikiOrganizationsRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/known_nationalities.lst
    loading gazetteer:....ner-ext/KnownLists/WikiManMadeObjectNamesRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/WikiSongsRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/cardinalNumber.txt
    loading gazetteer:....ner-ext/KnownLists/currencyFinal.txt
    loading gazetteer:....ner-ext/KnownLists/known_names.big.lst
    loading gazetteer:....ner-ext/KnownLists/known_jobs.lst
    loading gazetteer:....ner-ext/KnownLists/known_title.lst
    loading gazetteer:....ner-ext/KnownLists/WikiFilmsRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/temporal_words.txt
    loading gazetteer:....ner-ext/KnownLists/measurments.txt
    loading gazetteer:....ner-ext/KnownLists/known_place.lst
    loading gazetteer:....ner-ext/KnownLists/known_country.lst
    loading gazetteer:....ner-ext/KnownLists/known_corporations.lst
    loading gazetteer:....ner-ext/KnownLists/WikiOrganizations.lst
    loading gazetteer:....ner-ext/KnownLists/VincentNgPeopleTitles.txt
    loading gazetteer:....ner-ext/KnownLists/WikiFilms.lst
    loading gazetteer:....ner-ext/KnownLists/WikiLocationsRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/WikiArtWork.lst
    loading gazetteer:....ner-ext/KnownLists/WikiPeopleRedirects.lst
    loading gazetteer:....ner-ext/KnownLists/WikiCompetitionsBattlesEvents.lst
    loading gazetteer:....ner-ext/KnownLists/KnownNationalities.txt
found 33 gazetteers
1288301 words added
95262 words added
85963 words added

Working parameters are:
    inferenceMethod=GREEDY
    beamSize=5
    thresholdPrediction=false
    predictionConfidenceThreshold=-1.0
    labelTypes
        PER     ORG     LOC     MISC
    logging=false
    debuggingLogPath=null
    forceNewSentenceOnLineBreaks=true
    keepOriginalFileTokenizationAndSentenceSplitting=false
    taggingScheme=BILOU
    tokenizationScheme=DualTokenizationScheme
    pathToModelFile=data/Models/CoNLL/finalSystemBILOU.model
Brown clusters resource:
    -Path: brown-clusters/brown-english-wikitext.case-intact.txt-c1000-freq10-v3.txt
    -WordThres=5
    -IsLowercased=false
Brown clusters resource:
    -Path: brown-clusters/brownBllipClusters
    -WordThres=5
    -IsLowercased=false
Brown clusters resource:
    -Path: brown-clusters/brown-rcv1.clean.tokenized-CoNLL03.txt-c1000-freq1.txt
    -WordThres=5
    -IsLowercased=false

Tagging file: /tmp/jamr-25472.snt.tmp
Reading model file : data/Models/CoNLL/finalSystemBILOU.model.level1
Reading model file : data/Models/CoNLL/finalSystemBILOU.model.level2
Extracting features for level 2 inference
Done - Extracting features for level 2 inference
~/jamr

Was it because of the Unicode error, or is there something else I'm missing?

Thanks!

AMRs with minus concepts :mod (-/-)

When I parse sentences with JAMR, I sometimes get minus concepts which can have relations.
For example:

# ::snt There is no basketball player on the court floor and no one is grabbing the ball
# ::tok There is no basketball player on the court floor and no one is grabbing the ball
# ::alignments 15-16|0.1.1 13-14|0.1 10-11|0.0.1 9-10|0 8-9|0.0 7-8|0.0.0 4-5|0.1.0.0+0.1.0 3-4|0.1.0.0.0 2-3|0.0.1.0 ::annotator JAMR dev v0.3 ::date 2019-01-16T14:53:26.909
(a / and 
      :op1 (f / floor 
            :mod (c / court) 
            :mod (- / - 
                  :domain-of -)) 
      :op2 (g / grab-01 
            :ARG0 (t / thing 
                  :ARG0-of (p / play-12 
                        :ARG1 (b2 / basketball))) 
            :ARG1 (b / ball)))

Smatch also complains about the (- / -) concepts.

Error

While trying to train, I came across this error:

panic: swash_fetch got swatch of unexpected bit width, slen=1024, needents=64 at /home/tempuser/AMRParsing/jamr/tools/cdec/corpus/support/quote-norm.pl line 149, line 1.

Is it possible to call jamr directly from Java code?

Hi,
I would like to use JAMR as a library in Java Maven code. I have set up all the required paths and models. Is there any way to avoid calling the scripts and instead call JAMR from the code, getting the result as an object?

Thanks in advance

Little Prince

Hi,

In the README it says there should be a parser model trained on the Little Prince data, with a corresponding config file. Is it correct that it isn't there, and if so, could you please share it? It would be very helpful!

Many thanks.

Failed setup - sbt dependencies not found

Hi,
I'm having problems running the setup script.
It fails and reports these unresolved dependencies:

[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	::          UNRESOLVED DEPENDENCIES         ::
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	:: com.eed3si9n#sbt-assembly;0.10.2: not found
[warn] 	:: com.github.mpeltonen#sbt-idea;1.5.2: not found
[warn] 	:: com.typesafe.sbteclipse#sbteclipse-plugin;2.4.0: not found
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::

I'm running Scala 2.12.3 and sbt 1.0.2 on macOS Sierra 10.12.6.

Any clue on how to fix this? I'm not really familiar with Scala.

Thanks

Out of memory errors when parsing large files, and alignments for parsed files

Hi guys

I'm trying to parse a file of ~500k lines, and I always get the following error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NEWord.addTokenToSentence(NEWord.java:156)
        at edu.illinois.cs.cogcomp.LbjNer.ParsingProcessingData.PlainTextReader.parseText(PlainTextReader.java:33)
        at edu.illinois.cs.cogcomp.LbjNer.ParsingProcessingData.PlainTextReader.parsePlainTextFile(PlainTextReader.java:24)
        at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NETagPlain.tagData(NETagPlain.java:38)
        at edu.illinois.cs.cogcomp.LbjNer.LbjTagger.NerTagger.main(NerTagger.java:21)

Is there a way to avoid the OOM issue without allocating more memory to the JVM?

Also, is it possible to get alignments between a text file and the resulting parsed AMR file without running the align script, especially because the output of JAMR isn't in the format that the align script expects?

Thanks
Kris
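Not a JAMR-specific answer, but a common workaround is to split the input and parse the chunks separately, so the NER step never holds the whole file at once. A sketch with a toy input (the PARSE.sh calls are commented out because they need a built checkout):

```shell
# Make a toy "large" input of 25 lines and split it into 10-line chunks.
seq 1 25 | sed 's/^/Sentence /' > big_input
split -l 10 big_input chunk.
echo chunk.*      # chunk.aa chunk.ab chunk.ac
# Each chunk could then be parsed separately and the outputs concatenated:
# for f in chunk.*; do scripts/PARSE.sh < "$f" > "$f.amr" 2> "$f.err"; done
# cat chunk.*.amr > big_output
```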

setup error

Hello,
When running ./setup to install, I encounter the following error from the ./compile command:
scala.reflect.internal.MissingRequirementError: object scala.runtime in compiler mirror not found.
Is this a bug, something I can work around, or something very strange (possibly a problem on my end)?
thanks, Andrew

Setup error

I downloaded the JAMR parser.

git clone https://github.com/jflanigan/jamr.git
git checkout Semeval-2016

Then I ran the command ./setup, but an error occurred as follows:

:::: ERRORS
SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.jar

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.10.4/scala-compiler-2.10.4.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-lang/scala-compiler/2.10.4/scala-compiler-2.10.4.jar

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/jline/jline/2.11/jline-2.11.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/jline/jline/2.11/jline-2.11.jar

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.3.0/ivy-2.3.0.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.3.0/ivy-2.3.0.jar

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.jar

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-sbt/test-interface/1.0/test-interface-1.0.pom

SERVER ERROR: Not Implemented url=http://repo1.maven.org/maven2/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar

:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
unresolved dependency: org.scala-lang#scala-library;2.10.4: not found
unresolved dependency: org.scala-lang#scala-compiler;2.10.4: not found
unresolved dependency: jline#jline;2.11: not found
unresolved dependency: org.apache.ivy#ivy;2.3.0: not found
unresolved dependency: com.jcraft#jsch;0.1.46: not found
unresolved dependency: org.scala-sbt#test-interface;1.0: not found
Error during sbt execution: Error retrieving required libraries
(see /home/teqip-ii-cse-nlp-01-01/.sbt/boot/update.log for complete log)
Error: Could not retrieve sbt 0.13.5

What can I do to resolve this issue?

Thanks in advance.

Aligner error?

Hi thanks for this tool,

I got this error while running the aligner:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException
at edu.cmu.lti.nlp.amr.CorpusTool$$anonfun$main$1.apply(CorpusTool.scala:48)
at edu.cmu.lti.nlp.amr.CorpusTool$$anonfun$main$1.apply(CorpusTool.scala:43)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at edu.cmu.lti.nlp.amr.CorpusTool$.main(CorpusTool.scala:43)
at edu.cmu.lti.nlp.amr.CorpusTool.main(CorpusTool.scala)

(I removed the last line of ALIGN.sh to keep the /tmp files.) Looking at the /tmp/*.tok file, it seems that the last AMR graph is not printed (while its ::snt and ::id are printed).
I tried removing the last sentence and running it again; it didn't help. Same behavior.
I tried hacking it by doing the tokenization myself; it didn't help.

I managed to run it in the first sentence of the same dataset without problems.
I managed to run it with other AMR datasets.

any ideas?

thank you,
Miguel

Array Index Out of Bounds Exception when running JAMR

I believe I have followed all the steps in the readme successfully. I am now trying the last step,
scripts/PARSE.sh < input_file > output_file 2> output_file.err

I have created an input file with a few sentences on separate lines, with no blank lines.
When I run the script, it gets to ### Running Jamr ### and then has an error which I have pasted below:

Reading weights
done
Sentence: This is a very short test.

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:358)
at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:211)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:211)
at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)

I/O error on wordCounts.train

I'm trying to run JAMR with the provided models; however, I get the following error:


 ### Running JAMR ###
Stage1 features = List(bias, corpusIndicator, length, corpusLength, conceptGivenPhrase, count, phraseGivenConcept, phraseConceptPair, phrase, firstMatch, numberIndicator, sentenceMatch, andList, pos, posEvent, phraseConceptPairPOS, badConcept)
Exception in thread "main" java.io.FileNotFoundException: /home/reza/Documents/jamr-Semeval-2016/models/Semeval-2016_LDC2014T12/wordCounts.train (No such file or directory)
	at java.io.FileInputStream.open0(Native Method)
	at java.io.FileInputStream.open(FileInputStream.java:195)
	at java.io.FileInputStream.<init>(FileInputStream.java:138)
	at scala.io.Source$.fromFile(Source.scala:90)
	at scala.io.Source$.fromFile(Source.scala:75)
	at scala.io.Source$.fromFile(Source.scala:53)
	at edu.cmu.lti.nlp.amr.ConceptInvoke.package$.Decoder(package.scala:25)
	at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:113)
	at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)

Any help on how to get past this would be much appreciated.

Named entity strings added to AMR in wrong order?

One of my students ran some test sentences through JAMR, and we noticed that multiword named entities (possibly just out-of-vocabulary ones) were represented backwards. E.g., with "John Smith" as the input, the resulting AMR was

(p / person 
    :name (n / name 
        :op1 "Smith" 
        :op2 "John"))

Is this due to a bug in the NER heuristics?

Empty concept

I got this result from the parser, which seems like a bug. The sentence was taken from the DUC 2004 dataset.

# ::snt There were no boos.
# ::tok There were no boos .
# ::alignments 2-3|0 ::annotator JAMR dev v0.3 ::date 2017-11-19T19:54:42.228
# ::node	0	-	2-3
# ::root	0	-
(- / -)

issue in the process of tok

./tokenize-anything.sh < ~/Documents/ZeroShot/sen/AFP_ENG_20030417.0764 > out_file
Invalid range "\x{0CE6}-\x{0BEF}" in transliteration operator at ./support/quote-norm.pl line 149
I ran into this issue; could you please give me a suggestion to solve it? Cheers. @jflanigan

SBT move to httpS and Scala compiler not found

Hi,

Due to the recent sbt move to HTTPS, lots of downloads fail. Any plans to cope with this?

Also, the Scala compiler is not found when running ./compile. I have modified some of build.sbt to deal with point 1.

unresolved dependency: org.scala-lang#scala-compiler;2.11.3: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:217)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:126)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:125)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:103)
at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:48)
at sbt.IvySbt$$anon$3.call(Ivy.scala:57)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:98)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:81)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:102)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:62)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:52)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:57)
at sbt.IvySbt.withIvy(Ivy.scala:98)
at sbt.IvySbt.withIvy(Ivy.scala:94)
at sbt.IvySbt$Module.withModule(Ivy.scala:115)
at sbt.IvyActions$.update(IvyActions.scala:125)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1223)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1221)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1244)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1242)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1246)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1241)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:1249)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1214)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1192)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Thanks in advance.

slash in token results in malformed AMR

When parsing sentences containing the token '24/7', the JAMR parser returned AMRs with things like:

:ARG1 24/7

This results in errors when attempting to further process these AMRs; for example, smatch fails with:

Traceback (most recent call last):
File "smatch.py", line 927, in
main(args)
File "smatch.py", line 827, in main
amr1.rename_node(prefix1)
AttributeError: 'NoneType' object has no attribute 'rename_node'

This can be avoided by adding quotation marks in the parser output to treat the problematic token as a string, e.g.

:ARG1 "24/7"

This appears to be the approach used in the gold AMR data.
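That post-processing step can be sketched with sed; the pattern below is a rough illustration that only handles bare digit/digit constants after a role label:

```shell
# Rough workaround sketch (not part of JAMR): wrap bare slash-containing
# numeric constants such as 24/7 in quotes so tools like smatch can read
# the AMR. Extend the pattern for other problematic tokens.
echo ':ARG1 24/7' | sed -E 's@(:[A-Za-z0-9]+ )([0-9]+/[0-9]+)@\1"\2"@g'
# prints :ARG1 "24/7"
```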

some errors

On a small test sentence JAMR ran fine, but on a harder document it gave lots of array-out-of-bounds errors. Is this serious? If the syntactic dependency parse fails, does the AMR parser always return an empty semantic graph?

This was running scripts/PARSE.sh < LICENSE.txt > LICENSE.out

LICENSE.err.txt
LICENSE.out.txt

java.lang.ArrayIndexOutOfBoundsException

I'm using JAMR to parse a text with the following command.

scripts/PARSE.sh < ../text.in > ../text.out 2> output_file.err

The model that I was trying to use was LDC2014T12. But I get the following error.

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0 at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:307) at edu.cmu.lti.nlp.amr.AMRParser$$anonfun$main$3.apply(AMRParser.scala:192) at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772) at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108) at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771) at edu.cmu.lti.nlp.amr.AMRParser$.main(AMRParser.scala:192) at edu.cmu.lti.nlp.amr.AMRParser.main(AMRParser.scala)

I tried the other provided models, but the same error occurred.
scripts/EVAL.sh gives the same error as well.
Any help?

Thanks..

JAMR on Mac os X ?

Hello,
I would like to use the JAMR parser on Mac OS X.

- Is it possible?
- What modifications are necessary?

Best regards,
Bernard.

How I fixed training issues

I wanted to train JAMR on the recently released LDC dataset, but ran into issues while trying. PREPROCESS.sh and TRAIN.sh both failed silently after having generated a few preprocessed files. I am running macOS 10.12.5.

To combine the datasets .txt files into three unified training, dev, and test files, I simply modified the make_splits.sh file in jamr/scripts/preprocessing/LDC2015E86 to fix the paths and add the new files, then ran my modified version. This caused some problems.

Preprocessing kept breaking silently, so I opened the script and ran each command individually; I may have done the same with each script inside preprocess too. Eventually, I found the error that was killing preprocessing: character encoding issues introduced by make_splits.sh.
To fix this, I found and replaced the following characters in dev.txt, training.txt, and test.txt:
\x{93} -> "
\x{85} -> .
\x{92} -> '
’ -> '

To avoid this, I would recommend not concatenating the text files with bash cat as is done in make_splits.sh. Instead, use a language that handles the encoding better, such as python3, or do the concatenation by hand.

Despite this fix, I believe PREPROCESS.sh still wouldn't run straight through, but I successfully ran each inner command consecutively and completed preprocessing.

I then commented the preprocessing step out of TRAIN.sh, because it took 3 hours on the large new dataset, and ran TRAIN.sh.

There I encountered the final issue: error: command 'wf' not found during jamr/scripts/training/cmd.conceptTable.train.
wf is a small shell script in the same directory, so I fixed this error by prepending ./ to wf in cmd.conceptTable.train.

Hope this helps someone. If you have trained JAMR on the new LDC dataset, I would love to compare smatch scores. I received lackluster results in parsing after training, and I'd like to know if others experience the same.

P.S. With 16GB RAM and quad i7 at 3.4GHz, it took about 20 hours to train 10 iterations.

Using the splits given in the dataset:

  ----- Evaluation on Dev: Smatch (all stages) -----
Precision: 0.708
Recall: 0.651
Document F-score: 0.678

  ----- Evaluation on Dev: Smatch (gold concept ID) -----
Precision: 0.805
Recall: 0.718
Document F-score: 0.759

  ----- Evaluation on Dev: Smatch (oracle) -----
Precision: 0.871
Recall: 0.833
Document F-score: 0.851

  ----- Evaluation on Dev: Spans -----
Precision: 0.764265094281678
Recall: 0.799165061014772
F1: 0.7813255470785846


  ----- Evaluation on Test: Smatch (all stages) -----
Precision: 0.700
Recall: 0.643
Document F-score: 0.670

  ----- Evaluation on Test: Smatch (gold concept ID) -----
Precision: 0.795
Recall: 0.705
Document F-score: 0.747

  ----- Evaluation on Test: Smatch (oracle) -----
Precision: 0.870
Recall: 0.832
Document F-score: 0.851


  ----- Evaluation on Test: Spans -----
Precision: 0.7584370512206797
Recall: 0.7950197578874741
F1: 0.7762976573265962

trouble with wget command for Illinois tagger

When I run ./setup, I get this error
--2018-06-13 22:35:32-- https://github.com/jflanigan/jamr/releases/download/JAMR_v0.2/IllinoisNerExtended-2.7.tgz0
Resolving github.com (github.com)... 192.30.255.112, 192.30.255.113
Connecting to github.com (github.com)|192.30.255.112|:443... connected.
OpenSSL: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Unable to establish SSL connection.

I tried including --no-check-certificate in the wget command for the Illinois tagger, but it didn't help. Any idea how to fix this?

Thanks!

ERROR: Got sbt.ResolveException error while running compile command in setup

ERROR MESSAGE:

[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: com.eed3si9n#sbt-assembly;0.10.2: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes.
[warn] com.eed3si9n:sbt-assembly:0.10.2 (sbtVersion=0.13, scalaVersion=2.10)
[warn]
sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly;0.10.2: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:217)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:126)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:125)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:103)
at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:48)
at sbt.IvySbt$$anon$3.call(Ivy.scala:57)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:98)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:81)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:102)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:62)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:52)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:57)
at sbt.IvySbt.withIvy(Ivy.scala:98)
at sbt.IvySbt.withIvy(Ivy.scala:94)
at sbt.IvySbt$Module.withModule(Ivy.scala:115)
at sbt.IvyActions$.update(IvyActions.scala:125)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1223)
at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1221)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1244)
at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1242)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1246)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1241)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:1249)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1214)
at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1192)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
at sbt.std.Transform$$anon$4.work(System.scala:64)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:244)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error sbt.ResolveException: unresolved dependency: com.eed3si9n#sbt-assembly;0.10.2: not found

Please let me know if you need any more information.
Thanks!

Aligner not returning anything

Hi, I followed the instruction and tried to align the following AMR:

cat amr_input 
(f / flower
      :mod (e / even)
      :ARG0-of (h / have-03
            :ARG1 (t / thorn)))

using the command "scripts/ALIGN.sh < amr_input".
I get this warning:

 ### Tokenizing ###
which: no uconv in (/sbin:/bin:/usr/sbin:/usr/bin)
[...]/jamr/tools/cdec/corpus/support/utf8-normalize.sh: FFFF Cannot find ICU uconv (http://site.icu-project.org/) ... falling back to iconv. Quality may suffer.

but I don't get any output. Is uconv necessary for the aligner to work, or is something else going wrong here? The parser works properly.

Thanks in advance,
Marco
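As a quick diagnostic, it may help to confirm whether uconv is really the missing piece; the message itself is only a warning about falling back to iconv, so the silent aligner output may have another cause. Package names below are guesses:

```shell
# Check for the ICU uconv binary that utf8-normalize.sh prefers. If it is
# missing, the script falls back to iconv, which is a warning rather than
# a fatal error. Package names are assumptions: icu-devtools on
# Debian/Ubuntu, icu via Homebrew.
if command -v uconv >/dev/null 2>&1; then
  echo "uconv found"
else
  echo "uconv missing: aligner falls back to iconv"
fi
```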
