
accalign's Introduction

Multilingual Sentence Transformer as A Multilingual Word Aligner

Requirements

The code was developed and run with Python 3.7, adapter-transformers 4.16.2, Torch 1.9.0, and tqdm 4.62.3.

Data Format

  • source
Wir glauben nicht , daß wir nur Rosinen herauspicken sollten .
Das stimmt nicht !
  • target
We do not believe that we should cherry-pick .
But this is not what happens .
  • gold alignment
9-8 8-8 7-8 6-6 1-1 2-2 4-5 5-5 3-3 11-9 10-7 2-4 
3-4 2-3 2-6 4-7 2-5 1-2 
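
Each line of the gold alignment file corresponds to one sentence pair and uses the Pharaoh-style i-j format: i is the 1-based position of a source token and j the position of the target token it is aligned to. A minimal parsing sketch (the file name is a placeholder):

    # Minimal sketch: parse Pharaoh-style "i-j" alignment lines into sets of pairs.
    def read_alignments(path):
        """Return one set of (src_idx, tgt_idx) pairs per line, 1-indexed."""
        alignments = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                pairs = {tuple(int(x) for x in token.split("-")) for token in line.split()}
                alignments.append(pairs)
        return alignments

    if __name__ == "__main__":
        gold = read_alignments("gold.align")   # placeholder file name
        print(gold[0])                         # e.g. {(1, 1), (2, 2), (3, 3), ...}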

Directly extract alignments

bash run_align.sh
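
run_align.sh runs the repository's extraction pipeline. For intuition only, the sketch below shows the general idea behind similarity-based alignment extraction from LaBSE token embeddings: cosine similarity between subword representations, a softmax over both directions, and a thresholded intersection. The layer index and threshold are illustrative assumptions, not the repository's exact settings.

    # Illustrative sketch of similarity-based word alignment with LaBSE embeddings.
    # Layer index and threshold are assumed values, not the repository's settings.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
    model = AutoModel.from_pretrained("sentence-transformers/LaBSE", output_hidden_states=True)
    model.eval()

    src = "Das stimmt nicht !"
    tgt = "But this is not what happens ."

    with torch.no_grad():
        h_src = model(**tok(src, return_tensors="pt")).hidden_states[8][0, 1:-1]  # drop [CLS]/[SEP]
        h_tgt = model(**tok(tgt, return_tensors="pt")).hidden_states[8][0, 1:-1]

    # Cosine similarity between every source and every target subword.
    sim = torch.nn.functional.normalize(h_src, dim=-1) @ torch.nn.functional.normalize(h_tgt, dim=-1).T

    # Softmax in both directions and keep pairs that score well in both.
    aligned = (sim.softmax(dim=-1) * sim.softmax(dim=0)) > 0.1   # assumed threshold

    for i, j in aligned.nonzero().tolist():
        print(i + 1, j + 1)   # 1-based subword indices; map subwords back to words for word-level links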

Fine-tuning on training data

bash train.sh
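
train.sh launches fine-tuning. As background, the paper fine-tunes LaBSE on parallel data, and the requirements point to adapter-transformers, so training only an adapter is the expected setup. The sketch below is a rough illustration of that pattern with the adapter-transformers API; the adapter name, toy data, and placeholder objective are assumptions, not the script's actual configuration.

    # Rough sketch of adapter-style fine-tuning of LaBSE with adapter-transformers.
    # Adapter name, toy data, and the placeholder objective are illustrative only.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
    model = AutoModel.from_pretrained("sentence-transformers/LaBSE")

    model.add_adapter("word_align")     # insert adapter modules (requires adapter-transformers)
    model.train_adapter("word_align")   # freeze LaBSE weights, train only the adapter

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    parallel_batches = [(["Das stimmt nicht !"], ["That is not true !"])]   # toy stand-in for a data loader

    for src_batch, tgt_batch in parallel_batches:
        enc_src = tok(src_batch, return_tensors="pt", padding=True, truncation=True)
        enc_tgt = tok(tgt_batch, return_tensors="pt", padding=True, truncation=True)
        h_src = model(**enc_src).last_hidden_state.mean(dim=1)
        h_tgt = model(**enc_tgt).last_hidden_state.mean(dim=1)
        # Placeholder objective: pull parallel sentence representations together.
        # The actual alignment objectives live in the training script, not here.
        loss = 1 - F.cosine_similarity(h_src, h_tgt).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model.save_adapter("adapter_ckpt/", "word_align")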

Calculate AER

bash cal_aer.sh
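
cal_aer.sh computes the Alignment Error Rate (AER). For reference, with predicted alignments A, sure gold links S, and possible gold links P, AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|). A minimal sketch, assuming the gold file contains only sure links (so P = S):

    # Minimal AER sketch over sets of (src_idx, tgt_idx) pairs.
    def aer(predicted, sure, possible=None):
        """AER = 1 - (|A&S| + |A&P|) / (|A| + |S|); with only sure links, P = S."""
        possible = sure if possible is None else possible
        hits = len(predicted & sure) + len(predicted & possible)
        return 1.0 - hits / (len(predicted) + len(sure))

    # Gold links of the second example sentence pair above, plus a made-up prediction.
    gold = {(1, 2), (2, 3), (2, 5), (2, 6), (3, 4), (4, 7)}
    pred = {(1, 2), (3, 4), (4, 7), (2, 1)}
    print(f"AER = {aer(pred, gold):.3f}")   # 0.400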

Data

Links to the test sets used in the paper are listed below:

Language Pair Type Link
En-De Gold Alignment www-i6.informatik.rwth-aachen.de/goldAlignment/
En-Fr Gold Alignment http://web.eecs.umich.edu/~mihalcea/wpt/
En-Ro Gold Alignment http://web.eecs.umich.edu/~mihalcea/wpt05/
En-Fa Gold Alignment https://ece.ut.ac.ir/en/web/nlp/resources
En-Zh Gold Alignment https://nlp.csai.tsinghua.edu.cn/~ly/systems/TsinghuaAligner/TsinghuaAligner.html
En-Ja Gold Alignment http://www.phontron.com/kftt
En-Sv Gold Alignment https://www.ida.liu.se/divisions/hcs/nlplab/resources/ges/

Links to the training set and validation set used in the paper are here.

LaBSE

You can access the LaBSE model here.

Adapter Checkpoints

The multilingual adapter checkpoint is available here.
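
Once the checkpoint is downloaded, an adapter saved with adapter-transformers can usually be attached to LaBSE as sketched below (the local path is a placeholder):

    # Sketch: attach a downloaded adapter checkpoint to LaBSE (requires adapter-transformers).
    from transformers import AutoModel

    model = AutoModel.from_pretrained("sentence-transformers/LaBSE")
    adapter_name = model.load_adapter("path/to/adapter_checkpoint")   # placeholder path
    model.set_active_adapters(adapter_name)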

Citation

@inproceedings{wang-etal-2022-multilingual,
    title = "Multilingual Sentence Transformer as A Multilingual Word Aligner",
    author = "Wang, Weikang  and
      Chen, Guanhua  and
      Wang, Hanqing  and
      Han, Yue  and
      Chen, Yun",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-emnlp.215",
    pages = "2952--2963",
    abstract = "Multilingual pretrained language models (mPLMs) have shown their effectiveness in multilingual word alignment induction. However, these methods usually start from mBERT or XLM-R. In this paper, we investigate whether multilingual sentence Transformer LaBSE is a strong multilingual word aligner. This idea is non-trivial as LaBSE is trained to learn language-agnostic sentence-level embeddings, while the alignment extraction task requires the more fine-grained word-level embeddings to be language-agnostic. We demonstrate that the vanilla LaBSE outperforms other mPLMs currently used in the alignment task, and then propose to finetune LaBSE on parallel corpus for further improvement. Experiment results on seven language pairs show that our best aligner outperforms previous state-of-the-art models of all varieties. In addition, our aligner supports different language pairs in a single model, and even achieves new state-of-the-art on zero-shot language pairs that does not appear in the finetuning process.",
}


accalign's Issues

Dataset for each language pair

Hi, I'm interested in your work. I would like to get the en-nl and en-es word alignment data separately. However, the website http://www.tst.inl.nl/ cannot be reached. Could you please provide the original alignment dataset for these pairs? Thanks!

Final code base?

Hi,

I was wondering whether this is the final code base of your project, because I ran into several errors when trying to run run_align.sh. They stem from this call:

    output_src, output_tgt = self.embed_loader(
        inputs_src=inputs_src, inputs_tgt=inputs_tgt, attention_mask_src=(inputs_src != PAD_ID),
        attention_mask_tgt=(inputs_tgt != PAD_ID), guide=None, align_layer=args.align_layer,
        extraction=args.extraction, softmax_threshold=args.softmax_threshold,
        train_so=args.train_so, train_co=args.train_co, do_infer=True,
    )

The issues I found so far are:

  • args.train_co does not exist as an argument in train_alignment_adapter.py
  • args.train_so and args.train_co are not parameters of forward in class BertForSo:
    def forward(
    self,
    inputs_src,
    inputs_tgt=None,
    labels_src=None,
    labels_tgt=None,
    attention_mask_src=None,
    attention_mask_tgt=None,
    align_layer=6,
    guide=None,
    extraction='softmax', softmax_threshold=0.1,
    position_ids1=None,
    position_ids2=None,
    do_infer=False,
    ):

Thank you in advance,
Bene

Links for the datasets used in the paper

Hi, I'm interested in your alignment work and want to try it. Could you provide the training and dev datasets used in the paper? The link seems to be empty. Thanks!
