Cross-Thought

Cross-Thought for Sentence Encoder Pre-training (https://arxiv.org/abs/2010.03652)

Install

git clone --depth 1 --branch v0.10.0 https://github.com/pytorch/fairseq.git
git clone https://github.com/ngoyal2707/Megatron-LM.git fairseq/fairseq/model_parallel/megatron
cp src/transformer_sentence_encoder.py fairseq/fairseq/modules/transformer_sentence_encoder.py
pip install --editable ./fairseq
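
To confirm that the editable install and the copied module are picked up, a quick sanity check (assuming the patched file keeps the original TransformerSentenceEncoder class name, which RoBERTa expects):

# Sanity check: the editable fairseq install and the patched encoder module import cleanly.
import fairseq
from fairseq.modules.transformer_sentence_encoder import TransformerSentenceEncoder

print(fairseq.__version__)  # should report the 0.10.x release cloned above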

Overall, we only change one file, 'transformer_sentence_encoder.py', from RoBERTa for Cross-Thought. All data processing follows the RoBERTa setup.
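
The fine-tuning and pre-training recipes below switch modes through the CTPRETRAIN environment variable. Exactly how the patched file consumes it is not shown in this README; a purely illustrative sketch is a module-level flag read from the environment:

import os

# Hypothetical sketch only: a flag like this inside the patched
# transformer_sentence_encoder.py would let one file serve both modes.
CTPRETRAIN = os.environ.get("CTPRETRAIN", "False") == "True"

if CTPRETRAIN:
    pass  # Cross-Thought pre-training behaviour
else:
    pass  # standard RoBERTa behaviour for fine-tuning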

Finetune

Please follow the RoBERTa fine-tuning instructions (https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.glue.md) for data processing.

Cross-Thought checkpoint (https://drive.google.com/file/d/11lnZijWuRcPT07xiEO5NMhkXMfKopIb9/view?usp=sharing)

export CTPRETRAIN=False
TOTAL_NUM_UPDATES=113272  # ~10 epochs through QQP at bsz 32
WARMUP_UPDATES=28318      # LR warmup updates for the polynomial scheduler
LR=1e-05                # Peak LR for polynomial LR scheduler.
NUM_CLASSES=2
MAX_SENTENCES=32        # Batch size.
ROBERTA_PATH=/path/to/cross-thought-checkpoint.pt

CUDA_VISIBLE_DEVICES=0 fairseq-train data/QQP-bin/ \
    --restore-file $ROBERTA_PATH \
    --max-positions 512 \
    --batch-size $MAX_SENTENCES \
    --max-tokens 4400 \
    --task sentence_prediction \
    --reset-optimizer --reset-dataloader --reset-meters \
    --required-batch-size-multiple 1 \
    --init-token 0 --separator-token 0 \
    --arch roberta_base \
    --criterion sentence_prediction \
    --num-classes $NUM_CLASSES \
    --dropout 0 --attention-dropout 0 \
    --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
    --clip-norm 0.0 \
    --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
    --max-epoch 10 \
    --find-unused-parameters \
    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric;
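
After fine-tuning, the best checkpoint can be evaluated on the QQP dev set in the same way as in the RoBERTa GLUE README. A minimal sketch, assuming the run above saved to the default checkpoints/ directory, the raw dev set sits at glue_data/QQP/dev.tsv, and the standard QQP column layout (question1, question2, is_duplicate in columns 3-5); adjust paths and indices to your setup:

import os
os.environ.setdefault("CTPRETRAIN", "False")  # keep the same mode flag as during fine-tuning

from fairseq.models.roberta import RobertaModel

# Load the fine-tuned model together with the binarized task data.
roberta = RobertaModel.from_pretrained(
    'checkpoints/',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data/QQP-bin'
)
roberta.cuda()
roberta.eval()

# Map predicted class indices back to label strings ('0' / '1').
label_fn = lambda label: roberta.task.label_dictionary.string(
    [label + roberta.task.label_dictionary.nspecial]
)

ncorrect, nsamples = 0, 0
with open('glue_data/QQP/dev.tsv') as fin:
    fin.readline()  # skip header
    for line in fin:
        cols = line.strip().split('\t')
        sent1, sent2, target = cols[3], cols[4], cols[5]  # assumed QQP column order
        tokens = roberta.encode(sent1, sent2)
        pred = roberta.predict('sentence_classification_head', tokens).argmax().item()
        ncorrect += int(label_fn(pred) == target)
        nsamples += 1
print('| Accuracy: ', float(ncorrect) / float(nsamples))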

Pre-train

Please follow the RoBERTa pre-training instructions (https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md) for data processing.
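
For reference, the BPE step from that README can also be done directly in Python. A minimal sketch, assuming the GPT-2 BPE files (encoder.json, vocab.bpe) have been downloaded to gpt2_bpe/ as described there and the raw corpus is at corpus/train.raw (hypothetical paths); the resulting .bpe file is then binarized with fairseq-preprocess as in the RoBERTa instructions:

from fairseq.data.encoders.gpt2_bpe import get_encoder

# BPE-encode one raw text file into space-separated GPT-2 BPE token ids.
bpe = get_encoder("gpt2_bpe/encoder.json", "gpt2_bpe/vocab.bpe")
with open("corpus/train.raw") as fin, open("corpus/train.bpe", "w") as fout:
    for line in fin:
        ids = bpe.encode(line.strip())  # list of integer BPE ids
        fout.write(" ".join(map(str, ids)) + "\n")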

export CTPRETRAIN=True
fairseq-train --fp16 $DATA_DIR \
    --task masked_lm --criterion masked_lm \
    --arch roberta_base --sample-break-mode none --tokens-per-sample 32000 \
    --optimizer adam --adam-betas '(0.9,0.98)' --adam-eps 1e-6 --clip-norm 0.0 \
    --lr-scheduler polynomial_decay --lr 0.0005 --warmup-updates 10000 --total-num-update 125000 \
    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
    --batch-size 1 --update-freq 16 --max-update 500000 \
    --log-format simple --log-interval 1 \
    --save-dir outputs/crossthought --save-interval-updates 5000
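
The checkpoints saved under outputs/crossthought can then be passed as ROBERTA_PATH to the fine-tuning recipe above. A quick sanity check that a saved checkpoint has the usual fairseq layout (checkpoint_last.pt is the default fairseq file name; adjust if you pick a specific update checkpoint):

import torch

# Inspect a saved pre-training checkpoint on CPU without building the model.
ckpt = torch.load("outputs/crossthought/checkpoint_last.pt", map_location="cpu")
print(sorted(ckpt.keys()))   # typically includes 'model', 'args' or 'cfg', 'optimizer_history', ...
print(len(ckpt["model"]))    # number of parameter tensors in the saved encoder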

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

License

MIT
