
alicemind's Introduction

AliceMind

AliceMind: ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab

This repository provides the pre-trained encoder-decoder models and related optimization techniques developed by Alibaba's MinD (Machine IntelligeNce of Damo) Lab.

The family of AliceMind:

  • Pre-trained Models:
    • Release of the first multimodal large language model for enhancing LLM and MLLM through modal collaboration: mPLUG-Owl2
    • Release of the first OCR-free multimodal large language model for universal document understanding: mPLUG-DocOwl (EMNLP 2023)
    • Release of the first and largest public Chinese video-language pre-training dataset and benchmarks, Youku-mPLUG, and the Chinese video large language model mPLUG-video
    • A new training paradigm with a modularized design for large multi-modal language models: mPLUG-Owl
    • Large-scale Chinese open-domain dialogue system for digital human: ChatPLUG
    • A Modularized Multi-modal Foundation Model Across Text, Image and Video: mPLUG-2(ICML 2023)
    • Large-scale vision-language understanding and generation model: mPLUG(EMNLP 2022)
    • Large-scale Chinese understanding and generation model: PLUG
    • Pre-training table model: SDCUP (Under Review)
    • Chinese language understanding model with multi-granularity inputs: LatticeBERT (NAACL 2021)
    • Structural language model: StructuralLM (ACL 2021)
    • Cross-modal language model: StructVBERT (CVPR 2020 VQA Challenge Runner-up)
    • Cross-lingual language model: VECO (ACL 2021)
    • Generative language model: PALM (EMNLP 2020)
    • Language understanding model: StructBERT (ICLR 2020)
  • Fine-tuning Methods:
    • Parameter-Efficient Sparsity methods PST (IJCAI 2022)
    • Effective and generalizable fine-tuning method ChildTuning (EMNLP 2021)
  • Model Compression:
    • Contrastive pruning framework for pre-trained language model compression: ContrastivePruning (AAAI 2022)

News

  • November 9, 2023: mPLUG-Owl2 released, the first multimodal large language model for enhancing LLM and MLLM through modal collaboration.
  • July 7, 2023: mPLUG-DocOwl, the first OCR-free multimodal large language model for universal document understanding, was accepted by EMNLP 2023.
  • June 8, 2023: Youku-mPLUG released, the first and largest public Chinese video-language pre-training dataset and benchmarks, along with the Chinese video large language model mPLUG-video.
  • April 27, 2023: mPLUG-Owl, a new training paradigm with a modularized design for large multi-modal language models, released.
  • April 25, 2023: mPLUG-2 was accepted by ICML 2023.
  • April 16, 2023: ChatPLUG, the Chinese open-domain dialogue system for digital human applications, released.
  • October, 2022: mPLUG was accepted by EMNLP 2022.
  • May, 2022: PST was accepted by IJCAI 2022.
  • April, 2022: The SOFA modeling toolkit released, providing standard implementations of AliceMind models and techniques and enabling their direct use in Transformers!
  • December, 2021: ContrastivePruning was accepted by AAAI 2022.
  • October, 2021: ChildTuning was accepted by EMNLP 2021.
  • September, 2021: The first Chinese pre-training table model SDCUP released!
  • May, 2021: VECO and StructuralLM were accepted by ACL 2021.
  • March, 2021: AliceMind released!

Pre-trained Models

  • mPLUG-Owl (April 27, 2023): a new training paradigm with a modularized design for large multi-modal language models. It learns visual knowledge while supporting multi-turn conversations that mix modalities, and exhibits abilities such as multi-image correlation, scene text understanding, and vision-based document comprehension. A visually-related instruction evaluation set, OwlEval, is also released. mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality

  • ChatPLUG (April 16, 2023): a Chinese open-domain dialogue system for digital human applications that is instruction-finetuned on a wide range of dialogue tasks in a unified internet-augmented format. Unlike open-domain dialogue models that focus on large-scale pre-training and scaling up model size or dialogue corpus, we aim to build a powerful and practical dialogue system for digital humans, with diverse skills and good multi-task generalization, through internet-augmented instruction tuning. ChatPLUG: Open-Domain Generative Dialogue System with Internet-Augmented Instruction Tuning for Digital Human

  • mPLUG (September 1, 2022): a large-scale pre-trained model for vision-language understanding and generation. mPLUG is pre-trained end-to-end on large-scale image-text pairs with both discriminative and generative objectives. It achieves state-of-the-art results on a wide range of vision-language downstream tasks, including image captioning, image-text retrieval, visual grounding, and visual question answering. mPLUG: Effective Multi-Modal Learning by Cross-Modal Skip Connections (EMNLP 2022)

  • PLUG (September 1, 2022): a large-scale Chinese pre-trained model for understanding and generation. PLUG (27B) is trained in two stages: the first stage trains a 24-layer StructBERT encoder, and the second stage trains a 24-6-layer PALM encoder-decoder.

  • SDCUP (September 6, 2021): pre-trained models for table understanding. We design a schema dependency pre-training objective to impose the desired inductive bias into the learned representations for table pre-training. We further propose a schema-aware curriculum learning approach to alleviate the impact of noise and learn effectively from the pre-training data in an easy-to-hard manner. The experimental results on SQUALL and Spider demonstrate the effectiveness of our pre-training objective and curriculum in comparison to a variety of baselines. "SDCUP: Schema Dependency Enhanced Curriculum Pre-Training for Table Semantic Parsing" (Under Review)

  • LatticeBERT (March 15, 2021): we propose a novel pre-training paradigm for Chinese, Lattice-BERT, which explicitly incorporates word representations alongside those of characters and thus can model a sentence in a multi-granularity manner. "Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models" (NAACL 2021)

  • StructuralLM (March 15, 2021): pre-trained models for document-image understanding. We propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks. "StructuralLM: Structural Pre-training for Form Understanding" (ACL 2021)

  • StructVBERT (March 15, 2021): pre-trained models for vision-language understanding. We propose a new single-stream visual-linguistic pre-training scheme that leverages multi-stage progressive pre-training and multi-task learning. StructVBERT won the runner-up award in the 2020 VQA Challenge and achieved the SOTA result on the VQA 2020 public Test-standard benchmark (June 2020). "Talk Slides" (CVPR 2020 VQA Challenge Runner-up).

  • VECO v0 (March 15, 2021): pre-trained models for cross-lingual (x) natural language understanding (x-NLU) and generation (x-NLG). VECO (v0) achieves new SOTA results on various cross-lingual understanding tasks of the XTREME benchmark, covering text classification, sequence labeling, question answering, and sentence retrieval. For cross-lingual generation tasks, it also outperforms all existing cross-lingual models and state-of-the-art Transformer variants on the WMT14 English-to-German and English-to-French translation datasets, with gains of up to 1-2 BLEU. "VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation" (ACL 2021)

  • PALM (March 15, 2021): pre-trained models for natural language generation (NLG). We propose a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus, specifically designed for generating new text conditioned on context. It achieves new SOTA results in several downstream tasks. "PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation" (EMNLP 2020)

  • StructBERT (March 15, 2021): pre-trained models for natural language understanding (NLU). We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training. Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively. "StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding" (ICLR 2020)
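
To make StructBERT's word-level auxiliary task concrete: spans of a few tokens (trigrams in the paper) are shuffled in the input, and the model is trained to reconstruct their original order. Below is a minimal, illustrative data-preparation sketch in Python; the helper name and details are our own assumptions, not the released training code.

import random

def make_word_structural_example(tokens, span=3, num_shuffles=1, seed=None):
    # Shuffle `num_shuffles` random spans of `span` tokens; the model must
    # reconstruct the original order at the recorded positions.
    rng = random.Random(seed)
    tokens = list(tokens)
    targets = []
    for _ in range(num_shuffles):
        if len(tokens) < span:
            break
        start = rng.randrange(len(tokens) - span + 1)
        original = tokens[start:start + span]
        shuffled = original[:]
        for _ in range(10):  # retry a few times; identical tokens never differ
            rng.shuffle(shuffled)
            if shuffled != original:
                break
        tokens[start:start + span] = shuffled
        targets.append((start, original))  # the model must predict `original`
    return tokens, targets

corrupted, targets = make_word_structural_example(
    "the quick brown fox jumps over the lazy dog".split(), seed=0)
print(corrupted)  # the input sequence with one trigram permuted
print(targets)    # [(start_index, original_trigram)]

The sentence-level objective is analogous but operates on sentence order rather than token order.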

Fine-tuning Methods

  • ChildTuning (October, 2021): an effective and generalizable fine-tuning method that updates only a task-related subset of parameters (the "child network") of a large pre-trained model by masking the gradients of the remaining parameters during the backward pass. "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" (EMNLP 2021)

Model Compression

  • ContrastivePruning (December 17, 2021): ContrAstive Pruning (CAP) is a general pruning framework under the pre-training and fine-tuning paradigm, which aims at maintaining both task-specific and task-agnostic knowledge during pruning. CAP is designed as a general framework, compatible with both structured and unstructured pruning. Unified under contrastive learning, CAP encourages the pruned model to learn from the pre-trained model, the snapshots (intermediate models during pruning), and the fine-tuned model, respectively. "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" (AAAI 2022)

  • PST (May 23, 2022): Parameter-Efficient Sparse Training (PST) reduces the number of trainable parameters during sparsity-aware training on downstream tasks. It combines data-free and data-driven criteria to efficiently and accurately measure the importance of weights, and it observes that data-driven weight importance exhibits two characteristics, low-rankness and structuredness, which make the sparse training both resource- and parameter-efficient. "Parameter-Efficient Sparsity for Large Language Models Fine-Tuning" (IJCAI 2022)
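
To illustrate the PST decomposition, here is a small sketch assuming weight magnitude as the data-free criterion and a low-rank plus row/column ("structured") factorization of the data-driven score. All names and shapes are our assumptions, not the paper's released code.

import torch

def pst_importance(weight, A, B, r_row, r_col):
    # data-free term:   |W|                       (needs no task data)
    # data-driven term: A @ B (low-rank, trained on task data)
    #                   + per-row and per-column scores (structured)
    low_rank = A @ B                                   # (out, r) @ (r, in)
    structured = r_row.unsqueeze(1) + r_col.unsqueeze(0)
    return weight.abs() + low_rank + structured

out_f, in_f, rank = 8, 16, 2
W = torch.randn(out_f, in_f)
A, B = torch.randn(out_f, rank), torch.randn(rank, in_f)  # trainable, small
r_row, r_col = torch.randn(out_f), torch.randn(in_f)      # trainable, small
score = pst_importance(W, A, B, r_row, r_col)

# Keep the top 50% of weights by importance (unstructured sparsity).
k = W.numel() // 2
threshold = score.flatten().kthvalue(W.numel() - k + 1).values
W_sparse = W * (score >= threshold).float()

Because A, B, r_row, and r_col are the only trained tensors, the number of trainable parameters stays far below that of the full weight matrix, which is the sense in which the approach is parameter-efficient.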

Modeling toolkit

  • SOFA: SOFA aims to facilitate easy use and distribution of the pre-trained language models from the Alibaba DAMO Academy AliceMind project. In addition, detailed examples in the project make it simple for any end user to access those models.

Contact Information

AliceMind Official Website: https://nlp.aliyun.com/portal#/alice

AliceMind Open Platform: https://alicemind.aliyuncs.com

Please submit a GitHub issue if you want help or have issues using AliceMind.

For more information, you can join the AliceMind Users Group on DingTalk to contact us. The DingTalk group number is 35738533.

For other business communications, please contact [email protected]

License

AliceMind is released under the Apache 2.0 license.

Copyright 1999-2020 Alibaba Group Holding Ltd.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at the following link.

     http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

alicemind's People

Contributors

albert-ma, alibaba-oss, chuanqi1992, grygg, lcl6679292, njustgzy, runxinxu, suluyana, wangwei7175878, xhyandwyy, zhongyigu


alicemind's Issues

transfer labert model to pytorch

Hi,
I'm trying to convert the labert model to PyTorch. I used the following code found online:

import torch
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

path = "./chinese_labert-base-std-512/"
tf_checkpoint_path = path + "model.ckpt"  # prefix of the TF .ckpt files in your BERT model directory (three files per checkpoint); no trailing slash
bert_config_file = path + "labert_config.json"  # the config file in your BERT model directory
pytorch_dump_path = path + "pytorch_model.bin"

def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path):
    # Initialise the PyTorch model
    config = BertConfig.from_json_file(bert_config_file)
    print(f"Building PyTorch model from configuration: {config}")
    model = BertForPreTraining(config)

    # Load weights from the TF checkpoint
    load_tf_weights_in_bert(model, config, tf_checkpoint_path)

    # Save the PyTorch model
    print(f"Save PyTorch model to {pytorch_dump_path}")
    torch.save(model.state_dict(), pytorch_dump_path)

convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, bert_config_file, pytorch_dump_path)

But I got this:

(error screenshot omitted)

Does anyone know if there's a way to make it work? Thanks a lot!!

code for E2E-VLP

Hi, I'm wondering if the code for E2E-VLP will be published?

ASOC 2022: Downstream image generation code implementation of PALM

Background

This is an advanced subject of ASoC 2022; see #44.

At present, PALM does not support image generation. The code for image generation needs to be developed on top of the PALM model; you can refer to public models such as DALLE.

Target

Design and implement image generation code for training and inference.

Difficulty

Normal

Mentor

Chenliang Li (@lcl6679292)([email protected])


DocVQA reproduce problem using StructuralLM

I tried to fine-tune StructuralLM on the DocVQA dataset using the released weights, but I only get 76.85 ANLS on the test set.
Could the fine-tuning code for DocVQA be open-sourced?

Alibaba Summer of Code (ASOC) 2022

Alibaba Summer of Code (ASOC) 2022

Welcome to the open source world! If you haven't planned how to spend this summer, come to the Alibaba Summer of Code and code with us! 💻

Alibaba Summer of Code is a global program focused on engaging students directly in open source software development. Under the guidance of a mentor in an Alibaba open source project, students can experience software development in the real world. Alibaba Summer of Code runs from May 30th to September 1st. Students can use the summer to participate in open source projects and work with the core members of the project.

This is a master issue to track the progress and result of Alibaba Summer of Code 2022.

What can you get?

On this exclusive developer journey, students will have the opportunity to:

  • Participate in the top projects of international open source foundations;
  • Get a scholarship from Alibaba;
  • Obtain an open source contributor certificate;
  • Get a fast pass for an Alibaba internship;
  • Get your code adopted and used by the open source project!

Our Mentors

Wei Wang (@wangwei7175878 ), ASoC Mentor, Core member of AliceMind
Chuanqi Tan (@Chuanqi1992 ), ASoC Mentor, Core member of AliceMind
Chenliang Li (@lcl6679292 ), ASoC Mentor, Core member of AliceMind

Timeline

(timeline graphic omitted)

Apply Now!

Browse open idea list here:

Alibaba Summer of Code: Downstream tasks code implementation of StructBERT
Difficulty: Normal
#41
Alibaba Summer of Code: Image generation model based on PALM
Difficulty: Normal
#43
Alibaba Summer of Code: Sparse algorithm implementation for AliceMind models
Difficulty: Hard
#42
Upload your CV and project proposal via the ASOC 2022 official website.

Contact the Organizer

If you have any questions, visit the event website: https://opensource.alibaba.com/asoc2022

Email address: [email protected]

When will the cross-modal models be released?

I would like to use the cross-modal models for text-to-image work; current text-to-image systems cannot yet adapt to our downstream tasks (e.g., installation art, environment design).
I have relevant domain-specific datasets.

will you consider push your work to huggingface model hub?

It's a bit painful to use your models, such as StructBERT.

There are some minor code modifications compared with Hugging Face's BERT.

So I wouldn't say it's safe to directly use Hugging Face's from_pretrained API on your released model checkpoints, while it is also inconvenient to use your modeling code, where BertModel does not inherit from Hugging Face's PreTrainedModel.

Any advice?

TNEWS results

Hello, and thanks for your work. A small question: why is the TNEWS result reported in the paper so different from the one reported in the original CLUE paper? The CLUE paper reports test-set accuracy of roughly 56-58, while this paper generally reports 67-68.

Where is UED?

I cannot find UED that was mentioned in the PR article.

Could you please give the paper link?

Thanks.

StructuralLM: how is a cell box defined?

For OCR, an entire line of text is given; if a key and its value appear very close together ("key: value"), does that count as one cell or two?

Coreference Resolution Task

For CLUEWSC2020 data, how did you manage the inputs? As was demonstrated in the Lattice-BERT paper, the coreference resolution task was also treated as a classification task in which the representation corresponding to [CLS] was used as the feature vector. I just wonder how you distinguished the different spans in the input.

Experimental configuration of Child-Tuning

Hi,
I want to reproduce the Child-Tuning experiments. I saw "We report the averaged results over 10 random seeds" in Section 3.2 of the paper; could you share the seed sequence?
Thank you, looking forward to your reply.

fail to load structbert.en.large while trying to reproduce the result of GLUE

Hi,
I downloaded structbert.en.large through the given link (https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model), but the error below occurred while running.

RuntimeError: Error(s) in loading state_dict for BertForSequenceClassificationMultiTask:
Missing key(s) in state_dict: "classifier.0.weight", "classifier.0.bias".
Unexpected key(s) in state_dict: "lm_bias", "linear.weight", "linear.bias", "LayerNorm.gamma", "LayerNorm.beta", "classifier.weight", "classifier.bias".

Do you have any idea why this happens? Thank you very much.
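
In case it helps later readers: the unexpected keys suggest an older naming convention (LayerNorm.gamma/LayerNorm.beta) plus pre-training-only heads, and the expected classifier.0.* keys suggest a multi-task classifier wrapper. Below is a sketch of the kind of key remapping that usually resolves this, inferred only from the error message above; verify it against your actual checkpoint and model.

import torch

state = torch.load("en_model", map_location="cpu")  # path as downloaded

remapped = {}
for key, value in state.items():
    if key in ("lm_bias", "linear.weight", "linear.bias"):
        continue                                     # pre-training-only tensors
    key = key.replace("LayerNorm.gamma", "LayerNorm.weight")
    key = key.replace("LayerNorm.beta", "LayerNorm.bias")
    # single-task head -> first head of the multi-task classifier
    key = key.replace("classifier.", "classifier.0.")
    remapped[key] = value

# `model` is your BertForSequenceClassificationMultiTask instance.
missing, unexpected = model.load_state_dict(remapped, strict=False)
print("missing:", missing, "unexpected:", unexpected)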

Question about reproducing the XNLI results

Hello! I trained with the default parameters of train_xnli.sh and obtained an average accuracy of 78.63%, somewhat below the 79.9% reported in the paper; the accuracy of each language also looks slightly lower. What might cause this? Many thanks.
I used a single GPU, a Tesla M40 24GB, Driver Version 440.64, CUDA Version 10.2.
Comparing against the fine-tuning parameters in the paper:
The script uses epoch=2, while the paper selects from 3/5/10.
The script uses TOTAL_BATCH_SIZE=64 and BATCH_SIZE=2, while the paper selects from 16/32/64.
Could these two hyperparameters explain the gap?
Many thanks!

ASOC 2022: Sparse metric and pattern implementation of SOFA

Background

This is an advanced subject of ASoC 2022; see #44.
At present, SOFA does not support sparse algorithms. Referring to prior work, implement common sparse metrics (e.g. MaP, MvP, and L0) and sparse patterns (e.g. unstructured pruning, block-wise pruning).

Target

Implement common sparse metrics (e.g. MaP, MvP, and L0) and sparse patterns (e.g. unstructured pruning, block-wise pruning); a rough illustrative sketch follows.
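
For orientation, the metrics and patterns above can be prototyped in a few lines. This sketch assumes MaP and MvP refer to magnitude pruning and movement pruning respectively (as in Sanh et al., "Movement Pruning"); it is not SOFA's actual interface.

import torch

def map_scores(weight):
    # Magnitude pruning (MaP): importance = |w|.
    return weight.abs()

def mvp_scores(weight, grad_accum):
    # Movement pruning (MvP): importance is roughly -sum_t w * dL/dw;
    # weights moving away from zero during training score high.
    return -weight * grad_accum

def unstructured_mask(scores, sparsity):
    # Prune the lowest-scoring fraction of individual weights.
    k = max(1, int(scores.numel() * sparsity))
    threshold = scores.flatten().kthvalue(k).values
    return (scores > threshold).float()

def block_mask(scores, sparsity, block=4):
    # Block-wise pruning: score each (block x block) tile by its mean,
    # then prune whole tiles at once.
    h, w = scores.shape
    tiles = scores.reshape(h // block, block, w // block, block).mean((1, 3))
    keep = unstructured_mask(tiles, sparsity)
    return keep.repeat_interleave(block, 0).repeat_interleave(block, 1)

W = torch.randn(8, 8)
print(unstructured_mask(map_scores(W), sparsity=0.5).mean())  # ~0.5 kept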

Difficulty

Hard

Mentor

Chuanqi Tan (@Chuanqi1992 )([email protected])


about fine-tune using sdcup

May I load the parameters of the pre-trained SDCUP into a BERT-like model and then add some task heads on top for table QA?

Looking into the pre-trained SDCUP checkpoint, can I ignore parameters like: "mlp_action1.linear.weight", "mlp_action1.linear.bias", "mlp_action2.linear.weight", "mlp_action2.linear.bias", "mlp_column1.linear.weight", "mlp_column1.linear.bias", "mlp_column2.linear.weight", "mlp_column2.linear.bias", "mlp_column1_single.linear.weight", "mlp_column1_single.linear.bias", "mlp_column2_single.linear.weight", "mlp_column2_single.linear.bias", "layer_norm_1.gamma", "layer_norm_1.beta", "layer_norm_2.gamma", "layer_norm_2.beta", "layer_norm_3.gamma", "layer_norm_3.beta"? Are these useful for fine-tuning?
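
Not an official answer, but mechanically the listed keys look like pre-training-only heads (the schema-dependency MLPs and their layer norms), so in PyTorch you can drop them and load the remaining encoder weights non-strictly. A sketch, with the checkpoint path assumed:

import torch

state = torch.load("sdcup_pretrained.bin", map_location="cpu")  # path assumed

# Drop the pre-training-only heads before loading into a BERT-like encoder.
drop_prefixes = ("mlp_action", "mlp_column", "layer_norm_")
encoder_state = {k: v for k, v in state.items()
                 if not k.startswith(drop_prefixes)}

# `model` is your BERT-like backbone plus a freshly initialized table-QA
# head; the new head is then trained on the downstream task.
missing, unexpected = model.load_state_dict(encoder_state, strict=False)
print("missing (new head):", missing)
print("unexpected (leftovers):", unexpected)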

Tongyi model

Hi, is the Tongyi model you submitted on the CLUE leaderboard PLUG?

Pretrained weights for downstream tasks for mPLUG?

Currently, only mPLUG's pre-trained weights from before fine-tuning on downstream tasks are released. Is it possible to release the fine-tuned weights for downstream tasks such as visual question answering and image captioning?

Thanks!

ASOC 2022: Downstream tasks code implementation of StructBERT

Background

This is an advanced subject of ASoC 2022; see #44.

Supplement the implementation of common downstream tasks of StructBERT, such as regression, multi-label classification, sequence labeling, and machine reading comprehension.

Target

Design and implement regression, multi-label classification, sequence labeling, and machine reading comprehension code for StructBERT.

Difficulty

Normal

Mentor

Wei Wang (@wangwei7175878 )([email protected])


Hyperparameters for ChildTuning

Thanks a lot for all the details you provide in Appendix B for reproducibility! However, I still encounter some difficulties in reproducing the experiments.
I noticed that you apply grid search. Could you please provide the specific hyperparameter values and learning rate for each task?

Will you share pre-training code of StructBERT

Hi, I'm trying to code StructBERT from scratch, but I couldn't find any code examples for pre-training StructBERT.
In the repository I've only found code for fine-tuning on various datasets.

Are you planning to share the pre-training code for StructBERT, similar to BertForPreTraining in the Transformers library?

Thanks in advance 🙂

How to reproduce the result of StructBERT on STS-B?

Hi, I cannot reproduce the result reported in the paper with the example command:

python run_classifier_multi_task.py \
  --task_name STS-B \
  --do_train \
  --do_eval \
  --do_test \
  --lr_decay_factor 1 \
  --dropout 0.1 \
  --do_lower_case \
  --detach_index -1 \
  --core_encoder bert \
  --data_dir data \
  --vocab_file config/vocab.txt \
  --bert_config_file config/large_bert_config.json \
  --init_checkpoint model/en_model \
  --max_seq_length 128 \
  --train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --fast_train \
  --gradient_accumulation_steps 1 \
  --output_dir output \
  --amp_type O1

Are there any hyper-parameters that I set incorrectly?

What is CLEVER?

I found StructBERT + CLEVER on the GLUE benchmark. Is that a pre-training or fine-tuning technique? Can you provide more information about CLEVER? Thanks a lot.


NaN loss during training

I ran the LatticeBERT example for fine-tuning an AFQMC classification model with the default parameters on the labeled dataset, but the loss becomes NaN after a few iterations. Why?
