
viscpm's Introduction

VisCPM

A family of Chinese-English bilingual large multimodal models based on the CPM foundation models

Multimodal Chat Model VisCPM-Chat | Text-to-Image Model VisCPM-Paint | Usage | Paper

VisCPM-Chat Demo | VisCPM-Paint Demo | VisCPM-Chat 🤗 | VisCPM-Paint 🤗

简体中文 | English

VisCPM is a family of open-source large multimodal models that support multimodal conversation (the VisCPM-Chat model) and text-to-image generation (the VisCPM-Paint model) in both Chinese and English, achieving state-of-the-art performance among Chinese open-source multimodal models. VisCPM is trained on top of the 10B-parameter large language model CPM-Bee, fusing a visual encoder (Muffin) and a visual decoder (Diffusion-UNet) to support visual inputs and outputs. Thanks to the strong bilingual capability of CPM-Bee, VisCPM can be pre-trained on English multimodal data only and still generalize to achieve promising Chinese multimodal capabilities.


  • 👐 Open for use: VisCPM is freely available for personal and research use. By open-sourcing the VisCPM model family, we hope to advance the open-source community and research on large multimodal models.
  • 🌟 Image-to-text and text-to-image: The VisCPM family covers both directions of multimodal generation, spanning multimodal conversation (image-to-text) and text-to-image generation.
  • 💫 Strong bilingual performance: Thanks to the excellent bilingual capability of the underlying language model CPM-Bee, VisCPM delivers impressive results in both Chinese and English for multimodal conversation and text-to-image generation.

📰 News

VisCPM is under continuous development. We have added features such as low-resource inference and local web demo deployment, and released OmniLMM, an upgraded successor model. Stay tuned!

  • [2024/02/02] 🚀 Check out our newly released OmniLMM large multimodal models! OmniLMM-3B is a Chinese-English bilingual multimodal conversation model trained on the bilingual LLM MiniCPM-2.4B and the SigLip-400M visual encoder, using the same training pipeline as VisCPM-Chat; it can be deployed on edge devices and delivers advanced multimodal conversation capabilities. OmniLMM-13B is an English multimodal model initialized from EVA02-5B and Zephyr-7B-β, outperforming other models of comparable size on multiple benchmarks.
  • [2024/01/16] 🎉 The VisCPM paper was accepted to ICLR 2024 and selected as a spotlight (top 5%).
  • [2023/09/06] 🔌 The VisCPM-Chat API is released! You can now use the VisCPM-Chat model easily through the API. See the API guide for details.
  • [2023/08/23] 📑 The VisCPM paper is released: Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages, with more detailed implementation details and experimental results.
  • [2023/08/18] ⤴️ VisCPM-Chat-v1.1 is released, bringing stronger detail understanding and complex reasoning!
  • [2023/08/18] 🛠️ Fine-tuning is supported, making VisCPM easier to adapt to your scenarios!
  • [2023/07/20] 🌐 Online demos of VisCPM-Chat and VisCPM-Paint are released, welcome to try them!
  • [2023/07/20] 🎢 One-click deployment of a local web demo is supported.
  • [2023/07/20] ⚡️ Low-resource inference is supported, running the multimodal conversation model in as little as 5 GB of GPU memory!
  • [2023/07/18] 🤗 VisCPM-Chat and VisCPM-Paint are integrated into the Hugging Face framework.

VisCPM-Chat

VisCPM-Chat supports bilingual (Chinese-English) multimodal conversation about images. The model uses the Muffin visual encoding architecture with CPM-Bee (10B) as the language model backbone, fused through a language-modeling training objective. Training consists of two stages, pre-training and instruction tuning:

  • Pre-training: We pre-trained VisCPM-Chat on about 100M high-quality English image-text pairs, drawn from datasets including CC3M, CC12M, COCO, Visual Genome, and LAION. During pre-training, the language model parameters are frozen and only the visual encoder is updated, enabling efficient large-scale vision-language alignment.

  • Instruction tuning: We instruction-tuned the model on the LLaVA-150K English instruction data mixed with its Chinese translation, aligning the model's multimodal capabilities with user intent. In this stage all model parameters are updated to make better use of the instruction data. Interestingly, we found that even when tuned on English instruction data only, the model can understand Chinese questions but can only answer in English, indicating that its multilingual multimodal capabilities generalize well. Adding a small amount of translated Chinese data during instruction tuning aligns the model's response language with the question language.

We evaluated the model on the standard English LLaVA test set and a translated Chinese test set, which assess open-domain conversation, detailed image description, and complex reasoning, scored by GPT-4. VisCPM-Chat achieves the best average performance on Chinese multimodal capabilities, excelling at general-domain conversation and complex reasoning, while also showing solid English multimodal capabilities. We provide two model versions, VisCPM-Chat-balance and VisCPM-Chat-zhplus: the former is balanced between English and Chinese, while the latter emphasizes Chinese ability. Both use the same instruction-tuning data; VisCPM-Chat-zhplus additionally adds 20M cleaned native Chinese image-text pairs and 120M image-text pairs translated into Chinese during pre-training. VisCPM-Chat-v1.1 additionally adds the UniMM-Chat multimodal instruction-tuning dataset during instruction tuning.

| Model | LLM Backbone | EN Conv. | EN Detail | EN Reason. | EN Avg. | ZH Conv. | ZH Detail | ZH Reason. | ZH Avg. |
|---|---|---|---|---|---|---|---|---|---|
| **English models** | | | | | | | | | |
| MiniGPT4 | Vicuna-13B | 65.0 | 67.3 | 76.6 | 69.7 | - | - | - | - |
| InstructBLIP | Vicuna-13B | 81.9 | 68.0 | 91.2 | 80.5 | - | - | - | - |
| LLaVA | Vicuna-13B | 89.5 | 70.4 | 96.2 | 85.6 | - | - | - | - |
| **Bilingual models** | | | | | | | | | |
| mPLUG-Owl | LLaMA-7B | 64.6 | 47.7 | 80.1 | 64.2 | 76.3 | 61.2 | 77.8 | 72.0 |
| VisualGLM | ChatGLM-6B | 62.4 | 63.0 | 80.6 | 68.7 | 76.6 | 87.8 | 83.6 | 82.7 |
| Ziya-Visual | Ziya-LLaMA-13B-v1 | 82.7 | 69.9 | 92.1 | 81.7 | 85.0 | 74.7 | 82.4 | 80.8 |
| Qwen-VL | Qwen-7B | 82.4 | 72.6 | 91.9 | 83.8 | 82.3 | 93.4 | 89.5 | 88.2 |
| VisCPM-Chat-balance | CPMBee-10B | 83.3 | 68.9 | 90.5 | 81.1 | 92.7 | 76.1 | 89.2 | 86.3 |
| VisCPM-Chat-zhplus | CPMBee-10B | 80.1 | 65.7 | 92.5 | 79.6 | 90.3 | 81.4 | 92.1 | 88.2 |
| VisCPM-Chat-v1.1 | CPMBee-10B | 80.1 | 67.1 | 97.1 | 81.5 | 91.3 | 90.7 | 95.4 | 92.5 |

(image)

VisCPM-Paint

VisCPM-Paint supports bilingual (Chinese-English) text-to-image generation. The model uses CPM-Bee (10B) as the text encoder and a UNet as the image decoder, fused through a diffusion-model training objective. The language model parameters remain frozen throughout training. The visual decoder is initialized from the Stable Diffusion 2.1 UNet and fused with the language model by progressively unfreezing key bridging parameters. The model was trained on the LAION-2B English image-text pairs.

Similar to VisCPM-Chat, we found that thanks to the bilingual capability of CPM-Bee, VisCPM-Paint can be trained on English image-text pairs only and still generalize to good Chinese text-to-image generation, achieving the best results among Chinese open-source models. Further adding 20M cleaned native Chinese image-text pairs and 120M image-text pairs translated into Chinese improves the model's Chinese generation ability. We sampled 30K images on the standard MSCOCO benchmark and computed FID (Fréchet Inception Distance), a common metric of generated image quality. We likewise provide two versions, VisCPM-Paint-balance and VisCPM-Paint-zhplus: the former is balanced between English and Chinese, the latter emphasizes Chinese ability. VisCPM-Paint-balance is trained on English image-text pairs only; VisCPM-Paint-zhplus builds on it with the additional 20M native Chinese and 120M translated Chinese image-text pairs.

| Model | Zero-shot FID↓ (English) | Zero-shot FID↓ (Chinese) |
|---|---|---|
| GLIDE | 12.2 | - |
| Make-A-Scene | 11.8 | - |
| DALL·E-2 | 10.4 | - |
| Unidiffuser | 9.7 | - |
| CogView2 | - | 24.0 |
| Stable Diffusion | 8.6 | - |
| AltDiffusion | 17.2 | 16.1 |
| TaiyiDiffusion | - | 15.6 |
| VisCPM-Paint-balance | 9.5 | 10.9 |
| VisCPM-Paint-zhplus | 9.9 | 9.6 |
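For reference, the FID scores reported above (lower is better) measure the distance between Gaussian fits to Inception-v3 features of the real and generated image sets:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```

where (μ_r, Σ_r) and (μ_g, Σ_g) are the feature mean and covariance of the real and generated images, respectively.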

⚙️ Installation

  1. Clone the repository and enter the source directory

git clone https://github.com/OpenBMB/VisCPM.git
cd VisCPM

  2. Create a conda environment

conda create -n viscpm python=3.10 -y
conda activate viscpm

  3. Install the dependencies

pip install "torch>=1.10"  # quoted so the shell does not treat >= as a redirection
pip install -r requirements.txt

💡 Usage

Model Download

| Model | Description | Download |
|---|---|---|
| VisCPM-Chat-v1.1 | Latest multimodal conversation model, with stronger detail understanding and complex reasoning | link |
| VisCPM-Chat-balance | Multimodal conversation model balanced between Chinese and English | link |
| VisCPM-Chat-zhplus | Multimodal conversation model with stronger Chinese ability | link |
| VisCPM-Paint-balance | Text-to-image model balanced between Chinese and English | link |
| VisCPM-Paint-zhplus | Text-to-image model with stronger Chinese ability | link |

VisCPM-Chat

After downloading the model weights, you can run VisCPM-Chat with the following code (replace '/path/to/checkpoint' with the path where the model is stored).

Single-turn Conversation

VisCPM-Chat supports multimodal conversation in just a few lines of code. The safety check on input images is enabled by default.

# If a single GPU has less than 40 GB of memory, you can set the environment variable
# below (in the shell, before launching Python) and turn the safety module off.
# GPU memory usage then drops to about 5 GB, at the cost of longer inference time.
# This option relies on BMInf; install the bminf package first.
#   export CUDA_MEMORY_CPMBEE_MAX=1g
from VisCPM import VisCPMChat
from PIL import Image

model_path = '/path/to/checkpoint'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
# the safety check on input images is enabled by default
image_path = 'figures/vlu_case1.png'
image = Image.open(image_path).convert("RGB")

# "Which line by the famous Tang dynasty poet Li Bai would you use to describe this image?"
question = '如果用一句**唐代的著名诗人"李白"的古诗来描述这幅图像,你能想到什么?'
answer, _, _ = viscpm_chat.chat(image, question)

print(answer)

which produces output like the following:

"黄河之水天上来,奔流到海不复回" ("The waters of the Yellow River descend from the sky, rushing to the sea, never to return.") This line by Li Bai captures the surging, turbulent scene in the image: a rushing river cascades down from the mountains, forming a breathtaking sight that shows the power and majesty of nature.

Multi-turn Conversation

from VisCPM import VisCPMChat
from PIL import Image

model_path = '/path/to/checkpoint'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
# the safety check on input images is enabled by default
image_path = 'figures/vlu_case2.jpeg'
image = Image.open(image_path).convert("RGB")

# "What festival was this image taken on?"
question = '这幅图像是在哪个节日拍摄的?'
answer, context, vision_hidden_states = viscpm_chat.chat(image, question)

# pass the history `context` for multi-turn conversation
# "Which classical poem would you use to describe this picture?"
question = '你能用什么古诗描述这幅画?'
answer, context, _ = viscpm_chat.chat(image, question, context, vision_hidden_states=vision_hidden_states)

print(context)

which produces output like the following:

User: What festival was this image taken on?
AI: This image was taken on the Mid-Autumn Festival, the traditional festival of the full moon.
User: Which classical poem would you use to describe this picture?
AI: "明月几时有,把酒问青天" ("When did the bright moon first appear? Raising my cup, I ask the blue sky.") This line from Su Shi's "Shui Diao Ge Tou" fits the picture perfectly: on Mid-Autumn night, the moon hangs high in the sky, and an ancient building is bathed in moonlight, creating a serene and peaceful atmosphere.

API Guide

We provide an API endpoint so you can try VisCPM-Chat easily. The supported input format and usage are as follows:

import requests
import base64

url = "http://34.143.180.202:3389/viscpm"
resp = requests.post(url, json={
    # replace with your image path; the question "描述一下这张图片" means "Describe this image"
    "image": base64.b64encode(open("path/to/image", "rb").read()).decode(),
    "question": "描述一下这张图片",
})
resp = resp.json()
print(resp)
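The request body above can also be assembled offline. A minimal sketch using only the standard library (the helper name and the stand-in bytes are illustrative, not part of the VisCPM API):

```python
import base64
import json


def build_viscpm_payload(image_bytes: bytes, question: str) -> str:
    """Build the JSON body the VisCPM-Chat API expects:
    a base64-encoded image string plus a question string."""
    return json.dumps(
        {
            "image": base64.b64encode(image_bytes).decode(),
            "question": question,
        },
        ensure_ascii=False,
    )


# Stand-in bytes take the place of a real image file here.
payload = build_viscpm_payload(b"\x89PNG fake bytes", "描述一下这张图片")
body = json.loads(payload)
print(base64.b64decode(body["image"]))  # round-trips to the original bytes
```

Base64 encoding is needed because raw image bytes are not valid JSON; the server decodes the string back into the image.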

VisCPM-Paint

After downloading the model weights, you can run VisCPM-Paint with the following code (replace '/path/to/checkpoint' with the path where the model is stored).

(images: sample generations)

The text prompts used to generate the images above can be found in prompts.txt.

# If a single GPU has less than 40 GB of memory, you can set the environment variable
# below (in the shell, before launching Python) and turn the safety module off.
# GPU memory usage then drops to about 17 GB, at the cost of longer inference time.
# This option relies on BMInf; install the bminf package first.
#   export CUDA_MEMORY_CPMBEE_MAX=1g
from VisCPM import VisCPMPaint

painter = VisCPMPaint('/path/to/checkpoint', image_safety_checker=True, prompt_safety_checker=True, add_ranker=True)
# safety checks on the input text and output images, as well as reranking, are enabled by default
image = painter.generate('人闲桂花落,月静春山空')  # corresponds to the second image in the first row above
image.save('/data/test.png')

Safety checks on the input text and the output images are enabled by default in the code.

We also rerank generated images by default: for each input, four images are generated and the one most relevant to the input is returned, with relevance scored by Chinese-CLIP. Reranking makes image quality more stable but slows down generation; disable it if you want results faster.
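The best-of-n reranking described above can be sketched as follows. The scoring function is stubbed with `len` purely for illustration; the actual implementation scores image-text relevance with Chinese-CLIP and uses n = 4:

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")


def rerank(candidates: Iterable[T], score: Callable[[T], float]) -> T:
    """Return the candidate with the highest relevance score."""
    return max(candidates, key=score)


# Stub scorer: pretend longer "images" are more relevant.
best = rerank(["a", "bbb", "cc"], score=len)
print(best)  # bbb
```

The trade-off is simple: generating n candidates multiplies generation cost by roughly n, in exchange for more stable output quality.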

VisCPM-Paint currently uses a Chinese model for reranking; if you generate images from English prompts, please disable both the reranker and the input-text safety checker.

Low-resource Inference

To support more efficient low-resource inference, we use the BMInf toolkit to reduce GPU memory requirements. First install the dependency with pip install bminf, then set export CUDA_MEMORY_CPMBEE_MAX=1g in the shell (adjust the value to your needs), and run inference as described above. GPU memory usage can drop to as little as 5 GB for VisCPM-Chat and 17 GB for VisCPM-Paint.
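If you prefer to set the limit from Python rather than the shell, set the variable before importing VisCPM so BMInf sees it at initialization time. The variable name comes from the instructions above; the rest is standard `os.environ` usage:

```python
import os

# Must be set before importing VisCPM, so BMInf reads it during initialization.
# Adjust the value ("1g", "2g", ...) to your GPU memory budget.
os.environ["CUDA_MEMORY_CPMBEE_MAX"] = "1g"

print(os.environ["CUDA_MEMORY_CPMBEE_MAX"])  # 1g
```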

Demo Deployment

We provide a simple Gradio-based web demo. First install gradio with pip install gradio, then run:

git clone https://github.com/OpenBMB/VisCPM.git
cd VisCPM
python demo_chat.py # viscpm_chat demo, or
python demo_paint.py # viscpm_paint demo

Model Fine-tuning

To meet the needs of specific scenarios, we provide fine-tuning code for VisCPM-Chat so that users can fine-tune on private data. The code lives under ./finetune/ft_viscpm_chat and is used as follows:

# fetch the dataset
bash ./finetune/ft_viscpm_chat/get_llava150k_zh.sh
# fine-tune the model; remember to update the dataset and checkpoint paths in the script
bash ./finetune/ft_viscpm_chat/run_viscpm_chat_ft.sh
# node: 8
# batch_size: 8 * 1
# see './finetune/ft_viscpm_chat/config/viscpm_chat_ft.json' and './finetune/ft_viscpm_chat/run_viscpm_chat_ft.sh' for other options

Notes:

  • The fine-tuning code uses deepspeed-0.9.1 to configure the training environment; see this link for setup instructions.
  • The fine-tuning code has only been tested on Linux; if you fine-tune on another system, you may need to modify parts of the code.

🛡 Safety

Safety Statement

As a multimodal model, VisCPM generates content by learning from a large amount of public image and text data. However, it cannot understand or express personal opinions or value judgments, and nothing it outputs represents the views or positions of the model developers. Users of VisCPM-generated content are therefore responsible for evaluating and verifying it themselves.

Safety Module

To help users prevent the model from processing or producing content that violates widely accepted social values, we added a content-safety module to VisCPM. When the safety module detects that image or text content processed or generated by the model violates the safety policy, the content is blocked. We apply safety checks to the image inputs of VisCPM-Chat and to the text inputs and image outputs of VisCPM-Paint. The safety module is still imperfect and may produce both misses and false alarms; we will continue to improve it.
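The gating pattern described above, checking inputs before generation and outputs after it, can be sketched as follows. The checker and generator here are hypothetical stubs, not the library's actual API:

```python
from typing import Callable


def safe_generate(
    generate: Callable[[str], str],
    input_ok: Callable[[str], bool],
    output_ok: Callable[[str], bool],
    prompt: str,
    blocked: str = "[content blocked by safety module]",
) -> str:
    """Run safety checks on the input and the output; block flagged content."""
    if not input_ok(prompt):      # block unsafe inputs before generation
        return blocked
    result = generate(prompt)
    return result if output_ok(result) else blocked  # block unsafe outputs


# Toy stubs for illustration only.
print(safe_generate(str.upper, lambda p: True, lambda o: True, "hi"))   # HI
print(safe_generate(str.upper, lambda p: False, lambda o: True, "hi"))  # [content blocked by safety module]
```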

📝 License

The VisCPM model family is released under the "General Model License Agreement - Source Attribution - Publicity Restriction - Non-Commercial", which permits personal and research use. For commercial use, please contact [email protected] to negotiate a commercial license.

The CPM-Bee backbone is released under the "General Model License Agreement - Source Attribution - Publicity Restriction - Commercial Authorization", which permits commercial use; for commercial use, please contact [email protected] to obtain written authorization.

✅ TODO

  • Support model quantization to reduce inference cost

🏫 Institutions

This project is jointly developed by the following institutions:

Citation

If you find our work helpful, please consider citing the following papers:

@article{viscpm,
    title={Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages}, 
    author={Jinyi Hu and Yuan Yao and Chongyi Wang and Shan Wang and Yinxu Pan and Qianyu Chen and Tianyu Yu and Hanghao Wu and Yue Zhao and Haoye Zhang and Xu Han and Yankai Lin and Jiao Xue and Dahai Li and Zhiyuan Liu and Maosong Sun},
    journal={arXiv preprint arXiv:2308.12038},
    year={2023}
}

@article{muffin,
    title={Reformulating Vision-Language Foundation Models and Datasets Towards Universal Multimodal Assistants},
    author={Tianyu Yu and Jinyi Hu and Yuan Yao and Haoye Zhang and Yue Zhao and Chongyi Wang and Shan Wang and Yinxu Pan and Jiao Xue and Dahai Li and Zhiyuan Liu and Hai-Tao Zheng and Maosong Sun},
    journal={arXiv preprint arXiv:2310.00653},
    year={2023}
}

viscpm's People

Contributors

cppowboy · cuiunbo · hsiangyounglee · jameshujy · yaoyuanthu


viscpm's Issues

Apple Silicon support

Are there plans to move from CUDA to MPS acceleration? Converting CUDA to MPS in the current code (including model.to(device) and torch.mps.empty_cache()) leads to the following problem, and I am not sure what causes it.
(screenshot)

Data from human evaluation

Hi, thanks for the great work. I was wondering if you would be able to release the raw human responses used for Figures 7 and 8 in your paper. It would be very useful to have the prompts as well as the raw images seen per prompt, along with the alignment and fidelity ratings across all humans. Please let me know if you plan to release this data.

Multi-GPU inference deployment for VisCPM-Chat

Can the model be deployed across multiple GPUs? If so, could you explain how? A single 3090 does not have enough GPU memory: slightly longer conversations run out of memory, and BMInf is too slow.

Multimodal conversation demo figures

About the multimodal conversation demo figures: were they composed manually from the results, or did you build a chat front-end interface?
vlu_case4

Fine-tuning

Can the fine-tuning code use LoRA or QLoRA for training?

NameError: name 'files' is not defined. Did you mean: 'filter'?

Python 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

from VisCPM import VisCPMChat
from PIL import Image
model_path= '/home/jovyan/mm_large_model/viscpm_chat_zhplus_checkpoint.pt'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
Traceback (most recent call last):
File "", line 1, in
File "/home/jovyan/mm_large_model/VisCPM/VisCPM/viscpm_chat.py", line 36, in __init__
files('mypkg.data').joinpath('data1.txt')
NameError: name 'files' is not defined. Did you mean: 'filter'?
exit()

Paint cannot generate images

In the paint task I downloaded the models as the README instructs (I tried both paint checkpoints) and updated the path in demo_paint. It runs and prints "load image model success !", but when I send a prompt it cannot generate anything and keeps throwing errors. Nothing else seems wrong, and I do not know why.

Hello, running demo_chat.py raises CUDA out of memory on four 8 GB GPUs. What is the problem and how can I solve it?

(viscpm) zzz@zzz:~/yz/AllVscodes/VisCPM-main$ python demo_chat.py
use CUDA_MEMORY_CPMBEE_MAX=1g to limit cpmbee cuda memory cost
/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/bminf/wrapper.py:57: UserWarning: quantization is set to true but torch.nn.Linear is not found in your model.
warnings.warn("quantization is set to true but torch.nn.Linear is not found in your model.")
Traceback (most recent call last):
File "/home/zzz/yz/AllVscodes/VisCPM-main/demo_chat.py", line 11, in
viscpm_chat = VisCPMChat(model_path, image_safety_checker=False)
File "/home/zzz/yz/AllVscodes/VisCPM-main/VisCPM/viscpm_chat.py", line 72, in __init__
self.vlu_cpmbee.vpm.to(self.device)
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.93 GiB total capacity; 1.91 GiB already allocated; 42.31 MiB free; 1.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

ValueError: assignment destination is read-only

While testing VisCPM with a local image (a photo of a cat) and asking whether a cat appears in it, I get "ValueError: assignment destination is read-only".
(screenshot)
How should I fix this? Other images work fine. Is it caused by the safety check?

Chinese multimodal instruction dataset

Thank you for your impressive work. Which multimodal instruction datasets are you using, perhaps the translated LLaVA-150K data? Will you release the data for Chinese instruction tuning?

Help: error when loading a local model

Help wanted:
This is how I call it:

from VisCPM import VisCPMChat
from PIL import Image

model_path = './openbmb/VisCPM-Chat/pytorch_model.zhplus.bin'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)

Is my model loading path correct?
While loading the model, the process is automatically killed once GPU usage reaches about 811 MB, even though the GPU has enough memory. What is going on?

RuntimeError: Error(s) in loading state_dict for VLU_CPMBee: Missing key(s) in state_dict: "query", "vpm.beit3.text_embed.weight", "vpm.beit3.vision_embed.mask_token",

Python 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

from VisCPM import VisCPMChat
from PIL import Image
model_path = '/home/jovyan/mm_large_model/viscpm_paint_zhplus_checkpoint.pt'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
Traceback (most recent call last):
File "", line 1, in
File "/home/jovyan/mm_large_model/test/VisCPM/VisCPM/viscpm_chat.py", line 50, in __init__
self.vlu_cpmbee.load_state_dict(vlu_state_dict)
File "/opt/conda/envs/viscpm2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VLU_CPMBee:
Missing key(s) in state_dict: "query", "vpm.beit3.text_embed.weight", "vpm.beit3.vision_embed.mask_token", "vpm.beit3.vision_embed.cls_token", "vpm.beit3.vision_embed.proj.weight", "vpm.beit3.vision_embed.proj.bias", "vpm.beit3.encoder.embed_positions.A.weight", "vpm.beit3.encoder.embed_positions.B.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.A.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.A.bias", "vpm.beit3.encoder.layers.0.self_attn.k_proj.B.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.B.bias", "vpm.beit3.encoder.layers.0.self_attn.v_proj.A.weight", "vpm.beit3.encoder.layers.0.self_attn.v_proj.A.bias", "vpm.beit3.encoder.layers.0.s

API 502 Bad Gateway

Hello, when I try to process images through the API, I get a "502 Bad Gateway" response. Do you know what the problem is?
(screenshot)

Fine-tuning process gets killed

This error appeared during fine-tuning, using two 80 GB A100s. The host CUDA is 12.0, and the conda environment was installed with cu117.

-------final CMD is------
deepspeed ./finetune/ft_viscpm_chat/train_viscpm_chat.py --image_path /home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/coco/train2017/ --text_path /home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/llava_instruct_150k_zh.json --llm_path ./config/cpm-bee-10b.json --exp_name ft_viscpm_chat --model_dir /home/lon/zyx/model-weights/VisCPM/weight/ --query_num 64 --max_len 512 --batch_size 1 --save_step 500 --epochs 5 --deepspeed_config ./finetune/ft_viscpm_chat/config/deepspeed/viscpm_chat_ft.json --sft --tune_llm --tune_vision
-------final CMD end------
[2023-09-17 15:57:24,702] [WARNING] [runner.py:190:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
Detected CUDA_VISIBLE_DEVICES=6,7: setting --include=localhost:6,7
[2023-09-17 15:57:24,756] [INFO] [runner.py:540:main] cmd = /home/lon/anaconda3/envs/viscpm/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbNiwgN119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None ./finetune/ft_viscpm_chat/train_viscpm_chat.py --image_path /home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/coco/train2017/ --text_path /home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/llava_instruct_150k_zh.json --llm_path ./config/cpm-bee-10b.json --exp_name ft_viscpm_chat --model_dir /home/lon/zyx/model-weights/VisCPM/weight/ --query_num 64 --max_len 512 --batch_size 1 --save_step 500 --epochs 5 --deepspeed_config ./finetune/ft_viscpm_chat/config/deepspeed/viscpm_chat_ft.json --sft --tune_llm --tune_vision
[2023-09-17 15:57:26,315] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [6, 7]}
[2023-09-17 15:57:26,316] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=2, node_rank=0
[2023-09-17 15:57:26,316] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2023-09-17 15:57:26,316] [INFO] [launch.py:247:main] dist_world_size=2
[2023-09-17 15:57:26,316] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=6,7
[2023-09-17,15:57:28][I][1943889-fin.ini-initializer.py:116]- args: {args}
[2023-09-17,15:57:28][I][1943889-fin.uti.uti-utils.py:111]- init_distributed_mode LOCAL_RANK
| distributed init (rank 1): env://, gpu 1
| distributed init (rank 0): env://, gpu 0
[2023-09-17,15:57:29][I][1943890-tor.dis.dis-distributed_c10d.py:319]- Added key: store_based_barrier_key:1 to store for rank: 1
[2023-09-17,15:57:29][I][1943889-tor.dis.dis-distributed_c10d.py:319]- Added key: store_based_barrier_key:1 to store for rank: 0
[2023-09-17,15:57:29][I][1943889-tor.dis.dis-distributed_c10d.py:353]- Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
[2023-09-17,15:57:29][I][1943890-tor.dis.dis-distributed_c10d.py:353]- Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
[2023-09-17 15:57:31,322] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 1943889
[2023-09-17 15:57:31,348] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 1943890
[2023-09-17 15:57:31,348] [ERROR] [launch.py:434:sigkill_handler] ['/home/lon/anaconda3/envs/viscpm/bin/python', '-u', './finetune/ft_viscpm_chat/train_viscpm_chat.py', '--local_rank=1', '--image_path', '/home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/coco/train2017/', '--text_path', '/home/lon/zyx/VisCPM/finetune/ft_viscpm_chat/llava_instruct_150k_zh.json', '--llm_path', './config/cpm-bee-10b.json', '--exp_name', 'ft_viscpm_chat', '--model_dir', '/home/lon/zyx/model-weights/VisCPM/weight/', '--query_num', '64', '--max_len', '512', '--batch_size', '1', '--save_step', '500', '--epochs', '5', '--deepspeed_config', './finetune/ft_viscpm_chat/config/deepspeed/viscpm_chat_ft.json', '--sft', '--tune_llm', '--tune_vision'] exits with return code = -11

torch.cuda.OutOfMemoryError

Traceback (most recent call last):
File "/data/CV/caidaigang/model/VisCPM/./finetune/ft_viscpm_chat/train_viscpm_chat.py", line 206, in
main()
File "/data/CV/caidaigang/model/VisCPM/./finetune/ft_viscpm_chat/train_viscpm_chat.py", line 202, in main
train(model, args)
File "/data/CV/caidaigang/model/VisCPM/./finetune/ft_viscpm_chat/train_viscpm_chat.py", line 87, in train
vllm_engine, vllm_optim, _, _ = deepspeed.initialize(
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/__init__.py", line 165, in initialize
engine = DeepSpeedEngine(args=args,
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 308, in __init__
self._configure_optimizer(optimizer, model_parameters)
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1167, in _configure_optimizer
self.optimizer = self._configure_zero_optimizer(basic_optimizer)
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1398, in _configure_zero_optimizer
optimizer = DeepSpeedZeroOptimizer(
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 485, in __init__
self.initialize_optimizer_states()
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 614, in initialize_optimizer_states
self.optimizer.step()
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/optim/optimizer.py", line 140, in wrapper
out = func(*args, **kwargs)
File "/data/CV/caidaigang/anaconda3/envs/viscpm/lib/python3.10/site-packages/deepspeed/ops/adam/fused_adam.py", line 129, in step
state['exp_avg_sq'] = torch.zeros_like(p.data)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.22 GiB (GPU 0; 79.21 GiB total capacity; 76.87 GiB already allocated; 45.56 MiB free; 77.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

On an 80 GB A100 I still hit the problem above even after changing the batch size to 1. How can this be solved? Has anyone managed to fine-tune the model?

Error: AssertionError: Torch not compiled with CUDA enabled

D:\Anaconda\envs\viscpm\lib\site-packages\torchvision\models\detection\anchor_utils.py:63: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:77.)
device: torch.device = torch.device("cpu"),
Traceback (most recent call last):
File "D:\ViscpmProj\VisCPM\hello.py", line 5, in
viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
File "D:\ViscpmProj\VisCPM\VisCPM\viscpm_chat.py", line 40, in __init__
self.cpm_model = CPMBeeTorch(self.config)
File "D:\ViscpmProj\VisCPM\VisCPM\models\cpmbee.py", line 74, in __init__
self.input_embedding = EmbeddingExt(
File "D:\ViscpmProj\VisCPM\VisCPM\models\modules\embedding.py", line 77, in __init__
self.rotary_emb = RotaryEmbedding(
File "D:\ViscpmProj\VisCPM\VisCPM\models\modules\position_embedding.py", line 222, in __init__
base ** (torch.arange(0, dim, 2, device="cuda", dtype=torch.float32) / dim)
File "D:\Anaconda\envs\viscpm\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Gave it a test run, room for improvement, keep it up | test diary

test diary

Server: Matpool (矩池云), NVIDIA RTX A6000 with 48 GB VRAM, 86 GB RAM
Environment: Matpool system image, PyTorch 1.13.1

Pre-installed: Ubuntu 20.04, Python 3.8, PyTorch 1.13.1, CUDA 11.6, cuDNN 8, NVCC, VNC

Model used: viscpm_chat_zhplus_checkpoint.pt

Loading the model takes about 66 GB of RAM (forgot to take a screenshot)

GPU memory usage during inference:
(screenshot)

RAM usage during inference:
(screenshot)

Inference on the official example:
(screenshot)

Inference on a random image:
(screenshot)

Keep it up ⛽️

web demo share

import gradio as gr
from PIL import Image

from VisCPM import VisCPMChat

# change this to your model path
model_path = '/path/to/checkpoint'
viscpm_chat = VisCPMChat(model_path, image_safety_checker=False)
print("load  model  success !")


def upload_img(image,_chatbot,_app_session):
    image = Image.fromarray(image)
    _app_session['sts']=None
    _app_session['ctx']=''
    _app_session['img']=image
    _chatbot.append(('图片解析成功,可以和我对话了', ''))
    return _chatbot,_app_session


def respond( _question, _chat_bot,_app_cfg):
    _answer, _context, sts = viscpm_chat.chat(_app_cfg['img'], _question, _app_cfg['ctx'],
                                            vision_hidden_states=_app_cfg['sts'])
    _chat_bot.append((_question, _answer))
    _app_cfg['ctx']=_context
    _app_cfg['sts']=sts
    print('context', _context)
    return '',_chat_bot,_app_cfg


with gr.Blocks() as demo:
    app_session = gr.State({'sts':None,'ctx':None,'img':None})
    bt_pic = gr.Image(label="先上传一张图片")
    chat_bot = gr.Chatbot(label="聊天对话")
    txt_message = gr.Textbox(label="输入文字")

    txt_message.submit(respond, [ txt_message, chat_bot,app_session], [txt_message,chat_bot,app_session])
    bt_pic.upload(lambda: None, None, chat_bot, queue=False).then(upload_img, inputs=[bt_pic,chat_bot,app_session], outputs=[chat_bot,app_session])


demo.queue(concurrency_count=1, max_size=20).launch(share=False, debug=True, server_port=7866,
                                                    server_name="0.0.0.0")

Computing resource

Thanks to the authors for open sourcing this nice work.

May I ask how many GPUs are required to train or fine-tune VisCPM-Paint, and how long the training takes?

Thanks for your help!

finetune

Hello! I would like to ask how to fine-tune the model with LoRA.

web demo share (text 2 img)

import gradio as gr

from VisCPM import VisCPMPaint

# change this to your model path
model_path = '/opt/ai/VisCPM/viscpm_paint_balance_checkpoint.pt'
painter = VisCPMPaint(model_path, image_safety_checker=False, prompt_safety_checker=False, add_ranker=True)
print("load  image model  success !")


def gen_img(txt, imgs):
    image = painter.generate(txt)
    imgs.append(image)
    return "",imgs,imgs


with gr.Blocks() as demo:
    imgs = gr.State([])
    gallery = gr.Gallery(label="生成图片")
    txt_message = gr.Textbox(label="输入文字")
    txt_message.submit(gen_img, [txt_message, imgs], [txt_message, gallery,imgs])

demo.queue(concurrency_count=1, max_size=20).launch(share=False, debug=True, server_port=7866,
                                                    server_name="0.0.0.0")

The demos above were shared by community members. Everyone is welcome to join the community group to discuss.
