
qwen-vl's People

Contributors

dsdanielpark, eltociear, eric7733, honorrong, huybery, ichaobuster, jinze1994, justinlin610, shuaibai623, simonjjj, tinytangent, vealocia, yangapku


qwen-vl's Issues

Evaluation of TouchStone

Can you provide automated evaluation scripts, or more details about how the evaluation is run?

[BUG] Computing the ANLS metric runs infographicsvqa_eval.py, but that file cannot be found.

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

When computing the ANLS metric on the DocVQA dataset, evaluate_vqa.py invokes `python infographicsvqa_eval.py`, but that file is not included in the repository.

About TouchStone

Is TouchStone a VQA benchmark or a multi-turn dialogue benchmark? Will this benchmark be open-sourced?

[BUG] PNG images are not supported for box selection; the output image is all black

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

After uploading a PNG image and asking the model to draw a box around an element in it, the output image is entirely black.

Expected Behavior

The image should render normally and the box should be drawn correctly.

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

The cause is that PNG pixel data loaded via the plt (matplotlib) library are floats in [0, 1], but they are treated as 8-bit RGB data; force-casting those floats to int turns nearly every pixel to 0, so the whole image comes out black.
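A minimal workaround sketch (unofficial; the function and file names are illustrative): rescale float-valued PNG data to 8-bit before treating it as RGB, or sidestep the issue by loading with PIL and converting to RGB.

import numpy as np
from PIL import Image

def to_uint8_rgb(img):
    # matplotlib-style PNG loading yields float32 data in [0, 1]; casting it
    # straight to int gives zeros (an all-black image), so rescale first.
    if img.dtype != np.uint8:
        img = (np.clip(img, 0.0, 1.0) * 255).round().astype(np.uint8)
    return img[..., :3]  # drop the alpha channel if present

# Alternatively, load with PIL, which already returns 8-bit RGB:
rgb = np.array(Image.open("input.png").convert("RGB"))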

💡 [REQUEST] - Replicate API

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

It would be awesome if you could upload the model to replicate.com so it is more accessible in applications.

Basic Example

Replicate is a popular website for model deployment. It would be awesome if you could upload the model there so we can use it through the API: https://replicate.com/

Drawbacks

There are no drawbacks.

Unresolved questions

No response

Normalization Range

Hi,
Can you tell me how to get the normalization range so that I can re-project the drawn box coordinates onto the same image myself? Or is there any way to access the bbox-drawing function directly?
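For reference, a hedged sketch of the re-projection, assuming the convention stated in the repo's README that box coordinates are normalized to the range [0, 1000) relative to image width and height (verify against your version):

import re

def reproject_boxes(response, width, height):
    # Parse <box>(x1,y1),(x2,y2)</box> strings and rescale the assumed
    # 0-1000 normalized coordinates back to pixel coordinates.
    boxes = []
    for x1, y1, x2, y2 in re.findall(r"\((\d+),(\d+)\),\((\d+),(\d+)\)", response):
        boxes.append((int(x1) * width / 1000, int(y1) * height / 1000,
                      int(x2) * width / 1000, int(y2) * height / 1000))
    return boxes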

[BUG] Fine-tuning on coco-en with the ModelScope SFT example raises an error

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Error:
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
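For context, a minimal sketch of what triggers this class of error in PyTorch generally (not the repo's actual code):

import torch

w = torch.zeros(3, requires_grad=True)  # a leaf tensor
v = w[:2]                               # a view of that leaf
v.add_(1.0)  # RuntimeError: a view of a leaf Variable that requires grad
             # is being used in an in-place operation.

# One common remedy is to perform the mutation where autograd does not
# record it, e.g. during initialization:
with torch.no_grad():
    w[:2] += 1.0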

Expected Behavior

No response

Steps To Reproduce

After fully setting up the SFT environment, I ran the ModelScope code directly (changing only a few hub parameters).

Environment

- OS: CentOS
- Python: 3.10
- Transformers: 4.32.1
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.7

Anything else?

Any advice would be appreciated. Thanks!

❓[Question] Processing details of the grounding dataset GRIT

The original paper describes the GRIT processing details: "We use the greedy algorithm to clean the caption to make sure each image contains the most box labels with no recursive box labels."

What exactly does this mean? Could you provide some examples to illustrate this operation?

Thanks.

[BUG] Download link for evaluation is not available.

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Hi, thanks for your work.
The download links (https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/xxxx) are not open to the public.
I hit this issue when downloading the evaluation annotation files:

--2023-08-27 08:37:44--  https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/evaluation/vizwiz/vizwiz_val.jsonl
Resolving ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com (ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com)... 39.101.35.33
Connecting to ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com (ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com)|39.101.35.33|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2023-08-27 08:37:44 ERROR 403: Forbidden.

Could you change the permissions on the download links?

Expected Behavior

No response

Steps To Reproduce

wget https://ofasys-wlcb.oss-cn-wulanchabu.aliyuncs.com/QwenVL/evaluation/nocaps/nocaps_val.json

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

[BUG] Unrecognized configuration class <class 'transformers_modules.Qwen.Qwen-VL-Chat.a3d284e60f9c8298ed4c7fe6683f6dc1acff4c6c.configuration_qwen.QWenConfig'> to build an AutoTokenizer.

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.31.0
PyTorch: 2.0.1
CUDA: 11.4

Anything else?

ValueError: Unrecognized configuration class <class 'transformers_modules.Qwen.Qwen-VL-Chat.a3d284e60f9c8298ed4c7fe6683f6dc1acff4c6c.configuration_qwen.QWenConfig'> to build an AutoTokenizer.
Model type should be one of AlbertConfig, AlignConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, BloomConfig, BridgeTowerConfig, CamembertConfig, CanineConfig, ChineseCLIPConfig, ClapConfig, CLIPConfig, CLIPSegConfig, CodeGenConfig, ConvBertConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, DPRConfig, ElectraConfig, ErnieConfig, ErnieMConfig, EsmConfig, FlaubertConfig, FNetConfig, FSMTConfig, FunnelConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GPTSanJapaneseConfig, GroupViTConfig, HubertConfig, IBertConfig, InstructBlipConfig, JukeboxConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LlamaConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MgpstrConfig, MobileBertConfig, MPNetConfig, MraConfig, MT5Config, MvpConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OneFormerConfig, OpenAIGPTConfig, OPTConfig, OwlViTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, Pix2StructConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, RagConfig, RealmConfig, ReformerConfig, RemBertConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2TextConfig, Speech2Text2Config, SpeechT5Config, SplinterConfig, SqueezeBertConfig, SwitchTransformersConfig, T5Config, TapasConfig, TransfoXLConfig, UMT5Config, ViltConfig, VisualBertConfig, Wav2Vec2Config, Wav2Vec2ConformerConfig, WhisperConfig, XCLIPConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig.
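This error usually means the tokenizer was loaded without trust_remote_code, so transformers fell back to its built-in config-to-tokenizer mapping. A hedged sketch of the usual fix:

from transformers import AutoTokenizer

# trust_remote_code=True lets transformers import the custom QWenTokenizer
# shipped with the checkpoint instead of consulting the built-in mapping.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)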

[BUG] After loading the model in the web UI, all answers are in English

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

(screenshot)

After loading the model in the web UI, every answer is in English.

Expected Behavior

Answers should be in Chinese.

Steps To Reproduce

generation_config.json

{
  "chat_format":"chatml",
  "do_sample": true,
  "eos_token_id": 151643,
  "max_new_tokens": 512,
  "max_window_size": 6144,
  "pad_token_id": 151643,
  "top_k": 0,
  "top_p": 0.5,
  "transformers_version": "4.31.0"
}

config.json

{
  "_name_or_path": "./",
  "architectures": [
    "QWenLMHeadModel"
  ],
  "attn_dropout_prob": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_qwen.QWenConfig",
    "AutoModelForCausalLM": "modeling_qwen.QWenLMHeadModel"
  },
  "bf16": false,
  "emb_dropout_prob": 0.0,
  "fp16": false,
  "fp32": false,
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 22016,
  "kv_channels": 128,
  "layer_norm_epsilon": 1e-06,
  "max_position_embeddings": 8192,
  "model_type": "qwen",
  "no_bias": true,
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "onnx_safe": null,
  "rotary_emb_base": 10000,
  "rotary_pct": 1.0,
  "scale_attn_weights": true,
  "seq_length": 2048,
  "tie_word_embeddings": false,
  "tokenizer_type": "QWenTokenizer",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.31.0",
  "use_cache": true,
  "use_dynamic_ntk": true,
  "use_flash_attn": false,
  "use_logn_attn": true,
  "visual": {
    "heads": 16,
    "image_size": 448,
    "image_start_id": 151857,
    "layers": 48,
    "mlp_ratio": 4.9231,
    "output_dim": 4096,
    "patch_size": 14,
    "width": 1664
  },
  "vocab_size": 151936
}

Environment

- OS: Ubuntu
- Python: 3.9
- Transformers: 4.31.0
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.7

Anything else?

No response

💡 [REQUEST] - Request for complete evaluation data

Start Date

9/1/23

Implementation PR

No response

Reference Issues

No response

Summary

Hello, and thank you for your prompt response in providing evaluation data. Upon reviewing it, I've noticed that certain datasets, such as GQA and DocVQA, have not been released yet. Is there a planned schedule for releasing the remaining evaluation data? Thank you.

Basic Example

data annotation files in eval_mm/EVALUATION.md

Drawbacks

Some evaluation data is missing.

Unresolved questions

No response

Pre-training data cleaning

Thank you for this wonderful open-source work.
Could you share a few details about the pre-training data cleaning described in Appendix A.1?

  1. How large does an image's aspect ratio have to be for removal?
  2. Smaller than what size is an image removed?
  3. What CLIP score threshold is used?
  4. What range must the text length fall within?

💡 [REQUEST] - Is there more detailed fine-tuning documentation?

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

The data preparation and processing instructions on ModelScope are not very clear.

Basic Example

Drawbacks

Unresolved questions

No response

Batch for image captioning

Is anyone working on a recursive-folder batch script for generating Stable Diffusion captions? Speed needs to be under 2 seconds per image for a maximum of 50 tokens, output as English phrases.
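There doesn't appear to be an official script; a minimal unofficial sketch, assuming model and tokenizer are already loaded as in the README (the folder path and prompt are illustrative):

from pathlib import Path

# Recursively caption every image under a folder and write one .txt per image.
for path in Path("images").rglob("*"):
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    query = tokenizer.from_list_format([
        {"image": str(path)},
        {"text": "Describe this image in a short English phrase."},
    ])
    response, _ = model.chat(tokenizer, query=query, history=None)
    path.with_suffix(".txt").write_text(response)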

I have a question about open-vocabulary detection [BUG]

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

I want to test the open-vocabulary detection task with the Qwen-VL model, but I can't get it to output all detection boxes for a specific category using prompts like 'all shoes' or 'all clothes'.

Expected Behavior

How can it output all detection boxes for a specific category?

Steps To Reproduce

No response

Environment

- OS: Ubuntu
- Python: 3.9
- Transformers: 4.31.0
- PyTorch: 1.12.0
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.4

Anything else?

No response

What are the parameters that the model accepts?

Where can we find the parameter list?
For example: max_length, min_length, temperature, and any other parameters.
It would be great if you could add the parameter list and descriptions to the README or somewhere similar.
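Until then, the model accepts the standard transformers generation parameters via its GenerationConfig; a hedged sketch of overriding a few common ones (parameter names follow the generic transformers API, not a Qwen-specific list):

# Inspect the defaults shipped with the checkpoint, then override as needed.
print(model.generation_config)
model.generation_config.max_new_tokens = 256
model.generation_config.temperature = 0.7
model.generation_config.top_p = 0.9
model.generation_config.do_sample = True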

Questions about training data

  1. What type of in-house data is used in the pre-training phase?
  2. The Multi-task Pre-training stage uses OCR data, including SynthDoG-en & zh and Common Crawl PDF & HTML. How were the Common Crawl PDF & HTML data obtained? Are they from an existing dataset, or did you build them yourselves? If you built them, how was it done?

Thanks for your work!

How can the program run in parallel across multiple GPUs?

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

How can the program run in parallel across multiple GPUs?

Basic Example

How can the program run in parallel across multiple GPUs?
I changed device_map = "cuda" to device_map = "auto".
The program now uses multiple GPUs, but it raises:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:3!
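One hedged workaround is an explicit device map instead of "auto", keeping the embeddings and visual encoder together; the module names below are assumptions read from modeling_qwen.py, not an officially supported layout:

from transformers import AutoModelForCausalLM

# Hypothetical explicit two-GPU layout (module names assumed from the repo:
# transformer.wte / transformer.visual / transformer.h.* / transformer.ln_f
# / lm_head); adjust the split point for your memory budget.
device_map = {"transformer.wte": 0, "transformer.visual": 0,
              "transformer.ln_f": 1, "lm_head": 1}
for i in range(32):  # Qwen-VL-Chat has 32 transformer blocks
    device_map[f"transformer.h.{i}"] = 0 if i < 16 else 1

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-VL-Chat", device_map=device_map, trust_remote_code=True
).eval()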

Drawbacks

How can the program run in parallel across multiple GPUs?

Unresolved questions

No response

Is Image Resolution a Key Factor?

Thank you for the outstanding work. I would like to understand the reasons behind the model's exceptional performance. Do you think it is related to resolution? The resolution of mPLUG-Doc is 1024 while yours is only 448, yet you achieve better performance than mPLUG-Doc on DocVQA. I also noticed that your adapter uses 256 queries. Is this query count also a crucial factor?

I look forward to your response!

💡 [REQUEST] - Could you provide a tutorial or documentation for fine-tuning Qwen-VL on a custom dataset?

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Please add a fine-tuning tutorial that uses a custom dataset.

Basic Example

Drawbacks

Please add a fine-tuning tutorial that uses a custom dataset.

Unresolved questions

Please add a fine-tuning tutorial that uses a custom dataset.

[BUG] Qwen/Qwen-VL-Chat-Int4 hangs with no response while loading

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

model = AutoModelForCausalLM.from_pretrained(
    args.checkpoint_path,
    device_map=device_map,
    trust_remote_code=True,
    resume_download=True,
).eval()

This code produces no output at all; checkpoint_path has been checked repeatedly and is correct.

Expected Behavior

The model loads successfully.

Steps To Reproduce

No response

Environment

- OS: Windows 11
- Python: 3.10.11
- Transformers:
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8

Anything else?

No response

Small problem with the numbers

Congratulations!

I notice that the kosmos-2 values in this table are not a fair comparison: the 66.7 on Flickr30k and 45.6 on VQAv2-dev were obtained from the model without instruction tuning.

We have updated the final performance on Flickr30k and VQAv2 on our GitHub page. Specifically, kosmos-2 achieves 80.5 on Flickr30k and 51.1 on VQAv2 in the zero-shot setting.

Sorry for the confusion. Could you update our numbers?
Thanks!

💡 [REQUEST] - <Support streaming output>

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Could streaming output be supported?
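In the meantime, a hedged sketch using transformers' generic TextIteratorStreamer (this bypasses model.chat, so prompt_text is assumed to be an already-formatted ChatML prompt string; it would need to match the model's real template):

from threading import Thread
from transformers import TextIteratorStreamer

streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer(prompt_text, return_tensors="pt").to(model.device)
# Run generation in a background thread and print tokens as they arrive.
Thread(target=model.generate,
       kwargs=dict(**inputs, streamer=streamer, max_new_tokens=512)).start()
for piece in streamer:
    print(piece, end="", flush=True)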

Basic Example

1

Drawbacks

It would improve the user experience.

Unresolved questions

No response

Fine-tuning LoRA code

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Can you provide LoRA code for fine-tuning with image and text as input and text as output?
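There is no official LoRA recipe in this repo yet; a minimal unofficial sketch with the peft library, where the target_modules names are assumptions based on Qwen's GPT-2-style attention naming (c_attn, c_proj):

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn", "c_proj"],  # assumed Qwen attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # wrap the loaded Qwen-VL model
model.print_trainable_parameters()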

Basic Example

Custom data modelling.

Drawbacks

Nothing

Unresolved questions

No response

💡 [REQUEST] - <Support fine-tuning>

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Are there plans to support fine-tuning?

Basic Example

1

Drawbacks

1

Unresolved questions

No response

💡 [REQUEST] - <Fine-tuning feature>

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

How should we fine-tune for downstream image-text content creation subtasks? Could you open-source a fine-tuning tutorial? Should we use Qwen-VL or Qwen-VL-Chat?

Basic Example

As titled.

Drawbacks

It would help the community grow.

Unresolved questions

No response

Loading in 4-bit is not working.

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Sample code:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from transformers.generation import GenerationConfig
import torch

torch.manual_seed(1234)

model_name = "Qwen/Qwen-VL-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# quantization configuration for NF4 (4 bits)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    trust_remote_code=True,
    bf16=True,  # speculation, not confirmed: combining bf16=True with 4-bit quantization may be the conflict here
    quantization_config=quantization_config,
).eval()

Expected Behavior

Loading should not raise any error.

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No

[BUG] Why does Qwen-VL-Chat give different output for the same query/prompt?

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Why does Qwen-VL-Chat give different output for the same query/prompt?
I get different results on every run.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig

torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", bf16=True, trust_remote_code=True).eval()
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

query = tokenizer.from_list_format([
    {'image': im2},  # either a local path or a URL
    {'text': 'Compose a detailed account of the image'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)

Output 1:

The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the patient's internal organs during the procedure. The monitor is located towards the right side of the room, with the surgeons standing on the left side.

The surgeons are wearing scrubs and appear to be in the middle of the operation. One of the surgeons is standing closer to the front of the room, while the other is positioned towards the back. A bed can be seen in the room, likely where the patient is lying during the surgery.

Output 2:

The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the area they are working on. The monitor is located towards the right side of the room, and the surgeons are positioned on the left side.

The surgeons are wearing surgical gowns and masks, indicating their professional attire for the procedure. One of the surgeons is holding a tool, likely a surgical instrument, while the other one is also equipped with a tool, possibly a scalpel. The operating room is equipped with various medical devices, including a monitor, a keyboard, and a mouse. A chair can be seen in the room, possibly for the patient to sit on during the surgery.

Output 3:

The image depicts a medical setting with two surgeons performing surgery in an operating room. Both surgeons are focused on a monitor, which displays an image of the patient's internal organs during the procedure. The monitor is located in the center of the room, providing the surgeons with real-time information to assist them in their work.

The surgeons are wearing blue gowns and masks, indicating their professional attire for the surgery. One of the surgeons is standing closer to the monitor, while the other is located more towards the right side of the room. The operating room is equipped with various medical devices, including a bed for the patient and a clock on the wall.

Expected Behavior

Each run should give the same response.

Steps To Reproduce

Follow the code given above.

Environment

- OS: AWS SageMaker (Amazon Linux 2, JupyterLab 3 (notebook-al2-v2))
- Python: 3.10
- Transformers: 4.31.0
- PyTorch: 2.0.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8

Anything else?

I have also tried different images and different prompts; the issue persists there as well.
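A likely cause (an inference from the released generation_config.json, which sets do_sample to true, not an official statement): each call samples tokens, and the single torch.manual_seed(1234) at import time only fixes the first generation. A sketch of two ways to make runs repeatable:

import torch

# Option 1: greedy decoding, no sampling at all.
model.generation_config.do_sample = False

# Option 2: keep sampling but re-seed immediately before every call.
torch.manual_seed(1234)
response, history = model.chat(tokenizer, query=query, history=None)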

💡 [REQUEST] - Can the backbone be called to extract features?

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Can this model be used for image-text retrieval, similar to CLIP?
How do I call the backbone to extract features? Is there any reference code?
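A hedged sketch: in modeling_qwen.py the vision backbone appears reachable as model.transformer.visual (attribute name inferred from the repo's code and tracebacks), and its encode() takes a list of image paths or URLs; treat the output shape as an assumption to verify:

import torch

# Expected (unverified) output: [n_images, 256, 4096] query features from the
# vision encoder plus resampler, which could serve as CLIP-like image features.
with torch.no_grad():
    feats = model.transformer.visual.encode(["demo.jpeg"])
print(feats.shape)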

Basic Example

Drawbacks

Unresolved questions

No response

[BUG] Failed to use inputs_embeds instead of input_ids in the generate function.

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

In Qwen-VL-Chat, the model takes a URL or local path as input to load images. I tried to load images with a pre-defined dataset and build inputs_embeds outside the forward function.

When I pass inputs_embeds into the generate function, the following line (the first line in the forward function of QWenModel) raises an error:

if past_key_values is None and torch.any(input_ids == self.config.visual['image_start_id']):

Here input_ids is None, so the comparison evaluates to False (a plain bool). torch.any needs a tensor as input rather than a bool, so the error is raised.

I think a quick check on whether input_ids is None could fix it:

if past_key_values is None and input_ids is not None and torch.any(input_ids == self.config.visual['image_start_id']):

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

💡 [REQUEST] - <title>

Start Date

08/28/2023

Implementation PR

No response

Reference Issues

I have two questions:

  1. How many resources were used for the pretraining and multi-task pretraining stages respectively, and how long is each expected to take?
  2. How much does pure-text performance degrade, and are there any comparison results?

Summary

Basic Example

Drawbacks

Unresolved questions

No response

[BUG] <Answers do not follow the required fixed format>

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

When given an image and asked to answer in a fixed format, the model does not strictly follow the allowed answers. For example:

Input:
an image
text: Is the person in this picture male? If yes, answer only "Yes"; if no, answer only "No"; if it cannot be determined, answer only "Cannot be determined". No other answers are allowed.

Output:
Model reply: 不是。 (a free-form "No")

Problem: the reply is something other than Yes / No / Cannot be determined.

This mainly happens with web_demo_mm.py. Is this a prompt-engineering issue, a model issue, or a post-processing issue in web_demo_mm?

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

Is it possible to know how the image recognition works?

Start Date

No response

Implementation PR

No response

Reference Issues

No response

Summary

Is it possible to include it in the documentation?

Basic Example

N/A

Drawbacks

N/A

Unresolved questions

No response

[BUG] Is this for real? Or am I using it the wrong way?

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

(screenshot)
Why is the model so stubborn? It cannot understand my input and seems to be stuck on the first message.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

Are you updating the conversation state (passing the returned history back into the next call)?
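If not, a sketch of the intended usage (per the README): the chat API is stateless unless the returned history is passed back in on the next call.

# First turn: no history yet.
response, history = model.chat(tokenizer, query=query, history=None)
# Follow-up turns must pass the returned history back in, otherwise the model
# sees every message as the start of a new conversation.
response, history = model.chat(tokenizer, "a follow-up question", history=history)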

Video

Does the model support video? If not, why are there video scores for it on SEED-Bench?

[BUG] Inference fails when device_map is set to "auto"

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

Running the following code with bf16=True and device_map="auto" enabled:

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

# Enable bf16 precision; recommended on A100, H100, RTX 3060, RTX 3070, etc. to save GPU memory
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()

model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)

query = tokenizer.from_list_format([
    {'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'}, # either a local path or a URL
    {'text': '这是什么?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)

response, history = model.chat(tokenizer, '框出图中击掌的位置', history=history)
print(response)
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
  image.save('1.jpg')
else:
  print("no box")

This produces the following error:

(venv) PS D:\Python\Qwen-VL> python .\debug.py
Loading checkpoint shards: 100%|████████████| 10/10 [00:14<00:00,  1.47s/it]
Traceback (most recent call last):
  File "D:\Python\Qwen-VL\debug.py", line 16, in <module>
    response, history = model.chat(tokenizer, query=query, history=None)
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 918, in chat
    outputs = self.generate(
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 1031, in generate
    return super().generate(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\transformers\generation\utils.py", line 1642, in generate
    return self.sample(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\transformers\generation\utils.py", line 2724, in sample
    outputs = self(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 830, in forward
    transformer_outputs = self.transformer(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\modeling_qwen.py", line 570, in forward
    images = self.visual.encode(images)
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\visual.py", line 426, in encode
    return self(images)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "D:\cache\huggingface\modules\transformers_modules\Qwen\Qwen-VL-Chat\1fb8c15e1ad1d0d4d5a17b550776d28a3f7ef028\visual.py", line 398, in forward
    x = self.conv1(x)  # shape = [*, width, grid, grid]
  File "D:\Python\Qwen-VL\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\hooks.py", line 290, in pre_forward
    return send_to_device(args, self.execution_device), send_to_device(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 151, in send_to_device
    return honor_type(
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 83, in honor_type
    return type(obj)(generator)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 152, in <genexpr>
    tensor, (send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys) for t in tensor)
  File "D:\Python\Qwen-VL\venv\lib\site-packages\accelerate\utils\operations.py", line 167, in send_to_device
    return tensor.to(device, non_blocking=non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS: Windows 11
- Python: 3.10.9
- Transformers: tried both 4.31.0 and 4.32.0
- PyTorch: 2.0.1+cu118
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 11.8
- GPU: RTX 4080

Anything else?

No response

dataset weights

Thank you for this excellent open-source work! The sampling proportions across different parts of the dataset differ significantly. Could you please explain how the dataset weights are set in Pretraining and Multi-task Pretraining?

[BUG] It even gets basic addition and subtraction wrong. Unbelievable!

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

(screenshot)

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

the snapshot_download model cannot use in offline mode

My GPU server cannot connect to the internet, so I downloaded the model with snapshot_download:

from huggingface_hub import snapshot_download
snapshot_download(repo_id="Qwen/Qwen-VL-Chat")

The model is placed in /root/.cache/huggingface/hub/models--Qwen--Qwen-VL-Chat, which contains blobs/refs/snapshots folders.
When I load the model to run the demo, an error occurs:

tokenizer = AutoTokenizer.from_pretrained("/root/.cache/huggingface/hub", repo_id="Qwen/Qwen-VL-Chat")

/root/.cache/huggingface/hub/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//root/.cache/huggingface/hub//None' for available files.

I also tried the folder /root/.cache/huggingface/hub/models--Qwen--Qwen-VL-Chat/snapshots/0eecbfae27b784c8d5e69b1d497d3589874565a8:

ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.

So how do I load a model fetched with snapshot_download? Thank you!
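A sketch of what usually works: point from_pretrained at the snapshot folder itself (not the hub root) and pass trust_remote_code=True so the custom QWenTokenizer class can be imported:

from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/root/.cache/huggingface/hub/models--Qwen--Qwen-VL-Chat/snapshots/0eecbfae27b784c8d5e69b1d497d3589874565a8"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, device_map="cuda", trust_remote_code=True).eval()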

[BUG] When the demo loads a local model (manually downloaded from HF rather than auto-downloaded by the framework), the code cannot find the SimSun.ttf font file, so Chinese box labels are garbled

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

  1. Manually cloned https://huggingface.co/Qwen/Qwen-VL-Chat and confirmed that it contains SimSun.ttf.
  2. Started the demo, loading the model from the local download location.
  3. SimSun.ttf cannot be found, so the Chinese text in the box labels is garbled.

Cause: the code loads the font with try_to_load_from_cache("Qwen/Qwen-VL-Chat", "SimSun.ttf"), which only works when the HF framework auto-downloads the model, not when it is cloned manually.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response
