Comments (9)
from mm-cot.
Please share it on Colab!
mark
Can this be used to build a chatbot?
Thank you for the demo in Colab. Unfortunately, I got an error.
> Thank you for the demo in Colab. Unfortunately, I got an error.
Try this:

```python
from model import T5ForMultimodalGeneration
from transformers import T5Tokenizer

patch_size = (100, 256)  # shape of DETR-style vision features
save_dir = "./models/MM-CoT-UnifiedQA-base-Rationale"
tokenizer = T5Tokenizer.from_pretrained(save_dir)
padding_idx = tokenizer._convert_token_to_id(tokenizer.pad_token)
model = T5ForMultimodalGeneration.from_pretrained(
    save_dir, patch_size=patch_size, padding_idx=padding_idx, save_dir=save_dir).cuda()
```
Hi, I tried the Colab notebook but I am getting the error below:
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType
Below is the full error:
`---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_8332\3201768157.py in
----> 1 outputs = model.generate(input_ids, max_length=512) # reads the vision feature if file detacted
2 show_result(outputs)
3 #outputs
~\anaconda3\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.clone():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
~\anaconda3\lib\site-packages\transformers\generation\utils.py in generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, **kwargs)
1389
1390 # 11. run greedy search
-> 1391 return self.greedy_search(
1392 input_ids,
1393 logits_processor=logits_processor,
~\anaconda3\lib\site-packages\transformers\generation\utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
2177
2178 # forward pass to get next token
-> 2179 outputs = self(
2180 **model_inputs,
2181 return_dict=True,
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~\Desktop\My Projects\mm-cot\model.py in forward(self, input_ids, image_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
116 hidden_states = encoder_outputs[0]
117
--> 118 image_embedding = self.image_dense(image_ids)
119 image_att, _ = self.mha_layer(hidden_states, image_embedding, image_embedding)
120
~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
~\anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType`
Is the issue with the vision features? Can anyone help me debug this?
Have you solved this issue? I am hitting the same bug. It seems the vision features are not being passed to the model.
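For what it's worth, the traceback points at `image_ids` being `None` when it reaches `self.image_dense` in `model.py`, i.e. the vision-feature file was never loaded before `generate` ran. A minimal sketch of a guard that always supplies a feature array of the expected DETR shape `(100, 256)` — the helper name and fallback-to-zeros behavior are my own suggestion, not part of the repo:

```python
import numpy as np

def load_vision_features(path=None, patch_size=(100, 256)):
    """Return a (1, *patch_size) float32 array of vision features.

    Falls back to all-zeros when no feature file is given, so the
    model's forward pass never receives image_ids=None.
    """
    if path is None:
        return np.zeros((1, *patch_size), dtype=np.float32)
    feats = np.load(path)   # e.g. a DETR-style .npy feature file
    if feats.ndim == 2:     # single example: add a batch dimension
        feats = feats[None, ...]
    return feats.astype(np.float32)

image_ids = load_vision_features()  # shape (1, 100, 256), all zeros
# image_ids = torch.tensor(image_ids).cuda()  # then pass alongside input_ids
```

With zeros as a placeholder the `TypeError` should go away, though the model will of course generate without any real visual signal until actual extracted features are loaded.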
I have the same error:
TypeError: linear(): argument 'input' (position 1) must be Tensor, not NoneType
Any updates on this issue?
Related Issues (20)
- How are the vision features generated here ? How to view detr.npy and clip.npy images HOT 1
- typo in utils.prompt line 104 and 106 HOT 1
- Implementation Mm-cot HOT 1
- Question: PC requirements
- How to train
- Question about two stages training? HOT 1
- I can't find main_central.py. HOT 1
- ImportError: cannot import name 'Conv2dSame' from 'timm.models.layers' (unknown location) HOT 5
- [17:28:39] [Model]: Loading declare-lab/flan-alpaca-large... HOT 3
- Where is Gold Rationale from? HOT 1
- "blip2_vicuna_instruct" can't find lead to nonetype HOT 1
- Request for Release of Multimodal-CoT Large 738M Model HOT 3
- While running `extract_caption.py`, raise many garbled text. So will you put the models in `https://huggingface.co/Salesforce/instructblip-vicuna-7b/tree/main` the `llm` folder? HOT 1
- ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`image_ids` in this case) have excessive nesting (inputs type `list` where type `int` is expected). HOT 1
- OverflowError: out of range integral type conversion attempted HOT 3
- Where is the main_central.py
- Can not train on GPU.
- Question on fine-tuning time HOT 1
- How to use the mm-cot frame as a utility library through local LLM? HOT 1
- OverflowError: can't convert negative int to unsigned HOT 1