Comments (7)
@molbap
Yes, the evaluation part is giving me an error.
Training itself is working fine, and I can see the fine-tuning is working okay (I checked by running prediction on the training data).
@muellerzr @molbap @hadariru I think this happens because the trainer accepts the case where loss is None.
transformers/src/transformers/trainer.py
Line 3765 in ab0f050
When the loss is None and you want to compute metrics, the losses variable is never assigned, because gathering None across multiple GPUs does nothing. So you cannot del the losses variable, since it was never defined.
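To make the failure concrete, here is a minimal runnable sketch of the pattern (hypothetical names, not the actual trainer.py code):

def evaluation_loop_sketch(batch_losses):
    for loss in batch_losses:
        if loss is not None:
            losses = loss  # only bound when the model returns a loss
    del losses  # unconditional delete: fails if every loss was None

try:
    evaluation_loop_sketch([None, None])
except UnboundLocalError as e:
    print(e)  # local variable 'losses' referenced before assignment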
cc @molbap
Thanks for the issue @hadariru - just one note: it looks like the fine-tuning itself is working (i.e. the loss goes down as long as you don't add eval), so it's the evaluation part in Trainer that has an issue? It seems the only way for losses to never be assigned would be prediction_step failing to return a loss. cc @muellerzr in case you are familiar, will take a look at this soon
I think there are two ways to make this work:
- @hadariru Make sure that Paligemma returns an appropriate loss value (check whether you set the appropriate arguments).
- @muellerzr Or we can add an if/else check to the trainer so it only deletes the variable when it exists (see the sketch after this list).
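A minimal sketch of the second option (assumed names, not an actual patch): initialize losses before the loop so the cleanup never touches an unbound local.

losses = None
for loss in [None, None]:  # stand-in for the per-batch eval losses
    if loss is not None:
        losses = loss
if losses is not None:
    del losses  # safe: only deleted when it was actually assigned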
@SangbumChoi
This is the model that I used:

from transformers import PaliGemmaForConditionalGeneration

# Load the base PaliGemma checkpoint used for fine-tuning
model = PaliGemmaForConditionalGeneration.from_pretrained(
    object_detection_config.MODEL_ID,
    torch_dtype=object_detection_config.MODEL_DTYPE,
    device_map=device,
    revision=object_detection_config.MODEL_REVISION,
)
I tried to backtrack the reason why loss is None.
I found that during evaluation, self.label_names is [] and loss_without_labels is False.
I am not sure what value to give, or how to set label_names on the trainer.
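If it helps, label_names can be passed through TrainingArguments; a minimal sketch, assuming the collator emits the labels under the key "labels" (the output_dir is a hypothetical path):

from transformers import TrainingArguments

# label_names tells the Trainer which input keys hold the labels, so
# prediction_step can forward them and the model can return a loss.
args = TrainingArguments(
    output_dir="paligemma-finetune",  # hypothetical output path
    label_names=["labels"],           # assumes the collator emits "labels"
)
# then construct the Trainer with these args as usual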
Changing data_collator = partial(self.data_collator, train=False) to data_collator = partial(self.data_collator, train=True) in get_eval_dataloader gives me this error:
Traceback (most recent call last):
File "xxx", line 361, in <module>
trainer.train()
File "xxxlib/python3.11/site-packages/transformers/trainer.py", line 1885, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer.py", line 2291, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
File "xxxlib/python3.11/site-packages/transformers/trainer.py", line 2721, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer.py", line 3572, in evaluate
output = eval_loop(
^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer.py", line 3780, in evaluation_loop
all_preds.add(logits)
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 326, in add
self.tensors = nested_concat(self.tensors, tensors, padding_index=self.padding_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in nested_concat
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in <genexpr>
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in nested_concat
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in <genexpr>
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in nested_concat
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 138, in <genexpr>
return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 140, in nested_concat
return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxlib/python3.11/site-packages/transformers/trainer_pt_utils.py", line 99, in torch_pad_and_concatenate
return torch.cat((tensor1, tensor2), dim=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 454 but got size 482 for tensor number 1 in the list.
0%| | 10/24240 [00:27<18:35:03, 2.76s/it]
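The RuntimeError means the logits gathered from different eval batches have different sequence lengths (454 vs 482), so the Trainer's nested_concat cannot stack them. One possible workaround, sketched below with assumed dataset keys ("prompt", "image") and the processor matching the checkpoint, is to pad every eval batch to one fixed length so batches agree in every dimension except the batch one:

from functools import partial
from transformers import AutoProcessor

# Assumes the same checkpoint as the model above
processor = AutoProcessor.from_pretrained(object_detection_config.MODEL_ID)

MAX_LENGTH = 512  # assumed upper bound on the tokenized sequence length

def eval_collate_fn(examples, processor):
    # "prompt" and "image" are assumed dataset keys
    texts = [example["prompt"] for example in examples]
    images = [example["image"] for example in examples]
    return processor(
        text=texts,
        images=images,
        padding="max_length",  # pad to a fixed length, not per batch
        max_length=MAX_LENGTH,
        truncation=True,
        return_tensors="pt",
    )

data_collator = partial(eval_collate_fn, processor=processor)

Alternatively, the Trainer's preprocess_logits_for_metrics argument can trim or reduce the logits before they are accumulated.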