Comments (2)
Thanks for fixing!
from transformers.
transformers version: 4.42.3
I have another error:
  File "/home/ss/train_frame/LLaMA-Factory/src/train.py", line 30, in <module>
    main()
  File "/home/ss/train_frame/LLaMA-Factory/src/train.py", line 21, in main
    run_exp()
  File "/home/ss/train_frame/LLaMA-Factory/src/llamafactory/train/tuner.py", line 93, in run_exp
    run_exe(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/ss/train_frame/LLaMA-Factory/src/llamafactory/train/tuner.py", line 47, in run_exe
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/ss/train_frame/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 107, in run_sft
    predict_results = trainer.predict(dataset, metric_key_prefix="predict", **gen_kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 244, in predict
    return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/trainer.py", line 3717, in predict
    output = eval_loop(
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/trainer.py", line 3826, in evaluation_loop
    losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/home/ss/train_frame/LLaMA-Factory/src/llamafactory/train/sft/trainer.py", line 99, in prediction_step
    loss, generated_tokens, _ = super().prediction_step(  # ignore the returned labels (may be truncated)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/trainer_seq2seq.py", line 310, in prediction_step
    generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/generation/utils.py", line 1914, in generate
    result = self._sample(
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/transformers/generation/utils.py", line 2651, in _sample
    outputs = self(
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 822, in forward
    return model_forward(*args, **kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 810, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 789, in convert_to_fp32
    return recursively_apply(_convert_to_fp32, tensor, test_type=_is_fp16_bf16_tensor)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 118, in recursively_apply
    {
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 119, in <dictcomp>
    k: recursively_apply(
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 126, in recursively_apply
    return func(data, *args, **kwargs)
  File "/home/ss/anaconda3-new/envs/train/lib/python3.10/site-packages/accelerate/utils/operations.py", line 781, in _convert_to_fp32
    return tensor.float()
AttributeError: 'HybridCache' object has no attribute 'float'
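For context on why `convert_to_fp32` blows up here: judging from the traceback, accelerate recursively walks the model outputs and calls `.float()` on anything matching its `_is_fp16_bf16_tensor` test, which appears to be duck-typed on the `dtype` attribute. A cache object such as `HybridCache`, returned in the outputs' `past_key_values`, carries a half-precision `dtype` but is not a tensor, so `.float()` fails. A minimal sketch of that failure mode (the `FakeHybridCache` class and the two helper functions below are simplified stand-ins, not the real accelerate/transformers code):

```python
import torch


def _is_fp16_bf16_tensor(obj):
    # Duck-typed check resembling accelerate's test: anything carrying a
    # half-precision dtype attribute is treated as convertible.
    return hasattr(obj, "dtype") and obj.dtype in (torch.float16, torch.bfloat16)


def convert_to_fp32(obj):
    # Simplified stand-in for accelerate.utils.operations.convert_to_fp32:
    # recurse into dicts, upcast anything matching the test above.
    if isinstance(obj, dict):
        return {k: convert_to_fp32(v) for k, v in obj.items()}
    if _is_fp16_bf16_tensor(obj):
        return obj.float()  # AttributeError if obj is not an actual tensor
    return obj


class FakeHybridCache:
    # Hypothetical stand-in for transformers' HybridCache: it exposes a
    # dtype attribute but implements no tensor methods such as .float().
    dtype = torch.float16


outputs = {
    "logits": torch.zeros(2, 4, dtype=torch.float16),   # converted fine
    "past_key_values": FakeHybridCache(),               # trips the check
}

try:
    convert_to_fp32(outputs)
except AttributeError as exc:
    print(exc)  # 'FakeHybridCache' object has no attribute 'float'
```

Given that, the practical options are to run prediction without fp16/bf16 mixed precision so the fp32 conversion wrapper is never applied, or to upgrade accelerate/transformers, where this cache-vs-tensor mismatch was reportedly addressed in later releases.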