Comments (4)
Got the error

NotImplementedError: You are calling save_pretrained on a 4-bit converted model. This is currently not supported

after training a LLaMA-7B-based reward model with QLoRA via the default config settings here (a possible workaround is sketched after the config):
parser.add_argument("--pretrain", type=str, default="huggyllama/llama-7b")
parser.add_argument("--dataset", type=str, default="nz/anthropic-hh-golden-rlhf")
parser.add_argument("--dataset_probs", type=str, default="1.0", help="sampling probs for datasets")
parser.add_argument("--save_path", type=str, default="./ckpt/llama-7b-rm-harmless")
parser.add_argument("--save_steps", type=int, default=-1)
parser.add_argument("--logging_steps", type=int, default=1)
parser.add_argument("--eval_steps", type=int, default=-1)
parser.add_argument("--ckpt_path", type=str, default="./ckpt/checkpoints_rm")
parser.add_argument("--max_ckpt_num", type=int, default=3)
parser.add_argument("--max_ckpt_mem", type=int, default=1000) # 1000GB
parser.add_argument("--max_epochs", type=int, default=1)
parser.add_argument("--micro_train_batch_size", type=int, default=4)
parser.add_argument("--train_batch_size", type=int, default=128)
parser.add_argument("--max_samples", type=int, default=1000000)
parser.add_argument("--load_checkpoint", action="store_true", default=False)
parser.add_argument("--max_norm", type=float, default=1.0)
parser.add_argument("--max_len", type=int, default=2048)
parser.add_argument("--l2", type=float, default=0.0)
parser.add_argument("--loss", type=str, default="sigmoid")
parser.add_argument("--gradient_checkpointing", action="store_true", default=True)
parser.add_argument("--seed", type=int, default=42)
parser.add_argument("--local_rank", type=int, default=0, help="local_rank for deepspeed")
parser.add_argument("--zero_stage", type=int, default=3)
parser.add_argument("--bf16", action="store_true", default=True)
parser.add_argument("--learning_rate", type=float, default=1e-5)
parser.add_argument("--zpg", type=int, default=1, help="ZeRO++ max partition size")
parser.add_argument("--adam_offload", action="store_true", default=True)
parser.add_argument("--flash_attn", action="store_true", default=True)
parser.add_argument("--compute_fp32_loss", action="store_true", default=False)
parser.add_argument("--margin_loss", action="store_true", default=False)
parser.add_argument("--aux_loss_coef", type=float, default=0)
parser.add_argument("--grad_accum_dtype", type=str, default=None)
parser.add_argument("--disable_trace_cache", action="store_true", default=False)
parser.add_argument("--load_in_4bit", action="store_true", default=True)
parser.add_argument("--lora_rank", type=int, default=0)
parser.add_argument("--lora_alpha", type=int, default=16)
parser.add_argument("--target_modules", type=list, default=None)
parser.add_argument("--bos_token", type=str, default=None)
parser.add_argument("--eos_token", type=str, default=None)
parser.add_argument("--pad_token", type=str, default=None)
parser.add_argument("--unk_token", type=str, default=None)
Looks like the issue is due to transformers 4.36.2 being installed for some reason; not sure what pulled that version in.
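A quick environment sanity check for anyone comparing setups (illustrative snippet; the right versions to pin are whatever OpenRLHF's requirements file specifies, which I'm not quoting here):

# Print the library versions most relevant to 4-bit loading/saving.
import transformers, peft, bitsandbytes
print(transformers.__version__)   # 4.36.2 in the failing environment
print(peft.__version__)
print(bitsandbytes.__version__)
# If transformers is off, reinstall the pinned version, e.g.:
#   pip install "transformers==<version from OpenRLHF requirements>"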
Related Issues (20)
- DPO Finetuning constantly gives preference loss as 0.6931
- Difference between `DeepSpeedEngine.save_checkpoint()` and `DeepSpeedStrategy.save_model()`
- Model inference after DPO produces only garbled symbols
- Support training from breakpoint
- llama3 70B DPO example script
- where is gradient_accumulation
- Support RLOO
- Does Train_PPO_llama_ray currently split the Actor Model across different GPUs?
- ConnectionRefusedError: [Errno 111] Connection refused
- Question about packing
- "right" padding hardcoded
- Error while saving the model under 4bit lora
- multinode ppo training extremely slow
- Request Entity Too Large when using ray
- GPU memory OOM during DPO training
- Online DPO support
- Feature: add DPO-P
- Zero stage 3 error
- Performance of Iterative DPO?
- Why multiplying rstd instead of dividing by rstd?