flagopen / flagscale
FlagScale is a large model toolkit based on open-sourced projects.
License: Other
How can the HuggingFace versions of the Aquila2-7B and Aquila2-34B models be converted to Megatron format? The convert_hf_to_megatron.py file under scripts requires a ref_model_path, but the project does not provide one.
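For orientation, here is a minimal, hypothetical sketch of the kind of key remapping such a conversion performs, assuming HuggingFace-style parameter names and a single-file Megatron checkpoint; the actual logic and arguments of scripts/convert_hf_to_megatron.py (including how ref_model_path is used) may differ:

import os
import torch
from transformers import AutoModelForCausalLM

# Hypothetical HF -> Megatron remapping; the key names below are illustrative
# assumptions, not FlagScale's actual schema.
hf_state = AutoModelForCausalLM.from_pretrained(
    "BAAI/Aquila2-7B", torch_dtype=torch.float16
).state_dict()

megatron_state = {}
for name, tensor in hf_state.items():
    name = name.replace("model.embed_tokens", "embedding.word_embeddings")
    name = name.replace("model.layers", "encoder.layers")
    # Real converters also merge/split QKV and shard tensors for TP/PP.
    megatron_state[name] = tensor

os.makedirs("aquila2-7b-megatron", exist_ok=True)
torch.save({"model": megatron_state}, "aquila2-7b-megatron/model_optim_rng.pt")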
For now, FlagScale supports training on Chinese domestic hardware, including Iluvatar CoreX and Baidu KUNLUN chips.
Will any other hardware be supported in the future?
When training LLaMA2 70B (with a reduced number of layers), the loss shows different downward trends for PP=1 and PP=4. The logs and curve plots are uploaded above; the script is as follows:
export CUDA_DEVICE_MAX_CONNECTIONS=1
GPUS_PER_NODE=8
# Change for multinode config
MASTER_ADDR=192.167.5.2
MASTER_PORT=29501
NUM_NODES=4
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE*$NUM_NODES))
CHECKPOINT_PATH='/data/zhangling21/ckpts/'
TENSORBOARD_LOGS_PATH='/data/zhangling21/tensorboard_logs/'
TOKENIZER_PATH='/data/zhangling21/llama_00_text_document/tokenizer/tokenizer.model'
DATA_PATH='/data/zhangling21/llama_00_text_document/llama_00_text_document'
DISTRIBUTED_ARGS=(
--nproc_per_node $GPUS_PER_NODE
--nnodes $NUM_NODES
--node_rank $NODE_RANK
--master_addr $MASTER_ADDR
--master_port $MASTER_PORT
)
# --tokenizer-type LLaMASentencePieceTokenizer \
# --rmsnorm-epsilon 1e-5
LLAMA_MODEL_ARGS=(
--num-layers 8
--hidden-size 8192
--ffn-hidden-size 28672
--num-attention-heads 64
--seq-length 4096
--max-position-embeddings 4096
--group-query-attention
--num-query-groups 8
--tokenizer-type Llama2Tokenizer
--tokenizer-model $TOKENIZER_PATH
--swiglu
--normalization RMSNorm
--use-rotary-position-embeddings
--no-position-embedding
--disable-bias-linear
)
# --optimizer adam
# --adam-eps 1e-05
# --no-contiguous-buffers-in-local-ddp
# --recompute-method uniform
# --no-async-tensor-model-parallel-allreduce
# --embedding-dropout 0
# --multi-query-attention
# --multi-query-group-num 8
# --ffn-dim-multiplier 1.3
# --recompute-granularity full
# --distribute-saved-activations
# --recompute-num-layers 1
# --memory-saving
# --fp16
TRAINING_ARGS=(
--micro-batch-size 1
--global-batch-size 44
--train-samples 24414
--weight-decay 1e-2
--optimizer adam
--clip-grad 1.0
--lr 0.00015
--lr-decay-style cosine
--min-lr 1.0e-5
--lr-warmup-fraction .01
--adam-beta1 0.9
--adam-beta2 0.95
--attention-dropout 0.0
--hidden-dropout 0.0
--untie-embeddings-and-output-weights
--multiple-of 4096
--no-gradient-accumulation-fusion
--recompute-granularity 'full'
--recompute-num-layers 1
--recompute-method 'uniform'
--no-async-tensor-model-parallel-allreduce
)
MODEL_PARALLEL_ARGS=(
--tensor-model-parallel-size 8
--pipeline-model-parallel-size 4
)
DATA_ARGS=(
--data-path $DATA_PATH
--split 1
)
EVAL_AND_LOGGING_ARGS=(
--log-interval 1
--init-method-std 0.02
--seed 1234
--eval-iters 0
--use-cpu-initialization
)
#--load "/data/zhangling21/llama_00_text_document/ckpt0227_8L"
#--no-load-rng
#--save "/data/zhangling21/llama_00_text_document/ckpt0227_8L"
#--save-interval 1
cmd="torchrun ${DISTRIBUTED_ARGS[@]} pretrain_llama.py \
${LLAMA_MODEL_ARGS[@]} \
${TRAINING_ARGS[@]} \
${MODEL_PARALLEL_ARGS[@]} \
${DATA_ARGS[@]} \
${EVAL_AND_LOGGING_ARGS[@]}"
echo $cmd
eval $cmd
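One thing worth noting when comparing the two runs: changing PP also changes the data-parallel layout, since DP = world_size / (TP * PP). A small sketch of this bookkeeping using the values from the script above (standard Megatron-LM accounting, not anything FlagScale-specific):

world_size = 8 * 4          # GPUS_PER_NODE * NUM_NODES
tp, mbs, gbs = 8, 1, 44     # from MODEL_PARALLEL_ARGS / TRAINING_ARGS

for pp in (1, 4):
    dp = world_size // (tp * pp)      # data-parallel size
    accum = gbs // (mbs * dp)         # gradient-accumulation steps per iteration
    print(f"PP={pp}: DP={dp}, grad-accum steps={accum}")
# PP=1 -> DP=4 with 11 accumulation steps; PP=4 -> DP=1 with 44 steps,
# so the same global batch is partitioned and averaged differently.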
Do model parallelism and pipeline parallelism support efficient fine-tuning methods such as LoRA?
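For context, a minimal sketch of a generic LoRA adapter (Hu et al., 2021) wrapped around a frozen linear layer; this is not FlagScale's API, and under tensor model parallelism the base weight is sharded, so the low-rank factors would have to be sharded consistently:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = base(x) + scaling * x A^T B^T; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling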
Hi, does FlagScale currently support training on heterogeneous devices (NVIDIA + Ascend)?
If not, will support be considered in the future?
1. We hope checkpoint weights can be sliced on other accelerator cards, not only NVIDIA cards: the default tools/checkpoint_util.py involves logic compiled for NVIDIA, which other cards do not support.
2. Support slicing weights across multiple machines. Some accelerator cards have no shared storage configured, so once the model gets large, copying weights around is very inconvenient; a multi-node weight-slicing feature would help.
3. Reduce peak host memory. Host memory differs across machines: an NVIDIA machine with 1 TB of host memory can slice on a single node, but hosts with less memory, e.g. 512 GB, hit OOM when slicing weights. We therefore hope for a peak-memory-reduction feature, such as loading one layer and immediately saving that layer (see the sketch after this list).
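A rough sketch of that "load one layer, save one layer" idea, assuming a checkpoint laid out as one file per layer; the paths, file names, and the resplit_fn hook are all made up for illustration, and the real tools/checkpoint_util.py works differently:

import torch

def reslice_layer_by_layer(src_dir, dst_dir, num_layers, resplit_fn):
    """Re-slice a checkpoint one layer at a time to cap peak host memory."""
    for i in range(num_layers):
        layer = torch.load(f"{src_dir}/layer_{i:02d}.pt", map_location="cpu")
        shards = resplit_fn(layer)            # re-split weights for the new TP size
        for rank, shard in enumerate(shards):
            torch.save(shard, f"{dst_dir}/layer_{i:02d}_rank{rank}.pt")
        del layer, shards                     # only ~one layer resident at a time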