
FlagScale's Issues

[QUESTION] Support other hardware?

So far, FlagScale supports training on Chinese domestic hardware, including Iluvatar CoreX and Baidu KUNLUN chips.
Will any other hardware be supported in the future?

The llama2 70B model shows different loss-decline trends under different PP (pipeline parallelism) settings

Attached logs: 7_gpu_pp1.log, 31_gpu_pp4.log
[attached image: loss curves for PP=1 vs. PP=4]

When running llama2 70B (with the number of layers reduced), PP=1 and PP=4 show different loss-decline trends; see the uploaded logs and the loss-curve plot above. The launch script follows, with a sketch for overlaying the two loss curves after it:

export CUDA_DEVICE_MAX_CONNECTIONS=1

GPUS_PER_NODE=8
# Change for multinode config
MASTER_ADDR=192.167.5.2
MASTER_PORT=29501
NUM_NODES=4
NODE_RANK=0
WORLD_SIZE=$(($GPUS_PER_NODE*$NUM_NODES))

CHECKPOINT_PATH='/data/zhangling21/ckpts/'
TENSORBOARD_LOGS_PATH='/data/zhangling21/tensorboard_logs/'
TOKENIZER_PATH='/data/zhangling21/llama_00_text_document/tokenizer/tokenizer.model'
DATA_PATH='/data/zhangling21/llama_00_text_document/llama_00_text_document'

DISTRIBUTED_ARGS=(
    --nproc_per_node $GPUS_PER_NODE
    --nnodes $NUM_NODES
    --node_rank $NODE_RANK
    --master_addr $MASTER_ADDR
    --master_port $MASTER_PORT
)
# --tokenizer-type LLaMASentencePieceTokenizer
# --rmsnorm-epsilon 1e-5

LLAMA_MODEL_ARGS=(
    --num-layers 8
    --hidden-size 8192
    --ffn-hidden-size 28672
    --num-attention-heads 64
    --seq-length 4096
    --max-position-embeddings 4096
    --group-query-attention
    --num-query-groups 8
    --tokenizer-type Llama2Tokenizer
    --tokenizer-model $TOKENIZER_PATH
    --swiglu
    --normalization RMSNorm
    --use-rotary-position-embeddings
    --no-position-embedding
    --disable-bias-linear
)
# --optimizer adam
# --adam-eps 1e-05
# --no-contiguous-buffers-in-local-ddp
# --recompute-method uniform
# --no-async-tensor-model-parallel-allreduce
# --embedding-dropout 0
# --multi-query-attention
# --multi-query-group-num 8
# --ffn-dim-multiplier 1.3
# --recompute-granularity full
# --distribute-saved-activations
# --recompute-num-layers 1
# --memory-saving

# --fp16

TRAINING_ARGS=(
    --micro-batch-size 1
    --global-batch-size 44
    --train-samples 24414
    --weight-decay 1e-2
    --optimizer adam
    --clip-grad 1.0
    --lr 0.00015
    --lr-decay-style cosine
    --min-lr 1.0e-5
    --lr-warmup-fraction .01
    --adam-beta1 0.9
    --adam-beta2 0.95
    --attention-dropout 0.0
    --hidden-dropout 0.0
    --untie-embeddings-and-output-weights
    --multiple-of 4096
    --no-gradient-accumulation-fusion
    --recompute-granularity 'full'
    --recompute-num-layers 1
    --recompute-method 'uniform'
    --no-async-tensor-model-parallel-allreduce
)

MODEL_PARALLEL_ARGS=(
    --tensor-model-parallel-size 8
    --pipeline-model-parallel-size 4
)

DATA_ARGS=(
    --data-path $DATA_PATH
    --split 1
)

EVAL_AND_LOGGING_ARGS=(
    --log-interval 1
    --init-method-std 0.02
    --seed 1234
    --eval-iters 0
    --use-cpu-initialization
)
    #--load "/data/zhangling21/llama_00_text_document/ckpt0227_8L"
    #--no-load-rng
    #--save "/data/zhangling21/llama_00_text_document/ckpt0227_8L"
    #--save-interval 1

cmd="torchrun ${DISTRIBUTED_ARGS[@]} pretrain_llama.py \
        ${LLAMA_MODEL_ARGS[@]} \
        ${TRAINING_ARGS[@]} \
        ${MODEL_PARALLEL_ARGS[@]} \
        ${DATA_ARGS[@]} \
        ${EVAL_AND_LOGGING_ARGS[@]}"
echo $cmd
eval $cmd
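
To put the two runs on one plot, the loss values can be scraped from the logs. Below is a rough sketch that assumes Megatron-style log lines of the form "iteration N/ M | ... | lm loss: X | ..."; the exact log format and the output file name are assumptions based on the attached logs, not something FlagScale guarantees.

import re
import matplotlib.pyplot as plt

# Matches e.g. " iteration      100/    1000 | ... | lm loss: 6.234560E+00 | ..."
LOSS_RE = re.compile(r"iteration\s+(\d+)\s*/.*?lm loss:\s*([0-9.]+(?:E[+-]\d+)?)",
                     re.IGNORECASE)

def read_loss(path):
    """Return (iterations, losses) parsed from one training log."""
    steps, losses = [], []
    with open(path) as f:
        for line in f:
            m = LOSS_RE.search(line)
            if m:
                steps.append(int(m.group(1)))
                losses.append(float(m.group(2)))
    return steps, losses

for path, label in [("7_gpu_pp1.log", "PP=1"), ("31_gpu_pp4.log", "PP=4")]:
    steps, losses = read_loss(path)
    plt.plot(steps, losses, label=label)
plt.xlabel("iteration")
plt.ylabel("lm loss")
plt.legend()
plt.savefig("loss_pp1_vs_pp4.png")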

Requesting the following new model-weight-splitting features

1. Support splitting weights on accelerators other than NVIDIA cards. The default tools/checkpoint_util.py involves logic compiled for NVIDIA, which other cards do not support.
2. Support multi-node distributed weight splitting. Some accelerators are not set up with shared storage, and once the model is large, copying weights between machines becomes very inconvenient; a multi-node splitting capability is needed.
3. Reduce peak host memory. Host memory differs across machines: an NVIDIA machine with 1 TB of host memory can split on a single node, but a host with less memory, e.g. 512 GB, hits OOM while splitting weights. Please add a peak-memory-reduction option, e.g. load one layer, then save that layer (see the sketch after this list).
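
A minimal sketch of the "load one layer, save one layer" idea from point 3, in plain PyTorch rather than FlagScale's actual tools/checkpoint_util.py. The layers.<i>.<param> key pattern, the split_by_layer helper, and the dim-0/dim-1 split rule are all illustrative assumptions, not FlagScale's real conversion logic; the memory saving comes from torch.load(..., mmap=True) (PyTorch >= 2.1) plus freeing each layer's shard before moving to the next.

import gc
import re
import torch

def split_by_layer(ckpt_path, out_dir, tp_size, row_parallel=("dense", "fc2")):
    """Split a full checkpoint into tp_size shards, one layer at a time."""
    # mmap=True leaves tensor data on disk until it is touched, so peak
    # host memory stays near one layer's worth instead of the full model.
    state = torch.load(ckpt_path, map_location="cpu", mmap=True)

    # Group parameter names by layer index, e.g. "layers.3.attn.weight" -> 3;
    # names without a layer index (embeddings, final norm) go to bucket -1.
    layers = {}
    for name in state:
        m = re.match(r"layers\.(\d+)\.", name)
        layers.setdefault(int(m.group(1)) if m else -1, []).append(name)

    for idx, names in sorted(layers.items()):
        for rank in range(tp_size):
            shard = {}
            for name in names:
                t = state[name]
                if t.dim() < 2:
                    # Biases and norm weights are replicated on every rank.
                    shard[name] = t.clone()
                else:
                    # Illustrative rule: row-parallel weights split on dim 1,
                    # everything else column-parallel on dim 0.
                    dim = 1 if any(k in name for k in row_parallel) else 0
                    shard[name] = t.chunk(tp_size, dim=dim)[rank].clone()
            torch.save(shard, f"{out_dir}/layer{idx}_tp{rank}.pt")
            del shard
            gc.collect()  # release this layer's host memory before the next

Extending the same loop to point 2 (no shared storage) would mean each node processing only its own range of layer indices and saving shards locally, so no full checkpoint ever needs to be copied between machines.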
