google / paxml

Pax is a Jax-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry-leading model FLOP utilization rates.

License: Apache License 2.0

Starlark 4.16% Python 87.03% Dockerfile 0.41% Shell 1.17% Jupyter Notebook 7.22%
c4 jax large-language-models llm model-flops parallelism gpt

paxml's Issues

Use bfloat16 for eval

I'm running paxml on an Intel Xeon CPU server using the paxml/main.py program. I'm trying to build a model whose weights are created in bfloat16 and which uses that datatype during eval. I modified the LmCloudSpmd2B configuration with the following lines:

MODEL_DTYPE = jnp.bfloat16
ICI_MESH_SHAPE = [1, 1, 1]
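
For context, here is a minimal sketch (not taken from the issue) of how overrides like these could be packaged as a standalone experiment class; the lm_cloud module path, the FPROP_DTYPE attribute, and the registration step are assumptions on my part:

import jax.numpy as jnp
from paxml import experiment_registry
from paxml.tasks.lm.params import lm_cloud

@experiment_registry.register
class LmCloudSpmd2BBf16(lm_cloud.LmCloudSpmd2B):
  # MODEL_DTYPE is intended to control the weight dtype, FPROP_DTYPE the
  # dtype used for forward-pass computation.
  MODEL_DTYPE = jnp.bfloat16
  FPROP_DTYPE = jnp.bfloat16
  ICI_MESH_SHAPE = [1, 1, 1]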

The training status output includes the following lines:

model.dtype : type/jax.numpy/float32
model.fprop_dtype : dtype[bfloat16]

All of the other operator datatypes are float32. When I run that model with the --eval switch all of the computation is in float32. How can I direct paxml to use bfloat16?

Tom

ERROR: error loading package 'paxml'

I'm trying to run the PAX code in this repo.

I installed the prerequisites as mentioned in the repo:

python3 -m pip install -U pip
python3 -m pip install paxml praxis
cd paxml
bazel run -c opt --define=pax_task=lm \
    main -- \
    --exp=lm.decoder.ptb.PTBCharTransformerSmallSgd \
    --job_log_dir=/tmp/jax_log_dir/exp01 --alsologtostderr

To start training, I got the command line from here: https://github.com/google/paxml/blob/main/paxml/main.py#L19-L22

bazel run -c opt \
  third_party/py/paxml/tasks/lm/params:main -- \
  --exp=bert.BertAdamL4H128 \
  --job_log_dir=/tmp/jax_log_dir/exp01 --alsologtostderr

I'm encountering the following issue:

ERROR: Skipping 'paxml': error loading package 'paxml': Every .bzl file must have a corresponding package, but '//praxis:build-visibility.bzl' does not have one. Please create a BUILD file in the same or any parent directory. Note that this
BUILD file does not need to do anything except exist.
WARNING: Target pattern parsing failed.
ERROR: error loading package 'paxml': Every .bzl file must have a corresponding package, but '//praxis:build-visibility.bzl' does not have one. Please create a BUILD file in the same or any parent directory. Note that this BUILD file does not need to do anything except exist.
INFO: Elapsed time: 0.750s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
FAILED: Build did NOT complete successfully (0 packages loaded)
    currently loading: paxml
    Fetching @com_google_protobuf; Restarting.

Though I see the BUILD is present here: https://github.com/google/paxml/blob/main/paxml/BUILD

Is there anything I'm doing wrong? Any suggestions to get it running?

Error running Common Crawl example

Sorry to interrupt! When running

python3 .local/lib/python3.8/site-packages/paxml/main.py \
--exp=tasks.lm.params.c4.C4Spmd1BAdam4Replicas \
--job_log_dir=gs://<your-bucket> 

from the examples, I encountered the following error, which seems to suggest I cannot load from the bucket referenced in c4.py:

Traceback (most recent call last):
  File ".local/lib/python3.8/site-packages/paxml/main.py", line 407, in <module>
    app.run(main, flags_parser=absl_flags.flags_parser)
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File ".local/lib/python3.8/site-packages/paxml/main.py", line 382, in main
    run(experiment_config=experiment_config,
  File ".local/lib/python3.8/site-packages/paxml/main.py", line 336, in run
    search_space = tuning_lib.get_search_space(experiment_config)
  File "/home/robertli/.local/lib/python3.8/site-packages/paxml/tuning_lib.py", line 81, in get_search_space
    search_space = pg.hyper.trace(inspect_search_space, require_hyper_name=True)
  File "/home/robertli/.local/lib/python3.8/site-packages/pyglove/core/hyper/dynamic_evaluation.py", line 586, in trace
    fun()
  File "/home/robertli/.local/lib/python3.8/site-packages/paxml/tuning_lib.py", line 77, in inspect_search_space
    _ = instantiate(d)
  File "/home/robertli/.local/lib/python3.8/site-packages/praxis/base_hyperparams.py", line 1103, in instantiate
    return config.Instantiate(**kwargs)
  File "/home/robertli/.local/lib/python3.8/site-packages/praxis/base_hyperparams.py", line 601, in Instantiate
    return self.cls(self, **kwargs)
  File "/home/robertli/.local/lib/python3.8/site-packages/paxml/seqio_input.py", line 443, in __init__
    self._dataset = self._get_dataset()
  File "/home/robertli/.local/lib/python3.8/site-packages/paxml/seqio_input.py", line 551, in _get_dataset
    ds = self._get_backing_ds(
  File "/home/robertli/.local/lib/python3.8/site-packages/paxml/seqio_input.py", line 686, in _get_backing_ds
    ds = self.mixture_or_task.get_dataset(
  File "/home/robertli/.local/lib/python3.8/site-packages/seqio/dataset_providers.py", line 1205, in get_dataset
    len(self.source.list_shards(split=split)) >= shard_info.num_shards)
  File "/home/robertli/.local/lib/python3.8/site-packages/seqio/dataset_providers.py", line 455, in list_shards
    return [_get_filename(info) for info in self.tfds_dataset.files(split)]
  File "/home/robertli/.local/lib/python3.8/site-packages/seqio/utils.py", line 152, in files
    split_info = self.builder.info.splits[split]
  File "/home/robertli/.local/lib/python3.8/site-packages/seqio/utils.py", line 129, in builder
    LazyTfdsLoader._MEMOIZED_BUILDERS[builder_key] = tfds.builder(
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/logging/__init__.py", line 169, in __call__
    return function(*args, **kwargs)
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/load.py", line 202, in builder
    return read_only_builder.builder_from_files(str(name), **builder_kwargs)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/read_only_builder.py", line 259, in builder_from_files
    builder_dir = _find_builder_dir(name, **builder_kwargs)
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/read_only_builder.py", line 327, in _find_builder_dir
    builder_dir = _find_builder_dir_single_dir(
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/read_only_builder.py", line 417, in _find_builder_dir_single_dir
    found_version_str = _get_version_str(
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/read_only_builder.py", line 484, in _get_version_str
    all_versions = version_lib.list_all_versions(os.fspath(builder_dir))
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow_datasets/core/utils/version.py", line 193, in list_all_versions
    if not root_dir.exists():
  File "/home/robertli/.local/lib/python3.8/site-packages/etils/epath/gpath.py", line 130, in exists
    return self._backend.exists(self._path_str)
  File "/home/robertli/.local/lib/python3.8/site-packages/etils/epath/backend.py", line 204, in exists
    return self.gfile.exists(path)
  File "/home/robertli/.local/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 288, in file_exists_v2
    _pywrap_file_io.FileExists(compat.path_to_bytes(path))
tensorflow.python.framework.errors_impl.PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '{
  "error": {
    "code": 403,
    "message": "[email protected] does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).",
    "errors": [
      {
        "message": "[email protected] does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist)."'
	 when reading metadata of gs://mlperf-llm-public2/c4/en

I wonder whether I haven't configured something correctly, since the bucket appears to be a public one.

I tried using the TFDS default bucket (gs://tfds-data/datasets) instead of gs://mlperf-llm-public2 and that problem does not arise, but it forces me to choose among the c4 versions available there (which do not include 3.0.4). Even then, I cannot proceed because I hit a different error.

Thanks in advance for your attention and help!
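
One possible workaround sketch (an assumption on my part, not something from the issue): prepare c4/en with tensorflow_datasets into a bucket you control, then point paxml at it via --tfds_data_dir.

import tensorflow_datasets as tfds

# Prepare c4/en into a data_dir you control. Preparing c4 is a large job and
# may require an Apache Beam runner; the 3.0.1 version string is just an
# example of a publicly available version, not necessarily the 3.0.4 used by
# the MLPerf bucket.
builder = tfds.builder('c4/en:3.0.1', data_dir='gs://<your-bucket>/tensorflow_datasets')
builder.download_and_prepare()

The training run would then pass --tfds_data_dir=gs://<your-bucket>/tensorflow_datasets so that seqio/TFDS resolves the dataset from that location instead of gs://mlperf-llm-public2.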

Int8 checkpoint

Hi, is there a way to save a quantized int8 checkpoint? Looks like right now the checkpoint is in fp32.

ARM64 Build

I've been trying to install PAXML on Ubuntu 22.04 ARM64, but I seem to be stuck on getting lingvo (a mandatory dependency?) to run there; I've been struggling to find a recipe for this. Has this been done? Is there any documentation about it?

Jax + tpu and AQT int8 train model loss is abnormal

I used the aqt_einsum function in the code to quantize only the qk score, and then trained the model. However, I found that the loss dropped very slowly after training reached a certain number of steps (around 200), which is quite different from the loss curve when training in bfloat16. Am I missing something? For example, does the backward pass need some additional processing?
PS: I'm training the model on jax==0.4.23 and a TPU v5p-8.

In other words, is there a training example for AQT int8 in pax?

[Question] Very low MFU (30%-35%) when training bf16 Llama2 and GPT models on a single SXM4 A100 machine

I don't know what is happening; are the compute precision and parameter precision not set correctly? DeepSpeed or Megatron can easily reach 55% MFU on the same machine.
Here is my bash script:

#! /bin/bash
set -u
set -o pipefail

TFDS_DATA_DIR=$1
VOCAB_PATH=$2
PREC=${3:-"bfloat16"}        # Precision (float32, bfloat16)
NUM_GPUS=${4:-8}      # Number of GPUs (1, 2, 4, 8)
PERCORE_BATCH_SIZE=${5:-4}
LOG_DIR=${6:-"test_logdir"}

export VOCAB_PATH=$VOCAB_PATH

BASE_XLA_FLAGS=${BASE_XLA_FLAGS:-"--xla_gpu_enable_latency_hiding_scheduler=true --xla_gpu_enable_triton_gemm=false
                       --xla_gpu_simplify_all_fp_conversions --xla_gpu_enable_async_all_gather=true
                       --xla_gpu_enable_async_reduce_scatter=true  --xla_gpu_enable_highest_priority_async_stream=true
                       --xla_gpu_enable_triton_softmax_fusion=false  --xla_gpu_all_reduce_combine_threshold_bytes=51200
                       --xla_gpu_graph_level=3 --xla_gpu_enable_async_all_reduce=true
                       --xla_gpu_enable_async_collectives=true --xla_gpu_enable_async_collective_permute=true
                       --xla_gpu_enable_async_all_gather=true --xla_gpu_enable_async_reduce_scatter=true
                       --xla_gpu_enable_async_all_to_all=true --xla_gpu_all_reduce_contiguous=true
                       --xla_gpu_all_reduce_blueconnect_num_devices_per_host=true
                       --xla_gpu_enable_cudnn_frontend=true --xla_gpu_enable_cudnn_fmha=true --xla_gpu_fused_attention_use_cudnn_rng=true
                       --xla_gpu_enable_cudnn_layer_norm "}
export XLA_FLAGS="$BASE_XLA_FLAGS ${XLA_FLAGS:-}"

export ENABLE_TE=1

mkdir -p ${LOG_DIR}
python3 -u -m paxml.main \
    --job_log_dir=${LOG_DIR} \
    --fdl_config=paxml.tasks.lm.params.nvidia.Llama2_7B \
    --fdl.FPROP_DTYPE=\"${PREC}\" \
    --fdl.ICI_MESH_SHAPE="[1,$(expr ${NUM_GPUS}), 1]" \
    --fdl.DCN_MESH_SHAPE="[1,1,1]" \
    --fdl.NUM_STAGES=1 \
    --fdl.MICROBATCH_SIZE=$PERCORE_BATCH_SIZE \
    --fdl.PERCORE_BATCH_SIZE=$PERCORE_BATCH_SIZE \
    --tfds_data_dir=$TFDS_DATA_DIR \
    --alsologtostderr \
    2>&1 | tee ${LOG_DIR}/llama2_7B_output.log

EXP_STATUS=$?

if [ $EXP_STATUS != 0 ]; then
  echo "Run failed"
else
  echo "Run succeeded!"
fi

According to https://github.com/NVIDIA/JAX-Toolbox/tree/main/rosetta/rosetta/projects/pax, NVIDIA trains a 5B GPT model with native BF16 on 256 A100 GPUs and reports 465.45 sequences/sec at a global batch size of 8*256=2048 sequences. So that works out to about 4.4 s per step. Am I correct?
The script below computes the corresponding MFU as about 38.96%. That's too low!

# Nvidia Jax GPT5B
card_num=256
gbs=8*card_num
layers=24
num_query=32
num_heads=32
enc_seq_len=2048
hs=4096
ffn_hs=16384
vocab=50304

sequences_per_sec=465.45
seconds_per_step=gbs/sequences_per_sec


# Model total parameters:
params_qkv_state = (1+2*(num_query/num_heads))*hs*hs
params_post_attention_linear = hs*hs
params_feed_forward_network = 2*hs*ffn_hs
params_vocabulary_embedding = hs*vocab


# FPROP FLOPs per step:
qkv_state = gbs*2*(1+2*(num_query/num_heads))*enc_seq_len*hs*hs
attention_matrix_computation = gbs*2*enc_seq_len*enc_seq_len*hs
attention_over_values = gbs*2*enc_seq_len*enc_seq_len*hs
post_attention_linear_projection = gbs*2*enc_seq_len*hs*hs
feed_forward_network = gbs*(2*2*enc_seq_len*ffn_hs*hs)
vocabulary_embedding = gbs*2*enc_seq_len*hs*vocab

# BPROP FLOPs are roughly 2x FPROP, hence the factor of 3 below.

model_params = (params_qkv_state+params_post_attention_linear+params_feed_forward_network)*layers + params_vocabulary_embedding
model_float = 3*((qkv_state+attention_matrix_computation+attention_over_values+post_attention_linear_projection+feed_forward_network)*layers + vocabulary_embedding)
model_flops = model_float/seconds_per_step
cluster_ideal_flops = 312*(10**12) * card_num  # A100 peak bf16 throughput per GPU
MFU = model_flops/cluster_ideal_flops
print("Model parameters {:4f}B MFU={:4f}%".format(model_params/(10**9), MFU*100))

Installing paxml from source failed due to dependency problem

This is using the latest development version (see the full log here):

#8 79.88 + git clone https://github.com/google/paxml.git /opt/paxml
#8 79.89 Cloning into '/opt/paxml'...
#8 80.87 + pushd /opt/paxml
#8 80.87 + git checkout HEAD
#8 80.87 /opt/paxml /
#8 80.89 Your branch is up to date with 'origin/main'.
#8 80.89 + pip install -e '.[gpu]'
...
#8 94.75 ERROR: Cannot install paxml and paxml[gpu]==1.0.0 because these package versions have conflicting dependencies.
#8 94.75 
#8 94.75 The conflict is caused by:
#8 94.75     paxml[gpu] 1.0.0 depends on tensorflow~=2.9.2
#8 94.75     tensorflow-text 2.9.0 depends on tensorflow<2.10 and >=2.9.0; platform_machine != "arm64" or platform_system != "Darwin"
#8 94.75     lingvo 0.12.1 depends on tensorflow==2.9

Pipeline Parallelism: F external/org_tensorflow/tensorflow/compiler/xla/array.h:446] Check failed: n < sizes_size Fatal Python error: Aborted

Hello!

I am trying to implement 126 million parameter GPT-3 with Pipeline Parallelism on PAXML. I run into some errors when NUM_MICROBATCHES > 1.

System:

8X NVIDIA A100-SXM 80 GB

Gin Configs:

from __gin__ import dynamic_registration

import __main__ as train_script
from paxml import gin_utils
from paxml.tasks.lm import model_params_with_gin
from paxml.tasks.lm.params import datasets_gin
from praxis import optimizers
from praxis import schedules
from praxis.layers import activations
from praxis.layers import repeats
from jax import numpy as jnp

MAX_SL=2048
SUMMARY_INTERVAL_STEPS=100
CHECKPOINT_EVERY_N_STEPS=1000
EVAL_INTERVAL_STEPS=100
MAX_STEPS=600000
NUM_STAGES = 4
ICI_MESH_SHAPE=[%NUM_STAGES, 1, 1, 2]
PERCORE_BATCH_SIZE = 2

MODEL = @model_params_with_gin.TransformerLmSpmdPipeline()
model_params_with_gin.TransformerLmSpmdPipeline:
  USE_REPEATED_LAYER = False
  MAX_SEQ_LEN = %MAX_SL
  NUM_LAYERS = 12
  NUM_HEADS = 12
  MODEL_DIMS = 768
  HIDDEN_DIMS = 3072
  DIMS_PER_HEAD = 64
  VOCAB_SIZE = 51200
  TRAINABLE_POSITION_EMB = True
  TRAINABLE_PE_MAX_SEQ_LEN = %MAX_SL
  ACTIVATION_CLS = @activations.GELU.HParams()
  PACKED_INPUT = True
  USE_BIAS = False
  MAX_STEPS=%MAX_STEPS
  INIT_STD = 0.023
  EVAL_INTERVAL_STEPS = 100
  NUM_STAGES = %NUM_STAGES
  NUM_MICROBATCHES = 2
  ICI_MESH_SHAPE = %ICI_MESH_SHAPE
  FPROP_DTYPE = @jnp.bfloat16
  SUMMARY_INTERVAL_STEPS=%SUMMARY_INTERVAL_STEPS
  CHECKPOINT_EVERY_N_STEPS=%CHECKPOINT_EVERY_N_STEPS
  EVAL_INTERVAL_STEPS=%EVAL_INTERVAL_STEPS

OPTIMIZER = @optimizers.Adam.HParams()
optimizers.Adam.HParams:
  beta1 = 0.9
  beta2 = 0.95
  learning_rate = 6e-4
  epsilon_root = 0.0
  epsilon = 1e-8
  weight_decay = 0.1
  clip_threshold = 1.0
  clip_gradient_norm_to_value = 5.0


SCHEDULER = @schedules.LinearRampupCosineDecay.HParams()
schedules.LinearRampupCosineDecay.HParams:
  warmup_steps = 636
  decay_start = 637
  decay_end = 500000
  min_ratio = 0.1
  max = 1.0

DATASET = @datasets_gin.PileUnsupervisedDataset()
datasets_gin.PileUnsupervisedDataset:
  MAX_SEQ_LEN = %MAX_SL
  PERCORE_BATCH_SIZE = %PERCORE_BATCH_SIZE

## experiment == model + dataset
EXPERIMENT = @model_params_with_gin.Experiment()
model_params_with_gin.Experiment:
  model = %MODEL
  dataset = %DATASET
  optimizer = %OPTIMIZER
  scheduler = %SCHEDULER
  
train_script.run:
  experiment_config = %EXPERIMENT

Command:

#! /bin/bash

set -x

PYTHONPATH=/pax/paxml:/pax/praxis python3 /pax/paxml/paxml/main.py \
    --exp=tasks.lm.params.c4.PileSpmdAdam \
    --gin_file="/pax/paxml/configs/gpt3_126_pp.gin" \
    --tfds_data_dir="/pax/datasets" \
    --vocab_path='/pax/vocab/c4_en_301_5Mexp2_spm.model' \
    --pmap_use_tensorstore=True \
    --job_log_dir=/logs/ \
    --alsologtostderr 

set +x

XLA Compile-Time Error:

2022-10-10 16:01:05.537760: F external/org_tensorflow/tensorflow/compiler/xla/array.h:446] Check failed: n < sizes_size 
Fatal Python error: Aborted

Current thread 0x00007f5c10b73740 (most recent call first):
  File "/usr/local/lib/python3.8/dist-packages/jax/_src/dispatch.py", line 940 in backend_compile
  File "/usr/local/lib/python3.8/dist-packages/jax/_src/profiler.py", line 294 in wrapper
  File "/usr/local/lib/python3.8/dist-packages/jax/_src/dispatch.py", line 996 in compile_or_get_cached
  File "/usr/local/lib/python3.8/dist-packages/jax/interpreters/pxla.py", line 3048 in from_hlo
  File "/usr/local/lib/python3.8/dist-packages/jax/interpreters/pxla.py", line 2890 in compile
  File "/usr/local/lib/python3.8/dist-packages/jax/experimental/pjit.py", line 815 in _pjit_call_impl
  File "/usr/local/lib/python3.8/dist-packages/jax/core.py", line 685 in process_primitive
  File "/usr/local/lib/python3.8/dist-packages/jax/core.py", line 327 in bind_with_trace
  File "/usr/local/lib/python3.8/dist-packages/jax/core.py", line 324 in bind
  File "/usr/local/lib/python3.8/dist-packages/jax/experimental/pjit.py", line 385 in wrapped
  File "/pax/paxml/paxml/train.py", line 1087 in train_and_evaluate_spmd_model
  File "/pax/paxml/paxml/train.py", line 271 in train_and_evaluate
  File "/pax/paxml/paxml/main.py", line 290 in run_experiment
  File "/pax/paxml/paxml/main.py", line 535 in run
  File "/usr/local/lib/python3.8/dist-packages/gin/config.py", line 1582 in gin_wrapper
  File "/pax/paxml/paxml/main.py", line 588 in main
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 251 in _run_main
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 303 in run
  File "/pax/paxml/paxml/main.py", line 631 in <module>

There is no problem when NUM_MICROBATCHES = 1.

It would be great if someone could look into this to figure out what may be causing XLA to break when using NUM_MICROBATCHES > 1.

Pipeline Parallelism: USE_REPEATED_LAYERS bug

Hello!

I am trying to implement 126 million parameter GPT-3 with Pipeline Parallelism on PAXML. I notice that USE_REPEATED_LAYERS=True helps speed up compilation and also reduces the memory requirement. However, when I set USE_REPEATED_LAYERS=True with Pipeline Parallelism, I get the following error.

System:

8X NVIDIA A100-SXM 80 GB

Gin Configs:

from __gin__ import dynamic_registration

import __main__ as train_script
from paxml import gin_utils
from paxml.tasks.lm import model_params_with_gin
from paxml.tasks.lm.params import datasets_gin
from praxis import optimizers
from praxis import schedules
from praxis.layers import activations
from praxis.layers import repeats
from jax import numpy as jnp

MAX_SL=2048
SUMMARY_INTERVAL_STEPS=100
CHECKPOINT_EVERY_N_STEPS=1000
EVAL_INTERVAL_STEPS=100
MAX_STEPS=600000
NUM_STAGES = 4
ICI_MESH_SHAPE=[%NUM_STAGES, 1, 1, 2]
PERCORE_BATCH_SIZE = 2

MODEL = @model_params_with_gin.TransformerLmSpmdPipeline()
model_params_with_gin.TransformerLmSpmdPipeline:
  USE_REPEATED_LAYER = True
  MAX_SEQ_LEN = %MAX_SL
  NUM_LAYERS = 12
  NUM_HEADS = 12
  MODEL_DIMS = 768
  HIDDEN_DIMS = 3072
  DIMS_PER_HEAD = 64
  VOCAB_SIZE = 51200
  TRAINABLE_POSITION_EMB = True
  TRAINABLE_PE_MAX_SEQ_LEN = %MAX_SL
  ACTIVATION_CLS = @activations.GELU.HParams()
  PACKED_INPUT = True
  USE_BIAS = False
  MAX_STEPS=%MAX_STEPS
  INIT_STD = 0.023
  EVAL_INTERVAL_STEPS = 100
  NUM_STAGES = %NUM_STAGES
  NUM_MICROBATCHES = 1
  ICI_MESH_SHAPE = %ICI_MESH_SHAPE
  FPROP_DTYPE = @jnp.bfloat16
  SUMMARY_INTERVAL_STEPS=%SUMMARY_INTERVAL_STEPS
  CHECKPOINT_EVERY_N_STEPS=%CHECKPOINT_EVERY_N_STEPS
  EVAL_INTERVAL_STEPS=%EVAL_INTERVAL_STEPS

OPTIMIZER = @optimizers.Adam.HParams()
optimizers.Adam.HParams:
  beta1 = 0.9
  beta2 = 0.95
  learning_rate = 6e-4
  epsilon_root = 0.0
  epsilon = 1e-8
  weight_decay = 0.1
  clip_threshold = 1.0
  clip_gradient_norm_to_value = 5.0


SCHEDULER = @schedules.LinearRampupCosineDecay.HParams()
schedules.LinearRampupCosineDecay.HParams:
  warmup_steps = 636
  decay_start = 637
  decay_end = 500000
  min_ratio = 0.1
  max = 1.0

DATASET = @datasets_gin.PileUnsupervisedDataset()
datasets_gin.PileUnsupervisedDataset:
  MAX_SEQ_LEN = %MAX_SL
  PERCORE_BATCH_SIZE = %PERCORE_BATCH_SIZE

## experiment == model + dataset
EXPERIMENT = @model_params_with_gin.Experiment()
model_params_with_gin.Experiment:
  model = %MODEL
  dataset = %DATASET
  optimizer = %OPTIMIZER
  scheduler = %SCHEDULER
  
train_script.run:
  experiment_config = %EXPERIMENT

Command:

#! /bin/bash

set -x

PYTHONPATH=/pax/paxml:/pax/praxis python3 /pax/paxml/paxml/main.py \
    --exp=tasks.lm.params.c4.PileSpmdAdam \
    --gin_file="/pax/paxml/configs/gpt3_126_pp.gin" \
    --tfds_data_dir="/pax/datasets" \
    --vocab_path='/pax/vocab/c4_en_301_5Mexp2_spm.model' \
    --pmap_use_tensorstore=True \
    --job_log_dir=/logs/ \
    --alsologtostderr 

set +x

Error:

Traceback (most recent call last):
  File "/pax/paxml/paxml/main.py", line 631, in <module>
    app.run(main, flags_parser=_gin_flags_parser)
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.8/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/pax/paxml/paxml/main.py", line 588, in main
    run_with_gin()
  File "/usr/local/lib/python3.8/dist-packages/gin/config.py", line 1605, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/usr/local/lib/python3.8/dist-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
    raise proxy.with_traceback(exception.__traceback__) from None
  File "/usr/local/lib/python3.8/dist-packages/gin/config.py", line 1582, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/pax/paxml/paxml/main.py", line 535, in run
    run_experiment(
  File "/pax/paxml/paxml/main.py", line 290, in run_experiment
    train.train_and_evaluate(
  File "/pax/paxml/paxml/train.py", line 271, in train_and_evaluate
    train_and_evaluate_spmd_model(task_p, train_input_p, job_log_dir,
  File "/pax/paxml/paxml/train.py", line 851, in train_and_evaluate_spmd_model
    vars_weight_params = jax_task.model.abstract_init_with_metadata(
  File "/usr/local/lib/python3.8/dist-packages/flax/linen/transforms.py", line 1320, in wrapped_fn
    return jax.named_call(class_fn, name=full_name)(self, *args, **kwargs)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/usr/local/lib/python3.8/dist-packages/flax/linen/module.py", line 353, in wrapped_module_method
    return self._call_wrapped_method(fun, args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/flax/linen/module.py", line 652, in _call_wrapped_method
    y = fun(self, *args, **kwargs)
  File "/pax/praxis/praxis/base_layer.py", line 1231, in abstract_init_with_metadata
    variables_abstract = jax.eval_shape(init_fn, rngs)
  File "/usr/local/lib/python3.8/dist-packages/jax/_src/api.py", line 3024, in eval_shape
    out = pe.abstract_eval_fun(wrapped_fun.call_wrapped,
  File "/usr/local/lib/python3.8/dist-packages/jax/interpreters/partial_eval.py", line 662, in abstract_eval_fun
    _, avals_out, _ = trace_to_jaxpr_dynamic(
  File "/usr/local/lib/python3.8/dist-packages/jax/_src/profiler.py", line 294, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/jax/interpreters/partial_eval.py", line 1929, in trace_to_jaxpr_dynamic
    jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(
  File "/usr/local/lib/python3.8/dist-packages/jax/interpreters/partial_eval.py", line 1946, in trace_to_subjaxpr_dynamic
    ans = fun.call_wrapped(*in_tracers_)
  File "/usr/local/lib/python3.8/dist-packages/jax/linear_util.py", line 168, in call_wrapped
    ans = self.f(*args, **dict(self.params, **kwargs))
  File "/usr/local/lib/python3.8/dist-packages/jax/linear_util.py", line 168, in call_wrapped
    ans = self.f(*args, **dict(self.params, **kwargs))
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/pax/praxis/praxis/base_layer.py", line 1169, in force_init
    jax.tree_map(force, val)
  File "/pax/praxis/praxis/base_layer.py", line 1167, in force
    v.force_init(*args)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/pax/praxis/praxis/base_layer.py", line 1169, in force_init
    jax.tree_map(force, val)
  File "/pax/praxis/praxis/base_layer.py", line 1167, in force
    v.force_init(*args)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/pax/praxis/praxis/base_layer.py", line 1169, in force_init
    jax.tree_map(force, val)
  File "/pax/praxis/praxis/base_layer.py", line 1167, in force
    v.force_init(*args)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/pax/praxis/praxis/layers/pipeline.py", line 217, in force_init
    body_init_fn(self.body, None)
  File "/pax/praxis/praxis/layers/pipeline.py", line 162, in fn
    model.force_init(None)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/pax/praxis/praxis/base_layer.py", line 1169, in force_init
    jax.tree_map(force, val)
  File "/pax/praxis/praxis/base_layer.py", line 1167, in force
    v.force_init(*args)
  File "/usr/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
TypeError: force_init() takes 1 positional argument but 2 were given
  In call to configurable 'run' (<function run at 0x7fedd131ab80>)

Would you have any suggestions on how to fix this?

Unexpected Overheads with Activation Checkpointing with Pipeline Parallelism

We notice buggy behavior with bitcasts and dynamic update slices. When we turn on activation checkpointing (e.g., saving the outputs of projection layers using the SAVE_OUT_PROJ flag in PAXML), we see multiple extra updates and copies.

For example, we want to checkpoint an activation of shape [2,2048,48,128]. However, in the HLO below we see that the copies are of shape [15,1,2,2048,48,128]. Here, 15 is the number of microbatches we are using with pipeline parallelism.

Snippet of HLO:

fusion.549 = (bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, ..., kind=kLoop, calls=fused_computation.549, metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/dynamic_update_slice" source_file="/usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py" source_line=148}
get-tuple-element.5874 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} get-tuple-element(fusion.549), index=0
copy.583 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} copy(get-tuple-element.5874)
get-tuple-element.5866 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} get-tuple-element(fusion.549), index=1
copy.575 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} copy(get-tuple-element.5866)
get-tuple-element.5868 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} get-tuple-element(fusion.549), index=2
copy.577 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} copy(get-tuple-element.5868)
get-tuple-element.5870 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} get-tuple-element(fusion.549), index=3
copy.579 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} copy(get-tuple-element.5870)
get-tuple-element.5872 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} get-tuple-element(fusion.549), index=4
copy.581 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} copy(get-tuple-element.5872)

...

fused_computation.549 {
  param_1.8511 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} parameter(1)
  bitcast.52601 = bf16[15,1,2,48,128,2048]{5,4,3,2,1,0} bitcast(param_1.8511)
  param_0.6313 = bf16[2,48,128,2048]{3,2,1,0} parameter(0)
  bitcast.52600 = bf16[1,1,2,48,128,2048]{5,4,3,2,1,0} bitcast(param_0.6313)
  param_2.5901 = s32[] parameter(2)
  constant_7564 = s32[] constant(0)
  compare.3477 = pred[] compare(param_2.5901, constant_7564), direction=LT, metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/pipeline._scan_fn/pipeline._get_iteration_inputs/jit(remainder)/rem" source_file="/pax/praxis/praxis/layers/pipeline.py" source_line=422}
  constant_11524 = s32[] constant(15)
  add.6580 = s32[] add(param_2.5901, constant_11524), metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/add" source_file="/pax/praxis/praxis/base_layer.py" source_line=695}
  select.5360 = s32[] select(compare.3477, add.6580, param_2.5901), metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/select_n" source_file="/usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py" source_line=148}
  dynamic-update-slice.325 = bf16[15,1,2,48,128,2048]{5,4,3,2,1,0} dynamic-update-slice(bitcast.52601, bitcast.52600, select.5360, constant_7564, constant_7564, /*index=5*/constant_7564, constant_7564, constant_7564), metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/dynamic_update_slice" source_file="/usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py" source_line=148}
  bitcast.52599 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} bitcast(dynamic-update-slice.325), metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/dynamic_update_slice" source_file="/usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py" source_line=148}
  param_4.7770 = bf16[15,1,2,2048,48,128]{3,5,4,2,1,0} parameter(4)
  bitcast.52617.clone.1 = bf16[15,1,2,48,128,2048]{5,4,3,2,1,0} bitcast(param_4.7770)
  param_3.8428 = bf16[2,48,128,2048]{3,2,1,0} parameter(3)
  bitcast.52616.clone.1 = bf16[1,1,2,48,128,2048]{5,4,3,2,1,0} bitcast(param_3.8428)
  dynamic-update-slice.333.clone.1 = bf16[15,1,2,48,128,2048]{5,4,3,2,1,0} dynamic-update-slice(bitcast.52617.clone.1, bitcast.52616.clone.1, select.5360, constant_7564, constant_7564, /*index=5*/constant_7564, constant_7564, constant_7564), metadata={op_name="pjit(_wrapped_step_fn)/jit(main)/jvp(xformer_lm.apply)/xformer_lm/xformer_lm.compute_predictions/lm/transformer/pipeline/while/body/dynamic_update_slice" source_file="/usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py" source_line=148}
  ...
  ROOT tuple.356 = (bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}, bf16[15,1,2,2048,48,128]{3,5,4,2,1,0}) tuple(bitcast.52599, bitcast.52615.clone.1, bitcast.52611.clone.1, bitcast.52607.clone.1, bitcast.52603.clone.1)
}

It seems like there is a big buffer of size [15,1,2,2048,48,128] holding the activations for all microbatches. Within each microbatch, we are trying to update one row of this buffer (of shape [2,2048,48,128]). But XLA loads the entire buffer into memory, performs the update, and then copies the buffer back. We see this problem in our profiles. The amount of time spent on D2D copies (i.e., copy.575 to copy.583) is much larger than expected for the amount of data that should be copied. Right now, the time spent on activation checkpointing is 5% to 8% of the overall run time for a GPT-3 style model.
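
To make the access pattern concrete, here is a small standalone JAX sketch (toy shapes, not the Pax code itself) of the scan-plus-dynamic-update-slice structure described above, where each iteration writes one row of a buffer carried through the loop:

import jax
import jax.numpy as jnp

# Toy shapes; in the issue the per-microbatch activation is [2, 2048, 48, 128]
# and the carried buffer is [15, 1, 2, 2048, 48, 128].
NUM_MICROBATCHES = 15
ROW_SHAPE = (2, 8, 4, 4)

def body(buf, i):
  # Stand-in for the activation saved by one microbatch iteration.
  row = jnp.full(ROW_SHAPE, i, dtype=jnp.bfloat16)
  # Write a single row of the carried buffer; this is the dynamic-update-slice
  # whose result ends up being copied when bitcasts are treated as defining
  # new values.
  buf = jax.lax.dynamic_update_slice_in_dim(buf, row[None], i, axis=0)
  return buf, ()

init = jnp.zeros((NUM_MICROBATCHES,) + ROW_SHAPE, dtype=jnp.bfloat16)
buf, _ = jax.lax.scan(body, init, jnp.arange(NUM_MICROBATCHES))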

Our current understanding: the copy happens because when a bitcast is treated as computing a new value (like a convert or sqrt), a new tensor must be produced in each loop iteration, and therefore a copy of each DUS result must be made. This should be fixable by treating bitcast as an aliasing operation rather than as a value-defining one in the dataflow analysis. I think there is an option in the dataflow analysis that configures how bitcast is treated; on XLA TPU, that option is set so that bitcasts are treated as simply an aliasing operation.

Would someone be able to look into this?

I am attaching a link to the HLO: https://drive.google.com/drive/folders/1fYUsqfDgYRRpgOklE-k7qx_5ixkJzKPD?usp=sharing
