
mukhal / grace

41 stars · 2 watchers · 0 forks · 30.63 MB

[EMNLP 2023, Findings] GRACE: Discriminator-Guided Chain-of-Thought Reasoning

Home Page: https://arxiv.org/abs/2305.14934

Languages: Python 91.18%, MDX 7.84%, Jupyter Notebook 0.40%, Cuda 0.34%, Shell 0.14%, Dockerfile 0.04%, C++ 0.03%, Makefile 0.01%, Jsonnet 0.01%, Cython 0.01%, C 0.01%

Topics: chain-of-thought, decoding, language-model, multi-step-reasoning, reasoning, text-generation, llm, mathematical-reasoning, symbolic-reasoning

grace's People

Contributors: mukhal

Stargazers: 41

Watchers: 2

grace's Issues

Unable to load discriminator error

(grace) adithya@node02:~/grace$ ls /data/adithya/ckpts/discrim/gsm8k
added_tokens.json  pytorch_model.bin  special_tokens_map.json  spiece.model  tokenizer_config.json  training_args.bin
(grace) adithya@node02:~/grace$  WANDB_MODE=disabled python run_grace.py \
>                         --model_name_or_path google/flan-t5-large \
>                         --in_file data/gsm8k/dev.jsonl \
>                         --task gsm8k \
>                         --disc_path /data/adithya/ckpts/discrim/gsm8k/ \
>                         --beta 0.1 --n_candidate_steps 20 --generation_type step-score \
>                         --step_sampling_method top_p --device2 cuda:7 --top_p .95 --sample_calc true \
>                         --max_steps 6  --max_step_length 60 --step_delimiter '|' --temperature .8  --n_self_consistency 1 --seed 42

Loading model from google/flan-t5-large
loading discriminator tokenizer from /data/adithya/ckpts/discrim/gsm8k/
Loading discriminator from /data/adithya/ckpts/discrim/gsm8k/
config.json: 100%|████████| 1.40k/1.40k [00:00<00:00, 4.07MB/s]
pytorch_model.bin: 100%|████████| 990M/990M [00:08<00:00, 114MB/s]
generation_config.json: 100%|████████| 147/147 [00:00<00:00, 668kB/s]
Traceback (most recent call last):
  File "/home/adithya/grace/run_grace.py", line 441, in <module>
    main(args)
  File "/home/adithya/grace/run_grace.py", line 83, in main
    discriminator.load_state_dict(ckpt)
  File "/data/adithya/anaconda3/envs/grace/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for T5EnergyDiscriminator:
        Unexpected key(s) in state_dict: "model.encoder.block.12.layer.0.SelfAttention.q.weight", "model.encoder.block.12.layer.0.SelfAttention.k.weight", "model.encoder.block.12.layer.0.SelfAttention.v.weight", "model.encoder.block.12.layer.0.SelfAttention.o.weight", ...
        [key list truncated: the unexpected keys cover every self-attention, cross-attention, feed-forward, and layer-norm weight in encoder blocks 12-23 and decoder blocks 12-23]

I used the download_models.py script to download the gsm8k discriminator.
Please let me know how to resolve this.
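
The unexpected keys all belong to encoder and decoder blocks 12-23, which points to a backbone-size mismatch: the checkpoint appears to contain weights for a 24-block (large-sized) T5, while the T5EnergyDiscriminator it is loaded into seems to have been built on a 12-block (base-sized) backbone. A minimal diagnostic sketch, assuming the checkpoint file is a raw state_dict, as run_grace.py's direct load_state_dict call implies:

import torch

# Inspect which encoder blocks the downloaded checkpoint actually contains.
# The path is the one from the report above; adjust to your setup.
ckpt = torch.load(
    "/data/adithya/ckpts/discrim/gsm8k/pytorch_model.bin", map_location="cpu"
)

# Keys look like "model.encoder.block.<i>.layer.0.SelfAttention.q.weight",
# so field 3 of the dotted key is the block index.
encoder_blocks = {
    int(key.split(".")[3])
    for key in ckpt
    if key.startswith("model.encoder.block.")
}
print(f"checkpoint has {max(encoder_blocks) + 1} encoder blocks")

If this prints 24, instantiating the discriminator on the matching large-sized backbone should make load_state_dict succeed; calling load_state_dict(ckpt, strict=False) would merely drop the extra weights and is not a real fix.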

TypeError: Descriptors cannot be created directly.

I tried running inference using the command from the README:

CUDA_VISIBLE_DEVICES=6,7 WANDB_MODE=disabled python run_grace.py \
                        --model_name_or_path mkhalifa/flan-t5-large-gsm8k \
                        --in_file data/gsm8k/dev.jsonl \
                        --task gsm8k \
                        --disc_path /data/adithya/ckpts/discrim/gsm8k/ \
                        --beta 0.1 --n_candidate_steps 20 --generation_type step-score \
                        --step_sampling_method top_p --device2 cuda:1 --top_p .95 --sample_calc true \
                        --max_steps 6 --max_step_length 60 --step_delimiter '|' --temperature .8 --n_self_consistency 1 --seed 42

This command fails with the error below; however, if I set model_name_or_path to "google/flan-t5-large", no error occurs. Could you please help me out here?

The following is the error:

args {
    "beta": 0.1,
    "bf16": false,
    "debug": false,
    "demos_file_name": "demos.jsonl",
    "device1": "cuda:0",
    "device2": "cuda:1",
    "disc_icl": false,
    "disc_path": "/data/adithya/ckpts/discrim/gsm8k/",
    "disc_step_score_aggregation": "mean",
    "eightbit": false,
    "generation_type": "step-score",
    "generator_batch_size": 32,
    "generator_beam_size": 3,
    "generator_only": false,
    "generator_sampling_method": "greedy",
    "goal": "eval",
    "icl": false,
    "in_file": "data/gsm8k/dev.jsonl",
    "instruction": null,
    "max_length": 256,
    "max_step_length": 60,
    "max_steps": 6,
    "model_name_or_path": "mkhalifa/flan-t5-large-gsm8k",
    "model_tokenizer_path": null,
    "n_candidate_steps": 20,
    "n_demos": 2,
    "n_examples": null,
    "n_samples_per_example": 10,
    "n_self_consistency": 1,
    "n_verifier_samples": 5,
    "normalize_disc_scores": true,
    "out_dir": null,
    "output_results_dir": null,
    "sample_calc": true,
    "seed": 42,
    "start_idx": 0,
    "step_delimiter": "|",
    "step_sampling_method": "top_p",
    "step_selection_method": "greedy",
    "task": "gsm8k",
    "temperature": 0.8,
    "top_k": 50,
    "top_p": 0.95,
    "use_verifier": false,
    "verbose": false,
    "verifier_batch_size": 32,
    "verifier_path": null
}
Traceback (most recent call last):
  File "/data/adithya/grace/run_grace.py", line 388, in <module>
    main(args)
  File "/data/adithya/grace/run_grace.py", line 29, in main
    tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
  File "/data/adithya/grace/transformers/src/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/data/adithya/grace/transformers/src/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
    return cls._from_pretrained(
  File "/data/adithya/grace/transformers/src/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/data/adithya/grace/transformers/src/transformers/models/t5/tokenization_t5_fast.py", line 133, in __init__
    super().__init__(
  File "/data/adithya/grace/transformers/src/transformers/tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "/data/adithya/grace/transformers/src/transformers/convert_slow_tokenizer.py", line 1288, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "/data/adithya/grace/transformers/src/transformers/convert_slow_tokenizer.py", line 445, in __init__
    from .utils import sentencepiece_model_pb2 as model_pb2
  File "/data/adithya/grace/transformers/src/transformers/utils/sentencepiece_model_pb2.py", line 91, in <module>
    _descriptor.EnumValueDescriptor(
  File "/data/adithya/anaconda3/envs/grace/lib/python3.10/site-packages/google/protobuf/descriptor.py", line 789, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
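
Both workarounds from the error message apply here without touching the pinned transformers fork: either downgrade protobuf (e.g. pip install "protobuf<=3.20.3") or select the pure-Python protobuf implementation. A minimal sketch of the second option, which must run before the first import of transformers:

import os

# Workaround 2 from the error message: force pure-Python protobuf parsing.
# Slower, but it avoids the generated-descriptor check entirely.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

import transformers  # imported only after the environment variable is set

Setting the variable in the shell is equivalent: PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python python run_grace.py .... As for why google/flan-t5-large works, it likely ships a prebuilt fast tokenizer (tokenizer.json), so the slow-to-fast conversion through sentencepiece_model_pb2 is never triggered, whereas mkhalifa/flan-t5-large-gsm8k appears to provide only the slow sentencepiece tokenizer.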
