Comments (7)

Anindyadeep commented on September 12, 2024

Yes, funny enough, while I was writing up my issue I also found the root cause. Here are my learnings:

  1. Make sure git-lfs is installed; otherwise you may get access to the submodules, but the large files will not actually be downloaded.
  2. Make sure protobuf is installed.
  3. Make sure onnxruntime is installed.

With those three in place, everything we need is installed. Thanks @adarshxs for the head start.
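
For anyone who wants to sanity-check all three prerequisites at once, a minimal sketch (the exact checks are just illustrative):

import shutil

import google.protobuf
import onnxruntime

# Without git-lfs on PATH, submodule checkouts fetch tiny pointer files
# instead of the real model weights.
assert shutil.which("git-lfs") is not None, "git-lfs is not installed"
print("protobuf:", google.protobuf.__version__)
print("onnxruntime:", onnxruntime.__version__)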

JoshuaElsdon commented on September 12, 2024

Hello, I have tried the command as listed, and it works correctly on my end. Could you provide some details about which version of ONNX you are using and what operating system you are on?

adarshxs commented on September 12, 2024

I get the same error.

Linux 23fe32d5f4bf 5.4.0-72-generic #80-Ubuntu
CUDA: 12
onnx version: 1.13.0
onnxruntime-gpu version: 1.15.1

root@23fe32d5f4bf:/workspace/Llama-2-Onnx# python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names. Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
warnings.warn()

Traceback (most recent call last):
  File "MinimumExample/Example_ONNX_LlamaV2.py", line 166, in <module>
    response = run_onnx_llamav2(
  File "MinimumExample/Example_ONNX_LlamaV2.py", line 47, in run_onnx_llamav2
    llm_session = onnxruntime.InferenceSession(
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_float16/ONNX/LlamaV2_7B_float16.onnx failed: Protobuf parsing failed.
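
As an aside, the UserWarning above is expected on Linux: DmlExecutionProvider is the Windows-only DirectML backend that the example script requests. A minimal sketch to confirm which providers your onnxruntime build actually ships:

import onnxruntime

# On this Linux build, the output should match the warning above:
# ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
print(onnxruntime.get_available_providers())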

Anindyadeep commented on September 12, 2024

Hello,
I am also facing the same problem here. Here are my specs:

OS: Pop!_OS
GPU: NVIDIA GeForce RTX 3060 Mobile / Max-Q (6 GB)
Memory: 16 GB

------------------

onnxruntime version: 1.15.1
onnxruntime-gpu version: 1.15.1

I have cloned the repo and also got access to the submodules. Here is the command I ran:

python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

Here is the result I got:

"
/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Traceback (most recent call last):
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 166, in <module>
    response = run_onnx_llamav2(
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 47, in run_onnx_llamav2
    llm_session = onnxruntime.InferenceSession(
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx failed:Protobuf parsing failed.

adarshxs commented on September 12, 2024

@Anindyadeep can you check the file size of the submodule you chose? (It may be surprisingly small for model weights)

Try running git lfs pull inside your chosen submodule. That should download the actual model weights instead of just the pointers to them. The reason is that running:

git submodule init <chosen_submodule>
git submodule update

might not have downloaded the actual model weights, but rather pointers to the files stored on LFS.
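
A quick way to tell whether a checkout contains real weights or only pointers is to read the first bytes of a file: LFS pointers are tiny text files with a fixed header. A minimal sketch (the model path is just an example):

from pathlib import Path

LFS_HEADER = b"version https://git-lfs.github.com/spec/v1"

def is_lfs_pointer(path: Path) -> bool:
    # Real LFS pointer files start with this header and are ~130 bytes long.
    with open(path, "rb") as f:
        return f.read(len(LFS_HEADER)) == LFS_HEADER

model = Path("7B_float16/ONNX/LlamaV2_7B_float16.onnx")
print(is_lfs_pointer(model), model.stat().st_size, "bytes")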

Anindyadeep commented on September 12, 2024

@adarshxs Thanks for the quick reply. So here's the thing: I updated the git submodules for the folders 7B_float16 and 7B_FT_float16, and both show 2.5M, while all the empty submodules (the ones I have not updated) show 4.0K.

After that I saw that 7B_float16/ONNX does seem to have all the files, like

onnx__MatMul_21420
transformer.block_list.1.proj_norm.weight

(showing the two types of files inside the ONNX folder). I first thought those were binary files, but I can open them, and this is what I saw:

FILE: onnx__MatMul_21420

version https://git-lfs.github.com/spec/v1
oid sha256:d661398b0bb3b10fad9d807e7b6062f9e04ac43db9c9e26bf3641baa7b0d92e8
size 90177536

FILE: transformer.block_list.1.proj_norm.weight

version https://git-lfs.github.com/spec/v1
oid sha256:5540f5f085777ef016b745d27a504c94b51a4813f9c5d1ab8ec609d1afaab6fa
size 8192

Although I got access, it now feels like the files were not downloaded properly when I updated the submodules.
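
One way to see this at a glance is to list the file sizes inside the ONNX folder: pointer files are only ~130 bytes, while the size lines above say the real blobs should be roughly 90 MB and 8 KB. A small sketch:

from pathlib import Path

# List every entry in the ONNX folder with its on-disk size.
for p in sorted(Path("7B_float16/ONNX").iterdir()):
    print(f"{p.stat().st_size:>12,}  {p.name}")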

adarshxs commented on September 12, 2024

@Anindyadeep Yes, I had the same issue. Make sure you have git-lfs installed, then run git lfs pull inside the submodule you want.
I suppose these are pointers to the actual weights:

version https://git-lfs.github.com/spec/v1
oid sha256:d661398b0bb3b10fad9d807e7b6062f9e04ac43db9c9e26bf3641baa7b0d92e8
size 90177536

Running git lfs pull downloaded the weights and fixed this issue for me.
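
Once the pull finishes, a minimal load test confirms the protobuf error is gone (a sketch; the provider list is an assumption for a CUDA Linux box):

import onnxruntime

session = onnxruntime.InferenceSession(
    "7B_float16/ONNX/LlamaV2_7B_float16.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())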
