Comments (8)
Turns out this issue occurs with every model I use except Gemma.
from mediapipe-samples.
@areebbashir Hello, recently I have also been trying to run the phi-2 model on an Android device. However, I hit an error while converting the model to a compatible MediaPipe format; it seems the file where model_ckpt_util is located cannot be found. My Python version is 3.9, and I tried mediapipe 0.10.11 and 0.10.13; neither can run the following script properly. Can you point out the problem? Thank you very much!
import mediapipe as mp
from mediapipe.tasks.python.genai import converter

config = converter.ConversionConfig(
    input_ckpt="E:/PythonProject/models/phi-2/model-00002-of-00002.safetensors",
    ckpt_format="safetensors",
    model_type="PHI_2",
    backend="gpu",
    output_dir="E:/PythonProject/models/phi-2_output/output",
    combine_file_only=False,
    vocab_model_file="E:/PythonProject/models/phi-2",
    output_tflite_file="E:/PythonProject/models/phi-2_output/phi_2_model_gpu.bin",
)
converter.convert_checkpoint(config)
Error when running:

Traceback (most recent call last):
  File "E:\PythonProject\pythonProject2\convert_to_api.py", line 17, in <module>
    converter.convert_checkpoint(config)
  File "E:\PythonProject\pythonProject2\.venv\lib\site-packages\mediapipe\tasks\python\genai\converter\llm_converter.py", line 251, in convert_checkpoint
    vocab_model_path = convert_bpe_vocab(
  File "E:\PythonProject\pythonProject2\.venv\lib\site-packages\mediapipe\tasks\python\genai\converter\llm_converter.py", line 193, in convert_bpe_vocab
    model_ckpt_util.ConvertHfTokenizer(vocab_model_file, output_vocab_file)
AttributeError: module 'mediapipe.python._framework_bindings.model_ckpt_util' has no attribute 'ConvertHfTokenizer'
from mediapipe-samples.
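A note on the traceback above: the AttributeError means the native binding module imported fine but does not expose the ConvertHfTokenizer symbol, which may indicate an installed wheel that does not match the Python code rather than a bad file path. A minimal diagnostic sketch (illustrative only; the stub object below stands in for the real `mediapipe.python._framework_bindings.model_ckpt_util` module):

```python
from types import SimpleNamespace


def missing_symbols(module, required):
    """Return the names in `required` that `module` does not provide."""
    return [name for name in required if not hasattr(module, name)]


# Demonstrated on a stand-in object; with mediapipe installed you would
# pass the imported model_ckpt_util binding module instead.
stub = SimpleNamespace(some_other_symbol=None)  # binding lacking the tokenizer entry point
print(missing_symbols(stub, ["ConvertHfTokenizer"]))  # -> ['ConvertHfTokenizer']
```

If the real binding reports the symbol as missing, reinstalling a matching mediapipe wheel is the likely fix, consistent with the `pip install mediapipe --user` suggestion below.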
I have not encountered this issue yet, but I had some other import errors when I pip installed mediapipe. Then I used
pip install mediapipe --user
which worked for me.
Also, you are passing the wrong path in input_ckpt. It needs to be the absolute path to the folder containing the phi model files, not a single shard. Try this:
import mediapipe as mp
from mediapipe.tasks.python.genai import converter

config = converter.ConversionConfig(
    input_ckpt="E:/PythonProject/models/phi-2",
    ckpt_format="safetensors",
    model_type="PHI_2",
    backend="gpu",
    output_dir="E:/PythonProject/models/phi-2_output/output",
    combine_file_only=False,
    vocab_model_file="E:/PythonProject/models/phi-2",
    output_tflite_file="E:/PythonProject/models/phi-2_output/phi_2_model_gpu.bin",
)
converter.convert_checkpoint(config)
from mediapipe-samples.
I have a feeling the 'stop' token is potentially different for the non-Gemma models, and would need to be updated in the app, but I'll need to verify that. I'll add this to my TODO list for after IO!
from mediapipe-samples.
Hey @vittalitty if you're still running into this issue after the feedback in the previous comment, can you put it into a new issue for tracking? Thanks!
from mediapipe-samples.
Thanks. In the meantime, can you suggest what I could try from my end?
from mediapipe-samples.
You'll want to look up the info for that model on Hugging Face or wherever to find out what the EOD (end-of-document/stop) token should be, then replace it in the app (assuming that's the issue).
from mediapipe-samples.
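As a concrete illustration of the advice above (not MediaPipe-specific): a Hugging Face checkpoint usually records its stop token in tokenizer_config.json. A minimal sketch of reading it, assuming a locally downloaded model folder like the phi-2 directory used with the converter earlier:

```python
import json
import tempfile
from pathlib import Path


def read_eos_token(model_dir):
    """Read the eos_token entry from a model's tokenizer_config.json.

    Assumption: the model folder was downloaded locally and contains
    a Hugging Face tokenizer_config.json."""
    config = json.loads(Path(model_dir, "tokenizer_config.json").read_text())
    token = config.get("eos_token")
    # Some configs store the token as {"content": "..."} instead of a string.
    if isinstance(token, dict):
        token = token.get("content")
    return token


# Demo with a synthetic config file (phi-2's value is reportedly
# "<|endoftext|>", but verify against the real file):
with tempfile.TemporaryDirectory() as d:
    Path(d, "tokenizer_config.json").write_text(
        json.dumps({"eos_token": "<|endoftext|>"})
    )
    print(read_eos_token(d))  # -> <|endoftext|>
```

Whatever string this yields for your model is the candidate to swap in wherever the sample app hard-codes the Gemma stop token.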
Is there any API to stop generation based on some criteria?
from mediapipe-samples.
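I'm not aware of a built-in stop-criteria parameter in the current LLM Inference API; the usual workaround is client-side: watch the streamed partial results and stop consuming (or cancel generation) once your criterion fires. A language-agnostic sketch in Python with a fake token stream standing in for the engine's partial-result callback (the helper name and stream are hypothetical):

```python
def generate_until(token_stream, stop_strings, max_tokens=64):
    """Accumulate streamed tokens, stopping early when any stop string
    appears or a token budget is exhausted (hypothetical helper; a real
    app would cancel the inference session at this point)."""
    out = ""
    for i, token in enumerate(token_stream):
        out += token
        if any(s in out for s in stop_strings):
            # Trim at the first stop string and halt generation.
            cut = min(out.find(s) for s in stop_strings if s in out)
            return out[:cut]
        if i + 1 >= max_tokens:
            break
    return out


# Simulated stream standing in for streamed LLM output:
fake_stream = iter(["The answer", " is 42.", "<|endoftext|>", " junk"])
print(generate_until(fake_stream, ["<|endoftext|>"]))  # -> The answer is 42.
```

The same pattern works for any criterion (regex match, length limit, sentence boundary): evaluate it on the accumulated text inside the streaming callback and stop reading further results when it triggers.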
Related Issues (20)
- Android LLM Inference Conversion guide for models fine tuned with keras nlp
- LLM models other than Gemma, Falcon, Phi, StableLM HOT 1
- About face_detector performance issues in android HOT 3
- How to get LLM model performance? HOT 1
- Gemma-1.1-2B-it-cpu-int4.bin generating junk response in phone. HOT 5
- Gesture recognizer does not run on the emulator HOT 1
- LLM inference error when trying to run model locally, but works on Google's site HOT 1
- Native binaries are not compliant with Google Play 64-bit requirement HOT 1
- [Face Detection LandMark] Add a confidence score to the eye landmarks when wearing a black glasses
- Support for armeabi-v7a (32-bit ARM) the LLM Inference API
- The model is not a valid Flatbuffer buffer
- Unable to detect hand pose estimation for Android live app
- library "libllm_inference_engine_jni.so" not found - LLM inference for Android Example. HOT 1
- java.lang.ArrayIndexOutOfBoundsException occurred when I run the android sample with model SSDMobileNet-V2
- Does this Android sample support features like NNAPI and NumThreads? HOT 3
- [Need Support]No example of object tracking were found
- There's isn't any code sample how to use Mediapipe Gesture Recognizer for Live Stream on Python using the Custom Model (.task) HOT 1
- ValueError: Mobile SSD models are expected to have exactly 4 outputs, found 2 HOT 1
- How to improve the performance of the object detection example in the Android sample?
- Face Stylizer example