anarchy-ai / llm-speed-benchmark
Benchmarking tool for assessing LLM models' performance across different hardware
License: MIT License
f3c0b913-b189-48c5-bd85-e5655cf81a4c - model running - running model with following parameters Namespace(max_length=50, temperature=0.9, top_k=50, top_p=0.9, num_return_sequences=1, uuid='f3c0b913-b189-48c5-bd85-e5655cf81a4c', prompt='Hello World!', model='4bit/Llama-2-7b-chat-hf', device='cuda:0')
f3c0b913-b189-48c5-bd85-e5655cf81a4c - metrics collector - Collected metrics for the 6 time, now waiting for 0 sec
Traceback (most recent call last):
File "/workspace/benchllm/model.py", line 54, in <module>
output = hf.run_llm(args.model, args.prompt, args.device, {
File "/workspace/benchllm/src/hf.py", line 43, in run_llm
tokenizer = AutoTokenizer.from_pretrained(model_name)
File "/workspace/benchllm/env/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 768, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/workspace/benchllm/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2024, in from_pretrained
return cls._from_pretrained(
File "/workspace/benchllm/env/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2256, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/workspace/benchllm/env/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 124, in __init__
super().__init__(
File "/workspace/benchllm/env/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 120, in __init__
raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
Hi,
does this also run on AWS SageMaker? And how can it be run with TGI, vLLM, TensorRT-LLM, etc.?
Thanks,
Gerald
Currently, for most models, Hugging Face defaults to a dtype of float32. There should be an option to load the model at a lower precision.
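A minimal sketch of what that option could look like. `torch_dtype` is the standard `from_pretrained` argument for overriding the float32 default; the `load_model` helper and its float16 default are illustrative, not part of this tool:

```python
import torch
from transformers import AutoModelForCausalLM


def load_model(model_name: str, dtype: torch.dtype = torch.float16):
    # Overrides the float32 default; float16 stores each parameter
    # in 2 bytes instead of 4, halving model memory.
    return AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype)


# The memory saving per parameter, independent of any model:
assert torch.empty(1, dtype=torch.float16).element_size() == 2
assert torch.empty(1, dtype=torch.float32).element_size() == 4
```

For 4-bit checkpoints like the one in the log above, `bitsandbytes` quantized loading would be the closer fit, but a `torch_dtype` flag already covers the common float16/bfloat16 case.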