Comments (5)
Hey, could you share a reproducer?
Part of the difference comes from the fact that we keep track of offsets and a lot of other information, which tiktoken does not.
We could potentially compute this only when asked for, and improve speed that way.
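For context, here is a minimal sketch of the extra information the fast tokenizers compute alongside the token ids (the gpt2 tokenizer and the sample text are just placeholders):

from tokenizers import Tokenizer

# Placeholder: any pretrained fast tokenizer works here
tok = Tokenizer.from_pretrained("gpt2")
enc = tok.encode("Hello world")
print(enc.ids)      # token ids (the part tiktoken also returns)
print(enc.offsets)  # per-token (start, end) character offsets, tracked in addition
print(enc.tokens)   # token strings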
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
It's high on my priority list to run benchmarks and improve our code if needed!
For HF, we use:

import time
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
text = "xxx"
start = time.time()
encoded_input = tokenizer.encode(text)
end = time.time()
For tiktoken, we just initialize the tokenizer with tiktoken; everything else is the same:

import tiktoken

tokenizer = tiktoken.encoding_for_model("gpt-2")
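For completeness, the tiktoken side of the timing would then look roughly like this (a sketch assuming the same text variable as above; "gpt2" is the assumed model name):

import time
import tiktoken

tokenizer = tiktoken.encoding_for_model("gpt2")  # assumed model name; adjust if needed
text = "xxx"
start = time.time()
encoded_input = tokenizer.encode(text)
end = time.time()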
Please let me know if you need any other information.
You are using GPT2Tokenizer, which is the slow (pure-Python) one. Use GPT2TokenizerFast 😅
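For reference, a minimal sketch of the same timing loop with the fast (Rust-backed) tokenizer; the variable names simply mirror the snippet above:

import time
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
text = "xxx"
start = time.time()
encoded_input = tokenizer.encode(text)
end = time.time()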
Related Issues (20)
- How to write custom Wordpiece class? HOT 3
- Link to download the training text in `docs/source/quicktour.rst` is broken HOT 5
- Strange warnings with tokenizer for some models HOT 5
- Bug with `CodeQwen1.5`: `data did not match any variant of untagged enum PyPreTokenizerTypeWrapper` HOT 1
- Converting `tokenizers` tokenizers into `tiktoken` tokenizers HOT 5
- How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification HOT 1
- How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer? HOT 3
- [BUG]Might be a bug in Unigram Trainer HOT 1
- Training HuggingFace tokenizer - ignore_merges HOT 2
- Memory leak for large strings HOT 8
- Deserializing BPE tokenizer failure HOT 4
- llama3 tokenizer doesn't round trip HOT 4
- [BUG] Fast tokenizer does not deal with AddedTokens properly(no problem in Transformers python tokenizer impl.) HOT 6
- How can I get the mapping relationship between byte values and Unicode characters of the fast tokenizer? HOT 5
- "Solution" to memory hogging in train_new_from_iterator with a hack HOT 7
- How to use `TokenizerBuilder`? HOT 4
- [Bug?] Modifying normalizer for pretrained tokenizers don't consistently work HOT 2
- Llama-3 offset-mapping needs fixing HOT 6
- `Encoding` object stub doesn't include `__len__` HOT 4
- Progress bar doesn't show in log file. HOT 4