Comments (4)
Thank you for using sentencepiece.
Yes, you can simply feed Chinese- and English-only text as follows:
% spm_train --input=en.txt,zh.txt --model_prefix=shared.model ..
As spm_train only loads the first 10M lines (this can be changed with --input_sentence_size), and the seed vocabulary is generated from the first 2M sentences (--mining_sentence_size), it is better to randomly merge the files in advance:
% shuf en.txt zh.txt | head -1000000 > shared.txt
% spm_train --input=shared.txt --model_prefix=..
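The shuffle-and-merge step above can also be done in plain Python when shuf is unavailable. This is an illustrative stdlib-only sketch; merge_and_sample is a hypothetical helper, not part of SentencePiece:

```python
import random

def merge_and_sample(lines_a, lines_b, k, seed=0):
    """Merge two corpora, shuffle, and keep the first k lines --
    the same effect as `shuf en.txt zh.txt | head -k`."""
    merged = list(lines_a) + list(lines_b)
    random.Random(seed).shuffle(merged)  # fixed seed for reproducibility
    return merged[:k]

# Toy stand-ins for the real en.txt / zh.txt line lists.
en = [f"en sentence {i}" for i in range(5)]
zh = [f"zh sentence {i}" for i in range(5)]
sample = merge_and_sample(en, zh, 6)
print(len(sample))  # prints 6
```

For real corpora you would read the files line by line and write the sampled result to shared.txt before running spm_train.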
Background:
By default, SentencePiece uses the Unicode script type as a boundary constraint, i.e., pieces crossing different script types are never extracted. As Chinese characters (Han) and Latin characters have different script types, sentencepiece always puts a boundary between them, e.g., between "hockey" and "羽毛" at a Chinese/English boundary. --split_by_unicode_script=false disables this feature and may allow extracting pieces like "key羽".
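The script-boundary idea can be illustrated with a rough stdlib sketch. This is not SentencePiece's actual implementation (which uses its own Unicode script data); taking the first word of the Unicode character name only approximates the script:

```python
import unicodedata

def script_of(ch):
    # First word of the Unicode name approximates the script,
    # e.g. 'LATIN' for 'k', 'CJK' for '羽'. Raises ValueError
    # for unnamed characters.
    return unicodedata.name(ch).split()[0]

def crosses_script_boundary(piece):
    # A piece mixing two or more scripts would be rejected under the
    # default --split_by_unicode_script=true behavior.
    scripts = {script_of(ch) for ch in piece}
    return len(scripts) > 1

print(crosses_script_boundary("hockey"))  # prints False
print(crosses_script_boundary("key羽"))   # prints True
```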
Thank you.
from sentencepiece.
Thanks for the prompt reply and explanation. That sounds perfect for my use case. May I ask how I should balance --input_sentence_size and --mining_sentence_size? Is there a range of ratios that works best?
I'd actually like to add a question: I noticed that there's a hard limit of 5M on --mining_sentence_size. Is there a reason this needs to be a hard constraint, or would it be safe for me to modify it?
Ideally, we want to set --mining_sentence_size == --input_sentence_size.
The default --mining_sentence_size is smaller because the mining step is memory-consuming: in this step, frequent substrings are extracted from the corpus to make the seed pieces. The space complexity of the mining step is about 4 * 4 * N bytes, where N is the number of Unicode characters in the corpus.
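That complexity bound gives a quick back-of-the-envelope memory estimate. The sentence length below is an assumed example value, not from the thread:

```python
def mining_memory_bytes(num_chars):
    # ~4 * 4 bytes of working state per Unicode character,
    # per the complexity estimate above.
    return 4 * 4 * num_chars

# Assume 2M mined sentences averaging 30 characters each.
n = 2_000_000 * 30
print(f"{mining_memory_bytes(n) / 1e9:.1f} GB")  # prints "1.0 GB"
```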
As this flag only affects seed vocabulary selection, the default setting should work as long as the input is randomly shuffled; the main spm training uses the entire corpus.
By the way, I will make the hard limit of --mining_sentence_size the same as the other variables, which should be more consistent.