Comments (12)
Great, thanks for your time. I will let you know once we reproduce your results. Also, good luck with your PhD pursuit. :D
from datacomp.
Hi @zwsjink, the number of epochs controls the number of checkpoints that are saved during training. If training on k samples (e.g., k = 128M for the medium pool) with number of epochs n, we save a checkpoint after every k // n samples are seen. Hence each epoch corresponds to seeing k // n samples from the training pool, drawn with replacement.
See here, where the number of samples per epoch is set.
See here, where the number of epochs is set to the number of checkpoints.
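To make the cadence concrete, here is a minimal sketch in plain Python (not the actual DataComp code) using the medium-pool numbers above:

```python
# Checkpoint cadence as described above: training sees k samples in total
# and saves a checkpoint every k // n samples.
k = 128_000_000  # total samples seen for the medium pool
n = 10           # number of epochs (= number of checkpoints)

samples_per_epoch = k // n
print(samples_per_epoch)  # 12800000, i.e. 12.8M samples per "epoch"

# Sample counts at which checkpoints are saved:
checkpoints = [samples_per_epoch * i for i in range(1, n + 1)]
assert checkpoints[-1] == k
```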
In your example of a 30M dataset at the medium scale with number of epochs = 10, each epoch would correspond to sampling 12.8M samples from the 30M dataset (with replacement).
OK, thanks for your reply. So in my example, I would set --train-num-samples to 12.8M and --epochs to 10. Alternatively, I could use a --train-num-samples of 25.6M with 5 epochs, right? As long as the product stays the same, there should be no difference in the final training performance, I suppose?
For participating in DataComp, you don't have to set --train-num-samples or --epochs directly. Please see this section of the README for a sample command line, where $scale would be medium for the 128M pool.
You can additionally set the --num_checkpoints flag as seen here to specify how many checkpoints you would like to save. Our code will take care of setting --train-num-samples and --epochs accordingly under the hood.
Hope this helps!
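The under-the-hood mapping is simple arithmetic; a hypothetical sketch (the function and variable names are my own, not DataComp's actual code) based on what the thread states, namely that the number of epochs is set to the number of checkpoints:

```python
def resolve_training_flags(pool_samples: int, num_checkpoints: int) -> dict:
    """Map --num_checkpoints to OpenCLIP's --epochs / --train-num-samples.

    Per the comments above: epochs = number of checkpoints, and each
    epoch draws pool_samples // num_checkpoints samples with replacement.
    """
    return {
        "epochs": num_checkpoints,
        "train_num_samples": pool_samples // num_checkpoints,
    }

flags = resolve_training_flags(pool_samples=128_000_000, num_checkpoints=8)
print(flags)  # {'epochs': 8, 'train_num_samples': 16000000}
```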
As for the performance deltas from setting different --num_checkpoints values: there should not be dramatic changes in downstream performance. At the start of every "epoch" the dataloader is re-initialized, so different values of --num_checkpoints lead to different data orders, similar to changing the random seed. Please see here for a note on seed variance.
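The seed-like effect can be illustrated with a toy with-replacement sampler (a sketch under my own assumptions, not the real dataloader): re-initializing the sampler at each "epoch" boundary changes the order in which indices are drawn, even though the total number of samples seen is identical.

```python
import random

def draw_order(pool_size: int, total_samples: int, num_epochs: int, seed: int = 0):
    """Toy model: each 'epoch' re-initializes the sampler, then draws
    total_samples // num_epochs indices with replacement."""
    per_epoch = total_samples // num_epochs
    order = []
    for epoch in range(num_epochs):
        rng = random.Random(seed * 100_003 + epoch)  # fresh state per epoch
        order.extend(rng.choices(range(pool_size), k=per_epoch))
    return order

# Same total samples, different epoch counts -> different data orders,
# much like changing the random seed.
a = draw_order(pool_size=1000, total_samples=100, num_epochs=1)
b = draw_order(pool_size=1000, total_samples=100, num_epochs=5)
assert len(a) == len(b) == 100
assert a != b
```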
Well, currently I'm not planning to participate in the track; I'm just trying to follow the paper and do something very similar with your OPEN_CLIP & CLIP_BENCHMARK toolboxes on a different dataset.
Cool! Then your understanding here seems correct!
@sagadre I had this same question; thanks for explaining it here. Then, is it allowed to set num_checkpoints to 1 (instead of 8) to accelerate training by avoiding dataloader re-initialization?
@mingtan2 yes! that should be fine!
@sagadre In addition, is it allowed to disable dataset_resampled here, if that is compatible with the DataComp challenge settings?
Hi @mingtan2, yes, you should keep --dataset_resampled for the challenge.