Comments (3)
Hi @StephennFernandes, your `target_to_key` preprocessor doesn't look correct:

```python
functools.partial(
    target_to_key, key_map={
        "inputs": None,
        "targets": None,
    }, target_key="targets"),
```

It maps both the "inputs" and "targets" fields to nothing, so you end up with no data after this preprocessor runs. See an example of correct usage in the T5 codebase: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/tasks.py#L50
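For illustration, the linked `tasks.py` pattern uses `preprocessors.rekey` to fill "targets" from the raw "text" field while leaving "inputs" empty. A minimal pure-Python sketch of that key-map semantics (the `rekey_example` helper is hypothetical, for illustration only; the real preprocessor operates on `tf.data` examples):

```python
def rekey_example(example, key_map):
    """Mimic the key_map semantics used in the linked tasks.py (an
    assumption for illustration): each output key is filled from the
    input feature it names; a None value yields an empty string."""
    return {new_key: example[old_key] if old_key is not None else ""
            for new_key, old_key in key_map.items()}

raw = {"text": "some raw pre-training text"}

# Correct pattern per the linked tasks.py: source "targets" from the raw
# "text" field; "inputs" stays empty because span corruption builds the
# inputs later in the pipeline.
result = rekey_example(raw, {"inputs": None, "targets": "text"})
# result["targets"] now carries the raw text; result["inputs"] == ""
```

The point is that at least one entry of the key map must name a real source field; a map of all-`None` values sources nothing.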
from seqio.
@StephennFernandes Hi mate, are you done with the data sampling for pre-training mT5? I am still stuck on pre-processing the data for it (using Flax).
@tarudesu The following is how I create a seqio `TaskRegistry` entry per language and then build `MixtureRegistry` entries with different mixture rates, to test which rate works best on my version of the mT5 model. You can later use a name from the mixture registry to fetch the dataset mixture and pre-train the model.

Note that I load the data into seqio from a Hugging Face dataset rather than TFDS; the `gen_dataset` function handles the loading.

The code is 1.5 years old, so a word of caution before running it against your versions of the dependencies, but it should mostly work fine.
```python
import functools

import seqio
import t5.data
import tensorflow as tf
from datasets import load_from_disk
from seqio import utils
from t5.data import preprocessors

TaskRegistry = seqio.TaskRegistry

# SentencePiece vocabulary shared by every task below.
vocabulary = seqio.SentencePieceVocabulary(
    "/home/stephen/Desktop/t5_test_run/t5x/HI_ALL_VOCAB_32000_UNIGRAM.model",
    extra_ids=100)

# Unused below (each task sets its output_features inline), kept for reference.
DEFAULT_OUTPUT_FEATURES = {
    "inputs": seqio.Feature(
        vocabulary=t5.data.get_default_vocabulary(), add_eos=True,
        required=False),
    "targets": seqio.Feature(
        vocabulary=t5.data.get_default_vocabulary(), add_eos=True)
}


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_path=None):
    """Yield raw text examples from a Hugging Face dataset saved on disk."""
    dataset = load_from_disk(dataset_path)
    if shuffle:
        if seed:
            dataset = dataset.shuffle(seed=seed)
        else:
            dataset = dataset.shuffle()
    while True:
        for item in dataset[str(split)]:
            yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_path=None):
    """Wrap gen_dataset in a tf.data.Dataset of scalar strings."""
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed,
                          dataset_path=dataset_path),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)
    )


@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
    """Assign the value from the dataset to target_key in key_map."""
    return {**key_map, target_key: x}


# One span-corruption task per language; each language's data lives in its
# own directory under the corpus root.
CORPUS_ROOT = "/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS"
LANGUAGES = (
    "assamese", "bengali", "bhisnupuriya", "bodo", "divehi", "dogri",
    "english", "gujarati", "hindi", "kannada", "kashmiri", "konkani",
    "maithili", "malayalam", "manipuri", "marathi", "nepali", "odia",
    "panjabi", "sanskrit", "tamil", "telugu", "urdu",
)

for lang in LANGUAGES:
    TaskRegistry.add(
        f"{lang}_span_corruption",
        source=seqio.FunctionDataSource(
            dataset_fn=functools.partial(
                dataset_fn, dataset_path=f"{CORPUS_ROOT}/{lang.upper()}"),
            splits=("train", "validation"),
            caching_permitted=False,
            num_input_examples=None,
        ),
        preprocessors=[
            functools.partial(
                target_to_key, key_map={
                    "inputs": None,
                    "targets": None,
                }, target_key="targets"),
            seqio.preprocessors.tokenize,
            # seqio.CacheDatasetPlaceholder(),
            preprocessors.span_corruption,
            seqio.preprocessors.append_eos_after_trim,
        ],
        output_features={
            "targets": seqio.Feature(vocabulary=vocabulary, add_eos=True)},
        metric_fns=[],
    )

# Mixtures over all languages, one per candidate mixing rate; pick the
# mixture name (e.g. "mix_2") when pre-training.
ALL_TASKS = [f"{lang}_span_corruption" for lang in LANGUAGES]
for rate in (2, 3, 3.5, 4):
    seqio.MixtureRegistry.add(f"mix_{rate}", ALL_TASKS, default_rate=rate)
```
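On choosing between the mixture rates: instead of sweeping flat rates, a common alternative is size-proportional temperature sampling as in the mT5 paper, where language i is weighted proportionally to n_i ** alpha (mT5 used alpha ≈ 0.3) so low-resource languages get upsampled. A minimal sketch, with hypothetical per-language example counts:

```python
def temperature_rates(num_examples, temperature=0.3):
    """mT5-style sampling weights: p_i proportional to n_i ** temperature.
    Lower temperatures upsample low-resource languages."""
    scaled = [n ** temperature for n in num_examples]
    total = sum(scaled)
    return [s / total for s in scaled]

# Hypothetical counts for a high-resource vs. a low-resource language.
counts = [1_000_000, 10_000]
print(temperature_rates(counts, temperature=1.0))  # proportional: [~0.99, ~0.01]
print(temperature_rates(counts, temperature=0.3))  # low-resource upsampled
```

If you go this route, seqio lets you register a mixture from `(task_name, rate)` pairs instead of a single `default_rate`, and the registered mixture is then loaded at training time via `seqio.get_mixture_or_task(...)`.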