Comments (5)
I have pushed some changes to add an mtf_model.export call. However, I believe I will need to update both the mesh and t5 packages to get everything working for you. I'll try to test this today and update the packages.
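For anyone following along, a minimal sketch of how the new export entry point might be invoked, assuming the t5.models.MtfModel interface; the argument names here are illustrative and may differ in the released packages:

```python
def export_t5_savedmodel(model_dir, export_dir):
    """Sketch only: export a trained MtfModel as a SavedModel for serving.

    Assumes the t5 package is installed and model_dir holds a trained
    checkpoint; the export() arguments shown are illustrative.
    """
    import t5  # deferred import: requires the t5 package and its deps

    model = t5.models.MtfModel(model_dir=model_dir, tpu=None)
    # checkpoint_step=-1 is assumed here to mean "latest checkpoint"
    return model.export(export_dir, checkpoint_step=-1)
```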
from text-to-text-transfer-transformer.
@adarob
Thanks for the great work!
I was using t5 version 0.2.0, and I tried to deploy an exported sample model to GCP ML Engine (single-core CPU), but with no luck. The error was:
Create Version failed. Model validation failed: SavedModel must contain exactly one metagraph with tag: serve For more information on how to export Tensorflow SavedModel, see https://www.tensorflow.org/api_docs/python/tf/saved_model.
I used saved_model_cli to inspect the exported file, and I suspect the problem is the second 'serve' tag added for TPU.
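To make the validation failure concrete, here is a minimal sketch (not the actual ML Engine code) of the check that trips here: the service wants exactly one metagraph whose tags include 'serve', but a TPU export writes a second metagraph tagged ('serve', 'tpu'):

```python
def count_serve_metagraphs(metagraph_tag_sets):
    """Count metagraphs whose tag set includes 'serve'."""
    return sum(1 for tags in metagraph_tag_sets if "serve" in tags)

# A CPU-only export has a single 'serve' metagraph and validates fine.
assert count_serve_metagraphs([{"serve"}]) == 1

# An export with the extra TPU metagraph has two matches, which triggers
# "SavedModel must contain exactly one metagraph with tag: serve".
assert count_serve_metagraphs([{"serve"}, {"serve", "tpu"}]) == 2
```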
Another potential problem is that ML Engine seems to require the outer dimension to be unknown to enable batching, while the t5 batch size is set to a fixed integer.
Last but not least, the SavedModel works great on my local machine, but it broke after I tried to quantize it by following the instructions at link.
So, my questions are:
- Is there an option to manage the tags and dimensions in the exported SavedModel to support ML Engine?
- Could you provide suggestions on t5 model quantization for serving?
Your help will be highly appreciated.
Regards.
Thanks for sharing these details. Unfortunately, as researchers we don't have time to test all of these applications, so it's very helpful for users like you to both report and help us fix bugs like this.
Adding @toponado, who may be able to help with the TPU tag issue. I wonder if there is a way to remove the tag after export?
I actually don't believe the dimensionality is an issue. The input placeholder is actually shaped [None], and the batch is automatically padded as long as it is no bigger than what you set as batch_size during export. We can come back and address this if it turns out to be a problem.
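A rough illustration of the padding behavior described above (this is a sketch, not the mesh/t5 implementation): the placeholder accepts any number of examples up to the export-time batch_size, and the batch is padded to that size internally:

```python
def pad_batch(inputs, batch_size, pad_value=""):
    """Pad a list of input strings up to a fixed batch_size (sketch only)."""
    if len(inputs) > batch_size:
        raise ValueError("request exceeds the export-time batch_size")
    return inputs + [pad_value] * (batch_size - len(inputs))

# One real example padded up to a batch of 4; the outputs produced for
# the padding entries would simply be discarded by the caller.
padded = pad_batch(["translate English to German: hello"], batch_size=4)
assert len(padded) == 4 and padded[1] == ""
```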
I just took a quick look, and it looks like we just need to disable export_to_tpu in the TPUEstimator constructor: https://github.com/tensorflow/estimator/blob/08dd1e6ca94248691dfe00aadafc65e8b875e44f/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L2665
However, this appears to already be happening here: https://github.com/tensorflow/mesh/blob/4c656e2f249bcacbda684c4ff0a860b86f372a2c/mesh_tensorflow/transformer/utils.py#L1082
I'm not sure why it is still being exported with TPU...
Ah, it's because we call export_estimator_savedmodel (https://github.com/tensorflow/estimator/blob/08dd1e6ca94248691dfe00aadafc65e8b875e44f/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L4219), which creates a new TPUEstimator with export_to_tpu set to True by default. Should be fairly easy to fix. I'll check it out tomorrow.