Comments (13)
@amj @Kashomon So to get started on this, I'm wondering what our TPU instance management looks like. Would we be spinning up an instance manually, git-pulling, and pip install'ing to be able to train on TPU? Or is there a kubeified solution to this?
from minigo.
Notes to self:
This issue is for training on TPUs. Selfplay on TPUs is still a difficult problem.
Anticipated issues:
- rewriting DualNetworkTrainer to use the Estimator API (should be easy; I architected the input_fn and network-definition functions around the Estimator API.)
- during estimator rewrite, figure out how to continue logging all the custom metrics we've implemented. Hopefully these metrics can continue to be exported in an efficient fashion during TPU operation.
- figuring out how many parallel readers we can use; how large the files should be for the parallel readers; whether a sloppy parallel reader can be used.
- profile the cloud TPU execution trace to verify we're getting good throughput.
- get an idea of how expensive TPU training is, overhead-wise, and related, figure out what's the optimal saving schedule.
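The parallel-reader bullet above can be sketched without TensorFlow. Below is a plain-Python analogue of a deterministic shard interleave (the kind of thing `tf.data`'s parallel interleave does); the function name and shard data are made up for illustration. A "sloppy" variant would relax the round-robin order and yield from whichever reader has data ready, trading determinism for throughput.

```python
def round_robin_interleave(shards):
    """Deterministically interleave records from several shards.

    Cycles the shards in order, dropping each one as it runs dry.
    A real input pipeline would do this with tf.data over sharded
    files; this is just the control flow.
    """
    iters = [iter(s) for s in shards]
    while iters:
        for it in list(iters):
            try:
                yield next(it)
            except StopIteration:
                iters.remove(it)

shards = [["a1", "a2"], ["b1"], ["c1", "c2", "c3"]]
print(list(round_robin_interleave(shards)))
# ['a1', 'b1', 'c1', 'a2', 'c2', 'c3']
```

The file-size question interacts with this: shards of similar size keep all readers busy for the whole epoch, while one oversized shard degenerates the tail of the interleave into a single-reader stream.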
TPUs for GKE are quickly approaching Alpha. I'll put more details here soon
@brilee for now though, spinning up an instance, attaching a TPU to it, pulling the code, etc, is the way to get moving. It should be pretty painless to set up and hopefully you shouldn't have to do it very often ;)
more elaboration/notes to self:
- rewriting DualNetworkTrainer to use the Estimator API (hah, I thought this would be easy)
- figure out how to initialize variable to previous model's weights, but save current run under a new model name.
- Estimator assumes that model_dir will contain all things related to the training run - checkpoints, saved models, and logs. This means that tensorboard logs may be broken up if we use a separate directory for each model generation... perhaps use a single model_dir but periodically create a model export to GCS? This would also require building in a way to reinitialize training from an exported GCS model.
- figure out how to create a set of bootstrap weights
- replace StatisticsCollector with eval_metric_ops ({... tf.metrics.mean() ...})
- create a training hook that evaluates the update ratios every so often
- figure out save/checkpoint schedule
- verify that tensorboard shows all the right things
- figure out how to export model as SavedModel. (I don't think the default exporter works, because it exports a graph that takes as input a serialized tf.Example.)
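The StatisticsCollector replacement in the list above is essentially a streaming mean. Here is a plain-Python sketch of what `tf.metrics.mean` computes - a running (total, count) pair updated per batch, with the metric read out as their ratio; the class name is made up:

```python
class StreamingMean:
    """Streaming mean in the style of tf.metrics.mean: accumulates
    (total, count) across batches and exposes the ratio on demand."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, values):
        # Called once per batch, analogous to a metric's update_op.
        self.total += sum(values)
        self.count += len(values)

    def result(self):
        # Read side of the metric; defined as 0.0 before any updates.
        return self.total / self.count if self.count else 0.0

m = StreamingMean()
m.update([1.0, 2.0, 3.0])
m.update([5.0])
print(m.result())  # 2.75
```

The point of moving to eval_metric_ops is that this accumulate/read split runs inside the graph, so the custom metrics keep streaming efficiently on TPU instead of requiring a Python-side collector.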
This is definitely going to require some sort of baking-in period to ensure I haven't messed up my estimator rewrite in some way...
That's a pretty good list. Is it worth breaking the work up between two people?
Don't think this is parallelizable. I'm pretty much ripping out 75% of dual_net.
> I'm pretty much ripping out 75% of dual_net.
This comment worries me quite a bit. It sounds like we're going to sacrifice readability here to make this work on TPUs. Is that the tradeoff? Should we keep around the old implementation for clarity and because it works?
Estimator does a host of things for you, like managing checkpointing frequency, initializing model weights, setting up logs, and streaming eval metrics (which obsoletes the homegrown StatisticsCollector stuff). So the code will actually get a lot shorter/cleaner as a result. The main pain point is twisting Estimator to do some of the things we'd done manually, but don't really fit into how Estimator wants to do things.
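To make the "host of things" concrete, here is a toy, pure-Python analogue of the Estimator train loop: the driver - not the model code - owns checkpoint frequency and resuming, which is exactly the bookkeeping dual_net had been doing by hand. Every name here is made up for illustration; real Estimator manages this via model_dir and RunConfig.

```python
import json
import os
import tempfile

def train(model_fn, steps, model_dir, save_every=2):
    """Toy Estimator-style driver: runs model_fn for `steps` steps,
    checkpointing state to model_dir and resuming if a checkpoint
    already exists there."""
    ckpt = os.path.join(model_dir, "checkpoint.json")
    state = {"step": 0, "weights": 0.0}
    if os.path.exists(ckpt):
        # Resume from the latest checkpoint, as Estimator does when
        # model_dir is non-empty.
        with open(ckpt) as f:
            state = json.load(f)
    while state["step"] < steps:
        state["weights"] = model_fn(state["weights"])  # one train step
        state["step"] += 1
        if state["step"] % save_every == 0:
            with open(ckpt, "w") as f:
                json.dump(state, f)
    return state

model_dir = tempfile.mkdtemp()
final = train(lambda w: w + 0.5, steps=5, model_dir=model_dir)
print(final["step"], final["weights"])  # 5 2.5
```

Calling train again with the same model_dir would pick up from the saved step rather than step 0, which is the behavior that makes the per-generation model_dir question above matter.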
The TPUEstimator's marketing pitch is that you just use Estimator, swap the Estimator impl out for TPUEstimator, and it should Just Work. But we'll see :)
I stumbled across https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_main.py#L248 while looking at examples. Seems like getting summaries logged during training requires a workaround at the moment.
We have Cloud TPU training up now.