Comments (12)
If you already have a trained model (and it fits in memory), then the simplest way to run inference in a Spark job is to use something like this example. Basically, you load the model in your map_fn and run inference on each partition. Note that the TFParallel class is just a convenience wrapper around an RDD.mapPartitions() call, so if it doesn't fit your needs, you can always write a similar class. Also, you may want to cache your model (if it's large) instead of reloading it in every map function call.
Note: there is also this example, which tries to emulate the Spark DataFrame API, but it may be a little harder to follow how it works.
Finally, some folks wrap the TF model inside a Spark UDF, but I don't have an example of that here.
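A minimal sketch of that load-once-per-partition pattern (hedged: load_model and the trivial doubling "model" below are placeholders for your real loading and prediction code, not TFoS APIs):

```python
def load_model():
    # Placeholder for your real model loading, e.g. restoring a TF SavedModel.
    # Here the "model" is just a function that doubles its input.
    return lambda x: x * 2

def inference_partition(iterator):
    model = load_model()      # loaded once per partition, not once per record
    for record in iterator:
        yield model(record)   # run inference record by record

# With a SparkContext `sc`, this would be wired up as:
#   predictions = sc.parallelize(data).mapPartitions(inference_partition).collect()
```

Because mapPartitions hands your function a plain Python iterator, you can test the partition function locally without a Spark cluster.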
from tensorflowonspark.
@leewyang How can I cache my model in PySpark? I found that the model gets reloaded for every task. Here's a demonstration of how I predict over the whole dataset:
def _predict_dataset(it):
    def _input_fn():
        ...
    estimator = build_estimator(...)
    return estimator.predict(_input_fn)

data.mapPartitions(_predict_dataset)
Can you please give me some code snippets or resources to read so I can understand how to load my trained models in Spark? I'm not sure how to do this part.
@jiqiujia could you please help me understand how to load my trained model that is already saved in my project directory? Can you share the steps where you integrated your trained model into Spark? Thank you.
@jahidhasanlinix you could follow the examples in this repo: https://github.com/yahoo/TensorFlowOnSpark/tree/master/examples
@jiqiujia thank you. I'll check it.
@jiqiujia assuming that your model won't change over the course of the job, you can just cache the model in the Python worker processes via a global variable: check if it's None, and if so, load the model from disk; otherwise use the cached model.
How can I load the model? I have a code base and a trained model saved as a .pt file. How can I load it into the cluster? Any help?
@jahidhasanlinix Not quite sure what you're doing here... *.pt are PyTorch models. Have you converted a TensorFlow model to PyTorch (or vice versa)?
@leewyang https://github.com/hongzimao/decima-sim
Here is the source code I was trying to integrate into Spark; could you give me some instructions on how to do that? I think this repo can give you an idea of what I'm trying to say. Would you mind sharing your code so I can see how you integrated your TF code into Spark? (This code is in TensorFlow; there is also a PyTorch version, but a TF understanding can help.) The problem is I don't know how to integrate this repo's code base into Spark.
@jahidhasanlinix Unfortunately, I think that code is beyond the scope of what TFoS is trying to do. Decima presumably integrates with (or replaces) the Spark scheduler itself, while TFoS is more about using Spark (and its default scheduler) to launch training/inference jobs on executors.
@leewyang thank you so much for your response. Is there any other way to integrate this? Can you help me with it?