Evaluating Model Serving Strategies over Streaming Data

This repository contains the implementation and experiment configurations used to compare different model serving alternatives over streaming data. The repository is organized as follows: the implementation/ directory contains the code for all the benchmarked pipelines, alongside the scripts to run the experiments and the corresponding evaluation configurations. The models/ directory contains (1) the scripts used to train the TensorFlow and PyTorch models (under models/training/), and (2) the pre-trained feed-forward models (under models/feedforward/).

Prerequisites

  1. Unix-like environment
  2. Maven
  3. Java 8
  4. Docker installation
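
Before running anything, a quick sanity check of the toolchain can save time; a minimal sketch:

    mvn -version       # Maven; also reports the JVM it runs on
    java -version      # should report a Java 8 runtime
    docker --version   # confirms the Docker CLI is available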

How to run the experiments

Each benchmark can be executed via its corresponding run_benchmark.sh shell script. The script first runs the experiments (with their corresponding configurations), writes the results to disk, and then computes the quality metrics per experiment type. The metrics are printed to the console.
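
For example, launching the embedded-serving benchmark from the repository root might look like the sketch below (the scripts are assumed to take no arguments):

    cd implementation/flink
    ./run_benchmark.sh   # runs the experiments, writes the results to disk,
                         # and prints the quality metrics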

Model serving natively embedded into the stream processor

The code for the embedded inference approach is located in implementation/flink. Its run_benchmark.sh script covers all the embedded serving libraries (i.e., ND4J, ONNX, and TensorFlow SavedBundle) and prints all the computed quality metrics to the console. Hence, executing implementation/flink/run_benchmark.sh once is enough to obtain the ND4J, ONNX, and TensorFlow SavedBundle results.

Model serving via external services

We test two different model serving systems: TensorFlow Serving and TorchServe. The code and experiment configurations are located in implementation/tensorflowserve and implementation/torchserve, respectively. Note that the run_benchmark.sh script must be executed separately for each approach: run implementation/tensorflowserve/run_benchmark.sh to gather the TensorFlow Serving results and implementation/torchserve/run_benchmark.sh for the TorchServe tests.
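
Once the serving containers are up, the REST endpoints can also be exercised manually. The sketch below assumes the systems' default ports (8501 for TensorFlow Serving's REST API, 8080 for TorchServe inference) and a hypothetical model name, feedforward; the actual names, ports, and payload shapes are determined by the experiment configurations and model handlers:

    # TensorFlow Serving: POST /v1/models/<name>:predict ("instances" is the
    # documented row format; the input width here is illustrative)
    curl -X POST http://localhost:8501/v1/models/feedforward:predict \
         -d '{"instances": [[0.1, 0.2, 0.3, 0.4]]}'

    # TorchServe: POST /predictions/<name> (the exact payload format depends
    # on the model's handler)
    curl -X POST http://localhost:8080/predictions/feedforward \
         -d '{"data": [0.1, 0.2, 0.3, 0.4]}'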

Configurations

The experiment configuration files are located in implementation/{flink | tensorflowserve | torchserve}/expconfigs/ and can be adjusted to run the experiments under different settings.
