# Taxi Fare Prediction Model Deployment

- Video
- Blog with instructions on the run (Coming Soon)
The aim is to predict captions for images using deep learning. We used a pretrained GPT-2 model and deployed it as a web app and a FastAPI endpoint using ServiceFoundry.
## Setting up servicefoundry
Install and set up servicefoundry on your machine:
- `pip install servicefoundry`
- `servicefoundry use server https://app.truefoundry.com`
- `servicefoundry login`
## Deploying realtime inference
- Change the working directory to the `predict` folder: `cd predict`
- Replace the `MLF_API_KEY` value in the `predict.yaml` file with the API Key found in the Secrets tab of your TrueFoundry account (Instructions here).
- Copy the `workspace_fqn` of the workspace that you want to use from the Workspace tab of TrueFoundry (Instructions here) and add it in the `predict.yaml` file (a scripted version of these two edits is sketched after this list).
- To deploy using the Python script: `python predict_deploy.py`
- To deploy using the CLI: `servicefoundry deploy --file predict/predict_deploy.yaml`
- Click on the dashboard link in the terminal to open the service deployment page, which exposes the FastAPI endpoint.
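
For convenience, the YAML edits and the deploy step above can also be scripted. The snippet below is only a hedged sketch: the exact locations of `MLF_API_KEY` and `workspace_fqn` inside `predict.yaml` are assumptions here, so check the file in this repo and adjust the key paths before using it.

```python
import subprocess
import yaml  # pip install pyyaml

API_KEY = "<your-truefoundry-api-key>"   # from the Secrets tab
WORKSPACE_FQN = "<your-workspace-fqn>"   # from the Workspace tab

# Load the deployment config; the key paths patched below are assumptions
# and may not match the actual layout of predict.yaml in this repo.
with open("predict/predict.yaml") as f:
    spec = yaml.safe_load(f)

spec["env"] = {**spec.get("env", {}), "MLF_API_KEY": API_KEY}
spec["workspace_fqn"] = WORKSPACE_FQN

with open("predict/predict.yaml", "w") as f:
    yaml.safe_dump(spec, f)

# Same CLI command as in the step above.
subprocess.run(
    ["servicefoundry", "deploy", "--file", "predict/predict_deploy.yaml"],
    check=True,
)
```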
## Querying the deployed model
This can be done directly via the FastAPI endpoint in the browser, or programmatically as sketched below.
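
For a programmatic call, Python `requests` works against the same endpoint. This is a minimal sketch: the endpoint URL, the `/predict` route, and the payload shape are placeholders, so check the auto-generated FastAPI docs at `<endpoint>/docs` for the real route and request schema.

```python
import requests

# Placeholder: copy the real endpoint URL from the service deployment page.
ENDPOINT = "https://<your-service-endpoint>"

# The route and payload below are illustrative assumptions; see <ENDPOINT>/docs
# for the actual request schema exposed by the deployed service.
response = requests.post(f"{ENDPOINT}/predict", json={"data": "<your input>"})
response.raise_for_status()
print(response.json())
```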
## Deploying the demo
Note: the realtime inference service must be deployed before the demo can be deployed.
- Replace the `MLF_API_KEY` value in the `demo.yaml` file with the API Key found in the Secrets tab of your TrueFoundry account (Instructions here).
- Copy the `workspace_fqn` of the workspace that you want to use from the Workspace tab of TrueFoundry and add it in the `demo.yaml` file (Instructions here).
- To deploy using the Python script: `python demo/demo_deploy.py`
- To deploy using the CLI: `servicefoundry deploy --file demo/demo_deploy.yaml`
- Click on the dashboard link in the terminal
- Click on the "Endpoint" link on the dashboard to open the streamlit demo