Test out CLIP from OpenAI on various datasets like ImageNetV2, COCO, and DeepDrive.
Install the dependencies from `requirements.txt`:

```shell
pip install -r requirements.txt
```
Follow the instructions below depending on which datasets you want to query:
- For ImageNetV2, download the matched-frequency dataset to the `imagenetv2` directory.
- For COCO, download the features to the `coco` directory.
- For DeepDrive, download the Images dataset and the features to the `bdd` directory.
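At query time, an app like this typically ranks images by the cosine similarity between a text embedding and the precomputed image embeddings mentioned above. A minimal sketch of that ranking step, using NumPy with randomly generated stand-in features (the 512-dimension shape and the data below are illustrative assumptions, not the repository's actual file format):

```python
import numpy as np

def rank_images(text_feature: np.ndarray, image_features: np.ndarray) -> np.ndarray:
    """Return image indices sorted from most to least similar to the query."""
    # Normalize both sides so the dot product equals cosine similarity.
    t = text_feature / np.linalg.norm(text_feature)
    im = image_features / np.linalg.norm(image_features, axis=1, keepdims=True)
    sims = im @ t
    return np.argsort(-sims)

# Stand-in for a saved feature matrix of 100 images with 512-dim embeddings.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 512))
# A query embedding that is a near-duplicate of image 42's embedding.
query = feats[42] + 0.01 * rng.standard_normal(512)
order = rank_images(query, feats)
print(order[0])  # image 42 ranks first
```

With real data, `query` would come from CLIP's text encoder and `feats` from the downloaded feature files.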
Run the Streamlit app:

```shell
streamlit run clip_app.py
```
The Jupyter notebooks in the `notebooks` directory show how new datasets can be processed.
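The processing step usually boils down to walking an image directory, encoding each image, and saving a feature matrix plus a parallel list of file paths. A hedged sketch of that pipeline, where `encode` is a placeholder for a real CLIP image encoder and the `features.npy`/`paths.txt` output names are illustrative assumptions:

```python
from pathlib import Path
import numpy as np

def encode(path: Path) -> np.ndarray:
    # Placeholder: a real pipeline would load the image and run it through
    # CLIP's image encoder (e.g. a 512-dim ViT-B/32 embedding). Here we
    # derive a deterministic random vector from the filename instead.
    rng = np.random.default_rng(hash(path.name) % (2**32))
    return rng.standard_normal(512).astype(np.float32)

def build_features(image_dir: Path, out_dir: Path) -> int:
    """Encode every .jpg in image_dir; save the matrix and path list."""
    paths = sorted(image_dir.glob("*.jpg"))
    feats = (np.stack([encode(p) for p in paths])
             if paths else np.empty((0, 512), dtype=np.float32))
    out_dir.mkdir(parents=True, exist_ok=True)
    np.save(out_dir / "features.npy", feats)
    (out_dir / "paths.txt").write_text("\n".join(str(p) for p in paths))
    return len(paths)

# Demo on a throwaway directory with two empty stand-in images.
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    img_dir = Path(tmp) / "imgs"
    img_dir.mkdir()
    for name in ("a.jpg", "b.jpg"):
        (img_dir / name).touch()
    n = build_features(img_dir, Path(tmp) / "out")
    print(n)  # 2 images processed
```

Storing features alongside a path list keeps the app decoupled from the encoder: the Streamlit front end only ever loads the saved arrays.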