
Natural Language Object Retrieval

This repository contains the code for the following paper:

  • R. Hu, H. Xu, M. Rohrbach, J. Feng, K. Saenko, T. Darrell, Natural Language Object Retrieval, in Computer Vision and Pattern Recognition (CVPR), 2016 (PDF)
@article{hu2016natural,
  title={Natural Language Object Retrieval},
  author={Hu, Ronghang and Xu, Huazhe and Rohrbach, Marcus and Feng, Jiashi and Saenko, Kate and Darrell, Trevor},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2016}
}

Project Page: http://ronghanghu.com/text_obj_retrieval

Installation

  1. Download this repository or clone with Git, and then cd into the root directory of the repository.
  2. Run ./external/download_caffe.sh to download the SCRC Caffe version used in this experiment; it will be downloaded and unzipped into external/caffe-natural-language-object-retrieval. This version is modified from the Caffe LRCN implementation.
  3. Build the SCRC Caffe version in external/caffe-natural-language-object-retrieval, following the Caffe installation instructions. Remember to also build pycaffe.
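Putting the installation steps together, a minimal command sequence might look like the following. The build commands assume the standard Caffe Makefile workflow; the exact Makefile.config settings depend on your local CUDA/cuDNN/BLAS setup, so treat this as a sketch rather than an exact recipe:

# from the root directory of this repository
./external/download_caffe.sh

# build Caffe and pycaffe (edit Makefile.config for your setup first)
cd external/caffe-natural-language-object-retrieval
make all -j8
make pycaffe
cd ../..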

SCRC demo

  1. Download the pretrained models with ./models/download_trained_models.sh.
  2. Run the SCRC demo in ./demo/retrieval_demo.ipynb with Jupyter Notebook (IPython Notebook).
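For example, from the repository root (assuming Jupyter is installed):

./models/download_trained_models.sh
jupyter notebook ./demo/retrieval_demo.ipynb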


Train and evaluate SCRC model on ReferIt Dataset

  1. Download the ReferIt dataset: ./datasets/download_referit_dataset.sh.
  2. Download pre-extracted EdgeBox proposals: ./data/download_edgebox_proposals.sh.
  3. You may need to add the SCRC root directory to Python's module path: export PYTHONPATH=.:$PYTHONPATH.
  4. Preprocess the ReferIt dataset to generate metadata needed for training and evaluation: python ./exp-referit/preprocess_dataset.py.
  5. Cache the scene-level contextual features to disk: python ./exp-referit/cache_referit_context_features.py.
  6. Build training image lists and HDF5 batches: python ./exp-referit/cache_referit_training_batches.py.
  7. Initialize the model parameters and train with SGD: python ./exp-referit/initialize_weights_scrc_full.py && ./exp-referit/train_scrc_full_on_referit.sh.
  8. Evaluate the trained model: python ./exp-referit/test_scrc_on_referit.py.

Optionally, you may also train an SCRC version without contextual features, using python ./exp-referit/initialize_weights_scrc_no_context.py && ./exp-referit/train_scrc_no_context_on_referit.sh.
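Assuming Caffe and pycaffe are already built and the repository root is the working directory, the ReferIt steps above can be chained roughly as follows:

export PYTHONPATH=.:$PYTHONPATH

# download the dataset and pre-extracted EdgeBox proposals
./datasets/download_referit_dataset.sh
./data/download_edgebox_proposals.sh

# preprocessing, contextual feature caching, and training batch generation
python ./exp-referit/preprocess_dataset.py
python ./exp-referit/cache_referit_context_features.py
python ./exp-referit/cache_referit_training_batches.py

# training (full model with scene-level context) and evaluation
python ./exp-referit/initialize_weights_scrc_full.py
./exp-referit/train_scrc_full_on_referit.sh
python ./exp-referit/test_scrc_on_referit.py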

Train and evaluate SCRC model on Kitchen Dataset

  1. Download the Kitchen dataset: ./datasets/download_kitchen_dataset.sh.
  2. You may need to add the SCRC root directory to Python's module path: export PYTHONPATH=.:$PYTHONPATH.
  3. Build training image lists and HDF5 batches: python exp-kitchen/cache_kitchen_training_batches.py.
  4. Train with SGD: ./exp-kitchen/train_scrc_kitchen.sh.
  5. Evaluate the trained model: python exp-kitchen/test_scrc_on_kitchen.py.
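As with ReferIt, the Kitchen pipeline can be run end to end from the repository root, assuming Caffe and pycaffe are built:

export PYTHONPATH=.:$PYTHONPATH

# download the dataset, cache training batches, then train and evaluate
./datasets/download_kitchen_dataset.sh
python exp-kitchen/cache_kitchen_training_batches.py
./exp-kitchen/train_scrc_kitchen.sh
python exp-kitchen/test_scrc_on_kitchen.py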

Contributors

  • ronghanghu


Issues

About gradient clipping

Hi Hu,
I have a problem training the SCRC network.

  • I followed the README and successfully trained the network, but the model seems very unstable: the L2 norm stays quite large, even after training is done. Did you encounter this problem?

Cannot start training due to core dump

Hi Ronghang,
I downloaded the source code you provided and followed your steps to process the data; everything was fine, but training always fails with a core dump. I am using Ubuntu 16.04 and a single GTX 1080 for training. Could this be caused by the batch size or something else?
I would really appreciate any advice.
Shuai
