ywchao / ho-rcnn

Code for reproducing the results in "Learning to Detect Human-Object Interactions"
@ywchao Hi, could you please help me? Thank you very much.
When I run detection with a trained model according to https://github.com/ywchao/ho-rcnn#running-detection-with-a-trained-model
./experiments/scripts/test_rcnn_caffenet_ho_pconv_ip1_s/01_person.sh
the terminal outputs the following:
I'm trying to reproduce some results on the HICO-DET dataset, but I don't seem to find the correct split between rare and non-rare classes.
According to the paper, rare HOI classes are those with fewer than 5 instances in the training set, and there should be 167 rare classes out of the 600 HOI classes in total.
However, if I filter the training set using this criterion I find these counts:
| count < k | num_hois |
|-----------|----------|
| 5         | 100      |
| 6         | 114      |
| 7         | 123      |
| 8         | 129      |
| 9         | 132      |
| 10        | 138      |
| 11        | 144      |
| 12        | 154      |
| 13        | 159      |
| 14        | 164      |
| 15        | 168      |
| 16        | 174      |
| 17        | 178      |
| 18        | 183      |
| 19        | 187      |
| 20        | 190      |
I checked your evaluation code and spotted a `<10` threshold in eval_one.m, but even with that threshold the number 167 doesn't come up.
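For concreteness, here is roughly how I'm counting (a minimal sketch; in practice the labels come from HICO's anno.mat via `scipy.io.loadmat`, and I'm assuming `anno_train` is a 600 x N label matrix where a value of 1 marks a positive training image for that HOI class — a random stand-in matrix is used below):

```python
import numpy as np

# Stand-in for the real label matrix; normally something like:
#   from scipy.io import loadmat
#   anno_train = loadmat("data/hico/anno.mat")["anno_train"]
# (field name and layout are assumptions to be checked against the repo)
rng = np.random.default_rng(0)
anno_train = (rng.random((600, 1000)) < 0.01).astype(int)

# Positive instances per HOI class (rows = classes, columns = images)
counts = (anno_train == 1).sum(axis=1)

# "Rare" classes under the paper's stated criterion: fewer than 5 instances
rare = np.flatnonzero(counts < 5)
print(len(rare))
```

With the real `anno_train`, I would expect `len(rare)` to equal 167, but I can't reproduce that number with any threshold I've tried.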
Can you provide a list of exactly which hoi categories you consider rare and which not?
A format similar to the one you have on your website would be ideal.
Thanks!
I'm trying to evaluate my results on the HICO-DET dataset by running the evaluation code provided in the repo. However, I'm not quite sure how to do this. Could you give an explanation of the steps?
Hello,
do you have the proposals file available separately? Also, in which file are the proposals generated, and where are the positive and negative training samples generated?
Thanks
@ywchao This interesting project provides a demo that runs detection on the HICO-DET test set using a trained HO-RCNN model.
However, the demo's input is complex and not easy to understand.
It seems the input consists of several .mat files in the data and cache folders. What are these files used for, and what do the data in them mean? And how can we turn our own data (.jpg images and .txt files containing box information) into valid input for ho-rcnn?
Could anyone explain this? Thanks!
Hello,
Thank you for releasing the code. I do not have access to MATLAB; is there a Python version of the code? I'd like to be able to input an mp4 video and get the detected human-object interactions rendered on the video.
Thank you,
I've read your paper "Learning to Detect Human-Object Interactions", which is very good, but I was wondering: have you tried implementing HO-RCNN with Faster R-CNN or YOLO?
How do you convert the output of the annotation UI (https://github.com/ywchao/hoi-det-ui) into the input format of this repo (https://github.com/ywchao/ho-rcnn)?
Specifically, I wonder how to convert the JSON output into an anno.mat file matching the dataset format.