
RLIP's Introduction

👋 Hi, I am Hangjie Yuan (袁杭杰 in Chinese). I am currently a Ph.D. candidate at Zhejiang University, supervised by Prof. Dong Ni, and a long-term research intern at Alibaba DAMO Academy. I am undertaking a visiting Ph.D. program at MMLab@NTU, supervised by Prof. Ziwei Liu. Additionally, I am supervised by Prof. Samuel Albanie (University of Cambridge) and Dr. Shiwei Zhang (Alibaba DAMO Academy).

While conducting research, I prioritize humanity above all else; the ultimate goal of my research is human well-being.

Check out some of my cool projects: InstructVideo, VideoComposer, and the RLIP series (RLIP & RLIPv2).

Wanna keep up with my adventures? Click on over to my personal page for all the latest and greatest.

RLIP's People

Contributors

jacobyuan7


RLIP's Issues

Error when installing the environment

Hi, thank you for your nice work.

When I run the inference code

"bash scripts/Fine-tune_RLIP-ParSe_HICO.sh"

it shows the following error:

File "/home/qinqian/RLIP/models/hoi.py", line 24, in
from .transformer import TransformerEncoder, TransformerEncoderLayer
File "/home/qinqian/RLIP/models/transformer.py", line 29, in
from .ParSetransformer import ParSeTransformer, ParSeDeformableTransformer
File "/home/qinqian/RLIP/models/ParSetransformer.py", line 23, in
from transformers import RobertaModel, RobertaTokenizerFast, BertTokenizerFast, BertModel
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/transformers/init.py", line 43, in
from . import dependency_versions_check
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/transformers/dependency_versions_check.py",
line 41, in
require_version_core(deps[pkg])
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/transformers/utils/versions.py", line 101,
in require_version_core
return require_version(requirement, hint)
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/transformers/utils/versions.py", line 92, i
n require_version
if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)):
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/packaging/version.py", line 52, in parse
return Version(version)
File "/home/qinqian/anaconda3/envs/augm/lib/python3.8/site-packages/packaging/version.py", line 198, in init
raise InvalidVersion(f"Invalid version: '{version}'")
packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'

It seems to be a transformers version problem, but I don't know how to fix it. Any reply would be appreciated. Thank you very much.

Best,
hangzhiyiwei
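
For anyone hitting the same traceback: the failure happens while transformers checks its dependency table, where a newer packaging release rejects multi-specifier strings such as '0.10.1,<0.11' that older transformers versions pass to version.parse(). Below is a minimal diagnostic sketch, not from the repo; the suggested pins are assumptions and should be checked against the repo's requirements file.

import sys
from importlib.metadata import version, PackageNotFoundError

# Print the installed versions of the packages involved in the traceback.
# A common workaround (an assumption, verify against the requirements file) is
# to install an older packaging release, e.g. `pip install "packaging<22"`, or
# to upgrade transformers so its dependency table parses under newer packaging.
for pkg in ("transformers", "tokenizers", "packaging"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed", file=sys.stderr)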

Zero-shot settings questions

Hi, @JacobYuan7, thank you for your excellent work.

I have some questions about the zero-shot settings. I found that in the RFUC/NFUC training annotation files, images with unseen HOI classes are also included in training. I wonder what the definition of the zero-shot setting is: are only the HOI annotations masked, while all the images remain available?

In this case, how can you use those images without annotations for training? I saw that those images' annotations are all set to 0. If so, will those images hurt training?

Thank you very much.
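
To make the setting concrete, here is a minimal sketch of masking unseen HOI classes while keeping the images themselves in the training set; the field names and the unseen-class ids are hypothetical placeholders, not taken from the repo's actual RFUC/NFUC annotation files.

# Hypothetical sketch: keep every image, but drop the HOI triplets whose
# interaction class falls in the unseen split.
UNSEEN_HOI_IDS = {4, 17, 25}  # placeholder ids, not the real unseen split

def mask_unseen(annotations):
    masked = []
    for img_anno in annotations:  # one entry per training image
        kept = [hoi for hoi in img_anno.get("hoi_annotation", [])
                if hoi.get("category_id") not in UNSEEN_HOI_IDS]
        # the image stays in the dataset even if no labelled interactions remain
        masked.append({**img_anno, "hoi_annotation": kept})
    return masked

Whether images left with no interaction labels help or hurt training is exactly the question raised above.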

How to interpret the results generated by the "inference on custom images" code?

Hey everyone,
I followed the steps from the README file to run inference on custom images.

When inference is over, I read the pickle file by running the following:

import pickle

# load the predictions saved by the inference script
with open('custom_imgs.pickle', 'rb') as f:
    labels = pickle.load(f)

Then I have access to the dictionary that is generated by

outputs = model(samples, encode_and_save=False, memory_cache=memory_cache, **kwargs)
# outputs: a dict, whose keys are (['pred_obj_logits', 'pred_verb_logits',
# 'pred_sub_boxes', 'pred_obj_boxes', 'aux_outputs'])
# orig_target_sizes shape [bs, 2]
# orig_target_sizes = torch.stack([t["orig_size"] for t in targets], dim=0)

The keys of this dictionary are the image paths. However, after I access the predictions for a specific image, it is unclear to me how to "translate" them back into a readable format (as shown in the paper), since each entry contains only tensors with scores for the object labels and verbs.

>>> labels['custom_imgs/0000005.png'].keys()
dict_keys(['labels', 'boxes', 'verb_scores', 'sub_ids', 'obj_ids'])
>>> labels['custom_imgs/0000005.png']['labels']
tensor([ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
         0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
         0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
         0,  0,  0,  0,  0,  0,  0,  0,  0,  0, 73, 24, 56, 24, 24, 66, 66, 73,
        56, 24, 66, 73, 73, 24, 66, 24, 73, 24, 73, 24, 56, 66, 24, 66, 24, 56,
        24, 73, 24, 73, 73, 73, 66, 24, 24, 66, 24, 41, 66, 24, 24, 56, 24, 24,
        41, 73, 24, 66, 56, 66, 24, 24, 66, 27, 73, 73, 73, 24, 24, 56, 73, 24,
        24, 73])
>>> labels['custom_imgs/0000005.png']['verb_scores']
tensor([[2.4966e-05, 1.3955e-05, 1.6503e-05,  ..., 9.9974e-05, 8.8934e-05,
         1.8285e-05],
        [3.4093e-05, 1.4082e-05, 1.0232e-05,  ..., 2.6095e-04, 4.8016e-05,
         1.2272e-05],
        [3.8148e-06, 1.7999e-06, 1.8853e-06,  ..., 3.6490e-05, 5.5943e-06,
         2.3014e-06],
        ...,
        [1.3244e-05, 5.0881e-06, 4.0611e-06,  ..., 1.0128e-04, 1.8682e-05,
         4.9957e-06],
        [3.6079e-05, 1.5216e-05, 1.1946e-05,  ..., 3.2203e-04, 5.4361e-05,
         1.4641e-05],
        [2.5316e-05, 1.1472e-05, 1.5237e-05,  ..., 1.2542e-04, 1.2565e-04,
         1.6679e-05]])

Any help is appreciated. Thanks!
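
In case a concrete starting point helps, below is a minimal post-processing sketch. It assumes (none of this is confirmed by the repo) that 'labels' and 'boxes' are per-detection object classes and boxes, that 'sub_ids' and 'obj_ids' index into those arrays to form subject-object pairs, and that each row of 'verb_scores' holds the per-verb scores for one pair; the object and verb name tables are placeholders that would need to be replaced by the dataset's real vocabularies.

import pickle
import torch

# Placeholder vocabularies; the real ones come from the dataset metadata.
# 0 looks like the person class in the dump above, the rest are hypothetical.
OBJ_NAMES = {0: "person", 24: "object_24", 56: "object_56", 66: "object_66", 73: "object_73"}
VERB_NAMES = ["verb_0", "verb_1", "verb_2"]  # hypothetical, index -> verb name

with open("custom_imgs.pickle", "rb") as f:
    results = pickle.load(f)

pred = results["custom_imgs/0000005.png"]
for i, (s, o) in enumerate(zip(pred["sub_ids"], pred["obj_ids"])):
    sub_label = int(pred["labels"][s])
    obj_label = int(pred["labels"][o])
    # assumed alignment: row i of verb_scores belongs to the i-th (sub, obj) pair
    top_score, top_verb = torch.max(pred["verb_scores"][i], dim=0)
    sub_name = OBJ_NAMES.get(sub_label, str(sub_label))
    obj_name = OBJ_NAMES.get(obj_label, str(obj_label))
    verb_name = VERB_NAMES[int(top_verb)] if int(top_verb) < len(VERB_NAMES) else str(int(top_verb))
    print(f"pair {i}: {sub_name} {verb_name} {obj_name} (score {top_score.item():.3f}), "
          f"sub box {pred['boxes'][int(s)].tolist()}, obj box {pred['boxes'][int(o)].tolist()}")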

Test/Inference code

Hi @JacobYuan7, first of all, thanks for sharing your work!

I had a question about generating HOIs on new images without annotations: is it possible to modify generate_vcoco_official.py to do this?

Something about the VG dataset?

Hello! There are some problems with the dataset link (VG) you provided. Could you please share the dataset you used for pre-training?
