
Comments (5)

njcx-ai commented on May 30, 2024

Hello,

Thank you for your interest in our work. We apologize for the issue you have encountered; we did not come across it in our previous experiments. We recommend that you first verify that your environment and package versions match those listed in the README file.

Because of our busy schedule and the considerable time since we last worked with this code, we plan to rerun the experiment over the weekend to help you troubleshoot the problem, and we will get back to you then.

We appreciate your understanding and patience in this matter.

Xiang Chen


lv-9527 commented on May 30, 2024


flow3rdown commented on May 30, 2024

Hi,

Did you run prediction with a model you trained yourself? How did the model perform during training? In the RE task, label 0 means the two entities are not related, so predicting 0 is normal because the dataset contains many unrelated instances.


lv-9527 commented on May 30, 2024

Hello, thank you for your reply. I have solved the earlier problem of everything being predicted as 0. The cause: I had installed the CPU build rather than the CUDA/GPU build listed in the requirements; with the GPU version it works fine.

However, I still have a problem. I now want to replace the image and text datasets with my own images and text, and what I need is the relation extraction part. In the txt folder of the RE data there are several mre_*.pth files; from what I can tell, these are something like dictionaries that crop the images and map them to the text, but I do not know how they are generated. I now have my own text, my own dict, and my own pictures, but the .pth files are missing, and the model will not run without them. From the instructions on GitHub I gather that the data pre-processing needs an NLTK parser to extract noun phrases from the text, a toolkit to detect objects, and a dictionary to record the relationships between objects. Can you be more specific about how the .pth files are generated?
[Screenshot attached: 微信图片_20240425142437]

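A minimal way to learn the expected format is to load one of the released .pth dictionaries and inspect it. The sketch below assumes each file is a plain Python dict saved with torch.save; the path is a placeholder, not the repository's actual layout.

```python
# Minimal sketch: inspect one of the provided mre_*.pth dictionaries to learn
# its structure before generating the same kind of file for a custom dataset.
# Assumption: the file is a plain Python dict saved with torch.save;
# "data/txt/mre_dict.pth" is a placeholder path.
import torch

aux_dict = torch.load("data/txt/mre_dict.pth")
print(type(aux_dict), len(aux_dict))

# Peek at a few entries to see the key/value format
# (e.g. image or sentence id -> list of cropped-object image names).
for key, value in list(aux_dict.items())[:3]:
    print(key, "->", value)
```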

flow3rdown commented on May 30, 2024

You first need to extract the noun phrases from the text using the NLTK parser, spaCy, or TextBlob. Then you can use the visual grounding toolkit to detect objects; more details can be found in its repository. In addition, you need to record the correspondence between images and objects for subsequent reading.
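
A hedged sketch of that pipeline is below, assuming spaCy's noun_chunks for the noun-phrase step (NLTK's chunker or TextBlob's noun_phrases would work equally well), a placeholder for the visual grounding toolkit, and a dict saved with torch.save as the correspondence record. The file name and dict format are assumptions rather than the repository's exact specification.

```python
# Hedged sketch of the pre-processing described above; names and the output
# format are assumptions, not the repository's exact specification.
import spacy
import torch

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_noun_phrases(sentence: str) -> list:
    """Extract candidate noun phrases from one sentence/caption."""
    doc = nlp(sentence)
    return [chunk.text for chunk in doc.noun_chunks]

def ground_phrases(image_name: str, phrases: list) -> list:
    """Placeholder for the visual grounding toolkit: detect the region matching
    each noun phrase, save the crop, and return the cropped-object file names.
    The names returned here are stand-ins; the real toolkit produces actual crops."""
    return [f"{image_name}_obj{i}.jpg" for i, _ in enumerate(phrases)]

# Record the correspondence between each image and its detected objects
# so it can be read back later during training/prediction.
correspondence = {}
samples = [("example.jpg", "A man holds a guitar on stage.")]  # your own (image, text) pairs
for image_name, sentence in samples:
    phrases = extract_noun_phrases(sentence)
    correspondence[image_name] = ground_phrases(image_name, phrases)

torch.save(correspondence, "mre_custom_dict.pth")  # hypothetical output file name
```

Matching the structure of the released mre_*.pth files, inspected as in the earlier snippet, is the safest way to keep the original data loading code unchanged.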

