eicr's Introduction

{"payload":{"allShortcutsEnabled":true,"fileTree":{"":{"items":[{"name":"configs","path":"configs","contentType":"directory"},{"name":"demo","path":"demo","contentType":"directory"},{"name":"denseclip","path":"denseclip","contentType":"directory"},{"name":"docker","path":"docker","contentType":"directory"},{"name":"extra_function","path":"extra_function","contentType":"directory"},{"name":"maskrcnn_benchmark","path":"maskrcnn_benchmark","contentType":"directory"},{"name":".flake8","path":".flake8","contentType":"file"},{"name":".gitignore","path":".gitignore","contentType":"file"},{"name":"11111111111.py","path":"11111111111.py","contentType":"file"},{"name":"ABSTRACTIONS.md","path":"ABSTRACTIONS.md","contentType":"file"},{"name":"CODE_OF_CONDUCT.md","path":"CODE_OF_CONDUCT.md","contentType":"file"},{"name":"DATASET.md","path":"DATASET.md","contentType":"file"},{"name":"GQA_200_ID_Info.json","path":"GQA_200_ID_Info.json","contentType":"file"},{"name":"GQA_200_Test.json","path":"GQA_200_Test.json","contentType":"file"},{"name":"INSTALL.md","path":"INSTALL.md","contentType":"file"},{"name":"LICENSE","path":"LICENSE","contentType":"file"},{"name":"README.md","path":"README.md","contentType":"file"},{"name":"Setup.ipynb","path":"Setup.ipynb","contentType":"file"},{"name":"TROUBLESHOOTING.md","path":"TROUBLESHOOTING.md","contentType":"file"},{"name":"apex-master.zip","path":"apex-master.zip","contentType":"file"},{"name":"cmd.cache","path":"cmd.cache","contentType":"file"},{"name":"cocoapi-master.zip","path":"cocoapi-master.zip","contentType":"file"},{"name":"requirements.txt","path":"requirements.txt","contentType":"file"},{"name":"run_VG.sh","path":"run_VG.sh","contentType":"file"},{"name":"setup.py","path":"setup.py","contentType":"file"},{"name":"test_binary.py","path":"test_binary.py","contentType":"file"},{"name":"test_main.py","path":"test_main.py","contentType":"file"},{"name":"visualize_sgcl.py","path":"visualize_sgcl.py","contentType":"file"}],"totalCount":28}},"fileTreeProcessingTime":3.003224,"foldersToFetch":[],"repo":{"id":678224775,"defaultBranch":"main","name":"EICR","ownerLogin":"myukzzz","currentUserCanPush":true,"isFork":false,"isEmpty":false,"createdAt":"2023-08-14T11:56:57.000+08:00","ownerAvatar":"https://avatars.githubusercontent.com/u/68174757?v=4","public":true,"private":false,"isOrgOwned":false},"symbolsExpanded":true,"treeExpanded":false,"refInfo":{"name":"main","listCacheKey":"v0:1708998767.0","canEdit":true,"refType":"branch","currentOid":"e16a9334c1679a0a89e5830fe3ee027cb9c31946"},"path":"README.md","currentUser":{"id":68174757,"login":"myukzzz","userEmail":"[email protected]"},"blob":{"rawLines":null,"stylingDirectives":null,"csv":null,"csvError":null,"dependabotInfo":{"showConfigurationBanner":null,"configFilePath":null,"networkDependabotPath":"/myukzzz/EICR/network/updates","dismissConfigurationNoticePath":"/settings/dismiss-notice/dependabot_configuration_notice","configurationNoticeDismissed":false},"displayName":"README.md","displayUrl":"https://github.com/myukzzz/EICR/blob/main/README.md?raw=true","headerInfo":{"blobSize":"6.18 KB","deleteTooltip":"Delete this file","editTooltip":"Edit this file","deleteInfo":{"deleteTooltip":"Delete this file"},"editInfo":{"editTooltip":"Edit this 
file"},"ghDesktopPath":"https://desktop.github.com","isGitLfs":false,"gitLfsPath":null,"onBranch":true,"shortPath":"524bd51","siteNavLoginPath":"/login?return_to=https%3A%2F%2Fgithub.com%2Fmyukzzz%2FEICR%2Fblob%2Fmain%2FREADME.md","isCSV":false,"isRichtext":true,"toc":[{"level":1,"text":"Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch","anchor":"environment-invariant-curriculum-relation-learning-for-fine-grained-scene-graph-generation-in-pytorch","htmlText":"Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch"},{"level":2,"text":"Installation","anchor":"installation","htmlText":"Installation"},{"level":2,"text":"Dataset","anchor":"dataset","htmlText":"Dataset"},{"level":2,"text":"Pretrained Models","anchor":"pretrained-models","htmlText":"Pretrained Models"},{"level":2,"text":"Perform training on Scene Graph Generation","anchor":"perform-training-on-scene-graph-generation","htmlText":"Perform training on Scene Graph Generation"},{"level":3,"text":"Set the dataset path","anchor":"set-the-dataset-path","htmlText":"Set the dataset path"},{"level":3,"text":"Choose a dataset","anchor":"choose-a-dataset","htmlText":"Choose a dataset"},{"level":3,"text":"Choose a task","anchor":"choose-a-task","htmlText":"Choose a task"},{"level":3,"text":"Choose your model","anchor":"choose-your-model","htmlText":"Choose your model"},{"level":3,"text":"Choose your Encoder","anchor":"choose-your-encoder","htmlText":"Choose your Encoder"},{"level":3,"text":"Examples of the Training Command","anchor":"examples-of-the-training-command","htmlText":"Examples of the Training Command"},{"level":2,"text":"Evaluation","anchor":"evaluation","htmlText":"Evaluation"},{"level":2,"text":"Citation","anchor":"citation","htmlText":"Citation"},{"level":2,"text":"Acknowledgment","anchor":"acknowledgment","htmlText":"Acknowledgment"}],"lineInfo":{"truncatedLoc":"125","truncatedSloc":"88"},"mode":"file"},"image":false,"isCodeownersFile":null,"isPlain":false,"isValidLegacyIssueTemplate":false,"issueTemplateHelpUrl":"https://docs.github.com/articles/about-issue-and-pull-request-templates","issueTemplate":null,"discussionTemplate":null,"language":"Markdown","languageID":222,"large":false,"loggedIn":true,"planSupportInfo":{"repoIsFork":null,"repoOwnedByCurrentUser":null,"requestFullPath":"/myukzzz/EICR/blob/main/README.md","showFreeOrgGatedFeatureMessage":null,"showPlanSupportBanner":null,"upgradeDataAttributes":null,"upgradePath":null},"publishBannersInfo":{"dismissActionNoticePath":"/settings/dismiss-notice/publish_action_from_dockerfile","releasePath":"/myukzzz/EICR/releases/new?marketplace=true","showPublishActionBanner":false},"rawBlobUrl":"https://github.com/myukzzz/EICR/raw/main/README.md","renderImageOrRaw":false,"richText":"<article class="markdown-body entry-content container-lg" itemprop="text"><div class="markdown-heading" dir="auto"><h1 tabindex="-1" class="heading-element" dir="auto">Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch<a id="user-content-environment-invariant-curriculum-relation-learning-for-fine-grained-scene-graph-generation-in-pytorch" class="anchor-element" aria-label="Permalink: Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch" href="#environment-invariant-curriculum-relation-learning-for-fine-grained-scene-graph-generation-in-pytorch"><svg class="octicon octicon-link" viewBox="0 0 16 16" 
version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Installation<a id="user-content-installation" class="anchor-element" aria-label="Permalink: Installation" href="#installation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">Check <a href="/myukzzz/EICR/blob/main/INSTALL.md">INSTALL.md for installation instructions, the recommended configuration is cuda-10.1 & pytorch-1.7.1.

\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Dataset<a id="user-content-dataset" class="anchor-element" aria-label="Permalink: Dataset" href="#dataset"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">Check <a href="/myukzzz/EICR/blob/main/DATASET.md">DATASET.md for instructions of dataset preprocessing (VG & GQA).

\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Pretrained Models<a id="user-content-pretrained-models" class="anchor-element" aria-label="Permalink: Pretrained Models" href="#pretrained-models"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">For VG dataset, the pretrained object detector we used is provided by <a href="https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch\">Scene-Graph-Benchmark

\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Perform training on Scene Graph Generation<a id="user-content-perform-training-on-scene-graph-generation" class="anchor-element" aria-label="Permalink: Perform training on Scene Graph Generation" href="#perform-training-on-scene-graph-generation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Set the dataset path<a id="user-content-set-the-dataset-path" class="anchor-element" aria-label="Permalink: Set the dataset path" href="#set-the-dataset-path"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">First, organize all the files like this:

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="datasets\n |-- vg\n |--detector_model\n |--pretrained_faster_rcnn\n |--model_final.pth\n |--GQA\n |--model_final_from_vg.pth \n |--glove\n |--.... (glove files, will autoly download)\n |--VG_100K\n |--.... (images)\n |--VG-SGG-with-attri.h5 \n |--VG-SGG-dicts-with-attri.json\n |--image_data.json \n |--gqa\n |--images\n |--.... (images)\n |--GQA_200_ID_Info.json\n |--GQA_200_Train.json\n |--GQA_200_Test.json">
datasets\n  <span class="pl-k">|-- vg\n    <span class="pl-k">|--detector_model\n      <span class="pl-k">|--pretrained_faster_rcnn\n        <span class="pl-k">|--model_final.pth\n      <span class="pl-k">|--GQA\n        <span class="pl-k">|--model_final_from_vg.pth       \n    <span class="pl-k">|--glove\n      <span class="pl-k">|--.... (glove files, will autoly download)\n    <span class="pl-k">|--VG_100K\n      <span class="pl-k">|--.... (images)\n    <span class="pl-k">|--VG-SGG-with-attri.h5 \n    <span class="pl-k">|--VG-SGG-dicts-with-attri.json\n    <span class="pl-k">|--image_data.json    \n  <span class="pl-k">|--gqa\n    <span class="pl-k">|--images\n      <span class="pl-k">|--.... (images)\n    <span class="pl-k">|--GQA_200_ID_Info.json\n    <span class="pl-k">|--GQA_200_Train.json\n    <span class="pl-k">|--GQA_200_Test.json
\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Choose a dataset<a id="user-content-choose-a-dataset" class="anchor-element" aria-label="Permalink: Choose a dataset" href="#choose-a-dataset"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">You can choose the training/testing dataset by setting the following parameter:

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="GLOBAL_SETTING.DATASET_CHOICE 'VG' #['VG', 'GQA']">
GLOBAL_SETTING.DATASET_CHOICE <span class="pl-s"><span class="pl-pds">'VG<span class="pl-pds">'  <span class="pl-c"><span class="pl-c">#['VG', 'GQA']
\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Choose a task<a id="user-content-choose-a-task" class="anchor-element" aria-label="Permalink: Choose a task" href="#choose-a-task"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">To comprehensively evaluate the performance, we follow three conventional tasks: 1) Predicate Classification (PredCls) predicts the relationships of all the pairwise objects by employing the given ground-truth bounding boxes and classes; 2) Scene Graph Classification (SGCls) predicts the objects classes and their pairwise relationships by employing the given ground-truth object bounding boxes; and 3) Scene Graph Detection (SGDet) detects all the objects in an image, and predicts their bounding boxes, classes, and pairwise relationships.

\n<p dir="auto">For Predicate Classification (PredCls), you need to set:

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL True">
MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL True
\n<p dir="auto">For Scene Graph Classification (SGCls):

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False">
MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False
\n<p dir="auto">For Scene Graph Detection (SGDet):

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="MODEL.ROI_RELATION_HEAD.USE_GT_BOX False MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False">
MODEL.ROI_RELATION_HEAD.USE_GT_BOX False MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False
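
In summary, the two flags map onto the three tasks as follows:

Task      USE_GT_BOX   USE_GT_OBJECT_LABEL
PredCls   True         True
SGCls     True         False
SGDet     False        False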
\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Choose your model<a id="user-content-choose-your-model" class="anchor-element" aria-label="Permalink: Choose your model" href="#choose-your-model"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">We abstract various SGG models to be different relation-head predictors in the file roi_heads/relation_head/roi_relation_predictors.py, which are independent of the Faster R-CNN backbone and relation-head feature extractor. You can use GLOBAL_SETTING.RELATION_PREDICTOR to select one of them:

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="GLOBAL_SETTING.RELATION_PREDICTOR 'EICR_model'">
GLOBAL_SETTING.RELATION_PREDICTOR <span class="pl-s"><span class="pl-pds">'EICR_model<span class="pl-pds">'
\n<p dir="auto">The default settings are under configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml and maskrcnn_benchmark/config/defaults.py. The priority is command > yaml > defaults.py.

\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Choose your Encoder<a id="user-content-choose-your-encoder" class="anchor-element" aria-label="Permalink: Choose your Encoder" href="#choose-your-encoder"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">You need to further choose an object/relation encoder for "Motifs" or "VCTree" or "Self-Attention" predictor, by setting the following parameter:

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="GLOBAL_SETTING.BASIC_ENCODER 'Motifs'">
GLOBAL_SETTING.BASIC_ENCODER <span class="pl-s"><span class="pl-pds">'Motifs<span class="pl-pds">'
\n<div class="markdown-heading" dir="auto"><h3 tabindex="-1" class="heading-element" dir="auto">Examples of the Training Command<a id="user-content-examples-of-the-training-command" class="anchor-element" aria-label="Permalink: Examples of the Training Command" href="#examples-of-the-training-command"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">Training Example 1 : (VG, Motifs, PredCls)

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10050 --nproc_per_node=1 ./tools/relation_train_net.py --config-file "configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml" GLOBAL_SETTING.DATASET_CHOICE 'VG' GLOBAL_SETTING.RELATION_PREDICTOR 'EICR_model' GLOBAL_SETTING.BASIC_ENCODER 'Motifs' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE 'divide4' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE 'KL_logit_TopDown' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL True SOLVER.IMS_PER_BATCH 4 TEST.IMS_PER_BATCH 1 DTYPE "float16" SOLVER.MAX_ITER 120000 SOLVER.VAL_PERIOD 10000 SOLVER.CHECKPOINT_PERIOD 10000 GLOVE_DIR /data/myk/newreason/SHA/datasets/vg OUTPUT_DIR /data/myk/newreason/ICCV23/SHA/output/VG_predcls_EICR SOLVER.SCHEDULE.TYPE WarmupMultiStepLR SOLVER.STEPS "(56000, 96000)"">
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10050 --nproc_per_node=1 ./tools/relation_train_net.py --config-file <span class="pl-s"><span class="pl-pds">"configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml<span class="pl-pds">" GLOBAL_SETTING.DATASET_CHOICE <span class="pl-s"><span class="pl-pds">'VG<span class="pl-pds">' GLOBAL_SETTING.RELATION_PREDICTOR <span class="pl-s"><span class="pl-pds">'EICR_model<span class="pl-pds">' GLOBAL_SETTING.BASIC_ENCODER <span class="pl-s"><span class="pl-pds">'Motifs<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE <span class="pl-s"><span class="pl-pds">'divide4<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE <span class="pl-s"><span class="pl-pds">'KL_logit_TopDown<span class="pl-pds">' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL True SOLVER.IMS_PER_BATCH 4 TEST.IMS_PER_BATCH 1 DTYPE <span class="pl-s"><span class="pl-pds">"float16<span class="pl-pds">" SOLVER.MAX_ITER 120000 SOLVER.VAL_PERIOD 10000 SOLVER.CHECKPOINT_PERIOD 10000 GLOVE_DIR /data/myk/newreason/SHA/datasets/vg OUTPUT_DIR /data/myk/newreason/ICCV23/SHA/output/VG_predcls_EICR SOLVER.SCHEDULE.TYPE WarmupMultiStepLR    SOLVER.STEPS <span class="pl-s"><span class="pl-pds">"(56000, 96000)<span class="pl-pds">"
\n<p dir="auto">Training Example 2 : (GQA_200, Motifs, SGCls)

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10050 --nproc_per_node=1 ./tools/relation_train_net.py --config-file "configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml" GLOBAL_SETTING.DATASET_CHOICE 'GQA_200' GLOBAL_SETTING.RELATION_PREDICTOR 'EICR_model' GLOBAL_SETTING.BASIC_ENCODER 'Motifs' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE 'divide4' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE 'KL_logit_TopDown' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False SOLVER.IMS_PER_BATCH 4 TEST.IMS_PER_BATCH 1 DTYPE "float16" SOLVER.MAX_ITER 120000 SOLVER.VAL_PERIOD 10000 SOLVER.CHECKPOINT_PERIOD 10000 GLOVE_DIR /data/myk/newreason/SHA/datasets/vg OUTPUT_DIR /data/myk/newreason/ICCV23/SHA/output/VG_predcls_EICR SOLVER.SCHEDULE.TYPE WarmupMultiStepLR SOLVER.STEPS "(56000, 96000)"">
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10050 --nproc_per_node=1 ./tools/relation_train_net.py --config-file <span class="pl-s"><span class="pl-pds">"configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml<span class="pl-pds">" GLOBAL_SETTING.DATASET_CHOICE <span class="pl-s"><span class="pl-pds">'GQA_200<span class="pl-pds">' GLOBAL_SETTING.RELATION_PREDICTOR <span class="pl-s"><span class="pl-pds">'EICR_model<span class="pl-pds">' GLOBAL_SETTING.BASIC_ENCODER <span class="pl-s"><span class="pl-pds">'Motifs<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE <span class="pl-s"><span class="pl-pds">'divide4<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE <span class="pl-s"><span class="pl-pds">'KL_logit_TopDown<span class="pl-pds">' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False SOLVER.IMS_PER_BATCH 4 TEST.IMS_PER_BATCH 1 DTYPE <span class="pl-s"><span class="pl-pds">"float16<span class="pl-pds">" SOLVER.MAX_ITER 120000 SOLVER.VAL_PERIOD 10000 SOLVER.CHECKPOINT_PERIOD 10000 GLOVE_DIR /data/myk/newreason/SHA/datasets/vg OUTPUT_DIR /data/myk/newreason/ICCV23/SHA/output/VG_predcls_EICR SOLVER.SCHEDULE.TYPE WarmupMultiStepLR    SOLVER.STEPS <span class="pl-s"><span class="pl-pds">"(56000, 96000)<span class="pl-pds">"
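
In both examples, GLOVE_DIR and OUTPUT_DIR are the authors' local paths; point them at your own glove directory and at a directory where checkpoints and logs should be written.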
\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Evaluation<a id="user-content-evaluation" class="anchor-element" aria-label="Permalink: Evaluation" href="#evaluation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">You can evaluate it by running the following command.

\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10083 --nproc_per_node=1 tools/relation_test_net.py --config-file "configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml" GLOBAL_SETTING.DATASET_CHOICE 'GQA_200' GLOBAL_SETTING.RELATION_PREDICTOR 'EICR_model' GLOBAL_SETTING.BASIC_ENCODER 'Motifs' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE 'divide4' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE 'KL_logit_TopDown' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False TEST.IMS_PER_BATCH 1 DTYPE "float16" GLOVE_DIR /home/myk/home/reason/newreason/SHA/datasets/vg/glove OUTPUT_DIR /home/myk/home/reason/newreason/SHA/output/GQA_precl_motif3samples_09aplha_start30000end60000/">
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10083 --nproc_per_node=1 tools/relation_test_net.py --config-file <span class="pl-s"><span class="pl-pds">"configs/SHA_GCL_e2e_relation_X_101_32_8_FPN_1x.yaml<span class="pl-pds">" GLOBAL_SETTING.DATASET_CHOICE <span class="pl-s"><span class="pl-pds">'GQA_200<span class="pl-pds">' GLOBAL_SETTING.RELATION_PREDICTOR <span class="pl-s"><span class="pl-pds">'EICR_model<span class="pl-pds">' GLOBAL_SETTING.BASIC_ENCODER <span class="pl-s"><span class="pl-pds">'Motifs<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.GROUP_SPLIT_MODE <span class="pl-s"><span class="pl-pds">'divide4<span class="pl-pds">' GLOBAL_SETTING.GCL_SETTING.KNOWLEDGE_TRANSFER_MODE <span class="pl-s"><span class="pl-pds">'KL_logit_TopDown<span class="pl-pds">' MODEL.ROI_RELATION_HEAD.USE_GT_BOX True MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False TEST.IMS_PER_BATCH 1 DTYPE <span class="pl-s"><span class="pl-pds">"float16<span class="pl-pds">" GLOVE_DIR /home/myk/home/reason/newreason/SHA/datasets/vg/glove OUTPUT_DIR /home/myk/home/reason/newreason/SHA/output/GQA_precl_motif3samples_09aplha_start30000end60000/
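
OUTPUT_DIR here should point at the directory produced by training; assuming the Scene-Graph-Benchmark convention this codebase inherits, relation_test_net.py picks up the checkpoint recorded there.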
\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Citation<a id="user-content-citation" class="anchor-element" aria-label="Permalink: Citation" href="#citation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<div class="highlight highlight-source-shell notranslate position-relative overflow-auto" dir="auto" data-snippet-clipboard-copy-content="@inproceedings{min2023environment,\n title={Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation},\n author={Min, Yukuan and Wu, Aming and Deng, Cheng},\n booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},\n pages={13296--13307},\n year={2023}\n}">
@inproceedings{min2023environment,\n  title={Environment-Invariant Curriculum Relation Learning <span class="pl-k">for Fine-Grained Scene Graph Generation},\n  author={Min, Yukuan and Wu, Aming and Deng, Cheng},\n  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},\n  pages={13296--13307},\n  year={2023}\n}
\n<div class="markdown-heading" dir="auto"><h2 tabindex="-1" class="heading-element" dir="auto">Acknowledgment<a id="user-content-acknowledgment" class="anchor-element" aria-label="Permalink: Acknowledgment" href="#acknowledgment"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path d="m7.775 3.275 1.25-1.25a3.5 3.5 0 1 1 4.95 4.95l-2.5 2.5a3.5 3.5 0 0 1-4.95 0 .751.751 0 0 1 .018-1.042.751.751 0 0 1 1.042-.018 1.998 1.998 0 0 0 2.83 0l2.5-2.5a2.002 2.002 0 0 0-2.83-2.83l-1.25 1.25a.751.751 0 0 1-1.042-.018.751.751 0 0 1-.018-1.042Zm-4.69 9.64a1.998 1.998 0 0 0 2.83 0l1.25-1.25a.751.751 0 0 1 1.042.018.751.751 0 0 1 .018 1.042l-1.25 1.25a3.5 3.5 0 1 1-4.95-4.95l2.5-2.5a3.5 3.5 0 0 1 4.95 0 .751.751 0 0 1-.018 1.042.751.751 0 0 1-1.042.018 1.998 1.998 0 0 0-2.83 0l-2.5 2.5a1.998 1.998 0 0 0 0 2.83Z">\n<p dir="auto">Our code is on top of <a href="https://github.com/dongxingning/SHA-GCL-for-SGG\">SHA-GCL-for-SGG, we sincerely thank them for their well-designed codebase.

\n","renderedFileInfo":null,"shortPath":null,"symbolsEnabled":true,"tabSize":8,"topBannersInfo":{"overridingGlobalFundingFile":false,"globalPreferredFundingPath":null,"repoOwner":"myukzzz","repoName":"EICR","showInvalidCitationWarning":false,"citationHelpUrl":"https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-repository-on-github/about-citation-files","actionsOnboardingTip":null},"truncated":false,"viewable":true,"workflowRedirectUrl":null,"symbols":{"timed_out":false,"not_analyzed":false,"symbols":[{"name":"Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch","kind":"section_1","ident_start":2,"ident_end":103,"extent_start":0,"extent_end":6324,"fully_qualified_name":"Environment-Invariant Curriculum Relation Learning for Fine-Grained Scene Graph Generation in Pytorch","ident_utf16":{"start":{"line_number":0,"utf16_col":2},"end":{"line_number":0,"utf16_col":103}},"extent_utf16":{"start":{"line_number":0,"utf16_col":0},"end":{"line_number":125,"utf16_col":0}}},{"name":"Installation","kind":"section_2","ident_start":109,"ident_end":121,"extent_start":106,"extent_end":248,"fully_qualified_name":"Installation","ident_utf16":{"start":{"line_number":3,"utf16_col":3},"end":{"line_number":3,"utf16_col":15}},"extent_utf16":{"start":{"line_number":3,"utf16_col":0},"end":{"line_number":7,"utf16_col":0}}},{"name":"Dataset","kind":"section_2","ident_start":251,"ident_end":258,"extent_start":248,"extent_end":346,"fully_qualified_name":"Dataset","ident_utf16":{"start":{"line_number":7,"utf16_col":3},"end":{"line_number":7,"utf16_col":10}},"extent_utf16":{"start":{"line_number":7,"utf16_col":0},"end":{"line_number":11,"utf16_col":0}}},{"name":"Pretrained Models","kind":"section_2","ident_start":349,"ident_end":366,"extent_start":346,"extent_end":524,"fully_qualified_name":"Pretrained Models","ident_utf16":{"start":{"line_number":11,"utf16_col":3},"end":{"line_number":11,"utf16_col":20}},"extent_utf16":{"start":{"line_number":11,"utf16_col":0},"end":{"line_number":15,"utf16_col":0}}},{"name":"Perform training on Scene Graph Generation","kind":"section_2","ident_start":527,"ident_end":569,"extent_start":524,"extent_end":4989,"fully_qualified_name":"Perform training on Scene Graph Generation","ident_utf16":{"start":{"line_number":15,"utf16_col":3},"end":{"line_number":15,"utf16_col":45}},"extent_utf16":{"start":{"line_number":15,"utf16_col":0},"end":{"line_number":99,"utf16_col":0}}},{"name":"Set the dataset path","kind":"section_3","ident_start":575,"ident_end":595,"extent_start":571,"extent_end":1124,"fully_qualified_name":"Set the dataset path","ident_utf16":{"start":{"line_number":17,"utf16_col":4},"end":{"line_number":17,"utf16_col":24}},"extent_utf16":{"start":{"line_number":17,"utf16_col":0},"end":{"line_number":43,"utf16_col":0}}},{"name":"Choose a dataset","kind":"section_3","ident_start":1128,"ident_end":1144,"extent_start":1124,"extent_end":1291,"fully_qualified_name":"Choose a dataset","ident_utf16":{"start":{"line_number":43,"utf16_col":4},"end":{"line_number":43,"utf16_col":20}},"extent_utf16":{"start":{"line_number":43,"utf16_col":0},"end":{"line_number":50,"utf16_col":0}}},{"name":"Choose a task","kind":"section_3","ident_start":1295,"ident_end":1308,"extent_start":1291,"extent_end":2320,"fully_qualified_name":"Choose a 
task","ident_utf16":{"start":{"line_number":50,"utf16_col":4},"end":{"line_number":50,"utf16_col":17}},"extent_utf16":{"start":{"line_number":50,"utf16_col":0},"end":{"line_number":67,"utf16_col":0}}},{"name":"Choose your model","kind":"section_3","ident_start":2324,"ident_end":2341,"extent_start":2320,"extent_end":2907,"fully_qualified_name":"Choose your model","ident_utf16":{"start":{"line_number":67,"utf16_col":4},"end":{"line_number":67,"utf16_col":21}},"extent_utf16":{"start":{"line_number":67,"utf16_col":0},"end":{"line_number":78,"utf16_col":0}}},{"name":"Choose your Encoder","kind":"section_3","ident_start":2911,"ident_end":2930,"extent_start":2907,"extent_end":3133,"fully_qualified_name":"Choose your Encoder","ident_utf16":{"start":{"line_number":78,"utf16_col":4},"end":{"line_number":78,"utf16_col":23}},"extent_utf16":{"start":{"line_number":78,"utf16_col":0},"end":{"line_number":88,"utf16_col":0}}},{"name":"Examples of the Training Command","kind":"section_3","ident_start":3137,"ident_end":3169,"extent_start":3133,"extent_end":4989,"fully_qualified_name":"Examples of the Training Command","ident_utf16":{"start":{"line_number":88,"utf16_col":4},"end":{"line_number":88,"utf16_col":36}},"extent_utf16":{"start":{"line_number":88,"utf16_col":0},"end":{"line_number":99,"utf16_col":0}}},{"name":"Evaluation","kind":"section_2","ident_start":4992,"ident_end":5002,"extent_start":4989,"extent_end":5812,"fully_qualified_name":"Evaluation","ident_utf16":{"start":{"line_number":99,"utf16_col":3},"end":{"line_number":99,"utf16_col":13}},"extent_utf16":{"start":{"line_number":99,"utf16_col":0},"end":{"line_number":109,"utf16_col":0}}},{"name":"Citation","kind":"section_2","ident_start":5815,"ident_end":5823,"extent_start":5812,"extent_end":6157,"fully_qualified_name":"Citation","ident_utf16":{"start":{"line_number":109,"utf16_col":3},"end":{"line_number":109,"utf16_col":11}},"extent_utf16":{"start":{"line_number":109,"utf16_col":0},"end":{"line_number":122,"utf16_col":0}}},{"name":"Acknowledgment","kind":"section_2","ident_start":6160,"ident_end":6174,"extent_start":6157,"extent_end":6324,"fully_qualified_name":"Acknowledgment","ident_utf16":{"start":{"line_number":122,"utf16_col":3},"end":{"line_number":122,"utf16_col":17}},"extent_utf16":{"start":{"line_number":122,"utf16_col":0},"end":{"line_number":125,"utf16_col":0}}}]}},"copilotInfo":{"documentationUrl":"https://docs.github.com/copilot/overview-of-github-copilot/about-github-copilot-for-individuals","notices":{"codeViewPopover":{"dismissed":false,"dismissPath":"/settings/dismiss-notice/code_view_copilot_popover"}},"userAccess":{"hasSubscriptionEnded":false,"orgHasCFBAccess":false,"userHasCFIAccess":false,"userHasOrgs":false,"userIsOrgAdmin":false,"userIsOrgMember":false,"business":null,"featureRequestInfo":null}},"copilotAccessAllowed":false,"csrf_tokens":{"/myukzzz/EICR/branches":{"post":"zaUiFk1C3PZgpHKXYDsToNfQaPJUOLxN5KREDc-bl_FBqtmgkG13uH-HHnCl9TOQawtPH8EAkRpoo-C531ZeRg"},"/repos/preferences":{"post":"_WWAOcCC0dF3W8pkVWid291yT6IauEDZqwIW9zvGYI0pDq_96RNLW83dPaaXKDMrG-IL125IyPJhoksqC6L77Q"}}},"title":"EICR/README.md at main · myukzzz/EICR"}

eicr's People

Contributors

myukzzz

Stargazers

Maëlic Neau, Jeff Carpenter, Rongjie Li, 一个快活的柠檬精, and others

Forkers

ahappylemonjing

eicr's Issues

About the reweighting weight?

Can you provide the reweighting weights for VG and GQA?

Moreover, I see you use PE-Net in your code; do you use PE-Net's evaluation code?

About the whole code

I find that the uploaded code is not the complete working code; when will you upload the whole code?
By the way, I see you use PE-Net in your baseline; do you use the same evaluation code as PE-Net?
