
dink-net's People

Contributors

yueliu1999


dink-net's Issues

Source of model parameters

Hi, thanks for your awesome work. However, I ran into some problems when replicating it: if I do not use the model parameters you provided, my results do not match those in the paper. How were these model parameter files trained? I am looking forward to your reply.
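
As context, here is a minimal, self-contained sketch of the two paths this issue contrasts — loading a released parameter file versus pretraining from scratch. The encoder, file path, and dimensions below are stand-ins, not the repository's actual classes or paths.

```python
# Hypothetical sketch: every name below is a stand-in, not the repository's API.
import os
import torch
import torch.nn as nn

# stand-in encoder; the real Dink-Net encoder is defined in the repository
encoder = nn.Sequential(nn.Linear(1433, 512), nn.PReLU(), nn.Linear(512, 256))

ckpt_path = "pretrain/cora.pth"  # hypothetical path to a released parameter file
if os.path.exists(ckpt_path):
    # path 1: reproduce the reported numbers from the released parameters
    encoder.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
else:
    # path 2: pretrain from scratch -- the issue reports that results then
    # deviate from the paper, hence the question of how the released
    # parameter files were produced
    pass
```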

Experiments on the two very large datasets, ogbn-products and ogbn-papers100M

Many thanks to the authors for this excellent work. I have a question: in the released code I could not find configurations for the ogbn-products and ogbn-papers100M datasets (lines 32-46 of main.py). In addition, the code has two parameters, train_batch and train_test, which default to false (full-graph training) when running the four small datasets. For the large datasets, is it enough to simply set them to true? Looking forward to your answer, and thank you.
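
As a rough illustration only — this is not the repository's actual argument parser; the flag names are taken from the issue and their exact semantics are an assumption — enabling mini-batch training and evaluation for the large OGB datasets might look like this:

```python
# Hypothetical sketch of how the train_batch / train_test flags mentioned above
# might be exposed; the real main.py may differ.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--dataset", type=str, default="cora")
# Full-graph training/evaluation (both flags False) works for the four small
# datasets; ogbn-products and ogbn-papers100M are too large for full-graph GPU
# training, so both flags would presumably be switched on.
parser.add_argument("--train_batch", action="store_true",
                    help="train with sampled mini-batches instead of the full graph")
parser.add_argument("--train_test", action="store_true",
                    help="also evaluate with sampled mini-batches")
args = parser.parse_args()

# e.g.  python main.py --dataset ogbn-papers100M --train_batch --train_test
print(args)
```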

end-to-end issue

Hello. In your paper you mentioned that this work "was unified into an end-to-end framework".
However, in the published code: 1) you use the ogb-supplied features directly instead of the text attributes; 2) the pipeline includes an unavoidable pre-training stage.
Do you still consider this work "end-to-end", and why? Looking forward to your reply.
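
For reference, "ogb-supplied features" refers to the preprocessed node feature matrix shipped with the OGB datasets; loading it looks roughly like the following (this uses the standard OGB API, but whether the repository loads it exactly this way is an assumption):

```python
# Loads the preprocessed node features that OGB distributes (e.g. averaged word
# embeddings for ogbn-arxiv) rather than the raw text attributes of each paper.
from ogb.nodeproppred import NodePropPredDataset

dataset = NodePropPredDataset(name="ogbn-arxiv")
graph, labels = dataset[0]          # graph is a dict of numpy arrays
node_feat = graph["node_feat"]      # (num_nodes, 128) precomputed features
print(node_feat.shape)
```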

Where can I get access to the implementation code?

Thanks for the innovative and motivating work the authors provide. However, this project does not seem to include the implementation code or the implementation details. Could the authors provide them?

How can I pretrain the model?

Hello, I tried to reproduce the pretraining procedure in your code. I trained the model with the settings from Appendix C (Design Details & Hyper-parameter Settings) of your paper, but the results are poor. For example, on the Cora dataset, following the protocol of pretraining for 200 epochs with lr 1e-3 and fine-tuning for 200 epochs with lr 1e-2, the model outputs the following:
Pretraining
epoch 009 | acc:30.24 | nmi:0.00 | ari:0.00 | f1:6.84
(the same acc / nmi / ari / f1 values are logged at every 10-epoch interval through epoch 199)

Fine-tuning
epoch 009 | acc:30.24 | nmi:0.00 | ari:0.00 | f1:6.84
(again, the same values are logged at every 10-epoch interval through epoch 199)

Pretraining and fine-tuning make essentially no difference. Is there something I have not configured correctly? Thanks~
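
For completeness, here is a schematic, self-contained sketch of the protocol described in this issue (200 pretraining epochs at lr 1e-3, then 200 fine-tuning epochs at lr 1e-2, with metrics logged every 10 epochs). The encoder and both losses are placeholders, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(2708, 1433)  # placeholder for Cora's 2708 nodes x 1433 features
encoder = nn.Sequential(nn.Linear(1433, 512), nn.PReLU(), nn.Linear(512, 256))

def placeholder_pretrain_loss(z):
    # stands in for the discriminative pretraining objective
    return z.pow(2).mean()

def placeholder_cluster_loss(z):
    # stands in for the clustering objective used during fine-tuning
    return (z - z.mean(dim=0)).pow(2).mean()

def run_stage(name, epochs, lr, loss_fn):
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for epoch in range(epochs):
        opt.zero_grad()
        loss = loss_fn(encoder(x))
        loss.backward()
        opt.step()
        if epoch % 10 == 9:
            # the real code evaluates acc / nmi / ari / f1 here, as in the logs
            # above; metrics that never move across both stages usually mean the
            # parameters being evaluated are not actually being optimized
            print(f"{name} epoch {epoch:03d} | loss {loss.item():.4f}")

run_stage("pretrain", 200, 1e-3, placeholder_pretrain_loss)  # Appendix C: 200 epochs, lr 1e-3
run_stage("finetune", 200, 1e-2, placeholder_cluster_loss)   # Appendix C: 200 epochs, lr 1e-2
```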
