
heco's Introduction

HeCo

This repo contains the source code of the KDD 2021 paper "Self-supervised Heterogeneous Graph Neural Network with Co-contrastive Learning".
Paper Link: https://arxiv.org/abs/2105.09111

Environment Settings

python==3.8.5
scipy==1.5.4
torch==1.7.0
numpy==1.19.2
scikit_learn==0.24.2

GPU: GeForce RTX 2080 Ti
CPU: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
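
A hedged way to install these pinned versions with pip (the exact command is an assumption; adjust the torch wheel for your CUDA version):

    pip install scipy==1.5.4 torch==1.7.0 numpy==1.19.2 scikit_learn==0.24.2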

Usage

First, go into ./code, and then you can use the following command to run our model:

python main.py acm --gpu=0

Here, "acm" can be replaced by "dblp", "aminer" or "freebase".

Some tips on parameters

  1. We suggest carefully selecting "pos_num" (defined in ./data/pos.py), which sets the threshold on the number of positives for every node. This is very important to the final results. Of course, more effective ways to select positives are welcome.
  2. In ./code/utils/params.py, besides "lr" and "patience", it is also worth carefully tuning dropout and tau.
  3. In our experiments, we only assign original features to the target node type and assign one-hot vectors to the other node types, because most of the datasets only provide features for target nodes in their original versions. We believe that if high-quality features of the other node types were available, the overall results would improve a lot. The AMiner dataset is an example: it has no original features, so every node type is assigned one-hot vectors. In other words, every node has features of the same quality, and in this case HeCo is far ahead of the other baselines. So we strongly suggest trying high-quality features for the other node types if you have them; a sketch of the one-hot assignment is given after this list.
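
As a minimal sketch of that one-hot assignment (the node count and variable names below are assumptions for illustration, not the repository's actual preprocessing):

    import numpy as np
    import scipy.sparse as sp

    # Target-type nodes (e.g. papers) keep their original attribute matrix;
    # a non-target type without attributes (e.g. authors) gets identity features.
    n_author = 7167                                   # assumed number of author nodes
    author_feat = sp.eye(n_author, dtype=np.float32)  # one-hot (identity) features

    # If high-quality author attributes become available, replace the identity
    # matrix above with that attribute matrix and keep the rest unchanged.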

Cite

@inproceedings{heco,
  author    = {Xiao Wang and
               Nian Liu and
               Hui Han and
               Chuan Shi},
  title     = {Self-supervised Heterogeneous Graph Neural Network with Co-contrastive
               Learning},
  booktitle = {{KDD} '21: The 27th {ACM} {SIGKDD} Conference on Knowledge Discovery
               and Data Mining, Virtual Event, Singapore, August 14-18, 2021},
  pages     = {1726--1736},
  year      = {2021}
}

Contact

If you have any questions, please feel free to contact me at [email protected]

heco's People

Contributors

liun-online


heco's Issues

About the t-SNE method used for the clustering visualization

Hello, after applying the clustering analysis you describe to the ACM dataset, the clustering results are acceptable. However, for the 2D visualization I reduce the 64-dimensional embeddings to two dimensions with the t-SNE method mentioned in the paper, and two of the classes are not separated as clearly in 2D space as in the paper. Could you share the t-SNE parameters or the code module you used for reference? Thanks!
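
A minimal sketch of such a visualization with scikit-learn's TSNE (the perplexity, initialization and file names are assumptions, not the settings used in the paper, and matplotlib is assumed to be available):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    # Assumed inputs: 64-dimensional node embeddings and their class labels.
    embeds = np.load("embeds.npy")    # hypothetical file name
    labels = np.load("labels.npy")    # hypothetical file name

    tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
    coords = tsne.fit_transform(embeds)

    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab10")
    plt.savefig("tsne_acm.png", dpi=300)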

compared methods

Could you provide the code of your compared methods? I want to make some improvements, thanks a lot.

Unable to reproduce the results for ACM. Results highly depend on random seed.

Hello!

Using the default hyperparameters and the datasets provided in this repo, I am unable to reproduce the results for ACM. F1 macro, F1 micro and AUC are consistently around 1-2% worse than the results published in the paper.

I have also discovered that the results depend strongly on the random seed. Sometimes a good seed gives very good results, while a bad seed can be 2% or even 5% worse. I only observe this on the ACM dataset, and to a smaller extent on AMiner; for DBLP and Freebase the results do not vary as much with the seed. I wonder if you observe the same problem?

For the reported results ("randomly run 10 times and report the average results"), did you run the experiments 10 times with different seeds? I saw that the seed for each dataset is fixed in params.py. Did you tune the seed for each dataset?

Thank you
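
A hedged sketch of such a multi-seed protocol (the helper train_and_eval below is hypothetical and does not exist in this repository; it stands in for one full training and evaluation run):

    import random
    import numpy as np
    import torch

    def set_seed(seed):
        # Fix every relevant RNG so a single run is reproducible.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)

    def train_and_eval():
        # Hypothetical stand-in returning (macro-F1, micro-F1, AUC).
        return np.random.rand(3)

    results = []
    for seed in range(10):            # 10 different seeds instead of one fixed seed
        set_seed(seed)
        results.append(train_and_eval())
    results = np.array(results)
    print("mean:", results.mean(axis=0), "std:", results.std(axis=0))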

A few important but confusing questions

Hello, after reading the paper I still have a few questions, and I very much look forward to your answers.
1. Does "target node" refer to the node type that we want to classify and cluster (e.g. paper)? Furthermore, does that mean the method cannot produce representations for non-target node types in the heterogeneous graph, and therefore cannot handle tasks such as link prediction (e.g. between paper and subject)?
2. The paper says z^mp is used for downstream applications (node classification, clustering) because target-type nodes explicitly participate in generating z^mp. How should this be understood? Does it mean that, for a target node i, its meta-path based neighbors [screenshot of the notation omitted] only contain nodes of the same type as the target node?
3. Eq. (6) aggregates node features with the Laplacian matrix as in GCN, but GCN also has a trainable weight matrix W. Why is it absent here? Is it omitted, or is there really no W? If there is none, what is the reasoning behind it? Also, after reading the example in the paper, I am still confused about the meta-path based neighbors of node i. Concretely, for a target node, do all nodes on one of its meta-paths (including both target-type and non-target-type nodes) become its first-order neighbors? Are the edges between the other nodes on the path kept before information is aggregated?
4. Could you give a more detailed example of how the original node features x_i are initialized? Specifically, do all node types in the graph have original attributes, and what should be done for node types without original features/attributes (random initialization)?

Questions about data preprocessing

Hello author, thank you for your work. I have two questions about the data preprocessing:
1. How are the relation .txt files in the preprocessing step obtained? According to the paper, the preprocessing follows MAGNN, but taking MAGNN's DBLP dataset as an example, the content of its paper_author.txt does not match pa.txt in HeCo.
2. The way the dataset splits (the train/test/val_20, 40, 60 .npy files) are generated does not seem to be released in the code. Could you explain how these files are generated?
Best wishes!
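
For readers who need splits before an official script is released, here is a hedged sketch of one way such .npy split files could be generated (the labels-per-class count, split sizes and file names are assumptions, not the authors' procedure):

    import numpy as np

    labels = np.load("labels.npy")            # hypothetical file of target-node class ids
    rng = np.random.default_rng(0)

    per_class = 20                            # e.g. 20 labeled nodes per class
    train = []
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        train.extend(idx[:per_class])
    train = np.array(sorted(train))

    rest = rng.permutation(np.setdiff1d(np.arange(len(labels)), train))
    val, test = rest[:1000], rest[1000:2000]  # assumed validation/test sizes

    np.save(f"train_{per_class}.npy", train)
    np.save(f"val_{per_class}.npy", val)
    np.save(f"test_{per_class}.npy", test)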

Torch not compiled with CUDA enabled.

If I am not using a GPU, line 84 of sc_encoder.py causes an assertion error: Torch not compiled with CUDA enabled.
I can fix this by commenting out the ".cuda()" call in line 84. Is there a better solution, e.g. adding an if statement that checks torch.cuda.is_available()?
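
A hedged sketch of such a device-aware fix (the tensor name nei is a placeholder, not the actual variable in sc_encoder.py):

    import torch

    # Pick the GPU when available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Instead of an unconditional `x = x.cuda()`, move the tensor to `device`:
    nei = torch.zeros(8)      # placeholder tensor standing in for the real input
    nei = nei.to(device)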

Model Framework Diagram

Hello dear author!

I have just gotten started with heterogeneous graph neural networks. I would like to ask how you draw the model framework diagram. I have been using PPT all along, but I don't know how to draw the heterogeneous graph.

Thank you very much for your patient guidance.

Hello, is there a discrepancy between how pos.py selects positives in the code and how the paper describes it?

Taking the ACM dataset as an example, when pos.py selects positives, the loaded pap and psp matrices do not record how many meta-path connections exist between two papers, because the pap and psp generated by mp_gen.py only indicate whether two papers are connected by a meta-path at all; the connection strength between two papers is therefore not considered. However, the paper says candidates are ranked by the total number of meta-path connections between two papers, and the 5 nodes with the most meta-path connections are selected as positives. In other words, the paper and the code seem inconsistent here, so may I ask how positives should actually be selected?
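
A hedged sketch of the count-based selection described in the paper (the matrix names, file names and threshold are assumptions for illustration; this is not the code in pos.py):

    import numpy as np
    import scipy.sparse as sp

    # Assumed inputs: binary paper-author and paper-subject incidence matrices.
    pa = sp.load_npz("pa.npz")                 # hypothetical file names
    ps = sp.load_npz("ps.npz")

    # Meta-path count matrices: entry (i, j) is the number of PAP / PSP paths
    # between papers i and j, not just a 0/1 connectivity flag.
    counts = (pa @ pa.T + ps @ ps.T).toarray()
    np.fill_diagonal(counts, 0)                # ignore self-connections

    pos_num = 5                                # assumed threshold ("pos_num")
    pos = np.zeros_like(counts, dtype=bool)
    for i in range(counts.shape[0]):
        top = np.argsort(-counts[i])[:pos_num]
        pos[i, top[counts[i, top] > 0]] = True # keep only truly connected nodes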

Is the first term of Eq. (4) right?

[screenshot of Eq. (4) omitted]
Hello, Eq. (4) computes the contribution of neighbors of different types for node i, so why does its first term sum and then average over all nodes V in the graph? In other words, shouldn't some parts of the first term of Eq. (4) be removed?

Don't know the data type

Could you describe the format of the data files in the data folder, for example the contents of files such as "pa.txt" and "pc.txt" under "data/dblp"?

About the experimental setup of the HAN baseline

Sorry to bother you. The paper compares against HAN as a baseline; may I ask how the comparison experiments are set up for HAN, which is a supervised, end-to-end trained model? Thanks!

Positive Sample

Hello, the method for selecting positive examples proposed in the paper is very novel! However, I am worried about the choice of its threshold: most positives are actually pseudo-positives, especially under a high threshold. For example, on DBLP with a threshold of 1000, more than half of the positives have inconsistent labels. What impact does this have on the model?
