
hccf's Issues

On the performance of HCCF on other datasets

Hello, I am currently running HCCF on the Gowalla dataset, but the results are rather poor, somewhat worse than both LightGCN and BPRMF. Do you have any suggestions for HCCF's training hyperparameters? Which parameters would be best to tune first? Thank you!
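No tuning advice is recorded in this thread. Purely as a hedged illustration, a simple grid search over the hyperparameters most often tuned in contrastive recommenders (learning rate, embedding regularization, contrastive-loss weight, temperature) could look like the sketch below. train_and_eval is a stub placeholder, not part of the HCCF codebase, and the parameter names are assumptions.

# Hypothetical grid search over hyperparameters commonly tuned in contrastive
# recommenders. train_and_eval is a stub standing in for whatever training
# entry point you use; it is NOT part of the HCCF codebase.
from itertools import product
import random

def train_and_eval(dataset, lr, reg, ssl_w, temp):
    # Stub: replace with a real training + evaluation run returning Recall@20.
    return random.random()

grid = {
    "lr":    [1e-3, 3e-3],         # learning rate
    "reg":   [1e-5, 1e-4, 1e-3],   # L2 regularization on embeddings
    "ssl_w": [1e-3, 1e-2, 1e-1],   # weight of the contrastive (SSL) loss term
    "temp":  [0.1, 0.3, 1.0],      # temperature of the InfoNCE loss
}

best_params, best_recall = None, -1.0
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    recall_at_20 = train_and_eval("gowalla", **params)
    if recall_at_20 > best_recall:
        best_params, best_recall = params, recall_at_20

print("best config:", best_params, "Recall@20:", best_recall)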

Over-smoothing

Hi authors!
In the paper you point out that existing graph collaborative filtering methods suffer from over-smoothing as the number of layers grows. How does your model address this problem: through contrastive learning, or by using the hypergraph to mine global collaborative signals? I could not quite follow the logic of this part. Thanks!
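For readers unfamiliar with the term, over-smoothing means that repeated neighborhood averaging makes node embeddings increasingly similar as layers are stacked. The sketch below is a small, repository-independent illustration that measures this effect as the average pairwise cosine similarity after each propagation step; the random graph and embeddings are made up for demonstration.

# Illustrative only: watch embeddings converge under repeated mean-aggregation
# (a proxy for over-smoothing in deep GNNs). Not HCCF code.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 32
A = (rng.random((n, n)) < 0.05).astype(float)   # random adjacency matrix
A = np.maximum(A, A.T)                          # symmetrize
np.fill_diagonal(A, 1.0)                        # add self-loops
A_hat = A / A.sum(axis=1, keepdims=True)        # row-normalized propagation matrix

X = rng.standard_normal((n, d))
for layer in range(1, 9):
    X = A_hat @ X                               # one propagation step
    Z = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = (Z @ Z.T).mean()                      # average pairwise cosine similarity
    print(f"layer {layer}: mean cosine similarity = {sim:.3f}")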

A few small questions about the paper and code

First of all, thank you for releasing the code for us to learn from; it is admirably clear and concise. I still have a few small questions.

1. Equations (3) and (6) in the paper both include residual connections, but the code for (3) and (6) does not seem to implement them. Did you find in practice that the model works better without the residual connections?

2. Your HGNNLayer implementation seems to differ from Equations (6) and (7) in the paper; it can be read as setting c = 1 and ψ(X) = σ(X), i.e. dropping the trainable parameter V and the residual connection from the equations. Is this understanding correct?

3. At this point in the code, when computing the contrastive loss, why is detach() used so that the GCN output is excluded from the parameter update? I could not understand this (a sketch of what detach() does here follows this list).
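The thread does not answer question 3. As a hedged illustration of what a stop-gradient on one view does, the sketch below contrasts the two variants of an InfoNCE-style loss between a GCN view and a hypergraph view: with detach(), the contrastive gradient only updates the non-detached branch. Tensor names and shapes are assumptions, not HCCF's actual variables.

# Illustrative only: InfoNCE-style contrastive loss between two views,
# with the GCN view optionally detached (stop-gradient). Not HCCF code.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.3):
    # anchor, positive: (batch, dim) embeddings of the same users in two views
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

gcn_emb = torch.randn(64, 32, requires_grad=True)    # assumed GCN-view embeddings
hyper_emb = torch.randn(64, 32, requires_grad=True)  # assumed hypergraph-view embeddings

# Variant A: gradients flow into both views.
loss_both = info_nce(hyper_emb, gcn_emb)

# Variant B: the GCN view is detached, so the contrastive gradient only
# updates the hypergraph branch; the GCN branch acts as a fixed target.
loss_detached = info_nce(hyper_emb, gcn_emb.detach())
print(loss_both.item(), loss_detached.item())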

Question about Figure 1 in the paper

Hi authors! I would like to know whether the MAD values in Figure 1 are computed from the final user embeddings at the end of training, i.e. after all epochs have finished.
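For context, MAD (Mean Average Distance) averages the cosine distance between node embeddings; lower values indicate stronger over-smoothing. The sketch below shows the global, all-pairs variant over a user embedding matrix; whether the paper applies a neighbor mask or uses all pairs is not confirmed here, and the embedding matrix is a stand-in.

# Illustrative only: global Mean Average Distance (MAD) of an embedding matrix.
# Lower MAD means the embeddings are more similar (stronger over-smoothing).
import numpy as np

def mad(embeddings):
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos_dist = 1.0 - z @ z.T                     # pairwise cosine distances
    np.fill_diagonal(cos_dist, 0.0)
    n = cos_dist.shape[0]
    return cos_dist.sum() / (n * (n - 1))        # average over off-diagonal pairs

user_emb = np.random.randn(1000, 32)             # stand-in for trained user embeddings
print("MAD:", mad(user_emb))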

Question about the Amazon dataset

Hello akaxlh!
Unzipping the amzon.rar dataset you provide yields two files, {trn_mat, tst_int}, but they do not load properly with your code: tst_int is a plain Python list rather than an np.object array.
Is the dataset in the correct format?
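No confirmed answer appears in this thread. As a hedged way to see what the archive actually contains, the sketch below loads the two pickled objects and prints their types; converting tst_int from a list to an object array, if that is what the loader expects, is shown as one possible workaround. The file names come from the issue; everything else is an assumption.

# Illustrative only: inspect the pickled files from the Amazon archive and,
# if needed, convert the test list into an object array.
import pickle
import numpy as np

with open('trn_mat', 'rb') as fs:
    trn_mat = pickle.load(fs)
with open('tst_int', 'rb') as fs:
    tst_int = pickle.load(fs)

print(type(trn_mat))          # expected: a sparse matrix of training interactions
print(type(tst_int))          # reported in the issue: a plain Python list

if isinstance(tst_int, list):
    # One possible workaround if downstream code expects an object ndarray.
    tst_int = np.array(tst_int, dtype=object)
print(type(tst_int), len(tst_int))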

Residual connections are absent in the torch version

According to the paper, you would need to change

lats.append(temEmbeds + hyperLats[-1])

to

lats.append(temEmbeds + hyperLats[-1] + lats[-1])

I changed the code, but I got a warning: WARNING:root:NaN or Inf found in input tensor. Do you know why?
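No fix is confirmed in the thread. One hedged observation: adding lats[-1] at every layer makes the embedding magnitudes compound across layers, which can overflow into NaN/Inf; scaling the residual sum (or normalizing each layer's output) is a common way to keep it stable. The loop below is a simplified re-creation of a layer-stacking accumulation, not the actual HCCF torch code.

# Simplified, illustrative layer-stacking loop (NOT the actual HCCF torch code).
# Shows one way to keep a residual accumulation numerically stable.
import torch

def propagate(x):
    # Stand-in for one layer of GCN/hypergraph propagation.
    return x @ (torch.randn(x.size(1), x.size(1)) * 0.1)

embeds = torch.randn(1000, 32)
lats = [embeds]
hyperLats = []

for layer in range(3):
    temEmbeds = propagate(lats[-1])
    hyperLats.append(propagate(lats[-1]))
    # Residual variant from the issue; dividing by the number of summands
    # keeps the magnitude from compounding layer by layer.
    nxt = (temEmbeds + hyperLats[-1] + lats[-1]) / 3.0
    lats.append(nxt)

final = sum(lats)            # final embeddings as the sum over layers
print(torch.isfinite(final).all())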

Contrastive loss function

W = NNs.defineRandomNameParam([args.latdim, args.latdim])
pckHyperULat = tf.nn.l2_normalize(tf.nn.embedding_lookup(hyperULats[i], uniqUids), axis=1) @ W
Here W is a parameter matrix. Why is this matrix needed when computing the contrastive loss, and is it learnable?
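The thread does not contain an answer. For illustration only: applying a trainable linear projection to the normalized embeddings before the similarity score is a common design (a "projection head") in contrastive learning; the torch sketch below shows the analogous computation. It is an assumption, not confirmed by the authors, that this is the role W plays in the TensorFlow snippet above.

# Illustrative only: a trainable linear projection applied before the
# contrastive similarity, analogous to the W in the snippet above.
import torch
import torch.nn.functional as F

latdim = 32
W = torch.nn.Parameter(torch.randn(latdim, latdim) * 0.01)   # learnable, updated by the optimizer

hyper_u = torch.randn(128, latdim)               # assumed hypergraph-view user embeddings
gcn_u = torch.randn(128, latdim)                 # assumed GCN-view user embeddings

proj = F.normalize(hyper_u, dim=1) @ W           # project one view before scoring
scores = proj @ F.normalize(gcn_u, dim=1).t()    # cross-view similarity logits
print(scores.shape)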

Question about the datasets

Hello, thank you very much for sharing the processed datasets and code for us to learn from.
Could you also share the code used to preprocess the datasets? Thank you very much!

Code

Traceback (most recent call last):
  File "D:\Pycharm Professial\PyCharm 2021.3.3\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\Pycharm Professial\PyCharm 2021.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/Pycharm Professial/HCCF-main/labcode_efficient.py", line 301, in <module>
    handler.LoadData()
  File "D:\Pycharm Professial\HCCF-main\DataHandler.py", line 64, in LoadData
    with open(self.trnfile, 'rb') as fs:
FileNotFoundError: [Errno 2] No such file or directory: 'Data/yelp/trnMat.pkl'
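No reply is recorded here; the error itself just says that the relative path Data/yelp/trnMat.pkl does not exist under the current working directory. A quick hedged check before launching training (the expected file layout below is inferred only from the traceback):

# Quick check that the Yelp training matrix is where the loader expects it,
# relative to the directory the script is launched from.
from pathlib import Path

expected = Path('Data/yelp/trnMat.pkl')
if not expected.exists():
    raise SystemExit(
        f"{expected} not found. Place the extracted Yelp data under "
        f"{expected.parent}/ or run the script from the repository root."
    )
print("found", expected.resolve())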

dataset

Hi, there seems to be a problem with the third dataset. Could you please take a look?

Questions about the experimental results

Hi akaxlh, first of all, thank you for making the code public for everyone to learn from!
I have some questions about the experimental results in the paper. Comparing them with the results reported in the LightGCN and SGL papers, I found that the numbers the HCCF paper reports on the Yelp and Amazon-book datasets are somewhat lower than those in the original papers. In terms of data splitting, HCCF and LightGCN are the same, both 7:1:2. I noticed that the HCCF paper uses a 20-core setting for Amazon-book, whereas LightGCN uses 10-core, which may be the reason for the lower results. Are there any other settings that differ and lead to the lower numbers, or have I missed some detail in the paper? Any guidance would be appreciated.

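For readers unfamiliar with the terminology in this issue, k-core filtering repeatedly removes users and items with fewer than k interactions until every remaining node has at least k; a 20-core dataset is therefore denser than a 10-core one, which alone can shift absolute metric values. The sketch below is a generic k-core filter over an interaction list, not the preprocessing actually used by HCCF or LightGCN.

# Generic k-core filtering of a user-item interaction list (illustrative only,
# not the preprocessing used by HCCF or LightGCN).
from collections import Counter

def k_core(interactions, k):
    # interactions: list of (user_id, item_id) pairs
    pairs = list(interactions)
    while True:
        u_cnt = Counter(u for u, _ in pairs)
        i_cnt = Counter(i for _, i in pairs)
        kept = [(u, i) for u, i in pairs if u_cnt[u] >= k and i_cnt[i] >= k]
        if len(kept) == len(pairs):      # nothing removed -> converged
            return kept
        pairs = kept

data = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 2)]
print(len(k_core(data, 2)))   # keeps only the dense core of the interactions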

Question about the ml10m dataset

Hello, thank you very much for sharing the processed datasets and code for us to learn from!
Regarding the second dataset, ml10m: after unzipping the package I found many parts, but when I exported the data in each part as an adjacency matrix, they all turned out to be identical. Do these parts all refer to the same ml10m dataset? And what is the purpose of splitting it into several parts?
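The thread records no answer. Multi-part archives normally need to be extracted together into one dataset rather than read part by part; if the goal is simply to confirm that the exported matrices really are identical, a small self-contained check on scipy sparse matrices could look like the sketch below (the example matrices are made up, not the ml10m data).

# Illustrative only: verify whether two sparse adjacency matrices are truly
# identical (same shape and no differing entries). Not tied to the ml10m files.
import numpy as np
from scipy.sparse import csr_matrix

def identical(a, b):
    return a.shape == b.shape and (a != b).nnz == 0

a = csr_matrix(np.array([[0, 1], [1, 0]]))
b = csr_matrix(np.array([[0, 1], [1, 0]]))
c = csr_matrix(np.array([[0, 1], [0, 0]]))

print(identical(a, b))   # True  -> the two "parts" hold the same matrix
print(identical(a, c))   # False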
