huangtinglin / knowledge_graph_based_intent_network
Learning Intents behind Interactions with Knowledge Graph for Recommendation, WWW2021
In the paper, the user representation is obtained from the Intent Graph. Is the IG composed only of users, intents, and items (without the entities attached to items)?
And how is the high-order user representation, e.g. e_u^(3), calculated?
Is it: first compute the high-order item representation e_i^(2), then obtain e_u^(3) via equations (6)(7)(8) in the paper?
Thanks for your reply.
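For what it's worth, here is a toy sketch of the alternating layer-wise scheme the question asks about, where the layer-(l+1) user representation is aggregated from layer-l item representations. All matrices are made-up stand-ins, not the authors' exact implementation:

```python
import torch

n_users, n_items, dim = 4, 6, 8
ui = torch.rand(n_users, n_items)      # hypothetical user-item interaction graph
e_item = torch.rand(n_items, dim)      # e_i^(0)
e_user = torch.rand(n_users, dim)      # e_u^(0)

# Each layer first refines items (from the KG, simplified away here),
# then users aggregate from their interacted items, so e_u^(3)
# indeed depends on e_i^(2) as the question suggests.
for _ in range(3):
    e_user = ui @ e_item               # e_u^(l+1) built from e_i^(l)
    e_item = ui.t() @ e_user           # stand-in for the KG-side item update
```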
Hello! Thanks to your team for providing the code for everyone to study! I have two questions:
① In the paper, e_p, the intent embedding, should be computed from the relation embeddings, but the code directly defines a latent_emb and uses it as-is, which seems inconsistent with the paper (or is the intent vector not latent_emb?).
② When computing intent independence, the paper uses the already-obtained e_p, but the code computes it with the parameter disen_weight_att. I understand this parameter to be w_rp, a trainable weight from relation r to intent p, which also differs somewhat from the paper.
Hello:
After reading about KGIN's explainability, I'd like to know what KGIN's practical use is. Can it recommend a new item to a user? If so, how, and where is this reflected in the code or the paper?
Many thanks for your answer.
I'm sorry to bother you!
Could you please tell me which particular Last-FM dataset you used in your paper? Is it the LFM-1B UGP dataset at the website link you gave?
PS: The address of the lastfm dataset provided in the KGAT paper is https://grouplens.org/datasets/hetrec-2011/. Can the data from either of these two addresses be used?
What is latent_emb used for? Is it the user intent embedding? But shouldn't the user intent embedding be disen_weight according to the definition in the paper? Looking forward to your answer. Thank you!
Thank you for your open-source work; I have learned a lot from it. I have one question I hope you can answer.
While running comparison experiments, I found that the KGIN and KGAT papers use the same Amazon and Last-FM datasets and the same evaluation metrics, so why are the recall values in the two papers close while the ndcg values differ so much?
Thank you very much for open-sourcing the code.
I'd like to ask about the hardware used in your experiments (GPU, memory size, etc.). Looking forward to your reply.
Thanks again for the open-source code!
While running experiments with your code recently, I found that two runs with completely identical settings do not produce exactly the same evaluation results. Is this normal? I noticed your code already fixes random seeds and includes some reproducibility settings (I kept all of that code unchanged).
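Not the authors' answer, but for reference: run-to-run drift often comes from nondeterministic CUDA kernels rather than missing seeds. A minimal seeding sketch, assuming stock PyTorch:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 2021):
    # Fix every random source a typical PyTorch training loop touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels; note that some GPU ops
    # (e.g. sparse mm, scatter-style reductions) can stay nondeterministic.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(2021)
```

Even with all of the above set, GPU runs of sparse/scatter-heavy models can still differ slightly between runs.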
I'd like to ask the authors: why are the results so different when the experimental datasets are the same? Do you know the reason? "Knowledge graph convolutional networks for recommender systems" (a 2019 paper) usually evaluates by ranking all items excluding the training set, yet its reported results are not even on the same scale.
Hello, thanks to your team for providing the code. I have a question: the Recommender defines latent_emb, but according to Equations 1 and 2 in the paper, shouldn't latent_emb be determined by the relation vectors? Hoping for your reply.
How is the explainability obtained?
Hello, and thanks to your team for the KGIN code! I have a question: in the KGIN code, Equation 7 is implemented as
user_agg = torch.sparse.mm(interact_mat, entity_agg)
user_agg = user_agg * (disen_weight * score).sum(dim=1) + user_agg
My understanding is that the first line computes the e_i term of Equation 7; does the trailing + user_agg in the second line correspond to adding e_i at the end of Equation 7? I find it hard to understand why this is done.
I saw someone raise this in an earlier issue, but I still don't understand what you meant by the self-connection.
Hoping for your explanation and correction, thank you!
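For readers following along, here is a shape-level sketch of what those two lines compute, with made-up tensors in place of the model's. Reading the trailing + user_agg as a residual/self-connection on the aggregated term is an interpretation, not the authors' statement:

```python
import torch

n_users, n_items, n_intents, dim = 4, 6, 3, 8

# Made-up stand-ins for the tensors in KGIN.py.
interact_mat = torch.rand(n_users, n_items).to_sparse()   # user-item graph
entity_agg = torch.rand(n_items, dim)                     # item-side messages
disen_weight = torch.rand(n_intents, dim)                 # intent embeddings
score = torch.softmax(torch.rand(n_users, n_intents), dim=-1).unsqueeze(-1)

# Sum of neighboring item representations (the e_i term of Eq. (7)).
user_agg = torch.sparse.mm(interact_mat, entity_agg)      # [n_users, dim]
# Modulate by the user's intent mixture, then add the unmodulated
# aggregate back as a residual (self-) connection.
user_agg = user_agg * (disen_weight * score).sum(dim=1) + user_agg
```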
Is the link to the paper wrong?
Hi, Huang.
The process of Relational Path-aware Aggregation is implemented on lines 25-50 of your code (KGIN.py), and the user and item representations are then summed up as the final representations (Equation (13) in your paper) on lines 202-203 of KGIN.py. But I can't find the process of Capturing Relational Paths (Equation (12) in your paper). Where is this process implemented in your code?
Thank you very much for reading my questions. Your paper is very helpful to me. I look forward to your reply.
Hello! Thank you very much for sharing the code. I noticed that your code contains saving logic:
"""save weight"""
if ret['recall'][0] == cur_best_pre_0 and args.save:
    torch.save(model.state_dict(), args.out_dir + 'model_' + args.dataset + '.ckpt')
But I found that the next training run does not load the saved parameters. Does your code implement restoring the model?
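As far as I can tell, the released code only saves; resuming would need an explicit load_state_dict call. A minimal sketch with a stand-in model (model_demo.ckpt is a hypothetical path):

```python
import os
import torch
import torch.nn as nn

# Stand-in model; the real code would construct Recommender(...) instead.
model = nn.Linear(4, 2)

ckpt_path = 'model_demo.ckpt'                 # hypothetical path
torch.save(model.state_dict(), ckpt_path)     # mirrors the save above

# To resume training, the state dict must be loaded back explicitly.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(ckpt_path))
os.remove(ckpt_path)
```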
As you mention in issue #14, one can get each intent's score, but how can I get a relation's score within an intent?
In the _bi_norm_lap method of the dataloader, shouldn't the Laplacian matrix there be computed as I - D^{-1/2} A D^{-1/2}?
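For context: GCN-style models typically propagate with the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}; the normalized Laplacian the question mentions would be I minus that. A small scipy sketch of the two quantities (toy adjacency, not the repo's data):

```python
import numpy as np
import scipy.sparse as sp

# Toy undirected adjacency matrix.
A = sp.coo_matrix(np.array([[0., 1., 1.],
                            [1., 0., 0.],
                            [1., 0., 0.]]))

# Degree-based symmetric normalization, guarding isolated nodes.
rowsum = np.asarray(A.sum(axis=1)).flatten()
d_inv_sqrt = np.power(rowsum, -0.5)
d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0
D_inv_sqrt = sp.diags(d_inv_sqrt)

A_norm = D_inv_sqrt @ A @ D_inv_sqrt           # D^{-1/2} A D^{-1/2}
L_sym = sp.eye(A.shape[0]) - A_norm            # I - D^{-1/2} A D^{-1/2}
```

Whether the propagation matrix should be A_norm or L_sym is exactly the point the question raises; many implementations use A_norm while calling the method a "Laplacian".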
May I ask you about drawing Figure 4? How can I create a similar plotting style?
Thanks a lot in advance if you could provide the script!
Best
Thank you for adding the knowledge graph to the original data, but I wonder whether all the sequences in the test set and training set are sorted by timestamp.
Can you provide the code that processes the raw data, or a timestamp for each interaction?
Thanks to the authors for providing the code! I noticed that the paper compares the KGIN model against an RGCN model adapted for the recommendation task. Could you provide the code of the RGCN model used for comparison? Much appreciated!
Hello, I have recently been studying the code of this work. When running training with the given command-line arguments, the test_one_user(x) function in evaluate.py raises the error "name 'test_user_set' is not defined" at the line "user_pos_test = test_user_set[u]". I've been debugging for a while without finding the cause; do you happen to know why?
Can you tell me about kg_final.txt? Are the head-entity and tail-entity ids in kg_final.txt independent of each other or not? My understanding is that user ids and item ids are independent in train.txt, where the item ids are kept the same as the item ids in kg_final.txt, and the two columns of the knowledge graph are not independent.
First of all, thank you very much for your work and the open-source code; I have learned a lot from it.
I'm a bit confused about the regularization term in the code, because it differs from what the paper describes. The regularization term in the code only covers the user and item embeddings; it seems not to include the other model parameters. What is your view on this?
Hi~ I'd like to ask: in the Last-FM dataset, are the items finally used for recommendation (the contents of item_list.txt) the tracks from the original data?
Thanks~
For each interaction pair (u, i), after introducing intents, does it become (u, p1, i), (u, p2, i), (u, p3, i), (u, p4, i)? In other words, do all four intents p in the intent set contribute to every interaction?
Regarding utils/data_loader.py: should the comment # including items + users on line 69 be changed to # including items? The variable n_entities should not include users, only items and other entities.
n_entities = max(max(triplets[:, 0]), max(triplets[:, 2])) + 1  # including items + users
n_nodes = n_entities + n_users
->
n_entities = max(max(triplets[:, 0]), max(triplets[:, 2])) + 1  # including items
n_nodes = n_entities + n_users
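A toy illustration of the id layout the question assumes (ids are made up): KG triplets index only items and other entities, and users occupy a separate id block appended afterwards.

```python
import numpy as np

# Hypothetical (head, relation, tail) triplets; head/tail are entity ids.
triplets = np.array([[0, 0, 5],
                     [3, 1, 7],
                     [2, 0, 4]])
n_users = 10

# Entity ids cover items and other KG entities, but not users,
# so the node count is entities plus the separate user block.
n_entities = max(max(triplets[:, 0]), max(triplets[:, 2])) + 1
n_nodes = n_entities + n_users
```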
Hello, may I ask what your experimental configuration was? When I run the code directly, it is more than 3x slower than the logs you released, and I don't know where the problem is. Thank you!!!
I ran your code several times and always get a negative cor value:
start training ... using time 496.1616, training loss at epoch 0: 298.6853, cor: -103918.234375
or
`
start training ...
using time 636.8174, training loss at epoch 0: 299.3976, cor: -103918.351562
+-------+-------------------+--------------------+--------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+
| Epoch | training time | tesing time | Loss | recall | ndcg | precision | hit_ratio |
+-------+-------------------+--------------------+--------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+
| 1 | 636.8162143230438 | 137.00747871398926 | 194.09571838378906 | [0.07513288 0.09760893 0.11306548 0.12481185 0.13517161] | [0.06848064 0.07576022 0.08097082 0.08491058 0.08831252] | [0.02926674 0.02149495 0.01776641 0.01546985 0.01389799] | [0.26474582 0.34498854 0.39756429 0.43486379 0.46601035] |
+-------+-------------------+--------------------+--------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+----------------------------------------------------------+
using time 636.9025, training loss at epoch 2: 170.9751, cor: -113220.000000
using time 636.6866, training loss at epoch 3: 157.5762, cor: -113220.000000
using time 637.4699, training loss at epoch 4: 149.2337, cor: -113220.000000
`
It seems something is wrong?
Hello:
I'd like to ask where the 1/|N_u| factor in Equation 7 is reflected in the code. Is it in
user_agg = user_agg * (disen_weight * score).sum(dim=1) + user_agg  # [n_users, channel]
or am I looking in the wrong place? Please kindly explain.
Looking forward to your reply.
I wonder how you extract the entities of the knowledge graph. Also, I cannot find the org_id of item_list.txt in the original dataset.
Can you explain how to process this dataset, or share the code?
1.
I noticed that in the code
adj = sp.coo_matrix((vals, (np_mat[:, 0], np_mat[:, 1])), shape=(n_nodes, n_nodes))
shouldn't np_mat[:, 0] and np_mat[:, 1] also have n_users added, as is done for the CF graph? Users are not remapped into the entities, so the first n_users rows and columns of the sparse matrix should be reserved for users.
Compare the data-preprocessing implementation in KGAT:
K, K_inv = _np_mat2sp_adj(np.array(self.relation_dict[r_id]), row_pre=self.n_users, col_pre=self.n_users)
I'm not sure whether I have misunderstood.
2.
I noticed that in the code
user_agg = user_agg * (disen_weight * score).sum(dim=1) + user_agg  # [n_users, channel]
the user-embedding computation seems to deviate from Eq. (7) in the paper.
The paper describes a weighted sum over the user's items and intents, while the code, as far as I can tell, sums first and then weights.
Also, what is the purpose of the trailing + user_agg?
My understanding is that the aggregation is already implemented in
entity_res_emb = torch.add(entity_res_emb, entity_emb) user_res_emb = torch.add(user_res_emb, user_emb)
Hoping for your explanation and correction. Thank you!
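On the first point, a toy sketch of what offsetting the KG indices by n_users (as KGAT's row_pre/col_pre does) would look like; all numbers are made up:

```python
import numpy as np
import scipy.sparse as sp

n_users, n_entities = 3, 5
n_nodes = n_users + n_entities

# Hypothetical KG edge list over entity ids in [0, n_entities).
np_mat = np.array([[0, 2],
                   [1, 4]])
vals = np.ones(len(np_mat))

# Shift entity ids past the user block so the first n_users
# rows/columns of the joint adjacency stay reserved for users.
adj = sp.coo_matrix((vals,
                     (np_mat[:, 0] + n_users, np_mat[:, 1] + n_users)),
                    shape=(n_nodes, n_nodes))
```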