cciiplab / gce-gnn
The source code for "Global Context Enhanced Graph Neural Network for Session-based Recommendation".
Hello, I have a question. In the paper, information propagation on the global graph only happens between a session node v_i and its direct neighbors. What does "multi-hop" mean in the paper, then?
Normally, for a node to use information from its 2-hop neighbors, that information must first pass from the 2-hop neighbors to the 1-hop neighbors, and only in the next convolution layer from the 1-hop neighbors to the current node. Yet the paper only describes information passing between the current node and its 1-hop neighbors.
Excellent work! I found that you reproduced FGNN in your paper, since the original FGNN code is incomplete. Could you release your FGNN implementation? Thank you.
Good job! Could I ask a question? Why is n_node equal to the item count plus 1 for Diginetica, but equal to the item count for Nowplaying and Tmall? I took the item counts from the GCE-GNN paper.
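One plausible explanation (an assumption on my part, not confirmed by the authors): item IDs in this family of codebases are usually 1-based, with 0 reserved for padding, so the embedding table needs one extra row beyond the item count. A dataset whose stated count already includes that slot would need no +1. A minimal sketch:

```python
import numpy as np

n_items = 43097                       # e.g. the item count commonly reported for Diginetica
n_node = n_items + 1                  # +1 row so padding ID 0 has its own slot
embedding = np.zeros((n_node, 100))   # (n_node, hidden_dim) lookup table

# valid indices are 0 (padding) through n_items (largest real item ID)
max_item_id = n_items
assert max_item_id < n_node
```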
Excuse me, I currently plan to add extra information from the dataset (such as timestamps) to the session-based recommendation task to improve recommendation accuracy, and I intend to cite and compare your GCE-GNN model. Following the paper, I downloaded the raw Tmall data from the IJCAI-15 competition (https://tianchi.aliyun.com/dataset/dataDetail?dataId=42).
The raw Tmall data contains the information I want to use, but I cannot find the file 'tmall_data.csv' referenced in your preprocessing script 'process_tmall.py'. Could you share the code that preprocesses the raw data, or the processed file 'tmall_data.csv', if convenient? Thank you very much!
Thanks for sharing this awesome project! In the RecSys work https://github.com/andrebola/session-rec-effect, the statistics for the Tmall dataset are inconsistent with those in your paper, and no code is provided to prepare the dataset. I would like to know where that inconsistency comes from, and I hope you can add guidance to the README on how to evaluate this project on Tmall. Great thanks in advance!
If there are content features for each node, such as product descriptions or category information, where should this frozen embedding (e.g. a sentence embedding of shape n_node × embedding_dim) be concatenated into the model?
My understanding is to concatenate it to the node embedding before the attention network, similar to the reversed position embedding, so a node's representation becomes: representation from the graph + reversed position embedding + frozen content-feature embedding. Is this understanding correct?
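The concatenation the question describes could be sketched as follows. This is purely illustrative (the shapes, dimension names, and the decision to concatenate on the feature axis are my assumptions, not the authors' design):

```python
import numpy as np

batch, seq_len = 2, 4
d_graph, d_pos, d_content = 8, 8, 16

h_graph = np.random.randn(batch, seq_len, d_graph)    # node repr. from the GNN
pos_emb = np.random.randn(batch, seq_len, d_pos)      # reversed position embedding
content = np.random.randn(batch, seq_len, d_content)  # frozen content-feature embedding

# join on the feature axis before the attention network
h = np.concatenate([h_graph, pos_emb, content], axis=-1)
# any downstream attention weights must then expect d_graph + d_pos + d_content inputs
```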
Thank you
loss = model.loss_function(scores, targets - 1)
Why is 1 subtracted from targets?
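A likely reason (my assumption, not an authors' statement): item IDs in the preprocessed data start at 1 with 0 reserved for padding, while the score matrix has one column per real item, indexed from 0, so subtracting 1 maps item ID k to class index k - 1 before the cross-entropy loss. A toy trace with NumPy in place of the PyTorch loss:

```python
import numpy as np

n_items = 5
scores = np.random.randn(3, n_items)   # model outputs: one column per real item
targets = np.array([1, 3, 5])          # 1-based item IDs (0 would be padding)

class_idx = targets - 1                # shift to 0-based column indices

# cross-entropy computed by hand: log-softmax, then pick the true column
log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(3), class_idx].mean()
```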
I have read several papers on graph neural networks for session-based recommendation and found the metric names inconsistent. For example, the P metric in this paper is written as Recall in some other places, yet the open-source code is identical. Is the P reported in this paper computed exactly as in the released code? And why does it differ from the values reported by SR-GNN?
When running python build_graph.py, sample_num is known to be 12 for Diginetica, but the values for the Tmall and Nowplaying datasets are not given. Could the authors provide the sample_num settings used when building the graph for Tmall and Nowplaying?
Hello, I would like to ask: is the metric MRR the same thing as what is sometimes written as MMR, and is Recall equivalent to HR?
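For reference, "MMR" is almost always a typo for MRR (Mean Reciprocal Rank), and when each session has exactly one ground-truth next item, Recall@K coincides with HR@K (hit rate). A small sketch of both metrics (the function name and shapes are my own, not from the repository):

```python
import numpy as np

def metrics_at_k(scores, targets, k=20):
    """scores: (batch, n_items); targets: 0-based true item index per session."""
    topk = np.argsort(-scores, axis=1)[:, :k]   # indices of the k highest scores
    hits = topk == targets[:, None]             # where the true item lands in top-k
    hit_any = hits.any(axis=1)
    recall = hit_any.mean()                     # equals HR@k with one target/session
    # rank of the hit (1-based); misses contribute a reciprocal rank of 0
    ranks = np.where(hit_any, hits.argmax(axis=1) + 1, np.inf)
    mrr = (1.0 / ranks).mean()
    return recall, mrr
```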
According to your code in utils.py:
for i in np.arange(len(u_input) - 1):
    u = np.where(node == u_input[i])[0][0]
    u_adj[u][u] = 1
    if u_input[i + 1] == 0:
        break
    v = np.where(node == u_input[i + 1])[0][0]
    if u_adj[u][v] == 2:
        u_adj[v][u] = 4
    else:
        u_adj[v][u] = 3
    if u_adj[v][u] == 3:
        u_adj[u][v] = 4
    else:
        u_adj[u][v] = 2
    u_adj[v][v] = 1
It seems that u_adj[u][v] can never be set to 2. I tested the code on Diginetica and u_adj[u][v] == 2 never occurs. Is this a bug, or can you show us an example where it happens? What does the value 2 stand for?
Thank you.
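The questioner's observation can be reproduced in isolation. The sketch below wraps the quoted loop in a small function (the wrapper and the toy session are mine; the loop body is the quoted code) and checks a session with a repeated transition. Tracing the branches: the first `if` tests `u_adj[u][v] == 2` before 2 has ever been written, so every edge ends up as a 3/4 pair and the value 2 never appears.

```python
import numpy as np

def build_adj(u_input):
    """Run the quoted adjacency-building loop on one (unpadded) session."""
    node = np.unique(u_input)
    n = len(node)
    u_adj = np.zeros((n, n), dtype=int)
    for i in np.arange(len(u_input) - 1):
        u = np.where(node == u_input[i])[0][0]
        u_adj[u][u] = 1
        if u_input[i + 1] == 0:
            break
        v = np.where(node == u_input[i + 1])[0][0]
        if u_adj[u][v] == 2:          # never true: 2 is only written below,
            u_adj[v][u] = 4           # and that branch is itself unreachable
        else:
            u_adj[v][u] = 3
        if u_adj[v][u] == 3:          # always true right after the write above
            u_adj[u][v] = 4
        else:
            u_adj[u][v] = 2
        u_adj[v][v] = 1
    return u_adj
```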
Hello authors,
loss = model.loss_function(scores, targets - 1)
In the loss computation, is scores the raw prediction? (I did not find a softmax layer.) And what is the purpose of targets - 1?
Excuse me. Does the model that produced the results reported in the paper use the validation set for training? Are the hyperparameters first tuned on the validation set, and the model then retrained on the training set plus the validation set? I cannot seem to reproduce the results in the paper.
us_pois = [list(reversed(upois)) + [0] * (max_len - le)
           if le < max_len else list(reversed(upois[-max_len:]))
           for upois, le in zip(inputData, len_data)]
Why reverse the sequence?
Thank you.
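A small trace of the quoted padding logic on toy data (the sessions below are hypothetical). Each session is reversed, then right-padded with 0 up to max_len, or truncated to its most recent max_len items. One plausible motivation, consistent with the paper's reversed position embedding: after reversal the last-clicked item always sits at position 0, so a fixed position index means "k steps before the prediction target" in every session.

```python
inputData = [[1, 2, 3], [4, 5, 6, 7, 8, 9]]       # two toy sessions
len_data = [len(s) for s in inputData]
max_len = 5

us_pois = [list(reversed(upois)) + [0] * (max_len - le)
           if le < max_len else list(reversed(upois[-max_len:]))
           for upois, le in zip(inputData, len_data)]

# us_pois[0] == [3, 2, 1, 0, 0]   short session: reversed, then zero-padded
# us_pois[1] == [9, 8, 7, 6, 5]   long session: last 5 items kept, reversed
```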
Hi,
Thanks for your sharing of this paper!
In the paper I cannot find whether sessions in the Nowplaying dataset are sorted by timestamp.
The preprocessing is unclear. Would you mind sharing the preprocessed Nowplaying dataset?
Thanks!
Best regards
Hi,
Thanks for your sharing of this paper!
Regarding the Tmall dataset: I found the website given in your paper and downloaded the data, but the test set has no labels; only the training set does. Would you mind sharing the Tmall test set with session labels?
Thanks!
Hello. When running experiments, do you report the maximum of each metric (Recall@N, MRR@N) separately (with different hyperparameters at each metric's maximum), or do you pick one set of hyperparameters under which both metrics are jointly high?
Thank you for releasing this great research along with its source code. Would it be possible to add a license to the GCE-GNN source code? I ask this following the suggestion on one of GitHub's pages[1].
The dataset obtained from the download link in the paper does not match what the preprocessing scripts expect. Is the dataset link correct, or is there an additional preprocessing step?
tmall_data.csv and tmall/dataset15.csv are not available at the dataset URL given in the paper. Could you provide them?
Also the nowplaying.csv used in the experiments.
Thanks 🙏
Hello! I processed the downloaded Tmall data with your preprocessing script (process_tmall.py), but the result differs from the data you uploaded under datasets\Tmall. Your uploaded training set is 10 MB, while the one I produce is only 8.7 MB. Why is that?
Hello, I searched in many places but cannot find the dataset in the code files. Could you provide dataset15.csv?
Could you release these CSV files, or the code showing how you generate sessionId?