gce-gnn's Issues

Could you release the FGNN code?

Excellent work! I noticed that you reproduced FGNN in your paper because the original FGNN code is incomplete. Could you release your FGNN code? Thank you.

Question about message passing and aggregation

Hello, I'd like to ask: in the paper, information propagation on the global graph only happens between a session node v_i and v_i's neighbors, so what does "multi-hop" in the paper mean?
Normally, for a node to use information from its 2-hop neighbors, the information must first be passed from the 2-hop neighbors to the 1-hop neighbors, and only in the next convolution layer does it propagate from the 1-hop neighbors to the current node. But the paper only describes message passing between the current node and its 1-hop neighbors.
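A minimal sketch of the usual reading of "multi-hop" (illustrative only, not the authors' implementation): stacking k rounds of purely 1-hop aggregation lets information from k-hop neighbors reach a node, because each round's output already contains the previous hop. Here `adj` (a normalized adjacency matrix) and `h` (node embeddings) are assumed inputs.

    import torch

    # Sketch only: each matmul mixes a node with its 1-hop neighbors;
    # after the k-th round, h depends on k-hop neighborhoods.
    def propagate(h: torch.Tensor, adj: torch.Tensor, num_layers: int) -> torch.Tensor:
        for _ in range(num_layers):
            h = torch.matmul(adj, h)
        return h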

Validation

Excuse me, does the model that produced the results reported in the paper use the validation set for training? Were the hyperparameters first tuned on the validation set, and the model then retrained on the combined training and validation sets? I can't seem to reproduce the results in the paper.

dataset problem

Hello, I have searched in many places but cannot find the dataset in the code files. Could you provide dataset15.csv?

About the datasets

tmall_data.csv and tmall/dataset15.csv are not available at the dataset URL given in the paper. Could you provide them, along with the nowplaying.csv used in the experiments?
Thanks 🙏

Where to concatenate the content features of a node? Should it be before the attention network?

If there are content features for a node, such as product descriptions and categorical information, where should this frozen embedding (e.g., a sentence-embedding matrix of size number_of_nodes × embedding_dimension) be concatenated into the model?

My understanding is to concatenate it to the node embedding before the attention network, similar to the "Reversed Position Embedding", so the representation becomes: the node's representation from the graph + reversed position embedding + frozen content-feature embedding. Is this understanding correct?

Thank you
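A minimal sketch of the concatenation the question describes (illustrative only; names such as content_emb are hypothetical and not part of the GCE-GNN code):

    import torch
    import torch.nn as nn

    # Hypothetical sketch, not the authors' code: append a frozen content
    # embedding alongside the graph representation and the reversed position
    # embedding before the attention network.
    num_nodes, d_graph, d_pos, d_content, max_len = 1000, 100, 100, 384, 50

    content_emb = nn.Embedding(num_nodes, d_content)
    content_emb.weight.requires_grad = False  # frozen, e.g. precomputed sentence embeddings
    pos_emb = nn.Embedding(max_len, d_pos)    # reversed position embedding

    def build_attention_input(h_graph, item_ids, positions):
        # h_graph: (batch, seq_len, d_graph) node representations from the graph
        c = content_emb(item_ids)   # (batch, seq_len, d_content)
        p = pos_emb(positions)      # (batch, seq_len, d_pos)
        # Concatenate along the feature axis; a linear projection would then
        # map this back to the attention network's expected dimension.
        return torch.cat([h_graph, p, c], dim=-1)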

Tmall数据集问题

Hello! I processed the downloaded Tmall data with the preprocessing code you provided (process_tmall.py). Why does the result differ from the data you uploaded in datasets\Tmall? Your uploaded training set is 10 MB, but the one I produced myself is only 8.7 MB.

n_node in the datasets

Good job! Could I ask a question? Why is n_node equal to the number of items + 1 for Diginetica, but equal to the number of items for Nowplaying and Tmall? I took the item counts from the GCE-GNN paper.
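One plausible explanation, offered as an assumption rather than the authors' answer: in session-recommendation code, item IDs are usually 1-indexed with 0 reserved for padding, so the embedding table needs items + 1 rows for index 0 to be addressable.

    import torch.nn as nn

    # Assumption, not confirmed by the authors: item IDs start at 1 and
    # 0 is the padding ID, so the table needs one extra row.
    num_items = 40000  # illustrative count
    embedding = nn.Embedding(num_items + 1, 100, padding_idx=0)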

About the Tmall dataset

Hi,
Thanks for your sharing of this paper!
Regarding the Tmall dataset: I found the website you provide in your paper and downloaded the data, but the test set has no labels; only the training set does. Would you mind sharing the Tmall test set with the sessions' labels?
Thanks!

About the metrics

I have read several papers on graph neural networks for session-based recommendation and found the metric names inconsistent. For example, the P in this paper is written as Recall in other places, even though the open-source code is the same. Is the P computed in this paper the same as the computation in the open-source code? If so, why does it differ from the values reported by SR-GNN?

About preprocessing on Nowplaying

Hi,
Thanks for your sharing of this paper!
In your paper, I can't find whether you sort the sessions by timestamp in the Nowplaying dataset.
The preprocessing is unclear. Would you mind sharing the preprocessed Nowplaying dataset?
Thanks!

Best regards

Data_preprocess

Excuse me, I currently plan to add extra information from the dataset (such as timestamps) to the session-based recommendation task to improve accuracy, and I plan to cite and compare against your GCE-GNN model. Following the paper, I downloaded the raw Tmall data from the IJCAI-15 competition (https://tianchi.aliyun.com/dataset/dataDetail?dataId=42).
The raw Tmall data contains the information I want to use, but I cannot find the file 'tmall_data.csv' referenced in your preprocessing script 'process_tmall.py'. Could you please send me the code that preprocesses the raw data into it, or, if convenient, the processed file 'tmall_data.csv'? Thank you very much!

Equation (2) in the paper

The equation image in the paper and the code below appear inconsistent; is there an error in the paper's formula?

    alpha = torch.matmul(torch.cat([extra_vector.unsqueeze(2).repeat(1, 1, neighbor_vector.shape[2], 1),
                                    neighbor_vector, neighbor_weight.unsqueeze(-1)], -1), self.w_1).squeeze(-1)
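Reading the snippet directly (this renders only what the code computes; it is not a reconstruction of the paper's Eq. (2)): with \mathbf{s} the session feature (extra_vector), \mathbf{h}_{v_j} a neighbor embedding (neighbor_vector), and w_{ij} the edge weight (neighbor_weight), the code evaluates

    \alpha_{ij} = \mathbf{w}_1^{\top}\,\big[\,\mathbf{s}\;;\;\mathbf{h}_{v_j}\;;\;w_{ij}\,\big]

i.e. a single linear map over the concatenation of the three inputs.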

Question about dataset 'Tmall'

Thanks for sharing this awesome project! In the RecSys work https://github.com/andrebola/session-rec-effect, the information about the 'Tmall' dataset is inconsistent with that in your paper, and you don't provide code to prepare the dataset. I would like to know where this inconsistency comes from, and I hope you can add guidance to the README for evaluating this project on the Tmall dataset. Many thanks in advance!

Question about metric names

Hello, I would like to ask: is the metric written as MRR equivalent to what some papers call MMR, and is Recall equivalent to HR?
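For reference, a minimal sketch of both metrics as they are commonly computed in session-based recommendation (my own sketch, not the repository's evaluation code); with exactly one target item per session, Recall@K and HR@K coincide:

    import numpy as np

    # Sketch only: each session has one target item, so Recall@K equals HR@K
    # (both count whether the target appears in the top-K list); MRR@K
    # averages the reciprocal rank of the target, scoring 0 outside the top K.
    def recall_and_mrr_at_k(top_k_lists, targets, k=20):
        hits, rr = [], []
        for preds, t in zip(top_k_lists, targets):
            preds = list(preds[:k])
            if t in preds:
                hits.append(1.0)
                rr.append(1.0 / (preds.index(t) + 1))
            else:
                hits.append(0.0)
                rr.append(0.0)
        return np.mean(hits), np.mean(rr)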

Why reverse the sequence?

In the preprocessing code, the sequences are reversed:

    # Reverse each session; right-pad with 0 up to max_len, or keep only the
    # last max_len items when the session is longer than max_len.
    us_pois = [list(reversed(upois)) + [0] * (max_len - le)
               if le < max_len
               else list(reversed(upois[-max_len:]))
               for upois, le in zip(inputData, len_data)]

Why reverse the sequence?
Thank you.
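For concreteness, a toy run of that comprehension (max_len = 5 is an illustrative choice, not the repository's setting):

    inputData = [[3, 8, 2], [1, 4, 7, 7, 9, 5]]
    len_data = [3, 6]
    max_len = 5
    us_pois = [list(reversed(upois)) + [0] * (max_len - le)
               if le < max_len
               else list(reversed(upois[-max_len:]))
               for upois, le in zip(inputData, len_data)]
    print(us_pois)  # [[2, 8, 3, 0, 0], [5, 9, 7, 7, 4]]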

Tmall dataset and preprocessing

The dataset downloaded from the link given in the paper does not match the data-processing script that goes with it. Is the dataset link correct, or is there an additional processing step?

Best results

Hello. I would like to ask: when running the experiments, do you search for the maximum of each metric (Recall@N, MRR@N) separately (where the hyperparameters achieving the two maxima differ), or do you pick a single set of hyperparameters under which both metrics perform well overall?

Question w.r.t data processing

According to your code in utils.py:

    for i in np.arange(len(u_input) - 1):
        u = np.where(node == u_input[i])[0][0]
        u_adj[u][u] = 1
        if u_input[i + 1] == 0:
            break
        v = np.where(node == u_input[i + 1])[0][0]
        if u_adj[u][v] == 2:
            u_adj[v][u] = 4
        else:
            u_adj[v][u] = 3
        if u_adj[v][u] == 3:
            u_adj[u][v] = 4
        else:
            u_adj[u][v] = 2
        u_adj[v][v] = 1

It seems that u_adj[u][v] can never be set to 2. I tested the code on Diginetica, and u_adj[u][v] == 2 never occurs. Is this a bug, or can you show an example where it happens? What does the value 2 stand for?

Thank you.
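A short trace supports the report (my analysis, not an authoritative answer): starting from a zero matrix, the first branch writes u_adj[v][u] = 4 only when u_adj[u][v] is already 2, and the second branch writes u_adj[u][v] = 2 only when u_adj[v][u] was just set to 4; since no 2 exists initially, the precondition is circular and the value 2 is never reached. A quick randomized check of the snippet:

    import numpy as np

    # My own sanity check, mirroring the snippet above: random sessions never
    # produce the value 2 anywhere in u_adj.
    rng = np.random.default_rng(0)
    for _ in range(1000):
        u_input = rng.integers(1, 6, size=8)
        node = np.unique(u_input)
        u_adj = np.zeros((len(node), len(node)), dtype=int)
        for i in np.arange(len(u_input) - 1):
            u = np.where(node == u_input[i])[0][0]
            u_adj[u][u] = 1
            if u_input[i + 1] == 0:
                break
            v = np.where(node == u_input[i + 1])[0][0]
            if u_adj[u][v] == 2:
                u_adj[v][u] = 4
            else:
                u_adj[v][u] = 3
            if u_adj[v][u] == 3:
                u_adj[u][v] = 4
            else:
                u_adj[u][v] = 2
            u_adj[v][v] = 1
        assert 2 not in u_adj  # the circular precondition makes 2 unreachable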

Question about the loss

Hello author,
In the code, the loss is computed as loss = model.loss_function(scores, targets - 1). Are the scores here the prediction results? (I did not find a softmax layer.) And what is the intent of targets - 1?
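If it helps, one common pattern in session-recommendation code, offered as an assumption about this repository rather than a confirmed answer: loss_function is nn.CrossEntropyLoss, which applies log-softmax internally (so scores can be raw logits and no explicit softmax layer is needed), and targets - 1 shifts 1-indexed item IDs to the 0-indexed class labels the loss expects:

    import torch
    import torch.nn as nn

    # Assumption, not confirmed by the authors: a typical setup.
    loss_function = nn.CrossEntropyLoss()      # log-softmax happens inside the loss
    scores = torch.randn(4, 100)               # raw logits over 100 items
    targets = torch.tensor([1, 5, 99, 100])    # item IDs 1-indexed (0 = padding)
    loss = loss_function(scores, targets - 1)  # shift to 0-indexed class labels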
