jwzhanggy / graph-bert
Source code of Graph-Bert
License: MIT License
Your work has inspired me a lot. Thank you for this very useful work. However, when I try to run the code, I get an error:
File "./Graph-Bert/code/MethodBertComp.py", line 11, in
from transformers.modeling_bert import BertPredictionHeadTransform, BertAttention, BertIntermediate, BertOutput
ModuleNotFoundError: No module named 'transformers.modeling_bert'
I guess the transformers version may not be appropriate. Please help me.
Thank you!
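A likely cause: transformers >= 4.0 renamed `transformers.modeling_bert` to `transformers.models.bert.modeling_bert`, so the old import in MethodBertComp.py fails on newer installs. A minimal compatibility shim (a sketch, not part of the repo) tries both paths; `import_bert_components` is a hypothetical helper name.

```python
import importlib

# Sketch of a version-compatible import: transformers < 4.0 exposes
# `transformers.modeling_bert`, while >= 4.0 moved the same classes to
# `transformers.models.bert.modeling_bert`.
def import_bert_components():
    for path in ("transformers.modeling_bert",
                 "transformers.models.bert.modeling_bert"):
        try:
            mod = importlib.import_module(path)
        except ImportError:
            continue  # this path does not exist in the installed version
        return (mod.BertPredictionHeadTransform, mod.BertAttention,
                mod.BertIntermediate, mod.BertOutput)
    raise ImportError("transformers does not provide the BERT submodule")
```

Alternatively, pinning an older release (e.g. `pip install "transformers<4"`) keeps the original import working.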
Although your paper says the method supports large graphs, I tested it on a graph with about 10,000 nodes and the program was terminated because it exceeded system memory. The server has 128 GB of RAM, so where might the problem lie?
I found that script_3 runs on the CPU. Why is that? It is not very slow, though.
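Assuming the script simply never moves the model to CUDA, the usual fix is to place the model and every input tensor on the same device. A minimal sketch (the `model` here is a stand-in, not the repo's class):

```python
import torch

# Pick the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Both the model and its inputs must be moved; mixing devices raises an error.
model = torch.nn.Linear(3, 2).to(device)
x = torch.zeros(2, 3).to(device)
out = model(x)
```

In the real script, the same `.to(device)` call would need to be applied wherever the Graph-BERT model and batch tensors are created.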
I have some questions from reading the Graph-BERT paper. What does the intimacy score mean, and how does it differ from a similarity score? When I inspect the intimacy matrix, the nearest neighbors get the highest intimacy scores (e.g., 1-hop neighbors always score highest). Why can the context nodes cover both local neighbors and nodes that are far away?
Could you please help figure out these problems? Thanks for your help!
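For context on this question: the paper defines intimacy via Personalized PageRank, S = alpha * (I - (1 - alpha) * A_bar)^(-1), with A_bar a normalized adjacency. Unlike a pairwise similarity, this aggregates over all paths, so a distant node reachable through many paths can still score higher than a weakly connected 1-hop neighbor. A small dense sketch (column normalization is an assumption here):

```python
import numpy as np

def intimacy_matrix(A, alpha=0.15):
    """Personalized-PageRank intimacy: S = alpha * (I - (1-alpha) * A_bar)^-1."""
    A = np.asarray(A, dtype=float)
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0      # guard against isolated nodes
    A_bar = A / col_sums               # column-normalized adjacency
    n = A.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_bar)
```

Column j of S is the PPR vector with restart at node j; each column sums to 1, and the scores decay with graph distance but never ignore multi-hop structure.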
Hi,
In your graph-bert, you calculate the intimacy matrix in step 1, then sample linkless subgraphs and embed nodes in step 2. In the process of sampling linkless subgraphs, nodes of the linkless subgraphs are sorted based on the intimacy scores.
The question is, is it possible to change this sampling method to something else? (e.g., ripple walk sampling algorithm. I know this is one of your great works.)
If yes, how and where should I modify the code to implement that?
Thank you in advance.
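To sketch what such a swap might look like: the default batching keeps, for each target node, the k nodes with the highest intimacy scores, so a replacement sampler only needs to return k context-node ids instead. The `sample_fn` plug-in point below is hypothetical, not an existing hook in the repo:

```python
import numpy as np

def build_context(S, node, k, sample_fn=None):
    """Return k context-node ids for `node`.

    Default: top-k by intimacy score (column of S). Passing `sample_fn`
    (e.g., a ripple-walk sampler) replaces the selection strategy.
    """
    if sample_fn is not None:
        return sample_fn(S, node, k)
    scores = S[:, node].astype(float).copy()
    scores[node] = -np.inf             # exclude the target node itself
    return np.argsort(-scores)[:k]     # indices of the k largest scores
```

In the repo, the change would go wherever the top-k intimacy ranking is computed during subgraph batching, keeping the downstream ordering-by-intimacy step intact.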
Hi! Your paper states that:
"In the experiments, we first pre-train GRAPH-BERT based on
the node attribute reconstruction task with 200 epochs, then
load and pre-train the same GRAPH-BERT model again based
on the graph structure recovery task with another 200 epochs."
However, script_2_pre_train.py trains two different models. Could you please specify whether you applied those models separately to the downstream task?
If your paper is describing the workflow correctly, what should I do to train one model on both tasks?
Thank you in advance for the answer.
Yours sincerely,
Irina Nikishina.
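The sequential workflow the paper describes could be sketched as below. This is an assumption about how script_2_pre_train.py would need to change, not code from the repo; `pretrain_sequentially`, the two task callbacks, and the checkpoint path are all hypothetical names:

```python
import torch

def pretrain_sequentially(model, train_task1, train_task2, path="encoder.pt"):
    """Pre-train one model on task 1, checkpoint it, then continue on task 2."""
    train_task1(model)                       # node-attribute reconstruction
    torch.save(model.state_dict(), path)     # save the shared weights
    model.load_state_dict(torch.load(path))  # reload them (same instance here)
    train_task2(model)                       # graph-structure recovery
    return model
```

The key point is that the second stage calls `load_state_dict` on the same architecture instead of constructing a fresh model, so both tasks refine one set of encoder weights.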
Hi,
Thanks for the great work and the source codes!
I was wondering whether Eqn. 3 from the paper is implemented in the code? I could not find it, and I notice that the position features are passed directly to nn.Embedding():
Graph-Bert/code/MethodBertComp.py
Lines 99 to 104 in e878576
Graph-Bert/code/MethodBertComp.py
Lines 109 to 114 in e878576
Thanks,
Vijay
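For reference, the fixed sinusoidal embedding the question refers to (Transformer-style, PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos of the same angle) could be sketched as below, as an alternative to the learnable nn.Embedding the code uses. The function name is hypothetical:

```python
import numpy as np

def sinusoid_embedding(pos, d_model):
    """Fixed sinusoidal embedding of an integer position (e.g., a WL role id)."""
    i = np.arange(d_model // 2)
    angle = pos / np.power(10000.0, 2 * i / d_model)
    emb = np.empty(d_model)
    emb[0::2] = np.sin(angle)   # even dimensions
    emb[1::2] = np.cos(angle)   # odd dimensions
    return emb
```

Whether the learnable table is an intentional substitute for the paper's fixed sinusoids is exactly what the issue asks the authors to confirm.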
Hi,
I have a question: where does the fine-tuning script load the pre-trained weight files?
Hey!
Could you please clarify what nfeature and ngraph mean in the code? I noticed that they aren't equal to the document encoding size and the number of links, respectively.
Hi,
In script_3_fine_tuning, one of the downstream tasks is graph clustering.
In MethodGraphBertGraphClustering.py, lines 38 to 41 perform the clustering, yet no pre-training results are reused in this downstream task. Does that still count as fine-tuning?
kmeans = KMeans(n_clusters=self.cluster_number, max_iter=self.max_epoch)
if self.use_raw_feature:
    clustering_result = kmeans.fit_predict(self.data['X'])
else:
    clustering_result = kmeans.fit_predict(sequence_output.tolist())
Hi,
I'm wondering if it's possible to apply GRAPH-BERT to directed graphs.
Thank you.
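One common workaround, offered here as an assumption rather than something the repo supports, is to symmetrize the directed adjacency before computing intimacy scores, since the PageRank-style intimacy computation expects an undirected graph:

```python
import numpy as np

def symmetrize(A):
    """Make a directed adjacency undirected: keep an edge if it exists in
    either direction. Edge weights/directions are deliberately discarded."""
    A = np.asarray(A)
    return ((A + A.T) > 0).astype(A.dtype)
```

The trade-off is that direction information is lost; preserving it would require changing the intimacy definition itself.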
Hi, thank you for your work.
I have a question about mini-batch size, where can I find it in your code?
Thank you.
My graph has too many nodes and edges. DatasetLoader calls features = torch.FloatTensor(np.array(features.todense())). Is there a way to batch this, as BERT does?
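One way to avoid the blow-up, sketched here as a suggestion rather than repo code: `todense()` materializes the full N x d matrix, while a torch sparse tensor keeps only the nonzeros. The helper name is hypothetical:

```python
import numpy as np
import scipy.sparse as sp
import torch

def to_sparse_tensor(features):
    """Convert a scipy sparse matrix to a torch sparse COO tensor
    without ever materializing the dense N x d matrix."""
    coo = sp.coo_matrix(features)
    idx = torch.tensor(np.vstack([coo.row, coo.col]), dtype=torch.long)
    val = torch.tensor(coo.data, dtype=torch.float)
    return torch.sparse_coo_tensor(idx, val, coo.shape).coalesce()
```

Downstream layers would then need sparse-aware ops (e.g. `torch.sparse.mm`) or per-batch densification of only the sampled rows.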
The given Cora example makes nodes from papers and links between the nodes (papers). However, my dataset is a bunch of text files, and my graph focuses on each individual text file rather than dependencies between files; the order of tokens/features within each file matters as well. I have an AST graph of each text file, but I'm unsure how to represent it as input to Graph-Bert.
If I make nodes out of each line of text, the feature vectors will be too sparse for each node and the number of nodes will be extremely large. If I make nodes on each text file, the structure and dependency within each file will be lost and I am unsure of what the links will be then.
If there's a way to represent this for input to Graph-Bert, please let me know. Thank you.
Could the model be fine-tuned on a custom dataset? I have a dataset of medical texts and have built a vocabulary graph from it. I want to optimize the clustering and output a knowledge graph indicating the interactions between the entities.
Thanks in advance!
Hi, I wonder where the activation function is implemented in the code.
I saw that "hidden_act" is declared as "gelu", but "hidden_act" is never used, and no activation function seems to be declared anywhere in the project.
I'm looking forward to your response, Thank you :)
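A likely explanation, stated here as an assumption about the dependency rather than the repo: the activation is applied inside the imported Hugging Face BertIntermediate layer, which looks up `config.hidden_act` ("gelu") in an internal name-to-function table, so no explicit gelu call appears in this project. A minimal equivalent of that lookup:

```python
import torch
import torch.nn.functional as F

# Minimal stand-in for the string-to-activation mapping used inside the
# imported BERT layers; the real table lives in the transformers library.
ACT2FN = {"gelu": F.gelu, "relu": F.relu}

def intermediate_activation(x, hidden_act="gelu"):
    """Apply the activation selected by the config string."""
    return ACT2FN[hidden_act](x)
```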
I noticed that you assume everything is already stored as numpy matrices. Is there a pointer to how graphs are represented in the node and link files (is links just a sparse adjacency list, and node a set of features)? Thanks!
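Judging from the Cora files, the layout appears to be plain whitespace-separated text: each line of `node` is `<node_id> <feature_1> ... <feature_d> <label>`, and each line of `link` is a `<source_id> <target_id>` pair. Treat this as an inference, not documentation; a sketch of a parser for one node line:

```python
def parse_node_line(line):
    """Split one `node` line into (node_id, feature_vector, label),
    assuming the layout: <id> <feat_1> ... <feat_d> <label>."""
    parts = line.strip().split()
    node_id = parts[0]
    features = [float(v) for v in parts[1:-1]]  # everything between id and label
    label = parts[-1]
    return node_id, features, label
```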
Hi, is it possible to run the code on a custom dataset where the node features are floats instead of binary (as in Cora)?
If so, can you guide me to the files/tasks/concepts I should be looking at for making necessary changes?
I tried to run script_3_fine_tuning.py on Citeseer and Pubmed, but it raises an OSError: ./data/pubmed//node not found. How do I get this data?
Sorry to bother you again. This issue happened when I tried to use DatasetLoader.py to generate the intimacy matrix "S"; but isn't mini-batching used in the model training process?
Originally posted by @code-gamer in #23 (comment)