lrjconan / GRAN
Efficient Graph Generation with Graph Recurrent Attention Networks, Deep Generative Model of Graphs, Graph Neural Networks, NeurIPS 2019
License: MIT License
usage: run_exp.py [-h] -c CONFIG_FILE [-l LOG_LEVEL] [-m COMMENT] [-t]
run_exp.py: error: the following arguments are required: -c/--config_file
The settings are as follows:
parser.add_argument(
'-c',
'--config_file',
type=str,
default="config/gran_DD.yaml",
required=True,
help="Path of config file")
The file exists under this directory, but the script still cannot find it.
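For what it's worth, this looks like expected argparse behavior rather than a missing file: when `required=True` is set, argparse ignores the `default` entirely, so the error appears whenever `-c` is omitted on the command line. A minimal sketch:

```python
import argparse

# Reproduction of the setting above: required=True means the default
# config path is never used, even if that file exists on disk.
parser = argparse.ArgumentParser()
parser.add_argument(
    '-c',
    '--config_file',
    type=str,
    default="config/gran_DD.yaml",
    required=True,
    help="Path of config file")

# Passing -c explicitly works:
args = parser.parse_args(['-c', 'config/gran_DD.yaml'])
print(args.config_file)  # config/gran_DD.yaml

# Omitting it raises SystemExit with "the following arguments are
# required: -c/--config_file". Either always pass -c, or drop
# required=True so the default takes effect.
```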
Hello again GRAN team,
Apologies for this novice question - is there a way to use two GPUs instead of one during training?
Thanks,
Amit
The compute_mmd function in dist_helper.py should compute the square root of disc(s1, s1) + disc(s2, s2) - 2*disc(s1, s2), right? Here is a reference for the same.
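For reference, a minimal sketch of that identity (hypothetical names, not the repo's exact code; `disc` stands for the mean pairwise kernel discrepancy used in dist_helper.py):

```python
import math

def compute_mmd_sketch(samples1, samples2, disc):
    # MMD^2 = disc(s1, s1) + disc(s2, s2) - 2 * disc(s1, s2);
    # the reported MMD is its square root. Clamp at 0 to absorb
    # tiny negative values from floating-point error.
    mmd_sq = disc(samples1, samples1) + disc(samples2, samples2) \
             - 2.0 * disc(samples1, samples2)
    return math.sqrt(max(mmd_sq, 0.0))

# Sanity check with a linear kernel: identical sample sets give MMD = 0.
def disc(a, b):
    return sum(x * y for x in a for y in b) / (len(a) * len(b))

print(compute_mmd_sketch([1.0, 2.0], [1.0, 2.0], disc))  # 0.0
```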
How can I get the attributes of nodes and edges in the generated graphs? The current output only contains the adjacency matrices of the graphs. Is there any way we can also get the node and edge attribute matrices?
Thanks
After training the model on the grid dataset, I got the following error:
File "run_exp.py", line 40, in main
runner.test()
File "/home/reasonance1216/GRAN-master/runner/gran_runner.py", line 334, in test
A_tmp = model(input_dict)
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 141, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/reasonance1216/GRAN-master/model/gran_mixture_bernoulli.py", line 445, in forward
A = self._sampling(batch_size)
File "/home/reasonance1216/GRAN-master/model/gran_mixture_bernoulli.py", line 292, in _sampling
A = torch.tril(A, diagonal=-1)
RuntimeError: invalid argument 1: expected a matrix at /opt/conda/conda-bld/pytorch_1544174967633/work/aten/src/THC/generic/THCTensorMathPairwise.cu:174
I haven't modified the source code at all. Is anyone else having the same problem?
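I don't know the exact cause, but the message suggests this PyTorch build only accepts a 2-D tensor in torch.tril, while _sampling passes a batched 3-D tensor A. If so, one possible workaround (untested against the repo) is to apply tril slice by slice:

```python
import torch

A = torch.rand(4, 5, 5)  # batch of adjacency matrices, as in _sampling

# Newer PyTorch applies tril per matrix in the batch; older builds that
# raise "expected a matrix" can be worked around slice by slice:
A_tril = torch.stack([torch.tril(a, diagonal=-1) for a in A])
```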
What changes do I have to make in order to train on CPUs instead?
Hello, Has anybody used GRAN to generate the user interaction network structure on Twitter?
Hello,
when I try to train your model on the IMDB-MULTI dataset, it fails at https://github.com/lrjconan/GRAN/blob/master/dataset/gran_data.py#L318 with ValueError: index can't contain negative values.
Could you please advise me on how this can happen and possibly how to fix it?
Thanks
Hi Renjie,
I downloaded the gran_grid.pth model using the download_model.sh script and ran !python run_exp.py -c config/gran_grid.yaml -t in Google Colab, and ran into the following error. Any suggestions on what might be wrong? I also didn't change anything in the gran_grid.yaml file.
INFO | 2020-01-30 16:04:38,856 | run_exp.py | line 26 : Writing log file to exp/GRAN/GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211/log_exp_1211.txt
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 27 : Exp instance id = 1211
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 28 : Exp comment = None
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 29 : Config =
{'dataset': {'data_path': 'data/',
'dev_ratio': 0.2,
'has_node_feat': False,
'is_overwrite_precompute': False,
'is_sample_subgraph': True,
'is_save_split': False,
'loader_name': 'GRANData',
'name': 'grid',
'node_order': 'DFS',
'num_fwd_pass': 1,
'num_subgraph_batch': 50,
'train_ratio': 0.8},
'device': 'cuda:0',
'exp_dir': 'exp/GRAN',
'exp_name': 'GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211',
'gpus': [0],
'model': {'block_size': 1,
'dimension_reduce': True,
'edge_weight': 1.0,
'embedding_dim': 128,
'has_attention': True,
'hidden_dim': 128,
'is_sym': True,
'max_num_nodes': 361,
'name': 'GRANMixtureBernoulli',
'num_GNN_layers': 7,
'num_GNN_prop': 1,
'num_canonical_order': 1,
'num_mix_component': 20,
'sample_stride': 1},
'run_id': '1211',
'runner': 'GranRunner',
'save_dir': 'exp/GRAN/GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211',
'seed': 1234,
'test': {'batch_size': 20,
'better_vis': True,
'is_single_plot': False,
'is_test_ER': False,
'is_vis': True,
'num_test_gen': 20,
'num_vis': 20,
'num_workers': 0,
'test_model_dir': 'snapshot_model',
'test_model_name': 'gran_grid.pth',
'vis_num_row': 5},
'train': {'batch_size': 1,
'display_iter': 10,
'is_resume': False,
'lr': 0.0001,
'lr_decay': 0.3,
'lr_decay_epoch': [100000000],
'max_epoch': 3000,
'momentum': 0.9,
'num_workers': 0,
'optimizer': 'Adam',
'resume_dir': None,
'resume_epoch': 5000,
'resume_model': 'model_snapshot_0005000.pth',
'shuffle': True,
'snapshot_epoch': 100,
'valid_epoch': 50,
'wd': 0.0},
'use_gpu': True,
'use_horovod': False}
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
max # nodes = 361 || mean # nodes = 210.25
max # edges = 684 || mean # edges = 391.5
INFO | 2020-01-30 16:04:38,984 | gran_runner.py | line 124 : Train/val/test = 80/20/20
INFO | 2020-01-30 16:04:38,988 | gran_runner.py | line 137 : No Edges vs. Edges in training set = 111.70632737276479
100% 1/1 [00:09<00:00, 9.14s/it]
INFO | 2020-01-30 16:04:51,079 | gran_runner.py | line 314 : Average test time per mini-batch = 9.139426708221436
/usr/local/lib/python3.6/dist-packages/networkx/drawing/nx_pylab.py:579: MatplotlibDeprecationWarning:
The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.
if not cb.iterable(width):
ERROR | 2020-01-30 16:05:20,040 | run_exp.py | line 42 : Traceback (most recent call last):
File "run_exp.py", line 40, in main
runner.test()
File "/content/gdrive/My Drive/GRAN/runner/gran_runner.py", line 370, in test
mmd_degree_dev, mmd_clustering_dev, mmd_4orbits_dev, mmd_spectral_dev = evaluate(self.graphs_dev, graphs_gen, degree_only=False)
File "/content/gdrive/My Drive/GRAN/runner/gran_runner.py", line 77, in evaluate
mmd_4orbits = orbit_stats_all(graph_gt, graph_pred)
File "/content/gdrive/My Drive/GRAN/utils/eval_helper.py", line 396, in orbit_stats_all
sigma=30.0)
File "/content/gdrive/My Drive/GRAN/utils/dist_helper.py", line 157, in compute_mmd
disc(samples2, samples2, kernel, *args, **kwargs) -
File "/content/gdrive/My Drive/GRAN/utils/dist_helper.py", line 139, in disc
d /= len(samples1) * len(samples2)
ZeroDivisionError: division by zero
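This usually means one of the sample lists reaching disc is empty (in the orbit-MMD path this can happen when the external orca orbit-counting binary produces no output). A defensive guard, sketched against an assumed shape of disc rather than the exact repo code:

```python
def disc(samples1, samples2, kernel, *args, **kwargs):
    # Mean pairwise kernel value; returns 0.0 instead of dividing by zero
    # when either sample list is empty.
    if not samples1 or not samples2:
        return 0.0
    d = sum(kernel(s1, s2, *args, **kwargs)
            for s1 in samples1 for s2 in samples2)
    return d / (len(samples1) * len(samples2))

# With the guard, an empty sample set no longer crashes compute_mmd:
print(disc([], [1.0], lambda a, b: a * b))  # 0.0
```

Note this only masks the symptom; it's worth checking why the orbit counts came back empty in the first place.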
Hi, I just tried to train the model on a CPU, but I ran into some problems.
During training I always get the message that the loss at iteration x is 0, which seems odd:
NLL Loss @ epoch 0001 iteration 00000001 = 0.0000
NLL Loss @ epoch 0063 iteration 00000250 = 0.0000
After going through the code of gran_runner, I realized that the part of the code where the loss is calculated is never called when there is no GPU available, since batch_fwd is empty in that case:
Lines 230 to 259 in 43cb443
Is this a bug, or did I miss something?
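For anyone hitting the same thing, a stripped-down illustration of the suspected behavior (names are illustrative, not the repo's exact code): the forward batch is only populated per available GPU, so on a CPU-only machine the loop body never runs and the logged loss stays at its initial 0.

```python
gpus = []  # CPU-only machine: no GPU ids configured

# batch_fwd is built from the per-GPU scatter, so it stays empty here
batch_fwd = [batch for batch, _ in zip([{"adj": ...}], gpus)]

train_loss = 0.0
for batch in batch_fwd:   # never entered when batch_fwd == []
    train_loss += 1.0     # stand-in for model(*batch).mean()

print(f"NLL Loss = {train_loss:.4f}")  # NLL Loss = 0.0000
```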
Hello,
Can GRAN generate graphs with custom attributes?
In gran_mixture_bernoulli.py
A_pad = input_dict['adj'] if 'adj' in input_dict else None
My question is: during testing, is it possible to pass the adjacency matrix of an incomplete graph as A_pad and see if it gets regenerated? If yes, is there anything I should keep in mind?
Looking at the calculation of the loss in the function mixture_bernoulli_loss, it seems like the loss for multiple orderings of the adjacency matrix is summed. According to the paper, the goal is to optimize
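For context, the distinction I think is at issue (my notation, not the paper's exact symbols): summing per-ordering log-likelihoods over the family of canonical orderings Q is not the same as taking the log of the summed likelihoods,

```latex
\log \sum_{\pi \in \mathcal{Q}} p(G, \pi)
\;\ne\;
\sum_{\pi \in \mathcal{Q}} \log p(G, \pi)
```

though, if I'm not mistaken, by Jensen's inequality the average of the logs lower-bounds the log of the average, so summing the per-ordering losses still optimizes a valid lower bound (up to a constant in |Q|).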
Hi, I really appreciate your excellent work.
Actually, I am using your default hyperparameters and I am trying to train the GRAN model on Cora citation network datasets.
But I found something strange: the model trains for 5000 epochs in only 2 seconds, while testing takes more than half an hour, and the test results are still not good.
I guess that is because the dataset contains only one graph with more than 3000 nodes.
Can you explain why these odd speeds occur, and how I can accelerate the test process and improve the quality of the generated graphs?
Thanks
Running on Google Colab:
git clone https://github.com/lrjconan/GRAN.git
cd GRAN
pip install -r requirements.txt
so far so good. Then:
python run_exp.py -c config/gran_lobster.yaml
I got
Traceback (most recent call last):
File "run_exp.py", line 48, in <module>
main()
File "run_exp.py", line 17, in main
config = get_config(args.config_file, is_test=args.test)
File "/content/GRAN/utils/arg_helper.py", line 39, in get_config
config = edict(yaml.load(open(config_file, 'r')))
TypeError: load() missing 1 required positional argument: 'Loader'
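PyYAML 5.1+ deprecated calling yaml.load without an explicit Loader, and the TypeError above suggests Colab ships PyYAML 6, where omitting it is a hard error. One likely fix for the line in utils/arg_helper.py (sketched on an inline string rather than the actual config file):

```python
import yaml

yaml_text = "dataset:\n  name: lobster\n"   # stand-in for gran_lobster.yaml

# PyYAML >= 6 requires a Loader; pass one explicitly:
config = yaml.load(yaml_text, Loader=yaml.FullLoader)

# or, equivalently for plain config files, use safe_load:
config = yaml.safe_load(yaml_text)
print(config)  # {'dataset': {'name': 'lobster'}}
```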
Hi, I am trying to run the generator on a dataset with node features.
However, I noticed that the has_node_feat option doesn't seem to do anything.
Is there a way to generate graphs with node features?
Error when testing at this line of code:
att_edge_feat = att_edge_feat.scatter(1, att_idx[[edges[:, 0]]], 1)
Hello GRAN team,
Apologies, this is more a request for help than a bug report. I am trying to replicate running the grid example within GRAN.
I have created a conda environment with pytorch 1.2.0 and python 3.7. I also installed the packages listed in requirements.txt using conda from default and conda-forge channels.
I have NVIDIA driver 515, the graphics card is an RTX A6000, and I have tried setting the environment to each of CUDA 10.2 and 11.7 on my machine (through PATH and LD_LIBRARY_PATH), with the same error returned.
I am running into the following run-time error - RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
Any suggestions on how to resolve this?
Thanks,
Amit
runfile('_gran/run_exp.py', args='-c _gran/config/gran_grid.yaml', wdir='_gran', post_mortem=True)
Reloaded modules: runner, runner.gran_runner, model, model.gran_mixture_bernoulli, dataset, dataset.gran_data, utils, utils.data_helper, utils.logger, utils.train_helper, utils.arg_helper, utils.eval_helper, utils.dist_helper, utils.vis_helper, utils.data_parallel
_gran/utils/arg_helper.py:39: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = edict(yaml.load(open(config_file, 'r')))
INFO | 2023-03-21 12:51:49,425 | run_exp.py | line 26 : Writing log file to exp/GRAN/GRANMixtureBernoulli_grid_2023-Mar-21-12-51-49_1266886/log_exp_1266886.txt
INFO | 2023-03-21 12:51:49,426 | run_exp.py | line 27 : Exp instance id = 1266886
INFO | 2023-03-21 12:51:49,427 | run_exp.py | line 28 : Exp comment = None
INFO | 2023-03-21 12:51:49,427 | run_exp.py | line 29 : Config =
INFO | 2023-03-21 12:51:49,533 | gran_runner.py | line 124 : Train/val/test = 80/20/20
INFO | 2023-03-21 12:51:49,536 | gran_runner.py | line 137 : No Edges vs. Edges in training set = 111.70632737276479
{'dataset': {'data_path': 'data/',
'dev_ratio': 0.2,
'has_node_feat': False,
'is_overwrite_precompute': False,
'is_sample_subgraph': True,
'is_save_split': False,
'loader_name': 'GRANData',
'name': 'grid',
'node_order': 'DFS',
'num_fwd_pass': 1,
'num_subgraph_batch': 50,
'train_ratio': 0.8},
'device': 'cuda:0',
'exp_dir': 'exp/GRAN',
'exp_name': 'GRANMixtureBernoulli_grid_2023-Mar-21-12-51-49_1266886',
'gpus': [0],
'model': {'block_size': 1,
'dimension_reduce': True,
'edge_weight': 1.0,
'embedding_dim': 128,
'has_attention': True,
'hidden_dim': 128,
'is_sym': True,
'max_num_nodes': 361,
'name': 'GRANMixtureBernoulli',
'num_GNN_layers': 7,
'num_GNN_prop': 1,
'num_canonical_order': 1,
'num_mix_component': 20,
'sample_stride': 1},
'run_id': '1266886',
'runner': 'GranRunner',
'save_dir': 'exp/GRAN/GRANMixtureBernoulli_grid_2023-Mar-21-12-51-49_1266886',
'seed': 1234,
'test': {'batch_size': 20,
'better_vis': True,
'is_single_plot': False,
'is_test_ER': False,
'is_vis': True,
'num_test_gen': 20,
'num_vis': 20,
'num_workers': 0,
'test_model_dir': 'snapshot_model',
'test_model_name': 'gran_grid.pth',
'vis_num_row': 5},
'train': {'batch_size': 1,
'display_iter': 10,
'is_resume': False,
'lr': 0.0001,
'lr_decay': 0.3,
'lr_decay_epoch': [100000000],
'max_epoch': 3000,
'momentum': 0.9,
'num_workers': 0,
'optimizer': 'Adam',
'resume_dir': None,
'resume_epoch': 5000,
'resume_model': 'model_snapshot_0005000.pth',
'shuffle': True,
'snapshot_epoch': 100,
'valid_epoch': 50,
'wd': 0.0},
'use_gpu': True,
'use_horovod': False}
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
max # nodes = 361 || mean # nodes = 210.25
max # edges = 684 || mean # edges = 391.5
/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:82: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
ERROR | 2023-03-21 12:51:49,593 | run_exp.py | line 42 : Traceback (most recent call last):
File "_gran/run_exp.py", line 38, in main
runner.train()
File "_gran/runner/gran_runner.py", line 248, in train
train_loss = model(*batch_fwd).mean()
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "_gran/utils/data_parallel.py", line 104, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "_gran/model/gran_mixture_bernoulli.py", line 438, in forward
att_idx=att_idx)
File "_gran/model/gran_mixture_bernoulli.py", line 218, in _inference
node_feat = self.decoder_input(A_pad) # BCN_max X H
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/functional.py", line 1369, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
From the little I understood from the paper, at the t-th generation step, the initial node representations of the already generated graph are calculated, and the GNN uses those representations to produce the edges associated with the current block.
My question is this: how are the new vertices (associated with the current block) generated? The new vertices must be generated before the edges are.
Hello,
When I test the program, the following error occurs. How do I handle this error? Thanks
File "C:\Users\Admin\PycharmProjects\pythonProject\endtoend\utils\dist_helper.py", line 157, in compute_mmd
disc(samples2, samples2, kernel, *args, **kwargs) -
File "C:\Users\Admin\PycharmProjects\pythonProject\endtoend\utils\dist_helper.py", line 139, in disc
d /= len(samples1) * len(samples2)
ZeroDivisionError: division by zero