
gran's People

Contributors

adamoyoung, kyleamoore, lrjconan, pclucas14, qiyan98


gran's Issues

-c/--config_file argument problem

usage: run_exp.py [-h] -c CONFIG_FILE [-l LOG_LEVEL] [-m COMMENT] [-t]
run_exp.py: error: the following arguments are required: -c/--config_file

The argument is set up as follows:

parser.add_argument(
    '-c',
    '--config_file',
    type=str,
    default="config/gran_DD.yaml",
    required=True,
    help="Path of config file")
The file exists at that path, yet run_exp.py still reports the argument as missing.
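
This is expected argparse behaviour rather than a missing file: with required=True, the flag must be passed on the command line, and default is never consulted. Either invoke the script as python run_exp.py -c config/gran_DD.yaml, or drop required=True so the default takes effect. A minimal sketch of the latter (hypothetical, not the repo's code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '-c', '--config_file',
    type=str,
    default="config/gran_DD.yaml",  # used only when -c is omitted
    help="Path of config file")     # required=True removed
args = parser.parse_args([])        # simulate running without -c
print(args.config_file)             # -> config/gran_DD.yaml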

Using more than one GPU?

Hello again GRAN team,
Apologies for this novice question - is there a way to use two GPUs instead of one during training?

Thanks,
Amit
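
For what it's worth, the training loop quoted in the "Issue while Training on CPU" report further down iterates over a gpus list from the config and builds one forward sub-batch per listed device, so two GPUs should be reachable through the YAML config alone. A hedged sketch (assumption: train.batch_size must split evenly across the listed GPUs):

import yaml

# Hypothetical edit of the shipped config; key names are taken from the
# config dumps elsewhere on this page.
cfg = yaml.safe_load(open("config/gran_grid.yaml"))
cfg["use_gpu"] = True
cfg["gpus"] = [0, 1]            # one forward sub-batch is built per entry
cfg["train"]["batch_size"] = 2  # assumption: one sub-batch per GPU
yaml.safe_dump(cfg, open("config/gran_grid.yaml", "w"))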

Attributes

How can I get the attributes of nodes and edges in the generated graphs? The current output only contains the adjacency matrices of the graphs. Is there any way to also get the node and edge attribute matrices?
Thanks

Got an error message when running the test of the 'gran_grid' experiment

After training the model on the grid dataset, I got the following message:

File "run_exp.py", line 40, in main
runner.test()
File "/home/reasonance1216/GRAN-master/runner/gran_runner.py", line 334, in test
A_tmp = model(input_dict)
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 141, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/reasonance1216/anaconda3/envs/wavernn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/home/reasonance1216/GRAN-master/model/gran_mixture_bernoulli.py", line 445, in forward
A = self._sampling(batch_size)
File "/home/reasonance1216/GRAN-master/model/gran_mixture_bernoulli.py", line 292, in _sampling
A = torch.tril(A, diagonal=-1)
RuntimeError: invalid argument 1: expected a matrix at /opt/conda/conda-bld/pytorch_1544174967633/work/aten/src/THC/generic/THCTensorMathPairwise.cu:174

I haven't modified the source code at all. Is anyone else having the same problem?
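
This looks like a PyTorch version mismatch rather than a code bug: the conda-bld timestamp in the path corresponds to a ~1.0-era build, where torch.tril only accepted 2-D matrices, while GRAN's _sampling applies it to a batched A. Other reports on this page use PyTorch 1.2.0, where batched tril works. A quick check (batch shape is hypothetical):

import torch
print(torch.__version__)

A = torch.rand(4, 10, 10)       # a batch of adjacency matrices
A = torch.tril(A, diagonal=-1)  # old builds raise "expected a matrix" here
print(A.shape)                  # torch.Size([4, 10, 10]) on newer builds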

Training on CPU?

What changes do I have to make in order to train on the CPU instead?

Division By Zero

Hi Renjie,
I downloaded the gran_grid.pth model using the download_model.sh script and ran !python run_exp.py -c config/gran_grid.yaml -t in Google Colab, where I ran into the following error. Any suggestions as to what might be wrong? I didn't change anything in the gran_grid.yaml file.

INFO | 2020-01-30 16:04:38,856 | run_exp.py | line 26 : Writing log file to exp/GRAN/GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211/log_exp_1211.txt
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 27 : Exp instance id = 1211
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 28 : Exp comment = None
INFO | 2020-01-30 16:04:38,857 | run_exp.py | line 29 : Config =

{'dataset': {'data_path': 'data/',
'dev_ratio': 0.2,
'has_node_feat': False,
'is_overwrite_precompute': False,
'is_sample_subgraph': True,
'is_save_split': False,
'loader_name': 'GRANData',
'name': 'grid',
'node_order': 'DFS',
'num_fwd_pass': 1,
'num_subgraph_batch': 50,
'train_ratio': 0.8},
'device': 'cuda:0',
'exp_dir': 'exp/GRAN',
'exp_name': 'GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211',
'gpus': [0],
'model': {'block_size': 1,
'dimension_reduce': True,
'edge_weight': 1.0,
'embedding_dim': 128,
'has_attention': True,
'hidden_dim': 128,
'is_sym': True,
'max_num_nodes': 361,
'name': 'GRANMixtureBernoulli',
'num_GNN_layers': 7,
'num_GNN_prop': 1,
'num_canonical_order': 1,
'num_mix_component': 20,
'sample_stride': 1},
'run_id': '1211',
'runner': 'GranRunner',
'save_dir': 'exp/GRAN/GRANMixtureBernoulli_grid_2020-Jan-30-16-04-38_1211',
'seed': 1234,
'test': {'batch_size': 20,
'better_vis': True,
'is_single_plot': False,
'is_test_ER': False,
'is_vis': True,
'num_test_gen': 20,
'num_vis': 20,
'num_workers': 0,
'test_model_dir': 'snapshot_model',
'test_model_name': 'gran_grid.pth',
'vis_num_row': 5},
'train': {'batch_size': 1,
'display_iter': 10,
'is_resume': False,
'lr': 0.0001,
'lr_decay': 0.3,
'lr_decay_epoch': [100000000],
'max_epoch': 3000,
'momentum': 0.9,
'num_workers': 0,
'optimizer': 'Adam',
'resume_dir': None,
'resume_epoch': 5000,
'resume_model': 'model_snapshot_0005000.pth',
'shuffle': True,
'snapshot_epoch': 100,
'valid_epoch': 50,
'wd': 0.0},
'use_gpu': True,
'use_horovod': False}
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
max # nodes = 361 || mean # nodes = 210.25
max # edges = 684 || mean # edges = 391.5
INFO | 2020-01-30 16:04:38,984 | gran_runner.py | line 124 : Train/val/test = 80/20/20
INFO | 2020-01-30 16:04:38,988 | gran_runner.py | line 137 : No Edges vs. Edges in training set = 111.70632737276479
100% 1/1 [00:09<00:00, 9.14s/it]
INFO | 2020-01-30 16:04:51,079 | gran_runner.py | line 314 : Average test time per mini-batch = 9.139426708221436
/usr/local/lib/python3.6/dist-packages/networkx/drawing/nx_pylab.py:579: MatplotlibDeprecationWarning:
The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.
if not cb.iterable(width):
ERROR | 2020-01-30 16:05:20,040 | run_exp.py | line 42 : Traceback (most recent call last):
File "run_exp.py", line 40, in main
runner.test()
File "/content/gdrive/My Drive/GRAN/runner/gran_runner.py", line 370, in test
mmd_degree_dev, mmd_clustering_dev, mmd_4orbits_dev, mmd_spectral_dev = evaluate(self.graphs_dev, graphs_gen, degree_only=False)
File "/content/gdrive/My Drive/GRAN/runner/gran_runner.py", line 77, in evaluate
mmd_4orbits = orbit_stats_all(graph_gt, graph_pred)
File "/content/gdrive/My Drive/GRAN/utils/eval_helper.py", line 396, in orbit_stats_all
sigma=30.0)
File "/content/gdrive/My Drive/GRAN/utils/dist_helper.py", line 157, in compute_mmd
disc(samples2, samples2, kernel, *args, **kwargs) -
File "/content/gdrive/My Drive/GRAN/utils/dist_helper.py", line 139, in disc
d /= len(samples1) * len(samples2)
ZeroDivisionError: division by zero
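
The division by zero means disc received an empty sample list. One plausible cause (an assumption, not confirmed in this thread) is that the orbit statistics depend on the compiled orca binary under utils/, and when it fails to run, e.g. on Colab, no orbit counts are collected. A hypothetical guard for disc in utils/dist_helper.py; only the crashing line is taken from the traceback, the rest of the body is reconstructed:

def disc(samples1, samples2, kernel, *args, **kwargs):
    # With no samples there is nothing to compare; bail out early.
    if len(samples1) == 0 or len(samples2) == 0:
        return 0.0  # assumption: an empty comparison contributes nothing
    d = 0.0
    for s1 in samples1:
        for s2 in samples2:
            d += kernel(s1, s2, *args, **kwargs)
    d /= len(samples1) * len(samples2)  # the line that raised above
    return d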

Issue while Training on CPU

Hi, I just tried to train the model on a CPU but ran into some problems.
While training, I always get the message that the loss at iteration x is 0, which seems odd:

NLL Loss @ epoch 0001 iteration 00000001 = 0.0000
NLL Loss @ epoch 0063 iteration 00000250 = 0.0000

After going through the code of gran_runner, I realized that the part of the code where the loss is calculated is never reached when no GPU is available, since batch_fwd stays empty in that case:

GRAN/runner/gran_runner.py

Lines 230 to 259 in 43cb443

avg_train_loss = .0
for ff in range(self.dataset_conf.num_fwd_pass):
  batch_fwd = []

  if self.use_gpu:
    for dd, gpu_id in enumerate(self.gpus):
      data = {}
      data['adj'] = batch_data[dd][ff]['adj'].pin_memory().to(gpu_id, non_blocking=True)
      data['edges'] = batch_data[dd][ff]['edges'].pin_memory().to(gpu_id, non_blocking=True)
      data['node_idx_gnn'] = batch_data[dd][ff]['node_idx_gnn'].pin_memory().to(gpu_id, non_blocking=True)
      data['node_idx_feat'] = batch_data[dd][ff]['node_idx_feat'].pin_memory().to(gpu_id, non_blocking=True)
      data['label'] = batch_data[dd][ff]['label'].pin_memory().to(gpu_id, non_blocking=True)
      data['att_idx'] = batch_data[dd][ff]['att_idx'].pin_memory().to(gpu_id, non_blocking=True)
      data['subgraph_idx'] = batch_data[dd][ff]['subgraph_idx'].pin_memory().to(gpu_id, non_blocking=True)
      data['subgraph_idx_base'] = batch_data[dd][ff]['subgraph_idx_base'].pin_memory().to(gpu_id, non_blocking=True)
      batch_fwd.append((data,))

  if batch_fwd:
    train_loss = model(*batch_fwd).mean()
    avg_train_loss += train_loss
    # assign gradient
    train_loss.backward()

# clip_grad_norm_(model.parameters(), 5.0e-0)
optimizer.step()
avg_train_loss /= float(self.dataset_conf.num_fwd_pass)
# reduce
train_loss = float(avg_train_loss.data.cpu().numpy())

Is this a bug or did I miss something?
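
It does look like the CPU path never fills batch_fwd, so the whole loss block is skipped. A hypothetical fallback branch, mirroring the GPU branch above (key names copied from it, everything else untested):

if not self.use_gpu:
    keys = ['adj', 'edges', 'node_idx_gnn', 'node_idx_feat',
            'label', 'att_idx', 'subgraph_idx', 'subgraph_idx_base']
    # Single "device": tensors stay on the CPU, no pinning or transfer.
    data = {k: batch_data[0][ff][k] for k in keys}
    batch_fwd.append((data,))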

Possibility of sending the incomplete adjacency matrix of a graph in test

In gran_mixture_bernoulli.py
A_pad = input_dict['adj'] if 'adj' in input_dict else None

My question is: during testing, is it possible to send the adjacency matrix of an incomplete graph as A_pad and see if it gets regenerated? If yes, is there anything I should keep in mind?

Loss with multiple permutations

Looking at the calculation of the loss in the function mixture_bernoulli_loss, it seems that the losses for multiple orderings of the adjacency matrix are summed. According to the paper, the goal is to optimize $\log\big(p(G, \pi_1) + p(G, \pi_2)\big)$, but what is actually being optimized appears to be $\log\big(p(G, \pi_1)\,p(G, \pi_2)\big) = \log p(G, \pi_1) + \log p(G, \pi_2)$. Is this intended? The second expression is a lower bound on the first, but as far as I know that was not explicitly mentioned in the paper.
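
For reference, the bound follows because each term is a probability, so each factor is at most 1:

$$p(G, \pi_1)\,p(G, \pi_2) \;\le\; \min_i p(G, \pi_i) \;\le\; p(G, \pi_1) + p(G, \pi_2),$$

and taking logs gives $\log\big(p(G, \pi_1)\,p(G, \pi_2)\big) \le \log\big(p(G, \pi_1) + p(G, \pi_2)\big)$.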

Why can the GRAN model be trained so fast while testing is really slow?

Hi, I really appreciate your excellent work.
I am using your default hyperparameters and trying to train the GRAN model on the Cora citation network dataset.
I have found something weird: the model trains for 5000 epochs in only 2 seconds. Even weirder, testing the model takes more than half an hour, and the test results are still not good.
I guess that is because the dataset contains only one graph, with more than 3000 nodes.
Can you explain why these odd speeds occur, and how I can accelerate testing and improve the quality of the generated graphs?
Thanks

TypeError: load() missing 1 required positional argument: 'Loader'

Running on Google Colab:

git clone https://github.com/lrjconan/GRAN.git
cd GRAN
pip install -r requirements.txt

so far so good. Then:

python run_exp.py -c config/gran_lobster.yaml

I got

Traceback (most recent call last):
  File "run_exp.py", line 48, in <module>
    main()
  File "run_exp.py", line 17, in main
    config = get_config(args.config_file, is_test=args.test)
  File "/content/GRAN/utils/arg_helper.py", line 39, in get_config
    config = edict(yaml.load(open(config_file, 'r')))
TypeError: load() missing 1 required positional argument: 'Loader'
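
This comes from PyYAML 6.0 making the Loader argument to yaml.load mandatory. A likely fix for the line shown in the traceback (utils/arg_helper.py, line 39):

import yaml
from easydict import EasyDict as edict

with open(config_file, 'r') as f:
    config = edict(yaml.safe_load(f))  # no Loader argument needed
    # or: config = edict(yaml.load(f, Loader=yaml.FullLoader))

Alternatively, pin the old behaviour with pip install "pyyaml<6".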

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Hello GRAN team,
Apologies, this is more a request for help than a bug report. I am trying to replicate running the grid example within GRAN.

I have created a conda environment with PyTorch 1.2.0 and Python 3.7. I also installed the packages listed in requirements.txt using conda, from the default and conda-forge channels.

I have NVIDIA driver 515 and an RTX A6000 graphics card. I have tried pointing the environment at each of CUDA 10.2 and 11.7 on my machine (through PATH and LD_LIBRARY_PATH), with the same error returned.

I am running into the following run-time error: RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)

Any suggestions on how to resolve this?

Thanks,
Amit

runfile('_gran/run_exp.py', args='-c _gran/config/gran_grid.yaml', wdir='_gran', post_mortem=True)
Reloaded modules: runner, runner.gran_runner, model, model.gran_mixture_bernoulli, dataset, dataset.gran_data, utils, utils.data_helper, utils.logger, utils.train_helper, utils.arg_helper, utils.eval_helper, utils.dist_helper, utils.vis_helper, utils.data_parallel
_gran/utils/arg_helper.py:39: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = edict(yaml.load(open(config_file, 'r')))
INFO | 2023-03-21 12:51:49,425 | run_exp.py | line 26 : Writing log file to exp/GRAN/GRANMixtureBernoulli_grid_2023-Mar-21-12-51-49_1266886/log_exp_1266886.txt
INFO | 2023-03-21 12:51:49,426 | run_exp.py | line 27 : Exp instance id = 1266886
INFO | 2023-03-21 12:51:49,427 | run_exp.py | line 28 : Exp comment = None
INFO | 2023-03-21 12:51:49,427 | run_exp.py | line 29 : Config =
INFO | 2023-03-21 12:51:49,533 | gran_runner.py | line 124 : Train/val/test = 80/20/20
INFO | 2023-03-21 12:51:49,536 | gran_runner.py | line 137 : No Edges vs. Edges in training set = 111.70632737276479

{... config dump identical to the gran_grid config shown in the "Division By Zero" issue above, except run_id '1266886' and the matching exp_name/save_dir ...}
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
max # nodes = 361 || mean # nodes = 210.25
max # edges = 684 || mean # edges = 391.5
/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:82: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
ERROR | 2023-03-21 12:51:49,593 | run_exp.py | line 42 : Traceback (most recent call last):
File "_gran/run_exp.py", line 38, in main
runner.train()
File "_gran/runner/gran_runner.py", line 248, in train
train_loss = model(*batch_fwd).mean()
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "_gran/utils/data_parallel.py", line 104, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "_gran/model/gran_mixture_bernoulli.py", line 438, in forward
att_idx=att_idx)
File "_gran/model/gran_mixture_bernoulli.py", line 218, in _inference
node_feat = self.decoder_input(A_pad) # BCN_max X H
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/bhatiaa1/anaconda3/envs/GRAN/lib/python3.7/site-packages/torch/nn/functional.py", line 1369, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)

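One hedged observation: the RTX A6000 is an Ampere card (compute capability 8.6), which is only supported from CUDA 11.1 onward, while PyTorch 1.2.0 wheels ship kernels built against CUDA 10.0 at most; setting PATH and LD_LIBRARY_PATH does not change the kernels baked into the wheel. A quick diagnostic, independent of GRAN:

import torch

print(torch.__version__, torch.version.cuda)  # CUDA version PyTorch was built with
print(torch.cuda.get_device_capability(0))    # (8, 6) for an RTX A6000
x = torch.randn(8, 8, device="cuda")
print(x @ x)  # a bare cublasSgemm; fails the same way if the build is too old

If that reproduces the error, installing a newer PyTorch built for CUDA >= 11.1 is the likely fix.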

How are the nodes being produced?

From the little I understood from the paper, at the t-th generation step the initial node representations of the already generated graph are computed, and using those representations the GNN produces the edges associated with the current block.

My question is this: how are the new vertices associated with the current block generated? The new vertices must exist before their edges can be generated.

ZeroDivisionError: division by zero

Hello,
When I test the program, the following error occurs (apparently the same failure as in the "Division By Zero" issue above). How do I handle this error? Thanks

File "C:\Users\Admin\PycharmProjects\pythonProject\endtoend\utils\dist_helper.py", line 157, in compute_mmd
disc(samples2, samples2, kernel, *args, **kwargs) -
File "C:\Users\Admin\PycharmProjects\pythonProject\endtoend\utils\dist_helper.py", line 139, in disc
d /= len(samples1) * len(samples2)
ZeroDivisionError: division by zero
