attMPTI's Issues

The ScanNet dataset

Hi,
thanks for sharing the code!
I am new to 3D point cloud segmentation. The ScanNet dataset seems to be very large (around 1.2 TB). Did you download the whole dataset, or only part of it? Please help me! Looking forward to your answer!

Training Time

@Na-Z
Hi,
thanks for sharing the code!

I am new to 3D point cloud segmentation. Is this task GPU-friendly under the few-shot setting?
How long did it take to run the experiment on each dataset?

Looking forward to your reply. Thanks again~

Best,

Data preparation about S3DIS step 3

Hello, I think I need some help with data preprocessing.

I have completed the first two steps of preparation for S3DIS, and the generated NumPy files were successfully stored in ./datasets/S3DIS/scenes/ by default.

I ran 'python ./preprocess/room2blocks.py --data_path=./datasets/S3DIS/scenes/' from the attMPTI folder. It shows:
0 scenes to be split...
Total samples: 0

I ran 'python room2blocks.py --data_path=../datasets/S3DIS/scenes/' from the preprocess folder. It shows:
0 scenes to be split...
Total samples: 0
I don't understand why this problem occurs, since the .npy files have already been generated in the default folder.
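
Since "0 scenes to be split" means the script's file search matched nothing, it may help to first confirm what Python actually sees under the path being passed in. A minimal sanity check, assuming the .npy scene files really live in ./datasets/S3DIS/scenes/ (the exact glob pattern room2blocks.py uses may differ, so treat this only as a sketch):

    import glob
    import os

    data_path = './datasets/S3DIS/scenes/'   # same value as --data_path
    files = sorted(glob.glob(os.path.join(data_path, '*.npy')))
    print(len(files), 'npy files visible from', os.path.abspath(data_path))
    print(files[:3])

If this prints 0 from the directory the command is run in, the relative path (./datasets/... vs ../datasets/...) is the likely culprit rather than the preprocessing itself.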

About class2scans.pkl

#7 (comment)

Sorry to bother you. I have the same problem when I run 'bash scripts/pretrain_segmentor.sh' under attMPTI-main.
Can you help me see what's wrong?

It shows:
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/root/anaconda3/envs/dgcnn/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
------------ Options -------------
base_widths: [128, 64]
batch_size: 16
cvfold: 0
data_path: ./datasets/S3DIS/blocks_bs1_s1
dataset: s3dis
dgcnn_k: 20
dgcnn_mlp_widths: [512, 256]
dist_method: euclidean
edgeconv_widths: [[64, 64], [64, 64], [64, 64]]
eval_interval: 3
gamma: 0.5
k_connect: 200
k_shot: 1
log_dir: ./log_s3dis/log_pretrain_s3dis_S0
lr: 0.001
model_checkpoint_path: None
n_episode_test: 100
n_iters: 100
n_queries: 1
n_subprototypes: 100
n_way: 2
n_workers: 16
output_dim: 64
pc_attribs: xyzrgbXYZ
pc_augm: True
pc_augm_jitter: 1
pc_augm_mirror_prob: 0
pc_augm_rot: 1
pc_augm_scale: 0
pc_in_dim: 9
pc_npts: 2048
phase: pretrain
pretrain_checkpoint_path: None
pretrain_gamma: 0.5
pretrain_lr: 0.001
pretrain_step_size: 50
pretrain_weight_decay: 0.0001
save_path: ./log_s3dis/
sigma: 1.0
step_size: 5000
use_attention: False
-------------- End ----------------

{0: 'ceiling', 1: 'floor', 2: 'wall', 3: 'beam', 4: 'column', 5: 'window', 6: 'door', 7: 'table', 8: 'chair', 9: 'sofa', 10: 'bookcase', 11: 'board', 12: 'clutter'}
==== class to scans mapping is done ====
class_id: 0 | min_ratio: 0.05 | min_pts: 100 | class_name: ceiling | num of scans: 0
class_id: 1 | min_ratio: 0.05 | min_pts: 100 | class_name: floor | num of scans: 0
class_id: 2 | min_ratio: 0.05 | min_pts: 100 | class_name: wall | num of scans: 0
class_id: 3 | min_ratio: 0.05 | min_pts: 100 | class_name: beam | num of scans: 0
class_id: 4 | min_ratio: 0.05 | min_pts: 100 | class_name: column | num of scans: 0
class_id: 5 | min_ratio: 0.05 | min_pts: 100 | class_name: window | num of scans: 0
class_id: 6 | min_ratio: 0.05 | min_pts: 100 | class_name: door | num of scans: 0
class_id: 7 | min_ratio: 0.05 | min_pts: 100 | class_name: table | num of scans: 0
class_id: 8 | min_ratio: 0.05 | min_pts: 100 | class_name: chair | num of scans: 0
class_id: 9 | min_ratio: 0.05 | min_pts: 100 | class_name: sofa | num of scans: 0
class_id: 10 | min_ratio: 0.05 | min_pts: 100 | class_name: bookcase | num of scans: 0
class_id: 11 | min_ratio: 0.05 | min_pts: 100 | class_name: board | num of scans: 0
class_id: 12 | min_ratio: 0.05 | min_pts: 100 | class_name: clutter | num of scans: 0
Traceback (most recent call last):
  File "main.py", line 116, in <module>
    pretrain(args)
  File "/private/attMPTI-main/runs/pre_train.py", line 98, in pretrain
    DATASET = S3DISDataset(args.cvfold, args.data_path)
  File "/private/attMPTI-main/dataloaders/s3dis.py", line 38, in __init__
    self.class2scans = self.get_class2scans()
  File "/private/attMPTI-main/dataloaders/s3dis.py", line 69, in get_class2scans
    with open(class2scans_file, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: './datasets/S3DIS/blocks_bs1_s1/class2scans.pkl'
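
Two things go wrong in this log at once: every class maps to 0 scans, and the .pkl cannot be written because ./datasets/S3DIS/blocks_bs1_s1 does not exist. Both symptoms point to the block-generation step (room2blocks.py) never having produced output under the data_path printed in the options. A quick check before pretraining, sketched here with the directory layout assumed from the log:

    import glob
    import os

    blocks_dir = './datasets/S3DIS/blocks_bs1_s1'   # data_path from the options above
    print('exists:', os.path.isdir(blocks_dir))
    print('entries:', len(glob.glob(os.path.join(blocks_dir, '**', '*.npy'), recursive=True)))

If the directory is missing or empty, rerun the room2blocks step and make sure it reports a non-zero number of generated samples.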

room2blocks.py gives an inconsistent number of blocks compared to the paper

Hi,
I did the preprocessing step where the rooms are divided into blocks using the provided room2blocks.py.
I get a total of 7521 blocks, whereas the paper states that 7547 blocks are generated.
Am I supposed to change any parameter in the code?

Errors occur when running 'bash train_attMPTI.sh'

The errors are listed below.

Traceback (most recent call last):
  File "/home/liuxuanchen/Develop/few-shot-point-cloud/attMPTI/main.py", line 101, in <module>
    train(args)
  File "/home/liuxuanchen/Develop/few-shot-point-cloud/attMPTI/runs/mpti_train.py", line 58, in train
    loss, accuracy = MPTI.train(data)
  File "/home/liuxuanchen/Develop/few-shot-point-cloud/attMPTI/models/mpti_learner.py", line 63, in train
    query_logits, loss = self.model(support_x, support_y, query_x, query_y)
  File "/home/liuxuanchen/anaconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liuxuanchen/Develop/few-shot-point-cloud/attMPTI/models/mpti.py", line 107, in forward
    A = self.calculateLocalConstrainedAffinity(node_feat, k=self.k_connect)
  File "/home/liuxuanchen/Develop/few-shot-point-cloud/attMPTI/models/mpti.py", line 258, in calculateLocalConstrainedAffinity
    A = A.scatter_(1, I, knn_similarity)
RuntimeError: Expected index [4396, 200] to be smaller than self [4396, 4396] apart from dimension 1 and to be smaller size than src [4396, 192]
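
The message says the index tensor handed to scatter_ has 200 columns (k_connect) while the source tensor of kNN similarities only has 192, so the two no longer agree along dimension 1. A tiny, self-contained illustration of the shape rule with made-up sizes (not the repository's code):

    import torch

    num_nodes, k = 6, 4
    A = torch.zeros(num_nodes, num_nodes)
    knn_idx = torch.randint(num_nodes, (num_nodes, k))   # index: [num_nodes, k]
    knn_sim = torch.rand(num_nodes, k)                   # src must match the index shape
    A.scatter_(1, knn_idx, knn_sim)                      # fine

    bad_sim = torch.rand(num_nodes, k - 1)               # src narrower than index
    # A.scatter_(1, knn_idx, bad_sim)                    # -> the RuntimeError shown above

In practice this usually means the index (built with k_connect columns) and the similarity tensor were built with different values of k; making sure k_connect does not exceed the number of candidate neighbors avoids the crash.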

Model trained weights

Hi Na Zhao,
Nice work on the task of few-shot learning on point clouds.
Would you please provide the trained weights of both the feature extractor and attMPTI?
It would really help.

Thanks.

eval_attMPTI.sh

When I ran the 'eval_attMPTI.sh' script, something went wrong. I would be very grateful for your solution. Thanks!
(attMPTI17) excellence@cyy:~/attMPTI-main$ bash /home/attMPTI-main/scripts/eval_attMPTI.sh
/home/attMPTI-main/scripts/eval_attMPTI.sh: line 14: SURE: command not found
Traceback (most recent call last):
  File "main.py", line 112, in <module>
    eval(args)
  File "/home/attMPTI-main/runs/eval.py", line 102, in eval
    logger = init_logger(args.log_dir, args)
  File "/home/attMPTI-main/utils/logger.py", line 34, in init_logger
    mkdir(log_dir)
  File "/home/attMPTI-main/utils/logger.py", line 22, in mkdir
    os.makedirs(path)
  File "/home/anaconda3/envs/attMPTI17/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
FileNotFoundError: [Errno 2] No such file or directory: ''

eval.py error for S3DIS-S0-N2-K5

I am getting the following error while running train_MPTI.sh for s=0, n=2, and k=5:

Traceback (most recent call last):
  File "main.py", line 113, in <module>
    train(args)
  File "/home2/siddharth/attMPTI/runs/mpti_train.py", line 72, in train
    valid_loss, mean_IoU = test_few_shot(VALID_LOADER, MPTI, logger, VALID_CLASSES)
  File "/home2/siddharth/attMPTI/runs/eval.py", line 101, in test_few_shot
    mean_IoU = evaluate_metric(logger, predicted_label_total, gt_label_total, label2class_total, test_classes)
  File "/home2/siddharth/attMPTI/runs/eval.py", line 66, in evaluate_metric
    iou = true_positive_classes[c] / float(gt_classes[c] + positive_classes[c] - true_positive_classes[c])
ZeroDivisionError: float division by zero

the validation classes are: [ 3 11 10 0 8 4]
validation data length: 142
It seems that for classes 0, 8, and 4 the counters in true_positive_classes, gt_classes, and positive_classes are all 0.
Although the error can be avoided with a simple condition, I wanted to make sure there is no fault in the validation dataset, since there is not a single sample from classes 0, 8, and 4. Is this normal, or should I regenerate the validation dataset?
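
If a class never occurs in the sampled validation episodes, gt_classes[c], positive_classes[c], and true_positive_classes[c] are all zero and the IoU denominator vanishes. A common guard, sketched against the variable names in the traceback (whether such classes count as 0 or are skipped when averaging is a design choice):

    import numpy as np

    def safe_iou(true_positive_classes, gt_classes, positive_classes, c):
        # Per-class IoU that tolerates classes absent from the validation episodes.
        denom = float(gt_classes[c] + positive_classes[c] - true_positive_classes[c])
        return true_positive_classes[c] / denom if denom > 0 else 0.0

    # Example: class 0 never appears, so its IoU is reported as 0 instead of crashing.
    print(safe_iou(np.zeros(13), np.zeros(13), np.zeros(13), 0))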

No class2scans.pkl

Hi, thanks for your excellent work!

I have tried to run your code but face some issues.
After processing the data successfully as described in the instructions, I ran:

bash scripts/pretrain_segmentor.sh

But it reports:

{0: 'ceiling', 1: 'floor', 2: 'wall', 3: 'beam', 4: 'column', 5: 'window', 6: 'door', 7: 'table', 8: 'chair', 9: 'sofa', 10: 'bookcase', 11: 'board', 12: 'clutter'}
==== class to scans mapping is done ====
class_id: 0 | min_ratio: 0.05 | min_pts: 100 | class_name: ceiling | num of scans: 0
class_id: 1 | min_ratio: 0.05 | min_pts: 100 | class_name: floor | num of scans: 0
class_id: 2 | min_ratio: 0.05 | min_pts: 100 | class_name: wall | num of scans: 0
class_id: 3 | min_ratio: 0.05 | min_pts: 100 | class_name: beam | num of scans: 0
class_id: 4 | min_ratio: 0.05 | min_pts: 100 | class_name: column | num of scans: 0
class_id: 5 | min_ratio: 0.05 | min_pts: 100 | class_name: window | num of scans: 0
class_id: 6 | min_ratio: 0.05 | min_pts: 100 | class_name: door | num of scans: 0
class_id: 7 | min_ratio: 0.05 | min_pts: 100 | class_name: table | num of scans: 0
class_id: 8 | min_ratio: 0.05 | min_pts: 100 | class_name: chair | num of scans: 0
class_id: 9 | min_ratio: 0.05 | min_pts: 100 | class_name: sofa | num of scans: 0
class_id: 10 | min_ratio: 0.05 | min_pts: 100 | class_name: bookcase | num of scans: 0
class_id: 11 | min_ratio: 0.05 | min_pts: 100 | class_name: board | num of scans: 0
class_id: 12 | min_ratio: 0.05 | min_pts: 100 | class_name: clutter | num of scans: 0
Traceback (most recent call last):
  File "main.py", line 116, in <module>
    pretrain(args)
  File "/media/work/li/new/Few-shot/PointSeg/attMPTI/runs/pre_train.py", line 98, in pretrain
    DATASET = S3DISDataset(args.cvfold, args.data_path)
  File "/media/work/li/new/Few-shot/PointSeg/attMPTI/dataloaders/s3dis.py", line 38, in __init__
    self.class2scans = self.get_class2scans()
  File "/media/work/li/new/Few-shot/PointSeg/attMPTI/dataloaders/s3dis.py", line 69, in get_class2scans
    with open(class2scans_file, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: './datasets/S3DIS/blocks_bs1_s1/class2scans.pkl'

I'm not sure what is wrong; I cannot find class2scans.pkl. Thanks for your help!

code

Looking forward to your code

Visualization of Point Cloud Results

Hi, Na Zhao. Thank you for opening up your code for research. I notice that you show some nice visualizations of the results in Figures 6 and 7 of your paper. However, when I evaluate and save the .ply visualization results, the point cloud is only a very small part of the room (a small block, especially for the ScanNet dataset) and is extremely sparse. May I ask how you save and display the results? Can you share that part of the code? Thank you very much!
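
For reference, this is not the authors' visualization code, but a generic way to dump a colored point cloud to .ply with Open3D; whole-room figures are typically obtained by concatenating the per-block xyz/color/prediction arrays of one scene before writing (the array names here are placeholders):

    import numpy as np
    import open3d as o3d

    # xyz: [N, 3] coordinates; colors: [N, 3] values in [0, 1], e.g. one color per predicted class
    xyz = np.random.rand(1000, 3)
    colors = np.random.rand(1000, 3)

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    o3d.io.write_point_cloud('prediction.ply', pcd)   # view with MeshLab, CloudCompare, etc.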

Performance on ScanNet

Thanks for open-sourcing this nice project.
I ran attMPTI on the two datasets using the hyper-parameters from the paper. I can reproduce the reported performance on S3DIS but not on ScanNet. Concretely, for the 2-way 1-shot task on both split-0 and split-1 of ScanNet, I set n_prototypes=100, knn_neighbor=200, and Gaussian_sigma=5. The mean-IoU I get is 39.5% on split-0 and 37.6% on split-1, which is about 3% below the paper. I have also tried Gaussian_sigma=1, but the result changes only slightly.
Did you encounter these problems when conducting the experiments on ScanNet? Or do you know how I can fix them?

gpu errors

I encountered a GPU-related problem during training on the S3DIS dataset. How can I solve it? Thanks for your reply.
usage: main.py [-h]
[--phase {pretrain,finetune,prototrain,protoeval,mptitrain,mptieval}]
[--dataset DATASET] [--cvfold CVFOLD] [--data_path DATA_PATH]
[--pretrain_checkpoint_path PRETRAIN_CHECKPOINT_PATH]
[--model_checkpoint_path MODEL_CHECKPOINT_PATH]
[--save_path SAVE_PATH] [--eval_interval EVAL_INTERVAL]
[--batch_size BATCH_SIZE] [--n_workers N_WORKERS]
[--n_iters N_ITERS] [--lr LR] [--step_size STEP_SIZE]
[--gamma GAMMA] [--pretrain_lr PRETRAIN_LR]
[--pretrain_weight_decay PRETRAIN_WEIGHT_DECAY]
[--pretrain_step_size PRETRAIN_STEP_SIZE]
[--pretrain_gamma PRETRAIN_GAMMA] [--n_way N_WAY]
[--k_shot K_SHOT] [--n_queries N_QUERIES]
[--n_episode_test N_EPISODE_TEST] [--pc_npts PC_NPTS]
[--pc_attribs PC_ATTRIBS] [--pc_augm]
[--pc_augm_scale PC_AUGM_SCALE] [--pc_augm_rot PC_AUGM_ROT]
[--pc_augm_mirror_prob PC_AUGM_MIRROR_PROB]
[--pc_augm_jitter PC_AUGM_JITTER] [--dgcnn_k DGCNN_K]
[--edgeconv_widths EDGECONV_WIDTHS]
[--dgcnn_mlp_widths DGCNN_MLP_WIDTHS]
[--base_widths BASE_WIDTHS] [--output_dim OUTPUT_DIM]
[--use_attention] [--dist_method DIST_METHOD]
[--n_subprototypes N_SUBPROTOTYPES] [--k_connect K_CONNECT]
[--sigma SIGMA]
main.py: error: unrecognized arguments: --gpu_id 0 --similarity_function gaussian --output_widths [64]
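
The last line simply means that main.py's argument parser does not define those three flags, so the quickest fix is to remove --gpu_id, --similarity_function, and --output_widths from the shell script (the GPU can be selected with the CUDA_VISIBLE_DEVICES environment variable instead). If the flags are meant to be kept, they would first have to be registered with argparse; a hypothetical sketch, with types and defaults guessed from the error message rather than taken from the released code:

    import argparse

    parser = argparse.ArgumentParser()
    # Hypothetical flags matching the ones rejected above; defaults are guesses.
    parser.add_argument('--gpu_id', type=int, default=0)
    parser.add_argument('--similarity_function', type=str, default='gaussian')
    parser.add_argument('--output_widths', type=str, default='[64]')
    print(parser.parse_args(['--gpu_id', '0', '--similarity_function', 'gaussian']))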

Question about pointcloud sampler

The code line shows that the point clouds sampled for the query set are chosen based on the ground-truth labels at test time. I don't think this is a reasonable experimental design.

Furthermore, why must the point clouds in the query set contain the target class when testing, rather than only when evaluating?
Thanks, @Na-Z

About eval script

Sorry to bother you.

I copied 'main.py' to the 'scripts' folder and ran 'bash scripts/eval_attMPTI.sh'.
The error message is 'FileNotFoundError', but no file name is given:

/private/attMPTI-main# bash scripts/eval_attMPTI.sh
scripts/eval_attMPTI.sh: line 15: SURE: command not found
Traceback (most recent call last):
  File "main.py", line 112, in <module>
    eval(args)
  File "/private/attMPTI-main/runs/eval.py", line 102, in eval
    logger = init_logger(args.log_dir, args)
  File "/private/attMPTI-main/utils/logger.py", line 34, in init_logger
    mkdir(log_dir)
  File "/private/attMPTI-main/utils/logger.py", line 22, in mkdir
    os.makedirs(path)
  File "/root/anaconda3/envs/dgcnn/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
FileNotFoundError: [Errno 2] No such file or directory: ''

Can you give me some help? Thanks!
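
The empty path in the FileNotFoundError means args.log_dir expanded to an empty string, which matches the earlier 'SURE: command not found' line: a variable assignment in eval_attMPTI.sh never took effect, so --log_dir (or the variables it is built from) was passed empty. Fixing the shell script is the real cure; as a side note, a defensive check in utils/logger.py would at least make the failure explicit. A hypothetical sketch, not the repository's code:

    import os

    def mkdir(path):
        # Fail with a readable message if the caller passed an empty log_dir,
        # e.g. because a shell variable in the launch script was never set.
        if not path:
            raise ValueError('log_dir is empty; check the variables defined in the .sh script')
        os.makedirs(path, exist_ok=True)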

model.train() mode not specified in pre_train.py

Hi,
In the training loop of pre_train.py, the model is not set to train mode (model.train()).
Does this affect the training process, or is the mode automatically set back to .train() by torch after the evaluation is done?
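
For context, PyTorch never switches a module back to training mode on its own; model.train() and model.eval() only toggle a flag that layers such as dropout and batch normalization read. If the evaluation routine calls model.eval() and nothing calls model.train() afterwards, subsequent iterations keep running in inference mode. A minimal sketch of the usual pattern (names are generic, not taken from pre_train.py):

    import torch

    def training_loop(model, train_one_step, validate, n_iters, eval_interval):
        for it in range(n_iters):
            model.train()                     # re-enable dropout / batch-norm updates
            train_one_step(model, it)

            if (it + 1) % eval_interval == 0:
                model.eval()                  # freeze batch-norm stats, disable dropout
                with torch.no_grad():
                    validate(model)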

Error on 'from torch_cluster import fps'

'from torch_cluster import fps' raises an error; all the other modules were installed as required by the README.

'pip install torch-cluster==latest+cu101 -f https://pytorch-geometric.com/whl/torch-1.5.0.html' failed because it could not find a matching package, so I installed the wheel for that version directly: pip install https://data.pyg.org/whl/torch-1.5.0/torch_cluster-latest%2Bcu101-cp36-cp36m-linux_x86_64.whl

But at runtime it fails, reporting that the module cannot be found:
(torch1.4.0) ubuntu@ubuntu-System-Product-Name:/data/NZH/attMPTI-main$ bash scripts/train_attMPTI.sh
Traceback (most recent call last):
  File "main.py", line 100, in <module>
    from runs.mpti_train import train
  File "/data/NZH/attMPTI-main/runs/mpti_train.py", line 10, in <module>
    from runs.eval import test_few_shot
  File "/data/NZH/attMPTI-main/runs/eval.py", line 14, in <module>
    from models.mpti_learner import MPTILearner
  File "/data/NZH/attMPTI-main/models/mpti_learner.py", line 10, in <module>
    from models.mpti import MultiPrototypeTransductiveInference
  File "/data/NZH/attMPTI-main/models/mpti.py", line 11, in <module>
    from torch_cluster import fps
  File "/home/ubuntu/anaconda3/envs/torch1.4.0/lib/python3.6/site-packages/torch_cluster/__init__.py", line 13, in <module>
    library, [osp.dirname(__file__)]).origin)
  File "/home/ubuntu/anaconda3/envs/torch1.4.0/lib/python3.6/site-packages/torch/_ops.py", line 106, in load_library
    ctypes.CDLL(path)
  File "/home/ubuntu/anaconda3/envs/torch1.4.0/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libtorch_cpu.so: cannot open shared object file: No such file or directory
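
An OSError about libtorch_cpu.so raised from torch_cluster almost always means the installed wheel was compiled against a different PyTorch/CUDA combination than the one in the environment (note that the environment is named torch1.4.0 while the wheel targets torch-1.5.0+cu101). A quick consistency check, sketched generically:

    import torch
    print(torch.__version__, torch.version.cuda)  # should match the wheel tag, e.g. 1.5.0 and 10.1
    import torch_cluster                          # raises the same OSError if the binaries mismatch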

transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

==[Train] Iter: 0 | Loss: 0.9923 | Accuracy: 0.596924 ==
==[Train] Iter: 1 | Loss: 0.9856 | Accuracy: 0.795410 ==
Traceback (most recent call last):
  File "main.py", line 101, in <module>
    train(args)
  File "/home/ailab/workspace/CMQ/attMPTI/runs/mpti_train.py", line 59, in train
    loss, accuracy = MPTI.train(data)
  File "/home/ailab/workspace/CMQ/attMPTI/models/mpti_learner.py", line 67, in train
    query_logits, loss = self.model(support_x, support_y, query_x, query_y)
  File "/root/anaconda3/envs/attmpti/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ailab/workspace/CMQ/attMPTI/models/mpti.py", line 87, in forward
    fg_prototypes, fg_labels = self.getForegroundPrototypes(support_feat, fg_mask, k=self.n_subprototypes)
  File "/home/ailab/workspace/CMQ/attMPTI/models/mpti.py", line 187, in getForegroundPrototypes
    class_prototypes = self.getMutiplePrototypes(feat, k)
  File "/home/ailab/workspace/CMQ/attMPTI/models/mpti.py", line 148, in getMutiplePrototypes
    fps_index = fps(feat, None, ratio=ratio, random_start=False).unique()
  File "/root/anaconda3/envs/attmpti/lib/python3.6/site-packages/torch/tensor.py", line 384, in unique
    return torch.unique(self, sorted=sorted, return_inverse=return_inverse, return_counts=return_counts, dim=dim)
  File "/root/anaconda3/envs/attmpti/lib/python3.6/site-packages/torch/functional.py", line 471, in unique
    return_counts=return_counts,
RuntimeError: transform: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

Is this problem caused by insufficient memory? I hope someone can answer my doubts.
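
An illegal-address error is normally not an out-of-memory condition (that raises "CUDA out of memory" instead); it usually means a kernel indexed outside a tensor, and because CUDA launches are asynchronous the line in the traceback is often not the real culprit. A common first debugging step, sketched here, is to force synchronous launches so the traceback points at the failing operation; the variable must be set before torch initializes CUDA, e.g. at the very top of main.py:

    import os
    os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # make CUDA errors surface at the offending call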

Adapt to part-level segmentation

Hi, Thanks for the great work.
If we want to adapt the method to object part segmentation (e.g., ShapeNet and PartNet), could you please help to point out which code we need to modify?

The problem of preprocessing the ScanNet dataset

When I run "python collect_scannet_data.py --data_path $path_to_ScanNet_raw_data", I found that the data obtained after collect_scannet_data did not contain the data with label 17.
So when I run "python ./preprocess/room2blocks.py --data_path ./datasets/ScanNet/scenes/ --dataset scannet" later, number of scans with class id 17 is 0. Therefore, during the training of the model, whenever the scene containing class number 17 is sampled, the errors will appear (Because scene contains label 17 is None).
I tried to reprocess the data several times, but it didn't work. I think there are some bugs in the code. I hope you can solve the problems.
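
Before concluding that collect_scannet_data.py drops label 17, it may be worth counting label occurrences directly in the processed scene files; a small sketch, assuming the scenes are .npy arrays whose last column stores the semantic label (adjust the path and column index if the layout differs):

    import glob
    import numpy as np

    counts = {}
    for f in glob.glob('./datasets/ScanNet/scenes/*.npy'):
        labels = np.load(f)[:, -1].astype(int)
        for c, n in zip(*np.unique(labels, return_counts=True)):
            counts[int(c)] = counts.get(int(c), 0) + int(n)
    print(sorted(counts.items()))   # if 17 is missing here, the collection step really filtered it out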
