
centerpoint-kitti's Introduction

CenterPoint

3D Object Detection and Tracking using center points in the bird's-eye view.

Center-based 3D Object Detection and Tracking,
Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl,
arXiv technical report (arXiv 2006.11275)

@inproceedings{yin2021center,
  title={Center-based 3D Object Detection and Tracking},
  author={Yin, Tianwei and Zhou, Xingyi and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2021},
}

CenterPoint is also implemented in the official OpenPCDet repo and can reproduce results on nuScenes and Waymo.

This repo is a reimplementation of CenterPoint on the KITTI dataset. For nuScenes and Waymo, please refer to the original repo. Please refer to INSTALL.md for installation. We provide two configs: centerpoint.yaml for the vanilla CenterPoint model, and centerpoint_rcnn.yaml, which combines CenterPoint with PV-RCNN.
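For example, a typical single-GPU training run (a usage sketch assuming the standard OpenPCDet entry point under tools/, matching the commands that appear in the issues below):

cd tools
python train.py --cfg_file cfgs/kitti_models/centerpoint.yaml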

Acknowledgement

Our code is based on OpenPCDet. Some utility files are copied from mmdetection and mmdetection3d. Thanks to the OpenMMLab Development Team for their awesome codebases.

centerpoint-kitti's People

Contributors

tianweiy

centerpoint-kitti's Issues

RuntimeError: CUDA error: out of memory

Hello, this is the error I get when I try to train with centerpoint.yaml:

cfg.OPTIMIZATION = edict()
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.BATCH_SIZE_PER_GPU: 4
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.NUM_EPOCHS: 80
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.OPTIMIZER: adam_onecycle
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR: 0.003
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.WEIGHT_DECAY: 0.01
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.MOMENTUM: 0.9
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.MOMS: [0.95, 0.85]
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.PCT_START: 0.4
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.DIV_FACTOR: 10
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.DECAY_STEP_LIST: [35, 45]
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_DECAY: 0.1
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_CLIP: 1e-07
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.LR_WARMUP: False
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.WARMUP_EPOCH: 1
2022-04-07 11:02:52,498 INFO cfg.OPTIMIZATION.GRAD_NORM_CLIP: 10
2022-04-07 11:02:52,498 INFO cfg.TAG: centerpoint
2022-04-07 11:02:52,498 INFO cfg.EXP_GROUP_PATH: kitti_models
2022-04-07 11:02:52,570 INFO Database filter by min points Car: 14357 => 13532
2022-04-07 11:02:52,571 INFO Database filter by min points Pedestrian: 2207 => 2168
2022-04-07 11:02:52,571 INFO Database filter by min points Cyclist: 734 => 705
2022-04-07 11:02:52,582 INFO Database filter by difficulty Car: 13532 => 10759
2022-04-07 11:02:52,584 INFO Database filter by difficulty Pedestrian: 2168 => 2075
2022-04-07 11:02:52,585 INFO Database filter by difficulty Cyclist: 705 => 581
2022-04-07 11:02:52,588 INFO Loading KITTI dataset
2022-04-07 11:02:52,646 INFO Total samples for KITTI dataset: 3712
Traceback (most recent call last):
File "train.py", line 202, in <module>
main()
File "train.py", line 116, in main
model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=train_set)
File "../pcdet/models/__init__.py", line 18, in build_network
model_cfg=model_cfg, num_class=num_class, dataset=dataset
File "../pcdet/models/detectors/__init__.py", line 30, in build_detector
model_cfg=model_cfg, num_class=num_class, dataset=dataset
File "../pcdet/models/detectors/centerpoint.py", line 7, in __init__
self.module_list = self.build_networks()
File "../pcdet/models/detectors/detector3d_template.py", line 47, in build_networks
model_info_dict=model_info_dict
File "../pcdet/models/detectors/detector3d_template.py", line 136, in build_dense_head
voxel_size=model_info_dict.get('voxel_size', False)
File "../pcdet/models/dense_heads/center_head.py", line 66, in __init__
[self.class_names.index(x) for x in cur_class_names if x in class_names]
RuntimeError: CUDA error: out of memory

This is on a GTX 3070 with CUDA 11.1 and PyTorch 1.8.2. Can you give me some suggestions?

Question about validation

Hi,

I have some questions regarding the validation set.

  1. As shown in kitti_dataset.yaml, 'kitti_infos_val.pkl' is treated as the test set. I wonder why it is set up like this.

[screenshot of the kitti_dataset.yaml config]

  2. Is there a validation step in this algorithm? If yes, could you point out in which file the validation is done?

  3. Where did you split the training data and the validation data?

  4. In your opinion and experience, how many epochs does CenterPoint need to be fully trained on KITTI? In other words, why did you choose 80 epochs to train CenterPoint?

Thanks a lot~

dynamic voxelization vs hard voxelization

Hi Tianwei, I copied your dynamic voxelization implementation from your other repo and adapted it to OpenPCDet here. Strangely, I found no speed improvement compared to the original hard voxelization in OpenPCDet.

I compared iterations per second and saw no improvement in speed.
The original hard voxelization is done in the dataloading stage: [screenshot]

The dynamic voxelization is done in the model's VFE stage: [screenshot]

Is it because the original hard voxelization method is already an optimized one?
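For reference, a minimal sketch of the dynamic-voxelization idea in plain PyTorch (illustrative only; function and variable names here are mine, not the pcdet or mmdet3d API): every point keeps its voxel assignment and features are scatter-averaged over occupied voxels, with no cap on points per voxel.

import torch

def dynamic_voxelize(points, voxel_size, pc_range):
    # points: (N, 4) float tensor of (x, y, z, intensity), assumed within pc_range
    voxel_size = torch.as_tensor(voxel_size, dtype=points.dtype)
    pc_min = torch.as_tensor(pc_range[:3], dtype=points.dtype)
    pc_max = torch.as_tensor(pc_range[3:], dtype=points.dtype)
    grid = ((pc_max - pc_min) / voxel_size).long()             # voxels per axis
    coords = ((points[:, :3] - pc_min) / voxel_size).long()    # per-point voxel index
    # Hash 3D indices to scalars so unique() can group points by voxel.
    keys = (coords[:, 0] * grid[1] + coords[:, 1]) * grid[2] + coords[:, 2]
    uniq, inverse = keys.unique(return_inverse=True)
    # Scatter-sum the point features per occupied voxel, then divide by the count.
    feats = points.new_zeros(len(uniq), points.shape[1]).index_add_(0, inverse, points)
    counts = points.new_zeros(len(uniq)).index_add_(0, inverse, points.new_ones(len(points)))
    return feats / counts.unsqueeze(1), inverse

One possible reason for the flat iterations-per-second number: hard voxelization runs in parallel dataloader workers, so its cost can overlap with GPU compute; moving voxelization into the model's VFE stage only helps wall-clock speed if the dataloader was the bottleneck.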

What exactly does this model do with images?

Hi, I am new to 3D object detection using LiDAR, images, calibs, etc. Where exactly in this model does the fusion of images with the point cloud data happen? I could not find in the paper what this model does with images; I think the architecture deals solely with point cloud data. If I am wrong, please let me know. Also, please let me know what the purpose of the images is during training!

ImportError: cannot import name 'iou3d_nms_cuda' from 'pcdet.ops.iou3d_nms' (unknown location)

When I ran python /content/CenterPoint-KITTI-main/tools/train.py --cfg_file ${CONFIG_FILE}, I got this error:

Traceback (most recent call last):
File "/content/CenterPoint-KITTI-main/tools/train.py", line 11, in <module>
from test import repeat_eval_ckpt
File "/content/CenterPoint-KITTI-main/tools/test.py", line 13, in <module>
from eval_utils import eval_utils
File "/content/CenterPoint-KITTI-main/tools/eval_utils/eval_utils.py", line 7, in <module>
from pcdet.models import load_data_to_gpu
File "/content/CenterPoint-KITTI-main/pcdet/models/__init__.py", line 6, in <module>
from .detectors import build_detector
File "/content/CenterPoint-KITTI-main/pcdet/models/detectors/__init__.py", line 1, in <module>
from .detector3d_template import Detector3DTemplate
File "/content/CenterPoint-KITTI-main/pcdet/models/detectors/detector3d_template.py", line 6, in <module>
from ...ops.iou3d_nms import iou3d_nms_utils
File "/content/CenterPoint-KITTI-main/pcdet/ops/iou3d_nms/iou3d_nms_utils.py", line 9, in <module>
from . import iou3d_nms_cuda
ImportError: cannot import name 'iou3d_nms_cuda' from 'pcdet.ops.iou3d_nms' (unknown location)

KITTI dataset preprocessing without images

Since this is a LiDAR-only object detection model, could we preprocess the data without the images? I think if we don't use the 'bbox' evaluation metric, we can ignore the infos from the images, right?

Another question: why do we only count the points inside the gt boxes that are in the FOV of the image?

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Hello, I am getting this error. Does anybody have information about this?

epochs: 0%| | 0/80 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 198, in <module>
main()
File "train.py", line 170, in main
merge_all_iters_to_one_epoch=args.merge_all_iters_to_one_epoch
File "/media/storage1/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 93, in train_model
dataloader_iter=dataloader_iter
File "/media/storage1/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 38, in train_one_epoch
loss, tb_dict, disp_dict = model_func(model, batch)
File "/media/storage1/CenterPoint-KITTI/pcdet/models/__init__.py", line 30, in model_func
ret_dict, tb_dict, disp_dict = model(batch_dict)
File "/home/mahmood/anaconda3/envs/openpcdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/media/storage1/CenterPoint-KITTI/pcdet/models/detectors/centerpoint.py", line 11, in forward
batch_dict = cur_module(batch_dict)
File "/home/mahmood/anaconda3/envs/openpcdet/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/media/storage1/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 78, in forward
gt_boxes=data_dict['gt_boxes']
File "/media/storage1/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 142, in assign_targets
heatmaps = np.array(heatmaps).transpose(1, 0).tolist()
File "/home/mahmood/anaconda3/envs/openpcdet/lib/python3.7/site-packages/torch/_tensor.py", line 643, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

How do I make the code run? Does this code actually run?

If I follow https://github.com/tianweiy/CenterPoint-KITTI/blob/main/docs/GETTING_STARTED.md, it prints a series of bugs, all of which come from pcdet/models/dense_heads/centerpoint_head_single.py.
I find that some of the intermediate values are tensors inside a list, or even a nested list, and in some calculations the dimensions and sizes of the two tensors don't match.
But I can run other models in this codebase, like PointRCNN.

dist.init_process_group hangs

Hi,

I'm trying to train CenterPoint on KITTI. Single-GPU training is fine, but 4-GPU training hangs at this line:

dist.init_process_group(
backend=backend,
init_method='tcp://127.0.0.1:%d' % tcp_port,
rank=local_rank,
world_size=num_gpus
)

I'm using the NCCL backend, and tcp_port is free.

Any help would be appreciated!
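For what it's worth, a minimal standalone NCCL smoke test (my own sketch; the port and GPU count are placeholders) can tell an environment problem apart from a repo problem - if this also hangs, the culprit is the NCCL/network setup rather than the training code:

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    dist.init_process_group(
        backend='nccl',
        init_method='tcp://127.0.0.1:18888',  # placeholder port
        rank=rank,
        world_size=world_size,
    )
    t = torch.ones(1, device=f'cuda:{rank}')
    dist.all_reduce(t)  # one collective proves the ring actually works
    print(f'rank {rank}: all_reduce ok, value={t.item()}')
    dist.destroy_process_group()

if __name__ == '__main__':
    mp.spawn(worker, args=(4,), nprocs=4)  # assumes 4 visible GPUs

Setting NCCL_DEBUG=INFO in the environment also makes NCCL print where it stalls.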

pre-trained model

Hello, thanks for your work. Can I get the pre-trained model on KITTI?

Question about the training

Hey there!

First, I appreciate your fast responses - they are by no means a given :)

I have a couple of questions about the training process:

  1. What's the difference between the --ckpt and --pretrained_model arguments in train.py? Isn't a pre-trained model just a model that has been trained for some epochs, like a checkpoint?

  2. I was wondering if there are blocks of code that are not trained and are instead used as pre-trained models. For example, the VoxelNet part - do we actually train the subnetwork that processes the voxels into the overhead-view pseudo-image?

  3. Is the VoxelNet backbone responsible for both voxelizing the point cloud and creating the overhead-view pseudo-image?

  4. We trained the model with 10% of the training data for 80 epochs and a batch size of 4. To our big surprise, it performed almost as well as the model trained on the full data that you referred to in #9. Does that make any sense?

  5. Do we use RCNNs in the first stage for the centerpoint.yaml config?

  6. We're struggling to understand the heatmap concept: how it is created, and how the GaussianFocalLoss loss function is applied to it. Do you have any hint where we might find some answers for beginners? Google assumes we were born with that knowledge. (A rough illustrative sketch follows below.)
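Not an authoritative answer, but a minimal sketch of the two pieces, following the CornerNet/CenterNet recipe the paper builds on (all names here are illustrative): each ground-truth center is splatted onto a per-class BEV heatmap as a Gaussian peak, and the Gaussian focal loss pushes predictions toward 1 at exact centers while down-weighting the penalty for near-center pixels.

import torch

def draw_gaussian(heatmap, cx, cy, radius):
    # Splat a 2D Gaussian (peak 1.0 at the object's center pixel) onto one class channel.
    ys = torch.arange(heatmap.shape[0], dtype=torch.float32)
    xs = torch.arange(heatmap.shape[1], dtype=torch.float32)
    y, x = torch.meshgrid(ys, xs, indexing='ij')
    g = torch.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * (radius / 3) ** 2))
    torch.maximum(heatmap, g, out=heatmap)  # keep the max where objects overlap
    return heatmap

def gaussian_focal_loss(pred, gt, alpha=2.0, gamma=4.0, eps=1e-6):
    # Pixels with gt == 1 are positives; all other pixels are negatives whose
    # penalty is down-weighted near centers by (1 - gt)^gamma, as in CornerNet.
    pos = gt.eq(1).float()
    pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos
    neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * (1 - gt).pow(gamma)
    return (pos_loss.sum() + neg_loss.sum()) / pos.sum().clamp(min=1)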

Thanks :)

The results of anchor-based and center-based PointPillars are different.

@tianweiy
I am currently working on improving the performance of PointPillars on the KITTI 3D dataset.
In your paper, you mention that giving PointPillars a center-based head improved performance on the nuScenes dataset.
So I checked whether the same improvement holds on the KITTI dataset, but the performance was actually worse.
Could you give me an additional explanation of this?

I conducted the experiment using the mmdetection3d library.

I detect Car, Pedestrian, and Cyclist.

The configs differ only in the head.

The results are below.


Overall 3D AP@40                 Easy      Moderate   Hard
PointPillars                     76.5073   64.7261    61.3703
PointPillars + CenterPointHead   72.5513   61.0547    58.1883

one-stage model

Hi Tianwei, thanks for your great work. I have checked your implementation of the CenterPoint model based on PCDet in this repo. Is this version the one-stage CenterPoint? When can we get the official two-stage model, or the latest CenterPoint++ model?

Thanks!

help, KeyError: 'CenterPoint'

The error occurs when running train.py:
Traceback (most recent call last):
File "train.py", line 198, in <module>
main()
File "train.py", line 115, in main
model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=train_set)
File "/data/DL/3D_Detect/OpenPCDet-master/pcdet/models/__init__.py", line 18, in build_network
model_cfg=model_cfg, num_class=num_class, dataset=dataset
File "/data/DL/3D_Detect/OpenPCDet-master/pcdet/models/detectors/__init__.py", line 25, in build_detector
model = __all__[model_cfg.NAME](
KeyError: 'CenterPoint'

On model robustness

I ran inference on the PandaSet (Waymo) dataset with your KITTI weights, and it doesn't work well,
yet in 2D, Cityscapes-trained models can run inference on KITTI.
What are the differences between different point cloud datasets (apart from the number of laser lines)?

How to detect bounding boxes at the back of the vehicle?

This is a dataset/general query.

I trained an object detector with the centerpoint_rcnn.yaml config, and the model trained and works well. But I just noticed that the KITTI dataset has gt bbox information only for objects in the FOV of the cameras, so cars behind the vehicle are not detected. This causes problems for me since I am working on moving object segmentation and want to detect vehicles behind the ego vehicle as well. Any suggestions on how that can be achieved?
[screenshot]
In the image, the cars in front of the vehicle carrying the laser scanner are detected, but a car just behind the vehicle remains undetected.

Scores on the demo, Voxelization

Hi, I have some questions if you have time:

1- As written in the paper, CenterPoint gives scores and 3D boxes after the second stage. As noted here in the issues, CenterPoint-KITTI doesn't have the second stage. I wonder: how can I see scores and 3D boxes in the demo? How/where are these scores calculated in the code? Are they calculated as IoU?

2- General question: points have (x, y, z, intensity) values, and then we voxelize them. If I understand correctly, voxelization here means dividing the whole point cloud into small sub-volumes of size e.g. (0.05, 0.05, 0.1), where each volume (voxel) holds e.g. up to 5 points. We then take the average of the points in each volume and represent that volume as one averaged point, so we again have (x, y, z, intensity) values, but averaged. Is my understanding right? And what is the actual purpose here - why do we need to do this?
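That understanding matches what a mean VFE does; here is a minimal sketch, assuming the (num_voxels, max_points, 4) zero-padded layout a hard voxel generator produces (names are illustrative, not the exact pcdet API):

import torch

def mean_vfe(voxels, num_points_per_voxel):
    # voxels: (V, max_points, 4), zero-padded; num_points_per_voxel: (V,)
    summed = voxels.sum(dim=1)                                        # (V, 4)
    counts = num_points_per_voxel.view(-1, 1).float().clamp(min=1.0)  # avoid /0
    return summed / counts  # one averaged (x, y, z, intensity) per occupied voxel

The purpose is to turn an unordered, variable-density point cloud into a regular sparse grid that convolutional backbones can consume; averaging is simply the cheapest way to summarize the points that fall into one cell.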

Thank you in advance.

How to implement a two-stage CenterPoint on Waymo

I implemented a two-stage CenterPoint with OpenPCDet, but I can't get good results. I use the proposal_layer in roi_head to obtain the one-stage detection results (I notice this differs from yours: you seem to integrate this process into the predict step in the center head), but I don't know the essential difference between the two implementations. Is it due to the relevant parameters, such as nms_pre_max_size=4096 and nms_post_max_size=500 in your implementation versus NMS_PRE_MAXSIZE: 1024 and NMS_POST_MAXSIZE: 100 in pcdet?

Besides, I want to ask: why can NMS_POST_MAXSIZE (PV-RCNN on Waymo) be set to 100? What happens if the number of targets in the scene is greater than 100?

The result of Centerpoint training on KITTI.

Hi,
I am now trying to study how to train CenterPoint on KITTI. My setting is training on the original KITTI dataset for 20 epochs, and what I got is as follows:

Car AP@0.70, 0.70, 0.70:
bbox AP:94.7996, 88.9117, 87.8464
bev AP:89.7246, 86.8356, 83.5177
3d AP:86.4783, 76.2949, 72.4491
aos AP:94.75, 88.76, 87.63
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:96.9610, 91.3264, 88.8810
bev AP:93.9845, 87.7600, 85.2048
3d AP:88.2733, 75.9294, 73.1899
aos AP:96.92, 91.16, 88.66
Car AP@0.70, 0.50, 0.50:
bbox AP:94.7996, 88.9117, 87.8464
bev AP:94.9225, 89.3233, 88.8077
3d AP:94.8194, 89.2578, 88.6416
aos AP:94.75, 88.76, 87.63
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:96.9610, 91.3264, 88.8810
bev AP:97.2669, 93.8379, 93.0098
3d AP:97.1964, 93.4531, 91.2541
aos AP:96.92, 91.16, 88.66
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:66.8845, 65.2675, 62.3663
bev AP:57.1777, 53.5311, 50.7458
3d AP:50.1384, 48.6857, 44.5479
aos AP:64.70, 62.75, 59.74
Pedestrian AP_R40@0.50, 0.50, 0.50:
bbox AP:68.0557, 65.3659, 62.4463
bev AP:55.0341, 52.0861, 48.6433
3d AP:48.6057, 45.7464, 41.7122
aos AP:65.56, 62.53, 59.48
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:66.8845, 65.2675, 62.3663
bev AP:74.2538, 72.4498, 69.3171
3d AP:74.1821, 72.2494, 69.1611
aos AP:64.70, 62.75, 59.74
Pedestrian AP_R40@0.50, 0.25, 0.25:
bbox AP:68.0557, 65.3659, 62.4463
bev AP:74.8110, 73.2887, 70.1591
3d AP:74.7206, 73.0705, 69.9633
aos AP:65.56, 62.53, 59.48
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:84.2834, 73.6666, 70.5905
bev AP:80.0970, 66.7770, 63.7807
3d AP:74.6868, 62.2825, 57.8456
aos AP:84.12, 73.11, 70.07
Cyclist AP_R40@0.50, 0.50, 0.50:
bbox AP:88.0991, 74.7699, 71.1338
bev AP:81.6918, 67.8036, 63.8179
3d AP:75.0102, 61.3894, 57.8612
aos AP:87.91, 74.20, 70.57
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:84.2834, 73.6666, 70.5905
bev AP:81.9637, 69.6221, 67.1479
3d AP:81.9637, 69.6218, 67.1479
aos AP:84.12, 73.11, 70.07
Cyclist AP_R40@0.50, 0.25, 0.25:
bbox AP:88.0991, 74.7699, 71.1338
bev AP:85.6864, 70.9843, 67.4068
3d AP:85.6864, 70.9842, 67.4062
aos AP:87.91, 74.20, 70.57

My questions are:

  1. I am not sure whether these results are right. Did you get similar results?
  2. What does "Cyclist AP@0.50, 0.25, 0.25" mean? Could we modify this setting, or is it fixed because we use the KITTI dataset?

Thanks in advance!

raise FileNotFoundError: File "/content/CenterPoint-KITTI-main/OpenPCDet/pcdet/models/detectors/detector3d_template.py", line 350, in load_params_with_optimizer

I am using a GPU in Colab, and I have set the --ckpt argument to 0.
When I run train.py --cfg_file centerpoint.yaml I get this error:

Traceback (most recent call last):
File "train.py", line 198, in <module>
main()
File "train.py", line 129, in main
it, start_epoch = model.load_params_with_optimizer(args.ckpt, to_cpu=dist, optimizer=optimizer, logger=logger)
File "/content/CenterPoint-KITTI-main/OpenPCDet/pcdet/models/detectors/detector3d_template.py", line 350, in load_params_with_optimizer
raise FileNotFoundError
FileNotFoundError

Questions about "map_to_bev" module.

Hi, @tianweiy

As mentioned in the CenterPoint paper: "We rely on a standard 3D backbone that extracts map-view feature representation from Lidar point-clouds." But in the code, you introduce the "map_to_bev" module. Is BEV the same as the "map-view" you mentioned? Or are they different, and you just want to do detection in BEV here?

Thanks in advance! :D

the loss cannot be reduced

Training with your config on the KITTI dataset does not work very well, and the loss cannot be reduced. May I ask if you have any suggestions for parameter tuning?

Question about CenterPoint's model

Hi,

Sorry to bother you. Regarding the CenterPoint model, I have the following questions:

  1. Which is the first network layer of CenterPoint? In my opinion, it may be the MeanVFE - correct?

  2. What are the input and output of this first layer? What is the size of its input? Could you please point out where it is defined?

Thank you so much.

the point range and voxel size of the custom data

Hi, thanks for your nice work!
I want to train on my own data. I changed the point range to [-140.8, -80, -3, 140.8, 80, 1] and the voxel size to [0.1, 0.05, 0.5].
But when I start training, the loss does not drop and nothing is detected. If I want to modify the point cloud range to fit my own data, what else should be modified? My most sincere thanks in advance!

I am looking forward to your reply, which is very important to me. Thank you!
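For reference, a quick sanity check (my own sketch, not from the repo) of the grid this config implies: OpenPCDet derives grid_size = (range_max - range_min) / voxel_size, and the x/y grid generally needs to divide evenly by the backbone's total downsample stride (8 for the default VoxelBackBone8x) so the BEV feature map and target heatmap line up.

import numpy as np

point_cloud_range = np.array([-140.8, -80, -3, 140.8, 80, 1])
voxel_size = np.array([0.1, 0.05, 0.5])
grid_size = (point_cloud_range[3:6] - point_cloud_range[0:3]) / voxel_size
print(grid_size)  # -> [2816. 3200. 8.]; 2816 and 3200 are both divisible by 8

That suggests the grid itself is consistent, so the non-decreasing loss may come from elsewhere, e.g. augmentation or target-assignment settings still tuned to the default KITTI range.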

how to visualize the result.pkl?

I have tested the model on 151 samples. The result was stored in OpenPCDet/output/content/CenterPoint-KITTI-main/OpenPCDet/tools/cfgs/kitti_models/pointpillar/default/eval/epoch_9/val/default as result.pkl. How can I visualize the result?
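Not a repo utility, but a minimal way to peek inside result.pkl before wiring up a full 3D viewer such as the demo.py visualizer (a sketch; the key names are what OpenPCDet typically writes):

import pickle

with open('result.pkl', 'rb') as f:
    results = pickle.load(f)   # a list with one dict per evaluated frame
print(results[0].keys())       # typically: frame_id, name, score, boxes_lidar, ...
print(results[0]['boxes_lidar'][:3])  # rows of (x, y, z, dx, dy, dz, heading)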

TypeError: expected str, bytes or os.PathLike object, not NoneType

Hi, I ran this code in Colab after installing all the requirements mentioned in INSTALL.md and following GETTING_STARTED.md:
! python /content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py
I am getting this error:

Traceback (most recent call last):
File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 198, in <module>
main()
File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 59, in main
args, cfg = parse_config()
File "/content/CenterPoint-KITTI-main/OpenPCDet/tools/train.py", line 48, in parse_config
cfg_from_yaml_file(args.cfg_file, cfg)
File "/content/CenterPoint-KITTI-main/OpenPCDet/pcdet/config.py", line 72, in cfg_from_yaml_file
with open(cfg_file, 'r') as f:
TypeError: expected str, bytes or os.PathLike object, not NoneType

about the test on tracking in this project

Hello! Thanks for your project! I would like to ask whether this project includes a tracking test on the KITTI dataset.
I can't find any code named "tracking" in tools.
How can I run this project to get tracking results?
Looking forward to your reply!

RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'other'

try this

if not (0 <= center_int[0] < feature_map_size[0].cuda().int()
      and 0 <= center_int[1] < feature_map_size[1].cuda().int()):

Originally posted by @tianweiy in #3 (comment)
I also encountered this problem. After applying this change, a new problem appeared. I also converted feature_map_size to int32, but the same problem occurs; please tell me how I should change it.
" File "/Data0/master/zrx/zrxj/CenterPoint-KITTI-main/pcdet/models/dense_heads/centerpoint_head_single.py", line 283, in get_targets_single
feature_map_size[0] * feature_map_size[1])
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #2 'other'"
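A hedged guess at a more robust change (a sketch only; the variable names follow the quoted code, and the loop-skip mirrors what get_targets_single does around this check): move feature_map_size onto the same device as center_int once, rather than calling .cuda() inside the comparison, so both operands live on the same device whether or not CUDA is in use.

# hypothetical fix, not verified against this exact code path
feature_map_size = feature_map_size.to(center_int.device)
if not (0 <= center_int[0] < feature_map_size[0]
        and 0 <= center_int[1] < feature_map_size[1]):
    continue  # skip boxes whose center falls outside the feature map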

ERROR: Could not find a version that satisfies the requirement spconv (from pcdet); ERROR: No matching distribution found for spconv

When I run pip install pcdet-0.3.0+95b7309-cp37-cp37m-linux_x86_64.whl, the error below appears.
My environment is:
pytorch 1.7.1
cuda 11.1
A600

Processing ./pcdet-0.3.0+95b7309-cp37-cp37m-linux_x86_64.whl
Requirement already satisfied: pyyaml in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (5.4.1)
Requirement already satisfied: numba in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (0.52.0)
Requirement already satisfied: numpy in /home/CN/zizhang.wu/anaconda3/envs/CaDDN/lib/python3.7/site-packages (from pcdet==0.3.0+95b7309) (1.20.1)
ERROR: Could not find a version that satisfies the requirement spconv (from pcdet)
ERROR: No matching distribution found for spconv

about multihead training

Hi, tianwei

I see that there is only centerpoint_head_single in the dense_heads module. Have you ever implemented a multi-head version?

multi-GPU training result is incorrect, how to correct it?

My training result with a single GPU is like this: [screenshot]
But the multi-GPU training result is like this: [screenshot]

Can you help me see what's wrong in multi-GPU mode?

BTW, in order to run the repo code, I modified some code:

  1. Modified the transform_points_to_voxels function in pcdet/datasets/processor/data_processor.py to adapt some spconv APIs to the spconv 2.x version:
def transform_points_to_voxels(self, data_dict=None, config=None, voxel_generator=None):
        if data_dict is None:
#            try:
#                from spconv.utils import VoxelGeneratorV2 as VoxelGenerator
#            except:
#                from spconv.utils import VoxelGenerator
#
            from spconv.pytorch.utils import PointToVoxel
            voxel_generator = PointToVoxel(
                vsize_xyz=config.VOXEL_SIZE,
                coors_range_xyz=self.point_cloud_range,
                num_point_features=self.num_point_features,
                max_num_voxels=config.MAX_NUMBER_OF_VOXELS[self.mode],
                max_num_points_per_voxel=config.MAX_POINTS_PER_VOXEL
            )
#            voxel_generator = VoxelGenerator(
#                voxel_size=config.VOXEL_SIZE,
#                point_cloud_range=self.point_cloud_range,
#                max_num_points=config.MAX_POINTS_PER_VOXEL,
#                max_voxels=config.MAX_NUMBER_OF_VOXELS[self.mode]
#            )
            grid_size = (self.point_cloud_range[3:6] - self.point_cloud_range[0:3]) / np.array(config.VOXEL_SIZE)
            self.grid_size = np.round(grid_size).astype(np.int64)
            self.voxel_size = config.VOXEL_SIZE
            return partial(self.transform_points_to_voxels, voxel_generator=voxel_generator)

        points = data_dict['points']
        # voxel_output = voxel_generator.generate(points)
        voxel_output = voxel_generator(torch.from_numpy(points))
        if isinstance(voxel_output, dict):
            voxels, coordinates, num_points = \
                voxel_output['voxels'], voxel_output['coordinates'], voxel_output['num_points_per_voxel']
        else:
            voxels, coordinates, num_points = voxel_output

        if not data_dict['use_lead_xyz']:
            voxels = voxels[..., 3:]  # remove xyz in voxels(N, 3)

        data_dict['voxels'] = voxels
        data_dict['voxel_coords'] = coordinates
        data_dict['voxel_num_points'] = num_points
        return data_dict
  2. Modified the assign_targets function in pcdet/models/dense_heads/centerpoint_head_single.py to fix the tensor conversion problem caused by a too-new NumPy version:
def assign_targets(self, gt_boxes):
        """Generate targets.

        Args:
            gt_boxes: (B, M, 8) box + cls 

        Returns:
            Returns:
                tuple[list[torch.Tensor]]: Tuple of target including \
                    the following results in order.

                    - list[torch.Tensor]: Heatmap scores.
                    - list[torch.Tensor]: Ground truth boxes.
                    - list[torch.Tensor]: Indexes indicating the \
                        position of the valid boxes.
                    - list[torch.Tensor]: Masks indicating which \
                        boxes are valid.
        """
        gt_bboxes_3d, gt_labels_3d = gt_boxes[..., :-1], gt_boxes[..., -1]

        heatmaps, anno_boxes, inds, masks = multi_apply(
            self.get_targets_single, gt_bboxes_3d, gt_labels_3d)
        # transpose heatmaps, because the dimension of tensors in each task is
        # different, we have to use numpy instead of torch to do the transpose.
        # heatmaps = np.array(heatmaps).transpose(1, 0).tolist()
        heatmaps = list(map(list, zip(*heatmaps)))
        heatmaps = [torch.stack(hms_) for hms_ in heatmaps]
        # transpose anno_boxes
        # anno_boxes = np.array(anno_boxes).transpose(1, 0).tolist()
        anno_boxes = list(map(list, zip(*anno_boxes)))
        anno_boxes = [torch.stack(anno_boxes_) for anno_boxes_ in anno_boxes]
        # transpose inds
        # inds = np.array(inds).transpose(1, 0).tolist()
        inds = list(map(list, zip(*inds)))
        inds = [torch.stack(inds_) for inds_ in inds]
        # transpose inds
        # masks = np.array(masks).transpose(1, 0).tolist()
        masks = list(map(list, zip(*masks)))
        masks = [torch.stack(masks_) for masks_ in masks]
        
        all_targets_dict = {
            'heatmaps': heatmaps,
            'anno_boxes': anno_boxes,
            'inds': inds,
            'masks': masks
        }
        
        return all_targets_dict

My commands are:
single-GPU training: python train.py --cfg_file cfgs/kitti_models/centerpoint.yaml
multi-GPU training: bash scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/centerpoint.yaml

about two stage centerpoint

I'm sorry to bother you.
I'd like to ask whether the two-stage CenterPoint idea from Section 4.1 of the paper has been implemented in CenterPoint-KITTI, because I didn't find it in the repo. We look forward to your reply.

The idea in the paper is as follows:
We extract one point-feature from the 3D center of each face of the predicted bounding box. Note that the bounding box center, top and bottom face centers all project to the same point in map-view. We thus only consider the four outward-facing box-faces together with the predicted object center. For each point, we extract a feature using bilinear interpolation from the backbone map-view output M. Next, we concatenate the extracted point-features and pass them through an MLP. The second stage predicts a class-agnostic confidence score and box refinement on top of one-stage CenterPoint's prediction results.
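Not implemented in this repo, but a rough sketch of that extraction step (the function names and the metric-to-grid mapping below are my assumptions, not the authors' code): bilinearly sample the BEV feature map M at the projected box center and the four outward face centers, then flatten the five point-features for the MLP.

import torch
import torch.nn.functional as F

def sample_box_features(bev_map, centers_xy, bev_extent):
    # bev_map: (1, C, H, W) backbone map-view output M
    # centers_xy: (N, 5, 2) metric xy of box center + 4 outward face centers
    # bev_extent: (xmin, ymin, xmax, ymax) of the map in metres
    xmin, ymin, xmax, ymax = bev_extent
    gx = (centers_xy[..., 0] - xmin) / (xmax - xmin) * 2 - 1  # normalize to [-1, 1]
    gy = (centers_xy[..., 1] - ymin) / (ymax - ymin) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)          # (1, N, 5, 2)
    feats = F.grid_sample(bev_map, grid, align_corners=False)  # bilinear, (1, C, N, 5)
    return feats.permute(0, 2, 3, 1).flatten(2)                # (1, N, 5*C) -> MLP input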

Running the NN on Colab

Hi there,

we are interested in your work and are trying to implement it for a lecture at the university. Since we don't have a laptop with a GPU, we want to run it on Colab. We went through the installation and demo instructions, but we have been struggling with a lot of compatibility issues and could not run the network. Below you can see our approach on Colab:

  1. we cloned OpenPCDet and installed the requirements.
  2. we installed vtk==8.1.2 and mayavi.
  3. we installed cmake==3.13.0
  4. we installed spconv v1.2.1 with
    !pip install git+https://github.com/traveller59/spconv@v1.2.1
  5. Colab comes with torch 1.9.0+cu111; we uninstalled this and installed 1.3.0+cu100.
  6. we ran python setup.py develop
  7. we added a pre-trained model (checkpoint_epoch_80.pth) from your repo and one frame from KITTI to Colab.
  8. we ran this command:
    !python /content/OpenPCDet/tools/demo.py --cfg_file cfgs/kitti_models/pv_rcnn.yaml
    --ckpt checkpoint_epoch_80.pth
    --data_path /content/OpenPCDet/tools/000000.bin

We have these errors after step 8:

/usr/local/lib/python3.7/dist-packages/traits/etsconfig/etsconfig.py:412: UserWarning: Environment variable "HOME" not set, setting home directory to /tmp
% (environment_variable, parent_directory)
Traceback (most recent call last):
File "/content/OpenPCDet/tools/demo.py", line 10, in <module>
from pcdet.datasets import DatasetTemplate
File "/content/OpenPCDet/pcdet/datasets/__init__.py", line 7, in <module>
from .dataset import DatasetTemplate
File "/content/OpenPCDet/pcdet/datasets/dataset.py", line 8, in <module>
from .augmentor.data_augmentor import DataAugmentor
File "/content/OpenPCDet/pcdet/datasets/augmentor/data_augmentor.py", line 6, in <module>
from . import augmentor_utils, database_sampler
File "/content/OpenPCDet/pcdet/datasets/augmentor/database_sampler.py", line 5, in <module>
from ...ops.iou3d_nms import iou3d_nms_utils
File "/content/OpenPCDet/pcdet/ops/iou3d_nms/iou3d_nms_utils.py", line 9, in <module>
from . import iou3d_nms_cuda
ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory

We are stuck here and have been dealing with this for many days. What is the problem? How should we run this network on Colab? Could you help us?

RuntimeError: derivative for to_sparse is not implemented

When I add sparse conv to other code, I get the error below.
Traceback (most recent call last):
File "train.py", line 215, in <module>
main()
File "train.py", line 185, in main
merge_all_iters_to_one_epoch=args.merge_all_iters_to_one_epoch
File "/newnfs/zzwu/08_3d_code/CaDDN_CenterPoint/tools/train_utils/train_utils.py", line 95, in train_model
dataloader_iter=dataloader_iter
File "/newnfs/zzwu/08_3d_code/CaDDN_CenterPoint/tools/train_utils/train_utils.py", line 42, in train_one_epoch
loss.backward()
File "/home/CN/zizhang.wu/anaconda3/envs/CaDDN_CenterPoint/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/CN/zizhang.wu/anaconda3/envs/CaDDN_CenterPoint/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True)  # allow_unreachable flag
RuntimeError: derivative for to_sparse is not implemented

RuntimeError: Error compiling objects for extension

When I run setup.py I get some errors; here is my conda environment.
Can someone help me solve this problem?

pcdet 0.3.0+95b7309 dev_0
pcre 8.45 h295c915_0
pillow 8.3.2 pypi_0 pypi
pip 21.2.2 py36h06a4308_0
protobuf 3.17.3 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pyface 7.3.0 py36h06a4308_1
pygments 2.11.2 pyhd3eb1b0_0
pyparsing 2.4.7 pypi_0 pypi
pyqt 5.9.2 py36h05f1152_2
python 3.6.13 h12debd9_1
python-dateutil 2.8.2 pypi_0 pypi
pytorch 1.7.0 py3.6_cuda11.0.221_cudnn8.0.3_0 pyt

=====================================================================
log msg:

running develop
running egg_info
writing pcdet.egg-info/PKG-INFO
writing dependency_links to pcdet.egg-info/dependency_links.txt
writing requirements to pcdet.egg-info/requires.txt
writing top-level names to pcdet.egg-info/top_level.txt
reading manifest file 'pcdet.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'pcdet.egg-info/SOURCES.txt'
running build_ext
building 'pcdet.ops.iou3d_nms.iou3d_nms_cuda' extension
Emitting ninja build file /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] /usr/local/cuda/bin/nvcc -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/TH -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lidar/anaconda3/envs/centerpoint/include/python3.6m -c -c /home/lidar/centerpoint_workspace/CenterPoint-KITTI/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu -o /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
FAILED: /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o
/usr/local/cuda/bin/nvcc -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/TH -I/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/lidar/anaconda3/envs/centerpoint/include/python3.6m -c -c /home/lidar/centerpoint_workspace/CenterPoint-KITTI/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.cu -o /home/lidar/centerpoint_workspace/CenterPoint-KITTI/build/temp.linux-x86_64-3.6/pcdet/ops/iou3d_nms/src/iou3d_nms_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=iou3d_nms_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
nvcc fatal : Unsupported gpu architecture 'compute_86'
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1522, in _run_ninja_build
env=env)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 106, in <module>
'src/sampling_gpu.cu',
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/develop.py", line 34, in run
self.install_for_development()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
build_ext.build_extensions(self)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 482, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1238, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/lidar/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first

Hi, thanks for sharing your great work.

I'm trying to train with a single GPU.

If I set batch size > 1, CUDA out-of-memory errors always occur, so I set the batch size to 1.

2021-06-03 16:12:13,039 INFO Start training home/kimsuyeon/a/CenterPoint-KITTI/tools/cfgs/kitti_models/centerpoint_rcnn(default)
epochs: 0%| | 0/80 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 198, in <module>
main()
File "train.py", line 153, in main
train_model(
File "/home/kimsuyeon/a/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 86, in train_model
accumulated_iter = train_one_epoch(
File "/home/kimsuyeon/a/CenterPoint-KITTI/tools/train_utils/train_utils.py", line 38, in train_one_epoch
loss, tb_dict, disp_dict = model_func(model, batch)
File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/__init__.py", line 30, in model_func
ret_dict, tb_dict, disp_dict = model(batch_dict)
File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/detectors/centerpoint_rcnn.py", line 11, in forward
batch_dict = cur_module(batch_dict)
File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 77, in forward
targets_dict = self.assign_targets(
File "/home/kimsuyeon/a/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 142, in assign_targets
heatmaps = np.array(heatmaps).transpose(1, 0).tolist()
File "/home/kimsuyeon/a/lib/python3.8/site-packages/torch/tensor.py", line 621, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

And I got this problem. Do you have any idea?

Could we use tensorboard to visualize the model?

Hi,

I have noticed that you used TensorBoard to save and plot the loss and learning rate, as follows: [screenshot]

So I want to ask: have you ever used TensorBoard to also save and visualize the details of the model's layers, weights, and biases?
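Not something the repo does out of the box, but TensorBoard's histogram API can log weights and biases alongside the existing scalars; a minimal sketch (model and accumulated_iter are assumptions standing in for the training-loop variables):

from torch.utils.tensorboard import SummaryWriter

# `model` and `accumulated_iter` are assumed to exist as in the training loop.
writer = SummaryWriter(log_dir='output/tensorboard')
for name, param in model.named_parameters():
    writer.add_histogram(name, param.detach().cpu(), global_step=accumulated_iter)
    if param.grad is not None:
        writer.add_histogram(name + '/grad', param.grad.detach().cpu(), global_step=accumulated_iter)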

gaussian_focal_loss raises an error during training

File "/home/neousys/cjg/CenterPoint-KITTI/pcdet/models/dense_heads/centerpoint_head_single.py", line 687, in gaussian_focal_loss
pos_loss = (-(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights).float()
RuntimeError: expected backend CUDA and dtype Float but got backend CUDA and dtype Byte

Hello, my environment is CUDA 10.0, torch 1.1, spconv 1.0. After the error I tried to force-cast to float, but it did not work. What should I do?
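For reference, a hedged guess (not verified against this exact code): the Byte dtype usually comes from a comparison mask under torch 1.1, where tensor comparisons like .eq(1) return a ByteTensor. Casting the mask itself to float where it is created, rather than the finished product, is the usual fix:

# hypothetical: gt_heatmap stands in for wherever the mask is built
pos_weights = gt_heatmap.eq(1).float()  # cast the mask, not the final expression
pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights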

ImportError: cannot import name 'iou3d_nms_cuda' from 'pcdet.ops.iou3d_nms' (unknown location)

While running
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml

I get this strange error. I could not find anything related on the internet.

Traceback (most recent call last):
File "/home/mahmood/anaconda3/lib/python3.8/runpy.py", line 185, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/home/mahmood/anaconda3/lib/python3.8/runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "/media/storage2/CenterPoint-KITTI/pcdet/datasets/__init__.py", line 7, in <module>
from .dataset import DatasetTemplate
File "/media/storage2/CenterPoint-KITTI/pcdet/datasets/dataset.py", line 8, in <module>
from .augmentor.data_augmentor import DataAugmentor
File "/media/storage2/CenterPoint-KITTI/pcdet/datasets/augmentor/data_augmentor.py", line 6, in <module>
from . import augmentor_utils, database_sampler
File "/media/storage2/CenterPoint-KITTI/pcdet/datasets/augmentor/database_sampler.py", line 5, in <module>
from ...ops.iou3d_nms import iou3d_nms_utils
File "/media/storage2/CenterPoint-KITTI/pcdet/ops/iou3d_nms/iou3d_nms_utils.py", line 9, in <module>
from . import iou3d_nms_cuda
ImportError: cannot import name 'iou3d_nms_cuda' from 'pcdet.ops.iou3d_nms' (unknown location)

Training with pillarvfe

At present the VFE is MeanVFE, but it can't be trained with PillarVFE. What changes need to be made? It seems that the generated heatmap doesn't match.

Details about the KITTI model

Hi, thanks for your excellent work!
I noticed that, unlike the models on nuScenes and Waymo, the KITTI model has no 'Shared Conv' and 'Separate Heads'. The size of the feature maps for the detector head is also different.

Could you tell me the reason for such a model design?
I would like to use CenterPoint on a new dataset that has a similar size to KITTI and uses the same LiDAR as used by KITTI. However, this dataset has 8 categories and there is a severe category imbalance like in nuScenes.

Can you give me some advice on the structure of CenterPoint, such as whether to use 'Shared Conv' and 'Separate Heads'?

Also, I would like to ask how to set the ‘SAMPLE_GROUPS’ for different categories in GT Sampling.

Thanks in advance!
