CenterPoint's Issues

Sorry, but I cannot understand how the model is built

I've been studying this for a long time, but I really cannot understand how Det3D builds a model.
I am sure that the model is built by the class 'Register', but when I look into that class, I cannot find any line of code that contributes to building a model.
Could you please help clarify this?
Thanks so much; I'm looking forward to your reply.
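
For reference, here is a minimal sketch of how a registry pattern typically turns a config dict into a model instance (a hypothetical simplification, not Det3D's exact code): classes register themselves under a name as a side effect of being imported, and a build function looks the name up and instantiates the class with the remaining config keys.

    # Hypothetical registry sketch for illustration only; Det3D's actual
    # Registry/build logic is more involved.
    class Registry:
        def __init__(self):
            self._modules = {}

        def register(self, cls):
            # store the class under its own name so configs can refer to it by string
            self._modules[cls.__name__] = cls
            return cls

        def build(self, cfg):
            # cfg is something like dict(type="PointPillars", num_classes=10)
            cfg = dict(cfg)
            cls = self._modules[cfg.pop("type")]
            return cls(**cfg)

    DETECTORS = Registry()

    @DETECTORS.register
    class PointPillars:
        def __init__(self, num_classes):
            self.num_classes = num_classes

    model = DETECTORS.build(dict(type="PointPillars", num_classes=10))

The registration happens when the module defining the class is imported, which is why the registry class itself contains no model-specific construction code.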

Question about nms

Why did you use two kinds of NMS in CenterHead?
One is a conventional IoU-based NMS and the other is a fast NMS using max pooling.

I suppose these NMSs are redundant, but is it better to use both?
Or is it possible to skip either one?
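
For context, the max-pooling variant is the usual heatmap peak-picking trick from CenterNet-style detectors; here is a minimal hedged sketch of that step (plain PyTorch, not the repository's exact code):

    import torch.nn.functional as F

    def heatmap_peaks(heat, kernel=3):
        """Keep only local maxima of a heatmap of shape (B, C, H, W)."""
        pad = (kernel - 1) // 2
        hmax = F.max_pool2d(heat, kernel, stride=1, padding=pad)
        keep = (hmax == heat).float()   # a pixel survives only if it equals the local max
        return heat * keep

The max-pooling step only removes duplicate peaks on the heatmap; whether a separate IoU-based NMS on the decoded boxes is still needed is exactly the question here.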

Semantic segmentation using CenterPoint

[Question] Do you think it is possible to use CenterPoint for semantic segmentation in addition to detection and tracking? For example, by adding another head to your architecture for this purpose and retraining the model if necessary.

error in deformable_col2im: invalid device function

Hi @tianweiy, thank you for sharing your great code!
I'm getting this error while running:
error in deformable_im2col: invalid device function
error in deformable_col2im: invalid device function

my env:
python 3.6
cuda 10.1
torchvision 0.5.0
torch 1.4

When executing bash setup.py, no error appears and I get the normal output:
running build_ext
copying build/lib.linux-x86_64-3.6/deform_conv_cuda.cpython-36m-x86_64-linux-gnu.so ->
running build_ext
copying build/lib.linux-x86_64-3.6/iou3d_nms_cuda.cpython-36m-x86_64-linux-gnu.so ->

Can you help me solve this?
Thank you
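
For what it's worth, an "invalid device function" error from a CUDA extension usually means the extension was compiled for a compute capability that does not match the GPU. A quick check with standard PyTorch calls (not part of this repository):

    import torch

    # CUDA version PyTorch was built against and the GPU's compute capability;
    # the deform_conv extension must be compiled for (at least) this architecture.
    print(torch.version.cuda)                    # e.g. '10.1'
    print(torch.cuda.get_device_capability(0))   # e.g. (7, 5) for an RTX 2080 Ti

If these disagree with what the extension was built for, rebuilding with a matching TORCH_CUDA_ARCH_LIST (or rebuilding on the target machine) is the usual fix.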

result for trainval and test?

Hi, I just wonder if I can get the trainval or test results (json files), just like Megvii provides their trainval.json and test.json results.

Cannot find metric pred_frequencies required by summarize

Thanks for your contribution!
I ran into an issue when testing the tracking performance by running
"bash tracking_scripts/centerpoint_voxel_1440_dcn_flip_testset.sh"

The following is the error (screenshot omitted).

My motmetrics version is 1.1.3.

Thank you!

about KITTI dataset

Hi, sir. Nice work, first of all. In an earlier issue I noticed that this codebase doesn't support the KITTI dataset. If I want to train or test on KITTI, is there any way to do this? I'd really appreciate your reply!

Detection of a specific point on PR curve

Hey, thanks for your nice work.

After inference on NuScenes, we can get the detection and summary below.

The PR curve is drawn from points at different score thresholds. Which threshold does the generated detection correspond to, and how can I determine or set that threshold? I've tried changing score_threshold in the configs, but the whole curve changed instead.

Since the number of false positives is too high for our application, I'd like to modify the policy to make the precision higher. Please correct me if there's a better approach.
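
As context, a single operating point on the PR curve corresponds to keeping only the detections above one score threshold. A minimal hedged sketch of computing precision and recall at such a point (hypothetical helper; it assumes you already have a matched/unmatched flag per detection from whatever criterion you use):

    def pr_at_threshold(detections, num_gt, score_thr):
        """detections: list of (score, is_true_positive) pairs; num_gt: ground-truth count."""
        kept = [tp for score, tp in detections if score >= score_thr]
        tp = sum(kept)
        fp = len(kept) - tp
        precision = tp / max(tp + fp, 1)
        recall = tp / max(num_gt, 1)
        return precision, recall

    # Example: raise the threshold until the precision is acceptable for the application.
    dets = [(0.95, True), (0.9, True), (0.6, False), (0.4, True), (0.3, False)]
    print(pr_at_threshold(dets, num_gt=4, score_thr=0.5))   # -> (0.666..., 0.5)

One possibility for the observation above is that score_threshold in the config changes which boxes are exported at all, so the evaluator sees a different detection set; filtering a fixed detection set afterwards only selects one point on the existing curve.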

Besides, I found that the detections seem to become better after tracking. What can I do if I also want to plot the PR curve for the tracking results?

Best,

Detection results trainval

Hi, thank you for the nice work!
Would it be possible to get the detection results of your method also for the trainval set of nuScenes?

Tracking inference code

Hi, thank you for the code. The documentation is excellent and I have been able to reproduce most of the results. I have an issue with the tracking code (pub_test.py):
(screenshot omitted)

I'm using 'centerpoint_pp_512_circle_nms_tracking' from the model zoo.
I understand that this folder contains the infos_val_10sweeps_withvelo_filter_True.json file, which holds the precalculated output of the CenterPoint network (velocity predictions etc.) that you use to perform tracking.

If I want to perform end-to-end tracking, i.e. generate the infos_val_10sweeps_withvelo_filter_True.json file myself, how do I do it?

(Correct me if I'm wrong, but the infos_val_10sweeps_withvelo_filter_True.json file is not the same as infos_val_10sweeps_withvelo_filter_True.pkl in the detection case, where the latter refers to the validation-set annotations.)

Unable to use pub_test.py with v1.0-test!

I am facing this error because I am trying to evaluate on the test set. How can I view the predictions after running dist_test.py?

Traceback (most recent call last):
  File "tools/tracking/pub_test.py", line 192, in <module>
    eval_tracking()
  File "tools/tracking/pub_test.py", line 160, in eval_tracking
    args.root
  File "tools/tracking/pub_test.py", line 176, in eval
    nusc_dataroot=root_path,
  File "/home/*******/CenterPoint_ws/nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py", line 85, in __init__
    gt_boxes = load_gt(nusc, self.eval_set, TrackingBox, verbose=verbose)
  File "/home/*******/CenterPoint_ws/nuscenes-devkit/python-sdk/nuscenes/eval/common/loaders.py", line 94, in load_gt
    'Error: You are trying to evaluate on the test set but you do not have the annotations!'
AssertionError: Error: You are trying to evaluate on the test set but you do not have the annotations!

First, I ran:

python tools/dist_test.py configs/centerpoint/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset.py --work_dir work_dirs/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset --checkpoint work_dirs/nusc_centerpoint_voxelnet_dcn_0075voxel_flip_testset/epoch_20.pth --speed_test --testset

After that, I ran:
bash tracking_scripts/centerpoint_voxel_1440_dcn_flip_testset.sh

Visualize 3D Bounding Boxes Over Images and Lidar Data in ROS rviz

@tianweiy I was wondering how we can run inference with some of the pre-trained models provided in the MODEL ZOO on the nuScenes test set and generate the bounding-box coordinates for cars, pedestrians, etc. And then, how can we visualize them by overlaying them on the camera images (front, front right, front left, back, back right and back left) and also in the point-cloud visualization using ROS rviz?

If you would like to assist me with the above-mentioned problem, I will be grateful!

Reproduce the CenterPoint-Pillar with two 2080Ti

Hi!
I tried to reproduce your experiment on two 2080 Tis.
Following your code and the training method from your paper, the training result is still only about 28 mAP.
In your paper, you used some methods to reproduce the PointPillars experiment, which increased the result to 45.5 mAP.
How did you achieve that?
Is it related to the machine (you use 4 V100s)?
Thank you

Nvidia Docker

Is it possible to get an Nvidia Docker image for this project? Then we would not have to downgrade our Nvidia drivers, CUDA version from 11 to 10, and cuDNN, and we could be sure that it works. Please find the link for Nvidia Docker below:
https://github.com/NVIDIA/nvidia-docker

Webcam/Video Demo?

This is quite promising work, but how can one test the detection and tracking with one's own video, or live through a webcam?
Thanks

ROS inference file "multi_sweep_inference.py" bad callback: <function rslidar_callback at 0x7fdebadcf2f0> IndexError: invalid index to scalar variable

@muzi2045 Since the Det3D code is written in Python 3, I was able to configure ROS with Python 3 using this article.

After that, I created a catkin workspace and a catkin package "centerpoint_ros_node" with all the dependencies (catkin_create_pkg centerpoint_ros_node std_msgs rospy sensor_msgs nav_msgs jsk_recognition_msgs), copied the code of this repository, and built the project with catkin_make -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so.

Then I sourced devel/setup.bash, ran rosrun centerpoint_ros_node multi_sweep_inference.py, and simultaneously ran rosbag play -l <nuScenes bag file name>.bag. But I encountered the following error.

  1. rostopic echo /pp_boxes does not output anything
get one frame lidar data.
 concate pointcloud shape: (173376, 5)
[ERROR] [1595837038.808509]: bad callback: <function rslidar_callback at 0x7fdebadcf2f0>
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/siddhantsahu/centerpoint_p3_ws/src/centerpoint_ros_node/src/tools/multi_sweep_inference.py", line 339, in rslidar_callback
    scores, dt_box_lidar, types = proc_1.run()
  File "/home/siddhantsahu/centerpoint_p3_ws/src/centerpoint_ros_node/src/tools/multi_sweep_inference.py", line 192, in run
    outputs = self.net(self.inputs)[0]
  File "/home/siddhantsahu/miniconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/siddhantsahu/Desktop/CenterPoint/det3d/models/detectors/point_pillars.py", line 48, in forward
    x = self.extract_feat(data)
  File "/home/siddhantsahu/Desktop/CenterPoint/det3d/models/detectors/point_pillars.py", line 26, in extract_feat
    input_features, data["coors"], data["batch_size"], data["input_shape"]
  File "/home/siddhantsahu/miniconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/siddhantsahu/Desktop/CenterPoint/det3d/models/readers/pillar_encoder.py", line 177, in forward
    self.nx = input_shape[0]
IndexError: invalid index to scalar variable.

Questions about the size of the prediction map

Hi.
Q1. Is the size of your prediction maps for the heatmap and the box regression both 128 × 128?
Q2. How should I understand the dense regression, if only the pixels at the keypoint locations (heat = 1.0) are used to train the box regression?

Thank you very much.
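
For reference, the CenterNet-style recipe predicts the regression maps densely but gathers the loss only at the annotated center pixels; a minimal hedged sketch (hypothetical tensor names, not the repository's exact loss code):

    import torch.nn.functional as F

    def reg_loss_at_centers(reg_pred, reg_target, center_mask):
        """
        reg_pred, reg_target: (B, C, H, W) dense regression maps.
        center_mask: (B, 1, H, W) binary mask that is 1 only at annotated object centers.
        The prediction is dense, but only the center pixels contribute to the loss.
        """
        num_pos = center_mask.sum().clamp(min=1)
        l1 = F.l1_loss(reg_pred * center_mask, reg_target * center_mask, reduction="sum")
        return l1 / num_pos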

Some doubts in code

Hi,

First of all, thanks for the great work and for sharing the code. 👍

I am trying to use your code on my custom dataset. I am using the CenterPoint-PointPillars model with circle NMS, and while scanning through the code I had the following doubts:

  1. During preprocessing, the dimensions (length and width) are scaled to the final feature resolution (https://github.com/tianweiy/CenterPoint/blob/master/det3d/datasets/pipelines/preprocess.py#L671), but during the predict step they are not decoded back into the original meter space (https://github.com/tianweiy/CenterPoint/blob/master/det3d/core/utils/center_utils.py#L342). I am not sure if that's right, though for the nuScenes evaluation it might not matter, since it uses a distance-based metric rather than IoU.

  2. In circle_nms, I presume the min_radius defined in test_cfg is in meters, but circle_nms_jit uses this radius value for comparison against the L2 distance (https://github.com/tianweiy/CenterPoint/blob/master/det3d/core/utils/circle_nms_jit.py#L26). Does this mean that this radius is actually a squared distance, or am I missing something? (A minimal sketch of my reading is below.)
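
Here is a minimal sketch of circle NMS as I read it (a hypothetical simplification, not the repository's exact circle_nms_jit): detections are kept greedily by score, and any later detection whose center lies within the radius of an already-kept one is suppressed.

    import numpy as np

    def circle_nms(centers, scores, min_radius):
        """centers: (N, 2) box centers, scores: (N,), min_radius: the test_cfg value."""
        order = scores.argsort()[::-1]                 # highest score first
        suppressed = np.zeros(len(scores), dtype=bool)
        keep = []
        for i in order:
            if suppressed[i]:
                continue
            keep.append(i)
            d2 = ((centers - centers[i]) ** 2).sum(axis=1)   # squared L2 distance
            # note: the squared distance is compared against min_radius directly,
            # which is the source of the "radius vs. distance squared" question above
            suppressed |= d2 < min_radius
        return np.array(keep)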

Could you please clarify my above doubts?

Looking forward to your reply.

Thank You.

Anuj

Question about pedestrian detection on waymo

Have you tried to train a model for pedestrians on Waymo?

Though I'm trying to train a CenterPoint/PointPillars model for pedestrians on Waymo, the model gets a poor result (about 20% AP).
Do you have any ideas for improving the result?

About the contribution of multi-frames data

Hi, I have tried to run inference with the nuscene_dataset_mini rosbag and got some reasonable results (screenshot omitted).

These results come from inference on single-sweep data, so some detections look unstable in a real-time system.
Will it be better to concatenate the past 5 or 10 lidar sweeps?
Will the detection results become more stable, or will it boost the mAP?

Another problem is the multi-class score filter threshold.
The core problem is how to distinguish small obstacles such as pedestrians and cyclists: these small objects normally have lower scores than big cars, and if the threshold for all classes is set to 0.1, there are far too many false positives.
Here is an example where several people standing together are detected as one car (screenshot omitted).

Here is my personal setting when dealing with the raw predictions from the network:

    car_indices =                  get_annotations_indices(0, 0.4, label_preds_, scores_)
    truck_indices =                get_annotations_indices(1, 0.4, label_preds_, scores_)
    construction_vehicle_indices = get_annotations_indices(2, 0.4, label_preds_, scores_)
    bus_indices =                  get_annotations_indices(3, 0.3, label_preds_, scores_)
    trailer_indices =              get_annotations_indices(4, 0.4, label_preds_, scores_)
    barrier_indices =              get_annotations_indices(5, 0.4, label_preds_, scores_)
    motorcycle_indices =           get_annotations_indices(6, 0.15, label_preds_, scores_)
    bicycle_indices =              get_annotations_indices(7, 0.15, label_preds_, scores_)
    pedestrian_indices =           get_annotations_indices(8, 0.12, label_preds_, scores_)
    traffic_cone_indices =         get_annotations_indices(9, 0.1, label_preds_, scores_)
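    # note: get_annotations_indices(cls, thr, labels, scores), as used above, returns the indices
    # of predictions whose label equals cls and whose score is above the per-class threshold thr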

@tianweiy

3D tracking on Waymo dataset

Hi, thanks for the nice work.

I'm able to generate my_preds.bin by following the instructions. What about the tracking part? Is it possible to use the provided scripts? It would be nice if you could also share some tracking results.

Best,

Parameter about post_center_limit_range in test_cfg

post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0]
anchor_ranges=[-51.2, -51.2, 0.49, 51.2, 51.2, 0.49]

post_center_limit_range is larger than anchor_ranges; why are they not equal?
Do any meaningful prediction boxes exist between 51.2 and 61.2?

Testing with KITTI point cloud files, the orientation results are not aligned with the bounding boxes

I started visualizing the results of PointPillars ("nusc_centerpoint_pp_02voxel_circle_nms.py") on KITTI / VLP-32 point clouds from our own recorded bags, and the results are not good.
Does the algorithm have the generalization capability to work with other open-source point cloud data such as KITTI ROS bags (VLP-64) or VLP-32 data?
The algorithm does well on nuScenes rosbag files / nuScenes point cloud data,
but from the results on other data it does not seem to generalize.
Kindly help.

Help to install dcn

Hi,

Thanks for your clarifications earlier on the code. I am now trying to use the deformable convolution kernels with the VoxelNet version, but I am having issues installing and importing them. I installed them by:

cd CenterPoint/det3d/ops/dcn
python setup.py bdist_wheel
pip install dist/*.whl

It says it installs fine, but when I try to import it, e.g. from det3d.ops.dcn import DeformConv, ModulatedDeformConvPack, it fails with the following error:

ImportError: cannot import name 'deform_conv_cuda'

I am not sure what I am missing here. Could you please share the setup steps for this?

Thanks !!
Anuj

Why accumulate sweeps from key frames instead of the non-key frames?

Hi, very nice work.
I am confused by the data-preparation code.
According to my understanding, samples/LIDAR_TOP contains the annotated key frames, while sweeps/LIDAR_TOP contains the un-annotated non-key frames. According to the CBGS paper, they accumulate non-key frames onto the key frame to form a dense point cloud. However, in your code (and also in the OpenPCDet repository), key frames are accumulated onto the key frame (your code is here). Is there any explanation for doing so? Thanks!
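
For context, the usual nuScenes multi-sweep recipe (a hedged sketch with hypothetical field names, not this repository's exact code) transforms each earlier sweep into the key frame's coordinate system and appends a per-point time-lag channel:

    import numpy as np

    def accumulate_sweeps(key_points, sweeps):
        """
        key_points: (N, 4) xyz + intensity of the key frame (time lag 0).
        sweeps: list of dicts with 'points' (M, 4), 'transform' (4x4, sweep -> key frame)
                and 'time_lag' in seconds (hypothetical structure).
        Returns an (N + sum(M), 5) array: xyz, intensity, time lag.
        """
        out = [np.hstack([key_points, np.zeros((len(key_points), 1))])]
        for sweep in sweeps:
            pts = sweep["points"]
            xyz1 = np.hstack([pts[:, :3], np.ones((len(pts), 1))])
            xyz = (sweep["transform"] @ xyz1.T).T[:, :3]        # move the sweep into key-frame coords
            lag = np.full((len(xyz), 1), sweep["time_lag"])     # per-point time-offset feature
            out.append(np.hstack([xyz, pts[:, 3:4], lag]))
        return np.vstack(out)

Whether the accumulated frames come from sweeps/ or from earlier samples/ only changes which files feed this loop; the transformation and time-lag logic stays the same.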

Some questions about the bbox_head output

I am trying to export a pretrained Waymo model from the CenterPoint codebase to ONNX (detection, vehicle class only):

(screenshot omitted)

Here is the original PointPillars bbox head (9 classes) exported from OpenPCDet (screenshot omitted),
and the nuScenes bbox head (10 classes) exported from CenterPoint (screenshot omitted).

Is it really necessary to separate the box head into so many heads: [x, y], [z], [lwh], [yaw_sin, yaw_cos], [class]?
If there are, say, 10 classes to be trained, there will be 5 * 10 heads in bbox_heads sharing the same feature map, which does not look like a good way to deploy it in a real scene.
@tianweiy
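
For reference, the separate-head idea is just a dict of small per-attribute conv branches on top of the shared BEV feature map; a minimal hedged sketch (hypothetical channel sizes, not necessarily the repository's exact head implementation):

    import torch
    from torch import nn

    class SeparateHead(nn.Module):
        """One small conv branch per regression target, all sharing the same BEV feature map."""
        def __init__(self, in_channels=64, heads=None):
            super().__init__()
            # heads maps a name to its output channel count, e.g. 2 for the (x, y) offset
            heads = heads or {"reg": 2, "height": 1, "dim": 3, "rot": 2, "hm": 1}
            self.branches = nn.ModuleDict({
                name: nn.Sequential(
                    nn.Conv2d(in_channels, in_channels, 3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(in_channels, out_ch, 1),
                )
                for name, out_ch in heads.items()
            })

        def forward(self, x):
            return {name: branch(x) for name, branch in self.branches.items()}

    # example: out = SeparateHead()(torch.zeros(1, 64, 128, 128)) gives one small map per attribute

For deployment, the per-attribute outputs could in principle be concatenated into a single tensor after export; whether that actually helps depends on the runtime.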

Visualization on the validation set as in Demo.

This error occurs when I try to run the demo visualization on the validation set. Do you know where the problem is?
I changed the directories to data/nuscenes and the path to infos_val_10sweeps_withvelo_filter_True.pkl, and set the dataset to cfg.data.val.

(CenterPoint) ***********:~/CenterPoint_ws/CenterPoint$ python tools/demo.py 
Use HM Bias:  -2.19
10
Traceback (most recent call last):
  File "tools/demo.py", line 131, in <module>
    main()
  File "tools/demo.py", line 83, in main
    for i, data_batch in enumerate(data_loader):
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 576, in __next__
    idx, batch = self._get_batch()
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 553, in _get_batch
    success, data = self._try_get_batch()
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 511, in _try_get_batch
    data = self.data_queue.get(timeout=timeout)
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/multiprocessing/queues.py", line 113, in get
    return _ForkingPickler.loads(res)
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 276, in rebuild_storage_fd
    fd = df.detach()
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach
    return reduction.recv_handle(conn)
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/multiprocessing/reduction.py", line 182, in recv_handle
    return recvfds(s, 1)[0]
  File "/home/*******/.conda/envs/CenterPoint/lib/python3.6/multiprocessing/reduction.py", line 161, in recvfds
    len(ancdata))
RuntimeError: received 0 items of ancdata

Question about using DistributedDataParallel

After using DistributedDataParallel:
python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py CONFIG_PATH

There is a decline in detection performance compared with using a single GPU, and the training time did not decrease significantly. Has anyone encountered a similar situation, and how did you solve it?

About the centerpillar encoder part

Thanks for your great work!
I have tried to run inference with this network using the pretrained model: model

But it looks like there is a tensor shape mismatch in the pillar encoder when using the pretrained model file:
features shape: torch.Size([13402, 20, 9])
(screenshot omitted)

It looks like the training used 10-dimensional point features to generate the BEV feature map rather than 9.

The config file used to build the network is configs/centerpoint/nusc_centerpoint_pp_02voxel_circle_nms_demo.py.

Hoping for any advice.
@tianweiy

Colab

Hi guys, fantastic work and thanks for releasing all these models and code. Have you all considered releasing a Colab with some demos? Could be cool to see.

Build error

While building CenterPoint, the following error occurred:
nuScenes devkit not found!
nuScenes devkit not Found!

We have added the Python path export PYTHONPATH="${PYTHONPATH}:/home/lgl/3D_object_detection/CenterPoint/nuscenes-devkit/python-sdk" to ~/.bashrc and reloaded bash, but it doesn't seem to be working.
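
A quick way to check whether that path actually resolves to the devkit (plain Python, assuming the devkit is cloned at the path exported above):

    import sys

    # the same path that was exported in ~/.bashrc; adjust if the clone lives elsewhere
    sys.path.append("/home/lgl/3D_object_detection/CenterPoint/nuscenes-devkit/python-sdk")

    from nuscenes.nuscenes import NuScenes   # raises ImportError if the path is wrong
    print(NuScenes)

If this import works but the build still prints the warning, the build process may not be inheriting PYTHONPATH from the current shell.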

spconv/src/spconv/indice.cu 125

Hi,

I was trying to train with the config file "nusc_centerpoint_voxelnet_01voxel.py" with 1 GPU and sweep=1.
I encountered a crash during training. Kindly help.

File "/home/Nuscene_Top/CenterPoint/tools/train.py", line 128, in main
logger=logger,
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/apis/train.py", line 381, in train_detector
trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 538, in run
epoch_runner(data_loaders[i], self.epoch, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 405, in train
self.model, data_batch, train_mode=True, **kwargs
File "/home/Nuscene_Top/CenterPoint/det3d/torchie/trainer/trainer.py", line 363, in batch_processor_inline
losses = model(example, return_loss=True)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 47, in forward
x = self.extract_feat(data)
File "/home/Nuscene_Top/CenterPoint/det3d/models/detectors/voxelnet.py", line 24, in extract_feat
input_features, data["coors"], data["batch_size"], data["input_shape"]
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/Nuscene_Top/CenterPoint/det3d/models/backbones/scn.py", line 364, in forward
ret = self.middle_conv(ret)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/modules.py", line 123, in forward
input = module(input)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/conv.py", line 155, in forward
self.stride, self.padding, self.dilation, self.output_padding, self.subm, self.transposed, grid=input.grid)
File "/home/anaconda3/envs/centerpoint/lib/python3.6/site-packages/spconv/ops.py", line 89, in get_indice_pairs
stride, padding, dilation, out_padding, int(subm), int(transpose))
RuntimeError: /home/Nuscene_Top/spconv/src/spconv/indice.cu 125
cuda execution failed with error 2

Plot Bounding Boxes over 6 Camera Images and Publish to ROS Topics to be Visualized in Rviz

@tianweiy @muzi2045 Starting from the 3D bounding-box coordinates generated by the trained model at line 197 in single_inference.py, I wanted to first transform these points into their 6 respective camera planes and plot them using OpenCV/Matplotlib, as described in the render_annotation() function of the nuScenes devkit (nuscenes.py, line 903). Additionally, it would be great if the tracking id and object category could be labeled over these bounding boxes.

Once this is done, we can publish these images with bounding boxes to a ROS topic to be visualized in Rviz, and add these changes to the single_inference.py and multi_sweep_inference.py files.
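
For reference, projecting a 3D box into a camera image comes down to transforming its corner points into the camera frame and applying the camera intrinsics; a minimal hedged sketch (hypothetical inputs, not the devkit's render_annotation):

    import numpy as np

    def project_points_to_image(points_cam, cam_intrinsic):
        """
        points_cam: (N, 3) points already expressed in the camera frame (z = forward).
        cam_intrinsic: (3, 3) camera intrinsic matrix K.
        Returns (N, 2) pixel coordinates; points with z <= 0 lie behind the camera.
        """
        depths = points_cam[:, 2:3]
        uvw = points_cam @ cam_intrinsic.T            # apply K
        return uvw[:, :2] / np.clip(depths, 1e-6, None)

    # usage idea: take the 8 corners of a predicted box, move them from the lidar frame into
    # the camera frame with the calibrated extrinsics, project them with this function, then
    # draw the box edges with cv2.line and the tracking id / category with cv2.putText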

Waymo point cloud decoding bug

After decoding two segments from the Waymo dataset, I visualized the point clouds, and I think there is a decoding error in the code. In the screenshots below you can see that the returns of the side lidars (plus part of the top lidar) and the returns of the top lidar are not at the same level.
Is anyone else experiencing the same issue?

(screenshots omitted)

segment-1022527355599519580_4866_960_4886_960_with_camera_labels
segment-10231929575853664160_1160_000_1180_000_with_camera_labels

use kitti

How can I use CenterPoint on KITTI? Thank you.

Tracking Inference Code

Could you please provide a simple tracking function similar to the single_inference.py ROS code?
I tried to implement pub_test.py in ROS with no success (the tracker seems to work only for nuScenes data).
I am trying to get tracking results by applying raw lidar point clouds to get detections (bbox + score + type) and then track them in the next lidar point cloud.
