
lift-splat-shoot's Introduction

Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D

PyTorch code for Lift-Splat-Shoot (ECCV 2020).

Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D
Jonah Philion, Sanja Fidler
ECCV, 2020 (Poster)
[Paper] [Project Page] [10-min video] [1-min video]

Abstract: The goal of perception for autonomous vehicles is to extract semantic representations from multiple sensors and fuse these representations into a single "bird's-eye-view" coordinate frame for consumption by motion planning. We propose a new end-to-end architecture that directly extracts a bird's-eye-view representation of a scene given image data from an arbitrary number of cameras. The core idea behind our approach is to "lift" each image individually into a frustum of features for each camera, then "splat" all frustums into a rasterized bird's-eye-view grid. By training on the entire camera rig, we provide evidence that our model is able to learn not only how to represent images but how to fuse predictions from all cameras into a single cohesive representation of the scene while being robust to calibration error. On standard bird's-eye-view tasks such as object segmentation and map segmentation, our model outperforms all baselines and prior work. In pursuit of the goal of learning dense representations for motion planning, we show that the representations inferred by our model enable interpretable end-to-end motion planning by "shooting" template trajectories into a bird's-eye-view cost map output by our network. We benchmark our approach against models that use oracle depth from lidar. Project page: https://nv-tlabs.github.io/lift-splat-shoot/.

Questions/Requests: Please file an issue if you have any questions or requests about the code or the paper. If you prefer your question to be private, you can alternatively email me at [email protected].

Citation

If you found this codebase useful in your research, please consider citing

@inproceedings{philion2020lift,
    title={Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D},
    author={Jonah Philion and Sanja Fidler},
    booktitle={Proceedings of the European Conference on Computer Vision},
    year={2020},
}

Preparation

Download nuscenes data from https://www.nuscenes.org/. Install dependencies.

pip install nuscenes-devkit tensorboardX efficientnet_pytorch==0.7.0

Pre-trained Model

Download a pre-trained BEV vehicle segmentation model from here: https://drive.google.com/file/d/18fy-6beTFTZx5SrYLs9Xk7cY-fGSm7kw/view?usp=sharing

Vehicle IOU (reported in paper)    Vehicle IOU (this repository)
32.07                              33.03

Evaluate a model

Evaluate the IOU of a model on the nuScenes validation set. To evaluate on the "mini" split, pass mini. To evaluate on the "trainval" split, pass trainval.

python main.py eval_model_iou mini/trainval --modelf=MODEL_LOCATION --dataroot=NUSCENES_ROOT

Visualize Predictions

Visualize the BEV segmentation output by a model:

python main.py viz_model_preds mini/trainval --modelf=MODEL_LOCATION --dataroot=NUSCENES_ROOT --map_folder=NUSCENES_MAP_ROOT

Visualize Input/Output Data (optional)

Run a visual check to make sure extrinsics/intrinsics are being parsed correctly. Left: input images with LiDAR scans projected using the extrinsics and intrinsics. Middle: the LiDAR scan that is projected. Right: X-Y projection of the point cloud generated by the lift-splat model. Pass --viz_train=True to view data augmentation.

python main.py lidar_check mini/trainval --dataroot=NUSCENES_ROOT --viz_train=False

Train a model (optional)

Train a model. Monitor with tensorboard.

python main.py train mini/trainval --dataroot=NUSCENES_ROOT --logdir=./runs --gpuid=0
tensorboard --logdir=./runs --bind_all

Acknowledgements

Thank you to Sanja Fidler, as well as David Acuna, Daiqing Li, Amlan Kar, Jun Gao, Kevin Xie, Karan Sapra, the NVIDIA AV Team, and NVIDIA Research for their help in making this research possible.

lift-splat-shoot's People

Contributors

jonahthelion

lift-splat-shoot's Issues

Question about weight decay

Hi, thank you for your great work! However, there are two points in this code that I don't understand.

  1. I saw that 1e-7 is used as the weight decay for Adam. This is the first time I have seen such a small weight decay for Adam. Any insight into that?
    weight_decay=1e-7,
  2. I can understand using data augmentation in the training phase, but why is it still used in the testing phase, especially cropping along the horizontal axis? Will it cause inaccurate testing results? (See the sketch after the code lines below.)
    crop_h = int((1 - np.mean(self.data_aug_conf['bot_pct_lim']))*newH) - fH
    crop_w = int(max(0, newW - fW) / 2)
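For context, here is a standalone sketch of what I think that validation-time crop computes (the resize numbers below are illustrative assumptions, not values read from the repo's config):

import numpy as np

# Illustrative values: a resized image of newH x newW is cropped to fH x fW.
newH, newW = 198, 352          # resized image size (example numbers)
fH, fW = 128, 352              # final crop size fed to the model
bot_pct_lim = (0.0, 0.22)      # fraction of the image bottom that may be cut off

# Validation uses the mean of the limits, so the crop is deterministic:
crop_h = int((1 - np.mean(bot_pct_lim)) * newH) - fH   # vertical offset from the top
crop_w = int(max(0, newW - fW) / 2)                    # horizontally centered
crop = (crop_w, crop_h, crop_w + fW, crop_h + fH)      # (left, top, right, bottom)
print(crop)                                            # (0, 48, 352, 176)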

Looking forward to your reply!

About the Splat module

Thanks for your great work. We have been quite eager to know more about this work.
For me, it is still rather confusing how the network converts the (C,D,H,W) image features into the BEV image.

I found that you mention both the PointPillars and OFT papers. It is rather difficult for me to connect these two methods here (both conceptually and in code), and your video presents some code on the splat part that is still quite confusing to me...

Would you be able to give a more detailed explanation or some code snippets for this?
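To show where I am stuck, here is my rough mental model of the splat step as a standalone toy (the repo itself uses a sorting + cumulative-sum trick, but as far as I can tell the effect is a scatter-add sum pool):

import torch

C, N = 8, 10000                           # feature channels, number of lifted frustum points
nx, ny = 200, 200                         # BEV grid size

feats = torch.rand(N, C)                  # one C-dim feature per lifted 3-D point
cell_xy = torch.randint(0, 200, (N, 2))   # integer BEV cell index (x, y) for every point

# "Splat": sum the features of all points that fall into the same BEV cell.
bev = torch.zeros(nx * ny, C)
flat_idx = cell_xy[:, 0] * ny + cell_xy[:, 1]   # flatten (x, y) into a single cell index
bev.index_add_(0, flat_idx, feats)              # scatter-add, i.e. sum pooling
bev = bev.view(nx, ny, C).permute(2, 0, 1)      # C x X x Y BEV feature map

Is this roughly what the voxel pooling is doing?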

Thank you.

visualize the bev feature maps

Hi, authors. I'm trying your code, and when I visualize the BEV feature maps during training, the results are confusing to me. I wonder if they are correct? In my opinion, there should be some obvious features (e.g., edges, contours) in the feature map. Looking forward to your reply ^-^

Training with 'vizdata'

'vizdata' is meant for visualization of lidar data only; I just tried something naive by assuming vizdata might be useful for detection. I was thinking it was for detection rather than segmentation. Is it possible to train with detection alone?
Does this repository work only with segmentation data? When I try passing vizdata, it gives the error below:
for batchi, (imgs, rots, trans, intrins, post_rots, post_trans, lid_pts, binimgs) in enumerate(trainloader):
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 84, in
return [default_collate(samples) for samples in transposed]
File "/anaconda3/envs/data_visual/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 80137] at entry 0 and [3, 80421] at entry 1
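In case it is useful, the failure seems to come from default_collate trying to stack per-sample lidar point clouds that have different numbers of points. A hedged sketch of a custom collate_fn that pads them to a common length (not part of this repo; index 6 follows the tuple unpacking in the training loop above):

import torch
from torch.utils.data._utils.collate import default_collate

def pad_lidar_collate(batch):
    # Each sample is assumed to be the 8-tuple unpacked above, with lid_pts (3 x Npts) at index 6.
    lid_pts = [sample[6] for sample in batch]
    max_pts = max(p.shape[1] for p in lid_pts)
    padded = torch.stack([
        torch.nn.functional.pad(p, (0, max_pts - p.shape[1])) for p in lid_pts
    ])
    rest = default_collate([sample[:6] + sample[7:] for sample in batch])
    return rest[:6] + [padded] + rest[6:]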

Details about the code in model.py

Thanks a lot for sharing the code. You have done great work!

I have some questions about your code: In the model.py file, can you provide more details about the get_geometry function and the voxel_pooling function? I'm so confused about how they actually work.

Thanks a lot!

Research

How can one use the BEV radar data in nuScenes and encode it in place of the LIDAR_TOP data?

Hi, @Kevinpgalligan and @maciej-autobon,

I have created the dataset for road segmentation and drivable area segmentation (see the attached example images).

But if I use the original training settings, the performance is very poor. Do you have any suggestions? Thanks for your help.

Originally posted by @zhangchbin in #8 (comment)

Why don't we change the intrinsic matrix scale?

Hello,

This work is very impressive. However, I have a little problem.

When you use the get_geometry function, I noticed that although the feature map is smaller than the original input images, you didn't change the intrinsic matrix.

Does this cause any problems? If you can answer this for me, I would greatly appreciate it. Thanks!

Why are images still cropped during the validation phase?

Hello, thanks for your excellent work~
I find that during the validation phase the data pipeline still crops the images, which reduces their field of view. Why is this procedure still applied? Should the cropping be disabled at test time?
Looking forward to your reply, I appreciate it~

Calculation Question about Rank

Hello, and thank you for your outstanding work. I would like to ask why the rank values are calculated in this way, and whether the expression ensures that the rank values are different for points from different BEV grid cells.

ranks = geom_feats[:, 0] * (self.nx[1] * self.nx[2] * B)\
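To make the question concrete, here is a standalone toy version of the rank computation as I understand it (the grid sizes and the meaning of the geom_feats columns are my assumptions from reading the surrounding code):

import torch

nx = (200, 200, 1)     # assumed BEV grid size (X, Y, Z)
B = 4                  # batch size

# One row per lifted point; columns assumed to be (x, y, z, batch) cell indices.
geom_feats = torch.tensor([[10, 20, 0, 0],
                           [10, 20, 0, 0],   # same cell, same sample  -> same rank
                           [10, 20, 0, 1],   # same cell, other sample -> different rank
                           [11, 20, 0, 0]])  # neighbouring cell       -> different rank

# Mixed-radix encoding: each index is weighted by the product of the ranges of the
# indices after it, exactly like flattening a 4-D array index into a 1-D offset.
ranks = (geom_feats[:, 0] * (nx[1] * nx[2] * B)
         + geom_feats[:, 1] * (nx[2] * B)
         + geom_feats[:, 2] * B
         + geom_feats[:, 3])
print(ranks)   # tensor([8080, 8080, 8081, 8880])

If every index stays within its range, two points get the same rank exactly when they fall into the same BEV cell of the same sample -- is that the intended guarantee?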

Depth distribution and context vector of a pixel

I wanted to understand more about the lift stage.
The paper mentions that the lift stage is where the 2D-to-3D conversion happens.
As a first step in this process, representations are 'generated at all possible depths for each pixel', which is the set D of depth bins from 4.0 to 45.0 with a step of 1.0. So the depth distribution is defined over bins from 4.0 to 45.0 at 1.0 m spacing, isn't it?
Then there is something called the context vector c; I am not sure how this gets generated for each pixel.

It would be a great help if anyone could give a little more explanation of both of these (the depth distribution and the context vector of each pixel).
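To make the question concrete, here is a minimal sketch of how I currently picture the lift step (shapes are illustrative and the single prediction head is my assumption, not code taken from the repo):

import torch

# D depth bins (4.0 m to 45.0 m in 1.0 m steps -> D = 41), C context channels, H x W feature map.
D, C, H, W = 41, 64, 8, 22
raw = torch.randn(1, D + C, H, W)           # one head predicts D + C channels per pixel

alpha = raw[:, :D].softmax(dim=1)           # depth distribution: sums to 1 over the D bins
context = raw[:, D:]                        # C-dim context vector per pixel

# Outer product per pixel: every depth bin gets the context vector scaled by alpha_d,
# giving a (C, D, H, W) frustum of features that the splat step later pools into BEV.
frustum_feats = alpha.unsqueeze(1) * context.unsqueeze(2)   # shape (1, C, D, H, W)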

data augmentation

Why don't you use ColorJitter in data augmentation? Does it influence performance?
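For reference, here is the kind of thing I had in mind, as a hedged sketch (torchvision's ColorJitter applied before the geometric augmentation; this is not code from the repo):

import torchvision.transforms as T

# Hypothetical addition: photometric jitter on the PIL image before the existing
# resize/crop/flip, so the intrinsics and post_rots/post_trans bookkeeping is unchanged.
color_aug = T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05)

def jitter_if_training(pil_img, training=True):
    return color_aug(pil_img) if training else pil_img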

Questions about creating depth bev map

Thank you for your exciting research!
Is it possible to obtain a depth BEV map?
I'm wondering if there is any code implemented for this.
For example, how would we get a 1x180x180 BEV depth map?

Best,
Minseok Joo

Do we need to train 10000 epochs?

Thanks for your work!

In the paper, you said 300k steps were used to train the network, but in the code you use 10,000 epochs. Through calculation, I found that 300k steps amount to about 43 epochs (on the nuScenes trainval dataset); I don't know if something is wrong.
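My back-of-envelope arithmetic, assuming the 28,130-sample nuScenes train split and the default batch size of 4 (both assumptions on my part):

steps_per_epoch = 28130 / 4       # ~7,033 iterations per epoch
print(300_000 / steps_per_epoch)  # ~42.7, i.e. roughly 43 epochs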

Look forward to your reply!

About the code for training

Hello,
I am interested in your work and am now trying to run the code on the mini dataset. The terminal outputs "Error in magma_getdevice_arch: MAGMA not initialized (call magma_init() first) or bad device" 4 times every epoch, and it seems to occur at line 84 in train.py. However, the program keeps running without being interrupted by the error. Should I ignore the problem, or is there a solution for it?
My environment is Ubuntu 18.04 with Python 3.6 and CUDA 11.4.
Thanks.

ranks

Hello, I am confused about the ranks in src/models.py. Can you explain or share more details about the ranks? Thank you!

Minor issues getting the scripts to run

Thanks for sharing!

I've come across a few minor issues while running the scripts.

  1. tensorboardX was missing as a dependency; I couldn't run the "Visualize Input/Output Data" script without it.
  2. "Evaluate a model" - this failed because the default GPU ID is 1 in the script, and I only have 1 GPU. (src/explore.py, eval_model_iou() function). I fixed this by changing the default to 0. There are other functions in this script that also use gpuid=1 as default. I see now that you can set gpuid from the CLI, but 0 might be a saner default.
  3. "Visualise predictions" - fails due to a missing file (see below). Perhaps I'm not setting the map_folder parameter correctly, or it presumes that you have the full nuScenes dataset available rather than just the mini version?
python main.py viz_model_preds mini --modelf=/home/kevingal/phd/placement/liftsplat-model.pt --dataroot=/home/kevingal/phd/placement/nuscenes/ --map_folder=/home/kevingal/phd/placement/nuscenes/mini/
NuscData: 323 samples. Split: train.
                   Augmentation Conf: {'resize_lim': (0.193, 0.225), 'final_dim': (128, 352), 'rot_lim': (-5.4, 5.4), 'H': 900, 'W': 1600, 'rand_flip': True, 'bot_pct_lim': (0.0, 0.22), 'cams': ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT', 'CAM_BACK_LEFT', 'CAM_BACK', 'CAM_BACK_RIGHT'], 'Ncams': 5}
NuscData: 81 samples. Split: val.
                   Augmentation Conf: {'resize_lim': (0.193, 0.225), 'final_dim': (128, 352), 'rot_lim': (-5.4, 5.4), 'H': 900, 'W': 1600, 'rand_flip': True, 'bot_pct_lim': (0.0, 0.22), 'cams': ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT', 'CAM_BACK_LEFT', 'CAM_BACK', 'CAM_BACK_RIGHT'], 'Ncams': 5}
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    Fire({
  File "/home/kevingal/anaconda3/envs/liftsplat/lib/python3.8/site-packages/fire/core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/kevingal/anaconda3/envs/liftsplat/lib/python3.8/site-packages/fire/core.py", line 463, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/kevingal/anaconda3/envs/liftsplat/lib/python3.8/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/kevingal/phd/placement/lift-splat-shoot/src/explore.py", line 293, in viz_model_preds
    nusc_maps = get_nusc_maps(map_folder)
  File "/home/kevingal/phd/placement/lift-splat-shoot/src/tools.py", line 288, in get_nusc_maps
    nusc_maps = {map_name: NuScenesMap(dataroot=map_folder,
  File "/home/kevingal/phd/placement/lift-splat-shoot/src/tools.py", line 288, in <dictcomp>
    nusc_maps = {map_name: NuScenesMap(dataroot=map_folder,
  File "/home/kevingal/anaconda3/envs/liftsplat/lib/python3.8/site-packages/nuscenes/map_expansion/map_api.py", line 88, in __init__
    with open(self.json_fname, 'r') as fh:
FileNotFoundError: [Errno 2] No such file or directory: '/home/kevingal/phd/placement/nuscenes/mini/maps/singapore-hollandvillage.json'

Cheers.

model can't adapt to different intrinsics?

Is it true that one model can only use one set of intrinsics? When I run your pre-trained model on my own dataset, which has different intrinsics, object distances become slightly wrong.

Evaluation result

Thanks for sharing the excellent work!
I have a question about the evaluation of the pretrained result.
I got a result different from the paper.
Do you have an idea of how to correct it? Thank you!

> ======
> Loading NuScenes tables for version v1.0-mini...
> 23 category,
> 8 attribute,
> 4 visibility,
> 911 instance,
> 12 sensor,
> 120 calibrated_sensor,
> 31206 ego_pose,
> 8 log,
> 10 scene,
> 404 sample,
> 31206 sample_data,
> 18538 sample_annotation,
> 4 map,
> Done loading in 0.348 seconds.
> ======
> Reverse indexing ...
> Done reverse indexing in 0.1 seconds.
> ======
> NuscData: 323 samples. Split: train.
>                    Augmentation Conf: {'resize_lim': (0.193, 0.225), 'final_dim': (128, 352), 'rot_lim': (-5.4, 5.4), 'H': 900, 'W': 1600, 'rand_flip': True, 'bot_pct_lim': (0.0, 0.22), 'cams': ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT', 'CAM_BACK_LEFT', 'CAM_BACK', 'CAM_BACK_RIGHT'], 'Ncams': 5}
> NuscData: 81 samples. Split: val.
>                    Augmentation Conf: {'resize_lim': (0.193, 0.225), 'final_dim': (128, 352), 'rot_lim': (-5.4, 5.4), 'H': 900, 'W': 1600, 'rand_flip': True, 'bot_pct_lim': (0.0, 0.22), 'cams': ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT', 'CAM_BACK_LEFT', 'CAM_BACK', 'CAM_BACK_RIGHT'], 'Ncams': 5}
> Loaded pretrained weights for efficientnet-b0
> loading model525000.pt
> running eval...
> {'loss': 0.12198955280545318, 'iou': 0.2699367443187027}

Question about the lane segmentation task

Congratulations on the great work.

I have a question about the lane segmentation task. I would like to ask you for the details of generating the lane mask.
In the original paper, you point out that 'For mapping, we transform map layers from the nuScenes map into the ego frame using the provided 6 DOF localization and rasterize.' Can you provide more details about that, or share the code?

Thank you very much! Looking forward to your reply!

Wrong intrinsics?

I notice that the model uses the intrinsics of the original cameras (no downscaling):

combine = rots.matmul(torch.inverse(intrins))

But since the images have been downscaled (as specified by final_dim), would that imply that the intrinsics should also be downscaled? I don't see the intrinsics being downscaled anywhere.

Could this be a bug? Or maybe I am missing something...
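One possibility I considered, based on my (possibly wrong) reading of the data loader: the resize/crop appears to be tracked separately as post_rots/post_trans and undone on the frustum coordinates before the intrinsics are applied, so the original intrinsics would still be the correct ones. A simplified sketch of that idea:

import torch

# A pixel in the augmented image relates to the original image by an affine map:
# p_aug = post_rot @ p_orig + post_tran (resize -> scale, crop/flip -> shift/sign).
post_rot = torch.diag(torch.tensor([0.22, 0.22, 1.0]))   # e.g. a 0.22x resize
post_tran = torch.tensor([0.0, -48.0, 0.0])              # e.g. a 48-pixel crop from the top
p_aug = torch.tensor([100.0, 64.0, 1.0])                 # pixel in the network's input image

# Undo the augmentation first; then the *original* intrinsics are the right ones to apply.
p_orig = torch.inverse(post_rot) @ (p_aug - post_tran)

Is that the intended reasoning, or is the intrinsics rescaling really missing?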

Release of motion planning part of code

Hi, thanks for your great work on Lift-Splat-Shoot (LSS). I wonder if you have any plans to release the motion planning part of the code?

Currently I only find the code for BEV segmentation -- the first half of the LSS model. Thanks!

depth choice

Why is the depth range chosen from 4 m to 45 m? What is the basis for this choice?

How is the nuScenes folder structure supposed to be?

I was trying to run the demo but I keep having problems with the dataset root.

Looking at the NuScenes forum, they say that the dataset must be consolidated into one folder following this file tree (which is how I currently have it):

└── /data/sets/nuscenes
   ├── maps
   ├── samples
   ├── sweeps
   └── v1.0-{mini, test, trainval}
       ├── Usual files (e.g. attribute.json, calibrated_sensor.json etc.)
       └── category.json  <- contains the categories of the labels

But when trying to evaluate the model with:

python main.py eval_model_iou mini/trainval --modelf=models/model525000.pt --dataroot=data/nuscenes

The following error pops out:

AssertionError: Database version not found: data/nuscenes/mini/trainval/v1.0-mini/trainval

It seems a mini folder is needed, which is not part of the nuScenes recommendation.

@jonahthelion is there a specific organization of the dataset you had? I am working with the most up-to-date download from the NuScenes website.

Thank you!

Why does “get_binimg” not crop and resize?

The image data is randomly cropped and resized, so why does get_binimg not crop and resize?

https://github.com/nv-tlabs/lift-splat-shoot/blob/d74598cb51101e2143097ab270726a561f81f8fd/src/data.py#L171C19-L171C19

def get_binimg(self, rec):
    egopose = self.nusc.get('ego_pose',
                            self.nusc.get('sample_data', rec['data']['LIDAR_TOP'])['ego_pose_token'])
    trans = -np.array(egopose['translation'])
    rot = Quaternion(egopose['rotation']).inverse
    img = np.zeros((self.nx[0], self.nx[1]))
    for tok in rec['anns']:
        inst = self.nusc.get('sample_annotation', tok)
        # add category for lyft
        if not inst['category_name'].split('.')[0] == 'vehicle':
            continue
        box = Box(inst['translation'], inst['size'], Quaternion(inst['rotation']))
        box.translate(trans)
        box.rotate(rot)

        pts = box.bottom_corners()[:2].T
        pts = np.round(
            (pts - self.bx[:2] + self.dx[:2]/2.) / self.dx[:2]
            ).astype(np.int32)
        pts[:, [1, 0]] = pts[:, [0, 1]]
        cv2.fillPoly(img, [pts], 1.0)

    return torch.Tensor(img).unsqueeze(0)

Code and dataset for map segmentation task

Hi! Thank you for your wonderful work.

Currently, the repo only has weights and code for the detection task.

How can we run the repo to reproduce the map segmentation results in your paper? (such as Table 2). Thank you.

What is the Z bound

Hey guys, thanks a lot for sharing the code. The paper is well written with appropriate details.
I just wanted to know what zbound stands for?
Best Regards,
Pramit

overfitted after 8 epochs

I trained the model with batch size 8 on the full nuScenes dataset, and it overfitted after 30K batch iterations.


Where is the attention α in the code?

In the paper:
'At pixel p, the network predicts a context c ∈ R^C and a distribution over depth α ∈ Δ^{|D|−1} for every pixel. The feature c_d ∈ R^C associated to point p_d is then defined as the context vector for pixel p scaled by α_d.'

But in the code, I can't find where α is or how it is learned.

Ncams=5

I want to know: why is Ncams=5 during training?
I found:

cams = np.random.choice(self.data_aug_conf['cams'], self.data_aug_conf['Ncams'], replace=False)

Training with drivable areas / maps?

Hi!

Thanks for your great contribution! I was trying to train the network again with nuScenes and other datasets, but I noticed that the train.py code does not incorporate the drivable area information, which presumably should come from the map. Will this functionality be provided? Thanks!

Best,
Yiyang

Questions about create_frustum and voxel_pooling

Hi, thanks for your excellent work! I am a little confused about the create_frustum and voxel_pooling functions. It would be great if you could give some further explanation.

In create_frustum, the code indicates that the output dimension is D x H x W x 3. I am wondering what this 3 represents. Is it an RGB value, or is it the coordinate of a point in the frustum? I am also wondering whether the input to this function is the raw image or extracted features.

For voxel_pooling, my understanding is that it sums up the features of all the points in the same voxel (pillar) using the cumsum trick. The output dimension of this function is B x C x Z x X x Y, where X, Y, and Z are coordinates in the BEV frame (which are not the same as H, W, and D). However, the paper says 'perform sum pooling to create a C x H x W tensor', which really confuses me. Why do we still want H and W here? Also, I am wondering how you get rid of Z.
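To make the first question concrete, here is a standalone sketch of what I think the D x H x W x 3 frustum holds (the sizes are illustrative assumptions, and I am assuming the last dimension is (pixel u, pixel v, depth) rather than RGB):

import torch

ogfH, ogfW = 128, 352          # network input image size
fH, fW = 8, 22                 # feature-map size after the backbone's downsampling
dbound = (4.0, 45.0, 1.0)      # depth bins in metres

ds = torch.arange(*dbound).view(-1, 1, 1).expand(-1, fH, fW)            # depth of each point
D = ds.shape[0]
xs = torch.linspace(0, ogfW - 1, fW).view(1, 1, fW).expand(D, fH, fW)   # pixel u coordinate
ys = torch.linspace(0, ogfH - 1, fH).view(1, fH, 1).expand(D, fH, fW)   # pixel v coordinate
frustum = torch.stack((xs, ys, ds), -1)                                  # D x H x W x 3

Did I get that right?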

Unable to train model on custom BEV dataset

Hi,

I'm trying to train the provided model on a custom BEV semantic segmentation dataset, but the results are terrible. The network always collapses to the mean output and it does not seem to be learning anything. I even tried reducing the classes to contain only vehicles and road, but the results do not change.
Given that my dataset has a much larger resolution (px/m) than the one used in the paper, I also tried increasing the "xbound", "ybound" and "dbound" resolution sizes with no success.

There is no issue with the dataset because I have successfully managed to train 3 other existing approaches on it.

Could you please advise as to which part of the network I should look into to debug this issue?

Thanks!

Why did you remove the overlapping points on the pillar map in the cumsum trick?

(src/tools.py, def cumsum_trick())

def cumsum_trick(x, geom_feats, ranks):
    x = x.cumsum(0)
    kept = torch.ones(x.shape[0], device=x.device, dtype=torch.bool)
    kept[:-1] = (ranks[1:] != ranks[:-1])

    x, geom_feats = x[kept], geom_feats[kept]
    x = torch.cat((x[:1], x[1:] - x[:-1]))

    return x, geom_feats

The overlapping points appear to be removed in the following lines:

kept[:-1] = (ranks[1:] != ranks[:-1])
x, geom_feats = x[kept], geom_feats[kept]

I don't understand why there is a remove operation instead of a "sum" operation.

Would you explain why this is?
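For concreteness, here is a toy run of the snippet above with made-up numbers (standalone, not repo code):

import torch

# Five points already sorted by rank; the middle three share rank 7 (same BEV pillar).
x = torch.tensor([[1.], [2.], [3.], [4.], [5.]])
ranks = torch.tensor([3, 7, 7, 7, 9])

x = x.cumsum(0)                              # [[1], [3], [6], [10], [15]]
kept = torch.ones(x.shape[0], dtype=torch.bool)
kept[:-1] = ranks[1:] != ranks[:-1]          # keep only the last point of each rank group
x = x[kept]                                  # [[1], [10], [15]]  running totals at group ends
x = torch.cat((x[:1], x[1:] - x[:-1]))       # [[1], [9], [5]]    per-group sums: 1, 2+3+4, 5

So the "remove" keeps only the running total at the end of each rank group, and the differencing afterwards turns those totals into per-group sums -- is that the intended reading?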
