
woodscape's Introduction

WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving

This repository contains tools and information about the WoodScape dataset: https://woodscape.valeo.com.

It also contains boilerplate code to encourage further research into building a unified perception model for autonomous driving.


Update (Nov 16th, 2021): Weather dataset for classification has been uploaded here

Update (Nov 8th, 2021): ChargePad dataset for object detection has been uploaded here

Update (May 20th, 2021): Scripts to generate dense polygon points for instance segmentation have been added. Precomputed boxes and polygon points (uniformly spaced) are now available for download here

Update (April 15th, 2021): Calibration files (intrinsic and extrinsic parameters) are now available in our Google Drive (link).

Information on the calibration process can be found here

Update (March 5th, 2021): The WoodScape paper was published at ICCV in November 2019, and we announced that the dataset was planned for release in Q1 2020. Unfortunately, unexpected data protection policies had to be put in place in order to comply with EU GDPR and Chinese data laws. Specifically, we had to remove one third of our dataset, which was recorded in China, and also employ a third-party anonymization company for the remaining data. This was exacerbated by the COVID-19 situation and the subsequent economic downturn impacting the automotive sector. We apologize for the delay of more than a year in the release.

Finally, we have released the first set of tasks in our Google Drive (link). It has 8.2K images along with their corresponding 8.2K previous images needed for geometric tasks. The remaining 1.8K test samples are held out for a benchmark. It currently has annotations for semantic segmentation, instance segmentation, motion segmentation and 2D bounding boxes. Soiling detection and end-to-end driving prediction tasks will be released by March 15th, 2021. Sample scripts for using the data will be added to this GitHub repository shortly as well. Once this first set of tasks is complete and tested, additional tasks will be added gradually. The upcoming website will include an overview of the status of the additional tasks.

Despite the delay, we still believe the dataset is unique in the field, and we understand that it has been long awaited by many researchers. We hope that an ecosystem of research in multi-task fisheye camera development will thrive based on this dataset. We will continue to fix bugs, support and develop the dataset, so any feedback will be taken on board.

Demo

Please click on the image below for a teaser video showing annotated examples and sample results.

Dataset Contents

This dataset version consists of 10K images with annotations for 7 tasks.

  • RGB images
  • Semantic segmentation
  • 2D bounding boxes
  • Instance segmentation
  • Motion segmentation
  • Previous images
  • CAN information
  • Lens soiling data and annotations
  • Calibration Information
  • Dense polygon points for objects

Coming Soon:

  • Fisheye synthetic data with semantic annotations
  • Lidar and dGPS scenes

Data organization

woodscape
│   README.md    
│
└───rgb_images
│   │   00001_[CAM].png
│   │   00002_[CAM].png
|   |   ...
│   │
└───previous_images
│   │   00001_[CAM]_prev.png
│   │   00002_[CAM]_prev.png
|   |   ...
│   │
└───semantic_annotations
        │   rgbLabels
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...
        │   gtLabels
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...
│   │
└───box_2d_annotations
│   │   00001_[CAM].png
│   │   00002_[CAM].png
|   |   ...
│   │
└───instance_annotations
│   │   00001_[CAM].json
│   │   00002_[CAM].json
|   |   ...
│   │
└───motion_annotations
        │   rgbLabels
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...
        │   gtLabels
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...
│   │
└───vehicle_data
│   │   00001_[CAM].json
│   │   00002_[CAM].json
|   |   ...
│   │
│   │
└───calibration_data
│   │   00001_[CAM].json
│   │   00002_[CAM].json
|   |   ...
│   │
└───soiling_dataset
        │   rgb_images
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...
        │   gt_labels
        │   │   00001_[CAM].png
        │   │   00002_[CAM].png
        |   |   ...

[CAM] :

FV --> Front CAM

RV --> Rear CAM

MVL --> Mirror Left CAM

MVR --> Mirror Right CAM
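
For example, the file naming makes it easy to pair each frame with its previous image. The snippet below is a minimal sketch (not part of the released scripts; the dataset path is a placeholder) that collects current/previous RGB pairs for one camera, assuming the layout shown above:

from pathlib import Path

# Minimal sketch: pair each RGB frame with its previous image for one camera,
# following the directory layout and naming convention shown above.
DATASET_DIR = Path("woodscape")          # placeholder: adjust to your download location
CAMERAS = ("FV", "RV", "MVL", "MVR")

def paired_frames(cam):
    """Yield (rgb_image, previous_image) path pairs for one camera."""
    for rgb in sorted((DATASET_DIR / "rgb_images").glob(f"*_{cam}.png")):
        prev = DATASET_DIR / "previous_images" / f"{rgb.stem}_prev.png"
        if prev.exists():                # previous frames are needed for geometric tasks
            yield rgb, prev

for rgb, prev in paired_frames("FV"):
    print(rgb.name, "<-", prev.name)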

Annotation Information

  • Instance annotations are provided for more than 40 classes as polygons in JSON format. A full list of classes can be found in "/scripts/mappers/class_names.json" (a minimal loading sketch follows this list).

  • We provide semantic segmentation annotations for 10 classes: void, road, lanes, curbs, rider, person, vehicles, bicycle, motorcycle and traffic_sign. You can generate segmentation annotations for all 40+ classes using the provided scripts. See the examples: for 3 (+void) classes, "scripts/configs/semantic_mapping_3_classes.json"; for 9 (+void) classes, "scripts/configs/semantic_mapping_9_classes.json".

  • We provide 2D boxes for 5 classes: pedestrians, vehicles, bicycle, traffic lights and traffic sign. You can generate 2D boxes for 14+ classes using the provided scripts. See the example for 5 classes: "scripts/configs/box_2d_mapping_5_classes.json".

    • We also provide dense polygon points for the above 5 classes. These dense, uniformly spaced points can be used for generating instance masks.
  • Motion annotations are available for 19 classes. A full list of classes, indexes and colour coding can be found in motion_class_mapping.json
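
The instance polygons can be rasterized into per-instance masks with a few lines of Python. This is a rough sketch only: the field names "annotation" and "polygon" and the image resolution are assumptions, so inspect a sample JSON and "/scripts/mappers/class_names.json" for the exact schema before relying on it.

import json
import cv2
import numpy as np

# Rough sketch: rasterize instance polygons into an integer instance-id mask.
# The keys "annotation" and "polygon" are assumed; check a sample file for the real schema.
with open("instance_annotations/00001_FV.json") as f:
    ann = json.load(f)

mask = np.zeros((966, 1280), dtype=np.uint16)              # assumed image height x width
for instance_id, obj in enumerate(ann.get("annotation", []), start=1):
    polygon = np.array(obj["polygon"], dtype=np.int32)      # assumed: list of [x, y] vertices
    cv2.fillPoly(mask, [polygon], color=instance_id)        # one integer id per instance

cv2.imwrite("00001_FV_instance_mask.png", mask)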

Installation

Use the package manager pip to install the required packages.

pip install numpy
pip install opencv-python
pip install tqdm
pip install shapely
pip install Pillow
pip install matplotlib
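
These requirements can also be installed with a single command:

pip install numpy opencv-python tqdm shapely Pillow matplotlib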

On Windows, Shapely might raise OSError: [WinError 126] when handling polygons; use the conda distribution as an alternative, or install Shapely directly from a .whl file.

Usage

To generate semantic segmentation maps, 2D boxes, or dense polygon points for additional classes, please use the following scripts:

semantic_map_generator.py: Generates the semantic segmentation annotations from the JSON instance annotations

python semantic_map_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/semantic_annotations --semantic_class_mapping [DATASET DIR]/scripts/configs/semantic_mapping_9_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json
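
The generated gtLabels maps encode one class index per pixel (the rgbLabels variant is a colour-coded visualisation). A quick sanity check of the output might look like this (a sketch; the index-to-name mapping depends on the semantic_mapping_*.json you chose):

import numpy as np
from PIL import Image

# Sketch: list the class indices present in a generated label map and their pixel counts.
gt = np.array(Image.open("semantic_annotations/gtLabels/00001_FV.png"))
ids, counts = np.unique(gt, return_counts=True)
for class_id, n in zip(ids, counts):
    print(f"class {class_id}: {n} pixels")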

box_2d_generator.py: Generates the 2D boxes from the JSON instance annotations

python box_2d_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/box_2d_annotations --box_2d_class_mapping [DATASET DIR]/scripts/configs/box_2d_mapping_5_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json --rgb_image_path [DATASET DIR]/data/rgb_images

polygon_generator.py: Generates the dense polygon points from the JSON instance annotations

python polygon_generator.py --src_path [DATASET DIR]/data/instance_annotations/ --dst_path [DATASET DIR]/data/polygon_annotations --box_2d_class_mapping [DATASET DIR]/scripts/configs/box_2d_mapping_5_classes.json --instance_class_mapping [DATASET DIR]/scripts/mappers/class_names.json --rgb_image_path [DATASET DIR]/data/rgb_images

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License for the code

MIT

License for the data

Proprietary

Paper

WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving
Senthil Yogamani, Ciaran Hughes, Jonathan Horgan, Ganesh Sistu, Padraig Varley, Derek O'Dea, Michal Uricar, Stefan Milz, Martin Simon, Karl Amende, Christian Witt, Hazem Rashed, Sumanth Chennupati, Sanjaya Nayak, Saquib Mansoor, Xavier Perroton, Patrick Perez
Valeo
IEEE International Conference on Computer Vision (ICCV), 2019 (Oral)

If you find our dataset useful, please cite our paper:

@article{yogamani2019woodscape,
  title={WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving},
  author={Yogamani, Senthil and Hughes, Ciar{\'a}n and Horgan, Jonathan and Sistu, Ganesh and Varley, Padraig and O'Dea, Derek and Uric{\'a}r, Michal and Milz, Stefan and Simon, Martin and Amende, Karl and others},
  journal={arXiv preprint arXiv:1905.01489},
  year={2019}
}

woodscape's People

Contributors

brainstinct0, g453, rvarun7777, senthil-yogamani, sleepywitti, tanutarou


woodscape's Issues

The confusion of real-world length calculations

How do I calculate the real-world length corresponding to an offset of 1 along the x-, y-, and z-axes in the world coordinate system (world_points)? :<

And how do I calculate the real-world length corresponding to 1 pixel along the x- and y-axes in screen_points? :<

I'm going crazy :< This has puzzled me for a long time.
Thank you!

Lidar ground truth

Hi guys,
I am trying to implement your paper FisheyeDistanceNet. I found that there is no lidar ground truth information available.
When is the lidar ground truth likely to be released?

With regards,
Ajith

TypeError: 'Tupperware' object is not callable

I am trying to run the program but keep getting a TypeError. You can see the logs below. I am executing the code in Google Colab and all files are on Google Drive. I also tried running it locally on my machine but ran into the same TypeError. Any hints or solutions?

{
    "train": "detection",
    "dataset_dir": "/content/gdrive/MyDrive/WoodScape-master/omnidet/WoodScape_ICCV19",
    "train_file": "/content/gdrive/MyDrive/WoodScape-master/omnidet/data/train.txt",
    "val_file": "/content/gdrive/MyDrive/WoodScape-master/omnidet/data/val.txt",
    "test_file": "/content/gdrive/MyDrive/WoodScape-master/omnidet/data/test.txt",
    "output_directory": "/content/gdrive/MyDrive/WoodScape-master/omnidet/data/output",
    "model_name": "res18_baseline",
    "dataset": "woodscape_raw",
    "input_height": 288,
    "input_width": 544,
    "network_layers": 18,
    "pose_network_layers": 18,
    "frame_idxs": [
        0,
        -1
    ],
    "pose_model_type": "separate",
    "pose_model_input": "pairs",
    "rotation_mode": "euler",
    "num_scales": 4,
    "crop": true,
    "disable_auto_mask": false,
    "ego_mask": true,
    "reconstr_weight": 0.15,
    "ssim_weight": 0.85,
    "smooth_weight": 0.001,
    "clip_loss_weight": 0.5,
    "semantic_num_classes": 10,
    "semantic_loss": "focal_loss",
    "semantic_class_weighting": "woodscape_enet",
    "motion_class_weighting": "motion_enet",
    "motion_loss": "focal_loss",
    "siamese_net": true,
    "num_classes_detection": 5,
    "classes_names": [
        "vehicles",
        "person",
        "bicycle",
        "traffic_sign",
        "traffic_light"
    ],
    "detection_conf_thres": 0.8,
    "detection_nms_thres": 0.2,
    "anchors1": [
        [
            24,
            45
        ],
        [
            28,
            24
        ],
        [
            50,
            77
        ]
    ],
    "anchors2": [
        [
            52,
            39
        ],
        [
            92,
            145
        ],
        [
            101,
            69
        ]
    ],
    "anchors3": [
        [
            52,
            39
        ],
        [
            92,
            145
        ],
        [
            101,
            69
        ]
    ],
    "batch_size": 22,
    "num_workers": 6,
    "epochs": 125,
    "learning_rate": 0.0001,
    "scheduler_step_size": [
        100,
        110
    ],
    "min_distance": 0.1,
    "max_distance": 100.0,
    "log_frequency": 300,
    "val_frequency": 300,
    "save_frequency": 20,
    "weighing": "enet",
    "num_classes": 10,
    "pretrained_weights": "/content/gdrive/MyDrive/res18",
    "models_to_load": [
        "detection"
    ],
    "onnx_model": "omnidet",
    "opset_version": 12,
    "model_summary": true,
    "init_weights": true,
    "video_name": "norm",
    "model_path": "/content/gdrive/MyDrive/res18",
    "onnx_export_path": "/content/drive/MyDrive/export/",
    "onnx_load_model": "/content/gdrive/MyDrive/res18/onnx/omnidet_float32_opset12.onnx",
    "device": "cuda:0",
    "cuda_visible_devices": "0",
    "use_multiple_gpu": false
}
=> Clean up the log directory?y
=> Cleaned up the logs!
=> Training on the WOODSCAPE_RAW dataset 
=> Training model named: res18_baseline 
=> Models and tensorboard events files are saved to: /content/gdrive/MyDrive/WoodScape-master/omnidet/data/output 
=> Training is using the cuda device id: 0 
=> Loading woodscape_raw training and validation dataset
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 6 worker processes in total. Our suggested max number of worker in current system is 4, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
=> Total number of training examples: 8029 
=> Total number of validation examples: 205
=> Loading model from folder /content/gdrive/MyDrive/res18
Loading detection weights...
Cannot find {} weights so {} is randomly initialized
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/WoodScape-master/omnidet/main.py", line 109, in <module>
    main()
  File "/content/gdrive/MyDrive/WoodScape-master/omnidet/main.py", line 76, in main
    model = DetectionModel(args)
  File "/content/gdrive/MyDrive/WoodScape-master/omnidet/train_detection.py", line 277, in __init__
    self.pre_init()
  File "/content/gdrive/MyDrive/WoodScape-master/omnidet/train_detection.py", line 94, in pre_init
    self.save_args()
  File "/content/gdrive/MyDrive/WoodScape-master/omnidet/utils.py", line 135, in save_args
    yaml.dump(to_save, f)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/main.py", line 1380, in dump
    block_seq_indent=block_seq_indent,
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/main.py", line 1321, in dump_all
    dumper._representer.represent(data)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 80, in represent
    node = self.represent_data(data)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 103, in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 321, in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 214, in represent_mapping
    node_value = self.represent_data(item_value)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 107, in represent_data
    node = self.yaml_multi_representers[data_type](self, data)
  File "/usr/local/lib/python3.7/dist-packages/ruamel/yaml/representer.py", line 447, in represent_object
    reduce = data.__reduce_ex__(2)
TypeError: 'Tupperware' object is not callable

Error in running semantic_map_generator

Trying to generate the semantic map. Getting this error.

!python /content/WoodScape/scripts/semantic_map_generator.py --src_path /content/instance_annotations/ --dst_path /content/semantic_annotations/ --semantic_class_mapping /content/WoodScape/scripts/configs/semantic_mapping_9_classes.json --instance_class_mapping /content/WoodScape/scripts/mappers/class_names.json

Error I am getting:
image

About OmniDet loss function

Thanks for your release of this dataset.

I read the OmniDet paper and found that you use an autoencoder to extract features for the feature-metric loss.

Question One:
Figure 2 in your paper shows that this feature is extracted from the last output layer of the decoder instead of the encoder, which is confusing. Is this just a drawing mistake, or is the feature Ft really taken from the decoder?

Question Two:
The loss function for the autoencoder only contains L_dis and L_cvt, but not L_reconstruction, which differs from that of "Feature-metric Loss for Self-supervised Learning of Depth and Egomotion". Is this just a writing mistake?

I hope you can answer these two questions, which have puzzled me a lot.

About calibration code

Hi, I am trying your calibration code (projection.py) on fisheye images. I found that the distortion coefficients k1-k4 are very different from the standard format in OpenCV (their values are smaller than 1, while yours are in the hundreds). Could you please share your calibration code and explain how your coefficients correspond to OpenCV's?

Some doubts about your DirtyGAN

Hi, I have some doubts about the way you make use of the generated masks. The C2S generator (mentioned in the paper) takes in both the clean image and the generated mask, but how are these two inputs fed to the C2S generator during training? Are they concatenated, multiplied element-wise, or combined in some other way?

mismatch in quaternion data for extrinsic parameters

The sample data shared in the sample application and the calibration README files gives the extrinsic rotation data as a quaternion, whereas the released calibration data contains extrinsics in what looks like Euler-angle format.

How can I use this new data to obtain the rotation matrix?

Best Regards
Tahera

Lidar Ground Truth

Hello Guys,

My current research is based on fisheye cameras, and the WoodScape dataset seems to be totally aligned with it.
I wanted to ask whether you are planning to release the lidar data as well anytime soon?

Regards,
Hamza

cylindrical to fisheye

Hi, thanks for contributing the dataset and sharing the utility codes.

The fisheye->cylindrical code works well; however, is it possible to convert the cylindrical image back to a fisheye image?

I have tried to project from the cylindrical camera to the fisheye camera, but the conversion is very slow and the results are currently not good.

I will appreciate any ideas regarding this issue. Thanks.

01410_MVR
01445_FV

Is this dataset suitable for benchmarking visual slam?

Hi,

Thanks for your kindness in sharing the data. The paper says it contains several videos for benchmarking visual odometry/SLAM. However, I could not find any suitable video data in the current description. Am I missing something?

Deshun

Dataset Acquisition

Hello! I have followed your team's papers; it is excellent work! When will you release the WoodScape dataset? Hoping for your reply!

Dataset release

It will be Q1 2021 in a few days. When will the fisheye dataset be released? Thank you very much.

Explanation of vehicle_data

Thanks for the great work.

I couldn't find any explanation of vehicle_data (*.json). Could you provide the units of those values (e.g. ego_yawRate)?

semantic segmentation questions

Thank you for the released data.
In the WoodScape paper, you provide semantic segmentation results with ENet as the baseline. Do you train and test using only the 10 classes, consistent with the semantic labels you provide?
Also, the GitHub README says the remaining 1.8K test samples are held out for a benchmark. Will it be an online testing platform like Cityscapes? When will it be released?

Point cloud from fisheye depth image

Hi,

I found multiple "project_2d_to_3d" functions in scripts/calibration/projection. To convert the inferred fisheye depth image (like the image below, from your data) into a point cloud, exactly which camera model and function should be used? Thanks!
image

Dataset Release

Dear authors, has the dataset been released now? (2020-04-11)

Many thanks if you can provide a link to the dataset, as we are very interested in it for further research.

Asking about Camera Geometry Tensor

First of all, many thanks for your work.

I have investigated your implementation of OmniDet, but I couldn't find any usage of the camera geometry tensor Ct (cam_conv=False). Is there a reason for this?

Thank you once again!

dataset release

I'm very interested in your dataset. When will it be available? It was mentioned in this link that it would be released in Q4 2020. Will you release it on time?

some question in DirtyGAN

Hello! Here is a question about your latest paper, DirtyGAN.
In the paper, how does the mask generated by the VAE guide CycleGAN to generate the soiling corresponding to the mask?
Looking forward to your reply!

the usage of previous images

The README says "previous images needed for geometric tasks".
Can you give some examples of these geometric tasks?

How to apply this dataset to the SLAM?

Hi, I wonder how I can use the dataset for this, because there is no timestamp for each photo and the photos do not seem to be taken continuously in time order. Also, I cannot find the video for the LSD-SLAM experiment mentioned in your paper. Thank you.

Hi, for the single detection task the mAP is very low; what's wrong?

For the single detection task, the mAP is very low and does not reach the mAP of 63 mentioned in the paper. May I ask what is wrong?

The training settings: single detection task, train_file is train.txt, val_file is val.txt, input_height is 576, input_width is 1088, num_classes_detection is 5, batch_size 24, 20 epochs, learning_rate 0.0001, models_to_load: encoder and detection, pretrained_weights is res50, two 48 GB GPUs, frame_idxs is [0]; the other settings are unchanged.

The result is: mAP 0.199 at step 3000 in epoch 8, mAP per class in order [0.303, 0.265, 0.17, 0.105, 0.154]. The final loss is 1.26.

Looking forward to your reply, thanks!

Depth ground-truth missing

Hi guys,

Thanks for releasing the dataset. It's very useful for my research with fisheye cameras.

I'm trying to implement your FisheyeDistanceNet paper on depth estimation with the WoodScape dataset, but I couldn't find the fisheye images' ground-truth depth maps or sparse lidar ground truth in the current release.

Will you release the GT for depth soon?

Best,
Weiheng

Fisheye camera calibration method

Hi, thanks for the great work!
I have read the README and the calibration example, and I would like to produce the same calibration files/parameters as provided in the WoodScape dataset for my own fisheye images.
Could you provide more details on how the calibration is done and, if possible, links to the model and method used for the calibration? Thank you!

apply for dataset

It is now Q4 2020, so when will the fisheye dataset be released? Thank you.

A small question about the fisheye calibration method

First of all, I really appreciate your great contribution for providing this dataset to the community!

Question: What is the meaning of adding +0.5, -0.5 in scripts/calibration/projection.py?

self._principle_point = 0.5 * self._size + np.array([principle_point[0], principle_point[1]], dtype=float) - 0.5
cx_offset = property(lambda self: self._principle_point[0] - 0.5 * self._size[0] + 0.5)
cy_offset = property(lambda self: self._principle_point[1] - 0.5 * self._size[1] + 0.5)

How to convert bbox_2d_annotations and segmentation annotations from fisheye-style format to cylindrical format?

Hi:
I have converted the fisheye images to cylindrical images, and now I need to convert the annotations correspondingly. So I used this code:

world_points = fisheye_cam.project_2d_to_3d(fisheye_points, norm=np.ones(fisheye_points.shape[0]))
cyl_points = cylindrical_cam.project_3d_to_2d(world_points)

where fisheye_points are the 2D bounding-box coordinates on the raw fisheye images and cyl_points are the corresponding coordinates on the cylindrical images. But I got an unreasonable result when I visualized the annotations. Is there anything wrong with my code?
Hoping for a reply! Thank you!

dataset

Hello,
I am interested in using the dataset for my research. When will it be available for use?

Cannot download rgb_images.zip from google drive

Hi guys

Thanks for providing the dataset!
I had a problem when I tried to download rgb_images.zip from Google Drive.
image
I tried a few times and every time it failed with this forbidden error.
image

Best regards,
Xinchao

converting fisheye to rectilinear image

The code shared in projection.py gives a cylindrical image as output. I am trying to convert the fisheye image to a rectilinear image. Can someone guide me on how to do that?

Green strips

Hello,

Thank you very much for sharing this amazing dataset! I noticed that there are two green strips, one at the top and one at the bottom, in each RGB image. They are a few pixels wide and contain some differently colored pixels as well. In the instance annotations they are labeled as green_strip. Could you explain what the purpose of these strips is? Can I just crop them during training?
I’m looking forward to your answer.

Best,
Jan
