
ZoeDepth: Combining relative and metric depth (Official implementation)

Open in Colab | Open in Spaces

License: MIT | PyTorch | Papers With Code

[Paper]

[teaser figure]


Usage

It is recommended to fetch the latest MiDaS repo via torch hub before proceeding:

import torch

torch.hub.help("intel-isl/MiDaS", "DPT_BEiT_L_384", force_reload=True)  # Triggers fresh download of MiDaS repo

ZoeDepth models

Using torch hub

import torch

repo = "isl-org/ZoeDepth"
# Zoe_N
model_zoe_n = torch.hub.load(repo, "ZoeD_N", pretrained=True)

# Zoe_K
model_zoe_k = torch.hub.load(repo, "ZoeD_K", pretrained=True)

# Zoe_NK
model_zoe_nk = torch.hub.load(repo, "ZoeD_NK", pretrained=True)

Using local copy

Clone this repo:

git clone https://github.com/isl-org/ZoeDepth.git && cd ZoeDepth

Using local torch hub

You can point torch hub at a local copy of this repo to load the ZoeDepth models, for example:

import torch

# Zoe_N
model_zoe_n = torch.hub.load(".", "ZoeD_N", source="local", pretrained=True)

Or load the models manually:

from zoedepth.models.builder import build_model
from zoedepth.utils.config import get_config

# ZoeD_N
conf = get_config("zoedepth", "infer")
model_zoe_n = build_model(conf)

# ZoeD_K
conf = get_config("zoedepth", "infer", config_version="kitti")
model_zoe_k = build_model(conf)

# ZoeD_NK
conf = get_config("zoedepth_nk", "infer")
model_zoe_nk = build_model(conf)

Using ZoeD models to predict depth

##### sample prediction
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
zoe = model_zoe_n.to(DEVICE)


# Local file
from PIL import Image
image = Image.open("/path/to/image.jpg").convert("RGB")  # load
depth_numpy = zoe.infer_pil(image)  # as numpy

depth_pil = zoe.infer_pil(image, output_type="pil")  # as 16-bit PIL Image

depth_tensor = zoe.infer_pil(image, output_type="tensor")  # as torch tensor



# Tensor 
from zoedepth.utils.misc import pil_to_batched_tensor
X = pil_to_batched_tensor(image).to(DEVICE)
depth_tensor = zoe.infer(X)



# From URL
from zoedepth.utils.misc import get_image_from_url

# Example URL
URL = "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS4W8H_Nxk_rs3Vje_zj6mglPOH7bnPhQitBH8WkqjlqQVotdtDEG37BsnGofME3_u6lDk&usqp=CAU"


image = get_image_from_url(URL)  # fetch
depth = zoe.infer_pil(image)

# Save raw
from zoedepth.utils.misc import save_raw_16bit
fpath = "/path/to/output.png"
save_raw_16bit(depth, fpath)

# Colorize output
from zoedepth.utils.misc import colorize

colored = colorize(depth)

# save colored output
fpath_colored = "/path/to/output_colored.png"
Image.fromarray(colored).save(fpath_colored)

Environment setup

The project depends on:

  • pytorch (Main framework)
  • timm (Backbone helper for MiDaS)
  • pillow, matplotlib, scipy, h5py, opencv (utilities)

Install the environment using environment.yml:

Using mamba (fastest):

mamba env create -n zoe --file environment.yml
mamba activate zoe

Using conda:

conda env create -n zoe --file environment.yml
conda activate zoe

Sanity checks (Recommended)

Check if models can be loaded:

python sanity_hub.py

Try a demo prediction pipeline:

python sanity.py

This will save a file pred.png in the root folder, showing RGB and corresponding predicted depth side-by-side.

Model files

Models are defined under models/ folder, with models/<model_name>_<version>.py containing model definitions and models/config_<model_name>.json containing configuration.

The single metric-head models (Zoe_N and Zoe_K from the paper) share a common definition and live under models/zoedepth, whereas the multi-headed model (Zoe_NK) is defined under models/zoedepth_nk.

Evaluation

Download the required dataset and change the DATASETS_CONFIG dictionary in utils/config.py accordingly.
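For illustration only, a hypothetical entry for a custom dataset might look like the sketch below; the field names here are placeholders assumed for the example, so mirror an existing entry (e.g. "nyu" or "kitti") in utils/config.py for the actual schema.

# Hypothetical sketch of a DATASETS_CONFIG entry; the field names below are
# illustrative placeholders -- copy the structure of an existing entry instead.
DATASETS_CONFIG["my_dataset"] = {
    "dataset": "my_dataset",                 # name passed via the -d flag
    "data_path": "/path/to/my_dataset/rgb",  # RGB images
    "gt_path": "/path/to/my_dataset/depth",  # ground-truth depth maps
    "min_depth": 1e-3,                       # metric depth range in meters
    "max_depth": 10,
}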

Evaluating official models

On NYU-Depth-v2 for example:

For ZoeD_N:

python evaluate.py -m zoedepth -d nyu

For ZoeD_NK:

python evaluate.py -m zoedepth_nk -d nyu

Evaluating local checkpoint

python evaluate.py -m zoedepth --pretrained_resource="local::/path/to/local/ckpt.pt" -d nyu

Pretrained resources are prefixed with url:: to indicate that the weights should be fetched from a URL, or local:: to indicate that the path is a local file. Refer to models/model_io.py for details.
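For example, a hypothetical invocation that fetches weights from a URL (the URL below is a placeholder):

python evaluate.py -m zoedepth --pretrained_resource="url::https://example.com/path/to/ckpt.pt" -d nyu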

The dataset name should match the corresponding key in utils.config.DATASETS_CONFIG .

Training

Download the training datasets as per the instructions given here. Then, to train a single-head model on NYU-Depth-v2:

python train_mono.py -m zoedepth --pretrained_resource=""

For training the Zoe_NK model:

python train_mix.py -m zoedepth_nk --pretrained_resource=""
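To fine-tune from an existing checkpoint rather than train from scratch, the same local::/url:: syntax described under Evaluation should apply to --pretrained_resource; a hypothetical example (the path is a placeholder):

python train_mono.py -m zoedepth --pretrained_resource="local::/path/to/ckpt.pt"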

Gradio demo

We provide a UI demo built using Gradio. To get started, install the UI requirements:

pip install -r ui/ui_requirements.txt

Then launch the gradio UI:

python -m ui.app

The UI is also hosted on Hugging Face 🤗 here.

Citation

@misc{https://doi.org/10.48550/arxiv.2302.12288,
  doi       = {10.48550/ARXIV.2302.12288},
  url       = {https://arxiv.org/abs/2302.12288},
  author    = {Bhat, Shariq Farooq and Birkl, Reiner and Wofk, Diana and Wonka, Peter and Müller, Matthias},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth},
  publisher = {arXiv},
  year      = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Contributors

shariqfarooq123, thias15


zoedepth's Issues

Why is the performance lower than the figures in the paper?

How can I reproduce your reported performance on the metrics?
If I train your model on my local computer (with all the same config), I get worse performance than yours. Are there any factors that affect the performance?
I know it's probably my problem; can you give me some advice?

Beginner's dilemmas

Hello,
I'm just a bored guy who's been exploring AI possibilities and found this project. I'm trying to make it work in Colab but I get the error messages shown in the screenshots below. Is there maybe an easy fix for this? I appreciate any help.

[screenshots: ZoeDepth_quickstart.ipynb errors in Colaboratory]

Train on custom dataset

Hi,
I can't find a part of the documentation that covers training the model on a custom set of images with associated depth. Could you please point me to some docs?

Re-training ZoeDepth Model on Custom Indoor Dataset

Hello and thank you for open-sourcing the ZoeDepth project! It has been incredibly useful, and I truly appreciate the work put into it.

I am currently facing some difficulties re-training the ZoeDepth model on a custom indoor dataset that I have prepared. It would be great to have some documentation or guidance on how to accomplish this task, as I believe it would benefit other users as well.

Please provide any relevant documentation or instructions for re-training the ZoeDepth model on custom datasets. This would be very helpful for users working with unique data sources.
A step-by-step guide with this information on how to run training on a custom dataset would also be greatly appreciated.

    1. How to prepare the dataset (formatting, pre-processing, etc.) 
    2. Modifications needed to the model or training configurations
    3. How to initiate the training process with the custom dataset

I understand that creating such documentation might take some time, but any help you can provide would be highly valuable.

Thank you in advance for your assistance and support.

How to use self-built datasets for training

Regarding the training part of the README, I don't understand whether it is fine-tuning on NYU-Depth V2 or retraining from scratch without the 10 datasets for relative depth training and the 2 datasets for metric depth training. Sorry, I'm a novice; please forgive me if the question is a bit naive.

[request] Release of smaller models

Hi,

Is it possible to release the trained smaller models of ZoeDepth (like ZoeDepth (S2-B))? I intend to use a lightweight depth estimator for computer vision research. Thank you.

How to output depth maps similar to the TUM dataset

Hello, I am a novice and have recently been trying to output a depth map in the same format as the TUM dataset, without success. Which command should I call? Thank you very much!
Figure 1 shows the RGB image from the fri_xyz sequence, Figure 2 the corresponding depth image, and Figure 3 the output of the algorithm.

Please update documentation with more details on training the other midas backbones

I have been trying to train a new ZoeDepth_N model on the NYUv2 dataset with the more efficient DPT_SwinV2_L_384 MiDaS backbone for real-time performance. However, it is not clear from the current documentation how to properly set up the dataset and config parameters. I have no idea where to find "shortcuts/datasets/nyu_depth_v2/official_splits/test/". The train_mono script just prints the config params and does nothing else. What am I missing?

Speed compared to other algorithms

Hi ZoeDepth authors,

First of all, many thanks for your work.

I would like to ask how the inference speed compares to other algorithms.

We need to run it on a very large number of images, so speed matters a lot.

eval_mask utilized before declaration

Hi authors! Thank you so much for your work!

I'm trying to fine-tune your model on my own dataset, and I don't wish to use eigen_crop or garg_crop... My dataset is 'adjacent' to KITTI, so I'm just changing the KITTI configs in config.py. But I get an error that eval_mask is used before it is defined.

Looking through ZoeDepth/zoedepth/utils/misc.py in compute_metrics...

I wonder if the authors would consider adding an else clause to the if garg_crop or eigen_crop: at line 226?

if garg_crop or eigen_crop:
    # ... existing crop-specific eval_mask construction ...
else:
    eval_mask = np.ones(valid_mask.shape)

Quick question about training time and compute

Hello, thanks for your great work. I have a quick question about training.

I'm trying to run training and getting an OutOfMemoryError using a (single) 32 GB GPU (V100). What do you use for training? Also, with your compute setup, approximately how long does training take?

Thanks so much!

Different result between demo and local test

As shown below, I used the same input image but got different depth results; the main difference is that in the demo the sky has no depth. What is the reason?

  1. demo result: [image]
  2. local result using the NK pretrained model: [image]

can not access API in hugging face

Hello!
I'm currently trying to transform an image into a 3D glTF using your repo.
However, there seems to be an issue with the API page on Hugging Face: when I click the "Use via API" button, it just shows an empty page.

If there is any other way to view the API documentation, please let me know.
Thank you!

Training is extremely unstable

[screenshot of a failed training run]

I ran the training multiple times with the default parameters. Sometimes everything trains smoothly, but most of the time training fails at a very early stage and produces images like the one above.

A question about evaluation on SUNRGBD dataset

Greetings, I would like to request assistance with testing on the SUNRGBD dataset. It appears that the official SUNRGBD dataset does not provide a split between the training and testing sets. How did you conduct your testing?

Consistent depth scale? No normalization.

Thanks for sharing this great work!
I've noticed that the output depth map seems to leverage the full RGB range available for every single image it creates, which makes perfect sense when outputting a single image. However, I would like to try outputting multiple frames with a constant depth scale. I've noticed that in misc.py there are some "denormalize" and "normalize" sections. Can you provide any insight into what I might modify in order to get a consistent rendering of the depth scale across multiple frames?
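Not an official answer, but a minimal sketch of one way to get a frame-consistent visualization: clamp every frame to the same fixed metric range before mapping it to a colormap, instead of normalizing each frame to its own min/max. Everything here beyond infer_pil (the range limits, frame paths, and helper name) is a placeholder assumption.

# Sketch: frame-consistent depth visualization using a fixed metric range.
# Assumes `zoe` is a loaded ZoeDepth model as in the README; DMIN/DMAX and the
# frame paths are placeholder values.
import numpy as np
from matplotlib import cm
from PIL import Image

DMIN, DMAX = 0.1, 10.0  # fixed metric range in meters (placeholders)

def colorize_fixed(depth_m):
    """Map metric depth to RGB using the same scale for every frame."""
    norm = np.clip((depth_m - DMIN) / (DMAX - DMIN), 0.0, 1.0)
    return (cm.magma(norm)[..., :3] * 255).astype(np.uint8)

for i, frame_path in enumerate(["frame_000.jpg", "frame_001.jpg"]):
    depth = zoe.infer_pil(Image.open(frame_path).convert("RGB"))  # metric depth (numpy)
    Image.fromarray(colorize_fixed(depth)).save(f"depth_{i:03d}.png")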

model_zoe_k output off by a decimal point

Hello,

I have been testing model_zoe_k and model_zoe_nk to infer the distance to cars in a parking lot. I measured a distance of 40 meters with my tape measure. model_zoe_k gives me a distance of 3.32 meters to a point on a car, while model_zoe_nk gives me a distance of 28.98 meters to the same point. It seems like the zoe_k model is off by a factor of ten. Is this a configuration issue? The output depth map looks on point, but the numbers for the zoe_k model are not on the right scale. Can anyone confirm? The pretrained resources load as either ZoeD_M12_k.pt or ZoeD_M12_nk.pt.

Additionally, should I crop my input image before inference for better results? Should I adjust any ZoeDepth parameters for my camera's intrinsic matrix or position above the ground? Thanks.

Getting Metric Depth

How can I use my data to get the metric depth at a pixel level using the ZoeD model?
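For context (not an authoritative answer): the README's infer_pil call already returns a per-pixel metric depth map in meters as a NumPy array, so reading a single pixel is plain array indexing. The path and pixel coordinates below are placeholders, and (row, col) indexing is assumed.

# Sketch: reading metric depth at a single pixel. Assumes `zoe` is a loaded
# ZoeDepth model as in the README; the path and coordinates are placeholders.
from PIL import Image

image = Image.open("/path/to/image.jpg").convert("RGB")
depth = zoe.infer_pil(image)   # HxW NumPy array, depth in meters
y, x = 240, 320                # placeholder pixel (row, col)
print(f"Depth at pixel ({x}, {y}): {depth[y, x]:.2f} m")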

ModuleNotFoundError: No module named 'zoedepth'

After running (zoe) P:\Users\name\ZoeDepth>python sanity_hub.py I get a "No module named 'zoedepth'" error. I followed the steps in the documentation to install all dependencies, but I'm still facing this issue. Any assistance would be helpful, as I'm still learning coding in general.

SILog NaN for higher batch size

Hi, in my case I am training on multiple GPUs (4 GPUs) with a batch size of 8, and it works well.
But then I noticed that each GPU only utilizes about half of its capacity, so I tried increasing the batch size to 12.
However, the same error keeps reappearing: not OOM, but "SILog is NaN, Stopping training".
Do you know why this is happening, or has anyone encountered a similar problem?

small typo in readme code

In the README, a comma is missing after "local":

import torch

# Zoe_N
model_zoe_n = torch.hub.load(".", "ZoeD_N", source="local" pretrained=True)

should be

import torch

# Zoe_N
model_zoe_n = torch.hub.load(".", "ZoeD_N", source="local", pretrained=True)

prediction from evaluation worse than Gradio demo

Hi Author! Thank you for your great work done in the field of absolute depth estimation! I have learned a lot but came up with a question while running the code.

I used the NYU dataset + the ZoeD_N model for evaluation (evaluate.py), yet I got many holes in my prediction image (some parts showed up in white). When I used the same image in the Gradio demo, the result was much better; it didn't have any holes at all.

Now I am wondering how we users can get the same good results as the Gradio demo. Did I miss something?

use_amp: true causes NaN

Hello, when I set use_amp to true I am getting "SILog is NaN, Stopping training" after some batches:
Nan SILog loss input: torch.Size([651751]) target: torch.Size([651751]) G tensor(2284, device='cuda:0') Input min max tensor(nan, device='cuda:0', grad_fn=<MinBackward1>) tensor(nan, device='cuda:0', grad_fn=<MaxBackward1>) Target min max tensor(3.8516, device='cuda:0') tensor(226.5352, device='cuda:0') Dg tensor(True, device='cuda:0') loss tensor(True, device='cuda:0')

Do you have any idea how to fix this?

How to increase output depth map resolution?

Hi. Great work and thanks for sharing the Colab notebook! I noticed that in the Colab notebook the output depth map appears to be of relatively low resolution compared to the input image, which can result in some fuzziness. As such, I am curious to know whether there is a way to increase the output depth map resolution.

If it is not too much trouble, could you kindly point me to the relevant section of the notebook where I can make the necessary changes? I would be incredibly grateful for your help in this regard :)


Errors in loading state_dict for ZoeDepthNK

I want to fine-tune the head in two stages:
it was first trained on a Carla dataset (my custom data), and now I want to train it on NYU+KITTI (train_mix.py).

I use this command: python train_mix.py -m zoedepth_nk --pretrained_resource="local::Carla/ZoeDepthv1_13-Jul_04-44-e6e03405a1f8_best.pt"
but I get the error shown in the screenshot below.
[screenshot of the state_dict loading error]

Inaccurate depth estimation beyond 40m

Hello @thias15 @shariqfarooq123 , thank you for the great work!

I ran into two problems when running inference with the models on outdoor car scenes:

  1. According to your description, model_zoe_k should be the one to choose here. However, model_zoe_n and model_zoe_k gave results of around 1m ~ 7m, and only model_zoe_nk gave 7m ~ 65m, while the ground truth is 1m ~ 80m. The latter is barely satisfactory for car instances within 10 ~ 40m (<2m error); however, at close and far ranges the results seem far from reality, for example the car in front of the camera itself at 8.5m, and a distant car at gt = 65.8m with pred = 44.6m. The original RGB image can be downloaded here.
     [prediction image]
     Something also worth mentioning is that the sky has predicted depths almost the same as the ground, which can be observed easily in the picture. This only happens with model_zoe_nk in mode="eval".
     Do you have any insights on how to improve the metric predictions at close and far distances? Would further training on datasets help? (And what approach could be beneficial, given that it's already trained on 12 datasets...)

  2. As described similarly in issue #28, I tried both the default mode ('infer') and mode="eval", but got the same results. Could you provide a detailed example of the correct way to do it with torch.hub.load()?

Thank you for your time! :D

How to extract the values of the depth map?

I am using the Gradio UI to generate a depth map. Is there any way to extract the depth values themselves, or is the only possible output another grayscale image?

depth contour does not match the object

Thank you for your excellent work!
And I have a question.
As shown in the following images,
the depth contour does not match the RGB image; it extends beyond the outer edge of the objects (indicated by the green poly-line).
I don't know why this happens or how to fix the depth.
Thank you for your attention.
[RGB image]
[depth output]

HuggingFace space is not working

Runtime error
Traceback (most recent call last):
File "app.py", line 22, in
model = torch.hub.load('isl-org/ZoeDepth', "ZoeD_N", pretrained=True).to(DEVICE).eval()
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/hub.py", line 397, in load
repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/hub.py", line 185, in _get_cache_or_reload
_validate_not_a_forked_repo(repo_owner, repo_name, branch)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/hub.py", line 147, in _validate_not_a_forked_repo
response = json.loads(_read_url(Request(url, headers=headers)))
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/site-packages/torch/hub.py", line 130, in _read_url
with urlopen(url) as r:
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/home/user/.pyenv/versions/3.8.9/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: rate limit exceeded

About model size

Thank you for your great work!! I would like to know the parameter count and GFLOPs of the model; can you help me? Thank you very much!

About the datasets for metric depth estimation

Thanks for your great work!

I am curious why you use 10 datasets for relative depth training and only 2 datasets for metric depth training. Have you tried using more datasets for metric depth training? I wonder whether all 12 datasets could be used for metric depth estimation. Looking forward to your response!

Amount of video memory for training

Hello, could you please tell us the minimum amount of GPU memory required to train the model?
Which GPU models, and how many of them, did you use to train the network? I tried to run training on a GPU with 40 GB of memory and the code does not work.

Any suggestion about fine-tuning on another outdoor dataset

Thanks for your brilliant work.

I am trying to use the pre-trained model to fine-tune on another outdoor dataset. Should I use the pretrained model ZoeD_M12_K.pt, or a pretrained model that has not been fine-tuned on any dataset (which I don't think is released)? I have tried the former, but I don't see an obvious drop in the training loss.
[training loss plot]

Thank you in advance for any advice you can offer.

The testing split of DDAD dataset.

Hi, I notice that ZoeDepth reports depth evaluation results on the DDAD dataset, but I cannot find the DDAD test split file in the code. Could you please provide the split file or a link to it? Thanks so much!

Memory leak

Hi! I tried replacing KITTI with another outdoor dataset that is 3 times smaller. The code starts using an excessive amount of memory (from almost 6 GB up to 80 GB, at which point the OOM reaper kills the job) once the small dataset is exhausted and should be repeated by the "RepetitiveRoundRobinDataLoader". Any idea why that is happening?

custom dataset

Thank you for your great work! I have a few questions to consult with you:

  1. Whether I use ZoeD-K (more suitable for outdoor scenes) or ZoeD-N (more suitable for indoor scenes) to predict a KITTI scene, there are large holes in the mid-to-long-range areas of the road (when "Keep occlusion edges" is not selected). Does this mean that the predicted depth map has infinite depth, or a depth of 0, in that area? I also have this issue with my own dataset.

  2. From a 3D perspective, ZoeD-K has better absolute depth than ZoeD-N, but ZoeD-N has better details in many cases, and ZoeD-K sometimes estimates depth for the sky.

  3. I want to use my own dataset (outdoor scenes) for model fine-tuning (RGB images are 1920x1080 and depth images are collected and converted from a 16-beam LiDAR (VLP-16)). What is an appropriate size to crop the images to, and should I directly use the sparse LiDAR depth map for supervised training, or first complete the sparse depth map before supervised training? In addition, what other issues should I pay attention to?

Looking forward to your reply!
[screenshots]

The effectiveness of metric bins

From Table 5 we see that adding metric bins has positive effects. However, I wonder whether it works "as expected" rather than just happening to improve results.
In my opinion, the metric bins and the attractors serve to make the predicted depth distribution more and more precise as the resolution goes up. Also, the temperature serves as an uncertainty estimate, which should be large where the depth value is ambiguous.

However, the paper contains no such study, and in practice I don't observe these phenomena. The temperature is very large regardless of the pixel, and the bins are not concentrated either; they seem to be distributed quite randomly.

Can you provide some insight into how this works?
