aim-uofa / adelaidepth

1.0K stars, 36 watchers, 143 forks, 45.72 MB

This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape'. They aim to solve monocular depth estimation and 3D scene reconstruction from a single image.

License: Creative Commons Zero v1.0 Universal

Languages: Python 97.26%, Jupyter Notebook 1.65%, Shell 1.09%
Topics: 3d-scene-shape, depth-prediction

adelaidepth's People

Contributors

chhshen, cshen, eltociear, encounter1997, flareopti, guangkaixu, jeckinchen, sabraha2, tianzhi0549, yvanyin

adelaidepth's Issues

FileNotFoundError

Dear authors, where is 'Holopix50k/annotations/val_annotations.json'? I only get Holopix50k/annotations/all_annotations.json after running 'sh download_data.sh'.

Can this loss be used directly to calculate a disparity loss?

Hello, thank you for your amazing work!

I want to know whether these losses (PWNPlanesLoss, EdgeguidedNormalRegressionLoss, MSGIL_NORM_Loss) can be used directly to calculate a disparity loss, because my dataset contains disparity data rather than depth data.

Thank you for your time
Best Regards

PCM training code?

Hi,

I didn't see the PCM training code.
Could you please share it?

Thanks a lot.

PairWiseNormal vs VirtualNormal Loss

Hi,
thank you very much for your great work and the continuous support of this repository!
During your study, did you evaluate the performance of the PWN loss in comparison to the Virtual Normal Loss proposed by Yin et al. in DiverseDepth? The VNL seems to explore similar ideas to your PWN loss, but with only global rather than targeted point sampling.
What would be the advantages of PWN?

Regards

EDIT: Sorry, I found the matching paragraph in your paper!

Depth recovery issue

Thank you for your contribution. In the same scene, a change in camera pose causes the recovered scale parameter a and shift parameter b to change as well. Is it possible to recover more robust scale parameters for the same scene?
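For what it's worth, here is a minimal sketch of the scale/shift recovery the question refers to, in the spirit of recover_metric_depth: fit ref ≈ a * pred + b by least squares over valid pixels (the names are illustrative, not the repo's exact implementation).

import numpy as np

def recover_scale_shift(pred, ref, mask):
    # pred: predicted affine-invariant depth, ref: reference depth, mask: boolean array of valid pixels
    a, b = np.polyfit(pred[mask], ref[mask], deg=1)
    return a, b

Because (a, b) are re-fit per image, a change of viewpoint changes the visible depth distribution and therefore the recovered parameters; fitting them jointly over several frames of the same scene would be one way to make them more stable.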

Some questions about train data

Thanks for your great work. I have some questions about the training dataset:

  1. Which datasets did you use to train the pretrained model you provided? Were the DIML and 3D Ken Burns datasets mentioned in your paper used?
  2. In the datasets you provide on GitHub, I did not see DIML or 3D Ken Burns. Are they part of DiverseDepth, or are they simply not provided? If the latter, how should I prepare these two datasets?
  3. In your paper, the relative depth of Holopix50K is generated using FlowNet. How can I generate relative depth for my own stereo data?

360 point cloud animation video

Hi,

Thanks for the great work!

Is there a visualization script to generate the 360 point cloud animation video, or did you use a specific tool for this?

Thanks!

About intrinsics usage in mixed-dataset training

In the normal loss, you construct the predicted normal map from a point cloud (generated from the aligned depth). When datasets are mixed during training, there are several different intrinsics within a batch. How do you manage these intrinsics so that each is applied to the right depth map during training? And when mixing in web stereo datasets, which have no intrinsics, how do you construct the point cloud from that data?

thank you
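For what it's worth, a minimal sketch of per-sample intrinsics handling in a mixed batch, assuming each sample carries its own (fx, fy, u0, v0); the names and shapes here are illustrative, not the repo's API.

import torch

def backproject_batch(depth, intrinsics):
    # depth: (B, 1, H, W); intrinsics: (B, 4) holding fx, fy, u0, v0 for each sample
    b, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    u = u[None].float().to(depth.device)                       # (1, H, W)
    v = v[None].float().to(depth.device)
    fx, fy, u0, v0 = [intrinsics[:, i].view(b, 1, 1) for i in range(4)]
    z = depth[:, 0]                                            # (B, H, W)
    x = (u - u0) / fx * z
    y = (v - v0) / fy * z
    return torch.stack([x, y, z], dim=1)                       # (B, 3, H, W) point cloud

For web stereo data without intrinsics, one option is to assume a nominal camera (e.g. a fixed FoV) or to skip the normal-based losses for those samples; whether the repo does either is exactly the question above.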

Test monocular depth prediction error

Running
python ./tools/test_depth.py --load_ckpt res50.pth --backbone resnet50
or
python ./tools/test_depth.py --load_ckpt res101.pth --backbone resnext101

throws an error:

F:\Depth_estimation\AdelaiDepth-main\LeReS>python ./tools/test_depth.py --load_ckpt res50.pth --backbone resnet50
Traceback (most recent call last):
  File "./tools/test_depth.py", line 1, in <module>
    from lib.multi_depth_model_woauxi import RelDepthModel
ModuleNotFoundError: No module named 'lib.multi_depth_model_woauxi'

F:\Depth_estimation\AdelaiDepth-main\LeReS>python ./tools/test_depth.py --load_ckpt res101.pth --backbone resnext101
Traceback (most recent call last):
  File "./tools/test_depth.py", line 1, in <module>
    from lib.multi_depth_model_woauxi import RelDepthModel
ModuleNotFoundError: No module named 'lib.multi_depth_model_woauxi'
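A likely cause is that the LeReS directory is not on Python's import path when the script runs; a hedged workaround (assuming the 'lib' package sits directly under LeReS, as the repo layout suggests) is sketched below.

# Possible workaround (an assumption, not an official fix): make the LeReS
# directory importable before running the script, e.g. via PYTHONPATH or:
import sys
sys.path.insert(0, r"F:\Depth_estimation\AdelaiDepth-main\LeReS")

from lib.multi_depth_model_woauxi import RelDepthModel  # should now resolve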

Preprocessing Depth Data and Auxiliary Branch Architecture

Hello, thank you for your amazing work!

I am trying to reproduce your training procedure, and I have two questions regarding the preprocessing and the architecture of the lightweight branch.

If I understand correctly, you trained your network with two branches: one main branch for relative depth estimation and one auxiliary branch for disparity estimation, each with different losses depending on the dataset used.

  1. Preprocessing:

    • Did you apply any preprocessing to the relative depth sources taskonomy and 3dkenburns?
    • How did you normalize the disparity?
    • Did you specifically handle sky regions by using segmentation masks?
  2. Auxiliary Branch:

    • It seems to be a central part of the training procedure, but you only give general information in the appendix.
    • What architecture did you use for this branch? Does it only share the weights of the first 4 layers of the decoder and add a new last layer for the disparity estimation?

Thank you for your time and
Best Regards

Code Execution Result

The work is excellent, but the depth maps I got after executing the code according to the README are completely inconsistent with the results in the paper. Is the uploaded model different?

(attached images: 7-color, 7-depth, 6-color, 6-depth)

loss becomes nan

lib.utils.logging INFO: [Step 10470/182650] [Epoch 2/50] [multi]
loss: nan, time: 5.862533, eta: 11 days, 16:23:31
meanstd-tanh_auxiloss: nan, meanstd-tanh_loss: nan, msg_normal_loss: nan, pairwise-normal-regress-edge_loss: nan, pairwise-normal-regress-plane_loss: nan, ranking-edge_auxiloss: nan, ranking-edge_loss: nan, abs_rel: 0.211080, whdr: 0.087764,
group0_lr: 0.001000, group1_lr: 0.001000,
Hello, when I train with these four datasets (taskonomy, DiverseDepth, HRWSI, Holopix50k), the loss becomes nan. Did you encounter this problem during training? If so, how should it be solved? Thanks! Below are the arguments I passed in:
--backbone resnext101 \
--dataset_list taskonomy DiverseDepth HRWSI Holopix50k \
--batchsize 16 \
--base_lr 0.001 \
--use_tfboard \
--thread 8 \
--loss_mode ranking-edge_pairwise-normal-regress-edge_msgil-normal_meanstd-tanh_pairwise-normal-regress-plane_ranking-edge-auxi_meanstd-tanh-auxi \
--epoch 50 \
--lr_scheduler_multiepochs 10 25 40 \
--val_step 5000 \
--snapshot_iters 5000 \
--log_interval 10

RUNTIME Error: The size of tensor a (7) must match the size of tensor b (8) at non-singleton dimension 0

Hi, thanks for the great work.

I am trying to train the code on my machine but encountered the tensor size runtime error shown in the title above.

It seems some samples are filtered out by the invalid-depth threshold, while gt_mean and gt_std are still computed from the original gt, as shown in this code from INLR_loss.py:

        mask_maskbatch = mask[mask_batch]
        pred_maskbatch = pred[mask_batch]
        gt_maskbatch = gt[mask_batch]

        gt_mean, gt_std = self.transform(gt)
        gt_trans = (gt_maskbatch - gt_mean[:, None, None, None]) / (gt_std[:, None, None, None] + 1e-8)

Has anyone encountered this issue? Is it a bug, or have I made a mistake in my training setup?

Any suggestion will be appreciated, thank you in advance.
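For what it's worth, a self-contained sketch that reproduces the shape mismatch and shows how computing the statistics from the masked batch keeps the shapes consistent; this is an assumption about the intended behaviour, not a confirmed patch.

import torch

# Hypothetical reproduction: a batch of 8 samples where one is filtered out by
# the invalid-depth mask, while per-sample stats are still computed over all 8.
gt = torch.rand(8, 1, 4, 4)
mask_batch = torch.tensor([True] * 7 + [False])

gt_maskbatch = gt[mask_batch]                      # shape (7, 1, 4, 4)
gt_mean = gt.flatten(1).mean(dim=1)                # shape (8,)  <- source of the mismatch
# (gt_maskbatch - gt_mean[:, None, None, None]) raises:
#   The size of tensor a (7) must match the size of tensor b (8) at non-singleton dimension 0

# Computing the stats from the masked batch keeps everything consistent:
gt_mean_m = gt_maskbatch.flatten(1).mean(dim=1)    # shape (7,)
gt_std_m = gt_maskbatch.flatten(1).std(dim=1)      # shape (7,)
gt_trans = (gt_maskbatch - gt_mean_m[:, None, None, None]) / (gt_std_m[:, None, None, None] + 1e-8)
print(gt_trans.shape)                              # torch.Size([7, 1, 4, 4])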

Error in loading pretrained model to initialize training script

Hi,

I am trying to initialize the training script with the provided pretrained model. In the training script, the depth_model comes with auxi_modules (module.depth_model.auxi_modules) that are not in the pretrained model files.

I was wondering how I can initialize auxi_modules so that I can use the auxi losses in my fine tuning.

Also, where in the training code are "focal_keys" and "shift_keys" trained? These were also in the pretrained model, but their parameter counts do not match those of the missing auxi_modules.

Looking forward to your response. Thank you!

Best,
Mika

How to interpret the output f,d of the point cloud module?

Congrats on this nice work.

If I have ground-truth depth, I can estimate scale and shift parameters via least squares (recover_metric_depth) to align the estimate with the ground truth.
Now, if I do not have ground-truth depth, can I use your point cloud module to achieve the same?

Since the focal length f is a parameter of the backprojection, how would I use f to scale the depth (as with the scale from recover_metric_depth)?
x = u_u0 / f * depth
y = v_v0 / f * depth
z = depth
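To frame the question: a minimal sketch of how the two PCM outputs might enter the unprojection, assuming the predicted shift d is added to the affine-invariant depth and f only appears in the ray directions (names are illustrative, not the repo's API).

import numpy as np

def unproject(depth, d_shift, f, u0, v0):
    z = depth + d_shift                                  # shift-corrected depth (still up to scale)
    h, w = z.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (u - u0) / f * z
    y = (v - v0) / f * z
    return np.stack([x, y, z], axis=-1)                  # (H, W, 3) point cloud

In this view, f stretches the point cloud in x/y only and does not rescale z itself, so it cannot substitute for the metric scale that recover_metric_depth estimates from ground truth.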

I estimated the focal length for some images and compared it to the ground-truth focal length. However, the difference was quite large, and the estimated focal length seems to increase with the image size. Is there a correlation?

Thanks for helping me out with some insights.

train code

I would like to know when the training code will be released.

#@title Download results

/content/AdelaiDepth/LeReS/test_images

zip error: Nothing to do! (result.zip)

FileNotFoundError                         Traceback (most recent call last)
in
      3 get_ipython().run_line_magic('cd', '/content/AdelaiDepth/LeReS/test_images/')
      4 get_ipython().system('find outputs/ -name "*-depth_raw.png" | zip -r result.zip -@')
----> 5 files.download("result.zip")

/usr/local/lib/python3.7/dist-packages/google/colab/files.py in download(filename)
    207   if not _os.path.exists(filename):
    208     msg = 'Cannot find file: {}'.format(filename)
--> 209     raise FileNotFoundError(msg)  # pylint: disable=undefined-variable
    210
    211   comm_manager = _IPython.get_ipython().kernel.comm_manager
FileNotFoundError: Cannot find file: result.zip

Result not satisfactory

Hi Yin,
I just tested one of my images, but the result is less satisfactory compared to the results in your research paper. Would it be possible for you to take a look and investigate the cause? Thank you.
(attached: input image 33 and output 33-depth_raw)

About Visualization

Hi~
It's an exciting piece of work!
May I ask what tool you used to visualize the point clouds in your paper?
Thanks a lot!

Camera Position

Thank you for your amazing work!

Could you please tell me how to get the original camera position? I want to use the original viewpoint to observe the reconstructed model.

Test_shape failed with torchsparse 1.2

Good work! I managed to get the shape using Test_shape.py, but when using test_shape.py I faced this error:

root@PC:/home/adelai_ws/AdelaiDepth# python3 LeReS/tools/test_shape.py --load_ckpt res101.pth --backbone resnext101
No protocol specified
processing (0000)-th image... LeReS/test_images/5.jpg
Traceback (most recent call last):
  File "LeReS/tools/test_shape.py", line 123, in <module>
    shift, focal_length, depth_scaleinv = reconstruct3D_from_depth(rgb, pred_depth_ori,
  File "LeReS/tools/test_shape.py", line 76, in reconstruct3D_from_depth
    shift_1 = refine_shift(pred_depth_norm, shift_model, predicted_focal_1, cam_u0, cam_v0)
  File "/home/adelai_ws/AdelaiDepth/LeReS/lib/test_utils.py", line 124, in refine_shift
    shift = refine_shift_one_step(depth_wshift_tmp, model, focal, u0, v0)
  File "/home/adelai_ws/AdelaiDepth/LeReS/lib/test_utils.py", line 108, in refine_shift_one_step
    outputs = model(inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/trainer/adelai_ws/AdelaiDepth/LeReS/lib/spvcnn_classsification.py", line 148, in forward
    x2 = self.stage2(x1)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/trainer/adelai_ws/AdelaiDepth/LeReS/lib/spvcnn_classsification.py", line 23, in forward
    out = self.net(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torchsparse/nn/modules/conv.py", line 72, in forward
    return conv3d(inputs,
  File "/usr/local/lib/python3.8/dist-packages/torchsparse/nn/functional/conv.py", line 118, in conv3d
    idx_query = list(convert_neighbor_map_gpu(idx_query))
  File "/usr/local/lib/python3.8/dist-packages/torchsparse/nn/functional/convert_neighbor_map.py", line 9, in forward
    idx_batch, idx_point = torch.where(neighbor_map != -1)
ValueError: not enough values to unpack (expected 2, got 1)

I tried another version (1.4), but the API has changed. Any suggestions?
Thanks!

module 'torchsparse_backend' has no attribute 'hash_forward'

Hi!

I'm having a problem with an import from torchsparse (version 1.2.0, as pinned in the last commit):
module 'torchsparse_backend' has no attribute 'hash_forward'
I asked about it on the torchsparse repo, and they suggested updating to 1.4.0, but that version breaks a lot of other import statements.

Does anyone know the solution?

Thanks in advance!

Training loss

Hi,
I think your model is great. Can you provide the code for the three training losses (pair-wise normal loss, image-level normalized regression loss, and multi-scale gradient loss) used in your recent paper?

AttributeError: 'NoneType' object has no attribute 'lower'

Dear,
I downloaded your datasets (taskonomy, DiverseDepth, HRWSI, and Holopix50k) by running sh download_data.sh and ran your recently updated code. I can run sh train_demo.sh, but I cannot run sh train.sh. When I modify the dataset_list parameter in train.sh (keeping only HRWSI, as the others are not allowed), it runs for a while and then reports an error:

AttributeError: 'NoneType' object has no attribute 'lower'

I think some content is missing from the datasets. Can you check your dataset? Thanks!

Is there a different version of torchsparse that is used?

Installed the latest version of torchsparse (v1.4.0), and when running the 3D reconstruction command I get the following error:

from torchsparse.point_tensor import PointTensor
ModuleNotFoundError: No module named 'torchsparse.point_tensor'

I noticed that in the torchsparse repo, PointTensor is now found in torchsparse.tensor.

But I also received this error:
from torchsparse.utils.kernel_region import *
ModuleNotFoundError: No module named 'torchsparse.utils.kernel_region'

I could not find kernel_region at all and was wondering whether a different version was used?

Excellent work by the way! :)
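For the PointTensor part specifically, a hedged compatibility shim might look like this, assuming the class was merely relocated between torchsparse 1.2 and 1.4 (as noted above); the kernel_region import would still need the matching torchsparse version.

try:
    from torchsparse.point_tensor import PointTensor     # torchsparse <= 1.2
except ModuleNotFoundError:
    from torchsparse.tensor import PointTensor            # torchsparse >= 1.4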

Correct initial FoV?

# proposed focal length, FOV is 80'
proposed_scaled_focal = (rgb.shape[0] // 2 / np.tan((80/2.0)*np.pi/180))

Section 3.1 in your paper mentions an initial FoV of 60°, not 80°.
Figure 4 (right) also suggests that using an initial FoV of 80° would be sub-optimal.

Could you please clarify and/or update your code to be consistent with the paper? Thanks.
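For reference, the 60° initialisation described in Section 3.1 would presumably read as follows; whether this is the intended value is exactly the question above (an assumption, not a confirmed fix).

# hypothetical 60-degree variant of the line quoted above
proposed_scaled_focal = (rgb.shape[0] // 2 / np.tan((60/2.0)*np.pi/180))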

ModuleNotFoundError

I keep getting ModuleNotFoundError: No module named 'lib.multi_depth_model_woauxi' when I run python ./tools/test_depth.py --load_ckpt res50.pth --backbone resnet50. Why?

Speed of inference

Generating the depth map for a single image currently takes about 30 seconds on my local PC.
Is this normal? If not, could a version be made available that runs at around 20-30 fps (real time) with less accuracy?

Question about the multi-scale gradient loss

Thanks for the good work!
I have some questions about the multi-scale gradient loss used in the paper.
Here is the code from MegaDepth [24], but I think it is different from the grad_loss used in this method.

def GradientLoss(self, log_prediction_d, mask, log_gt):
        N = torch.sum(mask)
        log_d_diff = log_prediction_d - log_gt
        log_d_diff = torch.mul(log_d_diff, mask)

        v_gradient = torch.abs(log_d_diff[0:-2, :] - log_d_diff[2:, :])
        v_mask = torch.mul(mask[0:-2, :], mask[2:, :])
        v_gradient = torch.mul(v_gradient, v_mask)

        h_gradient = torch.abs(log_d_diff[:, 0:-2] - log_d_diff[:, 2:])
        h_mask = torch.mul(mask[:, 0:-2], mask[:, 2:])
        h_gradient = torch.mul(h_gradient, h_mask)

        gradient_loss = torch.sum(h_gradient) + torch.sum(v_gradient)
        gradient_loss = gradient_loss / N

        return gradient_loss

Should I use log depth here even though there is no constraint ensuring 'prediction_d' > 0?
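For what it's worth, a hedged sketch of a multi-scale variant of the loss above: the same masked gradient term evaluated on progressively downsampled copies and summed over scales. This mirrors common MSG implementations rather than the authors' exact code, and it stays agnostic about whether the inputs are log depths or normalized depths.

import torch

def multi_scale_gradient_loss(pred, gt, mask, scales=4):
    # pred, gt: (B, 1, H, W); mask: (B, 1, H, W) float tensor of valid pixels
    total = 0.0
    for s in range(scales):
        step = 2 ** s
        p = pred[:, :, ::step, ::step]
        g = gt[:, :, ::step, ::step]
        m = mask[:, :, ::step, ::step]
        diff = (p - g) * m
        v_grad = (diff[:, :, :-2, :] - diff[:, :, 2:, :]).abs() * m[:, :, :-2, :] * m[:, :, 2:, :]
        h_grad = (diff[:, :, :, :-2] - diff[:, :, :, 2:]).abs() * m[:, :, :, :-2] * m[:, :, :, 2:]
        total = total + (v_grad.sum() + h_grad.sum()) / m.sum().clamp(min=1)
    return total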

Error loading a custom-trained model or resuming training from a checkpoint

File "/mnt/disk/code/AdelaiDepth/LeReS/Train/lib/utils/net_tools.py", line 44, in load_ckpt
checkpoint_state_dict_noprefix = strip_prefix_if_present(checkpoint['model_state_dict'], "module.")
KeyError: 'model_state_dict'

This happens when resuming training from a checkpoint or loading a self-trained model.

A similar KeyError occurs when loading a demo-trained model in the Minist_test scripts (test_depth.py or test_shape.py).

To reproduce this error, just train the demo model (..Train/scripts/train_demo.py) and try loading it using the inference code in Minist_test.

Please let me know if you need more details.
Any help would be much appreciated.
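In case it helps with debugging, a hedged way to inspect what a self-trained checkpoint actually contains; the fallback handling below is an assumption (not the repo's loader), and the path is hypothetical.

import torch

checkpoint = torch.load("outputs/ckpt/epoch_last.pth", map_location="cpu")   # hypothetical path
print(list(checkpoint.keys()) if isinstance(checkpoint, dict) else type(checkpoint))

# If the file is not wrapped in {'model_state_dict': ...} like the released
# models, fall back to treating the whole object as the state dict:
state_dict = checkpoint.get("model_state_dict", checkpoint) if isinstance(checkpoint, dict) else checkpoint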

RuntimeError: stack expects a non-empty TensorList

Hi,
When I configure the environment as required, I can run sh train_demo.sh with the code that had not yet been updated, but running sh train.sh for a while reports an error: "AttributeError: 'NoneType' object has no attribute 'lower'".

After downloading the latest code and running the training code (sh train_demo.sh / sh train.sh), the error becomes: "RuntimeError: stack expects a non-empty TensorList". How can I solve this problem?

Question on regression loss

What is the difference between the image-level normalized regression loss and the scale-shift-invariant loss based on the median of a sample in MiDaS? And what is the benefit of your proposed loss compared to MiDaS's?
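To make the distinction concrete, a hedged side-by-side sketch of the two per-image normalizations; this is an illustration of the idea, not the authors' code.

import torch

def midas_style_norm(d):
    # d: (B, N) valid depths per image; normalize by median and mean absolute deviation
    med = d.median(dim=1, keepdim=True).values
    scale = (d - med).abs().mean(dim=1, keepdim=True) + 1e-8
    return (d - med) / scale

def ilnr_style_norm(d):
    # normalize by per-image mean and standard deviation (optionally after trimming outliers)
    mean = d.mean(dim=1, keepdim=True)
    std = d.std(dim=1, keepdim=True) + 1e-8
    return (d - mean) / std

Either normalization makes the regression invariant to a per-image scale and shift; they differ mainly in which statistics they use and how robust those statistics are to outliers, which is presumably where any benefit would come from.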

'NoneType' object is not subscriptable

processing (0000)-th image... ./test_images/.ipynb_checkpoints
Traceback (most recent call last):
  File "./tools/test_shape.py", line 111, in <module>
    rgb_c = rgb[:, :, ::-1].copy()
TypeError: 'NoneType' object is not subscriptable

Although I have uploaded images to the test_images folder, I am getting this 'NoneType' error.

I am facing the above error while executing the 'Test 3D reconstruction from a single image' step. Can anyone help me sort out this issue?

Thanks in advance.
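Judging from the log line above, the loop is picking up the '.ipynb_checkpoints' directory entry, for which cv2.imread returns None; a hedged guard might look like this (directory name and extensions are assumptions).

import os
import cv2

image_dir = "./test_images"
image_paths = [
    os.path.join(image_dir, f)
    for f in sorted(os.listdir(image_dir))
    if f.lower().endswith((".jpg", ".jpeg", ".png"))
]
for path in image_paths:
    rgb = cv2.imread(path)
    if rgb is None:                 # skip unreadable entries instead of crashing
        continue
    rgb_c = rgb[:, :, ::-1].copy()  # BGR -> RGB, as in test_shape.py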

Obtaining ground-truth focal length

Hello,

Upon reading your paper: to generate the training samples for the PCM, you perturb the focal length by some delta on the fly and then generate the point cloud. It seems that the DiverseDepth dataset does not have the focal length in its annotations. Is this information not yet released?

Thanks

Will you release the training code

Thank you for sharing the work,

Will you release the training code of 'Learning to Recover 3D Scene Shape from a Single Image', and if so, when?

Running on colab

loading checkpoint res50.pth
Traceback (most recent call last):
  File "./test_shape.py", line 96, in <module>
    load_ckpt(args, depth_model, shift_model, focal_model)
  File "/content/AdelaiDepth/LeReS/lib/net_tools.py", line 33, in load_ckpt
    checkpoint = torch.load(args.load_ckpt)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 755, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
loading checkpoint res101.pth
Traceback (most recent call last):
  File "./test_shape.py", line 96, in <module>
    load_ckpt(args, depth_model, shift_model, focal_model)
  File "/content/AdelaiDepth/LeReS/lib/net_tools.py", line 33, in load_ckpt
    checkpoint = torch.load(args.load_ckpt)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 585, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/site-packages/torch/serialization.py", line 755, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

I'm trying to convert this to run on Colab and get this error message.
What version of torch are you using?
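A common cause of "invalid load key, '<'" is that the downloaded file is actually an HTML page (which starts with '<') rather than the checkpoint itself, so the torch version may not be the issue; a quick, hedged sanity check:

# A valid PyTorch checkpoint is a zip or pickle payload and will not start with
# b'<'; if it does, the download most likely fetched an HTML redirect/error page.
with open("res101.pth", "rb") as f:
    head = f.read(16)
print(head)

If so, re-downloading the weights with a direct link would be the first thing to try.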
