deephar's Issues

Something wrong with eval_penn_multitask.py

Hi, when I run the eval_penn_multitask script, it gets stuck while evaluating 2D action recognition. Here is the output:

2020-12-13 09:48:41.461262: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-12-13 09:48:42.539611: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10

By the way, I ran it on Google Colab. Thanks in advance~

Question about action recognition on NTU

Hi,

Firstly, thank you for your project, which introduced me to the good idea of combining pose estimation and action recognition.
I have a question regarding action recognition on NTU.
I ran 'python3 exp/ntu/eval_ntu_ar_pe_merge.py' as you mentioned.
An error says "cannot import name 'ntu_ar_dataconf'". Where can I find it?

Thank you~

Error in 2D pose estimation

dear all,
I get an error when I run python exp/mpii/eval_mpii_singleperson.py output/eval-mpii in an Anaconda prompt. How can I fix it?

(deephar-master) D:\Binh\deephar-master> python exp/mpii/eval_mpii_singleperson.py output/eval-mpii
Initializing deephar v.0.4.1
Traceback (most recent call last):
File "exp/mpii/eval_mpii_singleperson.py", line 8, in
import deephar
File "D:\Binh\deephar-master\deephar_init_.py", line 16, in
keras_git = os.environ.get('HOME') + '/git/fchollet/keras'
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
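The failing line shows that os.environ.get('HOME') returned None: HOME is usually not defined on Windows. A minimal workaround, assuming you only need the import to succeed, is to define HOME before importing deephar:

```python
import os

# HOME is usually undefined on Windows, so os.environ.get('HOME') returns
# None and the concatenation in deephar/__init__.py raises this TypeError.
# Pointing HOME at the user profile directory works around it.
os.environ.setdefault('HOME', os.path.expanduser('~'))

import deephar  # noqa: E402
```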

eval_penn_ar_pe_merge.py error

Hi, @dluvizon thanks for your wonderful work!
When I run eval_penn_ar_pe_merge.py on Google Colab for action recognition, I get the error below:

Traceback (most recent call last):

File "exp/pennaction/eval_penn_ar_pe_merge.py", line 62, in
model.load_weights(weights_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 2211, in load_weights
hdf5_format.load_weights_from_hdf5_group(f, self.layers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 708, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py", line 3576, in batch_set_value
x.assign(np.asarray(value, dtype=dtype(x)))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 858, in assign
self._shape.assert_is_compatible_with(value_tensor.shape)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py", line 1134, in assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (32,) and (3, 3, 32, 32) are incompatible

Here is my environment:

keras == 2.4.3
tensorflow == 2.4.0

Thanks in advance!
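The shape mismatch suggests the checkpoint was written with a different layer ordering or Keras version than the model being built. As a debugging aid, the weight shapes stored in the checkpoint can be listed with h5py (a generic sketch; 'path/to/weights.hdf5' is a placeholder for the actual weights file):

```python
import h5py

# Walk the HDF5 checkpoint and print every stored weight with its shape,
# to spot where the (3, 3, 32, 32) kernel sits relative to the (32,)
# variable the model tried to load it into.
with h5py.File('path/to/weights.hdf5', 'r') as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape)
    f.visititems(show)
```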

Testing my own videos ?

Hi, I wanted to test it against my own set of videos, stored locally on my PC. Is it possible to do so?

Questions regarding Pennaction folder

Hi Luvizon,

Thanks for your amazing work on pose estimation and action recognition field. It inspires me to dig further into this field. Here are some of my questions:

  1. In the recently uploaded 'eval_penn_multitask.py', the input of the pose estimation model has size (8, 256, 256, 3) instead of (1, 256, 256, 3) as in the previous version. Why did you change that? Does that mean the model takes 8 frames at a time? Since I am working on a real-time implementation, this size is not very friendly for me XD.
  2. In the same folder exp/pennaction, does the file 'train_penn_multimodel.py' produce the weights for eval_penn_multitask.py? In 'train_penn_multimodel.py', how did you get the pre-trained weights "output/penn_multimodel_trial_15_only_mpii_pose_be215a3/weights_mpii+penn_ar_007.hdf5"?

Thank you for your fantastic work and help.

TypeError:

I run

python3 exp/mpii/train_mpii_singleperson.py

but it fails with:

==================================================================================================
Total params: 14,847,936
Trainable params: 14,669,952
Non-trainable params: 177,984


Traceback (most recent call last):
File "exp/mpii/train_mpii_singleperson.py", line 97, in
initial_epoch=0)
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2141, in fit_generator
callbacks.on_epoch_begin(epoch)
File "/usr/local/lib/python3.5/dist-packages/keras/callbacks.py", line 62, in on_epoch_begin
callback.on_epoch_begin(epoch, logs)
File "/usr/local/lib/python3.5/dist-packages/keras/callbacks.py", line 577, in on_epoch_begin
lr = self.schedule(epoch)
TypeError: lr_scheduler() missing 1 required positional argument: 'lr'

How do I fix it?
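For reference, older Keras versions call the LearningRateScheduler schedule with only the epoch argument, so a two-argument lr_scheduler(epoch, lr) raises exactly this TypeError. A minimal sketch of a compatible schedule (the decay milestone below is a placeholder, not the repository's setting):

```python
# Older keras.callbacks.LearningRateScheduler invokes schedule(epoch) with
# a single positional argument; giving `lr` a default value keeps the
# function compatible with both the old and the new calling convention.
def lr_scheduler(epoch, lr=0.001):
    if epoch >= 60:  # placeholder milestone
        lr *= 0.1
    return lr
```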

Action recognition

Very interesting work! When do you expect to release your action recognition code? I am very interested in learning more details about your joint human pose + action recognition end-to-end model :)

Clarifications for implementation

Hi @dluvizon,

Congrats on the nice work. I have been trying to reproduce your results and implemented the network by building on the code provided in your pose regression repo. I have a few questions/clarifications; it would be great if you could respond to the following:

  1. Can you share the exact details of which portion of the MPII dataset you used for training 2D action recognition?
  2. Can you share the parameters (mean/variance) you used to generate ground-truth heatmaps for the pose estimation network?

Thanks,

Run it in real time & action recognition

1- How can I run this code in real time using my webcam, as you did in the video?
2- Can you add a text label to each action indicating whether the person is walking, sitting, ...?
Thank you

question about displaying output of pose estimation, action recognition

Hello, I'm a student studying computer vision and deep learning.

I'm having trouble displaying the output.

I could get the log text of the eval program, but I want to draw the pose onto the image together with the action label...

I searched the issues but couldn't solve it...

I'm sorry about this stupid question.

thank you.

Visualization Issue

Hi! I'm trying to evaluate the various datasets using the provided eval scripts and the recommended pre-trained models, but I am having an issue with visualizing the outputs. When I try to use the draw function on the predictions, I get an entirely new plot. I have divided the predictions by their max value to bring them closer to the p_val scale, but this does not seem to be the transform used to go from ground-truth values to p_val. Can you let me know whether something needs to be done to the predictions before plotting them on the original image?
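In case the geometry is the issue: if the predictions are in normalized crop coordinates (values in [0, 1]), they must be mapped back through the crop's affine transform before being drawn on the original image, rather than rescaled by their maximum. A generic sketch of that mapping (afmat here stands for whatever affine matrix the eval script used for cropping; an assumption on my part, not this repository's API):

```python
import numpy as np

def pose_to_image_coords(pred, afmat):
    """Map (J, 2) pose predictions in normalized crop coordinates back to
    image pixels. afmat is the 3x3 affine matrix that mapped image pixels
    to the crop; its inverse takes crop coordinates back to the image."""
    pts = np.concatenate([pred, np.ones((len(pred), 1))], axis=1)  # homogeneous
    return (pts @ np.linalg.inv(afmat).T)[:, :2]
```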

Reproducing the results on NTU & Penn Action datasets

Hi Diogo,

I would like to run the evaluation scripts on the action recognition benchmarks, however it seems that I cannot find the required weights files:

  • deephar/releases/download/v0.3/weights_AR_merge_ep074_26-10-17.h5
  • deephar/releases/download/v0.4/weights_AR_merge_NTU_v2.h5

Did you maybe upload them to a different repository?

Kind regards,
Daniel

Error while training penn mpii multimodel

Hello,

Firstly, thank you for your work. I found the paper very interesting and exactly what I was looking for.

I want to train the model with my own dataset, but before that I tried to execute your training pipeline with python3 exp/pennaction/train_penn_multimodel.py output/train-penn

I am getting this error after 1 epoch:

Traceback (most recent call last):
  File "exp/pennaction/train_penn_multimodel.py", line 160, in <module>
    trainer.train(30, steps_per_epoch=steps_per_epoch, initial_epoch=2,
  File "/home/ubuntu/3dexperiments/deephar/deephar/trainer.py", line 216, in train
    end_of_epoch_callback(epoch)
  File "exp/pennaction/train_penn_multimodel.py", line 127, in end_of_epoch_callback
    mpii_callback.on_epoch_end(epoch)
  File "/home/ubuntu/3dexperiments/deephar/exp/common/mpii_tools.py", line 156, in on_epoch_end
    scores = eval_singleperson_pckh(model, self.fval, self.pval,
  File "/home/ubuntu/3dexperiments/deephar/exp/common/mpii_tools.py", line 67, in eval_singleperson_pckh
    input_shape = model.get_input_shape_at(0)
  File "/home/ubuntu/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/engine/base_layer.py", line 2057, in get_input_shape_at
    return self._get_node_attribute_at_index(node_index, 'input_shapes',
  File "/home/ubuntu/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/engine/base_layer.py", line 2683, in _get_node_attribute_at_index
    raise RuntimeError(f'The layer {self.name} has never been called '
RuntimeError: The layer Pose has never been called and thus has no defined {attr_name}.
Exception ignored in: <generator object OrderedEnqueuer.get at 0x7f71d0663ba0>
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/utils/data_utils.py", line 780, in get
AttributeError: 'NoneType' object has no attribute 'Empty'
Exception ignored in: <generator object OrderedEnqueuer.get at 0x7f71d0663cf0>
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tensorflow2_p38/lib/python3.8/site-packages/keras/utils/data_utils.py", line 780, in get
AttributeError: 'NoneType' object has no attribute 'Empty'

I spent quite a bit of time trying to figure it out but could not. I am not familiar with Keras, unfortunately. I would really appreciate any pointers.
Thank you.
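For anyone hitting the same thing: the failing call is model.get_input_shape_at(0) in exp/common/mpii_tools.py, which requires the model to have been called at least once. A possible fallback, assuming the model declares a static input shape (an untested sketch, not a verified fix):

```python
# get_input_shape_at(0) reads a call-node attribute and fails if the model
# has never been called; the declared input shape is available regardless.
try:
    input_shape = model.get_input_shape_at(0)
except RuntimeError:
    input_shape = model.input_shape  # static shape from the Input layer
```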

Cannot reproduce results in the NTU dataset

Hi,
Great work and thanks for sharing the code!

I've downloaded the current code and can reproduce the results using the 'eval_penn_ar_pe_merge' code. However, when I run the experiment on the NTU dataset ('eval_ntu_ar_merge_pe_merge' code), I often get wrong classifications (e.g., only 2 out of 50 classifications are correct). Can you verify whether the results on the NTU dataset can be reproduced with the given code?

Thanks,
duygu

Not Utilizing A Video

I noticed that the following sequence does not exist when creating the image dataset using your provided code: a02_s11_e02_c01.

It should be a simple fix of adding the following line to vid2jpeg.txt:
S11/Videos/Directions.54138969.mp4:a02_s11_e02_c01

Where can I get the additional annotation of the NTU RGB+D dataset?

Hello, Sir!

Firstly, thank you for your open source code!!

I'm now doing a research project on the NTU RGB+D dataset. In my previous study, I used the additional annotation provided in this project.

Since there is a new version of the NTU RGB+D dataset, namely "NTU RGB+D 120", I also want to use the additional annotation in experiments on "NTU RGB+D 120", but I just cannot find a download link on the official website of NTU RGB+D.

So, how can I generate this additional annotation from the raw provided data?

Thank you for your attention to this matter.

ModuleNotFoundError

Hi, I have experienced the following errors:

1- ModuleNotFoundError: No module named 'mpii_tools'

2-ModuleNotFoundError: No module named 'mpii_tools'

3- ModuleNotFoundError: No module named 'h36m_tools'
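These helper modules (mpii_tools, h36m_tools) live in exp/common, as the tracebacks elsewhere in these issues show. A hedged guess at a fix, assuming the eval scripts are run from the repository root:

```python
import os
import sys

# mpii_tools and h36m_tools are plain modules under exp/common, so that
# directory has to be on the Python path before they are imported.
sys.path.append(os.path.join(os.getcwd(), 'exp', 'common'))

import mpii_tools  # noqa: E402
```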

Human3.6M video S10 cannot be downloaded

Hi,
There is no download link for Human3.6M S10 on the official site.
Do you know somewhere else where I can get this file?
Or could you provide a download link for it?
Thank you!

No weights provided for the multitask models

Dear @dluvizon
I was able to reproduce your results with the merge models, but not with the two multitask models (eval_ntu_multitask.py and eval_penn_multitask.py), because I could not find the weights needed for those models ("weights_mpii+penn_ar_028.hdf5" and "weights_3dp+ntu_ar_030.hdf5"). I tried to load the weights from the releases, which were uploaded for the merge models in 2018, but the multitask models performed really badly with them. Could you please upload the weights for the multitask models?
Thank you!

Kinect V2 livetest on the 2018 merge model, pose estimation (eval_ntu_ar_pe_merge.py):
[image: pose_beineueberkreuzt]

Kinect V2 livetest on the new multi-task model (eval_ntu_multitask.py, using weights from the 2018 release); all joints are predicted to a point in the middle of the image:
[image: Action Recognition]

Kinect V2 livetest on the 2018 merge model, action recognition (eval_ntu_ar_pe_merge.py):
[image: taking a selfie]

I didn't upload a Kinect V2 livetest of action recognition on the new multi-task model because it just randomly chooses one class.

Training on my dataset

Hi,
Good work. I wanted to ask a couple of questions:

  1. Can I use this method for training on my own dataset?
  2. Will your dataset affect the quality of the model?

Thanks.

run.sh is killed

Dears,

When I run the script with "./run.sh", it always ends with "Killed", as shown below. Do you know how to get rid of this problem, please?

Thank you in advance!

Yi Huo

(base) yihuo@yihuo:~/Documents/deephar-master$ ./run.sh

~/Documents/deephar-master ~/Documents/deephar-master
fatal: not a git repository (or any of the parent directories): .git
Initializing deephar v0.5.0
CUDA_VISIBLE_DEVICES: 2
2023-04-30 20:27:21.740019: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-04-30 20:27:21.797162: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-04-30 20:27:22.088804: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-04-30 20:27:22.089272: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-30 20:27:22.808816: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Using keras version "2.12.0"
/home/yihuo/Documents/deephar-master/exp/mpii/train_mpii_singleperson.py:94: UserWarning: Model.fit_generator is deprecated and will be removed in a future version. Please use Model.fit, which supports generators.
model.fit_generator(data_tr,
2023-04-30 20:28:17.264850: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype int32
[[{{node Placeholder/_0}}]]
./run.sh: line 52: 104766 Killed python3 exp/mpii/train_mpii_singleperson.py output/mpii_singleperson_trial-00

Question about datasets and annotations

Hi! Thanks for your open source code!
I have one question about datasets and annotations:
I have downloaded the Human3.6M dataset, but I can't match it against your annotation (in /datasets/Human3.6M/annotation.mat).
Could you please list the expected data directory layout or explain the annotation file?

could the network be trained with only action video but no pose images/video?

Hi,

I am working on the following scenario:
The camera hangs from the ceiling and monitors a person who is working at a table with his hands. The software should recognize the actions of the person's hands; I would like to recognize/classify normal and abnormal actions.

Regarding what you mentioned in your readme.md, I have a few questions:

  1. You trained the network with MPII (2D pose), Human3.6M (3D pose) and NTU (action). However, I only have action videos shot from the ceiling, with neither 2D nor 3D poses. Could I train the network with my video/image sequences, and how?
  2. In your experience, could my actions be correctly recognized by a model trained on my videos? Roughly what accuracy should I expect?
  3. How many training images/videos are needed for my case?
  4. In your readme.md, you didn't write down your email address. :)

For reference, under "Multi-task model for 3D pose estimation and action recognition" your readme.md says:

This model was trained simultaneously on MPII (2D pose), Human3.6 (3D pose) and NTU (action), and the results can be replicated on NTU for action recognition by:

  python3 exp/ntu/eval_ntu_multitask.py output/eval-ntu


Thanks,
Ardeal

Example Video of system in action?

Hi, a few questions:

  1. Is the code released here complete yet, and what AI framework does it use?
  2. Could this be used to train a pose (e.g. a person with an arm raised to ask a question) and then have the recognizer work on multiple people in cam footage or video, with a count of the persons in view performing that action (arm raised)?
  3. Could you provide a video of the tool in action?
  4. Could the system also recognize multiple people in the same footage doing different actions?

Many thanks, J

TypeError: lin_interpolation_2d() got an unexpected keyword argument 'dim'

Hi ,
I'm trying to use the build_softargmax_2d block in my own network, but when compiling the model I get the following error:

---------------------------------------------------------------------------

TypeError Traceback (most recent call last)

in ()
----> 1 model = get_model()
2 model.summary()

1 frames

in get_model()
19 num_rows, num_cols, num_filters = K.int_shape(x)[1:]
20 sams_input_shape = (num_rows, num_cols, num_filters)
---> 21 sam_model = build_softargmax_2d(sams_input_shape, rho=0, name='sSAM')
22 sam_model.summary()
23 output = sam_model(x)

/content/deephar/deephar/models/blocks.py in build_softargmax_2d(input_shape, rho, name)
316 x = kl_divergence_regularizer(x, rho=rho)
317
--> 318 x_x = lin_interpolation_2d(x, dim=0)
319 x_y = lin_interpolation_2d(x, dim=1)
320 x = concatenate([x_x, x_y])

TypeError: lin_interpolation_2d() got an unexpected keyword argument 'dim'

I traced the error as follows:
blocks.py --> build_softargmax_2d(...) --> x_x = lin_interpolation_2d(x, dim=0) [line 318],
while in layers.py the function signature is lin_interpolation_2d(x, axis, vmin=0., vmax=1., name=None).
How can I fix this?
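Given the signature quoted above, a minimal fix (assuming layers.py is the authoritative side) is to rename the keyword in build_softargmax_2d:

```python
# In deephar/models/blocks.py, build_softargmax_2d: the helper in
# layers.py names the parameter `axis`, not `dim`.
x_x = lin_interpolation_2d(x, axis=0)
x_y = lin_interpolation_2d(x, axis=1)
x = concatenate([x_x, x_y])
```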

a quick demo for action recognition

Hi, thanks for your amazing work! I was wondering, is there a quick demo for action recognition, or how can I use the pre-trained model to predict an action from a series of images or a video?

Training on custom dataset

Hey,

I am trying to train the mpii+pennaction model on a custom dataset. I tried going through the code, specifically train_penn_multimodel.py, but I am not able to figure out the format the data should be in for training the action recognition model. Could you please help me with this?

Also, is it possible to change the number of classes in the data on which the action recognition model is trained? If yes, then how?

Thanks!

AttributeError in 2D pose estimation

Hi All,

While running python3 exp/mpii/eval_mpii_singleperson.py output/eval-mpii on Ubuntu, I get the error below.

(venv_project) xxx@xxx-HP-Pavilion-Laptop-15-cs1xxx:~/project_deep$ python3 exp/mpii/eval_mpii_singleperson.py output/eval-mpii
Initializing deephar v.0.4.1
CUDA_VISIBLE_DEVICES not defined
Using TensorFlow backend.
No module named 'mpl_toolkits'
Using keras version "2.1.4"
2019-02-24 18:23:54.237943: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-24 18:23:54.461427: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-24 18:23:54.463161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] Found device 0 with properties:
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:02:00.0
totalMemory: 3.95GiB freeMemory: 3.32GiB
2019-02-24 18:23:54.463201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1312] Adding visible gpu devices: 0
2019-02-24 18:23:54.912986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:993] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3044 MB memory) -> physical GPU (device: 0, name: GeForce MX150, pci bus id: 0000:02:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "exp/mpii/eval_mpii_singleperson.py", line 74, in
eval_singleperson_pckh(model, x_val, p_val[:,:,0:2], afmat_val, head_val)
File "/home/xxx/project_deep/exp/common/mpii_tools.py", line 86, in eval_singleperson_pckh
pred = model.predict(inputs, batch_size=batch_size, verbose=1)
File "/home/xxx/.pyenv/versions/venv_project/lib/python3.6/site-packages/keras/engine/training.py", line 1842, in predict
verbose=verbose, steps=steps)
File "/home/xxx/.pyenv/versions/venv_project/lib/python3.6/site-packages/keras/engine/training.py", line 1292, in _predict_loop
stateful_metrics=self.stateful_metric_names)
AttributeError: 'Model' object has no attribute 'stateful_metric_names'

How can I fix it?
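One thing worth checking: deephar/__init__.py appears to reference $HOME/git/fchollet/keras (see the traceback in the "Error in 2D pose estimation" issue above), so the Keras actually imported may not be the pip-installed 2.1.4. A quick check of which Keras is in use:

```python
import inspect

import keras

# Attribute errors like 'stateful_metric_names' typically come from mixing
# Keras versions; printing the import location shows which one is active.
print(keras.__version__, inspect.getfile(keras))
```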

Training hyperparameters for train_penn_multimodel.py

Hi, thank you very much for releasing your excellent code. I just don't quite understand why you set initial_epoch = 7 here:

trainer.train(30, steps_per_epoch=steps_per_epoch, initial_epoch=7,

As I understand it, the PE part is trained on MPII for 120 epochs, then the weights related to PE are frozen and the AR part is trained on Penn Action for 7 epochs, after which the model weights are saved as weights_mpii+penn_ar_007.hdf5.

Following that, the full model is built, the weights are loaded, and joint training starts from initial_epoch = 7:

full_model.load_weights(

Could you confirm whether I have understood this correctly? Thanks.
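For context, in Keras initial_epoch only offsets the epoch counter (which affects schedules, logs and checkpoint names); it does not skip or replay any data. A minimal illustration, unrelated to this repository's trainer:

```python
import numpy as np
from tensorflow import keras

# Toy model purely to illustrate epoch numbering with initial_epoch.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')

x = np.random.rand(32, 4)
y = np.random.rand(32, 1)

# Runs epochs 7..29 (23 passes over the data); schedules and checkpoint
# filenames continue counting from epoch 7.
model.fit(x, y, epochs=30, initial_epoch=7, verbose=0)
```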

Multiple skeletons

Hello,
I wanted to know whether it works with multiple skeletons, and whether it can visualise them in three dimensions based on the relative distances between the skeletons?

General question with respect to paper

Hello,

I read your paper; it is really interesting and good work.

How can we train the pose estimation and action networks? Can we train the pose and action networks separately?
Does action recognition take 2D pose, 3D pose, or both as input?

I ask because my training dataset contains only 3 actions, while evaluation has to be done on a dataset which has 16 actions.

My aim is 3D pose estimation and action recognition.

Information about training

Hi @dluvizon ,

First of all, congrats on this work. I have some questions/clarifications about this project.

  1. If I understand correctly, first you train the pose model, and afterwards the action recognition model. How much time / how many epochs does it take to obtain your results (or similar) when training only the action model on Penn Action, and how much for NTU?

  2. Reading your paper, it seems that to work well the pose model must be trained with at least some examples from the action dataset that will be used for action recognition. To give an example: for action recognition on Penn Action, the pose model is trained with a part of this dataset before the action model training, and the same for NTU. Is this observation right? Is it very important for the final performance on action recognition?

Thanks.
