
social-lstm's Introduction

Social LSTM implementation in PyTorch

Project details

Semester project for the Master of Computer Science at EPFL
Student: Baran Nama
Advisor: Alexandre Alahi
Presentation: https://drive.google.com/file/d/1biC23s1tbsyDETKKBW8PFXWYyyhNEAuI/view?usp=sharing

Implementation details

Baseline implementation: https://github.com/vvanirudh/social-lstm-pytorch
Paper: http://cvgl.stanford.edu/papers/CVPR16_Social_LSTM.pdf
Detailed info about the challenge and datasets: https://www.aicrowd.com/challenges/trajnet-a-trajectory-forecasting-challenge
Improvements made: please see the attached presentation

Documentation

  • generator.py: Python script for generating artificial datasets
  • helper.py: Python script containing various helper methods
  • hyperparameter.py: Python script for random search over hyperparameters to find the best parameters for a model
  • make_directories.sh: Bash script for creating the file structure
  • model.py: Python file containing the Social LSTM model definition
  • olstm_model.py: Python file containing the Occupancy LSTM model definition
  • olstm_train.py: Python script for training the Occupancy LSTM model
  • test.py: Python script for testing a model and producing the output txt file for submission
  • train.py: Python script for training the Social LSTM model
  • utils.py: Python script for handling and batching the train/test/validation input data
  • validation.py: Python script for externally evaluating a trained model by computing the validation error
  • visualize.py: Python script for visualizing predicted trajectories during train/test/validation sessions
  • vlstm_model.py: Python file containing the Vanilla LSTM model definition
  • vlstm_train.py: Python script for training the Vanilla LSTM model

How to deploy

  1. Fork the repository
  2. Start training a model: >>> python train.py / olstm_train.py / vlstm_train.py --[parameter set] (see the usage example after this list)
  3. If the necessary file structure does not exist (which is the initial situation), the training script will run make_directories.sh, which automatically creates the file structure
  4. Enjoy!
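For example, a minimal usage sketch (assuming the training scripts expose their parameters via argparse, so --help prints the full parameter set; no specific flags are assumed here):

    python train.py --help       # list all available parameters of the Social LSTM trainer
    python train.py              # train the Social LSTM model with default parameters
    python vlstm_train.py        # train the Vanilla LSTM model with default parameters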

Results

Model name      Average error  Final error  Mean error
Social LSTM     1.3865         2.098        0.675
Occupancy LSTM  2.1105         3.12         1.101
Vanilla LSTM    2.107          3.114        1.1

Reference: http://trajnet.stanford.edu/result.php?cid=1

social-lstm's People

Contributors

mirsking, quancore


social-lstm's Issues

error curve

Hi, the code works well, thanks. I have a question:

[Screenshot of the error curve attached]
Why is the error curve like this?

Error while running vlstm_train.py

Hello, I try to run vlstm_train.py on my computer (Win7 amd64, PyTorch 0.4.1, Python 3.6), and I get the error '[WinError 193] %1 is not a valid Win32 application' in _winapi.CreateProcess. I switched to a virtual environment with this configuration: Win7 32-bit, Python 3.6, PyTorch 0.4.1. But another error occurs, 'DLL load failed' in 'from torch._C import *'; it seems that PyTorch can't work on 32-bit Windows. Did you meet this problem?

TypeError: No loop matching the specified signature and casting was found for ufunc svd_n_f

Hi, thank you for sharing the code.
I am training the model with all the datasets except the Stanford dataset.
I am facing some errors while using it on Ubuntu:

File "/workspace/code/helper.py", line 87, in sample_gaussian_2d next_values = np.random.multivariate_normal(mean, cov, 1) File "mtrand.pyx", line 4521, in mtrand.RandomState.multivariate_normal File "/opt/conda/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1562, in svd u, s, vh = gufunc(a, signature=signature, extobj=extobj) TypeError: No loop matching the specified signature and casting was found for ufunc svd_n_f

Can someone help me fix it, please? And if possible, could someone share a pre-trained model for this network?
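A minimal workaround sketch (an assumption about the cause, not a confirmed fix): this ufunc error typically appears when np.random.multivariate_normal receives a mean/covariance whose dtype NumPy's internal SVD has no loop for (e.g. half-precision values coming from a tensor), so casting to float64 before sampling avoids it:

    import numpy as np

    # Cast the Gaussian parameters to float64 before calling multivariate_normal;
    # the internal SVD has no loop for some narrower dtypes, which triggers the error.
    mean = np.asarray([0.0, 0.0], dtype=np.float64)
    cov = np.asarray([[0.5, 0.1], [0.1, 0.5]], dtype=np.float64)

    next_values = np.random.multivariate_normal(mean, cov, 1)
    print(next_values.shape)  # (1, 2)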

Error

When I try to run train.py I'm getting an error like the following (screenshot attached):

Traceback (most recent call last):
  File "train.py", line 626, in <module>
    main()
  File "train.py", line 94, in main
    train(args)
  File "train.py", line 197, in train
    x, y, d, numPedsList, PedsList, target_ids = dataloader.next_batch()
  File "/media/sdb/vish_features/dominant_flow/manoj/social-lstm-master/utils.py", line 424, in next_batch
    target_ids.append(self.target_ids[self.dataset_pointer][math.floor((self.frame_pointer)/self.seq_length)])
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

Can somebody help, please?

Bug in visualize

Traceback (most recent call last):
  File "/home/zly/data/zhangyanbo/social-lstm-master/validation.py", line 775, in <module>
    main()
  File "/home/zly/data/zhangyanbo/social-lstm-master/validation.py", line 765, in main
    args.max_ped_ratio, results[i][5], [min_r, max_r, plot_offset], 20)
  File "/home/zly/data/zhangyanbo/social-lstm-master/validation.py", line 410, in plot_trajectories
    colors, name, frames, true_target_id_values, plot_directory, style, num_of_color)
  File "/home/zly/data/zhangyanbo/social-lstm-master/validation.py", line 482, in create_plot_animation
    ani.save(plot_directory + '/' + name + '.mp4')
  File "/home/zly/anaconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 1200, in save
    writer.grab_frame(**savefig_kwargs)
  File "/home/zly/anaconda3/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/zly/anaconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 241, in saving
    self.finish()
  File "/home/zly/anaconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 367, in finish
    self.cleanup()
  File "/home/zly/anaconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 405, in cleanup
    out, err = self._proc.communicate()
  File "/home/zly/anaconda3/lib/python3.6/subprocess.py", line 863, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/home/zly/anaconda3/lib/python3.6/subprocess.py", line 1525, in _communicate
    selector.register(self.stdout, selectors.EVENT_READ)
  File "/home/zly/anaconda3/lib/python3.6/selectors.py", line 351, in register
    key = super().register(fileobj, events, data)
  File "/home/zly/anaconda3/lib/python3.6/selectors.py", line 237, in register
    key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
  File "/home/zly/anaconda3/lib/python3.6/selectors.py", line 224, in _fileobj_lookup
    return _fileobj_to_fd(fileobj)
  File "/home/zly/anaconda3/lib/python3.6/selectors.py", line 39, in _fileobj_to_fd
    "{!r}".format(fileobj)) from None
ValueError: Invalid file object: <_io.BufferedReader name=9>

understanding output

Hello,
We set x and y as input to our model and get 5 outputs.
Could you please explain briefly what these five elements are?
Because you said we observe the trajectory for 3.2 seconds and predict the next 4.8 seconds.
Thanks.
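For reference, a sketch of the usual interpretation (based on the Social LSTM paper, not this repository's exact code): the five outputs per pedestrian parameterize a bivariate Gaussian over the next (x, y) position.

    import torch

    # Dummy output tensor of shape (seq_len, num_peds, 5); the indexing is illustrative.
    outputs = torch.randn(1, 3, 5)
    mux, muy, sx, sy, corr = outputs[0, 0]   # mean x, mean y, std x, std y, correlation (raw)
    sx, sy = torch.exp(sx), torch.exp(sy)    # standard deviations are kept positive via exp
    corr = torch.tanh(corr)                  # correlation is squashed into (-1, 1)
    print(mux.item(), muy.item(), sx.item(), sy.item(), corr.item())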

Is the training loss supposed to be negative?

Training epoch: 0 loss: 1.3787947255539994
Training epoch: 1 loss: -0.11686869635937157
Validation dataset epoch: 1 loss: 4.276391523665396 mean_err: tensor(1.0776)final_err: tensor(2.1968)
Training epoch: 2 loss: -0.44195691734777687
Validation dataset epoch: 2 loss: 2.9025856962555965 mean_err: tensor(1.0149)final_err: tensor(2.0672)
Training epoch: 3 loss: -0.6255790152767639
Validation dataset epoch: 3 loss: 2.818588351265624 mean_err: tensor(1.0031)final_err: tensor(2.0612)
Training epoch: 4 loss: -0.7745512770007141
Validation dataset epoch: 4 loss: 2.661308268945793 mean_err: tensor(1.0141)final_err: tensor(2.0460)
Training epoch: 5 loss: -0.8871525505268626
Validation dataset epoch: 5 loss: 1.9109164989585503 mean_err: tensor(0.9268)final_err: tensor(1.8676)
Training epoch: 6 loss: -0.9383222575299442
Validation dataset epoch: 6 loss: 1.9576881097950816 mean_err: tensor(0.9330)final_err: tensor(1.8477)
Training epoch: 7 loss: -1.0259218493572857
Validation dataset epoch: 7 loss: 1.6354501941723045 mean_err: tensor(0.9228)final_err: tensor(1.8636)

Question about grid

Please, could you explain what the parameter "is_occupancy" stands for in grid.py? With respect to the original implementation, I saw you implemented two different kinds of mask, depending on the "is_occupancy" value.
What does it represent? Why do the two masks have different shapes?

Thank you very much.

Different Length in Sequences

Hi, the original code runs successfully on my laptop.
In my own dataset, the sequence lengths for different IDs differ. In the biwi dataset, the sequence length is fixed at 20. Is there any way to adapt my dataset?

Error in calculation of grid

In your grid.py script, I believe the current_x and current_y coordinates are in the world coordinate system, in meters.
But width_bound and height_bound don't have a unit; they are just fractions (line 30).

So they should not be added or subtracted (lines 38 and 39); the result is meaningless.

Cannot run the project: it stops when it reaches subprocess.call([script_path])

Directory creation script is running...
Traceback (most recent call last):
  File "D:/predictionProject/social-lstm/train.py", line 626, in <module>
    main()
  File "D:/predictionProject/social-lstm/train.py", line 94, in main
    train(args)
  File "D:/predictionProject/social-lstm/train.py", line 111, in train
    subprocess.call([f_prefix+'/make_directories.sh'])
  File "C:\Users\user\anaconda3\envs\DeepLearning\lib\subprocess.py", line 340, in call
    with Popen(*popenargs, **kwargs) as p:
  File "C:\Users\user\anaconda3\envs\DeepLearning\lib\subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\user\anaconda3\envs\DeepLearning\lib\subprocess.py", line 1311, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
OSError: [WinError 193] %1 is not a valid Win32 application.
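A possible workaround sketch for Windows (an assumption, not the project's official fix): _winapi.CreateProcess cannot execute a .sh script directly, so the required directories can be created from Python instead. The directory names below are purely illustrative and should mirror whatever make_directories.sh actually creates.

    import os

    # Hypothetical top-level directories; replace with the ones listed in make_directories.sh.
    for d in ['model', 'plot', 'log']:
        os.makedirs(d, exist_ok=True)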

about the train loss

During training, I find the loss is sometimes less than 0. I guess the smaller the loss, the better (e.g., loss = -10 is better than loss = -5).
Am I right? @quancore

Question about the datasets in the train folder

Hi,
I want to use my own data in your model, and I have a question about the datasets in the train folder. I noticed that in the biwi train folder there are two files, biwi_hotel_0.txt and biwi_hotel.txt. Do they have any relationship? I found the same pattern for your other datasets as well, and I am confused. It would be really helpful for my project if you could answer. Please let me know if I have not stated my question clearly. Thank you.

pedestrian ID

Hi,
In the sample dataset, there are many pedestrians in a single frame and also in the whole sequence.
If you have a single pedestrian in a single video, the ped_ID would be 1 for all the frames in one sequence. So if you have multiple videos with very few pedestrians, should I assign a unique Ped_ID across all the pedestrians in the dataset, or a unique Ped_ID only within one video?
Also, when processing the data, does the variable "numPeds_data" store Ped_IDs or the number of pedestrians in that sequence?
Similarly, when extracting a batch from the processed data, does seq_numPedsList show the number of pedestrians in each frame or the Ped_ID of each pedestrian?

keyerror

I met this problem: when I tried to run train.py, it said "KeyError: 'train'", but I have no idea why this happened. Has anyone met the same problem? Thanks a lot.

test total_error/final_error

Excuse me, it seems that total_error/final_error in the test stage compute the error between the observed data and itself, because ret_x_seq has been assigned ret_x_seq[:args.obs_length, :, :] = x_seq.clone() in the sample function. May I request your answer? Thanks very much!

Record the mean and final displacement error

        total_error += get_mean_error(ret_x_seq[1:sample_args.obs_length].data, orig_x_seq[1:sample_args.obs_length].data, PedsList_seq[1:sample_args.obs_length], PedsList_seq[1:sample_args.obs_length], sample_args.use_cuda, lookup_seq)
        final_error += get_final_error(ret_x_seq[1:sample_args.obs_length].data, orig_x_seq[1:sample_args.obs_length].data, PedsList_seq[1:sample_args.obs_length], PedsList_seq[1:sample_args.obs_length], lookup_seq)

Training Error with different datasets

In utils.py, if only the file biwi_hotel.txt appears in the training set, only that file is used during training (2,900 data entries in total). If you change this file to another one, that is, comment out the file name and switch to the name of another txt file, all the datasets are trained on (44,000 data entries in total). But I still don't see why it trains on everything in that case, while only biwi_hotel.txt can be trained on individually.

Datasets used are different from the original ones: What preprocessings have been done to obtain them?

I have realized that the datasets used in your code are different from the original datasets I found from their sources.
To go from the original, raw datasets to ones you've used here, I can see that you have ordered data entries based on ped-id and frame-num, have deleted sequences shorter than 20 frames, and for sequences longer than 20 frames, you have deleted the excess frames.
However, the x-y position values in your datasets are different from the original ones. I suppose you have used some kind of preprocessing/transformation to obtain them but I couldn't find any code or explanation for this part. Therefore, I went ahead and applied the homography mapping to go from image to world coordinates just like done in: https://github.com/t2kasa/social_lstm_keras_tf. Which is to obtain image coordinates from world coordinates by using the inverse of the homography matrix and then, normalizing them by dividing the x-y position values by image_size.
However, the x-y position values still don't match the ones in your datasets.
Could you share the type of preprocessing and transformations you have applied to the raw-original datasets to get to what you use here?

Loss function computation

Hi, I have a question.
In the training process:

Forward prop

            outputs, _, _ = net(x_seq, grid_seq, hidden_states, cell_states, PedsList_seq,numPedsList_seq ,dataloader, lookup_seq)

Compute loss

            loss = Gaussian2DLikelihood(outputs, x_seq, PedsList_seq, lookup_seq)

Why is the loss computed using x_seq and outputs rather than outputs and y_seq?
Thanks

Versions of python and pytorch

Hi,
Would it be possible to share the versions of Python, PyTorch, NumPy, CUDA and other relevant packages used in this implementation?
I am getting errors when training a model, so I am wondering if this is related to a mismatch of package versions.

In the training stage, are the time positions of the input and output sequences the same?

I noticed that the loss computation in the training stage is called like this: 'loss = Gaussian2DLikelihood(outputs, x_seq, PedsList_seq, lookup_seq)'. The loss is calculated at the same time position of 'outputs' and 'x_seq'. This confused me, because for a prediction task the output at time t should correspond to the input at time t+1.
I also noticed in another implementation 'https://github.com/kabraxis/Social-LSTM-VehicleTrajectory', their loss computing function is called like this: 'loss = Gaussian2DLikelihood(outputs, nodes[1:], nodesPresent[1:], args.pred_length)'.
Which one is correct?

TypeError occurs when running train.py

Hi,
When I execute train.py, a TypeError occurs. It looks like this: "TypeError: indexing a tensor with an object of type list." The traceback refers to line 312 in train.py, "outputs, _, _ = net(x_seq, grid_seq,......)" (net = SocialModel(args)), and it also refers to some torch documentation (screenshot attached).
My environment is Python 3.6 and PyTorch 0.1.12 on Ubuntu 16.04. I want to know whether this problem is due to the version of PyTorch. If not, can you tell me an environment in which this project executes successfully? Thank you!

Loss Function Compute

In the definition of Gaussian2DLikelihood, you calculate the density function. When the result of the density function is > 1, result = -torch.log(torch.clamp(result, min=epsilon)) will be < 0, so the loss is < 0. I think a probability value should be between 0 and 1, and the cross entropy should be > 0. Is that right? Looking forward to your reply.
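A small numeric sketch of why this happens: the loss is a negative log-density rather than a negative log-probability, and a density can exceed 1, so the loss can legitimately be negative.

    import math

    # Peak density of a 2D Gaussian: 1 / (2*pi*sx*sy*sqrt(1 - rho^2)).
    # With small predicted standard deviations this exceeds 1, so -log(density) < 0.
    sx, sy, rho = 0.1, 0.1, 0.0
    peak_density = 1.0 / (2.0 * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2))
    print(peak_density)             # ~15.9  (> 1)
    print(-math.log(peak_density))  # ~-2.77 (a negative loss value)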

Why is the true sequence (20 frames) used as input during training?

In the article, from time Tobs+1 to Tpred, the authors use the predicted position from the previous Social LSTM cell in place of the true coordinates, but I can't find this in your training code; you just use the 20 true coordinates. The same problem appears in your test code.

Getting key error when training on external dataset

Using the same format as yours, I get errors when running train.py (default params) on an external dataset previously formatted in the way you suggest (dataset is attached):
07_tracks.txt

Creating pre-processed validation data from raw data
Now processing:  ./data/validation/highd/07_tracks.txt
Creating pre-processed training data from raw data
Now processing:  ./data/train/highd/07_tracks.txt
Loading train or test dataset:  ./data/train/trajectories_train.cpkl
Sequence size(frame) ------> 20
One batch size (frame)--->- 100
Training data from training dataset(name, # frame, #sequence)-->  07_tracks.txt : 40282 : 2014
Validation data from training dataset(name, # frame, #sequence)-->  07_tracks.txt : 0 : 0
Total number of training batches: 402
Total number of validation batches: 0
****************Training epoch beginning******************
0/12060 (epoch 0), train_loss = 18.527, time/batch = 0.945
1/12060 (epoch 0), train_loss = 6.513, time/batch = 0.877
Traceback (most recent call last):
  File "train.py", line 626, in <module>
    main()
  File "train.py", line 94, in main
    train(args)
  File "train.py", line 218, in train
    target_id_values = x_seq[0][lookup_seq[target_id], 0:2]
KeyError: 14

Inference on plots from visualize.py: Tobs & Tpred unknown

I'm trying to understand what the plots depict. There are 20 frames, at an interval of 10, shown at the top, with one pedestrian plotted as "target ped 24 pred." and "target ped 24 true", which should mean the predicted vs. ground-truth trajectory (the same for ped 111 and so on, in every plot).

But from which frame/time does it start to predict? That is, what is t=1 to Tobs, and Tobs+1 to Tpred? The paper says it observes for 8 frames and predicts the next 12. So which are these 8 and 12?

Kindly help. Thank you!

Here are the plots

sequence00003
sequence00028

Prediction not autoregressive?

During sampling of future trajectories, it seems like the ground truth is fed back into the network rather than the predicted point, so the network is not autoregressive. Is that correct? Below is the code line I am referring to.

out_, hidden_states, cell_states = net(x_seq[tstep].view(1, numx_seq, 2), [grid[tstep]], hidden_states, cell_states, [Pedlist[tstep]], [num_pedlist[tstep]], dataloader, look_up)

grid.py width_bound

In grid.py line 30 you put *2 after the original calculation. I don't really understand. Does that mean you want to enlarge the neighborhood area? Thanks!

loss computing in sample_validation_data

Hi,
ret_x_seq, loss = sample_validation_data(x_seq, PedsList_seq, grid_seq, args, net, lookup_seq, numPedsList_seq, dataloader)

loss = Gaussian2DLikelihood(out_[0].view(1, out_.size()[1], out_.size()[2]), x_seq[tstep].view(1, numx_seq, 2), [Pedlist[tstep]], look_up)

Why is x_seq[tstep] used to compute the loss, and not x_seq[tstep+1]?

The Issue Regarding Visualization of Trajectories

I would like to know whether, during the visualization process, the input is a txt file? I'm not sure if my assumption is correct. Also, I'm curious whether it's possible to input a video that I've recorded and output predicted trajectories, similar to the results presented in your paper.

Datasets process

I would like to ask how to process a dataset so that it can be used with Social LSTM.
For example, I have a sequence of images and also videos of hand movements; how should I process them?

error when running train.py

Thanks a lot for your code.
But when I run train.py, the following error occurs (screenshot attached).
Could you please tell me why?
Thanks a lot for your attention.

Results far off

After training the network in its default mode and subsequently running test.py with the default parameters, the output at the end is:

Best iteration has been changed. Previous best iteration:  0 Error:  31.76620076238882
New best iteration :  1 Error:  tensor(2.6017e-09)
Iteration: 1  Total training (observed part) mean error of the model is  tensor(2.6017e-09)
Iteration: 1 Total training (observed part) final error of the model is  tensor(5.7770e-09)
Smallest error iteration: 1

Both 31 and 2.6e-9 are far from the reported 1.3. How can that be?

PermissionError: [Errno 13]

Hi, when I run python olstm_train.py, I get the error "PermissionError: [Errno 13] Permission denied: './make_directories.sh'".
Maybe this is because I use my lab's server and, for safety's sake, I am not in the sudoers file.
But how can I fix this problem? Thank you.
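A sketch of a likely fix (an assumption about the cause): Errno 13 on a shell script usually means the script lacks the execute bit, which can be added without sudo.

    import os
    import stat

    # Equivalent of `chmod u+x make_directories.sh`; no root privileges required.
    script = './make_directories.sh'
    os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)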

Bug in test.py

Hello,

Can someone please take a look at these two lines in test.py?
https://github.com/quancore/social-lstm/blob/master/test.py#L202-L203

            total_error += get_mean_error(ret_x_seq[1:sample_args.obs_length].data, orig_x_seq[1:sample_args.obs_length].data, PedsList_seq[1:sample_args.obs_length], PedsList_seq[1:sample_args.obs_length], sample_args.use_cuda, lookup_seq)
            final_error += get_final_error(ret_x_seq[1:sample_args.obs_length].data, orig_x_seq[1:sample_args.obs_length].data, PedsList_seq[1:sample_args.obs_length], PedsList_seq[1:sample_args.obs_length], lookup_seq)

More specifically, why are the prediction errors computed on the 1:sample_args.obs_length range, which I think represents the observed data?

visualization

Hey,
I ran training, test, and validation successfully. But when I run visualize.py I get the following error:

Video creation for sequence00000 is starting...
MovieWriter ffmpeg unavailable; trying to use <class 'matplotlib.animation.PillowWriter'> instead.
Traceback (most recent call last):
  File "/home/user/anaconda3/lib/python3.7/site-packages/PIL/Image.py", line 2114, in save
    format = EXTENSION[ext]
KeyError: '.mp4'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "visualize.py", line 772, in <module>
    main()
  File "visualize.py", line 762, in main
    target_traj = plot_trajectories(results[i][0], results[i][1], results[i][2], results[i][3], results[i][4], name, figure_save_directory, args.min_traj, args.max_ped_ratio, results[i][5], [min_r, max_r, plot_offset], 20)
  File "visualize.py", line 413, in plot_trajectories
    create_plot_animation(plt, video_plot_trajs, processed_ped_index, target_id, real_inv_lookup, obs_length, markers, colors, name, frames, true_target_id_values, plot_directory, style, num_of_color)
  File "visualize.py", line 479, in create_plot_animation
    ani.save(plot_directory+'/'+name+'.mp4')
  File "/home/user/anaconda3/lib/python3.7/site-packages/matplotlib/animation.py", line 1156, in save
    writer.grab_frame(**savefig_kwargs)
  File "/home/user/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__
    next(self.gen)
  File "/home/user/anaconda3/lib/python3.7/site-packages/matplotlib/animation.py", line 232, in saving
    self.finish()
  File "/home/user/anaconda3/lib/python3.7/site-packages/matplotlib/animation.py", line 577, in finish
    duration=int(1000 / self.fps), loop=0)
  File "/home/user/anaconda3/lib/python3.7/site-packages/PIL/Image.py", line 2116, in save
    raise ValueError("unknown file extension: {}".format(ext))
ValueError: unknown file extension: .mp4

Can you please help me?
Thanks
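A short diagnostic sketch (an assumption about the cause): matplotlib fell back to PillowWriter because ffmpeg was not found, and Pillow cannot write .mp4 files; installing ffmpeg, or saving the animation as a .gif, are the usual ways out.

    from matplotlib import animation

    # If this prints False, ffmpeg is not on PATH and saving .mp4 animations will fail.
    print(animation.FFMpegWriter.isAvailable())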

Plots without prediction

After running the visualize script, a lot of plots are generated. However, in most of them no prediction trajectory is plotted, for example here.
sequence00008

What is the reason for that?
