
FaceForensics++: Learning to Detect Manipulated Facial Images


Overview

FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods: Deepfakes, Face2Face, FaceSwap and NeuralTextures. The data has been sourced from 977 YouTube videos, and all videos contain a trackable, mostly frontal face without occlusions, which enables automated tampering methods to generate realistic forgeries. As we provide binary masks, the data can be used for image and video classification as well as segmentation. In addition, we provide 1000 Deepfakes models to generate and augment new data.

For more information, please consult our updated paper.

Server Status

After a power outage, our EU servers are up again. Unfortunately, we still have some issues with the Canadian server (CA). Please use the EU hosts (EU, EU2) until we resolve the issue and remove this message.

What is new

  • FaceShifter: We are including the two-stage FaceShifter face swapping method, published at CVPR 2020. It is able to generate high-fidelity, identity-preserving face swap results and, in contrast to our previous methods, can deal with facial occlusions using a second synthesis stage consisting of a Heuristic Error Acknowledging Refinement Network (HEAR-Net). All 1000 original videos of the original YouTube-based dataset have been manipulated. Please check out the project page for more information, and see its dataset page for updated numbers as well as an example video. If you want to access the new data and have already applied for our download script, simply reuse the original download link to get the updated script. Otherwise, please fill out this Google form and, once accepted, we will send you the link to our download script.

  • Deep Fake Detection Dataset: We are hosting the Deep Fake Detection Dataset provided by Google & Jigsaw. The dataset contains over 3000 manipulated videos from 28 actors in various scenes. It has a similar file structure and is downloaded by default together with the regular dataset. See the dataset page for more information.

  • Neural Textures: We have included a fourth manipulation method that performs face manipulation using GANs and Neural Textures. All results have been updated to incorporate the new manipulation method, and we have updated the benchmark as well. We refer to the paper for more information. Unfortunately, we will not continue supporting the old benchmark after this update, though you can still submit your models to the new benchmark by creating a new submission.

Access

If you would like to download the FaceForensics++ dataset, please fill out this Google form and, once accepted, we will send you the link to our download script.

If you have not received a response within a week, it is likely that your email is bouncing - please check this before sending repeat requests.

Once you obtain the download link, please head to the download section. You can also find details about the generation of the dataset there.

We are offering an automated benchmark for facial manipulation detection in the presence of compression, based on our manipulation methods, that contains 1000 images. If you are interested in testing your approach on unseen data, check it out! For more information, please consult our paper. You can download the benchmark images here.

Original FaceForensics

You can view the original FaceForensics GitHub here. Anyone granted access to this dataset will also receive the download link to the original version of our dataset.

Citation

If you use the FaceForensics++ data or code please cite:

@inproceedings{roessler2019faceforensicspp,
	author = {Andreas R\"ossler and Davide Cozzolino and Luisa Verdoliva and Christian Riess and Justus Thies and Matthias Nie{\ss}ner},
	title = {Face{F}orensics++: Learning to Detect Manipulated Facial Images},
	booktitle = {International Conference on Computer Vision (ICCV)},
	year = {2019}
}

Help

If you have any questions, please contact us at [email protected].

Video

Please view our YouTube video here.


Changelog

15.07.2020: Added FaceShifter

23.09.2019: Added sample videos as well as the Deep Fake Detection Dataset

30.08.2019: Paper got accepted to ICCV 2019! Updated the download script to include NeuralTextures and changed instructions

06.04.2019: Updated sample and added benchmark

02.04.2019: Updated our arxiv paper, switched to google forms, release of dataset generation methods and added a classification sample

25.01.2019: Release of FaceForensics++

License

The data is released under the FaceForensics Terms of Use, and the code is released under the MIT license.

Copyright (c) 2019


Issues

How to extract images for training? First frame, random frame, or all frames?

Hello,
I am a bit confused and have some simple questions. For the training set, do you extract all frames of each video, or just the first frame, or a randomly chosen frame?
Also, after training is complete, how are the test results in the paper computed? Do you classify every frame of each test video and take the majority vote as the video-level prediction? Or do you use the first frame, a random frame, or all frames and report all classification results?
Thank you!
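
Not the authors' protocol, just a sketch of one common setup for this kind of evaluation: extract every frame with OpenCV and aggregate per-frame predictions into a video-level decision by majority vote. The classify_frame callable below is a hypothetical stand-in for a trained classifier.

import cv2

def extract_frames(video_path):
    # Read every frame of the video with OpenCV
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def predict_video(video_path, classify_frame):
    # classify_frame is a hypothetical stand-in returning 0 (real) or 1 (fake)
    preds = [classify_frame(f) for f in extract_frames(video_path)]
    return int(sum(preds) > len(preds) / 2)  # majority vote over all frames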

Problem with downloading the FaceForensics dataset

Hello,

Thank you for consenting to share your dataset.

I was just trying to use the download script sent to me to download your dataset. However, when trying to do so, I get the following message:
urllib.error.HTTPError: HTTP Error 404: Not Found

The problem is happening when downloading this file:
http://kaldir.vc.in.tum.de/FaceForensics/v3/manipulated_sequences/Face2Face/raw/videos/585_599.mp4

It seems to me that there might be a problem with the hosting server.

Is the hosting server OK?

Many thanks,

No such file or directory: '/home/ondyari/.torch/models/xception-b5690688.pth'

When I run [python detect_from_video.py -i video\123.mp4 -m network\models.py -o output] from the command line, this error pops up.
Please can someone help me fix this problem, thanks very much.
I have downloaded [xception-b5690688.pth], but I don't know where to put it.

File "C:\Users\chenha\Anaconda3\envs\eye\lib\site-packages\torch\serialization.py", line 366, in load
f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/home/ondyari/.torch/models/xception-b5690688.pth'
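
For readers hitting the same error: the path in the message is torch's model-zoo cache on the original author's machine. A minimal sketch of a workaround, assuming the weights are resolved through torch's model zoo (older PyTorch caches downloads under ~/.torch/models), is to copy the manually downloaded file into your own cache directory:

import os
import shutil

# Assumption: the script looks the weights up in torch's default cache dir
cache_dir = os.path.join(os.path.expanduser('~'), '.torch', 'models')
os.makedirs(cache_dir, exist_ok=True)
shutil.copy('xception-b5690688.pth', cache_dir)  # path to your downloaded file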

faceswap.py doesn't work

I'm using the dataset\FaceSwapKowalski\faceswap.py to recreate some face swapping results.
However, the "utils.getShapeTextureCoords" function doesn't exist in the original https://github.com/MarekKowalski/FaceSwap repo.
I think the following code needs to be added to FaceSwap/utils.py

# These modules ship with the MarekKowalski/FaceSwap repo
import models
import NonLinearLeastSquares

def getShapeTextureCoords(keypoints, mean3DShape, blendshapes, idxs2D, idxs3D):
    # Fit an orthographic projection + blendshape model to the 2D keypoints,
    # then refine the parameters with Gauss-Newton non-linear least squares
    projectionModel = models.OrthographicProjectionBlendshapes(blendshapes.shape[0])
    modelParams = projectionModel.getInitialParameters(mean3DShape[:, idxs3D], keypoints[:, idxs2D])
    modelParams = NonLinearLeastSquares.GaussNewton(modelParams, projectionModel.residual, projectionModel.jacobian, ([mean3DShape[:, idxs3D], blendshapes[:, :, idxs3D]], keypoints[:, idxs2D]), verbose=0)
    # Project the full 3D shape to obtain per-vertex texture coordinates
    textureCoords = projectionModel.fun([mean3DShape, blendshapes], modelParams)
    return textureCoords

Error thrown when installing requirements.txt

ERROR: Could not find a version that satisfies the requirement mkl-fft==1.0.10 (from -r requirements.txt (line 15)) (from versions: 1.0.0.17, 1.0.2, 1.0.6)
ERROR: No matching distribution found for mkl-fft==1.0.10 (from -r requirements.txt (line 15))

Error thrown when installing requirements.txt (Python 3.6.8).

Download issue

When I download the dataset, no matter which server node I use, there is almost no progress. How can I speed it up?

train model using my own videos

Excuse me, how can I train the model using my own videos?
For testing, I think I can run the detect_from_video.py Python file, do you think that is right?
Thanks!!

Accuracy of provided trained models does not match reported results in the paper

Hello,

Thanks for sharing your code, results and datasets.

I was wondering whether the provided trained models were trained on the fake videos generated by the NeuralTextures method.

I've downloaded the high-quality compression version of your dataset (c23) and was running some tests on it using the model under face_detection/xception/all_c23.p. The detection accuracy on DF, F2F and FS is quite high. However, the same model mostly fails to detect fake frames coming from the NeuralTextures method, successfully catching only a few fake frames in a whole video!

The reported accuracy of this model in your paper is 92.19 on the NeuralTextures method!
Do you have any idea about this, please?

Thanks,

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

Traceback (most recent call last):
File "detect_from_video.py", line 239, in
test_full_image_network(**vars(args))
File "detect_from_video.py", line 187, in test_full_image_network
cuda=cuda)
File "detect_from_video.py", line 95, in predict_with_model
output = model(preprocessed_image)
File "F:\ANACONDA\lib\site-packages\torch\nn\modules\module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "D:\Panicwq\FaceForensics-master\classification\network\models.py", line 113, in forward
x = self.model(x)
File "F:\ANACONDA\lib\site-packages\torch\nn\modules\module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "F:\ANACONDA\lib\site-packages\pretrainedmodels\models\xception.py", line 210, in forward
x = self.features(input)
File "F:\ANACONDA\lib\site-packages\pretrainedmodels\models\xception.py", line 172, in features
x = self.conv1(input)
File "F:\ANACONDA\lib\site-packages\torch\nn\modules\module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "F:\ANACONDA\lib\site-packages\torch\nn\modules\conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'

How to solve this problem? Thanks
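
The usual cause of this error is a model left on the CPU while the input tensor is on CUDA (or the other way round). A minimal, self-contained sketch of the fix, using stand-in objects rather than the repo's actual model:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The Conv2d and random tensor are stand-ins for the repo's model and
# preprocessed frame; the point is that both must live on the same device.
model = nn.Conv2d(3, 8, kernel_size=3).to(device)
preprocessed_image = torch.randn(1, 3, 299, 299).to(device)
output = model(preprocessed_image)  # no CPU/CUDA backend mismatch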

mask video is much shorter than manipulated video.

This video is 47 sec.
manipulated_sequences/DeepFakeDetection/c40/videos/27_13__walking_and_outside_surprised__A1OSUJE9.mp4

But mask video is only 1 sec.
manipulated_sequences/DeepFakeDetection/masks/videos/27_13__walking_and_outside_surprised__A1OSUJE9.mp4

invalid enumerant error in glReadPixels

I followed the instructions to run /FaceSwapKowalski/faceswap.py, but it crashed at glReadPixels like this. I have been searching for a fix for a while but made no progress.

python faceswap.py -i1 my_path1 -i2 my_path2

N/A% (0 of 2157) | | Elapsed Time: 0:00:00 ETA: --:--:--Traceback (most recent call last):
File "faceswap.py", line 169, in
renderedImg = renderer.render(shape3D)
File "/home/jinzehui/Deepfake/related_work/FaceForensics/FaceForensics/dataset/FaceSwapKowalski/FaceSwap/FaceRendering.py", line 64, in render
data = glReadPixels(0, 0, self.w, self.h, GL_BGR, GL_UNSIGNED_BYTE)
File "/home/jinzehui/.local/lib/python2.7/site-packages/OpenGL/GL/images.py", line 371, in glReadPixels
imageData
File "/home/jinzehui/.local/lib/python2.7/site-packages/OpenGL/platform/baseplatform.py", line 402, in call
return self( *args, **named )
File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
err = 1280,
description = 'invalid enumerant',
baseOperation = glReadPixels,
cArguments = (
0,
0,
640,
368,
GL_BGR,
GL_UNSIGNED_BYTE,
array([[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[...,
)
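
One guess, not verified against this repo: some OpenGL drivers reject GL_BGR as a glReadPixels format. A sketch of a possible workaround is to read GL_RGB, which every driver supports, and reverse the channel order in NumPy (this only works inside a current GL context):

import numpy as np
from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

def read_pixels_bgr(w, h):
    # Read RGB from the framebuffer, then flip the last axis to get BGR
    data = glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE)
    img = np.frombuffer(data, dtype=np.uint8).reshape(h, w, 3)
    return img[:, :, ::-1]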

Problem with image preprocessing

Thank you for making this great project open source. It helped me a lot.

But I have a question about transform.py in the dataset: why is the image resized to 299 here? Does the number 299 have a special meaning?

'train': transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3)
]),
'val': transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3)
]),
'test': transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3)
]),
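
For context, 299x299 is the input resolution the Xception backbone was trained at, which is presumably why the pipeline resizes to it. A short usage sketch of the 'test' pipeline on a single frame (the frame path is hypothetical):

from PIL import Image
from torchvision import transforms

test_tf = transforms.Compose([
    transforms.Resize((299, 299)),              # Xception's expected input size
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3)  # maps pixels to [-1, 1]
])

img = Image.open('frame.png').convert('RGB')  # hypothetical frame path
x = test_tf(img).unsqueeze(0)                 # shape: [1, 3, 299, 299]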

Can't find c23/c40 data in original FaceForensics dataset

When I download the original FaceForensics dataset, there are only two types (raw, compressed). But the paper says it has three compression types (raw, c23, c40). I'm not sure whether the downloaded 'compressed' data refers to c23 or c40, and how can I get both of them?

[RESOLVED--I was using old script] download-FaceForensics.py issue

When I run "download-FaceForensics.py -h", I see the following DATASET_TYPE parameters:
Enter which dataset you want to download: "raw",
"compressed", "selfreenactment_raw",
"selfreenactment_compressed", "original_videos"
"source_to_target_images" or "selfreenactment_images".

How can I download the DeepFake and Face2Face videos?

subprocess.CalledProcessError

Dear all,
I want to generate the Deepfakes models with deepfakes.py.
I run deepfakes.py as:
python deepfakes.py -m generate_models -i /home/wangxian/face_foren_input -o /home/wangxian/face_foren_output --python_path /home/wangxian/t_heads
it turns out as:
Traceback (most recent call last):
File "deepfakes.py", line 289, in
generate_models(**vars(args))
File "deepfakes.py", line 137, in generate_models
'{}_alignment.txt'.format(apath)))
File "deepfakes.py", line 44, in convert_frames_to_data
shell=True, stderr=subprocess.STDOUT
File "/usr/local/lib/python3.7/subprocess.py", line 376, in check_output
**kwargs).stdout
File "/usr/local/lib/python3.7/subprocess.py", line 468, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'CUDA_VISIBLE_DEVICES=1 /home/wangxian/t_heads /home/wangxian/faceforensics/dataset/DeepFakes/faceswap-master/faceswap.py extract -i /home/wangxian/face_foren_output/636_756/636 -o /home/wangxian/face_foren_output/636_756/636_faces --alignments /home/wangxian/face_foren_output/636_756/636_alignment.txt' returned non-zero exit status 126.

I do not know how I should set the CUDA_VISIBLE_DEVICES part. I tried changing --gpu from 0 to 3, still the same result.

Does anyone know how to fix it?

About the data

Hi, I filled in the form and sent you emails with my signed Terms of Service agreement one week ago. However, I haven't received any response. Would you please check your emails and respond to me with the download link? Thank you very much.

About the weight format

I would like to ask whether the test model must be in .p format. For example, the Meso model you provide is a weight file in .h5 format, but I got an error when I tested it.

If I can only test weights in the .p format, how do I convert the format?
Thanks a lot!

Detecting fake/real image - SourceChangeWarning

When I try to detect the video using detect_from_video.py - !python classification/detect_from_video.py -i video/trumpvideo.mp4 -m classification/faceforensics++_models/full/xception/full_c23.p -o output_path,
it gives me this error - Starting: video/trumpvideo.mp4
/usr/local/lib/python3.6/dist-packages/torch/serialization.py:453: SourceChangeWarning: source code of class 'network.models.TransferModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
... (similar SourceChangeWarning messages follow for pretrainedmodels.models.xception.Xception, torch.nn.modules.conv.Conv2d, torch.nn.modules.batchnorm.BatchNorm2d, torch.nn.modules.activation.ReLU, pretrainedmodels.models.xception.Block, torch.nn.modules.container.Sequential, pretrainedmodels.models.xception.SeparableConv2d, torch.nn.modules.pooling.MaxPool2d and torch.nn.modules.linear.Linear) ...
Model found in classification/faceforensics++_models/full/xception/full_c23.p
0% 0/595 [00:00<?, ?it/s]Traceback (most recent call last):
File "classification/detect_from_video.py", line 234, in
test_full_image_network(**vars(args))
File "classification/detect_from_video.py", line 187, in test_full_image_network
cuda=cuda)
File "classification/detect_from_video.py", line 95, in predict_with_model
output = model(preprocessed_image)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/content/drive/My Drive/Colab Notebooks/faceswap/classification/network/models.py", line 114, in forward
x = self.model(x)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pretrainedmodels/models/xception.py", line 210, in forward
x = self.features(input)
File "/usr/local/lib/python3.6/dist-packages/pretrainedmodels/models/xception.py", line 172, in features
x = self.conv1(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 343, in forward
return self.conv2d_forward(input, self.weight)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
0% 1/595 [00:00<04:49, 2.05it/s]

cannot connect to X server Segmentation fault (core dumped)

I am trying to replicate Xception model predictions on the FaceForensics dataset with the command

python detect_from_video.py -i /scratch/data/original_sequences/youtube/c23/videos/004.mp4 -m /scratch/models/full/xception/full_c23.p -o ~/scratch/out --cuda

This produces

Starting: /scratch/data/original_sequences/youtube/c23/videos/004.mp4
/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'network.models.TransferModel' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)

...

/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
  warnings.warn(msg, SourceChangeWarning)
Model found in /scratch/models/full/xception/full_c23.p
  0%|                                                                                                                                                                             | 0/309 [00:00<?, ?it/s]: cannot connect to X server 
Segmentation fault (core dumped)

Which I can't make heads or tails of.

Is there a faster way to download the data?

I have downloaded some data using the script provided. It is very convenient, but the speed is too slow (~100 kB/s) and it may take three or more weeks. Is there a faster way to download the data (e.g. Google Drive)? Many thanks!

The result on custom image is not good

Hello @ondyari
I tested on another source and target image; as you can see, the face alignment of the person is almost the same in both photos, and the masking of source_img is perfect.

Please let me know if there is any solution to resolve this issue.

What is the meaning of "random model"?

When I run detect_from_video.py on my own set of videos, it says 'No model found, initializing random model.' This is because model_path is not set. But the model is initialized from the xception-b5690688.pth file, which I have downloaded and hope is being used properly. So where is the randomization? I have also added model.eval(). And as far as I can see, the results are actually random: repetitions of the same test give different results.
Could anybody explain what the "random model" is and how to avoid it?
Thank you.
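
A sketch of what loading a released model might look like, assuming (as the script's "Model found in ..." log line suggests) that the .p files are fully pickled models readable with torch.load; passing such a file via -m should avoid the random-initialization path:

import torch

# Assumption: the released .p files are pickled whole models, so torch.load
# returns the model object directly. Without -m, the classifier head is
# randomly initialized, which would explain the varying results.
model = torch.load('full_c23.p', map_location='cpu')  # path to a released model
model.eval()  # freeze dropout/batch-norm behaviour for deterministic output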

Downloading Original Youtube Video Script

Hi, I want to download the original YouTube videos to extract audio. Would it be possible for you to provide the YouTube downloading script for the JSON files you provide in downloaded_videos_info? (Or any YouTube downloading script you recommend.)

Thanks!

Jeff
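
Not an official script, just a sketch using youtube-dl over the metadata files; the 'webpage_url' key below is an assumption, not a documented field, so inspect your JSON files first:

import glob
import json
import subprocess

for path in glob.glob('downloaded_videos_info/*.json'):
    with open(path) as f:
        info = json.load(f)
    url = info.get('webpage_url')  # hypothetical key; check your JSON
    if url:
        subprocess.run(['youtube-dl', url], check=False)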

Not able to reproduce results

I am unable to reproduce the results you mentioned in the paper, running on Deepfakes c40 images. Do the weights you provided achieve the same results as those mentioned?
