
Comments (44)

thaoshibe avatar thaoshibe commented on July 19, 2024 1

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

from beautygan_pytorch.

Jian-danai avatar Jian-danai commented on July 19, 2024

Sorry, but I have a question. Is it necessary to train the vgg16 by ourselves, or can we just use the pretrained model downloaded from Pytorch (and use features[:18] for the vgg forward pass)?

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

@Jian-danai I think we can just use the pretrained model from Pytorch. I have run the code and the result was still good. The aim of vgg16 is to extract high-level features, and the pretrained model can do this, too. By the way, training a vgg model on ImageNet ourselves takes a lot of time lol 😏.

from beautygan_pytorch.

Jian-danai avatar Jian-danai commented on July 19, 2024

@Jian-danai I think we can just use the pretrained model from Pytorch. I have run the code and the result was still good. The aim of vgg16 is to extract high-level features, and the pretrained model can do this, too. By the way, training a vgg model on ImageNet ourselves takes a lot of time lol 😏.
I see. Thank you very much.

from beautygan_pytorch.

Jian-danai avatar Jian-danai commented on July 19, 2024

By the way, can you change the batch size? When I change the batch size from 1 to 2, I get this error:

Traceback (most recent call last):
  File "train.py", line 83, in <module>
    train_net()
  File "train.py", line 60, in train_net
    solver.train()
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 375, in train
    g_A_lip_loss_his = self.criterionHis(fake_A, ref_B, mask_A_lip, mask_B_lip, index_A_lip) * self.lambda_his_lip
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 239, in criterionHis
    mask_src = mask_src.expand(1, 3, mask_src.size(2), mask_src.size(2)).squeeze()
RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 0. Target sizes: [1, 3, 256, 256]. Tensor sizes: [2, 3, 256, 256]

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

Note that the masks and the facial images are loaded together:

for self.i, (img_A, img_B, mask_A, mask_B) in enumerate(self.data_loader_train):

So when you change the batch_size (e.g. batch_size=2), you should change such lines accordingly, like this:

mask_src = mask_src.expand(2, 3, mask_src.size(2), mask_src.size(2)).squeeze()

Because expand(a, b, c, d) broadcasts the tensor to a view of shape (a, b, c, d). In this case, a is the batch size.

By the way, there is no real need to change the batch_size: generating makeup images is a 'specific' task, and a larger batch means a heavier demand on GPU memory. With batch=1 (no modification to the author's code except the vgg change I mentioned before), training takes about 5000MB of GPU memory (I checked via the 'nvidia-smi' command). You may run out of GPU memory if you enlarge the batch size.

from beautygan_pytorch.

Jian-danai avatar Jian-danai commented on July 19, 2024

Thanks for your reply, but I do not understand why enlarging my batch size is meaningless. If I have enough GPU memory, will a larger batch_size contribute to higher training speed or higher accuracy, or not?
I have tried modifying 'train.py' with
parser.add_argument('--batch_size', default='2', type=int, help='batch_size')
and then modified 'solver_makeup.py' with
mask_src = mask_src.expand(2, 3, mask_src.size(2), mask_src.size(2)).squeeze()
mask_tar = mask_tar.expand(2, 3, mask_tar.size(2), mask_tar.size(2)).squeeze()
but I still got this error (actually I already tried modifying these two files yesterday):
Traceback (most recent call last):
  File "train.py", line 83, in <module>
    train_net()
  File "train.py", line 60, in train_net
    solver.train()
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 378, in train
    g_A_lip_loss_his = self.criterionHis(fake_A, ref_B, mask_A_lip, mask_B_lip, index_A_lip) * self.lambda_his_lip
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/solver_makeup.py", line 248, in criterionHis
    input_match = histogram_matching(input_masked, target_masked, index)
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/ops/histogram_matching.py", line 51, in histogram_matching
    dst_align = [dstImg[i, index[0], index[1]] for i in range(0, 3)]
  File "/data/home/yang-bj/bjyang/BeautyGAN_pytorch/ops/histogram_matching.py", line 51, in <listcomp>
    dst_align = [dstImg[i, index[0], index[1]] for i in range(0, 3)]
IndexError: index 220 is out of bounds for axis 1 with size 3

(The 'index 220' changes to 'index 3' or something else if I try again.)

One more question: the multi-GPU mode doesn't work, right? (I have not found any code related to multi-GPU computation....)

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

I am checking on this; The author did not implement Multi-GPU, you need to do it yourself~~

from beautygan_pytorch.

DanielMao2015 avatar DanielMao2015 commented on July 19, 2024

@TalentedMUSE Hey, is there any chance you could share your code?

from beautygan_pytorch.

pao-hui avatar pao-hui commented on July 19, 2024

Hi, does anyone know why the BeautyGAN authors chose 6 residual blocks in the generator rather than 9 as in CycleGAN, given that the training images are 256*256?

I also found that some unofficial TensorFlow-based implementations use 9 blocks. For example, https://github.com/baldFemale/beautyGAN-tf-Implement

It's really weird!

from beautygan_pytorch.

yql0612 avatar yql0612 commented on July 19, 2024

I am checking on this; The author did not implement Multi-GPU, you need to do it yourself~~

Thanks for your sharing, but I found the dataset mask pictures all look black, with no other visible content, like this. Is this really normal?
image

from beautygan_pytorch.

DateBro avatar DateBro commented on July 19, 2024

@yql0612 They are not all black. If you open the image and have a look, you will find that there are some grey segmentation labels.

from beautygan_pytorch.

yql0612 avatar yql0612 commented on July 19, 2024

from beautygan_pytorch.

DateBro avatar DateBro commented on July 19, 2024

@yql0612 I have opened your screenshot in a new tab and found that it is not all black. If you look carefully, you can find some face segmentation labels whose grey values are slightly greater than 0.

from beautygan_pytorch.

yql0612 avatar yql0612 commented on July 19, 2024

from beautygan_pytorch.

yql0612 avatar yql0612 commented on July 19, 2024

@yql0612 They are not all black. If you open the image and have a look you will find that there are some grey segmentation labels.

When I trained according to this project, I met problems as follows:
  File "/home/data_mount_2/yql/makeup_model/BeautyGAN_pytorch-master/data_loaders/makeup.py", line 79, in __getitem__
    image_B = Image.open(os.path.join(self.image_path, "test/makeup", getattr(self, "test_" + self.cls_B + "_filenames")[index % getattr(self, 'num_of_test_' + self.cls_list[1] + '_data')])).convert("RGB")
  File "/home/data_mount_2/yql/conda/envs/PSGAN/lib/python3.7/site-packages/PIL/Image.py", line 2878, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/home/data_mount_2/yql/dataset/BeautyGAN/data/test/makeup/train_MAKEMIX.txt'

But when I add the txt to the /test/makeup folder, another problem arises:
  File "/home/data_mount_2/yql/makeup_model/BeautyGAN_pytorch-master/data_loaders/makeup.py", line 79, in __getitem__
    image_B = Image.open(os.path.join(self.image_path, "test/makeup", getattr(self, "test_" + self.cls_B + "_filenames")[index % getattr(self, 'num_of_test_' + self.cls_list[1] + '_data')])).convert("RGB")
  File "/home/data_mount_2/yql/conda/envs/PSGAN/lib/python3.7/site-packages/PIL/Image.py", line 2931, in open
    "cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file '/home/data_mount_2/yql/dataset/BeautyGAN/data/test/makeup/train_MAKEMIX.txt'
I have tried many times to solve it, but it still doesn't work. I sincerely need your help. Thank you very much!

from beautygan_pytorch.

Zteat avatar Zteat commented on July 19, 2024

Thank you for your tips! It helps a lot!

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

[screenshot of the error]
Has anyone met this problem?? NEED HELP!!!!

from beautygan_pytorch.

thaoshibe avatar thaoshibe commented on July 19, 2024

[screenshot of the error]
Has anyone met this problem?? NEED HELP!!!!

Could you check if your .txt file is empty or not?

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

Has anyone met this problem?? NEED HELP!!!!

Could you check if your .txt file is empty or not?

My txt is like this, I don't know what else to do, please help!!
[screenshot of the txt file]

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

I wrote the txt files like the guideline says and put them in the right place, but I am sorry that I don't know what a custom or Makeup Transfer dataset is. Could you please tell me more?

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?

I wrote the txt files like the guideline says and put them in the right place, but I am sorry that I don't know what a custom or Makeup Transfer dataset is. Could you please tell me more?

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)

Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved this problem but now there is a new problem! I didn't have the folder "addings" and the file "vgg_conv.pth". Where can I find them???

from beautygan_pytorch.

Zteat avatar Zteat commented on July 19, 2024

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)
Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved this problem but now there is a new problem! I didn't have the folder "addings" and the file "vgg_conv.pth". Where can I find them???

Download the vgg model from "https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth".

from beautygan_pytorch.

wangyyff avatar wangyyff commented on July 19, 2024

@wangyyff I think you should double-check your txt file. Pls follow the guideline here #8 (comment)
Are you using Makeup Transfer Dataset or Custom dataset?

I am so sorry! I have solved this problem but now there is a new problem! I didn't have the folder "addings" and the file "vgg_conv.pth". Where can I find them???

Download the vgg model from "https://bethgelab.org/media/uploads/pytorch_models/vgg_conv.pth".

Thanks for all your help!! Now there's another problem. Maybe I'm just being foolish!! All the images are in "makeup" and the txts are in "makeup_final". What do I need to rewrite?
[screenshot of the folder layout]

from beautygan_pytorch.

flyz1 avatar flyz1 commented on July 19, 2024

After many hours, I can finally run the code, haha. Here are some tips to run it:

  1. train_MAKEMIX.txt contains the names of the with-makeup pics in the MT dataset (train_SYMIX contains the non-makeup pics). It looks like this:
    [screenshot of the txt file]

The names are duplicated in every row because the mask pics share the same names. The mask pics are in the "seg" folder of the MT dataset.

You can write a Python script to automatically collect the names of the pics (see the sketch below). As for me, I chose 2400 makeup pics for training and the remaining 300 for testing. Remember to duplicate the names!

  2. You can organize the dataset like this:
    [screenshot of the dataset folder structure]
    Then you should change the paths in makeup.py. For example:
    [screenshot of the modified paths in makeup.py]
  3. You can just download the VGG model from the Pytorch model zoo:
import torchvision.models as models

# self.vgg = net.VGG()
# self.vgg.load_state_dict(torch.load('vgg_conv.pth'))
self.vgg = models.vgg16(pretrained=True)

And then, write a forward function on your own to grab the output of the 4th conv block:

# If you print the vgg16 model, you will find that conv4_1 is features[17],
# so running indices 0..17 stops right after that layer.
def vgg_forward(self, model, x):
    for i in range(18):
        x = model.features[i](x)
    return x

Finally:

vgg_org = self.vgg_forward(self.vgg, org_A)
vgg_org = Variable(vgg_org.data).detach()   # treat the VGG features of the source as a fixed target
vgg_fake_A = self.vgg_forward(self.vgg, fake_A)
g_loss_A_vgg = self.criterionL2(vgg_fake_A, vgg_org) * self.lambda_A * self.lambda_vgg

......

(Right now the network speed at my home really sucks.... It is so hard to download the ImageNet dataset, and it's hard to make the parameters match since the author made some modifications to the VGG. I think the method above can work for you~)

At last, I am really grateful for the work that the author has done. It helps me a lot! Great thanks!

Hello, I'm just starting to learn, so I don't understand a lot. Would you please share your code? Thanks a million!

from beautygan_pytorch.

TomatoBoy90 avatar TomatoBoy90 commented on July 19, 2024

[quoting the run-the-code tips and flyz1's code-sharing request above]

He changed the file solver_makeup.py for the vgg part and created 4 txt files (train_MAKEMIX, test_MAKEMIX, train_SYMIX, test_SYMIX).

from beautygan_pytorch.

flyz1 avatar flyz1 commented on July 19, 2024

[quoting the tips, the code-sharing request, and TomatoBoy90's reply above]

Thanks for your reply! But after trying for many days I still cannot run the code. So would you mind sharing your code with me? Thanks a million!

from beautygan_pytorch.

thaoshibe avatar thaoshibe commented on July 19, 2024

@Hellboykun I think your results are not that bad.
image
Mine look terrible. (Left to right: Source | Reference | Output)

But I found the original pretrained model is very stable.
Check Honlan's GitHub: https://github.com/Honlan/BeautyGAN
Or direct to Google Drive: https://drive.google.com/drive/folders/1pgVqnF2-rnOxcUQ3SO4JwHUFTdiSe5t9

Sadly, it is tensorflow, and they don't provide the training code!!

from beautygan_pytorch.

thaoshibe avatar thaoshibe commented on July 19, 2024

@wangyyff Sorry, I didn't check this thread lately. Have you solved your problem?
Actually I rewrote the data loader code, so if you haven't figured it out, I can share the code!!

from beautygan_pytorch.

Hellboykun avatar Hellboykun commented on July 19, 2024

@thaoshibe Honlan's model works very well, but unfortunately it's not clear how he implemented it. It was suggested to revise the data-processing part of makeup.py; I don't know if that works. If you train a good model, I hope you can share it. Thank you very much.

from beautygan_pytorch.

pirate-zhang avatar pirate-zhang commented on July 19, 2024

[quoting the run-the-code tips above]

Hi, thanks for your guide first of all! I'm just starting to learn, so I don't understand a lot. Would you please share your code? Thanks a lot!

from beautygan_pytorch.

pirate-zhang avatar pirate-zhang commented on July 19, 2024

@wangyyff Sorry, I didn't check this thread lately. Have you solved your problem?
Actually I rewrote the data loader code, so if you haven't figured it out, I can share the code!!

Hello! Could you share your code with me? I am new to this! Thanks a lot!

from beautygan_pytorch.

thaoshibe avatar thaoshibe commented on July 19, 2024

@wangyyff Sorry, I didn't check this thread lately. Have you solved your problem?
Actually I rewrote the data loader code, so if you haven't figured it out, I can share the code!!

Hello! Could you share your code with me? I am new to this! Thanks a lot!

Stay tuned. I'll upload it in the next couple of days!

from beautygan_pytorch.

thaoshibe avatar thaoshibe commented on July 19, 2024

@Hellboykun @wangyyff @pirate-zhang I've created a repo for my modification (dataloader, etc).

You can find it here: https://github.com/thaoshibe/BeautyGAN-pytorch-reimplementation
Not sure if I did anything wrong (haha), but I hope it helps.

from beautygan_pytorch.

pirate-zhang avatar pirate-zhang commented on July 19, 2024

from beautygan_pytorch.

xwh130 avatar xwh130 commented on July 19, 2024

Hi, writer! Could you share your code with me? Thanks a lot.

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

@Hellboykun I think your results are not that bad.
image
Mine look terrible. (Left to right: Source | Reference | Output)

But I found the original pretrained model is very stable.
Check Honlan's GitHub: https://github.com/Honlan/BeautyGAN
Or direct to Google Drive: https://drive.google.com/drive/folders/1pgVqnF2-rnOxcUQ3SO4JwHUFTdiSe5t9

Sadly, it is tensorflow, and they don't provide the training code!!

I have met the same problem in my experiments on some images. It seems that this method isn't quite robust to illumination--some shading on the original face may still be transferred to the target face. I think changing the loss function or tuning the hyperparameters may help.

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

@Hellboykun @wangyyff @pirate-zhang I've created a repo for my modification (dataloader, etc).

You can find it here: https://github.com/thaoshibe/BeautyGAN-pytorch-reimplementation
Not sure if I did anything wrong (haha), but I hope it helps.

great!~

from beautygan_pytorch.

TalentedMUSE avatar TalentedMUSE commented on July 19, 2024

Thank you for your tips! It helps a lot!

My honor~😄

from beautygan_pytorch.

NaVi-JackMartin avatar NaVi-JackMartin commented on July 19, 2024

Can the model process video in real time?

from beautygan_pytorch.

yql0612 avatar yql0612 commented on July 19, 2024

from beautygan_pytorch.

pirate-zhang avatar pirate-zhang commented on July 19, 2024

from beautygan_pytorch.
