
residualmaskingnetwork's Introduction

Facial Expression Recognition using Residual Masking Network

The code for my undergraduate thesis.


Inference:

Open In Colab

  1. Install from pip
pip install rmn

# or build from source

git clone git@github.com:phamquiluan/ResidualMaskingNetwork.git
cd ResidualMaskingNetwork
pip install -e .
  2. Run the demo in Python (requires a webcam)
from rmn import RMN
m = RMN()
m.video_demo()
  3. Detect emotions in an image
import cv2
from rmn import RMN

m = RMN()
image = cv2.imread("some-image-path.png")
results = m.detect_emotion_for_single_frame(image)
print(results)
image = m.draw(image, results)
cv2.imwrite("output.png", image)


Recent Update

  • [07/03/2023] Restructure project, update README
  • [05/05/2021] Release version 2, add Colab demo
  • [27/02/2021] Add paper
  • [14/01/2021] Package project and publish rmn on PyPI
  • [27/02/2020] Update TensorBoard visualizations and Overleaf source
  • [22/02/2020] Test-time augmentation implementation
  • [21/02/2020] ImageNet training code and trained weights released
  • [21/02/2020] ImageNet evaluation results released
  • [10/01/2020] Verify demo and training procedure on another machine
  • [09/01/2020] First upload

Benchmarking on FER2013

We benchmark our code thoroughly on two datasets: FER2013 and VEMO. Below are the results and trained weights:

Model              Accuracy (%)
VGG19              70.80
EfficientNet_b2b   70.80
Googlenet          71.97
Resnet34           72.42
Inception_v3       72.72
Bam_Resnet50       73.14
Densenet121        73.16
Resnet152          73.22
Cbam_Resnet50      73.39
ResMaskingNet      74.14
ResMaskingNet + 6  76.82

Results on the VEMO dataset can be found in my thesis or slides (attached below).

Benchmarking on ImageNet

We also benchmark our model on the ImageNet dataset.

Model                   Top-1 Accuracy (%)  Top-5 Accuracy (%)
Resnet34                72.59               90.92
CBAM Resnet34           73.77               91.72
ResidualMaskingNetwork  74.16               91.91

Installation

  • Install PyTorch by selecting your environment on the website and running the appropriate command.
  • Clone this repository and install package prerequisites below.
  • Then download the dataset by following the instructions below.

Datasets

Training on FER2013

Open In Colab

  • To train a network, specify the model name and other hyperparameters in a config file (located at configs/*), make sure that config is loaded in the main file, then start training by running the main file, for example:
python main_fer.py  # Example for fer2013_config.json file
  • The best checkpoint is selected by best validation accuracy and saved under saved/checkpoints
  • TensorBoard training logs are written to saved/logs; to view them, run tensorboard --logdir saved/logs/

  • By default, the alexnet model is trained; you can switch to another model by editing the configs/fer2013_config.json file (e.g. to resnet18, cbam_resnet50, or my network resmasking_dropout1). A sketch of such a config file appears below.
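
For orientation, here is a minimal sketch of what such a config file might contain. The key names and values are assumptions pieced together from this README and the issues further down (arch, num_classes, distributed, lr, max_iter, and batch size are all referenced there); configs/fer2013_config.json in the repo is the authoritative source.

{
  "arch": "resmasking_dropout1",
  "num_classes": 7,
  "in_channels": 3,
  "distributed": 0,
  "lr": 0.001,
  "batch_size": 48,
  "max_iter": 100000
}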

Training on the Imagenet dataset

To train resnet34 on 4 V100 GPUs on a single machine:

python ./main_imagenet.py -a resnet34 --dist-url 'tcp://127.0.0.1:12345' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0

Evaluation

For students who need to control the font family of the confusion matrix and want to typeset it for LaTeX, below is an example of generating a striking confusion matrix.

(Read this article for more information; there will be bugs if you run the code blindly without reading it.)

python cm_cbam.py
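
For reference, here is a generic sketch (not the contents of cm_cbam.py) of rendering a confusion matrix with a LaTeX-friendly serif font using scikit-learn and matplotlib; the labels are FER2013's seven classes, and y_true / y_pred are placeholders for real labels and predictions:

import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

matplotlib.rcParams["font.family"] = "serif"  # match the LaTeX body font

emotions = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
y_true = np.random.randint(0, 7, size=100)  # placeholder: real test labels
y_pred = np.random.randint(0, 7, size=100)  # placeholder: model predictions

cm = confusion_matrix(y_true, y_pred, normalize="true")
disp = ConfusionMatrixDisplay(cm, display_labels=emotions)
disp.plot(cmap="Blues", values_format=".2f", xticks_rotation=45)
plt.tight_layout()
plt.savefig("confusion_matrix.pdf")  # vector output embeds cleanly in LaTeX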

Ensemble method

I used an unweighted-average ensemble to fuse 7 different models. To reproduce the results, follow these steps (a sketch of the fusion itself appears after this list):

  1. Download all the needed trained weights and place them in the ./saved/checkpoints/ directory. The download links can be found in the Benchmarking section.
  2. Edit the gen_results file and run it to generate offline results for each model.
  3. Run the gen_ensemble.py file to compute the accuracy of the ensemble methods.
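
The fusion step itself is simple; here is a minimal sketch, assuming each model's softmax outputs were saved offline as .npy arrays of shape (N, 7) alongside an array of test labels (both file paths are illustrative, not the repo's actual layout):

import glob
import numpy as np

prob_files = sorted(glob.glob("./saved/results/*.npy"))  # illustrative path
probs = np.stack([np.load(f) for f in prob_files])  # (num_models, N, 7)
fused = probs.mean(axis=0)  # unweighted average over models

targets = np.load("./saved/test_target.npy")  # illustrative path
accuracy = (fused.argmax(axis=1) == targets).mean()
print(f"ensemble accuracy: {accuracy:.4f}")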

Dissertation and Slide

Authors

Citation

Pham Luan, The Huynh Vu, and Tuan Anh Tran. "Facial Expression Recognition using Residual Masking Network". In: Proc. ICPR. 2020.

@inproceedings{pham2021facial,
  title={Facial expression recognition using residual masking network},
  author={Pham, Luan and Vu, The Huynh and Tran, Tuan Anh},
  booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
  pages={4513--4519},
  year={2021},
  organization={IEEE}
}


residualmaskingnetwork's People

Contributors

phamquiluan

residualmaskingnetwork's Issues

Hyperparameters selection

Nice contribution; I am using it in my master's program, and I will of course cite you.

How did you find the best hyperparameters in the config? For example, for fer_2013, is it OK if I only change the arch name, or should I also change e.g. lr, max_iter, or batch?

I ran your implementation and got 73.06, not 74.14.

How to quickstart this project?

Hello, I am really fascinated by your work, and I am trying to rebuild this FER system from scratch for my project.
How should I proceed to replicate the system and reach the reported accuracy?

No file named "test_target.npy" found

Hi. Thank you for giving access to the pretrained models.
I wanted to test the saved models, but after I ran gen_results.py and created the predictions,
I can't run gen_ensembles because there is no file named target_test.npy.
Where is it?
Could you give me some insight?

How to get the model once it has been trained

Hi,
I have trained the model by running main_fer2013.py, and the training completed successfully (after adding the FER dataset to saved/, etc.).
However, how can I use the model to predict images once it is trained? I see it created a file in the checkpoints/ dir, but I am not sure how to use it.

Thank you
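
A minimal sketch of one way to do this: rebuild the architecture, load the checkpoint weights, and run a forward pass. The checkpoint layout (a bare state_dict, or a dict holding one under a "net" key) and the 224x224 preprocessing are assumptions; inspect the file with torch.load(path, map_location="cpu") first.

import cv2
import torch
from models import resmasking_dropout1  # the repo's model factory

model = resmasking_dropout1(in_channels=3, num_classes=7)
ckpt = torch.load("saved/checkpoints/your_checkpoint", map_location="cpu")
state_dict = ckpt.get("net", ckpt)  # assumption: weights may sit under "net"
model.load_state_dict(state_dict, strict=False)
model.eval()

image = cv2.imread("face.png")  # assumed pre-cropped face image
image = cv2.resize(image, (224, 224))
tensor = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
with torch.no_grad():
    logits = model(tensor)
print(logits.softmax(dim=1))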

Which configuration is used to reproduce the 74%?

First of all, thank you for this great contribution. I was trying to replicate your result with the Residual Masking Network; I used the fer2013 config file but with arch resmasking_dropout1 and channel 1.

Unfortunately, I only got 66% precision on the train set after almost 8 hours of training (currently I am working on speeding it up using multiple TPUs).

I was wondering what configuration you used. I also noticed some commented code in resnet.py like this:

self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False)
# self.conv1 = nn.Conv2d(in_channels, self.inplanes, kernel_size=7, stride=2, padding=3, bias=False)

Since I am not doing transfer learning on the fer2013 dataset, the channel size should be changed to 1, right?

Resmasking Forward Function TypeError

import torch
import torch.nn as nn
from torchvision.models.utils import load_state_dict_from_url
from torchvision.models.resnet import ResNet, BasicBlock, Bottleneck

model_urls = {
    "resnet18": "https://download.pytorch.org/models/resnet18-5c106cde.pth",
    "resnet34": "https://download.pytorch.org/models/resnet34-333f7ec4.pth",
    "resnet50": "https://download.pytorch.org/models/resnet50-19c8e357.pth",
}


class ResMasking(ResNet):
    def __init__(self, weight_path=""):
        super(ResMasking, self).__init__(block=BasicBlock, layers=[2, 2, 2, 2])
        if weight_path:
            state_dict = torch.load(weight_path)
            self.load_state_dict(state_dict, strict=False)
        else:
            state_dict = load_state_dict_from_url(model_urls["resnet18"], progress=True)
            self.load_state_dict(state_dict, strict=False)
        self.fc = nn.Linear(512, 7)

        # one masking branch per residual stage
        self.mask1 = self._masking(64, 64, depth=4)
        self.mask2 = self._masking(128, 128, depth=3)
        self.mask3 = self._masking(256, 256, depth=2)
        self.mask4 = self._masking(512, 512, depth=1)

    def _masking(self, in_channels, out_channels, depth):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            *[
                nn.Sequential(
                    nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_channels),
                    nn.ReLU(inplace=True),
                )
                for _ in range(depth - 1)
            ],
        )

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        m = self.mask1(x)
        x = x * (1 + m)

        x = self.layer2(x)
        m = self.mask2(x)
        x = x * (1 + m)

        x = self.layer3(x)
        m = self.mask3(x)
        x = x * (1 + m)

        x = self.layer4(x)
        m = self.mask4(x)
        x = x * (1 + m)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)

        x = self.fc(x)
        return x


class ResMasking50(ResNet):
    def __init__(self, weight_path=""):
        super(ResMasking50, self).__init__(block=Bottleneck, layers=[3, 4, 6, 3])
        if weight_path:
            state_dict = torch.load(weight_path)
            self.load_state_dict(state_dict, strict=False)
        else:
            state_dict = load_state_dict_from_url(model_urls["resnet50"], progress=True)
            self.load_state_dict(state_dict, strict=False)
        self.fc = nn.Linear(2048, 7)

        self.mask1 = self._masking(256, 256, depth=4)
        self.mask2 = self._masking(512, 512, depth=3)
        self.mask3 = self._masking(1024, 1024, depth=2)
        self.mask4 = self._masking(2048, 2048, depth=1)

    def _masking(self, in_channels, out_channels, depth):
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            *[
                nn.Sequential(
                    nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_channels),
                    nn.ReLU(inplace=True),
                )
                for _ in range(depth - 1)
            ],
        )

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        m = self.mask1(x)
        x = x * (1 + m)

        x = self.layer2(x)
        m = self.mask2(x)
        x = x * (1 + m)

        x = self.layer3(x)
        m = self.mask3(x)
        x = x * (1 + m)

        x = self.layer4(x)
        m = self.mask4(x)
        x = x * (1 + m)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)

        x = self.fc(x)
        return x


def resmasking(in_channels=3, num_classes=7, weight_path=""):
    return ResMasking(weight_path)


def resmasking50_dropout1(in_channels=3, num_classes=7, weight_path=""):
    model = ResMasking50(weight_path)
    model.fc = nn.Sequential(nn.Dropout(0.4), nn.Linear(2048, num_classes))
    return model


def resmasking_dropout1(in_channels=3, num_classes=7, weight_path=""):
    model = ResMasking(weight_path)
    model.fc = nn.Sequential(
        nn.Dropout(0.4),
        nn.Linear(512, num_classes),
    )
    return model


def resmasking_dropout2(in_channels=3, num_classes=7, weight_path=""):
    model = ResMasking(weight_path)
    model.fc = nn.Sequential(
        nn.Linear(512, 128),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(128, num_classes),
    )
    return model


def resmasking_dropout3(in_channels=3, num_classes=7, weight_path=""):
    model = ResMasking(weight_path)
    model.fc = nn.Sequential(
        nn.Linear(512, 512),
        nn.ReLU(True),
        nn.Dropout(),
        nn.Linear(512, 128),
        nn.ReLU(True),
        nn.Dropout(),
        nn.Linear(128, num_classes),
    )
    return model

TypeError: ResMasking.forward() got an unexpected keyword argument 'in_channels'

main.py

# imports implied by the snippet
import json
import os

import torch
import torch.multiprocessing as mp


def main(config_path):
    """
    This is the main function that sets up the training.

    Parameters
    ----------
    config_path : str
        path to config file
    """
    # load configs and set random seed
    configs = json.load(open(config_path))
    configs["cwd"] = os.getcwd()

    # load model and data_loader
    model = get_model(configs)
    train_set, val_set, test_set = get_dataset(configs)

    # init trainer and run the training
    # from trainers.fer2013_trainer import FER2013Trainer
    # from trainers.centerloss_trainer import FER2013Trainer
    trainer = FER2013Trainer(model, train_set, val_set, test_set, configs)

    if configs["distributed"] == 1:
        ngpus = torch.cuda.device_count()
        mp.spawn(trainer.train, nprocs=ngpus, args=())
    else:
        trainer.train()


def get_model(configs):
    # Assuming 'arch' in configs matches 'vgg19_bn_mask_pretrain'
    if configs["arch"] == "resmasking_dropout3":
        # Directly return the imported model architecture
        model = resmasking_dropout3(num_classes=configs["num_classes"])
        return model
    else:
        # Handle case where 'arch' does not match
        raise ValueError(f"Model architecture {configs['arch']} is not supported.")


def get_dataset(configs):
    """
    This function gets the raw dataset.
    """
    # todo: add transform
    train_set = fer2013("train", configs)
    val_set = fer2013("val", configs)
    test_set = fer2013("test", configs, tta=True, tta_size=10)
    return train_set, val_set, test_set


if __name__ == "__main__":
    main("/content/drive/MyDrive/Resnet/fer2013_config.json")

License

Can you please add a license?

ModuleNotFoundError: No module named 'pytorchcv'

File "/content/ResidualMaskingNetwork/models/init.py", line 25, in
from pytorchcv.model_provider import get_model as ptcv_get_model
ModuleNotFoundError: No module named 'pytorchcv'
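
(pytorchcv is a package available on PyPI, so pip install pytorchcv should resolve this.)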

The "Training on FER2013" notebook can't be opened in Colab

No module named 'barez'

Hello author, when I run this project locally I get an error that the "barez" module is missing. It does not seem to be a real library; I searched the web for a long time and could not find any relevant information. Could you please suggest a solution? Thank you very much!

Got 65.67% accuracy

Sorry, I tried to run the main_fer2013.py file with the model name resmasking_dropout1, but I only got 65.63% accuracy on the private test set. What caused this?

How can I get an example of a figure that fails to predict?

This is a very good study!
I have been learning about ensembles, and I want to retrieve the misclassified images from that training.
Can I get an example of a figure that is misclassified, from the accuracy computation done by gen_ensemble?
I'm a deep learning newbie, so I don't know how to do this.

Issue when using m.draw

Hello.

Thank you for your code. I've installed rmn with pip install rmn. It works fine when I'm using this:

video_capture = cv2.VideoCapture(0)
m = RMN()
_, frame = video_capture.read()
results = m.detect_emotion_for_single_frame(frame)
frame = m.draw(frame, results)

But I wanted to try using your model with a pre-detected face image (since I'll probably use another face detector). So, for that case, I changed the code like this:

video_capture = cv2.VideoCapture(0)
m = RMN()
_, frame = video_capture.read()

face = m.detect_faces(frame)
xmin = face[0]["xmin"]
ymin = face[0]["ymin"]
xmax = face[0]["xmax"]
ymax = face[0]["ymax"]

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
face_image = frame[ymin:ymax, xmin:xmax]
results = m.detect_emotion_for_single_face_image(face_image)
frame = m.draw(frame, results)

And I got an error on m.draw:


Traceback (most recent call last):
  File "/home/daddywesker/EmotionRecognition/RMN/testing_rmn.py", line 20, in <module>
    frame = m.draw(frame, results)
  File "/home/daddywesker/anaconda3/envs/rmn/lib/python3.7/site-packages/rmn/__init__.py", line 208, in draw
    xmin = r["xmin"]
TypeError: string indices must be integers

Process finished with exit code 1

This is understandable, since the result in that case doesn't contain all the fields needed for drawing, and its structure is slightly different. So maybe you should add an "if" check to avoid that error? Or just let the user know what they should pass to m.draw.
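
One possible guard, sketched on the caller's side (the key check is an assumption about the result structure; rmn's internals may prefer a different fix):

def safe_draw(m, frame, results):
    # m.draw reads r["xmin"] etc., so skip drawing when those keys are
    # absent, e.g. for results from a pre-cropped face image.
    if all(isinstance(r, dict) and "xmin" in r for r in results):
        return m.draw(frame, results)
    return frame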

How to get the benchmark performance on FER2013?

Thanks for your great job and kind sharing.

I am doing some work on FER and intend to reproduce the performance of some classic networks (including Resnet34, Resnet152, etc.) as described in https://github.com/phamquiluan/ResidualMaskingNetwork#benchmarking-on-fer2013.

But I cannot reach 70%+ accuracy, only around 60%, even though I tried the parameters in https://github.com/phamquiluan/ResidualMaskingNetwork/blob/master/configs/fer2013_config.json.

Any recommendations or suggestions, please?

How do you get 74% accuracy

Sorry, I tried to run the main_fer2013.py file: with the model name resnet34 I got 72.88%, but with resmasking_dropout1 I only got 69.09% accuracy on the private test set. What caused this? I also referred to #7.

Tensorrt Conversion returning NaN Tensors for pre-trained Models

I am trying to convert this model to TensorRT. First I convert the model to ONNX:

import torch
from models import resmasking_dropout1

model_path = '/media/soccer/Samsung 1TB SSD/Shahzeb/ResidualMaskingNetwork-master/emotion_model_v1.pt'

model = resmasking_dropout1(in_channels=3, num_classes=7)
model.cuda()

dummy_input = torch.randn(1, 3, 224, 224, device='cuda')
out = model(dummy_input)

model.load_state_dict(torch.load(model_path))
model = model.half()

dummy_input = torch.randn(1, 3, 224, 224, device='cuda')
dummy_input = dummy_input.half()

torch.onnx.export(model, dummy_input, "emotion-model-v1_half.onnx", verbose=True)

After converting the model to ONNX, I use the following script to convert it to TensorRT:


import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
import tensorrt as trt
def set_net_batch(network, batch_size):
    """Set network input batch size.
    The ONNX file might have been generated with a different batch size,
    say, 64.
    """
    shape = list(network.get_input(0).shape)
    print("ONNX input shape before :", shape)
    print("ONNX input dtype before :", network.get_input(0).dtype)
    network.get_input(0).dtype = trt.float16
    shape[0] = batch_size
    network.get_input(0).shape = shape
    print("ONNX input shape after:", list(network.get_input(0).shape))
    print("ONNX input dtype after:", network.get_input(0).dtype)
    #shape = list(network.get_output(0).shape)
    #print("ONNX output shape before :", shape)
    print("ONNX output dtype before :", network.get_output(0).dtype)
    network.get_output(0).dtype = trt.float16
    #shape[0] = batch_size
    #network.get_output(0).shape = shape
    #print("ONNX output shape after:", list(network.get_output(0).shape))
    print("ONNX output dtype after:", network.get_output(0).dtype)
    return network
def build_engine(onnx_file_path, BATCH_SIZE, enable_fp16=False, enable_int8=False):
    TRT_LOGGER = trt.Logger()
    EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    MAX_BATCH_SIZE = BATCH_SIZE
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')
    # network = set_net_batch(network, MAX_BATCH_SIZE)
    # network.get_input(0).shape = [MAX_BATCH_SIZE, 3, 244, 244]
    # exit()
    # allow TensorRT to use up to 1GB of GPU memory for tactic selection
    builder.max_workspace_size = 1 << 28
    # we have only one image in batch
    builder.max_batch_size = MAX_BATCH_SIZE
    print('Building engine with max batch size: %d', builder.max_batch_size)
    # use FP16 mode if possible
    if builder.platform_has_fast_fp16 and enable_fp16:
        print("Using FP16 Mode ...")
        builder.fp16_mode = True
    # use INT8 mode if possible
    if builder.platform_has_fast_int8 and enable_int8:
        print("Using INT8 Mode ...")
        builder.int8_mode = True
    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    if engine is None:
        print('Failed to build engine')
        return None
    return engine
def main():
    # initialize TensorRT engine and parse ONNX model
    ONNX_FILE_PATH = './emotion/emotion-model-v1_half.onnx'
    engine = build_engine(ONNX_FILE_PATH, BATCH_SIZE=1, enable_fp16=True, enable_int8=False)
    TRT_FILE_PATH = './emotion/emotion-model-v1_half.trt'
    with open(TRT_FILE_PATH, 'wb') as engine_file:
        engine_file.write(engine.serialize())
    print("Completed creating engine")
if __name__ == "__main__":
    main()



The models return NaN tensors on inference.

How many params does Resmasking50 have?

Hi,

As stated in your dissertation, the Resmasking (resnet34 base) model has 149M params. Do you know how many params resmasking50 has? I got 1.8 billion for it, which seems too big; I want to make sure my result is correct.

And do you use pre-trained weights for resmasking50? If yes, which one?

I saw your submission was accepted, congratulations! I am a student from Vietnam; could I get your contact to ask some questions and learn from your experience?

Thank you so much!

Torch-Cuda issue on Mac

When I try to train on my Mac I get: AssertionError: Torch not compiled with CUDA enabled

I don't think CUDA is supported on Mac, since it requires NVIDIA GPUs. Is there any way to remove this dependency or fix this issue on a Mac?
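
A common workaround, sketched under the assumption that hard-coded .cuda() calls are the culprit, is to route everything to CPU:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Remap GPU-saved storages when loading checkpoints:
state_dict = torch.load("saved/checkpoints/some_checkpoint", map_location=device)
# ...and replace hard-coded model.cuda() / tensor.cuda() calls with:
# model = model.to(device); tensor = tensor.to(device)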

How to use it to train more or less than 7 emotions?

Hi,
I tried your model for prediction, and it gives much better results for the 7 classes in fer2013.
But I want to train it on 5 additional emotion classes.
For now, I was trying with one class fewer: I converted my 6-class data to the FER data format, set num_classes=6 in fer2013_config, and tried the default 'alexnet' model.
Training is running and so far has produced no errors.
But are those changes sufficient? And is it possible to train for more classes this way?
(Are any changes required in tta_trainer.py or alexnet.py?)
Please help me clarify this.
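
For what it's worth, based on the resmasking_dropout1 factory quoted in an issue above, the class count mainly flows through the final classifier head; a sketch (whether tta_trainer.py or the data pipeline hard-codes 7 classes anywhere is something to verify separately):

from models import resmasking_dropout1  # the repo's model factory

# The head is Dropout followed by Linear(512, num_classes), so num_classes
# here must match the num_classes set in fer2013_config.json.
model = resmasking_dropout1(in_channels=3, num_classes=6)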

Trained weights for ensemble method are missing

Can you add "resnet50_pretrained_vgg_rot30_2019Nov13_08.20" and "resnet18_rot30_2019Nov05_17.44" trained weights on your shared google drive folder for ensemble method, they are missing and I can't get 76.82% accuracy without them. Thanks!

How to access and read the paper

Dear phamquiluan,

I am interested in your paper, but I cannot find an access link for it at ICPR2020 or on arXiv.
I would like to read the paper to assess the strengths of your algorithm.

Could you please provide a link to read the paper?

Best,
vujadeyoon

CUDA out of memory

I'm getting this error when I run python ssd_infer.py. Any suggestions?

RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 10.76 GiB total capacity; 996.23 MiB already allocated; 8.81 MiB free; 1010.00 MiB reserved in total by PyTorch)
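
If ssd_infer.py is pure inference, one standard mitigation is to avoid keeping autograd buffers around; a generic sketch (model and input_tensor are placeholders for whatever the script builds):

import torch

with torch.no_grad():  # skip gradient buffers during inference
    output = model(input_tensor)
torch.cuda.empty_cache()  # release cached, unused blocks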

Accuracy evaluation on the private test set

Hello,
Thank you for the source code and paper. While reading the paper, I did not see whether the accuracy on the test set was evaluated with or without TTA. Also, the results I obtained were only 73.140 with TTA and 71.106 without TTA.
Looking forward to your reply. Thank you.

How to train the CK dataset?

Hi,
I'm trying to train your model on the CK dataset, but I haven't managed to do it.
Could you tell me the steps to train on the CK dataset? I downloaded the CK dataset from its website.
Thanks!

How do I run detection on pictures?

from rmn import RMN
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\student\faces\demo6\ResidualMaskingNetwork-master\ResidualMaskingNetwork-master\rmn\__init__.py", line 43
    desc=f"Downloading {local_path}..",
                                      ^
SyntaxError: invalid syntax
I want to use an image as input; how do I run this?
