
arah-release's Introduction

ARAH: Animatable Volume Rendering of Articulated Human SDFs

This repository contains the implementation of our paper ARAH: Animatable Volume Rendering of Articulated Human SDFs.

You can find detailed usage instructions for using pretrained models and training your own models below.

If you find our code useful, please cite:

@inproceedings{ARAH:2022:ECCV,
  title = {ARAH: Animatable Volume Rendering of Articulated Human SDFs},
  author = {Shaofei Wang and Katja Schwarz and Andreas Geiger and Siyu Tang},
  booktitle = {European Conference on Computer Vision},
  year = {2022}
}

Installation

Environment Setup

This repository has been tested on the following platform:

  1. Python 3.9.7, PyTorch 1.10 with CUDA 11.3 and cuDNN 8.2.0, Ubuntu 20.04/CentOS 7.9.2009

To clone the repo, run either:

git clone --recursive https://github.com/taconite/arah-release.git

or

git clone https://github.com/taconite/arah-release.git
git submodule update --init --recursive

Next, you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called arah using

conda env create -f environment.yml
conda activate arah

Lastly, compile the extension modules. You can do this via

python setup.py build_ext --inplace

SMPL Setup

Download SMPL v1.0 for Python 2.7 from the SMPL website (for the male and female models), and SMPLIFY_CODE_V2.ZIP from the SMPLify website (for the neutral model). After downloading, the male and female models inside SMPL_python_v.1.0.0.zip are smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl and smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl, respectively. Inside mpips_smplify_public_v2.zip, the neutral model is smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl. Remove the chumpy objects in these .pkl models using this code under a Python 2 environment (you can create such an environment with conda). Finally, rename the newly generated .pkl files and copy them to the subdirectories under ./body_models/smpl/. Eventually, the ./body_models folder should have the following structure:

body_models
 └-- smpl
    ├-- male
    |   └-- model.pkl
    ├-- female
    |   └-- model.pkl
    └-- neutral
        └-- model.pkl

Then, run the following script to extract necessary SMPL parameters used in our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved into ./body_models/misc/.
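For reference, the chumpy-removal step above essentially re-serializes each .pkl with the chumpy expressions evaluated into plain numpy arrays. Below is a minimal Python 2 sketch of that idea; it is not the official clean-up script linked above, and the strip_chumpy name and paths are only illustrative:

import pickle
import numpy as np
import chumpy as ch

def strip_chumpy(in_path, out_path):
    # Load the original SMPL model and replace every chumpy object with a plain numpy array.
    with open(in_path, 'rb') as f:
        data = pickle.load(f)
    clean = {}
    for key, value in data.items():
        clean[key] = np.array(value) if isinstance(value, ch.Ch) else value
    with open(out_path, 'wb') as f:
        pickle.dump(clean, f, protocol=2)

strip_chumpy('basicModel_neutral_lbs_10_207_0_v1.0.0.pkl', 'body_models/smpl/neutral/model.pkl')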

Quick Demo on the AIST++ Dataset

  1. Run bash download_demo_data.sh to download and extract 1) pretrained models and 2) the preprocessed AIST++ sequence.
  2. Run the pre-trained model on AIST++ poses via
    python test.py --num-workers 4 configs/arah-zju/ZJUMOCAP-377-mono_4gpus.yaml
    
    The script will compose a result .mp4 video in out/arah-zju/ZJUMOCAP-377-mono_4gpus/vis. There are 258 frames in total, so it will take some time to render all of them. If you want to check the result quickly, run:
    python test.py --num-workers 4 --end-frame 10 configs/arah-zju/ZJUMOCAP-377-mono_4gpus.yaml
    
    to render only the first 10 frames, or
    python test.py --num-workers 4 --subsampling-rate 25 configs/arah-zju/ZJUMOCAP-377-mono_4gpus.yaml
    
    to render every 25th frame. Inference requires ~20GB of VRAM; if you don't have that much memory, add the --low-vram option. This should run with ~12GB of VRAM at the cost of longer inference time.

Results on ZJU-MoCap

For easy comparison to our approach, we also store all of our rendering and geometry reconstruction results on the ZJU-MoCap dataset here. The train/val splits over cameras/poses follow NeuralBody's split. Pseudo ground truths for geometry reconstruction on the ZJU-MoCap dataset are stored in this folder. For the evaluation script and the data split for geometry reconstruction, please refer to this comment.

Dataset preparation

Due to license issues, we cannot publicly distribute our preprocessed ZJU-MoCap and H36M data. You have to get the raw data from their respective sources and use our preprocessing script to generate data that is suitable for our training/validation scripts. Please follow the steps in DATASET.md.

Download pre-trained skinning and SDF networks

We provide pre-trained models on the CAPE dataset as prerequisites, including 1) a meta-learned skinning network trained on the CAPE dataset and 2) a MetaAvatar SDF model. After downloading them, please put them in the respective folders under ./out/meta-avatar.

Training

To train new networks from scratch, run

python train.py --num-workers 4 ${path_to_config}

where ${path_to_config} is the relative path to the YAML config file, e.g. configs/arah-zju/ZJUMOCAP-313_4gpus.yaml.

Training and validation use wandb for logging, which is free to use but requires online registration.

Note that by default, all models are trained on 4 GPUs with a total batch size of 4. You can change the value of training.gpus to [0] in the configuration file to train on a single GPU with a batch size of 1; however, model accuracy may drop and training may become less stable.

Pre-trained models of ARAH (Work In Progress)

We provide pre-trained models, including multi-view and monocular models. After downloading them, please put them in the respective folders under ./out/arah-zju or ./out/arah-h36m.

Validate the trained model on within-distribution poses

To validate the trained model on novel views of training poses, run

python validate.py --novel-view --num-workers 4 ${path_to_config}

To validate the trained model on novel views of unseen poses, run

python validate.py --novel-pose --num-workers 4 ${path_to_config}

Test the trained model on out-of-distribution poses

To run the trained model on preprocessed poses, run

python test.py --num-workers 4 --pose-dir ${pose_dir} --test-views ${view} configs/arah/${config}

where ${pose_dir} denotes the directory under data/odp/CoreView_${sequence_name}/ that contains target (out-of-distribution) poses. ${view} indicates the testing views from which to render the model.

Currently, the code only supports animating ZJU-MoCap models for out-of-distribution poses.

License

We employ the MIT License for the ARAH code, which covers:

extract_smpl_parameters.py
train.py
validate.py
test.py
setup.py
configs
im2mesh/
preprocess_datasets/preprocess_ZJU-MoCap.py

Our SDF network is based on SIREN. Our mesh extraction code is borrowed from DeepSDF. The structure of our rendering code is largely based on IDR. Our root-finding code is modified from SNARF. We thank the authors of these papers for their wonderful works, which inspired this paper.

Modules not covered by our license are:

  1. Modified code from EasyMocap to preprocess ZJU-MoCap/H36M datasets (./preprocess_datasets/easymocap);
  2. Modified code from SMPL-X (./human_body_prior);

for these parts, please consult their respective licenses and cite the respective papers.

arah-release's People

Contributors

mikeqzy, taconite


arah-release's Issues

How can I save mesh and texture files?

First of all, thanks for your great work! But I ran into some questions when reproducing it on my custom dataset.

I've set gen_cano_mesh = True, but no mesh file is saved in my output dirs. I checked the code, and it seems to only output the rendered images.

So how can I save mesh files like the ones you offer in this link? I also downloaded your mesh files and visualized them in MeshLab, and found that they have no texture, unlike the rendered images in vis/rgb_pred.

What I need is a mesh file with texture, so that I can use it in my own scenes and render it like vis/rgb_pred, just with backgrounds.

This is your provided mesh file, without texture:
[image]
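(Not an official answer, just a sketch of one possible route: the extracted meshes carry no texture map, but if you can query the trained model's color prediction at the posed mesh vertices, you can bake per-vertex colors into a .ply that MeshLab will display. The query_color helper and file names below are hypothetical placeholders, not part of the codebase.)

import numpy as np
import trimesh

def query_color(points):
    # Placeholder for a per-point color query of the trained model (hypothetical).
    return np.full((points.shape[0], 3), 0.5)

mesh = trimesh.load('mesh.ply', process=False)               # illustrative file name
colors = query_color(np.asarray(mesh.vertices))              # (N, 3) colors in [0, 1]
mesh.visual.vertex_colors = (colors * 255).astype(np.uint8)  # bake per-vertex colors
mesh.export('mesh_colored.ply')                              # MeshLab displays vertex colors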

Segmentation Augmentation while Training???

[images]

Is there some augmentation feature in the code? Because the result looks so sharp.
When I look at the log files, something seems strange about the foreground images.
If there is no problem with my data, does it affect training?

Error when training on my custom dataset.

I've successfully prepared my custom dataset following Neural Body and EasyMocap. But when I train on it, an error occurs.

Command: python train.py --num-workers 2 configs/arah-zju/test.yaml
Error log:

['0', '1', '2', '3']
['0', '1', '2', '3']
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/jj/.conda/envs/arah/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
['0', '1', '2', '3']
['0', '1', '2', '3']
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/jj/.conda/envs/arah/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------

LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2,3]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [2,3]

  | Name        | Type             | Params
-------------------------------------------------
0 | model       | MetaAvatarRender | 87.0 M
1 | loss_fn_vgg | LPIPS            | 14.7 M
2 | criteria    | IDHRLoss         | 14.7 M
-------------------------------------------------
87.0 M    Trainable params
14.7 M    Non-trainable params
101 M     Total params
407.043   Total estimated model params size (MB)
/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:132: UserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 48 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:132: UserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 48 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Epoch 0:   0%|                                                                                        | 0/100 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/train.py", line 135, in <module>
    trainer.fit(model=model, train_dataloaders=train_loader, val_dataloaders=val_loader, ckpt_path=checkpoint_path)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
    return self._run_train()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train
    self.fit_loop.run()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 140, in run
    self.on_run_start(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 141, in on_run_start
    self._dataloader_iter = _update_dataloader_iter(data_fetcher, self.batch_idx + 1)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/utilities.py", line 121, in _update_dataloader_iter
    dataloader_iter = enumerate(data_fetcher, batch_idx)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 199, in __iter__
    self.prefetching(self.prefetch_batches)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 258, in prefetching
    self._fetch_next_batch()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 300, in _fetch_next_batch
    batch = next(self.dataloader_iter)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/supporters.py", line 550, in __next__
    return self.request_next_batch(self.loader_iters)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/supporters.py", line 562, in request_next_batch
    return apply_to_collection(loader_iters, Iterator, next)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/apply_func.py", line 96, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/im2mesh/data/zju_mocap.py", line 381, in __getitem__
    fg_inds = np.random.choice(valid_inds.shape[0], size=self.num_fg_samples, replace=False)
  File "numpy/random/mtrand.pyx", line 945, in numpy.random.mtrand.RandomState.choice
ValueError: a must be greater than 0 unless no samples are taken

Traceback (most recent call last):
  File "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/train.py", line 135, in <module>
    trainer.fit(model=model, train_dataloaders=train_loader, val_dataloaders=val_loader, ckpt_path=checkpoint_path)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
    return self._run_train()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train
    self.fit_loop.run()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 140, in run
    self.on_run_start(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 141, in on_run_start
    self._dataloader_iter = _update_dataloader_iter(data_fetcher, self.batch_idx + 1)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/utilities.py", line 121, in _update_dataloader_iter
    dataloader_iter = enumerate(data_fetcher, batch_idx)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 199, in __iter__
    self.prefetching(self.prefetch_batches)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 258, in prefetching
    self._fetch_next_batch()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/fetching.py", line 300, in _fetch_next_batch
    batch = next(self.dataloader_iter)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/supporters.py", line 550, in __next__
    return self.request_next_batch(self.loader_iters)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/supporters.py", line 562, in request_next_batch
    return apply_to_collection(loader_iters, Iterator, next)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/apply_func.py", line 96, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/im2mesh/data/zju_mocap.py", line 381, in __getitem__
    fg_inds = np.random.choice(valid_inds.shape[0], size=self.num_fg_samples, replace=False)
  File "numpy/random/mtrand.pyx", line 1001, in numpy.random.mtrand.RandomState.choice
ValueError: Cannot take a larger sample than population when 'replace=False'

visualize posed mesh in preprocess but get nothing

I modified the arah-release/preprocess_datasets/preprocess_ZJU-MoCap.py file.

import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'
import torch
import trimesh
import glob
import json
import shutil
import argparse

import numpy as np
import preprocess_datasets.easymocap.mytools.camera_utils as cam_utils

from scipy.spatial.transform import Rotation

from human_body_prior.body_model.body_model import BodyModel

from preprocess_datasets.easymocap.smplmodel import load_model

from PIL import Image
import pyrender
from pyrender import OffscreenRenderer, PerspectiveCamera, Mesh, Scene, DirectionalLight

parser = argparse.ArgumentParser(
    description='Preprocessing for ZJU-MoCap.'
)
parser.add_argument('--data-dir', type=str, help='Directory that contains raw ZJU-MoCap data.')
parser.add_argument('--out-dir', type=str, help='Directory where preprocessed data is saved.')
parser.add_argument('--seqname', type=str, default='CoreView_313', help='Sequence to process.')

if __name__ == '__main__':
    args = parser.parse_args()
    seq_name = args.seqname
    data_dir = os.path.join(args.data_dir, seq_name)
    out_dir = os.path.join(args.out_dir, seq_name)

    annots = np.load(os.path.join(data_dir, 'annots.npy'), allow_pickle=True).item()
    cameras = annots['cams']

    smpl_dir = os.path.join(data_dir, 'new_params')
    verts_dir = os.path.join(data_dir, 'new_vertices')

    if not os.path.exists(out_dir):
        os.makedirs(out_dir)

    body_model = BodyModel(bm_path='body_models/smpl/neutral/model.pkl', num_betas=10, batch_size=1).cuda()

    faces = np.load('body_models/misc/faces.npz')['faces']

    if seq_name in ['CoreView_313', 'CoreView_315']:
        cam_names = list(range(1, 20)) + [22, 23]
    else:
        cam_names = list(range(1, 24))

    cam_names = [str(cam_name) for cam_name in cam_names]

    all_cam_params = {'all_cam_names': cam_names}
    smpl_out_dir = os.path.join(out_dir, 'models')
    if not os.path.exists(smpl_out_dir):
        os.makedirs(smpl_out_dir)

    for cam_idx, cam_name in enumerate(cam_names):
        if seq_name in ['CoreView_313', 'CoreView_315']:
            K = cameras['K'][cam_idx]
            D = cameras['D'][cam_idx]
            R = cameras['R'][cam_idx]
        else:
            K = cameras['K'][cam_idx].tolist()
            D = cameras['D'][cam_idx].tolist()
            R = cameras['R'][cam_idx].tolist()

        R_np = np.array(R)
        T = cameras['T'][cam_idx]
        T_np = np.array(T).reshape(3, 1) / 1000.0
        T = T_np.tolist()

        cam_out_dir = os.path.join(out_dir, cam_name)
        if not os.path.exists(cam_out_dir):
            os.makedirs(cam_out_dir)

        if seq_name in ['CoreView_313', 'CoreView_315']:
            img_in_dir = os.path.join(data_dir, 'Camera ({})'.format(cam_name))
            mask_in_dir = os.path.join(data_dir, 'mask_cihp/Camera ({})'.format(cam_name))
        else:
            img_in_dir = os.path.join(data_dir, 'Camera_B{}'.format(cam_name))
            mask_in_dir = os.path.join(data_dir, 'mask_cihp/Camera_B{}'.format(cam_name))

        img_files = sorted(glob.glob(os.path.join(img_in_dir, '*.jpg')))

        cam_params = {'K': K, 'D': D, 'R': R, 'T': T}
        # print(R)
        # print(T)
        # assert 0
        all_cam_params.update({cam_name: cam_params})

        for img_file in img_files:
            print ('Processing: {}'.format(img_file))
            if seq_name in ['CoreView_313', 'CoreView_315']:
                idx = int(os.path.basename(img_file).split('_')[4])
                frame_index = idx - 1
            else:
                idx = int(os.path.basename(img_file)[:-4])
                frame_index = idx

            mask_file = os.path.join(mask_in_dir, os.path.basename(img_file)[:-4] + '.png')
            smpl_file = os.path.join(smpl_dir, '{}.npy'.format(idx))
            verts_file = os.path.join(verts_dir, '{}.npy'.format(idx))

            if not os.path.exists(smpl_file):
                print ('Cannot find SMPL file for {}: {}, skipping'.format(img_file, smpl_file))
                continue

            # We only process SMPL parameters in world coordinate
            if cam_idx == 0:
                params = np.load(smpl_file, allow_pickle=True).item()

                root_orient = Rotation.from_rotvec(np.array(params['Rh']).reshape([-1])).as_matrix()
                trans = np.array(params['Th']).reshape([3, 1])

                betas = np.array(params['shapes'], dtype=np.float32)
                poses = np.array(params['poses'], dtype=np.float32)
                # import pdb; pdb.set_trace()
                pose_body = poses[:, 3:66].copy()
                pose_hand = poses[:, 66:].copy()

                poses_torch = torch.from_numpy(poses).cuda()
                pose_body_torch = torch.from_numpy(pose_body).cuda()
                pose_hand_torch = torch.from_numpy(pose_hand).cuda()
                betas_torch = torch.from_numpy(betas).cuda()

                new_root_orient = Rotation.from_matrix(root_orient).as_rotvec().reshape([1, 3]).astype(np.float32)
                new_trans = trans.reshape([1, 3]).astype(np.float32)

                new_root_orient_torch = torch.from_numpy(new_root_orient).cuda()
                new_trans_torch = torch.from_numpy(new_trans).cuda()

                # Get shape vertices
                body = body_model(betas=betas_torch)
                minimal_shape = body.v.detach().cpu().numpy()[0]

                # Get bone transforms
                body = body_model(root_orient=new_root_orient_torch, pose_body=pose_body_torch, pose_hand=pose_hand_torch, betas=betas_torch, trans=new_trans_torch)

                body_model_em = load_model(gender='neutral', model_type='smpl')
                verts = body_model_em(poses=poses_torch, shapes=betas_torch, Rh=new_root_orient_torch, Th=new_trans_torch, return_verts=True)[0].detach().cpu().numpy()

                vertices = body.v.detach().cpu().numpy()[0]
                new_trans = new_trans + (verts - vertices).mean(0, keepdims=True)
                new_trans_torch = torch.from_numpy(new_trans).cuda()

                body = body_model(root_orient=new_root_orient_torch, pose_body=pose_body_torch, pose_hand=pose_hand_torch, betas=betas_torch, trans=new_trans_torch)

                # Visualize SMPL mesh
                vertices = body.v.detach().cpu().numpy()[0]
                mesh = trimesh.Trimesh(vertices=vertices, faces=faces)
                out_filename = os.path.join(smpl_out_dir, '{:06d}.ply'.format(idx))
                mesh.export(out_filename)
                
                # Create a PyRender scene
                scene = Scene()
                pyrender_mesh = Mesh.from_trimesh(mesh)
                yellow_material = pyrender.MetallicRoughnessMaterial(baseColorFactor=[1.0, 1.0, 0.0, 1.0])
                pyrender_mesh.material = yellow_material
                scene.add(pyrender_mesh)
                # camera = PerspectiveCamera(intrinsics=camera_intrinsics)
                camera = pyrender.IntrinsicsCamera(fx=K[0][0], fy=K[1][1], cx=K[0][2], cy=K[1][2], zfar=8000)
                camera_extrinsics = np.eye(4)
                camera_extrinsics[:3, :3] = np.array(R)
                camera_extrinsics[:3, 3] = np.array(T).flatten()
                camera_pose = np.linalg.inv(camera_extrinsics)  # Invert camera extrinsics to get camera pose
                # camera_pose = camera_extrinsics
                scene.add(camera, pose=camera_pose)
                light = DirectionalLight(color=[1.0, 1.0, 1.0], intensity=1.0)
                scene.add(light)
                height = 1024
                width = 1024
                renderer = OffscreenRenderer(viewport_width=width, viewport_height=height,point_size=1.0)
                color, depth = renderer.render(scene)
                color_image = Image.fromarray(color)
                out_filename = os.path.join(smpl_out_dir, '{:06d}.png'.format(idx))
                color_image.save(out_filename)

                bone_transforms = body.bone_transforms.detach().cpu().numpy()
                Jtr_posed = body.Jtr.detach().cpu().numpy()

                out_filename = os.path.join(smpl_out_dir, '{:06d}.npz'.format(idx))

                np.savez(out_filename,
                         minimal_shape=minimal_shape,
                         betas=betas,
                         Jtr_posed=Jtr_posed[0],
                         bone_transforms=bone_transforms[0],
                         trans=new_trans[0],
                         root_orient=new_root_orient[0],
                         pose_body=pose_body[0],
                         pose_hand=pose_hand[0])

            shutil.copy(os.path.join(img_file), os.path.join(cam_out_dir, '{:06d}.jpg'.format(idx)))
            shutil.copy(os.path.join(mask_file), os.path.join(cam_out_dir, '{:06d}.png'.format(idx)))

    with open(os.path.join(out_dir, 'cam_params.json'), 'w') as f:
        json.dump(all_cam_params, f)
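(A hedged guess at the blank render, not a confirmed diagnosis: pyrender uses the OpenGL camera convention, looking down -Z with +Y up, whereas the ZJU-MoCap R/T above are computer-vision-style world-to-camera extrinsics looking down +Z. The usual fix is to flip the Y and Z axes of the camera-to-world pose before passing it to pyrender; a minimal sketch:)

import numpy as np

def cv_extrinsics_to_pyrender_pose(R, T):
    # World-to-camera extrinsics in the CV convention -> OpenGL-style camera-to-world pose.
    extrinsics = np.eye(4)
    extrinsics[:3, :3] = np.asarray(R)
    extrinsics[:3, 3] = np.asarray(T).flatten()
    pose = np.linalg.inv(extrinsics)   # camera-to-world
    pose[:3, 1] *= -1.0                # flip Y axis (+Y is up in OpenGL)
    pose[:3, 2] *= -1.0                # flip Z axis (camera looks down -Z in OpenGL)
    return pose

# In the script above, this would replace camera_pose, e.g.:
# scene.add(camera, pose=cv_extrinsics_to_pyrender_pose(R, T_np))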

Some issues about the quantitative novel-view synthesis results in Table 3

Thanks for your great work.
What training config did you use to compute Table 3 (Novel View Synthesis) in the paper?
I ran 392_mono_4gpu.yaml to train the model, and the final results are 0.104, 27.34, and 0.926, which are worse than your reported results.


Or did you run the 392_4gpu.yaml config? And what start_frame, end_frame, and subsampling_rate did you use to compute these metrics?

Custom data preprocessing

Thanks for the great work. Is it possible to release preprocessing code (or a guideline) for custom videos so that I could run them with your method? Thanks again!

Out of memory after some iterations of validation

Restoring states from the checkpoint path at out/arah-zju/ZJUMOCAP-313_4gpus/checkpoints/last.ckpt
LOCAL_RANK: 2 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
LOCAL_RANK: 3 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
Loaded model weights from checkpoint at out/arah-zju/ZJUMOCAP-313_4gpus/checkpoints/last.ckpt
Validating: 15%|████████████████████▎ | 513/3434 [2:04:33<16:18:51, 20.11s/it]
Traceback (most recent call last):
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in _validate_impl
results = self._run(model, ckpt_path=self.validated_ckpt_path)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
self._dispatch()
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1275, in _dispatch
self.training_type_plugin.start_evaluating(self)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 206, in start_evaluating
self._results = trainer.run_stage()
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1286, in run_stage
return self._run_evaluate()
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1334, in _run_evaluate
eval_loop_results = self._evaluation_loop.run()
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_batches, self.num_dataloaders)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 145, in run
self.advance(*args, **kwargs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 122, in advance
output = self._evaluation_step(batch, batch_idx, dataloader_idx)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 217, in _evaluation_step
output = self.trainer.accelerator.validation_step(step_kwargs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 239, in validation_step
return self.training_type_plugin.validation_step(*step_kwargs.values())
File "/opt/conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 447, in validation_step
return self.lightning_module.validation_step(*args, **kwargs)
File "/arah-release/im2mesh/metaavatar_render/lightning_model.py", line 174, in validation_step
model_outputs = self.model(inputs, gen_cano_mesh=False, eval=True)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/arah-release/im2mesh/metaavatar_render/models/init.py", line 200, in forward
model_outputs = self.idhr_network(inputs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/arah-release/im2mesh/metaavatar_render/renderer/implicit_differentiable_renderer.py", line 209, in forward
rgb_i, w_i = self.get_rbg_value_vol_sdf(sdf_network,
File "/arah-release/im2mesh/metaavatar_render/renderer/implicit_differentiable_renderer.py", line 359, in get_rbg_value_vol_sdf
valid_rgb.append(self.rendering_network(pi.squeeze(0), normal.squeeze(0), vi, feature_vectors, pose_cond))
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/arah-release/im2mesh/metaavatar_render/models/decoder.py", line 115, in forward
x = lin(torch.cat([rendering_input, x], dim=-1))
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
result = forward_call(*input, **kwargs)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/envs/arah/lib/python3.9/site-packages/torch/nn/functional.py", line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU 6; 23.70 GiB total capacity; 17.81 GiB already allocated; 665.31 MiB free; 20.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Test error on custom dataset.

The training process finished successfully. But when I test my own trained model, it raises pytorch_lightning.utilities.exceptions.MisconfigurationException: Total length of `Dataloader` across ranks is zero. Please make sure that it returns at least 1 batch.

Test command: python test.py --num-workers 1 configs/arah-zju/rxy.yaml

Error log:

 Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/jj/.conda/envs/arah/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [2]
Missing logger folder: out/arah-zju/ZJUMOCAP-313_4gpus/lightning_logs
Traceback (most recent call last):
  File "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/test.py", line 81, in <module>
    trainer.test(model=model, dataloaders=val_loader, verbose=True)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 911, in test
    return self._call_and_handle_interrupt(self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 954, in _test_impl
    results = self._run(model, ckpt_path=self.tested_ckpt_path)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1275, in _dispatch
    self.training_type_plugin.start_evaluating(self)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 206, in start_evaluating
    self._results = trainer.run_stage()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1286, in run_stage
    return self._run_evaluate()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1328, in _run_evaluate
    self._evaluation_loop._reload_evaluation_dataloaders()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 168, in _reload_evaluation_dataloaders
    self.trainer.reset_test_dataloader()
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py", line 568, in reset_test_dataloader
    self.num_test_batches, self.test_dataloaders = self._reset_eval_dataloader(
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py", line 508, in _reset_eval_dataloader
    if has_len_all_ranks(dataloader, self.training_type_plugin, module)
  File "/home/jj/.conda/envs/arah/lib/python3.9/site-packages/pytorch_lightning/utilities/data.py", line 118, in has_len_all_ranks
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: Total length of `Dataloader` across ranks is zero. Please make sure that it returns at least 1 batch.

error about training code

I encountered an error in

trainer.fit(model=model, train_dataloaders=train_loader, val_dataloaders=val_loader, ckpt_path=checkpoint_path)

but I have confirmed that the data was loaded successfully.

TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'trimesh.caching.TrackedArray'>
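(A hedged note on the mechanism: PyTorch's default_collate only accepts tensors, plain numpy arrays, numbers, dicts, and lists, while trimesh exposes vertices/faces as trimesh.caching.TrackedArray, a numpy subclass it does not recognize. A common workaround, sketched below under the assumption that the offending arrays come from a trimesh mesh loaded in the dataset's __getitem__, is to convert them to plain numpy arrays before returning them:)

import numpy as np
import trimesh

# Inside the dataset's __getitem__ (illustrative): convert TrackedArray to plain np.ndarray
mesh = trimesh.load('smpl_mesh.ply', process=False)    # illustrative file name
verts = np.asarray(mesh.vertices, dtype=np.float32)    # np.asarray drops the TrackedArray subclass
faces = np.asarray(mesh.faces, dtype=np.int64)
# return {'smpl_verts': verts, 'smpl_faces': faces}    # default_collate can now batch these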

Out of distribution pose

Hi, thanks for sharing this great research.

When I test your pretrained model on the ZJU dataset for out-of-distribution poses, I cannot find the directory data/odp/CoreView_${sequence_name}/.

error about zju_mocap.py

Hi! When I was running the code, I ran into some problems in zju_mocap.py. In line 480 (

smpl_mesh = trimesh.Trimesh(vertices=minimal_shape_v, faces=self.faces)
), I expected the dimensions of minimal_shape_v (6890x3) and self.faces (20908x3) to be the same, but in fact they are different, leading to program errors. I hope I can get your help!

Run on custom dataset

When I train ARAH on my custom dataset, this error occurs:
valid_inds = np.where(mask_at_box[:self.num_fg_samples + 1024] == 1)[0]
fg_inds = np.random.choice(valid_inds.shape[0], size=self.num_fg_samples, replace=False)
ValueError: Cannot take a larger sample than population when 'replace=False'
But when debugging, everything looks fine. I guess it's because some intersections between rays and the SMPL bounding box have near bigger than far, so how can I deal with that?
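(Just a sketch of one possible workaround, not the authors' fix: when a frame yields fewer valid foreground samples than num_fg_samples, sampling without replacement cannot succeed, so you can fall back to sampling with replacement, or skip such frames. Variable names follow the snippet above, with self. dropped for brevity:)

import numpy as np

valid_inds = np.where(mask_at_box[:num_fg_samples + 1024] == 1)[0]
if valid_inds.shape[0] == 0:
    raise ValueError('no valid foreground rays for this frame; consider skipping it')
# Fall back to sampling with replacement when there are too few valid intersections
replace = valid_inds.shape[0] < num_fg_samples
fg_inds = np.random.choice(valid_inds.shape[0], size=num_fg_samples, replace=replace)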

New problem about my custom data

After training, the checkpoint train loss is 3.34 and the wandb visualization normal.png looks normal, but I get a bad normal result when I run test.py:
[image]

code:
model_outputs = model.model(inputs, gen_cano_mesh=True, eval=True)

Perceptual Loss

How do I use the perceptual loss? Is patch sampling necessary for using the perceptual loss?
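(Not an authoritative answer, but for background: LPIPS-style perceptual losses are computed by a convolutional network, so they need spatially contiguous image patches rather than independently scattered pixel samples, which is why patch-based ray sampling is normally paired with a perceptual loss. A minimal sketch with the lpips package, assuming pred_patch and gt_patch are (B, 3, H, W) tensors with values in [0, 1]:)

import lpips
import torch

loss_fn_vgg = lpips.LPIPS(net='vgg')   # same VGG backbone as in the training logs above

def perceptual_loss(pred_patch, gt_patch):
    # lpips expects NCHW tensors scaled to [-1, 1]
    return loss_fn_vgg(pred_patch * 2.0 - 1.0, gt_patch * 2.0 - 1.0).mean()

# e.g. render a contiguous 32x32 patch per batch element, reshape the predicted colors to
# (B, 3, 32, 32), and add perceptual_loss(pred_patch, gt_patch) to the RGB loss with some weight.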

A problem of calculating jacobian in inference

Hi, thanks for the excellent work.

When I tried to validate or test the trained model, an error occurred:
[*************************************]
diff_operators.py", line 74, in jacobian
jac[:, :, i, :] = grad(y_flat, x, torch.ones_like(y_flat), retain_graph=True, create_graph=False)[0]

return Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn


More detailed information:

return self.model.validation_step(*args, **kwargs)
model_outputs = self.model(inputs, gen_cano_mesh=False, eval=True)
model_outputs = self.idhr_network(inputs)
calls /im2mesh/metaavatar_render/renderer/implicit_differentiable_renderer.py", line 89, in forward
= self.ray_tracer(sdf_network,

/im2mesh/metaavatar_render/renderer/ray_tracing.py", line 110, in forward
self.sphere_tracing(batch_size,
/im2mesh/metaavatar_render/renderer/ray_tracing.py", line 247, in sphere_tracing
search_iso_surface_depth(cam_locs,

/im2mesh/utils/root_finding_utils.py", line 424, in search_iso_surface_depth
J_lbs = forward_skinning_jac(valid_x_hat_0, loc, sc_factor, coord_min, coord_max, center, skinning_model, vol_feat, bone_transforms, mask=None)
File "/mnt/lustre/thu/project/human/nb/codelib/arah/im2mesh/utils/root_finding_utils.py", line 216, in forward_skinning_jac
jac, status = diff_operators.jacobian(pi_bar,

Any suggestions?

Preprocessing code for AIST++ Dataset

Hi, thanks for the inspiring work!
Would it be possible to provide preprocessing code for the AIST++ dataset to convert the original pkl files to the npz files? Thanks!

ZJU-mocap data Image Size

As far as I know, the original resolution of the ZJU-MoCap data is (1024, 1024), but your code assumes a resolution of (512, 512).
To reproduce with the original dataset, should I change the resolution of the original data and the camera intrinsics?

Some bug in ray_tracing.py

curr_start_points_opt, acc_start_dis_opt, curr_start_transforms_opt, converge_mask_opt = \
search_iso_surface_depth(cam_locs,
body_ray_directions,
~diverge_mask if eval else torch.ones_like(diverge_mask),
curr_start_points,
acc_start_dis,
curr_start_transforms,
sdf_network,
loc,
sc_factor,
skinning_model,
vol_feat,

Hi, as the lines above illustrate (around L252), the variable 'eval' is not defined in this function scope, hence ~diverge_mask if eval else torch.ones_like(diverge_mask) always evaluates to ~diverge_mask, in both the train and eval phases.
I do not know whether this is a bug in the code, or whether you intended it to always behave this way.
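(For context on why the expression always picks the first branch: when no local variable or argument named eval exists, the name resolves to Python's builtin eval function, and a function object is truthy. A tiny illustration:)

import torch

diverge_mask = torch.tensor([True, False, True])
# `eval` below is the Python builtin function, which is always truthy,
# so the conditional expression always returns ~diverge_mask:
chosen = ~diverge_mask if eval else torch.ones_like(diverge_mask)
print(bool(eval), chosen)   # True tensor([False,  True, False])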

bug in preprocess_ZJU-MoCap.py

Command :
python preprocess_datasets/preprocess_ZJU-MoCap.py --data-dir "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/data/rxy" --out-dir "/DATA/DATA1/rxy_code/projects/202312_demo/arah-release/data/rxy/arah-one" --seqname "one"

Error log:

Traceback (most recent call last):
  File "preprocess_datasets/preprocess_ZJU-MoCap.py", line 40, in <module>
    body_model = BodyModel(bm_path='body_models/smpl/neutral/model.pkl', num_betas=10, batch_size=1).cuda()
TypeError: __init__() got an unexpected keyword argument 'bm_path'

About pretrained meta model

Hi Shaofei,

Thanks for your great work. I'm curious about the effect of the pretrained meta models in your code: 1) the meta-learned skinning network on the CAPE dataset, and 2) the MetaAvatar SDF model. Do they really have a big influence on your results, or are they just a kind of initialization?

Thanks a lot.

Resolution of training image

@taconite I was training on a custom dataset with a similar setting to ZJU-MoCap. However, I observed that the images were cropped to the top-left corner and then passed for training, as shown here. Is there any reason for such a design decision?

plan to run your code on our own dataset.

Thanks for your great work. I plan to run your code on our own dataset. I see that you have preprocessing code for the ZJU-MoCap and H36M datasets. Could you please provide a brief guide on how to adapt my own dataset to meet ARAH's requirements? Or should I simply convert my dataset into the ZJU-MoCap or H36M format, and that will work? Much appreciated.

Differences between the 'body_model's of human_body_prior and easymocap?

Thanks for your great work! I have some questions about the code:

# Get bone transforms
body = body_model(root_orient=new_root_orient_torch, pose_body=pose_body_torch, pose_hand=pose_hand_torch, betas=betas_torch, trans=new_trans_torch)
body_model_em = load_model(gender='neutral', model_type='smpl')
verts = body_model_em(poses=poses_torch, shapes=betas_torch, Rh=new_root_orient_torch, Th=new_trans_torch, return_verts=True)[0].detach().cpu().numpy()
vertices = body.v.detach().cpu().numpy()[0]
new_trans = new_trans + (verts - vertices).mean(0, keepdims=True)
new_trans_torch = torch.from_numpy(new_trans).cuda()

  1. Why are the vertex positions obtained by the two body_models different? I didn't find any differences between these two classes and their corresponding LBS functions.
  2. Why do you need to change new_trans?
