
Video Preprocessing

This repository provides tools for preprocessing videos from the TaiChi, VoxCeleb and UvaNemo datasets used in the paper.

Downloading videos and cropping according to precomputed bounding boxes

  1. Install requirements:
pip install -r requirements.txt
  2. Download youtube-dl:
wget https://yt-dl.org/downloads/latest/youtube-dl -O youtube-dl
chmod a+rx youtube-dl
  3. Run the script to download the videos. Two formats can be used for storing videos: a single .mp4 file, or a folder with .png images. While .png images occupy significantly more space, the format is lossless and has better i/o performance when training.
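For the .png option, each clip is presumably stored as a folder of numbered frame images rather than a single file. A minimal stdlib sketch of reading such a folder back; the directory layout and the `list_png_frames` helper are assumptions for illustration, not the repository's API:

```python
import os

def list_png_frames(video_dir):
    """Return the ordered frame files of one clip stored in the .png layout.

    Hypothetical helper: assumes one directory per clip containing
    zero-padded .png frames, so lexicographic order equals frame order.
    """
    return sorted(f for f in os.listdir(video_dir) if f.endswith(".png"))
```

Because the frames are individual files, a data loader can seek to any frame without decoding the whole clip, which is where the better training i/o comes from.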

Taichi

python load_videos.py --metadata taichi-metadata.csv --format .mp4 --out_folder taichi --workers 8

Select the number of workers based on the number of CPUs available. Note: the .png format takes approximately 80GB.

VoxCeleb

python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8

Note: the .png format takes approximately 300GB.

UvaNemo

Since the videos are not available on YouTube, you have to download them from the official website and run:

python load_videos.py --metadata nemo-metadata.csv --format .mp4 --out_folder nemo --workers 8 --video_folder path/to/original/videos

Note: the .png format takes approximately 18GB.

Preprocessing VoxCeleb dataset

If you need to change the cropping strategy for the VoxCeleb dataset or produce new bounding box annotations, follow these steps:

  1. Download the VoxCeleb1 (VoxCeleb2) annotations:
wget www.robots.ox.ac.uk/~vgg/data/voxceleb/data/vox1_test_txt.zip
unzip vox1_test_txt.zip

wget www.robots.ox.ac.uk/~vgg/data/voxceleb/data/vox1_dev_txt.zip
unzip vox1_dev_txt.zip
wget www.robots.ox.ac.uk/~vgg/data/voxceleb/data/vox2_test_txt.zip
unzip vox2_test_txt.zip

wget www.robots.ox.ac.uk/~vgg/data/voxceleb/data/vox2_dev_txt.zip
unzip vox2_dev_txt.zip
  2. Download youtube-dl:
wget https://yt-dl.org/downloads/latest/youtube-dl -O youtube-dl
chmod a+rx youtube-dl
  3. Install the face-alignment library:
git clone https://github.com/1adrianb/face-alignment
cd face-alignment
pip install -r requirements.txt
python setup.py install
  4. Install ffmpeg:
sudo apt-get install ffmpeg
  5. Run preprocessing (assuming 8 GPUs and 5 workers per GPU, i.e. 40 workers in total):
python crop_vox.py --workers 40 --device_ids 0,1,2,3,4,5,6,7 --format .mp4 --dataset_version 2
python crop_vox.py --workers 40 --device_ids 0,1,2,3,4,5,6,7 --format .mp4 --dataset_version 1 --data_range 10000-11252

Preprocessing TaiChi dataset

If you need to change the cropping strategy for the TaiChi dataset or produce new bounding box annotations, follow these steps:

  1. Download the videos based on the annotations:
python load_videos.py --metadata taichi-metadata.csv --format .mp4 --out_folder taichi --workers 8 --video_folder youtube-taichi --no_crop
  2. Install maskrcnn-benchmark. Follow the installation guide: https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/INSTALL.md

  3. Download youtube-dl:

wget https://yt-dl.org/downloads/latest/youtube-dl -O youtube-dl
chmod a+rx youtube-dl
  4. Run preprocessing (assuming 8 GPUs and 5 workers per GPU, i.e. 40 workers in total):
python crop_taichi.py --workers 40 --device_ids 0,1,2,3,4,5,6,7 --format .mp4

Preprocessing Nemo dataset

If you need to change the cropping strategy for the Nemo dataset or produce new bounding box annotations, follow these steps:

  1. Install the face-alignment library:
git clone https://github.com/1adrianb/face-alignment
cd face-alignment
pip install -r requirements.txt
python setup.py install
  2. Download the videos from the official website, and run:
python crop_nemo.py --in_folder /path/to/videos --out_folder nemo --device_ids 0,1 --workers 8 --format .mp4

Additional notes

Citation:

@InProceedings{Siarohin_2019_NeurIPS,
  author={Siarohin, Aliaksandr and Lathuilière, Stéphane and Tulyakov, Sergey and Ricci, Elisa and Sebe, Nicu},
  title={First Order Motion Model for Image Animation},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  month = {December},
  year = {2019}
}


Issues

Actually not all the links are broken, only `204`:

May I ask how you successfully downloaded the videos? I also ran python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8, but at 257it [75:29:08, 966.54s/it] I got: Can not load video 75sBThtNTdo, broken link

This keeps happening without any video being downloaded. What should I do to successfully download the videos? Can you tell me your method?

$ grep -o -i "broken link" download_test_64093769.log | wc -l 
204

Originally posted by @moldach in #18 (comment)

What 'start and end' mean in metadata.csv?

Thank you for open-sourcing such amazing work.
I am trying to add more video data and am going to make metadata for it.
But I don't understand what start and end mean in metadata.csv.
I don't think it's just seconds. What is it?
And I have one more question: is there a nemo-metadata.csv?
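For what it's worth, elsewhere in these issues the clip length is computed as end - start and compared against a frame-count threshold (length > 63), which suggests start and end are frame indices rather than seconds. A stdlib sketch under that assumption; the row values and column set here are illustrative, not copied from the real vox-metadata.csv:

```python
import csv
import io

# Illustrative metadata row (made-up values); the columns mirror those the
# issues mention: video_id, start, end, bbox, partition.
sample = "video_id,start,end,bbox,partition\nabc123,993,1143,30-60-286-316,train\n"

row = next(csv.DictReader(io.StringIO(sample)))
# If start/end are frame indices, the clip length in frames is simply:
n_frames = int(row["end"]) - int(row["start"])  # 1143 - 993 = 150
```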

Access denied

When trying to run wget https://yt-dl.org/downloads/latest/youtube-dl -O youtube-dl, it returns the following message:
`wget :

Access denied

Due to a ruling of the Hamburg Regional Court, access to this website is blocked.



At line:1 char:1 + wget http://youtube-dl.org/downloads/2013.01.11/youtube-dl -O /usr/bi ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand `

Can you please help me fix this problem?

cannot install requirements

When I run pip install -r requirements.txt I get the following error:
Collecting cffi==1.11.5
  Using cached cffi-1.11.5.tar.gz (438 kB)
ERROR: Command errored out with exit status 1:
  command: 'c:\arquivos de programas (unprotected)\python\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Lucas\\AppData\\Local\\Temp\\pip-install-d5dssvmx\\cffi\\setup.py'"'"'; __file__='"'"'C:\\Users\\Lucas\\AppData\\Local\\Temp\\pip-install-d5dssvmx\\cffi\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Lucas\AppData\Local\Temp\pip-install-d5dssvmx\cffi\pip-egg-info'
  cwd: C:\Users\Lucas\AppData\Local\Temp\pip-install-d5dssvmx\cffi\
Complete output (19 lines):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Lucas\AppData\Local\Temp\pip-install-d5dssvmx\cffi\setup.py", line 120, in <module>
    if sys.platform == 'win32' and uses_msvc():
  File "C:\Users\Lucas\AppData\Local\Temp\pip-install-d5dssvmx\cffi\setup.py", line 98, in uses_msvc
    return config.try_compile('#ifndef _MSC_VER\n#error "not MSVC"\n#endif')
  File "c:\arquivos de programas (unprotected)\python\lib\distutils\command\config.py", line 225, in try_compile
    self._compile(body, headers, include_dirs, lang)
  File "c:\arquivos de programas (unprotected)\python\lib\distutils\command\config.py", line 132, in _compile
    self.compiler.compile([src], include_dirs=include_dirs)
  File "c:\arquivos de programas (unprotected)\python\lib\distutils\_msvccompiler.py", line 360, in compile
    self.initialize()
  File "c:\arquivos de programas (unprotected)\python\lib\distutils\_msvccompiler.py", line 253, in initialize
    vc_env = _get_vc_env(plat_spec)
  File "c:\arquivos de programas (unprotected)\python\lib\site-packages\setuptools\msvc.py", line 314, in msvc14_get_vc_env
    return _msvc14_get_vc_env(plat_spec)
  File "c:\arquivos de programas (unprotected)\python\lib\site-packages\setuptools\msvc.py", line 268, in _msvc14_get_vc_env
    raise distutils.errors.DistutilsPlatformError(
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

When I search for the error, I get a gazillion results on Google, most proposing answers for Linux or macOS on a bunch of different installations. I've been at this for the past 3 days and simply can't get past step 1.

Download vox dataset problems

Hi FOMM team,
I have a problem when I run load_videos.py: Can not load video 9VPK7BusHDI, broken link. I think it is caused by a network ban. Could you provide a Google Drive link for the vox dataset after cropping? Thanks.

Download data set problem

Hi
Thank you for awesome repo.
When I use the python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8 command to download the vox dataset, the error ''Can not load video na8-QEFmj44, broken link'' appears, and there is no video in the newly created folder.

For me it worked with the latest version of youtube-dl. In your case no videos have been downloaded? Can you download the video using ```youtube-dl <video-id>```? Is it possible to access the same video from youtube website?


Also if you need datasets for educational purposes, contact me by email.

Originally posted by @AliaksandrSiarohin in #1 (comment)

Multiprocessing Error

I ran load_videos.py for the vox processing and it got stuck. After Ctrl+C, it returned:
File "/opt/conda/envs/firstOrder/lib/python3.7/multiprocessing/pool.py", line 733, in next
item = self._items.popleft()
IndexError: pop from an empty deque

OSError: [Errno 12] Cannot allocate memory when processing Taichi Dataset

I encountered the following memory error when processing the TaiChi dataset. The same script works for VoxCeleb, and the error does not occur at the first iteration but arbitrarily at 200+ iterations. Any idea what causes the problem (for example, a particular video, or the number of workers)? Would using a try-except block to wrap mimsave, or using only 1 worker, fix the problem?

python load_videos.py --metadata taichi-metadata.csv --format .mp4 --out_folder taichi --workers 8 --youtube youtube-dl

/path/anaconda3/envs/first-order-model/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
/path/anaconda3/envs/first-order-model/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
0it [00:00, ?it/s]Can not load video 3M5VGsUtw_Q, broken link
2it [00:29, 10.95s/it]Can not load video xmwGBXYofEE, broken link
...
267it [1:23:47, 21.82s/it]Can not load video vNfhp02w9s0, broken link
279it [1:29:16, 15.30s/it]multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "load_videos.py", line 75, in run
    save(os.path.join(args.out_folder, partition, path), entry['frames'], args.format)
  File "/path/video-preprocessing/util.py", line 118, in save
    imageio.mimsave(path, frames)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/core/functions.py", line 357, in mimwrite
    writer.append_data(im)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/core/format.py", line 492, in append_data
    return self._append_data(im, total_meta)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 558, in _append_data
    self._initialize()
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 616, in _initialize
    self._write_gen.send(None)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio_ffmpeg/_io.py", line 379, in write_frames
    cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None, shell=ISWIN
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/subprocess.py", line 1482, in _execute_child
    restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "load_videos.py", line 103, in <module>
    for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/tqdm/std.py", line 1108, in __iter__
    for obj in iterable:
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
OSError: [Errno 12] Cannot allocate memory
279it [1:41:36, 21.85s/it]
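On the try-except idea the question raises: a generic wrapper like the sketch below would let one failing clip be skipped instead of killing the whole pool. This is hypothetical code, not part of load_videos.py; `save_fn` stands in for the imageio.mimsave-based save seen in the traceback above.

```python
def save_clip(save_fn, path, frames):
    """Call save_fn(path, frames); on OSError, log and skip the clip.

    Hypothetical wrapper around whatever save routine the worker uses,
    so a single allocation failure does not abort the whole pool.
    """
    try:
        save_fn(path, frames)
        return True
    except OSError as exc:
        print(f"Can not save {path}: {exc}")
        return False
```

That said, [Errno 12] here is raised while spawning the ffmpeg subprocess, i.e. the fork could not allocate memory, so lowering --workers addresses the cause rather than the symptom.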

crop_vox.py ??

Is it necessary to run the crop_vox.py script for preprocessing the data, or will load_videos.py alone do the job? Basically, I want to replicate the results, so is it necessary to run crop_vox.py? Also, if I want to process a subset of the VoxCeleb dataset, how can I do that?
@AliaksandrSiarohin

vox2 metadata

Hello, I want to thank you for this repository. My question is: how did you obtain vox-metadata.csv? Is there similar markup for vox2? (AFAIK the original data is in another format.)

Output dataset of taichi can not found

Hi, I followed the command in the picture to preprocess the TaiChi dataset, and strangely the output folder is empty. The folder is created, but there are no files in it, and no errors are raised at runtime. Does that mean the processed images are not saved by the code? Do you have any suggestions?
[image]

As an aside, there are generally some package errors when directly running the preprocessing code for the TaiChi dataset, mainly caused by building maskrcnn-benchmark. I suggest adding a note asking users to build maskrcnn-benchmark first.

Size of train dataset

Hi, Aliaksandr!

I've parsed your file vox-metadata.csv and found that the condition [(width > 255) & (height > 255) & (length > 63)] gives 17703 train and 476 test videos.

Your paper says that there were 12331 train and 444 test videos.
Could you please explain why there is such a big difference in the number of training videos?

Code to reproduce:

import pandas as pd

df = pd.read_csv("vox-metadata.csv")

df[["bbox1", "bbox2", "bbox3", "bbox4"]] = \
    df["bbox"].str.split("-", expand=True).apply(pd.to_numeric)

df['w'] = df['bbox3'] - df['bbox1']
df['h'] = df['bbox4'] - df['bbox2']
df['len'] = df['end'] - df['start']

df_train_test = df[(df['w'] > 255) & (df['h'] > 255) & (df['len'] > 63)]
df_train = df_train_test[df_train_test['partition'] == 'train']
df_test = df_train_test[df_train_test['partition'] == 'test']

print(df_train.shape, df_test.shape)

RuntimeError: Zero images were written.

I use a command like:
python load_videos.py --metadata ../data/ted-metadata.csv --format .mp4 --out_folder ../data/TED384-v2 --workers 8 --image_shape 384,384
to get the TED-384 dataset.
But I get this error, and when I print the frames, they are empty.

Broken video_ids in vox-metadata.csv

I was hoping to download the data using load_videos.py, but it seems vox-metadata.csv has broken video_ids.

Running the following leads to a string of broken links:

$ python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8

Can not load video EdFbFdf0K7w, broken link

crop_vox.py generates no output or errors in log

I'm trying to run the preprocessing section of the script but running into issues with no discernible error to go off of.

I was able to successfully run python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8, which downloaded 18,334 .mp4 files to the vox/train subdirectory:

$ pwd
/scratch/moldach/my-thesis-project/vox/train
$ ls | head -n 1
id10001#7w0IBEWc9Qw#000993#001143.mp4
$ ls | wc -l
18334

I've also unzipped the vox1 annotations to txt/:

$ cd txt/
$ ls | head
id10001
id10002

However, running crop_vox.py is not generating any output files (should I be expecting cropped videos in the videos/ directory?):

Submission script

#!/bin/bash
#SBATCH --job-name=preprocess_bcri   # Job name
#SBATCH --mail-type=END,FAIL         # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH [email protected]     # Where to send mail        
#SBATCH --nodes=1                    # Request a P100-16G GPU node on Cedar
## This has Four Tesla P100 16GB cards
#SBATCH --gres=gpu:p100l:4   
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24           # There are 24 CPU cores on P100 Cedar GPU nodes
#SBATCH --mem=0                      # Request the full memory of the node
#SBATCH --time=01:00:00            # Time limit hrs:min:sec
#SBATCH --output=%x_%j.log           # Standard output
#SBATCH --error=%x_%j.log            # Standard error
pwd; hostname; date

source venv/bin/activate

echo "Running preprocessing (assuming 4 GPU, and 6 workers per GPU)"

python crop_vox.py --workers 24 --device_ids 0,1,2,3 --format .mp4 --dataset_version 1

date

preprocess_bcri_chantal_64208433.err

0it [00:00, ?it/s]

preprocess_bcri_chantal_64208433.log

/scratch/moldach/my-thesis-project
cdr903.int.cedar.computecanada.ca
Fri Mar 19 18:44:11 PDT 2021
Running preprocessing (assuming 4 GPU, and 6 workers per GPU)
Fri Mar 19 18:51:01 PDT 2021

Do you have any suggestions for debugging this?
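As an aside, the clip filenames in the listing above (id10001#7w0IBEWc9Qw#000993#001143.mp4) appear to encode a person id, a YouTube video id, and zero-padded start/end frame numbers joined by #. A small parser under that guessed convention; the field meanings are inferred from the naming pattern, not documented:

```python
def parse_clip_name(name):
    """Split a clip filename like id10001#7w0IBEWc9Qw#000993#001143.mp4
    into (person_id, video_id, start_frame, end_frame).

    The four field meanings are guesses from the naming pattern.
    """
    stem, _, _ext = name.rpartition(".")
    person_id, video_id, start, end = stem.split("#")
    return person_id, video_id, int(start), int(end)
```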

Inserting cropped video back in the original

Hi, Aliaksandr! Thank you for your scripts, they are super useful in my research!

I intend to make some changes to the cropped face images and then insert them back into the original. Do you have any recommendations on how best to approach this without quality loss and mismatch problems? Do you think doing everything in reverse in the crop script will be enough?

how to process download data

I have downloaded the VoxCeleb2 dataset (mp4 and txt files) via my student email. What should I do next to train FOMM? The scripts don't seem to match the downloaded dataset.

Any suggestions to quickly test the cropping strategy?

Depending on the availability of videos and their contents, it sometimes takes a few minutes to get an initial result in the out_folder if I am lucky, or half an hour to an hour when no candidates are found. I am running 5 workers per GPU as in the README and have tested different combinations of image_shape, min_frames, max_frames, min_size, --no-download, and --no-split-in-utterance, but didn't have much luck.

Unable to use load video script

(xx) PS E:\video-preprocessing-master> python load_videos.py --metadata vox-metadata.csv --format .png --out_folder vox --workers 1
0it [00:00, ?it/s]multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "D:\anaconda\envs\xx\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "E:\video-preprocessing-master\load_videos.py", line 32, in run
download(video_id.split('#')[0], args)
File "E:\video-preprocessing-master\load_videos.py", line 25, in download
video_path], stdout=DEVNULL, stderr=DEVNULL)
File "D:\anaconda\envs\xx\lib\subprocess.py", line 247, in call
with Popen(*popenargs, **kwargs) as p:
File "D:\anaconda\envs\xx\lib\subprocess.py", line 676, in init
restore_signals, start_new_session)
File "D:\anaconda\envs\xx\lib\subprocess.py", line 957, in _execute_child
startupinfo)
OSError: [WinError 193] %1 is not a valid Win32 application
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "load_videos.py", line 100, in
for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
File "D:\anaconda\envs\xx\lib\site-packages\tqdm_tqdm.py", line 1002, in iter
for obj in iterable:
File "D:\anaconda\envs\xx\lib\multiprocessing\pool.py", line 731, in next
raise value
OSError: [WinError 193] %1 is not a valid Win32 application

voxceleb dataset

Hi, I'm sorry to bother you.
I'm trying to download the VoxCeleb dataset, but the download link on the original website no longer seems to be available, so I'd like to ask you for help. Could you guide me on how to get the VoxCeleb dataset? I want to use it for my coursework.

All videos downloaded with no audio and slowed down

Hi, this is super devastating: I have been downloading VoxCeleb for a few days with your script, but the resulting .mp4 videos downloaded without any audio (which is very important for my research) and slowed down to around 25% speed (the slowdown is fixable, but no audio is a real bummer).

Any way to make better result (avoid deformed head), e.g. more iterations etc?

I love this thing, thanks for the great work. The result is amazing. The only problem I'm having is that if the head moves too much, it results in a deformed head.
Currently it takes about 2 minutes to generate one clip; I don't mind waiting longer if I can get rid of the deformed head.

Is there a way to increase the iterations or otherwise improve the result? Can any parameter be tuned? This is the command I'm using:
python /bin/first-order-model-master/demo.py --config /bin/first-order-model-master/config/vox-adv-256.yaml --driving_video .\driver-1x1.mp4 --checkpoint /bin/first-order-model-master/play/vox-adv-cpk.pth.tar --relative --adapt_scale --source_image .\VanGogh.jpg

Missing Videos in Three Datasets (Vox, Taichi and Ted)

Thanks for your efforts in building this project; I am grateful for it. However, we found that some of the videos in the three datasets (Vox, Taichi and Ted) can't be downloaded anymore, since they have been made private on YouTube. We wonder if you could provide some way to obtain these videos, or the processed image sequences, to help researchers (including me) continue their work. The missing videos are shown as follows: Missing videos in Vox, Taichi and Ted

Running python load_videos.py --metadata vox-metadata.csv --format .mp4 --out_folder vox --workers 8 fails with: 246it [72:19:36, 1128.84s/it] Can not load video t2b-CAsCbkc, broken link

