Author: Kung-hsiang, Huang (Steeve), 2018
Although there exists an abundance of publicly available English speech recognition datasets, the opposite is true for Mandarin, especially for Mandarin datasets that contain some Taiwanese or English speech. We want to leverage the copious Taiwanese dramas uploaded to YouTube to collect a speech recognition dataset. The pipeline is shown in the following figure:
First, install FFmpeg from its official website.
- Python==3.6
- joblib==0.12.0
- numpy==1.13.3
- pandas==0.23.3
- tensorflow-gpu==1.4.0
- keras==2.1.3
- google-cloud-vision==0.32.0
- pafy==0.5.4
- youtube-dl==2017.12.2
- tqdm==4.23.4
- editdistance==0.4
To set up all the requirements, prepare a Python 3.6 environment with conda and install the packages with pip:
conda create -n py36 python=3.6 anaconda
source activate py36
pip install -r requirements.txt
|-- src
|-- mandarin
| |-- audios
| |-- bgs_results
| |-- frames
| |-- maskrcnn_results
| |-- ocr_results
| |-- processed_frames
| |-- processed_videos
| |-- split_audios
| |-- srts
| `-- videos
|-- Mask_RCNN
| |-- assets
| |-- images
| |-- logs
| |-- mrcnn
| `-- samples
|-- docs
src/: Directory that stores all the code.
mandarin/: Directory that stores all the intermediate and final results, including the following sub-directories:
videos/: Directory that stores the downloaded videos.
audios/: Directory that stores the extracted audio from the videos.
frames/: Directory that stores the split frames from the videos.
maskrcnn_results/: Directory that stores the resulting frames processed by Mask-RCNN.
ocr_results/: Directory that stores the OCR results in CSV files for each video.
srts/: Directory that stores the SRT files for each video.
processed_videos/: Directory that stores the videos that have been split into frames.
processed_frames/: Directory that stores the frames that have been processed by Mask-RCNN.
Mask_RCNN/: Directory of the Mask-RCNN module.
logs/: Directory that stores the training logs (in tensorboard format) and Mask-RCNN's weights.
samples/subtitle/: Directory that stores subtitle.py, the code for training Mask-RCNN on the subtitle dataset.
docs/: Presentation to the General Director.
mandarin_drama.txt: Input file for download_videos.py. Each row contains a drama (playlist) name.
download_videos.py: Download videos with the YouTube API.
split_videos.py: Split videos with FFmpeg.
run_mask_rcnn.py: Run Mask-RCNN to remove everything in the frames except the subtitles.
ocr_to_csv.py: Detect text in frames with the Google OCR API and store the results in CSV files.
csv_to_srt.py: Aggregate the OCR results into SRT files.
automatic_script.sh: Shell script that runs through the whole pipeline.
Dataset Generation Mask-RCNN .ipynb : Jupyter notebook for generating Mask-RCNN training dataset.
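As a rough illustration of the download step, the sketch below reads playlist names from mandarin_drama.txt and hands a playlist URL to youtube-dl. The function names and the youtube-dl invocation are assumptions for illustration, not the repository's actual code, and looking up a playlist URL from its name is omitted.

```python
import subprocess


def read_drama_names(path):
    """Parse drama (playlist) names from mandarin_drama.txt, one per row."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]


def download_playlist(playlist_url, out_dir="mandarin/videos"):
    """Download every video in a playlist with youtube-dl (sketch)."""
    subprocess.run(
        [
            "youtube-dl",
            "--format", "mp4",
            "--output", f"{out_dir}/%(title)s.%(ext)s",
            playlist_url,
        ],
        check=True,
    )
```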
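The splitting step shells out to FFmpeg. The helpers below only build the FFmpeg command lines for extracting the audio track and sampling frames; the exact flags (sample rate, frame rate, output pattern) are illustrative assumptions, not the values split_videos.py necessarily uses.

```python
def ffmpeg_audio_cmd(video_path, audio_path):
    """Build an ffmpeg command that extracts the audio track as 16 kHz mono WAV."""
    return ["ffmpeg", "-i", video_path, "-vn", "-ac", "1", "-ar", "16000", audio_path]


def ffmpeg_frames_cmd(video_path, frame_dir, fps=2):
    """Build an ffmpeg command that samples frames at `fps` frames per second."""
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
            f"{frame_dir}/frame_%06d.jpg"]
```

Run each command with subprocess.run(cmd, check=True) once FFmpeg is installed.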
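For the OCR step, the Vision API returns a list of text annotations per image, where the first annotation carries the full detected text. A minimal sketch of turning those annotations into CSV rows is below; the function names and the two-column frame/text layout are assumptions, and the actual API call (client.text_detection) is left out since it needs credentials.

```python
import csv


def annotations_to_row(frame_name, annotations):
    """Collapse Vision API text annotations for one frame into a single CSV row.

    The first annotation holds the full detected text; the rest are per-word boxes.
    """
    text = annotations[0].description.strip() if annotations else ""
    return [frame_name, text]


def write_ocr_csv(rows, csv_path):
    """Write a header plus one row per frame."""
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows([["frame", "text"]] + rows)
```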
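Aggregating OCR rows into SRT files mostly comes down to rendering numbered blocks with SRT timestamps (HH:MM:SS,mmm). A minimal sketch of that formatting, with hypothetical helper names:

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp, e.g. 65.25 -> 00:01:05,250."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def srt_entry(index, start, end, text):
    """Render one numbered SRT subtitle block."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
```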
To train Mask-RCNN,
Prepare several TrueType fonts and images split from the downloaded videos. Run Dataset Generation Mask-RCNN .ipynb to generate the training dataset, then run subtitle.py under Mask_RCNN/samples/subtitle:
python subtitle.py train --dataset=/path/to/dataset --subset=train --weights=coco
To automatically run the whole pipeline:
bash automatic_script.sh