This project sought to identify fights in surveillance footage using low-cost methods.
## Requirements

- Python 3.8+
- NumPy
- Pandas
- scikit-learn
- OpenCV
- Pillow
- TensorFlow >= 2
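The dependencies above can be installed from PyPI. The repository does not ship a requirements file, so the package names below are the usual PyPI names and are an assumption:

```shell
# Assumed PyPI package names for the dependencies listed above
python -m pip install numpy pandas scikit-learn opencv-python pillow "tensorflow>=2"
```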
This repository uses a TensorFlow 2.x compatible version of the tracker proposed at https://github.com/Qidian213/deep_sort_yolov3.
## Weight Files

- Download the YOLOv4 and DeepSORT weights
## Environment Setup

- Clone and compile YOLOv4 from https://github.com/AlexeyAB/darknet
- Copy the files `darknet` and `libdarknet.so` to this project's folder
- Place `mars-small128.pb` and `yolov4.weights` into a `weights` folder
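The setup steps above could be sketched as the shell session below. The `LIBSO=1` Makefile option is what makes darknet also build `libdarknet.so`; the download steps for `yolov4.weights` and `mars-small128.pb` are omitted, as the source does not specify their locations:

```shell
git clone https://github.com/AlexeyAB/darknet
cd darknet
sed -i 's/^LIBSO=0/LIBSO=1/' Makefile   # also build libdarknet.so
make
cp darknet libdarknet.so ..             # copy into this project's folder
mkdir -p ../weights                     # yolov4.weights and mars-small128.pb go here
```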
## Dataset

- Extract the RWF-2000 dataset into this folder
- Alternatively, edit `run_all.py` to change the dataset path
Running the `run_all.py` script will create an output table:

File | Ground Truth | #Frames | #Frames with Fights Detected
---|---|---|---
... | ... | ... | ...
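As an illustration of how such a table might be consumed, the sketch below labels each video by comparing the fraction of flagged frames against a threshold. The row values and the 50% threshold are hypothetical, not taken from `run_all.py`:

```python
# Hypothetical rows mirroring the run_all.py output table columns
rows = [
    {"File": "v1.avi", "Ground Truth": "Fight", "Frames": 150, "FightFrames": 90},
    {"File": "v2.avi", "Ground Truth": "NonFight", "Frames": 150, "FightFrames": 10},
]

def predicted_label(row, threshold=0.5):
    # Call a video a fight when more than `threshold` of its frames were flagged
    return "Fight" if row["FightFrames"] / row["Frames"] > threshold else "NonFight"

for row in rows:
    correct = predicted_label(row) == row["Ground Truth"]
    print(row["File"], predicted_label(row), "correct" if correct else "wrong")
```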
Running the `plot.py` script afterwards will process the output tables and create comparison graphs. These tables can also be loaded into the Facets app.
By editing the paths at the start of `extract_movement.py` and `conflict_detector.py`, one can first run the `extract_movement.py` script to create a table registering the position of each person throughout the video. Running the `conflict_detector.py` script then reports how many frames the video contains and how many of them include potential fight scenes. The tables loaded by this script can also be loaded by the `visualize.py` script, which plays the video while displaying the personal space of each detected person.
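The personal-space idea behind `conflict_detector.py` and `visualize.py` can be illustrated with a minimal sketch: a frame counts as a potential fight when the expanded "personal space" boxes of two tracked people overlap. The box format, the expansion margin, and the function names below are assumptions, not the actual implementation:

```python
def expand(box, margin=10):
    # Grow a (x1, y1, x2, y2) bounding box by `margin` pixels on every side
    x1, y1, x2, y2 = box
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

def overlaps(a, b):
    # Axis-aligned rectangle intersection test
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def count_conflict_frames(frames, margin=10):
    # frames: list of frames, each a list of (x1, y1, x2, y2) person boxes.
    # A frame is flagged when any two expanded personal spaces intersect.
    count = 0
    for boxes in frames:
        spaces = [expand(b, margin) for b in boxes]
        if any(overlaps(spaces[i], spaces[j])
               for i in range(len(spaces))
               for j in range(i + 1, len(spaces))):
            count += 1
    return count
```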