
gcn-anomaly-detection's People

Contributors

jx-zhong-for-academic-purpose


gcn-anomaly-detection's Issues

About the anomaly number

Hi, thanks for your great work. Since there is no test code, I can't see the results, so I have a question to verify my understanding: if there are two different types of anomaly events in a video, can this model detect both anomalies?
Looking forward to your reply.
Best wishes,
Thanks

About the class UCFCrime

Hi, thank you for your great work!
In train.py,

ucf_crime = UCFCrime(feature_path, graph_generator)

only two arguments are passed, while UCFCrime requires at least four positional parameters in its definition:

class UCFCrime(Dataset):
    def __init__(self, videos_pkl, prediction_folder, feature_folder, modality,
                 normalized=True, graph_generator=None, graph_generator_param=None):

I have not run this file. Should it be corrected?
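To illustrate the mismatch, here is a minimal runnable sketch. The stub below mirrors the signature quoted above (the real class subclasses torch.utils.data.Dataset; that base class is omitted so the sketch runs standalone), and all argument values are placeholders, not values from the repo:

```python
# Stub mirroring the quoted signature; argument values below are placeholders.
class UCFCrime:
    def __init__(self, videos_pkl, prediction_folder, feature_folder, modality,
                 normalized=True, graph_generator=None, graph_generator_param=None):
        self.modality = modality

try:
    # The two-argument call from train.py is missing two required arguments.
    UCFCrime("train.pkl", None)
except TypeError:
    print("two-argument call raises TypeError")

# Supplying all four required positional arguments works:
ds = UCFCrime("train.pkl", "predictions/", "features/", "rgb")
print(ds.modality)  # rgb
```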

About the optimization mechanism of classifier and graph convolution

Thank you very much for your work. In train.py and experiment_c3d.py, I found that only the graph convolution is trained; there is no interaction with the classifier network. In addition, I would like to ask about feature_path, which stores the features extracted by the classifier: how is the prediction_folder file generated? In make_soft_label_c3d_high_conf.py, is the final output saved as a txt file? Can you give me some suggestions? Thank you.

About the number of video segments

Hi Zhong, I am confused about the number of segments in the training videos.

For C3D, a feature is extracted from every 16 frames, and these features are then compressed into 32 features for 32 segments. Do I understand correctly?

For TSN, due to the complexity of installing Caffe and configuring pycaffe, I tried the PyTorch edition of TSN from tsn-pytorch. In that code, the feature of a short video is extracted from 7 or 9 frames taken from the corresponding 7 or 9 snippets; in other words, only one frame is used from each snippet.

However, the number of features per video in this paper is 32. What confuses me is whether, in the feature-extraction stage, a video is first divided into 32 segments, and then each segment is divided into 7 or 9 snippets as the input of TSN. Is TSN used to extract a feature for each segment in a video?

Thank you for your time!
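A sketch of the C3D convention described above, assuming mean-pooling of the per-16-frame clip features within 32 uniform temporal segments (the repo's exact pooling or interpolation scheme may differ):

```python
import numpy as np

def compress_to_segments(features, num_segments=32):
    """Mean-pool per-clip features (one per 16 frames of video) into a fixed
    number of temporal segments. Mean-pooling over uniform chunks is an
    assumed convention; the repo may pool or interpolate differently."""
    n = len(features)
    bounds = np.linspace(0, n, num_segments + 1, dtype=int)
    segments = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        end = max(end, start + 1)           # keep every chunk non-empty
        segments.append(features[start:end].mean(axis=0))
    return np.stack(segments)

# e.g. 70 C3D clip features of dimension 4096 -> 32 segment features
clip_feats = np.random.rand(70, 4096).astype(np.float32)
seg_feats = compress_to_segments(clip_feats)
print(seg_feats.shape)  # (32, 4096)
```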

Installation guide

Do you have any list of dependencies needed to run both the feature extraction and training codes?
Do we need to build two separate Caffe installations, one each for the C3D and TSN feature extraction?

Your help is appreciated

Problem in reproducing the experiments

I am trying to reproduce the experiments.
In the paper, you mention that the action classifier is retrained using the noise filter. But according to your code, it seems the action classifier is used only as a feature extractor, and only the noise filter is trained.
Can you help me understand how it works exactly?

About run extract_c3d_all.py

Hi, thanks. I compiled the Caffe in extract_c3d successfully, but when I run extract_c3d_all.py, I get an error:

1:42:47.985507 21269 upgrade_proto.cpp:928] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/z840/GCN-Anomaly-Detection-master/c3d_deploy.prototxt
*** Check failure stack trace: ***
Setting device 0
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 31:16: Non-repeated field "kernel_size" is specified multiple times.
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0716 21:42:48.657341 21302 upgrade_proto.cpp:928] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/z840/GCN-Anomaly-Detection-master/c3d_deploy.prototxt
*** Check failure stack trace: ***
I don't know what causes this problem. Could you give me some advice?
Thanks
Best wishes

UCSD Ped2

Hello! Can I ask two questions?

  1. How did you randomly select the videos for the train set, given that no randomness standard (e.g. numpy.random.seed(0)) is mentioned? If possible, could you provide a txt file listing the 60 anomaly videos and 40 normal videos used for training (6 abnormal × 10 + 4 normal × 10)?

  2. So, is the training set composed of just 6 random abnormal videos and 4 random normal videos from UCSD Ped2, with the remaining 18 abnormal and normal videos used as the test set?

I keep having doubts about how the model can be trained on such a tiny dataset and achieve such excellent performance, because each of those 10 "videos" has only between 120 and 180 frames.

Thank you!

About Temporal Consistency Graph Module

Hi, I am interested in the paper, and I have two questions after reading it.

  1. In the Temporal Consistency Graph Module, what form do i and j take in the formula, and why did you choose the Laplacian kernel? The article only says that i represents the i-th snippet.
  2. When a video V (an anomaly video containing N snippets) is input into the action classifier, must snippets i and j in V come from the same video, or can they come from different videos?

Looking forward to your reply, thanks.
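As context for the question above, a Laplacian-kernel adjacency over snippet indices can be sketched as follows; here i and j are integer snippet indices within one video, and the exact kernel scaling is an assumption rather than the paper's confirmed constant:

```python
import numpy as np

def temporal_adjacency(num_snippets):
    """Sketch of a temporal-consistency adjacency: i and j index snippets of
    the SAME video, and the edge weight decays with temporal distance via a
    Laplacian kernel exp(-|i - j|). Illustrative only; the paper's scaling
    may differ."""
    idx = np.arange(num_snippets)
    dist = np.abs(idx[:, None] - idx[None, :]).astype(float)
    return np.exp(-dist)

A = temporal_adjacency(5)
print(A[0, 0], round(A[0, 1], 3))  # 1.0 0.368
```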

About UCFCrimeTest class

Thanks for your great work. In the pygcn folder, in train.py, the line "from dataset_test import UCFCrimeTest" is problematic: UCFCrimeTest is not defined in dataset_test.py. The UCFCrimeTest class is missing.

Trained model out from Baidu

Hi, I have been trying to download the trained models for feature extraction from Baidu, but I am unable to: I do not have a Baidu account and cannot create one without a Chinese phone number. Can someone upload the 4 files in that folder to a different cloud storage? Thank you.

About select high confidence snippets

Hi, thanks for your great work. I downloaded the feature-extraction code and have some doubts. Firstly, the c3d_iter_1000.caffemodel file you provided has a pcx image format, not a binary file.
Secondly, about the cross-entropy error of direct supervision: in the paper, H represents a set of high-confidence snippets, e.g. for the C3D net. Is the maximum value of |H| then 1600 × 60%? What does putting H between vertical bars mean? I also don't understand this sentence in the paper: "Due to the limited memory of GPUs, we at most sample 1600 high-confidence snippets with not more than 8 neighbors respectively in a video". Does each epoch select 1600 high-confidence snippets? What do the 8 neighbors mean?
Looking forward to your reply
Best wishes
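As context for the |H| notation (it denotes the cardinality of the high-confidence set H), here is a minimal sketch of one plausible selection rule. The 60% ratio, the 1600 cap, and ranking by distance from 0.5 are assumptions for illustration, not the repo's actual code:

```python
import numpy as np

def select_high_confidence(scores, ratio=0.6, cap=1600):
    """Sketch: scores near 0 or 1 are confident, so rank snippets by distance
    from 0.5 and keep the top `ratio`, capped at `cap` snippets (the paper
    mentions a 1600-snippet limit for GPU memory). Assumed rule, not the
    repo's confirmed implementation."""
    scores = np.asarray(scores, dtype=float)
    confidence = np.abs(scores - 0.5)      # 0.5 = least confident
    k = min(int(len(scores) * ratio), cap)
    order = np.argsort(-confidence)        # most confident first
    return np.sort(order[:k])              # indices of H, so |H| = k

idx = select_high_confidence([0.95, 0.51, 0.02, 0.49, 0.88])
print(idx)  # [0 2 4]
```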

How to get per-frame test result?

Hi,

Thank you for providing the code for this great paper!
I'm wondering how I can get the per-frame anomaly detection results. According to the paper, C3D provides per-snippet predictions, but the final evaluation uses per-frame AUC. How did you populate the per-snippet predictions to per-frame scores?

Figure 6 actually shows that the anomaly scores are smooth, so I'm assuming there is a filtering step of some kind?

P.s. it would be helpful to know the snippet size you used in your paper...

Thank you,
Brian
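One simple way to populate per-snippet scores to per-frame scores, sketched under the assumption that each snippet's score is repeated over its 16 frames (C3D's clip length); the repo's actual scheme, and any smoothing filter, may differ:

```python
import numpy as np

def snippet_to_frame_scores(snippet_scores, snippet_len=16, num_frames=None):
    """Repeat each per-snippet score over that snippet's frames, then trim to
    the true frame count. The repetition scheme (and snippet_len=16) is an
    assumption; a smoothing filter could be applied afterwards."""
    frame_scores = np.repeat(np.asarray(snippet_scores, dtype=float), snippet_len)
    if num_frames is not None:
        frame_scores = frame_scores[:num_frames]
    return frame_scores

f = snippet_to_frame_scores([0.1, 0.9], snippet_len=16, num_frames=30)
print(len(f), f[0], f[-1])  # 30 0.1 0.9
```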

Hello, could you provide the prepared dataset split? (Resolved; my mistake)

Hello, I don't quite understand the indices you provided, such as:
01_0014
01_0016
01_002
01_0026
01_0029
01_003
01_0030
01_005
01_0054
01_0063
01_007
01_0073
01_0076
01_009
01_010
01_011
01_0129
01_013
01_0130
01_0131
01_0132
01_0133
01_0134
01_0138
01_014
01_0140
01_015
01_016
01_0163
01_017
and so on. Could you provide the prepared dataset split directly? Looking forward to your reply, thanks.

degree matrix?

Hi, the paper gives the formula:

$$\widehat{\mathbf{A}}^{\mathbf{F}}=\left(\widetilde{\mathbf{D}}^{\mathbf{F}}\right)^{-\frac{1}{2}} \widetilde{\mathbf{A}}^{\mathbf{F}} \left(\widetilde{\mathbf{D}}^{\mathbf{F}}\right)^{-\frac{1}{2}}$$

but in the degree matrix $\widetilde{\mathbf{D}}^{\mathbf{F}}$ all entries outside the main diagonal are zero, so computing $\left(\widetilde{\mathbf{D}}^{\mathbf{F}}\right)^{-\frac{1}{2}}$ seems like it would be wrong.

can you explain it? thank you!
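For context, the exponent applies to the diagonal matrix entrywise: since D~ is diagonal, D~^(-1/2) simply replaces each diagonal degree d_ii with 1/sqrt(d_ii), the off-diagonal zeros stay zero, and self-loops in A~ = A + I keep every d_ii positive, so the expression is well defined (this is the standard symmetric normalization used in GCNs). A small numpy sketch:

```python
import numpy as np

# Toy 3-node path graph (not data from the paper).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_tilde = A + np.eye(3)                    # add self-loops: A~ = A + I
d = A_tilde.sum(axis=1)                    # diagonal of the degree matrix D~
D_inv_sqrt = np.diag(d ** -0.5)            # D~^(-1/2): entrywise on the diagonal
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
print(np.allclose(A_hat, A_hat.T))  # True
```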

Missing the Testing Codes?

It seems there is no testing code in the repo. From utils.py, it can be inferred that you compute scores for all snippets (e.g. for C3D, video_length // 16 snippets per video, with a sigmoid to normalize the scores).
Can you detail this part? Different post-processing methods may influence the results.
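A minimal sketch of the inferred test-time procedure described above (one score per 16-frame snippet, normalized with a sigmoid); this reconstruction is an assumption, not the repo's confirmed post-processing:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_video(logits_per_snippet, video_length, clip_len=16):
    """Inferred test-time scoring: one logit per clip_len-frame snippet
    (video_length // clip_len snippets in total), squashed to [0, 1] with a
    sigmoid. Assumed reconstruction of the repo's post-processing."""
    n = video_length // clip_len
    assert len(logits_per_snippet) == n, "one logit per snippet expected"
    return sigmoid(np.asarray(logits_per_snippet, dtype=float))

s = score_video([2.0, -2.0], video_length=32)
print(np.round(s, 3))  # [0.881 0.119]
```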
