khalooei / ALOCC-CVPR2018
Adversarially Learned One-Class Classifier for Novelty Detection (ALOCC)
License: MIT License
Hello, I changed the dataset to my own dataset, and the test.py file runs well, but I don't know how this file works. Could you tell me what this file does? How do I use a trained model? Thanks in advance!
Hi Sabokrou,
Thank you for the source code of this good work!
I would like to explore more about your approach. Could you please provide the frame-level scores of Ped2 you obtained in your experiment (which you used to estimate the EER in Table 2)?
Thank you!
Hello there,
In your "test.py" file the variable "nd_patch_step" is not defined. If I track down the variable it should be equal to "nStride", but with such setting the test does not seem to be working correct.
Am I missing some thing?
Thanks
Hi there, I want to extend your idea to adversarial training. Is there anyone who has time to help me use it for that purpose?
Is the output of the discriminator used as the anomaly score when testing anomaly detection? Why is the output of the discriminator a very small value at test time, on the order of 1e-9 to 1e-20?
(What the title says)
In my experiment, the output of the discriminator is about 0.49 after GAN convergence. How did you make the discriminator output a probability larger than 0.5?
Hi,
I'm trying to repeat your experiments on MNIST. To get the F1-score, may I know how you set the likelihood threshold to distinguish inliers from outliers among the test samples?
Thanks!
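Not the paper's procedure as far as I know, but one common way to get an F1 number is to sweep the threshold over the discriminator scores and keep the value that maximizes F1 on a held-out split; a minimal sketch, assuming arrays scores and labels (names are illustrative, not from this repo):

# Hypothetical sketch: pick the likelihood threshold that maximizes F1.
# scores: per-sample discriminator outputs (higher = more inlier-like);
# labels: ground truth (1 = inlier / target class, 0 = outlier).
import numpy as np
from sklearn.metrics import f1_score

def best_f1_threshold(scores, labels):
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    best_thr, best_f1 = None, 0.0
    for thr in np.unique(scores):
        preds = (scores >= thr).astype(int)   # predict "inlier" when the score clears the threshold
        f1 = f1_score(labels, preds)
        if f1 > best_f1:
            best_thr, best_f1 = thr, f1
    return best_thr, best_f1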
It seems that your code doesn't converge on the MNIST dataset. Please check it out, thanks.
I am getting an error for the UCSD dataset:
File "test.py", line 168, in process_frame
frame_patches = nd_patch.transpose([1,0,2,3])
ValueError: axes don't match array
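For reference, that error usually means the array does not have four dimensions at that point, since transpose([1, 0, 2, 3]) requires exactly four axes (e.g. the patch extraction returned a 3-D or empty array); a hypothetical check one could drop into process_frame (names mirror the quoted traceback):

# Hypothetical guard around the failing transpose; not part of the original test.py.
import numpy as np

def safe_transpose_patches(nd_patch):
    nd_patch = np.asarray(nd_patch)
    if nd_patch.ndim != 4:
        raise ValueError('expected a 4-D patch array, got shape %s' % (nd_patch.shape,))
    return nd_patch.transpose([1, 0, 2, 3])  # swap the first two axes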
Hi, khalooei:
Thanks for sharing your code, it's interesting.
I have a little confusion about the model; could you please explain it?
It seems that ALOCC can model a generic single class (e.g. penguins) versus other classes (e.g. dogs, cats, ...) very well.
Will it work for these scenarios?
a. A generic multi-class train set as the base class and everything else as the novelty class? I.e., R models not only one explicit class but a complex distribution over all classes in the train set.
b. For a fine-grained dataset, one explicit class as the base class and the others as novelty classes? I.e., the base class and the novelty classes may have more similar distributions than in a generic-class dataset.
c. Fine-grained multiple classes as the base class and the others as the novelty class?
Thanks.
I find that the sampler and the generator functions are actually the same. I wonder if there is a mistake.
Hello,
when I trained and tested on my own dataset, I got some weird output from results_d: there are negative values and values greater than 1.0, such as:
results d: [[-0.06518985]
[ 0.01245594]
[ 0.18250445]
[ 0.14557755]
[-0.12947509]
[-0.32424578]
[ 0.01147144]
[-0.20546311]
[ 0.38732064]
[ 0.45803282]
[ 0.22594012]
[ 0.02803989]
[ 0.43993357]
[ 0.81624144]
[ 0.5465872 ]
[ 1.1168824 ]
[ 0.4104824 ]
[ 0.60788006]
[ 0.68482363]
[ 1.3565235 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945145 ]
[-0.06518985]
[ 0.01245594]
[ 0.18250445]
[ 0.14557755]
[-0.12947509]
[-0.32424578]
[ 0.01147144]
[-0.20546311]
[ 0.38732064]
[ 0.45803282]
[ 0.22594012]
[ 0.02803989]
[ 0.43993357]
[ 0.81624144]
[ 0.5465872 ]
[ 1.1168824 ]
[ 0.4104824 ]
[ 0.60788006]
[ 0.68482363]
[ 1.3565235 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945145 ]
[ 1.1945149 ]]
Did I miss something during the training or testing process? Any advice would be helpful, thanks!
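These values look like raw discriminator logits rather than sigmoid probabilities, which would explain entries below 0 and above 1; under that assumption, a minimal sketch of the conversion:

# Sketch under the assumption that results_d holds raw logits, not probabilities.
import numpy as np

def to_probability(logits):
    """Element-wise sigmoid, mapping any real-valued logit into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=np.float64)))

# probs = to_probability(results_d)   # every value then lies between 0 and 1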
Line 370 in 9fd544d
Hi, the output of the generator G comes from a tanh and therefore takes values in [-1, 1].
But the real input takes values in [0, 1] (at least in the MNIST case).
Thus it would be too easy for the discriminator D to classify real vs. fake just by checking whether negative values are present. Should the output of G use a sigmoid instead of tanh?
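If the generator indeed ends in a tanh, one common fix (not specific to this repository) is to rescale the real inputs from [0, 1] to [-1, 1] before feeding both networks, or alternatively to end the generator with a sigmoid; a minimal sketch:

# Sketch: align the range of the real inputs with G's tanh output (assumes images in [0, 1]).
def rescale_to_tanh_range(x01):
    """Map images from [0, 1] to [-1, 1] so D sees the same range as tanh(G(.))."""
    return x01 * 2.0 - 1.0

# Alternatively, keep the inputs in [0, 1] and use a sigmoid as the generator's last activation,
# e.g. g_out = tf.nn.sigmoid(g_logits) instead of tf.nn.tanh(g_logits).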
Issue: No output sample is generated for "mnist" or "ucsd" (even when the pre-trained model provided on GitHub is used)
python test.py --dataset UCSD --dataset_address ./dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Train --input_height 45 --output_height 45
if FLAGS.dataset == 'mnist':
    mnist = input_data.read_data_sets(FLAGS.dataset_address)
    specific_idx_anomaly = np.where(mnist.train.labels != 6)[0]
    specific_idx = np.where(mnist.train.labels == 6)[0]
    ten_precent_anomaly = [specific_idx_anomaly[x] for x in
                           random.sample(range(0, len(specific_idx_anomaly)), len(specific_idx) // 40)]
    data = mnist.train.images[specific_idx].reshape(-1, 28, 28, 1)
    tmp_data = mnist.train.images[ten_precent_anomaly].reshape(-1, 28, 28, 1)
    data = np.append(data, tmp_data).reshape(-1, 28, 28, 1)
    lst_prob = tmp_ALOCC_model.f_test_frozen_model(data[0:FLAGS.batch_size])
    print('check is ok')
    exit()

for s_image_dirs in sorted(glob(os.path.join(FLAGS.dataset_address, 'Test[0-9][0-9][0-9]'))):
    tmp_lst_image_paths = []
    if os.path.basename(s_image_dirs) not in ['Test004']:
        print('Skip ', os.path.basename(s_image_dirs))
        continue
    for s_image_dir_files in sorted(glob(os.path.join(s_image_dirs + '/*'))):
        if os.path.basename(s_image_dir_files) not in ['068.tif']:
            print('Skip ', os.path.basename(s_image_dir_files))
            continue
        tmp_lst_image_paths.append(s_image_dir_files)
    # random
    # lst_image_paths = [tmp_lst_image_paths[x] for x in random.sample(range(0, len(tmp_lst_image_paths)), n_fetch_data)]
    lst_image_paths = tmp_lst_image_paths
    # images = read_lst_images(lst_image_paths, nd_patch_size, nd_patch_step, b_work_on_patch=False)
    nd_patch_step = 0  # we added this line (from the github issue) to avoid an error
    images = read_lst_images_w_noise2(lst_image_paths, nd_patch_size, nd_patch_step)
    lst_prob = process_frame(os.path.basename(s_image_dirs), images, tmp_ALOCC_model)
    print('pseudocode test is finished')
Dear Sir:
I am very interested in your paper, but some problems occurred when I repeated the MNIST experiment. I wrote the test code and adopted the same loss function as the training code. I visualized the loss values of G and D, but the normal and abnormal samples could not be distinguished.
For example, in training, the digit 1 is treated as the normal sample; in the test, the digits 0, 1, 2 and 3 are visualized; the digit 1 is red, the digit 0 is blue, the digit 2 is green, and the digit 3 is yellow. The parameters are all default values. The results look good.
But if the digit 2 is treated as the normal sample; in the test, the digits 0, 1, 2 and 3 are visualized; the digit 2 is red, the digit 0 is blue, the digit 1 is green, and the digit 3 is yellow. The parameters are all default values. The results turned out badly.
I tried adding noise to the test samples and adjusted r_alpha, but none of the results improved significantly.
This question has been bothering me for a few days. How should the loss of the test sample be selected? Could you give me some advice? Thank you!
Best wishes!
Dear Sir,
In the paper the refinement loss is defined as a Euclidean loss, but in the TensorFlow code it is defined as a cross-entropy loss. Why is it different?
Also, did you try an L1 loss?
Thanks
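Not speaking for the authors, but for comparison, here is a hedged sketch of the variants being discussed, in TensorFlow 1.x style (names such as x, x_hat and x_hat_logits are illustrative, with x assumed to lie in [0, 1]):

# Sketch of the reconstruction-loss variants discussed above (TF 1.x style; names are illustrative).
import tensorflow as tf

def refinement_losses(x, x_hat, x_hat_logits):
    # Euclidean (L2) loss, as written in the paper:
    l2_loss = tf.reduce_mean(tf.square(x - x_hat))
    # Sigmoid cross-entropy loss, as the question says the released code uses:
    ce_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_hat_logits))
    # An L1 alternative, for reference:
    l1_loss = tf.reduce_mean(tf.abs(x - x_hat))
    return l2_loss, ce_loss, l1_loss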
For example, at line 83 "FLAGS.nStride" is used but nStride is not defined; at line 161 "nd_patch_step" is not defined; and using glob.glob() instead of glob() might be better, and so on.
test.py does not output any meaningful result. Is the code unfinished? What about the AUC?
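As a general note rather than a statement about this code, once test.py yields a score per frame (or per patch), the AUC can be computed directly from those scores and the ground-truth labels; a minimal sketch assuming scores and labels arrays:

# Hypothetical sketch: frame-level AUC from anomaly scores and ground-truth labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def frame_level_auc(scores, labels):
    """scores: higher = more anomalous; labels: 1 = anomalous frame, 0 = normal frame."""
    return roc_auc_score(np.asarray(labels), np.asarray(scores))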
Will this also be applicable to structured data, i.e. continuous or categorical columns?
I ran training, but hit this error... Anyone getting this?
python train.py --dataset UCSD --dataset_address ./dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Train --input_height 45 --output_height 45
{'attention_label': 1,
'batch_size': 128,
'beta1': 0.5,
'checkpoint_dir': 'checkpoint',
'dataset': 'UCSD',
'dataset_address': './dataset/UCSD_Anomaly_Dataset.v1p2/UCSDped2/Train',
'epoch': 40,
'input_fname_pattern': '*',
'input_height': 45,
'input_width': None,
'learning_rate': 0.002,
'log_dir': 'log',
'output_height': 45,
'output_width': None,
'r_alpha': 0.2,
'sample_dir': 'samples',
'train': True,
'train_size': inf}
2018-12-22 15:03:46.625140: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-12-22 15:03:46.948538: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2018-12-22 15:03:46.965900: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
Program is on Train Mode
libpng error: Write Error
I can't reproduce the results in your paper, that is, D(G(z)): in the paper all of these values are between 0 and 1. I tried printing the variable 'D_', and it turned out that all of its values are negative, so the values of the variable 'D_logits_' are all too close to zero, which means the algorithm didn't work well because all the test patches come out as abnormal. I was stuck on this for a long time, and I hope you can help me.