abhijitbendale / osdn
Code and data for the research paper "Towards Open Set Deep Networks", A Bendale, T Boult, CVPR 2016.
License: Other
While computing the Euclidean distance, there is a division by 200 in the equation.
I couldn't understand why it is required.
query_distance = spd.euclidean(mean_vec[channel, :], query_channel) / 200.
While using the imageNet_Features script, I ran into the following errors.
After making the script work, we tried running compute_openmax.py with the fooling_images data provided on the main page. Surprisingly, the probability for the fooling image was around 90%.
Please suggest a solution to reproduce the results reported in the paper.
Error statements:
File "imageNet_Features.py", line 302, in <module>
    main(sys.argv)
File "imageNet_Features.py", line 299, in main
    extractFeatures(args)
File "imageNet_Features.py", line 126, in extractFeatures
    compute_features(imgname,args)
File "imageNet_Features.py", line 176, in compute_features
    feature_dict['fc7'] = sp.asarray(classifier.blobs['fc7'].data.squeeze(axis=(2,3)))
ValueError: 'axis' entry 2 is out of bounds [-2, 2)

Traceback (most recent call last):
  File "imageNet_Features.py", line 302, in <module>
    main(sys.argv)
  File "imageNet_Features.py", line 299, in main
    extractFeatures(args)
  File "imageNet_Features.py", line 126, in extractFeatures
    compute_features(imgname,args)
  File "imageNet_Features.py", line 151, in compute_features
    input_scale=args.input_scale, channel_swap=channel_swap)
  File "/home/ubuntu/deep-learning/caffe/python/caffe/classifier.py", line 37, in __init__
    self.transformer.set_mean(in_, mean)
  File "/home/ubuntu/deep-learning/caffe/python/caffe/io.py", line 250, in set_mean
    raise ValueError('Mean channels incompatible with input.')
ValueError: Mean channels incompatible with input.
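The `'axis' entry 2 is out of bounds` error suggests that in newer Caffe builds the fc7 blob is already 2-D (batch x channels), so squeezing axes 2 and 3 fails. A hedged workaround, assuming the blob data behaves like a NumPy array, is to squeeze only the spatial axes when they actually exist (the helper name is mine):

```python
import numpy as np

def flatten_blob(data):
    """Drop trailing singleton spatial axes (H, W) only if present.

    Older Caffe returns fc-layer data as (N, C, 1, 1); newer builds
    already return (N, C), where squeeze(axis=(2, 3)) raises ValueError.
    """
    data = np.asarray(data)
    if data.ndim == 4:
        data = data.squeeze(axis=(2, 3))
    return data
```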
Hello,
I have some problems with the code.
When I run the command ./compile.sh, I got:
gcc: error: unrecognized command line option ‘-fstack-protector-strong’
gcc: error: unrecognized command line option ‘-fstack-protector-strong’
error: command 'gcc' failed with exit status 1
cp: cannot stat ‘libmr.so’: No such file or directory
Do you know which version of gcc is needed for this?
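For context, -fstack-protector-strong was introduced in GCC 4.9, so older compilers reject it. Assuming the flag reaches gcc through the build's CFLAGS (a guess; I have not inspected compile.sh), one workaround is to check the installed gcc and substitute the older, widely supported -fstack-protector flag before rebuilding:

```shell
# Report the installed gcc, if any (-fstack-protector-strong needs >= 4.9).
if command -v gcc >/dev/null; then
    gcc --version | head -n 1
fi

# Hypothetical workaround: rewrite the flag to the older -fstack-protector,
# which old gcc releases do accept.
CFLAGS="-O2 -fstack-protector-strong"
CFLAGS=$(printf '%s' "$CFLAGS" | sed 's/-fstack-protector-strong/-fstack-protector/')
printf '%s\n' "$CFLAGS"
```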
In the README it says, 'We will upload the fooling images and features extracted for fooling images in few days'. I don't think the extracted features have been uploaded, and I am wondering if there is still a plan to do so.
If not, should we modify imageNet_Features.py and generate the extracted features ourselves?
Hi! @abhijitbendale Could you please share your experience with choosing the hyperparameters, such as ALPHA_RANK and WEIBULL_TAIL_SIZE?
Hello,
The "channel_scores" is the feature vector of the "fc8" layer for a specific channel, so why did you loop over "categoryid" to compute modified_fc8_score = channel_scores[categoryid] * ( 1 - wscore*ranked_alpha[categoryid] )?
Also, len(channel_scores) is not equal to NCHANNELS.
Thank you.
In the paper "Towards Open Set Deep Networks" it is mentioned that we have to do a per-class Weibull fit using the FitHigh function. However, the documentation HTML files say that FitHigh should be used when larger values are better, which I take to refer to large distances from the mean activation vector.
Using FitHigh gives larger scores for larger distances, but according to the paper, shouldn't it be the opposite? That is, shouldn't we get low scores for larger distances from the mean activation vector?
And if I am not wrong, shouldn't the FitLow function be used in place of FitHigh?
This is what I obtained:
>>> import libmr as mr
>>> meta = mr.MR()
>>> meta.fit_high(sorted_dist[-100:], 100)
>>> meta.w_score(0.0)
0.0
>>> meta.w_score(10.0)
0.21476801948365665
>>> meta.w_score(20.0)
0.9963975817523232
>>> meta.w_score(30.0)
0.9999996470632248
>>> meta.w_score(40.0)
0.9999999999978881
>>> meta.fit_low(sorted_dist[-100:], 100)
>>> meta.w_score(0.0)
1.0
>>> meta.w_score(10.0)
0.8115743914686449
>>> meta.w_score(20.0)
1.665564614727888e-05
>>> meta.w_score(30.0)
0.0
>>> meta.w_score(40.0)
0.0
I am also attaching a plot of the sorted distances from the class's mean activation vector for all 1011 correctly classified training examples of a single class.
I tried to run imageNet_Features.py. It can't execute because caffe.Classifier doesn't have the gpu and mean_file parameters. The attached picture shows the __init__ method in the caffe library.
imageNet_Features.py passes the gpu and mean_file parameters to the caffe library when creating the object.
Question: how can I fix it? Can I remove the gpu and mean_file parameters in imageNet_Features.py?
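In newer Caffe releases those keyword arguments were removed from caffe.Classifier (GPU mode is selected via caffe.set_mode_gpu(), and the mean is supplied through the transformer), so simply dropping them from the call should work. A hedged, generic way to stay robust across versions is to filter the kwargs against the constructor's actual signature; the Classifier class below is a dummy stand-in for testing, the real one comes from caffe:

```python
import inspect

def filtered_kwargs(func, **kwargs):
    """Keep only keyword arguments that `func` actually accepts.

    Useful when a library (here: caffe.Classifier) removed parameters
    such as `gpu` or `mean_file` between versions.
    """
    accepted = inspect.signature(func).parameters
    return {k: v for k, v in kwargs.items() if k in accepted}

# Dummy stand-in for a newer caffe.Classifier that lacks gpu/mean_file.
class Classifier:
    def __init__(self, model_file, pretrained_file, image_dims=None):
        self.image_dims = image_dims

kwargs = filtered_kwargs(Classifier.__init__, image_dims=(256, 256),
                         gpu=True, mean_file='ilsvrc_2012_mean.npy')
clf = Classifier('deploy.prototxt', 'model.caffemodel', **kwargs)
```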
I've found a version of libMR at https://github.com/Vastlab/libMR released under a BSD license.
But here I see that there is a patent pending on some libMR methods. Does that also apply to fitmax?
wscore = category_weibull[2][channel].w_score(channel_distance)
modified_fc8_score = channel_scores[categoryid] * ( 1 - wscore*ranked_alpha[categoryid] )
openmax_fc8_channel += [modified_fc8_score]
openmax_fc8_unknown += [channel_scores[categoryid] - modified_fc8_score ]
Can you explain how this works with negative fc8 scores?
As I understand it, if v_i(x) is negative then even with w_i = 1, v_0(x) = 0, so v_i < v_0.
In practice an fc8 layer may contain negative logits, and this leads to a high unknown-unknown probability on my dataset.
>>> features = loadmat('data/train_features/n01440764/n01440764_9981.JPEG.mat')
>>> ch = 0
>>> features['fc8'][ch] > 0
array([ True, True, True, True, True, True, True, True, True,
False, True, True, True, True, True, True, True, True,
False, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, False, False, True,
False, False, False, False, True, False, False, True, True,
False, True, True, True, True, True, True, True, True,
...
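The effect described above can be reproduced with a toy version of the recalibration loop (scores and w-scores below are made up, and alpha weights are taken as 1 for simplicity). Note the sign issue: for a negative logit v_i with 0 < w_i*alpha_i < 1, multiplying by (1 - w_i*alpha_i) moves the score up toward zero rather than down, the opposite of the intended penalty on positive logits, while the unknown score absorbs the shaved-off (possibly negative) mass:

```python
import numpy as np

def toy_openmax(scores, wscores, alpha=None):
    """Toy version of the OpenMax recalibration step.

    modified_i = v_i * (1 - w_i * alpha_i)
    unknown    = sum_i (v_i - modified_i) = sum_i v_i * w_i * alpha_i

    For a NEGATIVE logit v_i, (1 - w_i*alpha_i) < 1 moves the score
    UP toward zero instead of down, unlike the effect on positive logits.
    """
    scores = np.asarray(scores, dtype=float)
    wscores = np.asarray(wscores, dtype=float)
    alpha = np.ones_like(scores) if alpha is None else np.asarray(alpha)
    modified = scores * (1.0 - wscores * alpha)
    unknown = np.sum(scores - modified)
    return modified, unknown

# Made-up logits (one negative) and made-up Weibull w-scores.
modified, unknown = toy_openmax([5.0, 2.0, -3.0], [0.1, 0.2, 0.9])
print(modified)   # -> [4.5, 1.6, -0.3]: the negative logit is pushed toward zero
print(unknown)    # -> -1.8: absorbs the shaved-off mass
```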