Implementation of Hardness-Aware Deep Metric Learning (CVPR 2019 Oral) in Tensorflow.
Hi, @wzzheng :
I've noticed that there is code under the tfRecord
directory, so I wonder whether the data loading part could be implemented that way?
Thanks in advance!
embedding_z_quta = tf.concat([anc, neg_tile], axis=0) Thanks a lot
Hello @wzzheng ,
I have a problem loading the cub200_2011 and even the cars196 dataset. How could I fix it? Thanks
Traceback (most recent call last):
File "main_npair.py", line 254, in <module>
tf.app.run()
File "/home/tom/miniconda3/envs/recall/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "main_npair.py", line 79, in main
embedding_z_quta = HDML.Pulling(FLAGS.LossType, embedding_z, Javg)
File "/home/tom/Devel/testing/HDML/lib/HDML.py", line 15, in Pulling
neg_tile = tf.tile(neg, [FLAGS.batch_size / 2, 1])
File "/home/tom/miniconda3/envs/recall/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8514, in tile
"Tile", input=input, multiples=multiples, name=name)
File "/home/tom/miniconda3/envs/recall/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper
param_name=input_name)
File "/home/tom/miniconda3/envs/recall/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'multiples' has DataType float32 not in list of allowed values: int32, int64
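This `TypeError` most likely comes from Python 3's true division: `FLAGS.batch_size / 2` yields a `float`, and `tf.tile` only accepts integer `multiples`. A minimal sketch of the likely fix, illustrated with NumPy's `tile` (whose `reps` argument mirrors `tf.tile`'s `multiples`) rather than the repo's TF graph:

```python
import numpy as np

batch_size = 128

# In Python 3, "/" is true division, so batch_size / 2 is a float,
# which tf.tile rejects for `multiples`; "//" keeps it an integer.
assert isinstance(batch_size / 2, float)
assert isinstance(batch_size // 2, int)

neg = np.ones((1, 4))
neg_tile = np.tile(neg, [batch_size // 2, 1])  # np.tile mirrors tf.tile here
print(neg_tile.shape)  # (64, 4)
```

Applying the same change in lib/HDML.py, i.e. `tf.tile(neg, [FLAGS.batch_size // 2, 1])`, should satisfy the int32/int64 constraint in the traceback.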
Hi, I'm training with your source code now. On Cars196 and CUB200-2011 it works well,
but the recall rate on SOP (Stanford Online Products) does not match the paper's.
Even though I have experimented with changing the configuration several times, I only get bad results.
So if you have free time, please let me know the configuration for the SOP dataset.
Thanks!
Hi @wzzheng ,
Thanks for your interesting paper and open sourcing the code.
I tried retraining with the default parameters in the FLAGS.py and with Cars196 dataset. Following are the final evaluation results that I see:
num_correct: 6509 num: 8131 K: 1, Recall: 0.801
num_correct: 7154 num: 8131 K: 2, Recall: 0.880
num_correct: 7576 num: 8131 K: 4, Recall: 0.932
num_correct: 7806 num: 8131 K: 8, Recall: 0.960
num_correct: 7948 num: 8131 K: 16, Recall: 0.977
num_correct: 8030 num: 8131 K: 32, Recall: 0.988
NMI: 0.7029318395368825
F1: 0.4271278961451372
These are around 1% higher than the numbers reported in the paper. Is there any particular reason why this is the case?
Also, with the default options but the dataset changed to CUB, the training goes to NaN immediately. Are there any hyperparameters that need to be changed for training on the CUB dataset?
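One generic way to localize a NaN like this is to fetch the intermediate loss terms each training step and scan them for the first non-finite value. A minimal sketch in NumPy (the tensor names here are illustrative, not taken from this repo):

```python
import numpy as np

def first_bad(named_tensors):
    """Return the name of the first tensor containing NaN/Inf, else None."""
    for name, t in named_tensors:
        if not np.all(np.isfinite(t)):
            return name
    return None

# Fetch the intermediate values each training step and scan them:
print(first_bad([("embedding", np.ones(3)),
                 ("J_m", np.array([np.nan]))]))  # J_m
```

Once the first offending term is known, the usual suspects are the learning rate, a log/sqrt of a non-positive value in the loss, or an exploding gradient that clipping would contain.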
Hi, and thanks for this repo.
Could you provide some guidance on running inference once the model is trained?
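For a metric-learning model, inference usually means running images through the trained network to get embeddings and then retrieving by nearest neighbor. A minimal retrieval sketch in NumPy, assuming the embeddings have already been extracted (the names here are illustrative, not from this repo):

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=1):
    """Indices of the k nearest gallery embeddings under L2 distance."""
    d = np.linalg.norm(gallery_embs - query_emb, axis=1)
    return np.argsort(d)[:k]

gallery = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
query = np.array([0.9, 0.1])
print(retrieve(query, gallery, k=1))  # [1]
```

The repo's evaluation code already computes embeddings for Recall@K, so restoring the trained checkpoint and reusing that embedding path, followed by a search like the one above, is one plausible route.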
Hello,
I am having problems making the products dataset work.
I was able to convert the dataset to HDF5 format and start the main worker with
python main_npair.py --dataSet='products' --batch_size=128 --Regular_factor=5e-3 --init_learning_rate=7e-5 --load_formalVal=False --embedding_size=128 --loss_l2_reg=3e-3 --init_batch_per_epoch=640 --batch_per_epoch=64 --max_steps=6400 --beta=1e+4 --lr_gen=1e-2 --num_class=11319 --_lambda=0.5 --s_lr=1e-3 --Apply_HDML
But after a few iterations I get errors. I think it is related to the batch construction, but
I do not see the right solution yet. The error is:
weight_decay : is nan, not record
J_metric : is nan, not record
Jm : is nan, not record
The situation is the same if I use triplet loss. Do you know how to solve the issue?
Thanks,
Tom
I used the code you provided to download the dataset, and the following problem occurred:
Traceback (most recent call last):
File "datasets/cars196_downloader.py", line 15, in <module>
fuel_root_path = fuel.config.config["data_path"]["yaml"]
KeyError: 'yaml'
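The lookup `fuel.config.config["data_path"]["yaml"]` only succeeds when `data_path` was set via Fuel's `~/.fuelrc` config file, so the `KeyError: 'yaml'` suggests that file (or its `data_path` entry) is missing. A minimal sketch of the fix, assuming the datasets should live under `~/fuel_data` (the directory name is an arbitrary choice):

```shell
# Create ~/.fuelrc with a data_path entry so Fuel records a "yaml" source:
mkdir -p ~/fuel_data
printf 'data_path: %s/fuel_data\n' "$HOME" > ~/.fuelrc
cat ~/.fuelrc
```

After that, rerunning datasets/cars196_downloader.py should find the key.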
Hi,
I try to run your code on the GPU, but it always errors out with:
failed call to cuInit: CUDA_ERROR_NO_DEVICE
I updated my TF version and checked my other settings; they all work for other TF projects. I don't know why, sorry to disturb you.
best,
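`CUDA_ERROR_NO_DEVICE` generally means the process cannot see any GPU, e.g. because `CUDA_VISIBLE_DEVICES` is empty or `-1`, or the driver is not loaded. Two quick checks one might run before blaming the code (`nvidia-smi` ships with the NVIDIA driver):

```shell
# Is the environment hiding the GPUs?
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES:-<unset>}"
# Does the driver itself see any device?
nvidia-smi -L || echo "driver sees no GPU"
```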
File "F:/PhD/AML/project/HDML-master/datasets/cars196_downloader.py", line 15, in <module>
fuel_root_path = fuel.config.config["data_path"]["yaml"]
KeyError: 'yaml'
What does this error mean?
Firstly, thanks for sharing your code.
I tried to train on my own data (8 classes, 20,000 images in total). I converted it to an HDF5 file and ran the training step successfully, but the recall@1 metric seems erratic.
My question is: why can't batch_size be larger than num_classes*2?
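For context on that constraint: an N-pair batch is built from one anchor plus one positive per class, so a batch of size B needs B/2 distinct classes, and with C classes available B cannot exceed 2*C. A sketch of that batch construction, illustrative rather than the repo's actual sampler:

```python
import numpy as np

num_classes, batch_size = 8, 16
# One anchor and one positive per class: batch_size // 2 *distinct*
# classes are required, which is impossible if batch_size > 2 * num_classes.
assert batch_size <= 2 * num_classes

rng = np.random.default_rng(0)
classes = rng.choice(num_classes, size=batch_size // 2, replace=False)
batch_labels = np.repeat(classes, 2)  # [c0, c0, c1, c1, ...]
print(len(batch_labels), len(set(classes.tolist())))  # 16 8
```

With only 8 classes, a batch larger than 16 would need to repeat a class, which breaks the one-pair-per-class assumption of the N-pair loss.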
Hi, thank you for providing the code.
I was trying to use your framework for a different problem. Can you please specify what normalization you use for training with the triplet loss?
In evaluation.py, line 213, y_batch is the embedding extracted from GoogLeNet, but the synthetic embeddings are only trained in the FC layer, so how does the metric work?
Hi, @wzzheng ,
Thanks for your contribution of this repo, and the original work.
In reading your paper, I found something that might be a typo:
the augmented harder negative sample
is written as z with a tilde above it and a minus superscript, while in the appendix math formulation (i.e., Eq. 4) and the remainder of this section I could not find any other symbol like this. Is this just a typo, or something not fully rendered?
Hi @wzzheng,
Thanks for your interesting paper and open sourcing the code.
I'm trying to use your method to synthesize negative samples via the triplet loss (Eq. 12) in your paper, but I found that "main_npair.py" only works with 'NpairLoss', and simply changing "FLAGS.LossType" to "triple-loss" has no effect. So I wonder how to run the code with the triplet loss: could you tell me how to change "main_npair.py" for the triplet loss, or send a file like "main_triple.py" to me?
Here is my gmail: [email protected] And thank you so much!