horeid's People

Contributors

develduan, wangguanan


horeid's Issues

Padding in Transformer

Hi,
Thank you for sharing the code. I have a question about data preprocessing: I noticed that you use a padding operation in the training phase but no padding in the testing phase. What is the benefit of doing that? Thanks.
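For reference, a minimal sketch of the common ReID train/test transform split that such padding usually belongs to: at train time the image is padded and randomly cropped back to the target size (a translation-style augmentation), while at test time only a deterministic resize is applied so query and gallery features are comparable. The sizes and padding amount below are assumptions for illustration, not necessarily this repository's exact settings.

    import torchvision.transforms as T

    # Train time: pad then randomly crop back, shifting the person inside the frame.
    train_transform = T.Compose([
        T.Resize((256, 128)),   # assumed input size
        T.RandomHorizontalFlip(),
        T.Pad(10),              # assumed padding amount
        T.RandomCrop((256, 128)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Test time: deterministic resize only, so evaluation sees un-shifted images.
    test_transform = T.Compose([
        T.Resize((256, 128)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])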

How can I run inference on my own dataset?

Hello, and thank you for reproducing this paper. I have a dataset I collected myself that I would like to run inference on, and I have organized it into the same format as the Occluded-DukeMTMC dataset. Can I run visualize on it directly? The full error message is:
Traceback (most recent call last):
File "main.py", line 153, in <module>
main(config)
File "main.py", line 89, in main
visualize_ranked_images(config, base, loaders, 'duke')
File "/workspace/HOReID/core/visualize.py", line 25, in visualize_ranked_images
torch.tensor(query_features).cuda(),
RuntimeError: Could not infer dtype of NoneType
Looking forward to your reply.
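For what it's worth, the error means query_features is None when torch.tensor() is called, i.e. feature extraction produced nothing for the custom dataset (for example, an empty or mis-named query folder). A minimal defensive sketch, with a hypothetical helper name, that fails earlier with a clearer message:

    import torch

    def to_cuda_tensor(features, name="features"):
        # torch.tensor(None) raises "Could not infer dtype of NoneType";
        # fail early and explain the likely cause instead.
        if features is None or len(features) == 0:
            raise ValueError(
                f"{name} is empty; check that the custom dataset's query/gallery "
                "folders follow the Occluded-DukeMTMC layout and contain images.")
        return torch.tensor(features).cuda()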

train

I trained following the steps, but the accuracy did not reach the numbers reported in the paper. What could be the reason? I trained on a 2080 Ti.
Results:
Time: 2020-12-29 19:19:33, base, Dataset: Duke
mAP: 0.40751378988429915
Rank: [0.49457014 0.57647059 0.61809955 ... 1. 1. 1. ]
Time: 2020-12-29 19:27:35, base+gcn, Dataset: Duke
mAP: 0.40423225383235895
Rank: [0.49095023 0.57330317 0.6199095 ... 1. 1. 1. ]
Time: 2020-12-29 19:34:59, base+gcn+gm, Dataset: Duke
mAP: 0.4047804871692115
Rank: [0.50090498 0.5719457 0.61674208 ... 1. 1. 1. ]

Test code for the partial datasets and the Occluded-REID dataset

Hello, and thank you for the work in this paper. I ran into some problems when running your code: when I train directly on Market-1501 and then test on Occluded-REID and Partial-REID, I do not get the results reported in your paper. Is there anything I should pay attention to during testing? Alternatively, could you send me the test code for these datasets? My email is [email protected]. Thank you very much!

code

When will you open-source your code? I think your project is interesting.

how do you organize the dataset for training?

Hi Guanan, I read your paper and have some questions about the experiments, especially the datasets. Here are my questions:

  1. Since the occluded datasets are small, did you pre-train your model on any holistic datasets?
  2. For the experiments on Occluded-Duke, how do you sample the images? Is the number of occluded images in each batch fixed? Do you repeat the occluded images, since there are far fewer of them than whole-body images? (See the sampling sketch after this list.)
  3. Occluded-REID does not have a training set; on which dataset is the model that is evaluated on Occluded-REID trained?
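Regarding question 2, the repository's config exposes p=16 and k=4, which suggests the usual P x K identity sampling used in ReID training. The sketch below is only a generic illustration of that scheme, not the repository's sampler: occluded and holistic images of an identity live in the same index list, and identities with fewer than K images are drawn with replacement, which is effectively how images get repeated.

    import random
    from collections import defaultdict
    from torch.utils.data import Sampler

    class PKSampler(Sampler):
        """Yield batches of P identities with K images each."""
        def __init__(self, labels, p=16, k=4):
            self.p, self.k = p, k
            self.index_by_pid = defaultdict(list)
            for idx, pid in enumerate(labels):
                self.index_by_pid[pid].append(idx)
            self.pids = list(self.index_by_pid)

        def __iter__(self):
            pids = self.pids[:]
            random.shuffle(pids)
            for start in range(0, len(pids) - self.p + 1, self.p):
                for pid in pids[start:start + self.p]:
                    idxs = self.index_by_pid[pid]
                    if len(idxs) >= self.k:
                        yield from random.sample(idxs, self.k)
                    else:
                        # Fewer than K images for this identity: repeat some.
                        yield from random.choices(idxs, k=self.k)

        def __len__(self):
            return (len(self.pids) // self.p) * self.p * self.k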

Testing on DukeMTMC-reID raises an error; do you know why?

I trained on DukeMTMC-reID and then tested the generated 'pkl' files, but it raises an error. I would like to know how to fix it; maybe you can help me with it. Thank you.

(horeid) goo@goo-Z390-GAMING-X:~/yx/HOReID/HOReID-master$ python main.py --mode test --resume_test_path /home/goo/yx/HOReID/HOReID-master/result-DukeMTMC/MODELS --resume_test_epoch 119 --duke_path /home/goo/yx/datayx/DukeMTMC-reID --output_path ./result-DukeMTMC
Traceback (most recent call last):
File "main.py", line 162, in
main(config)
File "main.py", line 14, in main
loaders = Loaders(config)
File "/home/goo/yx/HOReID/HOReID-master/core/data_loader/init.py", line 49, in init
self._load()
File "/home/goo/yx/HOReID/HOReID-master/core/data_loader/init.py", line 54, in _load
train_samples = self._get_train_samples(self.train_dataset)
File "/home/goo/yx/HOReID/HOReID-master/core/data_loader/init.py", line 64, in _get_train_samples
samples = Samples4Duke(train_samples_path)
File "/home/goo/yx/HOReID/HOReID-master/core/data_loader/dataset.py", line 23, in init
samples = self._load_images_path(self.samples_path)
File "/home/goo/yx/HOReID/HOReID-master/core/data_loader/dataset.py", line 52, in _load_images_path
root_path, _, files_name = os_walk(folder_dir)
TypeError: cannot unpack non-iterable NoneType object
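For what it's worth, the unpacking failure means os_walk returned None. Assuming it is a thin wrapper that returns the first (root, dirs, files) tuple of os.walk, a non-existent or misspelled dataset folder makes os.walk yield nothing, so the wrapper falls through and returns None. A defensive sketch of such a wrapper (an illustration, not the repository's exact helper):

    import os

    def os_walk(folder_dir):
        # Fail with a clear message instead of returning None when the
        # expected dataset sub-folder (e.g. under --duke_path) is missing.
        if not os.path.isdir(folder_dir):
            raise FileNotFoundError(
                f"dataset folder not found: {folder_dir}; check --duke_path "
                "and the expected DukeMTMC-reID directory layout")
        for root, dirs, files in os.walk(folder_dir):
            return root, sorted(dirs), sorted(files)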

ValueError: expected 2D or 3D input (got 1D input)

When I train or test on the Occluded-Duke or DukeMTMC datasets, an error occurs. The details are shown below:

(horeid) goo@goo-Z390-GAMING-X:~/yx/HOReID/HOReID-master$ python main.py --mode test --resume_test_path /home/goo/yx/HOReID/pre-trained-models --resume_test_epoch 119 --duke_path /home/goo/yx/datayx/Occluded_Duke --output_path ./results
[[0. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 1. 1. 0. 0. 0. 1. 1. 0. 0. 0. 0. 1.]
[1. 1. 0. 0. 1. 0. 0. 1. 1. 0. 0. 0. 0. 1.]
[0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 1. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 1.]
[0. 1. 1. 0. 0. 0. 0. 1. 0. 0. 1. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
Existed dirs: ./results
Existed dirs: ./results/models/
Existed dirs: ./results/logs/
Existed dirs: ./results/visualization/market/
Existed dirs: ./results/visualization/duke/

Namespace(auto_resume_training_from_lastest_steps=True, base_learning_rate=0.00035, branch_num=14, cuda='cuda', duke_path='/home/goo/yx/datayx/Occluded_Duke', gcn_lr_scale=0.1, gcn_scale=20.0, gm_lr_scale=1.0, image_size=[256, 128], k=4, margin=0.3, max_save_model_num=1, milestones=[40, 70], mode='test', norm_scale=10.0, output_path='./results', p=16, pid_num=702, resume_test_epoch=119, resume_test_path='/home/goo/yx/HOReID/pre-trained-models', resume_visualize_epoch=0, resume_visualize_path='', total_train_epochs=120, train_dataset='duke', use_gm_after=20, ver_alpha=0.5, ver_in_scale=10.0, ver_lr_scale=1.0, ver_topk=1, weight_decay=0.0005, weight_global_feature=1.0, weight_p_loss=1.0, weight_ver_loss=0.1)
Traceback (most recent call last):
File "main.py", line 162, in
main(config)
File "main.py", line 80, in main
duke_map, duke_rank = testwithVer2(config, logger, base, loaders, 'duke', use_gcn=False, use_gm=False)
File "/home/goo/yx/HOReID/HOReID-master/core/test.py", line 26, in testwithVer2
info, gcned_info = base.forward(images, pids, training=False)
File "/home/goo/yx/HOReID/HOReID-master/core/base.py", line 262, in forward
bned_feature_vector_list, cls_score_list = self.bnclassifiers(feature_vector_list)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/goo/yx/HOReID/HOReID-master/core/models/model_reid.py", line 94, in call
bned_feature_vector_i, cls_score_i = classifier_i(feature_vector_i)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/goo/yx/HOReID/HOReID-master/core/models/model_reid.py", line 68, in forward
feature = self.bn(x)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 61, in forward
self._check_input_dim(input)
File "/home/goo/anaconda3/envs/horeid/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py", line 176, in _check_input_dim
.format(input.dim()))
ValueError: expected 2D or 3D input (got 1D input)

Looking forward to your reply.
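For context, nn.BatchNorm1d only accepts inputs shaped [N, C] or [N, C, L], so the error says one of the per-part feature vectors reached the BN layer as a 1-D tensor of shape [C]. One common way that happens is an upstream squeeze() on a batch of size 1. The snippet below only illustrates the shape requirement and a defensive reshape; it is not a fix taken from the repository:

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm1d(2048).eval()      # eval mode, as in the test phase

    feature = torch.randn(2048)           # 1-D, e.g. after squeeze() on a batch of 1
    if feature.dim() == 1:
        feature = feature.unsqueeze(0)    # restore the batch dimension -> [1, 2048]
    out = bn(feature)                     # BatchNorm1d needs [N, C] or [N, C, L]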

Color jitter augmentation

In the paper, you said "When test on occluded/partial datasets, we use extra color jitter augmentation to avoid domain variance." But when I looked at your code, I was not able to find that augmentation. Can you let me know how you did it, for example the settings for torchvision.transforms.ColorJitter? I'd appreciate some help. Thanks in advance.
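For reference, torchvision's ColorJitter can be dropped into the training transform before ToTensor(); the paper does not give the parameters, so the values below are assumptions for illustration only, not the authors' setting:

    import torchvision.transforms as T

    # Hypothetical jitter strengths; tune for your data.
    color_jitter = T.ColorJitter(brightness=0.2, contrast=0.15,
                                 saturation=0.15, hue=0.1)

    train_transform = T.Compose([
        T.Resize((256, 128)),
        color_jitter,            # applied on the PIL image, before ToTensor()
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])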

implementation details of model_gcn

Thanks for the amazing implementation. I have some questions regarding the implementation details of model_gcn.py:
1. Line 27: why set the global-feature link adj[-1,:-1] = 0? Why not keep adj symmetric like the rest of the matrix?
2. Line 89: why use the distance gap rather than the distance between the part feature and the global feature directly, as the paper describes? In the implementation, the distance gap seems to be essentially the distance between part features without abs?
3. Line 102: why multiply by 2?
4. Line 123: the second graph adgcn2 still uses the original adj as input rather than the adj learned by the previous GCN?
I would appreciate it if you could shed some light on these questions. Thanks!
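To make the adjacency questions concrete, here is a generic sketch of a single graph-convolution step over keypoint features (it is deliberately not model_gcn.py): each layer mixes node features through a row-normalized adjacency, so whether the second layer receives the original skeleton adjacency or an adjacency refined by the first layer determines which relations the second hop propagates.

    import torch
    import torch.nn as nn

    class SimpleGCNLayer(nn.Module):
        def __init__(self, in_dim, out_dim, num_nodes):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)
            # Fixed adjacency (e.g. the keypoint skeleton plus self-loops);
            # a learned variant would be predicted from the node features
            # and passed in through `adj` instead.
            self.register_buffer("base_adj", torch.eye(num_nodes))

        def forward(self, x, adj=None):
            # x: [batch, num_nodes, in_dim]; adj: [num_nodes, num_nodes]
            a = self.base_adj if adj is None else adj
            a = a / a.sum(dim=-1, keepdim=True).clamp(min=1e-6)  # row-normalize
            return torch.relu(self.linear(torch.matmul(a, x)))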

global feature and local feature

Hello, I would like to ask whether feature_stage1 in the source code corresponds to the global feature in the paper and feature_stage2 to the local feature.

A question: how should I prepare my own dataset (in Duke format)?

I have already converted my dataset into the Occluded-Duke format following your steps and downloaded the pre-trained model you provide, but when I run it I hit the following problem:
Traceback (most recent call last):
File "/home/lty/lty/HOReID-master/main.py", line 151, in <module>
main(config)
File "/home/lty/lty/HOReID-master/main.py", line 56, in main
_, results = train_an_epoch(config, base, loaders, current_epoch)
File "/home/lty/lty/HOReID-master/core/train.py", line 26, in train_an_epoch
ide_loss = base.compute_ide_loss(cls_score_list, pids, keypoints_confidence)
File "/home/lty/lty/HOReID-master/core/base.py", line 100, in compute_ide_loss
loss_i = self.ide_creiteron(score_i, pids)
File "/home/lty/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/lty/lty/HOReID-master/tools/loss.py", line 33, in forward
targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:151

My guess is that this is caused by pid_num in the config.
When I change pid_num to the number of identities in our dataset, the error becomes:
size mismatch for module.classifier_0.classifier.weight: copying a param with shape torch.Size([702, 2048]) from checkpoint, the shape in current model is torch.Size([750, 2048]).
......

What should I do?
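As a general workaround (not the authors' recipe), one can load the pre-trained weights while skipping every parameter whose shape no longer matches, which is exactly the classifier heads that depend on pid_num; those heads are then trained from scratch on the new identities. A minimal sketch:

    import torch

    def load_matching_weights(model, checkpoint_path):
        state = torch.load(checkpoint_path, map_location="cpu")
        model_state = model.state_dict()
        # Keep only parameters whose name and shape match the current model;
        # this drops e.g. module.classifier_0.classifier.weight ([702, 2048])
        # when the new head is [750, 2048].
        filtered = {k: v for k, v in state.items()
                    if k in model_state and v.shape == model_state[k].shape}
        model.load_state_dict(filtered, strict=False)
        skipped = sorted(set(model_state) - set(filtered))
        return skipped  # typically the classifier layers, to be retrained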

how to test on partial REID

Hello! I want to ask how to train on Market and test on the Partial-REID and Partial-iLIDS datasets.
Also, I could not find the full Partial-iLIDS dataset containing 238 images. Could you share a link to the dataset?
Looking forward to your reply.
