
Introduction

This repo contains the source code for our CVPR'19 work Unsupervised person re-identification by soft multilabel learning (the paper and the supplementary material are available). Our implementation is based on PyTorch. Below are instructions for training and evaluating the MAR model on the Market-1501 dataset.

Prerequisites

  1. PyTorch 1.0.0
  2. Python 3.6+
  3. Python packages: numpy, scipy, pyyaml/yaml, h5py
  4. [Optional] MATLAB, if you need to customize the datasets.

Data preparation

If you simply want to run the demo code without further modification, you can skip this step by downloading all required data from BaiduPan with password "tih8" and putting it all into /data/. Alternatively, you can find the processed MSMT17 here.

  1. Pretrained model

    Please find the pretrained model (pretrained using a softmax loss on MSMT17) on BaiduPan (password: tih8) or Google Drive. After downloading pretrained_MSMT17.pth, please put it into /data/.

  2. Target dataset

    Download the Market-1501 dataset, and unzip it into /data. After this step, you should have a folder structure:

    • data
      • Market-1501-v15.09.15
        • bounding_box_test
        • bounding_box_train
        • query

    Then run /data/construct_dataset_Market.m in MATLAB. If you prefer to use another dataset, just modify the MATLAB code accordingly. The processed Market-1501 and DukeMTMC-reID are available in BaiduPan.
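
    For reference, here is a minimal sketch of how the training code reads the processed file (paraphrased from the h5py calls visible in src/ReIDdatasets.py; the key names inside the .mat are assumptions, so check the MATLAB script for the actual variable names):

    import h5py
    import numpy as np

    # MATLAB (-v7.3) saves arrays column-major, so h5py sees reversed axes;
    # the transpose restores (N, H, W, C) image order, as in ReIDdatasets.py.
    with h5py.File('data/Market.mat', 'r') as f:
        temp = f['train_data'][()]                 # key name is a guess
        data = np.transpose(temp, (0, 3, 2, 1))
        labels = f['train_labels'][()].flatten()   # key name is a guess
    print(data.shape, labels.shape)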

  3. Auxiliary (source) dataset

    Download the MSMT17 dataset, and unzip it into /data. After this step, you should have a folder structure:

    • data
      • MSMT17_V1
        • train
        • test
        • list_train.txt
        • list_query.txt
        • list_gallery.txt

    Then run /data/construct_dataset_MSMT17.m in MATLAB. If you prefer to use another dataset, just modify the MATLAB code accordingly. Again, the processed MSMT17 is available in BaiduPan and Mega.

Run the code

Please enter the main folder, and run

python src/main.py --gpu 0,1,2,3 --save_path runs/debug

where "0,1,2,3" specifies your gpu IDs. If you are using gpus with 12G memory, you need 4 gpus to run in the default setting (batchsize=368). If you set a small batch size, please do not forget to lower the learning rate as the gradient would be stronger for a smaller batch size. Please also note that since I load the whole datasets into cpu memory to cut down IO overhead, you need at least 40G cpu memory. Hence I recommend you run it on a server.

Main results

Reference

If you find our work helpful in your research, please kindly cite our paper:

Hong-Xing Yu, Wei-Shi Zheng, Ancong Wu, Xiaowei Guo, Shaogang Gong and Jian-Huang Lai, "Unsupervised person re-identification by soft multilabel learning", In CVPR, 2019.

bib:

@inproceedings{yu2019unsupervised,
  title={Unsupervised Person Re-identification by Soft Multilabel Learning},
  author={Yu, Hong-Xing and Zheng, Wei-Shi and Wu, Ancong and Guo, Xiaowei and Gong, Shaogang and Lai, Jianhuang},
  year={2019},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
}

Contact

If you have any problems, please email me at [email protected], as I may not check the issues regularly.


Issues

Some questions about the loss and code

Thanks for your work.

  1. In Eq. (3) of the paper, the target-data similarity is computed with cosine similarity, but here it is Euclidean? The default metric of pdist is Euclidean.
  2. The cosine similarity computed here is not done the common way (e.g., with F.normalize in PyTorch). Why? What is the concern?
  3. The cosine similarity is finally scaled up by a factor of 30. Why?
  4. In Eq. (6) of the paper, the mean and std of the soft multilabels are updated by a moving average with weight 0.5, as described in the supplementary material, but the code uses batch_size / 10000. Why?
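
For context, a minimal sketch of the scaled cosine-softmax step that questions 2 and 3 refer to (paraphrased from trainers.py; the feature dimension and agent count below are placeholders): when both factors are L2-normalized, the matrix product is exactly cosine similarity, and multiplying by scala_ce=30 sharpens the softmax, since raw cosine values in [-1, 1] would give a nearly uniform distribution.

import torch
import torch.nn.functional as F

scala_ce = 30.0
features_target = F.normalize(torch.randn(8, 2048), dim=1)  # (batch, dim)
agents = F.normalize(torch.randn(500, 2048), dim=1)         # (classes, dim)

# Normalized dot products are cosine similarities; the scale turns
# them into usable softmax logits.
logits = features_target.mm(agents.t()) * scala_ce
multilabels = F.softmax(logits, dim=1)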

Lower r1, r5, r10 and MAP results

Thanks for your work. I set the batch size to 100 or 184 and the lr to 2e-5 or 1.414e-4, and got r1 0.000, r5 0.148, r10 0.208, mAP 5.395. And I did load the pre-trained model.

question about batchsize

In main.py line 13, I'm confused about args.batch_size//2. Why divide by 2? The batch size is set to 368, but it is actually 184 after being divided by 2. Can we just set batch_size=184 directly?
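
One plausible reading (an assumption, not confirmed by the author): batch_size is the combined batch, and each of the two loaders receives half, so one iteration sees batch_size//2 source images plus batch_size//2 target images; setting batch_size=184 directly would therefore give 92 + 92 per iteration, not 184 + 184.

# Hypothetical illustration of the split implied by the call in main.py.
batch_size = 368
source_batch = batch_size // 2  # images drawn from MSMT17 per iteration
target_batch = batch_size // 2  # images drawn from Market per iteration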

ValueError: axes don't match array

I tried to run the code, but I got this error:
Traceback (most recent call last):
File "src/main.py", line 46, in
main()
File "src/main.py", line 13, in main
args.crop_size, args.padding, args.batch_size//2, False)
File "/home/m904/zl-ReID/MAR-master/src/utils.py", line 522, in get_transfer_dataloaders
target_data = Market('data/{}.mat'.format(target), state='train')
File "/home/m904/zl-ReID/MAR-master/src/ReIDdatasets.py", line 33, in init
self.data = np.transpose(temp.value, (0, 3, 2, 1))
File "<array_function internals>", line 6, in transpose
File "/home/m904/.conda/envs/pytorch_wxy/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 650, in transpose
return _wrapfunc(a, 'transpose', axes)
File "/home/m904/.conda/envs/pytorch_wxy/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
return bound(*args, **kwds)
ValueError: axes don't match array
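
A quick diagnostic sketch (the cause suggested here is an assumption): the transpose in ReIDdatasets.py expects a 4-D image array, so if data/Market.mat was not produced by construct_dataset_Market.m, the stored dataset may have a different rank and np.transpose(..., (0, 3, 2, 1)) raises exactly this error. Inspect the file first:

import h5py

# Print every top-level dataset and its shape; the image array should be 4-D.
with h5py.File('data/Market.mat', 'r') as f:
    for key in f.keys():
        print(key, getattr(f[key], 'shape', '(group)'))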

question about loss

Hello, thanks for sharing.

The loss value becomes NaN when I run the code. I only changed the batch size; the rest of the parameters all use the default values.

I don't know how to solve it. Could you give me some suggestions? Thanks.

MemoryError

I don't have 40GB of CPU memory. Is there another solution?

Need pretrained_Duke.pth

If I use Duke as the source dataset for training, I need a pretrained model for Duke. Is the code used to train the pretrained model available?

loss_target does not decrease

==>>[2020-06-17 00:02:37] [Epoch=199/200] Stage 1, [Need: 00:03:45]
Iter: [000/220] Freq 130.8 loss_source 0.000 loss_st 0.169 loss_target 0.304 loss_total 1.993 [2020-06-17 00:02:40]
Iter: [100/220] Freq 360.5 loss_source 0.000 loss_st 0.163 loss_target 0.299 loss_total 1.928 [2020-06-17 00:04:20]
Iter: [200/220] Freq 363.8 loss_source 0.000 loss_st 0.163 loss_target 0.302 loss_total 1.934 [2020-06-17 00:06:00]
Train loss_source 0.000 loss_st 0.163 loss_target 0.301 loss_total 1.934

I tried my own data with this code but could not get good performance. There is a large gap between my source data and target data, so I removed loss_ml.

Reproduced results with default parameters are lower than the published ones

Thanks for your excellent work and kind code release. This work is elegant and inspires my future study.
However, when I run your released code with the default parameters on the Market dataset, the Rank1 and mAP are slightly lower than the published ones: Rank1 is 65.2 (67.7 in the paper) and mAP is 38.8 (40.0 in the paper) when the model converges.

  • Can you reproduce the results in the paper using this released code?
  • Do the parameters need to be fine-tuned slightly based on this released code?

Any suggestions for this mismatch? Thanks for your kind reply.

nan error

I used the default parameters, except the batch size, which I changed to 64 due to small GPU memory.
However, a NaN error appears after the first epoch:

python version : 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609]
torch version : 1.0.1

------------------------------------------------------- options --------------------------------------------------------
batch_size: 64 beta: 0.2 crop_size: (384, 128)
epochs: 20 gpu: 0 img_size: (384, 128)
lamb_1: 0.0002 lamb_2: 50.0 lr: 0.0002
margin: 1.0 mining_ratio: 0.005 ml_path: data/ml_Market.dat
padding: 7 pretrain_path: data/pretrained_weight.pth print_freq: 100
resume: save_path: debug scala_ce: 30.0
source: MSMT17 target: Market wd: 0.025

loaded pre-trained model from data/pretrained_weight.pth

==>>[2020-04-17 18:03:46] [Epoch=000/020] Stage 1, [Need: 00:00:00]
initializing centers/threshold ...
loaded ml from data/ml_Market.dat
initializing centers done.
initializing threshold done.
Iter: [000/3877] Freq 10.6 loss_target 0.000 loss_source 0.070 loss_ml 13879.977 loss_st 0.451 loss_total 10.812 [2020-04-17 18:04:08]
Iter: [100/3877] Freq 129.1 loss_target 0.000 loss_source 0.223 loss_ml 12678.907 loss_st 0.578 loss_total 19.478 [2020-04-17 18:04:52]
Iter: [200/3877] Freq 136.8 loss_target 0.000 loss_source 0.718 loss_ml 11486.483 loss_st 0.706 loss_total 45.257 [2020-04-17 18:05:36]
Iter: [300/3877] Freq 139.5 loss_target 0.000 loss_source 1.211 loss_ml 10696.569 loss_st 0.762 loss_total 70.312 [2020-04-17 18:06:20]
Iter: [400/3877] Freq 141.0 loss_target 0.000 loss_source 1.447 loss_ml 10321.419 loss_st 0.782 loss_total 82.236 [2020-04-17 18:07:04]
Iter: [500/3877] Freq 141.1 loss_target 0.000 loss_source 1.512 loss_ml 10035.379 loss_st 0.787 loss_total 85.473 [2020-04-17 18:07:49]
Iter: [600/3877] Freq 141.7 loss_target 0.000 loss_source 1.521 loss_ml 9804.266 loss_st 0.784 loss_total 85.846 [2020-04-17 18:08:33]
Iter: [700/3877] Freq 142.2 loss_target 0.000 loss_source 1.504 loss_ml 9656.519 loss_st 0.777 loss_total 84.899 [2020-04-17 18:09:18]
Iter: [800/3877] Freq 142.5 loss_target 0.000 loss_source 1.480 loss_ml 9529.720 loss_st 0.770 loss_total 83.625 [2020-04-17 18:10:02]
Iter: [900/3877] Freq 142.3 loss_target 0.000 loss_source 1.448 loss_ml 9396.765 loss_st 0.765 loss_total 81.939 [2020-04-17 18:10:47]
Iter: [1000/3877] Freq 142.5 loss_target 0.000 loss_source 1.417 loss_ml 9326.110 loss_st 0.761 loss_total 80.334 [2020-04-17 18:11:32]
Iter: [1100/3877] Freq 142.7 loss_target 0.000 loss_source 1.386 loss_ml 9234.825 loss_st 0.757 loss_total 78.692 [2020-04-17 18:12:16]
Iter: [1200/3877] Freq 142.8 loss_target 0.000 loss_source 1.354 loss_ml 9180.113 loss_st 0.752 loss_total 77.062 [2020-04-17 18:13:00]
Iter: [1300/3877] Freq 142.7 loss_target 0.000 loss_source 1.325 loss_ml 9123.445 loss_st 0.746 loss_total 75.557 [2020-04-17 18:13:46]
Iter: [1400/3877] Freq 142.8 loss_target 0.000 loss_source 1.297 loss_ml 9052.444 loss_st 0.742 loss_total 74.055 [2020-04-17 18:14:30]
Iter: [1500/3877] Freq 142.9 loss_target 0.000 loss_source 1.268 loss_ml 8993.854 loss_st 0.737 loss_total 72.588 [2020-04-17 18:15:14]
Iter: [1600/3877] Freq 143.0 loss_target 0.000 loss_source 1.240 loss_ml 8949.674 loss_st 0.733 loss_total 71.113 [2020-04-17 18:15:58]
Iter: [1700/3877] Freq 142.9 loss_target 0.000 loss_source 1.216 loss_ml 8908.284 loss_st 0.730 loss_total 69.876 [2020-04-17 18:16:44]
Iter: [1800/3877] Freq 143.0 loss_target 0.000 loss_source 1.191 loss_ml 8866.926 loss_st 0.726 loss_total 68.567 [2020-04-17 18:17:28]
Iter: [1900/3877] Freq 143.1 loss_target 0.000 loss_source 1.167 loss_ml 8835.746 loss_st 0.722 loss_total 67.353 [2020-04-17 18:18:12]
Iter: [2000/3877] Freq 143.2 loss_target 0.000 loss_source 1.142 loss_ml 8806.737 loss_st 0.718 loss_total 66.061 [2020-04-17 18:18:56]
Iter: [2100/3877] Freq 143.1 loss_target 0.000 loss_source 1.121 loss_ml 8780.041 loss_st 0.715 loss_total 64.979 [2020-04-17 18:19:42]
Iter: [2200/3877] Freq 143.2 loss_target 0.000 loss_source 1.102 loss_ml 8744.079 loss_st 0.712 loss_total 63.964 [2020-04-17 18:20:26]
Iter: [2300/3877] Freq 143.3 loss_target 0.000 loss_source 1.086 loss_ml 8710.513 loss_st 0.710 loss_total 63.124 [2020-04-17 18:21:10]
Iter: [2400/3877] Freq 143.3 loss_target 0.000 loss_source 1.068 loss_ml 8682.339 loss_st 0.707 loss_total 62.225 [2020-04-17 18:21:54]
Iter: [2500/3877] Freq 143.2 loss_target 0.000 loss_source 1.054 loss_ml 8654.118 loss_st 0.705 loss_total 61.497 [2020-04-17 18:22:40]
Iter: [2600/3877] Freq 143.3 loss_target 0.000 loss_source 1.039 loss_ml 8635.352 loss_st 0.703 loss_total 60.705 [2020-04-17 18:23:24]
Iter: [2700/3877] Freq 143.3 loss_target 0.000 loss_source 1.026 loss_ml 8602.657 loss_st 0.701 loss_total 60.008 [2020-04-17 18:24:08]
Iter: [2800/3877] Freq 143.4 loss_target 0.000 loss_source 1.011 loss_ml 8580.846 loss_st 0.698 loss_total 59.240 [2020-04-17 18:24:52]
Iter: [2900/3877] Freq 143.3 loss_target 0.000 loss_source 0.997 loss_ml 8564.657 loss_st 0.696 loss_total 58.499 [2020-04-17 18:25:38]
Iter: [3000/3877] Freq 143.3 loss_target 0.000 loss_source 0.983 loss_ml 8544.973 loss_st 0.694 loss_total 57.802 [2020-04-17 18:26:22]
Iter: [3100/3877] Freq 143.4 loss_target 0.000 loss_source 0.971 loss_ml 8523.918 loss_st 0.692 loss_total 57.159 [2020-04-17 18:27:06]
Iter: [3200/3877] Freq 143.4 loss_target 0.000 loss_source 0.959 loss_ml 8506.227 loss_st 0.691 loss_total 56.549 [2020-04-17 18:27:51]
Iter: [3300/3877] Freq 143.3 loss_target 0.000 loss_source 0.948 loss_ml 8495.211 loss_st 0.689 loss_total 56.004 [2020-04-17 18:28:36]
Iter: [3400/3877] Freq 143.4 loss_target 0.000 loss_source 0.936 loss_ml 8476.330 loss_st 0.687 loss_total 55.355 [2020-04-17 18:29:20]
Iter: [3500/3877] Freq 143.4 loss_target 0.000 loss_source 0.925 loss_ml 8460.062 loss_st 0.685 loss_total 54.781 [2020-04-17 18:30:04]
Iter: [3600/3877] Freq 143.5 loss_target 0.000 loss_source 0.913 loss_ml 8443.244 loss_st 0.684 loss_total 54.175 [2020-04-17 18:30:48]
Iter: [3700/3877] Freq 143.4 loss_target 0.000 loss_source 0.901 loss_ml 8419.879 loss_st 0.682 loss_total 53.564 [2020-04-17 18:31:34]
Iter: [3800/3877] Freq 143.4 loss_target 0.000 loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:32:18]
Train loss_target 0.000 loss_source nan loss_ml nan loss_st nan loss_total nan

==>>[2020-04-17 18:32:53] [Epoch=001/020] Stage 1, [Need: 09:13:00]
Iter: [000/3877] Freq 43.2 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:32:54]
Iter: [100/3877] Freq 137.4 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:33:40]
Iter: [200/3877] Freq 137.9 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:34:26]
Iter: [300/3877] Freq 138.1 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:35:12]
Iter: [400/3877] Freq 138.2 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:35:58]
Iter: [500/3877] Freq 137.1 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:36:46]
Iter: [600/3877] Freq 136.9 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:37:34]
Iter: [700/3877] Freq 136.6 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:38:21]
Iter: [800/3877] Freq 136.7 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:39:08]
Iter: [900/3877] Freq 136.0 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:39:57]
Iter: [1000/3877] Freq 136.2 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:40:43]
Iter: [1100/3877] Freq 136.4 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:41:29]
Iter: [1200/3877] Freq 136.6 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:42:15]
Iter: [1300/3877] Freq 136.6 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:43:02]
Iter: [1400/3877] Freq 136.8 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:43:48]
Iter: [1500/3877] Freq 136.9 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:44:34]
Iter: [1600/3877] Freq 137.0 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:45:20]
Iter: [1700/3877] Freq 137.0 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:46:07]
Iter: [1800/3877] Freq 137.1 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:46:53]
Iter: [1900/3877] Freq 137.1 loss_target nan loss_source nan loss_ml nan loss_st nan loss_total nan [2020-04-17 18:47:40]

invalid value

/home/usr/MAR/src/utils.py:162: RuntimeWarning: invalid value encountered in greater
is_positive = p_agree[similar_idx] > self.threshold.item()

As you suggested in a previous issue, I reduced the batch size and lr, but I am still getting the error. How should I deal with it? I am using 2 GPUs with 12GB each.
Iter: [900/2481] Freq 213.2 loss_total nan loss_ml nan loss_st nan loss_target nan loss_source nan [2019-06-17 11:23:52]

After the first epoch, I get NaN every time, with batch_size=60 and lr=0.0002.

And when I try to run on 2 RTX GPUs with 24GB each, I get this error:
Traceback (most recent call last):

File "src/main.py", line 46, in
main()
File "src/main.py", line 35, in main
meters_trn = trainer.train_epoch(source_loader, target_loader, epoch)
File "/home/saif/MAR/src/trainers.py", line 123, in train_epoch
multilabels = F.softmax(features_target.mm(agents.detach().t_()*self.args.scala_ce), dim=1)
RuntimeError: set_storage_offset is not allowed on Tensor created from .data or .detach()

I was facing some problems with PyTorch and CUDA, so I installed the nightly build.
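
A likely fix for the RuntimeError (an assumption based on the traceback, not an official patch): t_() is an in-place transpose applied to a tensor returned by .detach(), which newer PyTorch builds forbid; the out-of-place t() computes the same product without mutating the detached view.

import torch
import torch.nn.functional as F

scala_ce = 30.0
features_target = torch.randn(8, 2048)
agents = torch.randn(500, 2048, requires_grad=True)

# Original (trainers.py line 123), which fails on newer PyTorch:
#   features_target.mm(agents.detach().t_() * scala_ce)
# Out-of-place transpose, same result, no error:
multilabels = F.softmax(features_target.mm(agents.detach().t() * scala_ce), dim=1)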

About al_loss

In trainers.py, lines 117-120:
features, similarity, _ = self.net(imgs)
features_target, similarity_target, _ = self.net(imgs_target)
scores = similarity * self.args.scala_ce
loss_source = self.al_loss(scores, labels)
al_loss takes labels and scores. In this code, scores comes from imgs, but in Section 3.4 of the paper it comes from the target images. The authors say al_loss is minimized on the auxiliary dataset. Could you explain this? I am confused.
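
One hedged reading of the snippet above (the variable origins are inferred from train_epoch(source_loader, target_loader, epoch); the toy tensors below are stand-ins): imgs and labels come from the source (auxiliary) loader, so al_loss is indeed computed on the auxiliary dataset, consistent with Section 3.4.

import torch
import torch.nn as nn

scala_ce = 30.0
al_loss = nn.CrossEntropyLoss()
similarity = torch.randn(8, 500)       # cosine similarities for a SOURCE batch
labels = torch.randint(0, 500, (8,))   # source identity labels

scores = similarity * scala_ce         # scaled logits, as in the snippet
loss_source = al_loss(scores, labels)  # classification loss on source data only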

construct_dataset_Market.m

Hello, I have a question. In the MATLAB code, file "construct_dataset_Market.m", lines 50-52 read:
train_labels = uniquize(train_labels);
gallery_labels = uniquize(gallery_labels);
probe_labels = uniquize(probe_labels);
I cannot find any information about the function "uniquize", and when I run this MATLAB file something goes wrong: "Undefined function 'uniquize' for input arguments of type 'int64'. Error in construct_dataset_Market (line 50): train_labels = uniquize(train_labels);"
I want to know why. Thanks again.
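
A guess at what the missing helper does (the release apparently omits it): remap raw identity labels to consecutive integers, which is what downstream classification code typically needs. A Python equivalent of that guess:

import numpy as np

def uniquize(labels):
    # Map each distinct label to a consecutive integer 1..K
    # (ordered by sorted label value here), MATLAB-style 1-based.
    _, inverse = np.unique(labels, return_inverse=True)
    return inverse + 1

print(uniquize(np.array([7, 7, 42, 3, 42])))  # [2 2 3 1 3]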

question about processing dataset

Thanks for sharing!
I have a question about the processed dataset.
The processed Market-1501 you provide is 2.75GB,
but the Market dataset converted with the code you provide is only 314.9MB.
What is going on?

two questions about dataset

Hello, thanks for sharing!
I've hit two errors after running the program.
1. The dataset I downloaded is MSMT17_V2, not V1, and it has no folders named test and train.
2. On my Windows machine, running the program in an Anaconda environment, a bug occurred with this error:
Traceback (most recent call last):
File "src\main.py", line 47, in
main()
File "src\main.py", line 14, in main
args.crop_size, args.padding, args.batch_size//2, False)
File "C:\Users\Documents\MAR-master\MAR-master\src\utils.py", line 533, in get_transfer_dataloaders
source_data = FullTraining('data/{}.mat'.format(source))
File "C:\Users\Documents\MAR-master\MAR-master\src\ReIDdatasets.py", line 108, in init
self.data = np.transpose(temp.value, (0, 3, 2, 1))
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\AppData\Roaming\Python\Python36\site-packages\h5py_hl\dataset.py", line 250, in value
return self[()]
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "C:\Users\idriver\AppData\Roaming\Python\Python36\site-packages\h5py_hl\dataset.py", line 496, in getitem
self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5d.pyx", line 181, in h5py.h5d.DatasetID.read
File "h5py_proxy.pyx", line 130, in h5py._proxy.dset_rw
File "h5py_proxy.pyx", line 84, in h5py._proxy.H5PY_H5Dread
OSError: Can't read data (inflate() failed)"
I don't know how to solve these problems. Please help me, thank you!

Unable to open file: name = 'data/market.mat', ??

Hello, the content downloaded from the Baidu cloud link and the code do not mention market.mat. Should it be Market.mat? Why does this error occur?
File "src/main.py", line 47, in
main()
File "src/main.py", line 14, in main
args.crop_size, args.padding, args.batch_size//2, False)
File "/home/gfp/Downloads/MAR-master/src/utils.py", line 525, in get_transfer_dataloaders
target_data = Market('data/{}.mat'.format(target), state='train')
File "/home/gfp/Downloads/MAR-master/src/ReIDdatasets.py", line 19, in init
f = h5py.File(self.root, 'r')
File "/home/gfp/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 272, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/gfp/anaconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 92, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1482475225177/work/h5py/_objects.c:2856)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1482475225177/work/h5py/_objects.c:2814)
File "h5py/h5f.pyx", line 76, in h5py.h5f.open (/home/ilan/minonda/conda-bld/h5py_1482475225177/work/h5py/h5f.c:2102)
OSError: Unable to open file (Unable to open file: name = 'data/market.mat', errno = 2, error message = 'no such file or directory', flags = 0, o_flags = 0)

A Question Concerning The Datasets Being Used

Hello,
Thanks for your work, from which I have learned a lot. However, I found that most methods in the literature use datasets like DukeMTMC-reID and Market-1501 instead of MSMT17. So I was wondering if you have done any experiments for D->M or M->D. If so, could you please release or send me the results? Thanks in advance.

About the MSMT17 dataset

Thanks for sharing the MSMT17 dataset. As MSMT17 can no longer be downloaded from the official website, your sharing is really helpful. But the processed data in '.mat' format lacks the camera information of the images, which is important for the testing phase. May I ask for the raw MSMT17 dataset, or for the missing camera information of the query and gallery images? Thank you very much!

MSMT17.mat can't be read

I used the python command to run this code, but an 'Input/output error' was shown immediately:

OSError: Can't read data (file read failed: time = Thu Oct 3 13:19:29 2019
, filename = 'data/MSMT17.mat', file descriptor = 6, errno = 5, error message = 'Input/output error', buf = 0x5629561acaa0, total read size = 3656, bytes this sub-read = 3656, bytes actually read = 18446744073709551615, offset = 6684483979)

Is there some problem with the MSMT17.mat data? I re-downloaded it and tried three times, but got the same error.

Could you please give me some suggestions or clues to solve this? Many thanks!

Checkpoint resume error

I tried to load the checkpoint and resume the trainer to continue training from the 20th epoch (saved) to the 30th. Then this error shows up:

Traceback (most recent call last):
File "/home/xxx/project/MAR/src/main.py", line 48, in
main()
File "/home/xxx/project/MAR/src/main.py", line 37, in main
meters_trn = trainer.train_epoch(source_loader, target_loader, epoch)
File "/home/xxx/project/MAR/src/trainers.py", line 154, in train_epoch
save_checkpoint(self, epoch, os.path.join(self.args.save_path, "checkpoints.pth"))
File "/home/xxx/project/MAR/src/utils.py", line 446, in save_checkpoint
torch.save((trainer, epoch), save_path)
File "/home/xxx/.local/lib/python3.6/site-packages/torch/serialization.py", line 224, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/xxx/.local/lib/python3.6/site-packages/torch/serialization.py", line 149, in _with_file_like
return body(f)
File "/home/xxx/.local/lib/python3.6/site-packages/torch/serialization.py", line 224, in
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/xxx/.local/lib/python3.6/site-packages/torch/serialization.py", line 297, in _save
pickler.dump(obj)
TypeError: can't pickle _thread.lock objects

Is it that you save the entire trainer object to 'checkpoints.pth', rather than saving the model.state_dict() and the epoch? I find that saving the model.state_dict() is the typical way to resume a model.

Continued: checkpoint resume error

First, I want to confirm the right way to resume the model:

  1. Save the trainer checkpoint.
  2. Set args.resume to the path where the checkpoint was saved, i.e. runs/debug/checkpoints.pth.
  3. Change args.yaml to a larger epoch count.
  4. Run main.py.

Following the above steps, the error persists, even after downgrading PyTorch to 1.0.0 (my previous version was 1.1.0).

Traceback (most recent call last):
File "/home/xxxxxx/project/MAR/src/main.py", line 46, in
main()
File "/home/xxxxxx/project/MAR/src/main.py", line 35, in main
meters_trn = trainer.train_epoch(source_loader, target_loader, epoch)
File "/home/xxxxxx/project/MAR/src/trainers.py", line 155, in train_epoch
save_checkpoint(self, epoch, os.path.join(self.args.save_path, "checkpoints.pth"))
File "/home/xxxxxx/project/MAR/src/utils.py", line 442, in save_checkpoint
torch.save((trainer, epoch), save_path)
File "/home/xxxxxx/anaconda3/envs/MAR/lib/python3.6/site-packages/torch/serialization.py", line 218, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/xxxxxx/anaconda3/envs/MAR/lib/python3.6/site-packages/torch/serialization.py", line 143, in _with_file_like
return body(f)
File "/home/xxxxxx/anaconda3/envs/MAR/lib/python3.6/site-packages/torch/serialization.py", line 218, in
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "/home/xxxxxx/anaconda3/envs/MAR/lib/python3.6/site-packages/torch/serialization.py", line 291, in _save
pickler.dump(obj)
TypeError: can't pickle _thread.lock objects

The checkpoint loads successfully, but saving the newly trained checkpoint fails.

  • I wonder whether I need to save the newer checkpoint under another name, i.e., checkpoint2.pth.
  • Another way to solve my problem is to change the model save method: save the model.state_dict().

If I have to change the save method so that it only saves the model.state_dict(), do you have any suggestions about this change? Do I only need to save the model.state_dict() and the epoch to the checkpoint, or are there other details that need attention? (A sketch is given below.)

Thanks for your attention and kind reply.
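
A minimal sketch of the state_dict-based alternative asked about above (an assumption, not the released code; the attribute names trainer.net and trainer.optimizer are guesses): save only picklable state instead of the whole trainer object, which drags in unpicklable members such as data-loader locks.

import torch

def save_checkpoint(trainer, epoch, path):
    # Persist only picklable state, not the trainer itself.
    torch.save({
        'epoch': epoch,
        'model': trainer.net.state_dict(),
        'optimizer': trainer.optimizer.state_dict(),
    }, path)

def load_checkpoint(trainer, path):
    ckpt = torch.load(path, map_location='cpu')
    trainer.net.load_state_dict(ckpt['model'])
    trainer.optimizer.load_state_dict(ckpt['optimizer'])
    return ckpt['epoch']

Any extra trainer state (e.g., the soft multilabel statistics and the mining threshold) would need to be stored in the same dict explicitly.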

Why do we need the function dist_idx_to_pair_idx?

Thanks for your sharing.

Why should we have the function dist_idx_to_pair_idx(d, i)?

def dist_idx_to_pair_idx(d, i):

In fact, I still don't understand this function. What I know is that it converts a distance index into a pair index of the form (i, j) with i < j, but I have no idea about its parameters. Can someone explain it to me or point me to some related material?

Thanks : )
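
A hedged sketch of what dist_idx_to_pair_idx(d, i) presumably computes (the argument meanings are assumptions: d = number of points, i = indices into the condensed distance vector). For n points, pdist returns n*(n-1)/2 distances ordered (0,1), (0,2), ..., (0,n-1), (1,2), ...; the standard inversion of that flattening is:

import numpy as np

def dist_idx_to_pair_idx(d, i):
    # Map condensed pdist indices i to row/column pairs (x, y) with x < y.
    x = d - 2 - np.floor(np.sqrt(-8 * i + 4 * d * (d - 1) - 7) / 2.0 - 0.5)
    y = i + x + 1 - d * (d - 1) / 2 + (d - x) * ((d - x) - 1) / 2
    return x.astype(int), y.astype(int)

print(dist_idx_to_pair_idx(4, np.arange(6)))
# (array([0, 0, 0, 1, 1, 2]), array([1, 2, 3, 2, 3, 3]))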

Changing the batch size gives low performance

Due to GPU constraints, I changed the batch size to 184 (half the default) and the learning rate to 0.0001, but got bad performance. This is my log; I don't know why this result occurs.

python version : 3.6.7 | packaged by conda-forge | (default, Jul 2 2019, 02:18:42) [GCC 7.3.0]
torch version : 1.0.0

------------------------------------------------------- options --------------------------------------------------------
batch_size: 184 beta: 0.2 crop_size: (384, 128)
epochs: 20 gpu: 2,3,4 img_size: (384, 128)
lamb_1: 0.0002 lamb_2: 50.0 lr: 0.0001
margin: 1.0 mining_ratio: 0.005 ml_path: data/ml_Market.dat
padding: 7 pretrain_path: data/pretrained_weight.pth print_freq: 100
resume: save_path: runs/debug scala_ce: 30.0
source: MSMT17 target: Market wd: 0.025

data/pretrained_weight.pth is not a file. train from scratch.

==>>[2019-10-30 20:45:30] [Epoch=000/020] Stage 1, [Need: 00:00:00]
initializing centers/threshold ...
loaded ml from data/ml_Market.dat
initializing centers done.
initializing threshold done.
Iter: [000/1348] Freq 8.1 loss_source 8.495 loss_st 1.997 loss_ml 1692.398 loss_target 0.000 loss_total 445.052 [2019-10-30 20:46:16]
Iter: [100/1348] Freq 115.3 loss_source 8.465 loss_st 1.984 loss_ml 881.421 loss_target 0.000 loss_total 443.282 [2019-10-30 20:48:34]
Iter: [200/1348] Freq 122.1 loss_source 8.419 loss_st 1.977 loss_ml 756.861 loss_target 0.000 loss_total 440.853 [2019-10-30 20:50:56]
Iter: [300/1348] Freq 124.7 loss_source 8.364 loss_st 1.970 loss_ml 711.933 loss_target 0.000 loss_total 438.050 [2019-10-30 20:53:17]
Iter: [400/1348] Freq 126.2 loss_source 8.308 loss_st 1.964 loss_ml 686.483 loss_target 0.000 loss_total 435.183 [2019-10-30 20:55:38]
Iter: [500/1348] Freq 127.0 loss_source 8.249 loss_st 1.957 loss_ml 671.668 loss_target 0.000 loss_total 432.163 [2019-10-30 20:57:59]
Iter: [600/1348] Freq 127.4 loss_source 8.191 loss_st 1.951 loss_ml 666.046 loss_target 0.000 loss_total 429.183 [2019-10-30 21:00:21]
Iter: [700/1348] Freq 127.8 loss_source 8.127 loss_st 1.944 loss_ml 669.111 loss_target 0.000 loss_total 425.925 [2019-10-30 21:02:42]
Iter: [800/1348] Freq 128.2 loss_source 8.063 loss_st 1.936 loss_ml 675.569 loss_target 0.000 loss_total 422.627 [2019-10-30 21:05:02]
Iter: [900/1348] Freq 128.5 loss_source 8.002 loss_st 1.929 loss_ml 688.473 loss_target 0.000 loss_total 419.550 [2019-10-30 21:07:23]
Iter: [1000/1348] Freq 128.5 loss_source 7.942 loss_st 1.922 loss_ml 704.718 loss_target 0.000 loss_total 416.474 [2019-10-30 21:09:46]
Iter: [1100/1348] Freq 128.7 loss_source 7.885 loss_st 1.915 loss_ml 721.545 loss_target 0.000 loss_total 413.554 [2019-10-30 21:12:07]
Iter: [1200/1348] Freq 128.8 loss_source 7.829 loss_st 1.908 loss_ml 741.383 loss_target 0.000 loss_total 410.685 [2019-10-30 21:14:29]
Iter: [1300/1348] Freq 128.8 loss_source 7.777 loss_st 1.901 loss_ml 762.890 loss_target 0.000 loss_total 408.010 [2019-10-30 21:16:52]
Train loss_source 7.751 loss_st 1.898 loss_ml 772.180 loss_target 0.000 loss_total 406.709

==>>[2019-10-30 21:18:00] [Epoch=001/020] Stage 1, [Need: 10:17:22]
Iter: [000/1348] Freq 71.1 loss_source 6.665 loss_st 1.783 loss_ml 852.047 loss_target 0.631 loss_total 351.859 [2019-10-30 21:18:02]
Iter: [100/1348] Freq 117.9 loss_source 6.878 loss_st 1.794 loss_ml 1070.804 loss_target 0.668 loss_total 362.707 [2019-10-30 21:20:37]
Iter: [200/1348] Freq 117.6 loss_source 6.848 loss_st 1.789 loss_ml 1113.840 loss_target 0.673 loss_total 361.164 [2019-10-30 21:23:14]
Iter: [300/1348] Freq 117.9 loss_source 6.814 loss_st 1.784 loss_ml 1146.866 loss_target 0.676 loss_total 359.441 [2019-10-30 21:25:49]
Iter: [400/1348] Freq 118.1 loss_source 6.776 loss_st 1.779 loss_ml 1174.303 loss_target 0.677 loss_total 357.498 [2019-10-30 21:28:25]
Iter: [500/1348] Freq 118.1 loss_source 6.736 loss_st 1.773 loss_ml 1202.633 loss_target 0.678 loss_total 355.470 [2019-10-30 21:31:00]
Iter: [600/1348] Freq 118.0 loss_source 6.696 loss_st 1.768 loss_ml 1222.033 loss_target 0.678 loss_total 353.406 [2019-10-30 21:33:37]
Iter: [700/1348] Freq 118.1 loss_source 6.655 loss_st 1.763 loss_ml 1240.679 loss_target 0.678 loss_total 351.311 [2019-10-30 21:36:12]
Iter: [800/1348] Freq 118.2 loss_source 6.613 loss_st 1.757 loss_ml 1259.879 loss_target 0.677 loss_total 349.159 [2019-10-30 21:38:46]
Iter: [900/1348] Freq 118.3 loss_source 6.569 loss_st 1.752 loss_ml 1275.914 loss_target 0.677 loss_total 346.914 [2019-10-30 21:41:22]
Iter: [1000/1348] Freq 118.3 loss_source 6.523 loss_st 1.746 loss_ml 1299.174 loss_target 0.676 loss_total 344.528 [2019-10-30 21:43:57]
Iter: [1100/1348] Freq 118.4 loss_source 6.477 loss_st 1.740 loss_ml 1324.785 loss_target 0.676 loss_total 342.188 [2019-10-30 21:46:31]
Iter: [1200/1348] Freq 118.3 loss_source 6.427 loss_st 1.734 loss_ml 1345.513 loss_target 0.675 loss_total 339.612 [2019-10-30 21:49:08]
Iter: [1300/1348] Freq 118.4 loss_source 6.378 loss_st 1.728 loss_ml 1365.731 loss_target 0.674 loss_total 337.120 [2019-10-30 21:51:42]
Train loss_source 6.356 loss_st 1.725 loss_ml 1379.448 loss_target 0.674 loss_total 336.005

==>>[2019-10-30 21:53:01] [Epoch=002/020] Stage 1, [Need: 10:07:34]
Iter: [000/1348] Freq 63.7 loss_source 5.287 loss_st 1.614 loss_ml 1125.035 loss_target 0.623 loss_total 281.356 [2019-10-30 21:53:04]
Iter: [100/1348] Freq 117.2 loss_source 5.382 loss_st 1.620 loss_ml 1727.869 loss_target 0.662 loss_total 286.304 [2019-10-30 21:55:39]
Iter: [200/1348] Freq 117.5 loss_source 5.373 loss_st 1.616 loss_ml 1720.451 loss_target 0.657 loss_total 285.839 [2019-10-30 21:58:15]
Iter: [300/1348] Freq 117.6 loss_source 5.342 loss_st 1.611 loss_ml 1738.493 loss_target 0.655 loss_total 284.194 [2019-10-30 22:00:52]
Iter: [400/1348] Freq 118.0 loss_source 5.299 loss_st 1.605 loss_ml 1780.929 loss_target 0.654 loss_total 281.997 [2019-10-30 22:03:26]
Iter: [500/1348] Freq 117.9 loss_source 5.256 loss_st 1.599 loss_ml 1806.289 loss_target 0.654 loss_total 279.826 [2019-10-30 22:06:03]
Iter: [600/1348] Freq 118.0 loss_source 5.212 loss_st 1.593 loss_ml 1827.316 loss_target 0.652 loss_total 277.566 [2019-10-30 22:08:38]
Iter: [700/1348] Freq 118.1 loss_source 5.174 loss_st 1.587 loss_ml 1852.025 loss_target 0.650 loss_total 275.590 [2019-10-30 22:11:13]
Iter: [800/1348] Freq 118.2 loss_source 5.125 loss_st 1.581 loss_ml 1878.133 loss_target 0.648 loss_total 273.101 [2019-10-30 22:13:48]
Iter: [900/1348] Freq 118.1 loss_source 5.074 loss_st 1.575 loss_ml 1909.571 loss_target 0.645 loss_total 270.458 [2019-10-30 22:16:24]
Iter: [1000/1348] Freq 118.1 loss_source 5.024 loss_st 1.568 loss_ml 1937.472 loss_target 0.643 loss_total 267.940 [2019-10-30 22:19:00]
Iter: [1100/1348] Freq 118.1 loss_source 4.976 loss_st 1.562 loss_ml 1959.263 loss_target 0.642 loss_total 265.439 [2019-10-30 22:21:36]
Iter: [1200/1348] Freq 118.1 loss_source 4.928 loss_st 1.556 loss_ml 1991.780 loss_target 0.639 loss_total 262.978 [2019-10-30 22:24:13]
Iter: [1300/1348] Freq 118.0 loss_source 4.879 loss_st 1.550 loss_ml 2014.268 loss_target 0.638 loss_total 260.492 [2019-10-30 22:26:49]
Train loss_source 4.857 loss_st 1.547 loss_ml 2025.439 loss_target 0.637 loss_total 259.346

==>>[2019-10-30 22:28:10] [Epoch=003/020] Stage 1, [Need: 09:41:42]
Iter: [000/1348] Freq 70.8 loss_source 3.744 loss_st 1.440 loss_ml 2014.427 loss_target 0.549 loss_total 202.534 [2019-10-30 22:28:12]
Iter: [100/1348] Freq 118.7 loss_source 3.883 loss_st 1.442 loss_ml 2296.210 loss_target 0.623 loss_total 209.659 [2019-10-30 22:30:46]
Iter: [200/1348] Freq 118.4 loss_source 3.850 loss_st 1.437 loss_ml 2342.424 loss_target 0.616 loss_total 207.959 [2019-10-30 22:33:22]
Iter: [300/1348] Freq 118.1 loss_source 3.832 loss_st 1.432 loss_ml 2354.441 loss_target 0.611 loss_total 206.981 [2019-10-30 22:35:58]
Iter: [400/1348] Freq 118.3 loss_source 3.806 loss_st 1.428 loss_ml 2361.431 loss_target 0.607 loss_total 205.636 [2019-10-30 22:38:33]
Iter: [500/1348] Freq 118.2 loss_source 3.773 loss_st 1.424 loss_ml 2351.717 loss_target 0.609 loss_total 203.960 [2019-10-30 22:41:09]
Iter: [600/1348] Freq 117.9 loss_source 3.737 loss_st 1.419 loss_ml 2344.352 loss_target 0.608 loss_total 202.102 [2019-10-30 22:43:48]
Iter: [700/1348] Freq 117.6 loss_source 3.697 loss_st 1.413 loss_ml 2353.415 loss_target 0.608 loss_total 200.078 [2019-10-30 22:46:26]
Iter: [800/1348] Freq 117.8 loss_source 3.666 loss_st 1.409 loss_ml 2361.779 loss_target 0.609 loss_total 198.459 [2019-10-30 22:49:01]
Iter: [900/1348] Freq 117.8 loss_source 3.626 loss_st 1.404 loss_ml 2377.575 loss_target 0.609 loss_total 196.444 [2019-10-30 22:51:37]
Iter: [1000/1348] Freq 117.7 loss_source 3.587 loss_st 1.399 loss_ml 2384.437 loss_target 0.608 loss_total 194.446 [2019-10-30 22:54:14]
Iter: [1100/1348] Freq 117.8 loss_source 3.557 loss_st 1.395 loss_ml 2396.515 loss_target 0.608 loss_total 192.901 [2019-10-30 22:56:49]
Iter: [1200/1348] Freq 117.8 loss_source 3.526 loss_st 1.390 loss_ml 2410.275 loss_target 0.607 loss_total 191.275 [2019-10-30 22:59:26]
Iter: [1300/1348] Freq 117.7 loss_source 3.493 loss_st 1.386 loss_ml 2417.496 loss_target 0.606 loss_total 189.600 [2019-10-30 23:02:03]
Train loss_source 3.482 loss_st 1.385 loss_ml 2420.889 loss_target 0.606 loss_total 189.025

==>>[2019-10-30 23:03:21] [Epoch=004/020] Stage 1, [Need: 09:11:21]
Iter: [000/1348] Freq 66.0 loss_source 2.598 loss_st 1.289 loss_ml 2624.562 loss_target 0.544 loss_total 143.874 [2019-10-30 23:03:23]
Iter: [100/1348] Freq 117.3 loss_source 2.691 loss_st 1.299 loss_ml 2518.045 loss_target 0.581 loss_total 148.599 [2019-10-30 23:05:59]
Iter: [200/1348] Freq 117.7 loss_source 2.662 loss_st 1.295 loss_ml 2494.907 loss_target 0.581 loss_total 147.152 [2019-10-30 23:08:35]
Iter: [300/1348] Freq 117.7 loss_source 2.639 loss_st 1.291 loss_ml 2497.345 loss_target 0.582 loss_total 145.933 [2019-10-30 23:11:11]
Iter: [400/1348] Freq 118.0 loss_source 2.623 loss_st 1.288 loss_ml 2493.339 loss_target 0.582 loss_total 145.099 [2019-10-30 23:13:46]
Iter: [500/1348] Freq 118.0 loss_source 2.602 loss_st 1.284 loss_ml 2503.495 loss_target 0.585 loss_total 144.030 [2019-10-30 23:16:22]
Iter: [600/1348] Freq 118.0 loss_source 2.578 loss_st 1.281 loss_ml 2514.068 loss_target 0.585 loss_total 142.801 [2019-10-30 23:18:58]
Iter: [700/1348] Freq 117.9 loss_source 2.560 loss_st 1.278 loss_ml 2516.724 loss_target 0.587 loss_total 141.842 [2019-10-30 23:21:35]
Iter: [800/1348] Freq 118.0 loss_source 2.541 loss_st 1.274 loss_ml 2523.856 loss_target 0.588 loss_total 140.886 [2019-10-30 23:24:10]
Iter: [900/1348] Freq 118.0 loss_source 2.524 loss_st 1.271 loss_ml 2527.430 loss_target 0.587 loss_total 140.003 [2019-10-30 23:26:45]
Iter: [1000/1348] Freq 118.0 loss_source 2.506 loss_st 1.268 loss_ml 2531.789 loss_target 0.587 loss_total 139.081 [2019-10-30 23:29:22]
Iter: [1100/1348] Freq 118.1 loss_source 2.484 loss_st 1.264 loss_ml 2527.871 loss_target 0.586 loss_total 137.932 [2019-10-30 23:31:57]
Iter: [1200/1348] Freq 118.0 loss_source 2.464 loss_st 1.261 loss_ml 2531.401 loss_target 0.587 loss_total 136.907 [2019-10-30 23:34:33]
Iter: [1300/1348] Freq 118.1 loss_source 2.444 loss_st 1.258 loss_ml 2537.023 loss_target 0.589 loss_total 135.867 [2019-10-30 23:37:08]
Train loss_source 2.435 loss_st 1.256 loss_ml 2542.517 loss_target 0.590 loss_total 135.391

==>>[2019-10-30 23:38:25] [Epoch=005/020] Stage 1, [Need: 08:38:43]
Iter: [000/1348] Freq 68.6 loss_source 2.245 loss_st 1.227 loss_ml 2518.969 loss_target 0.699 loss_total 125.706 [2019-10-30 23:38:27]
Iter: [100/1348] Freq 118.0 loss_source 1.845 loss_st 1.184 loss_ml 2666.924 loss_target 0.591 loss_total 105.228 [2019-10-30 23:41:02]
Iter: [200/1348] Freq 117.8 loss_source 1.831 loss_st 1.181 loss_ml 2640.690 loss_target 0.591 loss_total 104.495 [2019-10-30 23:43:39]
Iter: [300/1348] Freq 117.5 loss_source 1.820 loss_st 1.180 loss_ml 2617.161 loss_target 0.591 loss_total 103.931 [2019-10-30 23:46:16]
Iter: [400/1348] Freq 118.0 loss_source 1.820 loss_st 1.179 loss_ml 2603.656 loss_target 0.585 loss_total 103.901 [2019-10-30 23:48:50]
Iter: [500/1348] Freq 117.8 loss_source 1.821 loss_st 1.177 loss_ml 2606.027 loss_target 0.587 loss_total 103.907 [2019-10-30 23:51:27]
Iter: [600/1348] Freq 117.7 loss_source 1.809 loss_st 1.175 loss_ml 2594.393 loss_target 0.589 loss_total 103.310 [2019-10-30 23:54:04]
Iter: [700/1348] Freq 117.7 loss_source 1.805 loss_st 1.173 loss_ml 2607.114 loss_target 0.589 loss_total 103.073 [2019-10-30 23:56:41]
Iter: [800/1348] Freq 117.9 loss_source 1.795 loss_st 1.171 loss_ml 2606.293 loss_target 0.589 loss_total 102.595 [2019-10-30 23:59:15]
Iter: [900/1348] Freq 117.8 loss_source 1.787 loss_st 1.169 loss_ml 2613.639 loss_target 0.592 loss_total 102.164 [2019-10-31 00:01:53]
Iter: [1000/1348] Freq 117.9 loss_source 1.775 loss_st 1.166 loss_ml 2617.651 loss_target 0.592 loss_total 101.546 [2019-10-31 00:04:27]
Iter: [1100/1348] Freq 117.9 loss_source 1.765 loss_st 1.164 loss_ml 2616.425 loss_target 0.593 loss_total 101.012 [2019-10-31 00:07:03]
Iter: [1200/1348] Freq 117.9 loss_source 1.754 loss_st 1.161 loss_ml 2618.908 loss_target 0.594 loss_total 100.454 [2019-10-31 00:09:40]
Iter: [1300/1348] Freq 117.9 loss_source 1.741 loss_st 1.158 loss_ml 2619.923 loss_target 0.595 loss_total 99.754 [2019-10-31 00:12:15]
Train loss_source 1.736 loss_st 1.157 loss_ml 2619.598 loss_target 0.596 loss_total 99.497

==>>[2019-10-31 00:13:31] [Epoch=006/020] Stage 1, [Need: 08:05:22]
Iter: [000/1348] Freq 68.1 loss_source 1.414 loss_st 1.135 loss_ml 3089.285 loss_target 0.565 loss_total 83.252 [2019-10-31 00:13:34]
Iter: [100/1348] Freq 118.0 loss_source 1.280 loss_st 1.098 loss_ml 2694.786 loss_target 0.617 loss_total 76.124 [2019-10-31 00:16:09]
Iter: [200/1348] Freq 117.9 loss_source 1.268 loss_st 1.095 loss_ml 2632.039 loss_target 0.621 loss_total 75.508 [2019-10-31 00:18:45]
Iter: [300/1348] Freq 117.6 loss_source 1.276 loss_st 1.095 loss_ml 2605.454 loss_target 0.620 loss_total 75.905 [2019-10-31 00:21:22]
Iter: [400/1348] Freq 117.9 loss_source 1.282 loss_st 1.095 loss_ml 2597.865 loss_target 0.623 loss_total 76.197 [2019-10-31 00:23:57]
Iter: [500/1348] Freq 117.9 loss_source 1.284 loss_st 1.094 loss_ml 2582.284 loss_target 0.624 loss_total 76.258 [2019-10-31 00:26:33]
Iter: [600/1348] Freq 117.9 loss_source 1.285 loss_st 1.093 loss_ml 2586.060 loss_target 0.625 loss_total 76.322 [2019-10-31 00:29:09]
Iter: [700/1348] Freq 117.9 loss_source 1.280 loss_st 1.092 loss_ml 2581.643 loss_target 0.626 loss_total 76.046 [2019-10-31 00:31:45]
Iter: [800/1348] Freq 118.0 loss_source 1.275 loss_st 1.090 loss_ml 2577.499 loss_target 0.627 loss_total 75.792 [2019-10-31 00:34:20]
Iter: [900/1348] Freq 118.1 loss_source 1.271 loss_st 1.089 loss_ml 2575.272 loss_target 0.627 loss_total 75.600 [2019-10-31 00:36:55]
Iter: [1000/1348] Freq 118.1 loss_source 1.265 loss_st 1.087 loss_ml 2569.577 loss_target 0.628 loss_total 75.247 [2019-10-31 00:39:31]
Iter: [1100/1348] Freq 118.2 loss_source 1.260 loss_st 1.085 loss_ml 2571.158 loss_target 0.629 loss_total 75.012 [2019-10-31 00:42:05]
Iter: [1200/1348] Freq 118.2 loss_source 1.256 loss_st 1.083 loss_ml 2570.635 loss_target 0.629 loss_total 74.776 [2019-10-31 00:44:41]
Iter: [1300/1348] Freq 118.1 loss_source 1.252 loss_st 1.081 loss_ml 2571.229 loss_target 0.630 loss_total 74.567 [2019-10-31 00:47:18]
Train loss_source 1.250 loss_st 1.081 loss_ml 2573.118 loss_target 0.630 loss_total 74.444

==>>[2019-10-31 00:48:35] [Epoch=007/020] Stage 1, [Need: 07:31:25]
Iter: [000/1348] Freq 66.3 loss_source 1.073 loss_st 1.032 loss_ml 2308.806 loss_target 0.670 loss_total 65.087 [2019-10-31 00:48:37]
Iter: [100/1348] Freq 117.7 loss_source 0.915 loss_st 1.035 loss_ml 2498.415 loss_target 0.638 loss_total 57.229 [2019-10-31 00:51:12]
Iter: [200/1348] Freq 118.1 loss_source 0.920 loss_st 1.034 loss_ml 2535.577 loss_target 0.635 loss_total 57.511 [2019-10-31 00:53:48]
Iter: [300/1348] Freq 118.3 loss_source 0.926 loss_st 1.034 loss_ml 2541.370 loss_target 0.632 loss_total 57.796 [2019-10-31 00:56:23]
Iter: [400/1348] Freq 118.3 loss_source 0.934 loss_st 1.033 loss_ml 2533.403 loss_target 0.634 loss_total 58.159 [2019-10-31 00:58:58]
Iter: [500/1348] Freq 118.2 loss_source 0.935 loss_st 1.033 loss_ml 2530.900 loss_target 0.634 loss_total 58.235 [2019-10-31 01:01:35]
Iter: [600/1348] Freq 118.0 loss_source 0.938 loss_st 1.033 loss_ml 2524.310 loss_target 0.633 loss_total 58.350 [2019-10-31 01:04:12]
Iter: [700/1348] Freq 118.0 loss_source 0.935 loss_st 1.031 loss_ml 2512.656 loss_target 0.633 loss_total 58.221 [2019-10-31 01:06:48]
Iter: [800/1348] Freq 118.2 loss_source 0.936 loss_st 1.031 loss_ml 2505.070 loss_target 0.632 loss_total 58.253 [2019-10-31 01:09:22]
Iter: [900/1348] Freq 118.1 loss_source 0.939 loss_st 1.030 loss_ml 2519.707 loss_target 0.632 loss_total 58.383 [2019-10-31 01:11:58]
Iter: [1000/1348] Freq 117.9 loss_source 0.939 loss_st 1.029 loss_ml 2517.942 loss_target 0.633 loss_total 58.368 [2019-10-31 01:14:36]
Iter: [1100/1348] Freq 118.0 loss_source 0.937 loss_st 1.028 loss_ml 2514.382 loss_target 0.632 loss_total 58.285 [2019-10-31 01:17:11]
Iter: [1200/1348] Freq 117.9 loss_source 0.941 loss_st 1.028 loss_ml 2512.419 loss_target 0.632 loss_total 58.438 [2019-10-31 01:19:49]
Iter: [1300/1348] Freq 117.9 loss_source 0.939 loss_st 1.027 loss_ml 2513.699 loss_target 0.632 loss_total 58.369 [2019-10-31 01:22:25]
Train loss_source 0.940 loss_st 1.026 loss_ml 2516.620 loss_target 0.632 loss_total 58.389

==>>[2019-10-31 01:23:41] [Epoch=008/020] Stage 1, [Need: 06:57:15]
Iter: [000/1348] Freq 65.1 loss_source 0.576 loss_st 0.958 loss_ml 2373.349 loss_target 0.603 loss_total 39.451 [2019-10-31 01:23:44]
Iter: [100/1348] Freq 118.5 loss_source 0.675 loss_st 0.989 loss_ml 2462.239 loss_target 0.628 loss_total 44.775 [2019-10-31 01:26:18]
Iter: [200/1348] Freq 118.8 loss_source 0.702 loss_st 0.990 loss_ml 2464.918 loss_target 0.626 loss_total 46.104 [2019-10-31 01:28:52]
Iter: [300/1348] Freq 118.7 loss_source 0.704 loss_st 0.991 loss_ml 2458.039 loss_target 0.628 loss_total 46.209 [2019-10-31 01:31:27]
Iter: [400/1348] Freq 118.8 loss_source 0.708 loss_st 0.991 loss_ml 2454.017 loss_target 0.628 loss_total 46.410 [2019-10-31 01:34:02]
Iter: [500/1348] Freq 118.8 loss_source 0.713 loss_st 0.991 loss_ml 2460.428 loss_target 0.628 loss_total 46.705 [2019-10-31 01:36:37]
Iter: [600/1348] Freq 118.7 loss_source 0.717 loss_st 0.991 loss_ml 2463.951 loss_target 0.629 loss_total 46.903 [2019-10-31 01:39:12]
Iter: [700/1348] Freq 118.8 loss_source 0.720 loss_st 0.991 loss_ml 2464.420 loss_target 0.629 loss_total 47.041 [2019-10-31 01:41:46]
Iter: [800/1348] Freq 118.8 loss_source 0.721 loss_st 0.991 loss_ml 2462.566 loss_target 0.629 loss_total 47.095 [2019-10-31 01:44:21]
Iter: [900/1348] Freq 118.7 loss_source 0.725 loss_st 0.991 loss_ml 2468.437 loss_target 0.629 loss_total 47.269 [2019-10-31 01:46:58]
Iter: [1000/1348] Freq 118.7 loss_source 0.726 loss_st 0.990 loss_ml 2456.888 loss_target 0.629 loss_total 47.334 [2019-10-31 01:49:32]
Iter: [1100/1348] Freq 118.8 loss_source 0.726 loss_st 0.989 loss_ml 2450.530 loss_target 0.629 loss_total 47.287 [2019-10-31 01:52:06]
Iter: [1200/1348] Freq 118.9 loss_source 0.725 loss_st 0.988 loss_ml 2442.680 loss_target 0.629 loss_total 47.256 [2019-10-31 01:54:40]
Iter: [1300/1348] Freq 118.8 loss_source 0.726 loss_st 0.988 loss_ml 2443.946 loss_target 0.629 loss_total 47.295 [2019-10-31 01:57:16]
Train loss_source 0.727 loss_st 0.988 loss_ml 2445.401 loss_target 0.628 loss_total 47.357

==>>[2019-10-31 01:58:32] [Epoch=009/020] Stage 1, [Need: 06:22:35]
Iter: [000/1348] Freq 66.7 loss_source 0.501 loss_st 0.973 loss_ml 2416.425 loss_target 0.629 loss_total 35.871 [2019-10-31 01:58:35]
Iter: [100/1348] Freq 119.2 loss_source 0.528 loss_st 0.955 loss_ml 2410.499 loss_target 0.625 loss_total 37.068 [2019-10-31 02:01:08]
Iter: [200/1348] Freq 118.5 loss_source 0.530 loss_st 0.955 loss_ml 2368.588 loss_target 0.627 loss_total 37.141 [2019-10-31 02:03:44]
Iter: [300/1348] Freq 118.2 loss_source 0.532 loss_st 0.957 loss_ml 2352.228 loss_target 0.626 loss_total 37.281 [2019-10-31 02:06:21]
Iter: [400/1348] Freq 118.5 loss_source 0.538 loss_st 0.957 loss_ml 2380.483 loss_target 0.628 loss_total 37.565 [2019-10-31 02:08:54]
Iter: [500/1348] Freq 118.2 loss_source 0.545 loss_st 0.958 loss_ml 2383.143 loss_target 0.629 loss_total 37.947 [2019-10-31 02:11:32]
Iter: [600/1348] Freq 118.3 loss_source 0.551 loss_st 0.959 loss_ml 2386.345 loss_target 0.628 loss_total 38.232 [2019-10-31 02:14:06]
Iter: [700/1348] Freq 118.3 loss_source 0.558 loss_st 0.959 loss_ml 2384.404 loss_target 0.629 loss_total 38.593 [2019-10-31 02:16:42]
Iter: [800/1348] Freq 118.4 loss_source 0.563 loss_st 0.959 loss_ml 2385.691 loss_target 0.628 loss_total 38.822 [2019-10-31 02:19:17]
Iter: [900/1348] Freq 118.4 loss_source 0.569 loss_st 0.959 loss_ml 2386.662 loss_target 0.628 loss_total 39.138 [2019-10-31 02:21:52]
Iter: [1000/1348] Freq 118.4 loss_source 0.570 loss_st 0.959 loss_ml 2383.333 loss_target 0.628 loss_total 39.178 [2019-10-31 02:24:28]
Iter: [1100/1348] Freq 118.4 loss_source 0.573 loss_st 0.959 loss_ml 2378.201 loss_target 0.627 loss_total 39.338 [2019-10-31 02:27:02]
Iter: [1200/1348] Freq 118.6 loss_source 0.574 loss_st 0.958 loss_ml 2379.614 loss_target 0.628 loss_total 39.377 [2019-10-31 02:29:35]
Iter: [1300/1348] Freq 118.5 loss_source 0.577 loss_st 0.958 loss_ml 2377.149 loss_target 0.628 loss_total 39.529 [2019-10-31 02:32:11]
Train loss_source 0.577 loss_st 0.958 loss_ml 2376.906 loss_target 0.628 loss_total 39.532

==>>[2019-10-31 02:33:27] [Epoch=010/020] Stage 1, [Need: 05:47:57]
Iter: [000/1348] Freq 65.8 loss_source 0.444 loss_st 0.945 loss_ml 2605.647 loss_target 0.680 loss_total 32.837 [2019-10-31 02:33:30]
Iter: [100/1348] Freq 119.9 loss_source 0.432 loss_st 0.937 loss_ml 2345.406 loss_target 0.630 loss_total 32.088 [2019-10-31 02:36:02]
Iter: [200/1348] Freq 118.9 loss_source 0.420 loss_st 0.934 loss_ml 2313.393 loss_target 0.631 loss_total 31.439 [2019-10-31 02:38:38]
Iter: [300/1348] Freq 118.3 loss_source 0.423 loss_st 0.933 loss_ml 2304.845 loss_target 0.629 loss_total 31.566 [2019-10-31 02:41:15]
Iter: [400/1348] Freq 118.6 loss_source 0.438 loss_st 0.935 loss_ml 2315.193 loss_target 0.629 loss_total 32.345 [2019-10-31 02:43:49]
Iter: [500/1348] Freq 118.5 loss_source 0.445 loss_st 0.936 loss_ml 2334.554 loss_target 0.628 loss_total 32.701 [2019-10-31 02:46:25]
Iter: [600/1348] Freq 118.7 loss_source 0.449 loss_st 0.936 loss_ml 2325.462 loss_target 0.628 loss_total 32.929 [2019-10-31 02:48:59]
Iter: [700/1348] Freq 118.7 loss_source 0.453 loss_st 0.937 loss_ml 2323.330 loss_target 0.628 loss_total 33.105 [2019-10-31 02:51:34]
Iter: [800/1348] Freq 118.7 loss_source 0.455 loss_st 0.937 loss_ml 2319.867 loss_target 0.629 loss_total 33.228 [2019-10-31 02:54:08]
Iter: [900/1348] Freq 118.6 loss_source 0.459 loss_st 0.936 loss_ml 2316.449 loss_target 0.628 loss_total 33.385 [2019-10-31 02:56:45]
Iter: [1000/1348] Freq 118.5 loss_source 0.462 loss_st 0.936 loss_ml 2313.949 loss_target 0.628 loss_total 33.533 [2019-10-31 02:59:21]
Iter: [1100/1348] Freq 118.7 loss_source 0.464 loss_st 0.936 loss_ml 2312.868 loss_target 0.627 loss_total 33.642 [2019-10-31 03:01:54]
Iter: [1200/1348] Freq 118.6 loss_source 0.467 loss_st 0.936 loss_ml 2316.160 loss_target 0.627 loss_total 33.780 [2019-10-31 03:04:31]
Iter: [1300/1348] Freq 118.5 loss_source 0.470 loss_st 0.936 loss_ml 2313.366 loss_target 0.627 loss_total 33.953 [2019-10-31 03:07:07]
Train loss_source 0.471 loss_st 0.936 loss_ml 2310.967 loss_target 0.627 loss_total 34.022

==>>[2019-10-31 03:08:22] [Epoch=011/020] Stage 1, [Need: 05:13:15]
Iter: [000/1348] Freq 69.0 loss_source 0.282 loss_st 0.937 loss_ml 2344.097 loss_target 0.667 loss_total 24.584 [2019-10-31 03:08:25]
Iter: [100/1348] Freq 118.7 loss_source 0.341 loss_st 0.916 loss_ml 2355.124 loss_target 0.629 loss_total 27.325 [2019-10-31 03:10:59]
Iter: [200/1348] Freq 118.6 loss_source 0.342 loss_st 0.916 loss_ml 2307.593 loss_target 0.626 loss_total 27.360 [2019-10-31 03:13:34]
Iter: [300/1348] Freq 118.3 loss_source 0.349 loss_st 0.915 loss_ml 2277.407 loss_target 0.624 loss_total 27.703 [2019-10-31 03:16:10]
Iter: [400/1348] Freq 118.4 loss_source 0.355 loss_st 0.916 loss_ml 2289.614 loss_target 0.625 loss_total 28.002 [2019-10-31 03:18:45]
Iter: [500/1348] Freq 118.3 loss_source 0.357 loss_st 0.917 loss_ml 2275.185 loss_target 0.625 loss_total 28.095 [2019-10-31 03:21:21]
Iter: [600/1348] Freq 118.2 loss_source 0.362 loss_st 0.917 loss_ml 2268.833 loss_target 0.624 loss_total 28.368 [2019-10-31 03:23:58]
Iter: [700/1348] Freq 118.1 loss_source 0.366 loss_st 0.918 loss_ml 2262.012 loss_target 0.623 loss_total 28.541 [2019-10-31 03:26:34]
Iter: [800/1348] Freq 118.2 loss_source 0.371 loss_st 0.919 loss_ml 2258.630 loss_target 0.623 loss_total 28.835 [2019-10-31 03:29:09]
Iter: [900/1348] Freq 118.2 loss_source 0.377 loss_st 0.919 loss_ml 2251.037 loss_target 0.623 loss_total 29.113 [2019-10-31 03:31:45]
Iter: [1000/1348] Freq 118.3 loss_source 0.379 loss_st 0.919 loss_ml 2261.247 loss_target 0.623 loss_total 29.208 [2019-10-31 03:34:18]
Iter: [1100/1348] Freq 118.3 loss_source 0.387 loss_st 0.921 loss_ml 2257.889 loss_target 0.624 loss_total 29.615 [2019-10-31 03:36:54]
Iter: [1200/1348] Freq 118.3 loss_source 0.394 loss_st 0.922 loss_ml 2262.038 loss_target 0.624 loss_total 29.988 [2019-10-31 03:39:30]
Iter: [1300/1348] Freq 118.3 loss_source 0.398 loss_st 0.923 loss_ml 2263.971 loss_target 0.624 loss_total 30.205 [2019-10-31 03:42:06]
Train loss_source 0.399 loss_st 0.923 loss_ml 2264.840 loss_target 0.624 loss_total 30.276

==>>[2019-10-31 03:43:20] [Epoch=012/020] Stage 1, [Need: 04:38:33]
Iter: [000/1348] Freq 66.3 loss_source 0.395 loss_st 0.896 loss_ml 2123.650 loss_target 0.594 loss_total 29.752 [2019-10-31 03:43:23]
Iter: [100/1348] Freq 117.1 loss_source 0.286 loss_st 0.895 loss_ml 2215.422 loss_target 0.618 loss_total 24.300 [2019-10-31 03:45:59]
Iter: [200/1348] Freq 117.6 loss_source 0.262 loss_st 0.888 loss_ml 2243.511 loss_target 0.620 loss_total 23.055 [2019-10-31 03:48:34]
Iter: [300/1348] Freq 117.6 loss_source 0.255 loss_st 0.885 loss_ml 2242.461 loss_target 0.621 loss_total 22.679 [2019-10-31 03:51:11]
Iter: [400/1348] Freq 117.9 loss_source 0.250 loss_st 0.882 loss_ml 2240.256 loss_target 0.622 loss_total 22.399 [2019-10-31 03:53:46]
Iter: [500/1348] Freq 117.8 loss_source 0.247 loss_st 0.880 loss_ml 2225.737 loss_target 0.622 loss_total 22.236 [2019-10-31 03:56:22]
Iter: [600/1348] Freq 117.9 loss_source 0.242 loss_st 0.879 loss_ml 2215.205 loss_target 0.622 loss_total 21.939 [2019-10-31 03:58:58]
Iter: [700/1348] Freq 123.9 loss_source 0.239 loss_st 0.878 loss_ml 2211.120 loss_target 0.621 loss_total 21.778 [2019-10-31 04:00:41]
Iter: [800/1348] Freq 133.1 loss_source 0.236 loss_st 0.876 loss_ml 2205.401 loss_target 0.621 loss_total 21.626 [2019-10-31 04:01:47]
Iter: [900/1348] Freq 141.1 loss_source 0.233 loss_st 0.875 loss_ml 2192.080 loss_target 0.622 loss_total 21.479 [2019-10-31 04:02:55]
Iter: [1000/1348] Freq 148.2 loss_source 0.232 loss_st 0.875 loss_ml 2189.130 loss_target 0.621 loss_total 21.399 [2019-10-31 04:04:03]
Iter: [1100/1348] Freq 154.6 loss_source 0.231 loss_st 0.875 loss_ml 2183.306 loss_target 0.621 loss_total 21.340 [2019-10-31 04:05:10]
Iter: [1200/1348] Freq 160.3 loss_source 0.229 loss_st 0.874 loss_ml 2177.734 loss_target 0.620 loss_total 21.257 [2019-10-31 04:06:18]
Iter: [1300/1348] Freq 165.6 loss_source 0.228 loss_st 0.874 loss_ml 2168.686 loss_target 0.620 loss_total 21.174 [2019-10-31 04:07:26]
Train loss_source 0.227 loss_st 0.873 loss_ml 2165.796 loss_target 0.620 loss_total 21.133

==>>[2019-10-31 04:08:00] [Epoch=013/020] Stage 1, [Need: 03:58:15]
Iter: [000/1348] Freq 97.4 loss_source 0.154 loss_st 0.830 loss_ml 2031.508 loss_target 0.606 loss_total 17.017 [2019-10-31 04:08:01]
Iter: [100/1348] Freq 271.6 loss_source 0.191 loss_st 0.864 loss_ml 2146.182 loss_target 0.614 loss_total 19.234 [2019-10-31 04:09:08]
Iter: [200/1348] Freq 271.0 loss_source 0.193 loss_st 0.863 loss_ml 2150.023 loss_target 0.617 loss_total 19.339 [2019-10-31 04:10:16]
Iter: [300/1348] Freq 271.5 loss_source 0.195 loss_st 0.863 loss_ml 2142.042 loss_target 0.619 loss_total 19.403 [2019-10-31 04:11:24]
Iter: [400/1348] Freq 273.3 loss_source 0.193 loss_st 0.862 loss_ml 2147.793 loss_target 0.618 loss_total 19.335 [2019-10-31 04:12:30]
Iter: [500/1348] Freq 273.2 loss_source 0.193 loss_st 0.863 loss_ml 2144.947 loss_target 0.618 loss_total 19.332 [2019-10-31 04:13:37]
Iter: [600/1348] Freq 260.2 loss_source 0.192 loss_st 0.864 loss_ml 2141.714 loss_target 0.618 loss_total 19.263 [2019-10-31 04:15:05]
Iter: [700/1348] Freq 245.6 loss_source 0.191 loss_st 0.864 loss_ml 2144.682 loss_target 0.618 loss_total 19.246 [2019-10-31 04:16:45]
Iter: [800/1348] Freq 236.2 loss_source 0.190 loss_st 0.864 loss_ml 2138.832 loss_target 0.618 loss_total 19.186 [2019-10-31 04:18:24]
Iter: [900/1348] Freq 229.0 loss_source 0.190 loss_st 0.863 loss_ml 2134.511 loss_target 0.618 loss_total 19.172 [2019-10-31 04:20:03]
Iter: [1000/1348] Freq 223.7 loss_source 0.190 loss_st 0.863 loss_ml 2135.060 loss_target 0.618 loss_total 19.192 [2019-10-31 04:21:43]
Iter: [1100/1348] Freq 220.0 loss_source 0.190 loss_st 0.863 loss_ml 2130.858 loss_target 0.618 loss_total 19.190 [2019-10-31 04:23:21]
Iter: [1200/1348] Freq 216.3 loss_source 0.191 loss_st 0.863 loss_ml 2130.745 loss_target 0.618 loss_total 19.208 [2019-10-31 04:25:01]
Iter: [1300/1348] Freq 213.1 loss_source 0.190 loss_st 0.863 loss_ml 2129.600 loss_target 0.618 loss_total 19.191 [2019-10-31 04:26:43]
Train loss_source 0.190 loss_st 0.863 loss_ml 2127.544 loss_target 0.618 loss_total 19.191

==>>[2019-10-31 04:27:34] [Epoch=014/020] Stage 1, [Need: 03:18:01]
Iter: [000/1348] Freq 71.2 loss_source 0.161 loss_st 0.858 loss_ml 1724.777 loss_target 0.683 loss_total 17.667 [2019-10-31 04:27:36]
Iter: [100/1348] Freq 183.9 loss_source 0.173 loss_st 0.862 loss_ml 2098.285 loss_target 0.623 loss_total 18.314 [2019-10-31 04:29:15]
Iter: [200/1348] Freq 185.3 loss_source 0.174 loss_st 0.864 loss_ml 2139.156 loss_target 0.618 loss_total 18.371 [2019-10-31 04:30:53]
Iter: [300/1348] Freq 184.7 loss_source 0.174 loss_st 0.863 loss_ml 2111.251 loss_target 0.619 loss_total 18.388 [2019-10-31 04:32:33]
Iter: [400/1348] Freq 184.7 loss_source 0.175 loss_st 0.863 loss_ml 2104.732 loss_target 0.619 loss_total 18.422 [2019-10-31 04:34:13]
Iter: [500/1348] Freq 184.9 loss_source 0.175 loss_st 0.862 loss_ml 2110.549 loss_target 0.619 loss_total 18.411 [2019-10-31 04:35:52]
Iter: [600/1348] Freq 184.5 loss_source 0.176 loss_st 0.862 loss_ml 2096.866 loss_target 0.618 loss_total 18.432 [2019-10-31 04:37:33]
Iter: [700/1348] Freq 184.5 loss_source 0.176 loss_st 0.862 loss_ml 2093.281 loss_target 0.618 loss_total 18.474 [2019-10-31 04:39:13]
Iter: [800/1348] Freq 184.4 loss_source 0.176 loss_st 0.861 loss_ml 2095.736 loss_target 0.618 loss_total 18.453 [2019-10-31 04:40:53]
Iter: [900/1348] Freq 184.4 loss_source 0.176 loss_st 0.861 loss_ml 2090.378 loss_target 0.618 loss_total 18.443 [2019-10-31 04:42:33]
Iter: [1000/1348] Freq 184.3 loss_source 0.176 loss_st 0.861 loss_ml 2092.603 loss_target 0.617 loss_total 18.443 [2019-10-31 04:44:13]
Iter: [1100/1348] Freq 184.2 loss_source 0.176 loss_st 0.861 loss_ml 2089.798 loss_target 0.616 loss_total 18.445 [2019-10-31 04:45:54]
Iter: [1200/1348] Freq 184.1 loss_source 0.176 loss_st 0.861 loss_ml 2086.500 loss_target 0.616 loss_total 18.452 [2019-10-31 04:47:34]
Iter: [1300/1348] Freq 184.1 loss_source 0.176 loss_st 0.861 loss_ml 2089.030 loss_target 0.616 loss_total 18.438 [2019-10-31 04:49:14]
Train loss_source 0.176 loss_st 0.861 loss_ml 2087.777 loss_target 0.616 loss_total 18.441

==>>[2019-10-31 04:50:04] [Epoch=015/020] Stage 1, [Need: 02:41:31]
Iter: [000/1348] Freq 65.2 loss_source 0.252 loss_st 0.862 loss_ml 1902.214 loss_target 0.547 loss_total 22.143 [2019-10-31 04:50:07]
Iter: [100/1348] Freq 179.3 loss_source 0.168 loss_st 0.860 loss_ml 2026.106 loss_target 0.612 loss_total 18.033 [2019-10-31 04:51:48]
Iter: [200/1348] Freq 180.5 loss_source 0.167 loss_st 0.860 loss_ml 2050.695 loss_target 0.615 loss_total 17.974 [2019-10-31 04:53:29]
Iter: [300/1348] Freq 182.3 loss_source 0.165 loss_st 0.859 loss_ml 2054.228 loss_target 0.615 loss_total 17.865 [2019-10-31 04:55:08]
Iter: [400/1348] Freq 183.6 loss_source 0.166 loss_st 0.859 loss_ml 2052.888 loss_target 0.614 loss_total 17.891 [2019-10-31 04:56:46]
Iter: [500/1348] Freq 183.3 loss_source 0.166 loss_st 0.859 loss_ml 2058.827 loss_target 0.613 loss_total 17.912 [2019-10-31 04:58:27]
Iter: [600/1348] Freq 183.2 loss_source 0.167 loss_st 0.860 loss_ml 2052.829 loss_target 0.613 loss_total 17.976 [2019-10-31 05:00:08]
Iter: [700/1348] Freq 184.0 loss_source 0.167 loss_st 0.859 loss_ml 2060.672 loss_target 0.613 loss_total 17.976 [2019-10-31 05:01:46]
Iter: [800/1348] Freq 183.9 loss_source 0.167 loss_st 0.859 loss_ml 2060.205 loss_target 0.614 loss_total 17.981 [2019-10-31 05:03:26]
Iter: [900/1348] Freq 183.9 loss_source 0.167 loss_st 0.859 loss_ml 2056.509 loss_target 0.613 loss_total 17.963 [2019-10-31 05:05:06]
Iter: [1000/1348] Freq 183.7 loss_source 0.167 loss_st 0.859 loss_ml 2054.370 loss_target 0.613 loss_total 17.965 [2019-10-31 05:06:47]
Iter: [1100/1348] Freq 184.0 loss_source 0.167 loss_st 0.859 loss_ml 2054.545 loss_target 0.612 loss_total 17.946 [2019-10-31 05:08:25]
Iter: [1200/1348] Freq 184.0 loss_source 0.167 loss_st 0.859 loss_ml 2054.274 loss_target 0.613 loss_total 17.970 [2019-10-31 05:10:05]
Iter: [1300/1348] Freq 183.9 loss_source 0.167 loss_st 0.859 loss_ml 2053.557 loss_target 0.612 loss_total 17.964 [2019-10-31 05:11:46]
Train loss_source 0.167 loss_st 0.859 loss_ml 2055.719 loss_target 0.613 loss_total 17.945

==>>[2019-10-31 05:12:35] [Epoch=016/020] Stage 1, [Need: 02:06:46]
Iter: [000/1348] Freq 62.4 loss_source 0.155 loss_st 0.869 loss_ml 1786.544 loss_target 0.626 loss_total 17.410 [2019-10-31 05:12:38]
Iter: [100/1348] Freq 182.7 loss_source 0.159 loss_st 0.855 loss_ml 2039.652 loss_target 0.613 loss_total 17.511 [2019-10-31 05:14:16]
Iter: [200/1348] Freq 183.3 loss_source 0.155 loss_st 0.855 loss_ml 2082.787 loss_target 0.608 loss_total 17.322 [2019-10-31 05:15:56]
Iter: [300/1348] Freq 183.8 loss_source 0.156 loss_st 0.855 loss_ml 2066.720 loss_target 0.608 loss_total 17.387 [2019-10-31 05:17:36]
Iter: [400/1348] Freq 184.1 loss_source 0.154 loss_st 0.854 loss_ml 2057.583 loss_target 0.610 loss_total 17.247 [2019-10-31 05:19:15]
Iter: [500/1348] Freq 184.0 loss_source 0.154 loss_st 0.854 loss_ml 2058.349 loss_target 0.610 loss_total 17.247 [2019-10-31 05:20:56]
Iter: [600/1348] Freq 184.2 loss_source 0.155 loss_st 0.854 loss_ml 2056.382 loss_target 0.610 loss_total 17.298 [2019-10-31 05:22:35]
Iter: [700/1348] Freq 184.3 loss_source 0.155 loss_st 0.855 loss_ml 2054.100 loss_target 0.609 loss_total 17.297 [2019-10-31 05:24:14]
Iter: [800/1348] Freq 184.3 loss_source 0.156 loss_st 0.855 loss_ml 2049.063 loss_target 0.609 loss_total 17.373 [2019-10-31 05:25:54]
Iter: [900/1348] Freq 184.3 loss_source 0.156 loss_st 0.855 loss_ml 2053.129 loss_target 0.609 loss_total 17.387 [2019-10-31 05:27:34]
Iter: [1000/1348] Freq 184.6 loss_source 0.156 loss_st 0.856 loss_ml 2054.089 loss_target 0.609 loss_total 17.389 [2019-10-31 05:29:12]
Iter: [1100/1348] Freq 184.9 loss_source 0.156 loss_st 0.856 loss_ml 2050.949 loss_target 0.610 loss_total 17.390 [2019-10-31 05:30:50]
Iter: [1200/1348] Freq 185.0 loss_source 0.157 loss_st 0.856 loss_ml 2044.885 loss_target 0.610 loss_total 17.440 [2019-10-31 05:32:29]
Iter: [1300/1348] Freq 185.2 loss_source 0.157 loss_st 0.856 loss_ml 2043.941 loss_target 0.609 loss_total 17.422 [2019-10-31 05:34:07]
Train loss_source 0.157 loss_st 0.856 loss_ml 2040.170 loss_target 0.609 loss_total 17.424

==>>[2019-10-31 05:34:55] [Epoch=017/020] Stage 1, [Need: 01:33:25]
Iter: [000/1348] Freq 75.7 loss_source 0.144 loss_st 0.856 loss_ml 2345.150 loss_target 0.692 loss_total 16.916 [2019-10-31 05:34:57]
Iter: [100/1348] Freq 184.2 loss_source 0.147 loss_st 0.853 loss_ml 2025.583 loss_target 0.613 loss_total 16.901 [2019-10-31 05:36:36]
Iter: [200/1348] Freq 184.5 loss_source 0.148 loss_st 0.854 loss_ml 2013.158 loss_target 0.613 loss_total 16.972 [2019-10-31 05:38:15]
Iter: [300/1348] Freq 185.0 loss_source 0.150 loss_st 0.855 loss_ml 2013.840 loss_target 0.609 loss_total 17.059 [2019-10-31 05:39:54]
Iter: [400/1348] Freq 185.9 loss_source 0.148 loss_st 0.855 loss_ml 2025.610 loss_target 0.609 loss_total 16.979 [2019-10-31 05:41:32]
Iter: [500/1348] Freq 186.0 loss_source 0.147 loss_st 0.854 loss_ml 2027.664 loss_target 0.608 loss_total 16.909 [2019-10-31 05:43:11]
Iter: [600/1348] Freq 185.2 loss_source 0.147 loss_st 0.854 loss_ml 2024.651 loss_target 0.609 loss_total 16.896 [2019-10-31 05:44:52]
Iter: [700/1348] Freq 185.5 loss_source 0.147 loss_st 0.854 loss_ml 2029.384 loss_target 0.608 loss_total 16.880 [2019-10-31 05:46:30]
Iter: [800/1348] Freq 186.5 loss_source 0.146 loss_st 0.854 loss_ml 2026.395 loss_target 0.609 loss_total 16.878 [2019-10-31 05:48:05]
Iter: [900/1348] Freq 186.8 loss_source 0.147 loss_st 0.854 loss_ml 2031.541 loss_target 0.609 loss_total 16.906 [2019-10-31 05:49:43]
Iter: [1000/1348] Freq 186.4 loss_source 0.147 loss_st 0.854 loss_ml 2034.979 loss_target 0.609 loss_total 16.903 [2019-10-31 05:51:23]
Iter: [1100/1348] Freq 186.6 loss_source 0.147 loss_st 0.854 loss_ml 2032.998 loss_target 0.609 loss_total 16.907 [2019-10-31 05:53:01]
Iter: [1200/1348] Freq 186.7 loss_source 0.146 loss_st 0.854 loss_ml 2032.367 loss_target 0.608 loss_total 16.877 [2019-10-31 05:54:39]
Iter: [1300/1348] Freq 187.0 loss_source 0.146 loss_st 0.854 loss_ml 2031.432 loss_target 0.608 loss_total 16.868 [2019-10-31 05:56:15]
Train loss_source 0.146 loss_st 0.854 loss_ml 2030.448 loss_target 0.608 loss_total 16.864

==>>[2019-10-31 05:57:04] [Epoch=018/020] Stage 1, [Need: 01:01:17]
Iter: [000/1348] Freq 64.5 loss_source 0.123 loss_st 0.880 loss_ml 2399.581 loss_target 0.578 loss_total 15.986 [2019-10-31 05:57:06]
Iter: [100/1348] Freq 186.3 loss_source 0.146 loss_st 0.855 loss_ml 1971.614 loss_target 0.614 loss_total 16.862 [2019-10-31 05:58:43]
Iter: [200/1348] Freq 184.1 loss_source 0.146 loss_st 0.855 loss_ml 1993.462 loss_target 0.613 loss_total 16.883 [2019-10-31 06:00:24]
Iter: [300/1348] Freq 183.8 loss_source 0.145 loss_st 0.855 loss_ml 1994.852 loss_target 0.609 loss_total 16.825 [2019-10-31 06:02:05]
Iter: [400/1348] Freq 184.8 loss_source 0.145 loss_st 0.853 loss_ml 2014.643 loss_target 0.610 loss_total 16.786 [2019-10-31 06:03:43]
Iter: [500/1348] Freq 184.9 loss_source 0.145 loss_st 0.853 loss_ml 2015.269 loss_target 0.610 loss_total 16.767 [2019-10-31 06:05:22]
Iter: [600/1348] Freq 184.5 loss_source 0.145 loss_st 0.853 loss_ml 2011.875 loss_target 0.609 loss_total 16.776 [2019-10-31 06:07:03]
Iter: [700/1348] Freq 184.7 loss_source 0.144 loss_st 0.853 loss_ml 2015.176 loss_target 0.609 loss_total 16.765 [2019-10-31 06:08:42]
Iter: [800/1348] Freq 184.7 loss_source 0.144 loss_st 0.853 loss_ml 2018.306 loss_target 0.608 loss_total 16.756 [2019-10-31 06:10:21]
Iter: [900/1348] Freq 185.1 loss_source 0.144 loss_st 0.853 loss_ml 2021.341 loss_target 0.609 loss_total 16.768 [2019-10-31 06:11:59]
Iter: [1000/1348] Freq 185.4 loss_source 0.144 loss_st 0.853 loss_ml 2017.077 loss_target 0.608 loss_total 16.747 [2019-10-31 06:13:37]
Iter: [1100/1348] Freq 185.7 loss_source 0.144 loss_st 0.853 loss_ml 2016.717 loss_target 0.608 loss_total 16.739 [2019-10-31 06:15:14]
Iter: [1200/1348] Freq 185.9 loss_source 0.144 loss_st 0.853 loss_ml 2016.753 loss_target 0.608 loss_total 16.729 [2019-10-31 06:16:52]
Iter: [1300/1348] Freq 185.9 loss_source 0.144 loss_st 0.853 loss_ml 2017.851 loss_target 0.608 loss_total 16.759 [2019-10-31 06:18:31]
Train loss_source 0.144 loss_st 0.853 loss_ml 2017.128 loss_target 0.608 loss_total 16.763

==>>[2019-10-31 06:19:20] [Epoch=019/020] Stage 1, [Need: 00:30:12]
Iter: [000/1348] Freq 66.1 loss_source 0.187 loss_st 0.833 loss_ml 2280.488 loss_target 0.655 loss_total 18.791 [2019-10-31 06:19:23]
Iter: [100/1348] Freq 183.8 loss_source 0.148 loss_st 0.854 loss_ml 2043.541 loss_target 0.605 loss_total 16.934 [2019-10-31 06:21:01]
Iter: [200/1348] Freq 183.3 loss_source 0.143 loss_st 0.853 loss_ml 2012.975 loss_target 0.608 loss_total 16.701 [2019-10-31 06:22:41]
Iter: [300/1348] Freq 184.3 loss_source 0.143 loss_st 0.854 loss_ml 2017.834 loss_target 0.608 loss_total 16.696 [2019-10-31 06:24:20]
Iter: [400/1348] Freq 186.8 loss_source 0.144 loss_st 0.854 loss_ml 2019.012 loss_target 0.606 loss_total 16.729 [2019-10-31 06:25:55]
Iter: [500/1348] Freq 187.2 loss_source 0.143 loss_st 0.854 loss_ml 2024.413 loss_target 0.607 loss_total 16.714 [2019-10-31 06:27:32]
Iter: [600/1348] Freq 187.7 loss_source 0.142 loss_st 0.853 loss_ml 2009.926 loss_target 0.607 loss_total 16.668 [2019-10-31 06:29:09]
Iter: [700/1348] Freq 188.3 loss_source 0.142 loss_st 0.854 loss_ml 2021.515 loss_target 0.607 loss_total 16.666 [2019-10-31 06:30:45]
Iter: [800/1348] Freq 188.7 loss_source 0.142 loss_st 0.854 loss_ml 2030.756 loss_target 0.607 loss_total 16.660 [2019-10-31 06:32:21]
Iter: [900/1348] Freq 189.1 loss_source 0.143 loss_st 0.853 loss_ml 2029.066 loss_target 0.608 loss_total 16.685 [2019-10-31 06:33:57]
Iter: [1000/1348] Freq 189.1 loss_source 0.143 loss_st 0.853 loss_ml 2026.544 loss_target 0.607 loss_total 16.710 [2019-10-31 06:35:34]
Iter: [1100/1348] Freq 189.1 loss_source 0.143 loss_st 0.853 loss_ml 2026.284 loss_target 0.607 loss_total 16.702 [2019-10-31 06:37:11]
Iter: [1200/1348] Freq 189.2 loss_source 0.143 loss_st 0.853 loss_ml 2024.188 loss_target 0.608 loss_total 16.703 [2019-10-31 06:38:48]
Iter: [1300/1348] Freq 189.5 loss_source 0.143 loss_st 0.853 loss_ml 2022.614 loss_target 0.608 loss_total 16.713 [2019-10-31 06:40:23]
Train loss_source 0.144 loss_st 0.853 loss_ml 2023.444 loss_target 0.608 loss_total 16.720
Test r1 43.379 r5 61.906 r10 69.210 MAP 22.346

MSMT17 preprocessed data

Can you please share a Google Drive link for the MSMT17 preprocessed dataset, as the Baidu link is not working?

construct_dataset_Market.m

Hello, I have a question. In the MATLAB code, file "construct_dataset_Market.m", lines 50-52:
"train_labels = uniquize(train_labels);
gallery_labels = uniquize(gallery_labels);
probe_labels = uniquize(probe_labels);"
I cannot find any information about the function "uniquize", and when I run this MATLAB file I get an error: "Undefined function 'uniquize' for input arguments of type 'int64'.
Error in construct_dataset_Market (line 50) train_labels = uniquize(train_labels);"
I would like to know why. Thanks.
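
For reference: uniquize is not a stock MATLAB function and is not included in the repository; judging from its use on label vectors, it presumably just relabels the raw person IDs as consecutive integers (in MATLAB, the third output of unique does exactly this: [~, ~, train_labels] = unique(train_labels);). Below is a minimal Python sketch of the same relabeling, with the function name taken from the issue rather than from the repo:

```python
import numpy as np

def uniquize(labels):
    """Relabel arbitrary integer IDs as consecutive integers starting at 1
    (in sorted order of the original IDs)."""
    # np.unique returns the sorted unique values; return_inverse gives,
    # for each original element, its index into that unique array.
    _, inverse = np.unique(np.asarray(labels), return_inverse=True)
    return inverse + 1  # 1-based, matching MATLAB's unique convention

print(uniquize([7, 7, 42, 3, 42]))  # [2 2 3 1 3]
```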

RuntimeWarning: invalid value encountered in greater is_positive = p_agree[similar_idx] > self.threshold.item()

Iter: [600/1348] Freq 248.2 loss_source 2.152 loss_st 0.781 loss_ml 3458.865 loss_target 0.000 loss_total 116.117 [2019-10-31 09:07:41]
Iter: [700/1348] Freq 248.9 loss_source 2.127 loss_st 0.782 loss_ml 3387.675 loss_target 0.000 loss_total 114.824 [2019-10-31 09:08:53]
Iter: [800/1348] Freq 250.2 loss_source 2.104 loss_st 0.784 loss_ml 3329.171 loss_target 0.000 loss_total 113.704 [2019-10-31 09:10:04]
Iter: [900/1348] Freq 250.9 loss_source 2.078 loss_st 0.784 loss_ml 3280.807 loss_target 0.000 loss_total 112.419 [2019-10-31 09:11:16]
Iter: [1000/1348] Freq 251.3 loss_source 2.060 loss_st 0.786 loss_ml 3250.651 loss_target 0.000 loss_total 111.489 [2019-10-31 09:12:28]
Iter: [1100/1348] Freq 252.0 loss_source 2.040 loss_st 0.786 loss_ml 3223.414 loss_target 0.000 loss_total 110.485 [2019-10-31 09:13:39]
Iter: [1200/1348] Freq 252.7 loss_source nan loss_st nan loss_ml nan loss_target 0.000 loss_total nan [2019-10-31 09:14:49]
Iter: [1300/1348] Freq 253.6 loss_source nan loss_st nan loss_ml nan loss_target 0.000 loss_total nan [2019-10-31 09:15:59]
Train loss_source nan loss_st nan loss_ml nan loss_target 0.000 loss_total nan
Test r1 0.000 r5 0.119 r10 0.208 MAP 5.396

==>>[2019-10-31 09:19:46] [Epoch=001/020] Stage 1, [Need: 05:18:16]
/data/qli/Person_Re-Identification/MAR/utils.py:165: RuntimeWarning: invalid value encountered in greater
is_positive = p_agree[similar_idx] > self.threshold.item()
Iter: [000/1348] Freq 96.7 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:19:48]
Iter: [100/1348] Freq 213.6 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:21:14]
Iter: [200/1348] Freq 211.7 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:22:41]
Iter: [300/1348] Freq 211.6 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:24:08]
Iter: [400/1348] Freq 211.8 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:25:35]
Iter: [500/1348] Freq 211.5 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:27:02]
Iter: [600/1348] Freq 211.2 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:28:30]
Iter: [700/1348] Freq 211.0 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:29:58]
Iter: [800/1348] Freq 211.2 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:31:24]
Iter: [900/1348] Freq 211.1 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:32:52]
Iter: [1000/1348] Freq 211.0 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:34:19]
Iter: [1100/1348] Freq 211.2 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:35:46]
Iter: [1200/1348] Freq 210.9 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:37:14]
Iter: [1300/1348] Freq 210.8 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:38:42]
Train loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan
Test r1 0.000 r5 0.119 r10 0.208 MAP 5.396

==>>[2019-10-31 09:42:20] [Epoch=002/020] Stage 1, [Need: 05:56:28]
Iter: [000/1348] Freq 93.9 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:42:22]
Iter: [100/1348] Freq 214.4 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:43:47]
Iter: [200/1348] Freq 212.8 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:45:14]
Iter: [300/1348] Freq 212.4 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:46:41]
Iter: [400/1348] Freq 212.3 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:48:08]
Iter: [500/1348] Freq 212.1 loss_source nan loss_st nan loss_ml nan loss_target nan loss_total nan [2019-10-31 09:49:35]
Has anybody faced this problem? If so, please tell me how to solve it.
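
For what it's worth, the RuntimeWarning itself is a symptom rather than the cause: "invalid value encountered in greater" means p_agree already contains NaN when the comparison runs, i.e. the model's outputs went NaN earlier (in the log above, between iterations 1100 and 1200 of the first epoch). A minimal defensive sketch, assuming the NaN originates from a loss blow-up rather than from corrupt input data (a hypothetical helper, not part of the repository): refuse to backpropagate a non-finite loss and clip the gradient norm, so one bad batch cannot poison all subsequent weights.

```python
import math
import torch

def safe_step(model, optimizer, loss_total, max_norm=10.0):
    """One optimizer step that skips non-finite losses and clips gradients.
    Returns False when the batch was skipped."""
    if not math.isfinite(loss_total.item()):
        return False  # do not backpropagate a NaN/Inf loss
    optimizer.zero_grad()
    loss_total.backward()
    # Bound the gradient norm so a single extreme batch cannot destroy training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return True
```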

About the results.

Hello, I noticed differences between the Market-1501 and DukeMTMC-reID results in your ablation study on the three losses. Why is the rank-1 without both Lcml and Lral higher than that without only Lral? Could you explain this? Thank you!

Question about the MDL loss in the paper

Hi, thanks for sharing. I have a question about your paper.

Regarding equation (4) in Section 3.2 (the equation image is omitted here):

In P-bar, you calculate the 2-norm of f(z_i) - f(z_j).
According to Section 3.1, the index pair (i, j) is found in the unlabeled target dataset X; intuitively, we should therefore use the image pair (x_i, x_j) and calculate the 2-norm of f(x_i) - f(x_j).

So I wonder why you use (z_i, z_j), which come from the auxiliary RE-ID dataset Z, to calculate the 2-norm in equation (4). Could you explain this?
The same issue also applies to N-bar.

Thanks a lot!

Question about the model pretraining method?

Thanks for sharing.

I find that the paper mentions that the model is first pretrained using only $L_{AL}$. In Section 4.2:

i.e. we first pretrain the network using only $L_{AL}$ (without enforcing the unit norm constraint) to endow the basic discriminative power with the embedding and to determine the directions of the reference agents in the hypersphere embedding....

But I don't know how to perform this pretraining with the current code. I need some more detailed instructions, e.g. how many epochs should I pretrain the model for?

Thanks >.<
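
For reference, a minimal sketch of what such a pretraining stage could look like, assuming it amounts to plain softmax (cross-entropy) classification on the labeled auxiliary dataset; the model, loader, epoch count, and learning rate below are placeholders, not values taken from the paper or the repository:

```python
import torch
import torch.nn as nn

def pretrain_source(model, source_loader, epochs=20, lr=0.01, device="cuda"):
    """Hypothetical pretraining loop: standard softmax (cross-entropy)
    classification on the labeled source dataset."""
    model = model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in source_loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)            # unnormalized class scores
            loss = criterion(logits, labels)  # softmax classification loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```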

set_storage_offset error

Hi, thank you for sharing the code, but I ran into a problem while running it. I reduced the batch size to 184, and got:

==>>[2019-10-30 19:22:50] [Epoch=000/020] Stage 1, [Need: 00:00:00]
initializing centers/threshold ...
loaded ml from data/ml_Market.dat
initializing centers done.
initializing threshold done.
Traceback (most recent call last):
  File "src/main.py", line 46, in <module>
    main()
  File "src/main.py", line 35, in main
    meters_trn = trainer.train_epoch(source_loader, target_loader, epoch)
  File "/home/sunxia/MAR/src/trainers.py", line 123, in train_epoch
    multilabels = F.softmax(features_target.mm(agents.detach().t_()*self.args.scala_ce), dim=1)
RuntimeError: set_storage_offset is not allowed on Tensor created from .data or .detach()

Is this because my memory is not big enough?
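
For reference, this does not look like a memory problem: the traceback points at the in-place transpose t_() applied to a tensor returned by .detach(), which recent PyTorch versions reject with exactly this error. A minimal self-contained sketch of the likely fix, using the out-of-place t() instead (scala_ce below is a placeholder value, not the repo's setting):

```python
import torch
import torch.nn.functional as F

agents = torch.randn(4, 8, requires_grad=True)
features_target = torch.randn(2, 8)
scala_ce = 30.0  # placeholder scale factor

# t_() rewrites the detached tensor's sizes/strides in place, which PyTorch
# forbids for tensors created via .detach(); t() returns a transposed view.
# bad = agents.detach().t_()   # raises the set_storage_offset RuntimeError
multilabels = F.softmax(features_target.mm(agents.detach().t() * scala_ce), dim=1)
print(multilabels.shape)  # torch.Size([2, 4])
```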

Issue in utils _update_centers function

Thanks for your work and released code.

When I run the code following your instructions, I hit a special case.

In the _update_centers function in utils.py, there is an if-condition: if len(ml_in_v) == 1: continue.

In this special case, univiews.shape is [1] and len(ml_in_v) is 1, so the network stops here because means becomes empty.

I use the processed dataset you provided; ml_Market was computed on my machine.

Thanks in advance for your help.
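
For reference, a small defensive sketch of the pattern described in this issue (hypothetical names mirroring the report, not the repository's exact code): if every view is skipped by the len(ml_in_v) == 1 condition, the means list stays empty and the subsequent stack fails, so a fallback is needed for that case.

```python
import torch

def robust_view_means(ml_per_view):
    """Average multilabels per view, skipping single-sample views as described,
    but falling back to using them when no multi-sample view exists."""
    means = [ml.mean(dim=0) for ml in ml_per_view if len(ml) > 1]
    if not means:  # the special case from the issue: every view had one sample
        means = [ml.mean(dim=0) for ml in ml_per_view]
    return torch.stack(means)

# Example: a single view containing a single multilabel vector
print(robust_view_means([torch.rand(1, 5)]).shape)  # torch.Size([1, 5])
```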
