
caffe-model's People

Contributors

cypof, lzx1413, soeaver


caffe-model's Issues

Converting keras models to Caffe models

Hi,
Sorry if this question is a little irrelevant, but I'm really stuck. How did you convert the Keras models to Caffe models? Can you point me to the right piece of code or converter? Thank you.

A question about Inception-v4

Thanks for your wonderful work! I noticed that you updated inception-v4.py 5 days ago; the modification swaps the concatenation order of the "pool branch" and "conv branch" in the stem1 block. To my understanding, this should not affect the model. Would you mind telling me why you made this modification? Thanks!
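For intuition: swapping the two bottoms of a Concat layer only permutes the channel blocks. The features themselves are unchanged, but any pretrained weights in the next layer were learned against one ordering and must be permuted to match the other. A minimal numpy sketch of the effect, with illustrative shapes:

```python
import numpy as np

# Illustrative N x C x H x W branch outputs of a stem block.
pool_branch = np.full((1, 2, 1, 1), 1.0)  # 2 "pool branch" channels
conv_branch = np.full((1, 3, 1, 1), 2.0)  # 3 "conv branch" channels

a = np.concatenate([pool_branch, conv_branch], axis=1)  # pool branch first
b = np.concatenate([conv_branch, pool_branch], axis=1)  # conv branch first

# Same multiset of channel values, but a different channel ordering,
# so downstream convolution weights must be permuted accordingly.
print(a.ravel())  # [1. 1. 2. 2. 2.]
print(b.ravel())  # [2. 2. 2. 1. 1.]
```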

About focal loss for det

Hi :)
Have you tried focal loss on FPN or other one-stage detection networks? I don't have the FPN code, so I use RON, which is very similar to FPN. I deleted the RPN and added a Python target layer to generate anchor targets and labels for every layer used for prediction, and replaced the softmax classification loss with focal loss. But when I begin training, the loss is very small and weird, so I suspect something is wrong with my target layer or my network. I'm wondering how to turn a two-stage network into a one-stage one, and how to design the Python layer that generates bbox targets, bbox inside/outside weights, and labels. Have you modified FPN this way?
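For reference, the binary focal loss from the RetinaNet paper is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). A minimal numpy sketch (not the questioner's layer) showing how it down-weights easy examples relative to plain cross-entropy:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss. p: predicted foreground probability, y: label in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)

# A confidently correct positive (p=0.9) contributes far less loss
# than a badly misclassified one (p=0.1):
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
assert easy < hard
```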

Can you give some details about training Inception-ResNet-v2?

Hi, I am trying to train faster-rcnn using inception-resnet-v2 with pascal voc2007 dataset.
I used your "caffe-model/det/faster_rcnn/models/pascal_voc/solver.prototxt" and "caffe-model/det/faster_rcnn/models/pascal_voc/inception-resnet-v2/faster_voc_inception-resnet-v2-merge-aligned.prototxt" to train.

I changed your solver.prototxt as follows.


base_lr: 0.045
lr_policy: "step"
gamma: 0.94
stepvalue: 50000
display: 20
average_loss: 100
momentum: 0.9
weight_decay: 0.0001
snapshot: 0
snapshot_prefix: "faster_inception_resnet-v2"
iter_size: 1
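For reference, with lr_policy: "step" Caffe uses lr = base_lr * gamma^floor(iter / stepvalue), so this solver decays the rate by 0.94 every 50,000 iterations; note that a base_lr of 0.045 is far larger than the 0.001 typically used for Faster R-CNN fine-tuning. A quick sketch of the schedule:

```python
def step_lr(iteration, base_lr=0.045, gamma=0.94, step=50000):
    """Caffe 'step' policy: lr = base_lr * gamma^floor(iter / step)."""
    return base_lr * gamma ** (iteration // step)

assert step_lr(0) == 0.045
assert abs(step_lr(50000) - 0.045 * 0.94) < 1e-12
assert abs(step_lr(80000) - 0.045 * 0.94) < 1e-12  # still within the second step
```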


When I ran the command : "time ./tools/train_net.py --gpu 0 --solver models/pascal_voc/Inception-resnet-v2/faster_rcnn_end2end/solver.prototxt --weights data/imagenet_models/inception-resnet-v2.caffemodel --imdb voc_2007_trainval --iters 80000 --cfg experiments/cfgs/faster_rcnn_end2end.yml"

I got this error message:


/home/yeonjae/py-faster-rcnn/tools/../lib/fast_rcnn/bbox_transform.py:48: RuntimeWarning: overflow encountered in exp
pred_w = np.exp(dw) * widths[:, np.newaxis]
/home/yeonjae/py-faster-rcnn/tools/../lib/fast_rcnn/bbox_transform.py:48: RuntimeWarning: overflow encountered in multiply
pred_w = np.exp(dw) * widths[:, np.newaxis]
/home/yeonjae/py-faster-rcnn/tools/../lib/fast_rcnn/bbox_transform.py:49: RuntimeWarning: overflow encountered in exp
pred_h = np.exp(dh) * heights[:, np.newaxis]
/home/yeonjae/py-faster-rcnn/tools/../lib/fast_rcnn/bbox_transform.py:49: RuntimeWarning: overflow encountered in multiply
pred_h = np.exp(dh) * heights[:, np.newaxis]
I1011 18:06:21.929924 3081 solver.cpp:218] Iteration 0 (0 iter/s, 0.894591s/20 iters), loss = 3486.56
I1011 18:06:21.930017 3081 solver.cpp:237] Train net output #0: loss_bbox = 3265.53 (* 1 = 3265.53 loss)
I1011 18:06:21.930033 3081 solver.cpp:237] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I1011 18:06:21.930045 3081 solver.cpp:237] Train net output #2: rpn_cls_loss = 44.6777 (* 1 = 44.6777 loss)
I1011 18:06:21.930058 3081 solver.cpp:237] Train net output #3: rpn_loss_bbox = 89.0116 (* 1 = 89.0116 loss)
Floating point exception (core dumped)


Have you seen this error? And could you please share the settings or anything else you used when training inception-resnet-v2?
Thanks!
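The overflow warnings come from exp() applied to unbounded regression deltas in bbox_transform.py; py-faster-rcnn derivatives commonly clamp dw/dh before exponentiating so a diverging bbox head cannot produce infinite box sizes. A hedged sketch of that guard (the constant name BBOX_XFORM_CLIP and its usual value are assumptions, not from this repo):

```python
import numpy as np

# Clamp log-space size deltas so exp() cannot overflow; log(1000/16) is the
# value used by common Faster R-CNN forks (name/value assumed for illustration).
BBOX_XFORM_CLIP = np.log(1000.0 / 16.0)

def safe_pred_sizes(dw, dh, widths, heights):
    dw = np.minimum(dw, BBOX_XFORM_CLIP)
    dh = np.minimum(dh, BBOX_XFORM_CLIP)
    pred_w = np.exp(dw) * widths
    pred_h = np.exp(dh) * heights
    return pred_w, pred_h

# Even absurdly large deltas now yield finite box sizes:
w, h = safe_pred_sizes(np.array([1e6]), np.array([1e6]),
                       np.array([16.0]), np.array([16.0]))
assert np.isfinite(w).all() and np.isfinite(h).all()
```

This only masks the symptom; the root cause here is more likely the very large base_lr making the regression loss (loss_bbox = 3265.53 at iteration 0) explode.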

What is CPANet?

Could you provide Context Pyramid Attention Network paper as reference? Thanks!

error: AttributeError: 'LayerParameter' object has no attribute 'weight_filler'

I ran the script inception_v1.py and got an error like this:

Traceback (most recent call last):
  File "python_googlenet.py", line 435, in <module>
    main()
  File "python_googlenet.py", line 430, in main
    prototxt = inception_v1.inception_v1_proto(batch_size=32)
  File "python_googlenet.py", line 258, in inception_v1_proto
    return n.to_proto()
  File "/home/server/caffe/python/caffe/net_spec.py", line 189, in to_proto
    top._to_proto(layers, names, autonames)
  File "/home/server/caffe/python/caffe/net_spec.py", line 97, in _to_proto
    return self.fn._to_proto(layers, names, autonames)
  File "/home/server/caffe/python/caffe/net_spec.py", line 158, in _to_proto
    assign_proto(layer, k, v)
  File "/home/server/caffe/python/caffe/net_spec.py", line 64, in assign_proto
    is_repeated_field = hasattr(getattr(proto, name), 'extend')
AttributeError: 'LayerParameter' object has no attribute 'weight_filler'

How can I fix it?
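A likely cause, though it depends on your Caffe version: net_spec routes keyword arguments such as weight_filler into the layer's parameter message (e.g. convolution_param) only when the installed caffe.proto declares that field; with an older pycaffe build the argument falls through to LayerParameter itself and raises exactly this AttributeError. Rebuilding pycaffe against a Caffe whose proto supports the generated layers usually fixes it. For reference, this is the nesting the generated prototxt needs:

```prototxt
convolution_param {
  num_output: 64
  kernel_size: 7
  weight_filler { type: "xavier" }  # nested inside convolution_param, not on the layer
}
```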

Finetuning error

Hello! Thank you for making these models public!
When fine-tuning from your Baidu Cloud models, whether Inception-v4 or Inception-ResNet-v2, I always run into this problem:
Check failed: target_blobs.size() == source_layer.blobs_size() (2 vs. 1) Incompatible number of blobs for layer conv1_3x3_s2
Could you give me some advice on how to resolve this?

Difference between Inception V4 deploy.prototxt and .caffemodel on Baidu Cloud

First of all, thank you so much for sharing your work on this repo!

I saw your response to #10 where you say the Python scripts are likely out of date.

However, I obtained the model and weights from the Baidu Cloud package, and I believe the network in the Inception V4 deploy.prototxt differs from the one in the .caffemodel. When I try to run this model, I get the following error:

Check failed: target_blobs.size() == source_layer.blobs_size() (5 vs. 3) Incompatible number of blobs for layer conv1_3x3_s2_bn

I originally got this error in DIGITS when running a train_val.prototxt I created from the deploy.prototxt file; however I can reproduce it with the following line of code in Python:

 caffe.Net('deploy_inception_v4.prototxt', 'inception_v4.caffemodel', caffe.TEST)

My interpretation of this error is that there is a conflict between the two files, but I'm pretty new to Caffe so I could be wrong. Can you confirm that the deploy.prototxt doesn't match the .caffemodel? If so, could you possibly share the deploy.prototxt that does match?

Thank you again.

Unable to get correct results from inception_v3 and v4 models

Hi,
Thank you for the models you have provided. I downloaded and tested resnet269-v2 and was able to reproduce the result reported here. I use C++ test code based on the Caffe classification example, though not exactly the same.

The problem is that I can't get correct results with this same code for the inception_v3 and v4 models. I tested the inception_v4 weights with the evaluation_cls.py you provided and its results are OK, but the C++ code, using the same version of Caffe, gives very different results. Here are the top-5 predictions for the first 5 validation images:
ILSVRC2012_val_00000001.JPEG 65 600 616 748 840 765
ILSVRC2012_val_00000002.JPEG 970 600 700 731 748 765
ILSVRC2012_val_00000003.JPEG 230 987 722 998 818 846
ILSVRC2012_val_00000004.JPEG 809 429 494 577 427 616
ILSVRC2012_val_00000005.JPEG 516 722 494 616 429 600

What do you think could be the reason?
Thanks in advance for your help
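One frequent cause of exactly this symptom is preprocessing: the Inception-family models are usually fed inputs scaled to roughly [-1, 1] (mean 128, scale 1/128) rather than the per-channel BGR mean subtraction used in the stock Caffe classification example, so C++ code written for the ResNet models feeds Inception off-distribution inputs. A sketch of that transform, assuming this is what evaluation_cls.py applies (check its mean/scale values):

```python
import numpy as np

def inception_preprocess(img_uint8):
    """Map uint8 pixels to roughly [-1, 1]: (x - 128) * 0.0078125."""
    return (img_uint8.astype(np.float32) - 128.0) * 0.0078125

x = inception_preprocess(np.array([0, 128, 255], dtype=np.uint8))
assert x.min() >= -1.0 and x.max() <= 1.0
```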

Check failed: error == cudaSuccess (74 vs. 0) misaligned address

When I fine-tune the inception-v4 model with batch size 2 or more (initializing the Train and Test nets together, batch size 2 for the Train net and 1 for the Test net), it throws the error above. It only works when the Train net batch size is 1. But when I use only the Test net (e.g., when extracting features), the batch size can be 20. What is happening? Does anyone know?

Smaller crop size for Inception v3

I am having a hard time getting good results from Inception v3. I am trying to fine-tune the model on the (Washington) RGB-D Object Dataset, but the results are much worse than what I get with Inception v2 (-0.07 accuracy). @soeaver I was wondering if you have kept the pre-trained model you used to get the reported results for crop size = 235. Given the original resolution of the images, maybe a smaller crop is enough and could also make training easier.

May I ask for your email address?

Hello, could you share your email address? I'd like to discuss some technical questions with you.

One question on inception-v3 MXNet model

First, I want to thank you for your great work and for sharing this comprehensive evaluation.
I'm not sure whether it is appropriate to post this question here, but I am very curious: on the inception-v3 MXNet page https://github.com/dmlc/mxnet-model-gallery/blob/master/imagenet-1k-inception-v3.md,
I downloaded the model file inception-v3.tar.gz, and in Inception-7-symbol.json the "FullyConnected" op "fc1" has num_hidden set to 1008, which I thought should be 1000.
And @soeaver has already converted it to a caffemodel whose "fc1" layer has num_output 1000.
I am new to MXNet and just want to know what happened to MXNet's "fc1". Thanks!
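For context: the extra outputs are a known quirk of the original Google Inception releases, which used 1008 logits (the 1000 ImageNet classes plus unused padding/background slots); converters typically keep only the 1000 real classes. A sketch of trimming such a weight matrix, assuming for illustration that the real classes occupy the first 1000 rows (which slots are padding depends on the original training setup, so check the converter you use):

```python
import numpy as np

# Hypothetical fc1 weights/bias with 1008 output slots (feature dim 2048 assumed).
w_1008 = np.random.randn(1008, 2048).astype(np.float32)
b_1008 = np.random.randn(1008).astype(np.float32)

# Keep only the 1000 real ImageNet classes; the remaining 8 rows are dropped.
w_1000, b_1000 = w_1008[:1000], b_1008[:1000]
assert w_1000.shape == (1000, 2048) and b_1000.shape == (1000,)
```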

Loss will not decrease when fine-tuning inception-resnet-v2

Hi, I'm fine-tuning the inception-resnet-v2 model provided by soeaver on my own dataset, but the loss never decreases and I can't figure out why. Could you please help me analyze it?
solver.prototxt:
net:"/opt/meituan/lishengxi/caffe/models/inception_resnet_v2/inception_resnet_v2_trainval_fromdeploy.prototxt"
test_iter: 0
test_interval: 257252
test_initialization: false
iter_size: 4
type: "RMSProp"
rms_decay: 0.9
delta: 1.0
display: 100
average_loss: 1000
base_lr: 0.000005
lr_policy: "step"
stepsize: 514504
gamma: 0.94
max_iter: 40000000
#momentum: 0.9
#weight_decay: 0.0001
snapshot: 257252
snapshot_prefix: "/data1/meituan/lishengxi/trained_model/inception_resnet_v2/inception_resnet_v2"
solver_mode: GPU
trainval.prototxt:
name: "Inception_Resnet2_Imagenet"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mirror: true
crop_size: 331
mean_file: "/data1/meituan/lishengxi/train-data/lishengxi_traindata/mean.binaryproto"
}
data_param {
source: "/data1/meituan/lishengxi/train-data/lishengxi_traindata/train-lmdb"
batch_size: 8
backend: LMDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
crop_size: 331
mean_file: "/data1/meituan/lishengxi/train-data/lishengxi_traindata/mean.binaryproto"
}
data_param {
source: "/data1/meituan/lishengxi/train-data/lishengxi_testdata/test-lmdb"
batch_size: 1
backend: LMDB
}
}
layer {
name: "conv1_3x3_s2"
type: "Convolution"
bottom: "data"
top: "conv1_3x3_s2"
param {
lr_mult: 1
decay_mult: 1
}
convolution_param {
num_output: 32
bias_term: false
pad: 0
kernel_size: 3
stride: 2
weight_filler {
type: "xavier"
std: 0.01
}
}
}
layer {
name: "conv1_3x3_s2_bn"
type: "BatchNorm"
bottom: "conv1_3x3_s2"
top: "conv1_3x3_s2"
batch_norm_param {
use_global_stats: true
eps: 0.001
}
}
layer {
name: "conv1_3x3_s2_scale"
type: "Scale"
bottom: "conv1_3x3_s2"
top: "conv1_3x3_s2"
scale_param {
bias_term: true
}
}
layer {
name: "conv1_3x3_relu"
type: "ReLU"
bottom: "conv1_3x3_s2"
top: "conv1_3x3_s2"
}
layer {
name: "conv2_3x3_s1"
type: "Convolution"
bottom: "conv1_3x3_s2"
top: "conv2_3x3_s1"
param {
lr_mult: 1
decay_mult: 1
}
convolution_param {
num_output: 32
bias_term: false
pad: 0
kernel_size: 3
stride: 1
weight_filler {
type: "xavier"
std: 0.01
}
}
}
layer {
name: "conv2_3x3_s1_bn"
type: "BatchNorm"
bottom: "conv2_3x3_s1"
top: "conv2_3x3_s1"
batch_norm_param {
use_global_stats: true
eps: 0.001
}
}
...
...
...
layer {
name: "pool_8x8_s1_drop"
type: "Dropout"
bottom: "pool_8x8_s1"
top: "pool_8x8_s1_drop"
dropout_param {
dropout_ratio: 0.2
}
}
layer {
name: "classifier_finetune"
type: "InnerProduct"
bottom: "pool_8x8_s1_drop"
top: "classifier"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 585
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "classifier"
bottom: "label"
top: "loss"
}
layer {
name: "accuracy_top1"
type: "Accuracy"
bottom: "classifier"
bottom: "label"
top: "accuracy_top1"
include {
phase: TEST
}
}
layer {
name: "accuracy_top5"
type: "Accuracy"
bottom: "classifier"
bottom: "label"
top: "accuracy_top5"
include {
phase: TEST
}
accuracy_param {
top_k: 5
}
}
Training log (excerpt):
I0630 17:39:20.930785 37004 solver.cpp:228] Iteration 0, loss = 45.8317
I0630 17:39:20.930838 37004 solver.cpp:244] Train net output #0: loss = 52.5151 (* 1 = 52.5151 loss)
I0630 17:39:20.930882 37004 sgd_solver.cpp:106] Iteration 0, lr = 5e-06
I0630 17:48:54.090333 37004 solver.cpp:228] Iteration 100, loss = 9.46255
I0630 17:48:54.090742 37004 solver.cpp:244] Train net output #0: loss = 6.26746 (* 1 = 6.26746 loss)
I0630 17:48:54.090757 37004 sgd_solver.cpp:106] Iteration 100, lr = 5e-06
I0630 18:27:09.715803 37004 solver.cpp:228] Iteration 500, loss = 7.08411
I0630 19:15:00.974705 37004 solver.cpp:228] Iteration 1000, loss = 6.70058
I0630 20:50:46.829725 37004 solver.cpp:228] Iteration 2000, loss = 6.38092
I0701 01:38:26.742046 37004 solver.cpp:228] Iteration 5000, loss = 6.37103
I0701 06:26:13.201685 37004 solver.cpp:228] Iteration 8000, loss = 6.36733
I0701 06:26:13.201864 37004 solver.cpp:244] Train net output #0: loss = 6.351 (* 1 = 6.351 loss)
I0701 06:26:13.201877 37004 sgd_solver.cpp:106] Iteration 8000, lr = 5e-06
(intermediate iterations elided: from iteration ~2000 through 8000 the averaged loss stays flat at ~6.37, with lr = 5e-06 throughout)
I0701 06:35:49.609525 37004 solver.cpp:228] Iteration 8100, loss = 6.36747
I0701 06:35:49.609714 37004 solver.cpp:244] Train net output #0: loss = 6.4463 (* 1 = 6.4463 loss)
I0701 06:35:49.609727 37004 sgd_solver.cpp:106] Iteration 8100, lr = 5e-06
I0701 06:45:25.507555 37004 solver.cpp:228] Iteration 8200, loss = 6.36747
I0701 06:45:25.507720 37004 solver.cpp:244] Train net output #0: loss = 6.40877 (* 1 = 6.40877 loss)
I0701 06:45:25.507735 37004 sgd_solver.cpp:106] Iteration 8200, lr = 5e-06
I0701 06:55:01.318944 37004 solver.cpp:228] Iteration 8300, loss = 6.3672
I0701 06:55:01.319157 37004 solver.cpp:244] Train net output #0: loss = 6.29098 (* 1 = 6.29098 loss)
I0701 06:55:01.319174 37004 sgd_solver.cpp:106] Iteration 8300, lr = 5e-06
I0701 07:04:37.129796 37004 solver.cpp:228] Iteration 8400, loss = 6.36722
I0701 07:04:37.129982 37004 solver.cpp:244] Train net output #0: loss = 6.31862 (* 1 = 6.31862 loss)
I0701 07:04:37.129997 37004 sgd_solver.cpp:106] Iteration 8400, lr = 5e-06
I0701 07:14:12.720811 37004 solver.cpp:228] Iteration 8500, loss = 6.36721
I0701 07:14:12.721050 37004 solver.cpp:244] Train net output #0: loss = 6.36596 (* 1 = 6.36596 loss)
I0701 07:14:12.721065 37004 sgd_solver.cpp:106] Iteration 8500, lr = 5e-06
I0701 07:23:47.912351 37004 solver.cpp:228] Iteration 8600, loss = 6.36702
I0701 07:23:47.912521 37004 solver.cpp:244] Train net output #0: loss = 6.33996 (* 1 = 6.33996 loss)
I0701 07:23:47.912535 37004 sgd_solver.cpp:106] Iteration 8600, lr = 5e-06
I0701 07:33:23.173938 37004 solver.cpp:228] Iteration 8700, loss = 6.36741
I0701 07:33:23.174176 37004 solver.cpp:244] Train net output #0: loss = 6.41518 (* 1 = 6.41518 loss)
I0701 07:33:23.174192 37004 sgd_solver.cpp:106] Iteration 8700, lr = 5e-06
I0701 07:42:59.137446 37004 solver.cpp:228] Iteration 8800, loss = 6.36747
I0701 07:42:59.137647 37004 solver.cpp:244] Train net output #0: loss = 6.31778 (* 1 = 6.31778 loss)
I0701 07:42:59.137676 37004 sgd_solver.cpp:106] Iteration 8800, lr = 5e-06
I0701 07:52:35.062599 37004 solver.cpp:228] Iteration 8900, loss = 6.36801
I0701 07:52:35.062813 37004 solver.cpp:244] Train net output #0: loss = 6.3531 (* 1 = 6.3531 loss)
I0701 07:52:35.062829 37004 sgd_solver.cpp:106] Iteration 8900, lr = 5e-06
I0701 08:02:10.337667 37004 solver.cpp:228] Iteration 9000, loss = 6.36803
I0701 08:02:10.337936 37004 solver.cpp:244] Train net output #0: loss = 6.30003 (* 1 = 6.30003 loss)
I0701 08:02:10.337967 37004 sgd_solver.cpp:106] Iteration 9000, lr = 5e-06
I0701 08:11:46.733788 37004 solver.cpp:228] Iteration 9100, loss = 6.36778
I0701 08:11:46.734009 37004 solver.cpp:244] Train net output #0: loss = 6.28385 (* 1 = 6.28385 loss)
I0701 08:11:46.734024 37004 sgd_solver.cpp:106] Iteration 9100, lr = 5e-06
I0701 08:21:22.723340 37004 solver.cpp:228] Iteration 9200, loss = 6.36715
I0701 08:21:22.723495 37004 solver.cpp:244] Train net output #0: loss = 6.25476 (* 1 = 6.25476 loss)
I0701 08:21:22.723508 37004 sgd_solver.cpp:106] Iteration 9200, lr = 5e-06
I0701 08:30:59.121311 37004 solver.cpp:228] Iteration 9300, loss = 6.36732
I0701 08:30:59.121498 37004 solver.cpp:244] Train net output #0: loss = 6.36934 (* 1 = 6.36934 loss)
I0701 08:30:59.121512 37004 sgd_solver.cpp:106] Iteration 9300, lr = 5e-06
I0701 08:40:35.622364 37004 solver.cpp:228] Iteration 9400, loss = 6.36709
I0701 08:40:35.622761 37004 solver.cpp:244] Train net output #0: loss = 6.37381 (* 1 = 6.37381 loss)
I0701 08:40:35.622778 37004 sgd_solver.cpp:106] Iteration 9400, lr = 5e-06
I0701 08:50:11.963475 37004 solver.cpp:228] Iteration 9500, loss = 6.36716
I0701 08:50:11.963636 37004 solver.cpp:244] Train net output #0: loss = 6.34492 (* 1 = 6.34492 loss)
I0701 08:50:11.963649 37004 sgd_solver.cpp:106] Iteration 9500, lr = 5e-06
I0701 08:59:47.968722 37004 solver.cpp:228] Iteration 9600, loss = 6.36734
I0701 08:59:47.968909 37004 solver.cpp:244] Train net output #0: loss = 6.36362 (* 1 = 6.36362 loss)
I0701 08:59:47.968922 37004 sgd_solver.cpp:106] Iteration 9600, lr = 5e-06
I0701 09:09:23.164079 37004 solver.cpp:228] Iteration 9700, loss = 6.36709
I0701 09:09:23.164289 37004 solver.cpp:244] Train net output #0: loss = 6.38415 (* 1 = 6.38415 loss)
I0701 09:09:23.164304 37004 sgd_solver.cpp:106] Iteration 9700, lr = 5e-06
I0701 09:18:58.576824 37004 solver.cpp:228] Iteration 9800, loss = 6.36717
I0701 09:18:58.577024 37004 solver.cpp:244] Train net output #0: loss = 6.35856 (* 1 = 6.35856 loss)
I0701 09:18:58.577039 37004 sgd_solver.cpp:106] Iteration 9800, lr = 5e-06
I0701 09:28:33.498967 37004 solver.cpp:228] Iteration 9900, loss = 6.36684
I0701 09:28:33.499186 37004 solver.cpp:244] Train net output #0: loss = 6.37362 (* 1 = 6.37362 loss)
I0701 09:28:33.499202 37004 sgd_solver.cpp:106] Iteration 9900, lr = 5e-06
I0701 09:38:09.326488 37004 solver.cpp:228] Iteration 10000, loss = 6.36638
I0701 09:38:09.326683 37004 solver.cpp:244] Train net output #0: loss = 6.39197 (* 1 = 6.39197 loss)
I0701 09:38:09.326697 37004 sgd_solver.cpp:106] Iteration 10000, lr = 5e-06
I0701 09:47:45.468917 37004 solver.cpp:228] Iteration 10100, loss = 6.36636
I0701 09:47:45.469141 37004 solver.cpp:244] Train net output #0: loss = 6.34749 (* 1 = 6.34749 loss)
I0701 09:47:45.469154 37004 sgd_solver.cpp:106] Iteration 10100, lr = 5e-06
I0701 09:57:20.844796 37004 solver.cpp:228] Iteration 10200, loss = 6.36639
I0701 09:57:20.844997 37004 solver.cpp:244] Train net output #0: loss = 6.33531 (* 1 = 6.33531 loss)
I0701 09:57:20.845011 37004 sgd_solver.cpp:106] Iteration 10200, lr = 5e-06
I0701 10:06:56.752022 37004 solver.cpp:228] Iteration 10300, loss = 6.36571
I0701 10:06:56.752212 37004 solver.cpp:244] Train net output #0: loss = 6.41833 (* 1 = 6.41833 loss)
I0701 10:06:56.752229 37004 sgd_solver.cpp:106] Iteration 10300, lr = 5e-06
I0701 10:16:32.718156 37004 solver.cpp:228] Iteration 10400, loss = 6.36577
I0701 10:16:32.718371 37004 solver.cpp:244] Train net output #0: loss = 6.33601 (* 1 = 6.33601 loss)
I0701 10:16:32.718401 37004 sgd_solver.cpp:106] Iteration 10400, lr = 5e-06
I0701 10:26:08.617499 37004 solver.cpp:228] Iteration 10500, loss = 6.36542
I0701 10:26:08.617682 37004 solver.cpp:244] Train net output #0: loss = 6.38808 (* 1 = 6.38808 loss)
I0701 10:26:08.617697 37004 sgd_solver.cpp:106] Iteration 10500, lr = 5e-06
I0701 10:35:44.113360 37004 solver.cpp:228] Iteration 10600, loss = 6.36547
I0701 10:35:44.113560 37004 solver.cpp:244] Train net output #0: loss = 6.36456 (* 1 = 6.36456 loss)
I0701 10:35:44.113574 37004 sgd_solver.cpp:106] Iteration 10600, lr = 5e-06
I0701 10:45:19.789994 37004 solver.cpp:228] Iteration 10700, loss = 6.36511
I0701 10:45:19.790192 37004 solver.cpp:244] Train net output #0: loss = 6.42208 (* 1 = 6.42208 loss)
I0701 10:45:19.790206 37004 sgd_solver.cpp:106] Iteration 10700, lr = 5e-06
I0701 10:54:55.508617 37004 solver.cpp:228] Iteration 10800, loss = 6.36504
I0701 10:54:55.508797 37004 solver.cpp:244] Train net output #0: loss = 6.2942 (* 1 = 6.2942 loss)
I0701 10:54:55.508810 37004 sgd_solver.cpp:106] Iteration 10800, lr = 5e-06
I0701 11:04:30.646239 37004 solver.cpp:228] Iteration 10900, loss = 6.36502
I0701 11:04:30.646423 37004 solver.cpp:244] Train net output #0: loss = 6.34523 (* 1 = 6.34523 loss)
I0701 11:04:30.646436 37004 sgd_solver.cpp:106] Iteration 10900, lr = 5e-06
I0701 11:14:06.759104 37004 solver.cpp:228] Iteration 11000, loss = 6.36526
I0701 11:14:06.759284 37004 solver.cpp:244] Train net output #0: loss = 6.351 (* 1 = 6.351 loss)
I0701 11:14:06.759297 37004 sgd_solver.cpp:106] Iteration 11000, lr = 5e-06
I0701 11:23:42.316406 37004 solver.cpp:228] Iteration 11100, loss = 6.36489
I0701 11:23:42.316603 37004 solver.cpp:244] Train net output #0: loss = 6.3084 (* 1 = 6.3084 loss)
I0701 11:23:42.316617 37004 sgd_solver.cpp:106] Iteration 11100, lr = 5e-06
I0701 11:33:18.294520 37004 solver.cpp:228] Iteration 11200, loss = 6.36515
I0701 11:33:18.294694 37004 solver.cpp:244] Train net output #0: loss = 6.30394 (* 1 = 6.30394 loss)
I0701 11:33:18.294708 37004 sgd_solver.cpp:106] Iteration 11200, lr = 5e-06
I0701 11:42:53.509534 37004 solver.cpp:228] Iteration 11300, loss = 6.36544
I0701 11:42:53.509718 37004 solver.cpp:244] Train net output #0: loss = 6.37031 (* 1 = 6.37031 loss)
I0701 11:42:53.509732 37004 sgd_solver.cpp:106] Iteration 11300, lr = 5e-06
I0701 11:52:29.102586 37004 solver.cpp:228] Iteration 11400, loss = 6.36531
I0701 11:52:29.102768 37004 solver.cpp:244] Train net output #0: loss = 6.286 (* 1 = 6.286 loss)
I0701 11:52:29.102782 37004 sgd_solver.cpp:106] Iteration 11400, lr = 5e-06
I0701 12:02:05.443864 37004 solver.cpp:228] Iteration 11500, loss = 6.36575
I0701 12:02:05.444046 37004 solver.cpp:244] Train net output #0: loss = 6.3652 (* 1 = 6.3652 loss)
I0701 12:02:05.444059 37004 sgd_solver.cpp:106] Iteration 11500, lr = 5e-06
I0701 12:11:40.850350 37004 solver.cpp:228] Iteration 11600, loss = 6.36503
I0701 12:11:40.850555 37004 solver.cpp:244] Train net output #0: loss = 6.32248 (* 1 = 6.32248 loss)
I0701 12:11:40.850569 37004 sgd_solver.cpp:106] Iteration 11600, lr = 5e-06
I0701 12:21:16.678872 37004 solver.cpp:228] Iteration 11700, loss = 6.36513
I0701 12:21:16.679083 37004 solver.cpp:244] Train net output #0: loss = 6.32152 (* 1 = 6.32152 loss)
I0701 12:21:16.679105 37004 sgd_solver.cpp:106] Iteration 11700, lr = 5e-06
I0701 12:30:52.241518 37004 solver.cpp:228] Iteration 11800, loss = 6.36501
I0701 12:30:52.241720 37004 solver.cpp:244] Train net output #0: loss = 6.36225 (* 1 = 6.36225 loss)
I0701 12:30:52.241736 37004 sgd_solver.cpp:106] Iteration 11800, lr = 5e-06
I0701 12:40:28.188256 37004 solver.cpp:228] Iteration 11900, loss = 6.36467
I0701 12:40:28.188505 37004 solver.cpp:244] Train net output #0: loss = 6.41787 (* 1 = 6.41787 loss)
I0701 12:40:28.188524 37004 sgd_solver.cpp:106] Iteration 11900, lr = 5e-06
I0701 12:50:03.910610 37004 solver.cpp:228] Iteration 12000, loss = 6.36469
I0701 12:50:03.910825 37004 solver.cpp:244] Train net output #0: loss = 6.40997 (* 1 = 6.40997 loss)
I0701 12:50:03.910856 37004 sgd_solver.cpp:106] Iteration 12000, lr = 5e-06
I0701 12:59:40.461107 37004 solver.cpp:228] Iteration 12100, loss = 6.36484
I0701 12:59:40.461313 37004 solver.cpp:244] Train net output #0: loss = 6.31551 (* 1 = 6.31551 loss)
I0701 12:59:40.461328 37004 sgd_solver.cpp:106] Iteration 12100, lr = 5e-06
I0701 13:09:16.208829 37004 solver.cpp:228] Iteration 12200, loss = 6.3649
I0701 13:09:16.209013 37004 solver.cpp:244] Train net output #0: loss = 6.33957 (* 1 = 6.33957 loss)
I0701 13:09:16.209028 37004 sgd_solver.cpp:106] Iteration 12200, lr = 5e-06
I0701 13:18:52.084708 37004 solver.cpp:228] Iteration 12300, loss = 6.36518
I0701 13:18:52.084893 37004 solver.cpp:244] Train net output #0: loss = 6.42534 (* 1 = 6.42534 loss)
I0701 13:18:52.084906 37004 sgd_solver.cpp:106] Iteration 12300, lr = 5e-06
I0701 13:28:27.970780 37004 solver.cpp:228] Iteration 12400, loss = 6.36528
I0701 13:28:27.970983 37004 solver.cpp:244] Train net output #0: loss = 6.34837 (* 1 = 6.34837 loss)
I0701 13:28:27.970999 37004 sgd_solver.cpp:106] Iteration 12400, lr = 5e-06
I0701 13:38:03.804321 37004 solver.cpp:228] Iteration 12500, loss = 6.36492
I0701 13:38:03.804510 37004 solver.cpp:244] Train net output #0: loss = 6.33786 (* 1 = 6.33786 loss)
I0701 13:38:03.804524 37004 sgd_solver.cpp:106] Iteration 12500, lr = 5e-06
I0701 13:47:39.707697 37004 solver.cpp:228] Iteration 12600, loss = 6.36566
I0701 13:47:39.717149 37004 solver.cpp:244] Train net output #0: loss = 6.40129 (* 1 = 6.40129 loss)
I0701 13:47:39.717169 37004 sgd_solver.cpp:106] Iteration 12600, lr = 5e-06
I0701 13:57:15.349875 37004 solver.cpp:228] Iteration 12700, loss = 6.36595
I0701 13:57:15.350069 37004 solver.cpp:244] Train net output #0: loss = 6.35109 (* 1 = 6.35109 loss)
I0701 13:57:15.350082 37004 sgd_solver.cpp:106] Iteration 12700, lr = 5e-06
I0701 14:06:51.413095 37004 solver.cpp:228] Iteration 12800, loss = 6.36622
I0701 14:06:51.413272 37004 solver.cpp:244] Train net output #0: loss = 6.33011 (* 1 = 6.33011 loss)
I0701 14:06:51.413287 37004 sgd_solver.cpp:106] Iteration 12800, lr = 5e-06
I0701 14:16:27.170462 37004 solver.cpp:228] Iteration 12900, loss = 6.36621
I0701 14:16:27.170661 37004 solver.cpp:244] Train net output #0: loss = 6.26113 (* 1 = 6.26113 loss)
I0701 14:16:27.170675 37004 sgd_solver.cpp:106] Iteration 12900, lr = 5e-06
I0701 14:26:03.400574 37004 solver.cpp:228] Iteration 13000, loss = 6.36623
I0701 14:26:03.400774 37004 solver.cpp:244] Train net output #0: loss = 6.32122 (* 1 = 6.32122 loss)
I0701 14:26:03.400789 37004 sgd_solver.cpp:106] Iteration 13000, lr = 5e-06
I0701 14:35:39.557139 37004 solver.cpp:228] Iteration 13100, loss = 6.36602
I0701 14:35:39.557322 37004 solver.cpp:244] Train net output #0: loss = 6.33821 (* 1 = 6.33821 loss)
I0701 14:35:39.557337 37004 sgd_solver.cpp:106] Iteration 13100, lr = 5e-06
I0701 14:45:15.261057 37004 solver.cpp:228] Iteration 13200, loss = 6.36586
I0701 14:45:15.261271 37004 solver.cpp:244] Train net output #0: loss = 6.36026 (* 1 = 6.36026 loss)
I0701 14:45:15.261286 37004 sgd_solver.cpp:106] Iteration 13200, lr = 5e-06
I0701 14:54:50.477665 37004 solver.cpp:228] Iteration 13300, loss = 6.36576
I0701 14:54:50.477852 37004 solver.cpp:244] Train net output #0: loss = 6.47356 (* 1 = 6.47356 loss)
I0701 14:54:50.477866 37004 sgd_solver.cpp:106] Iteration 13300, lr = 5e-06
I0701 15:04:26.470077 37004 solver.cpp:228] Iteration 13400, loss = 6.36598
I0701 15:04:26.470291 37004 solver.cpp:244] Train net output #0: loss = 6.41587 (* 1 = 6.41587 loss)
I0701 15:04:26.470306 37004 sgd_solver.cpp:106] Iteration 13400, lr = 5e-06
I0701 15:14:01.931891 37004 solver.cpp:228] Iteration 13500, loss = 6.36585
I0701 15:14:01.932154 37004 solver.cpp:244] Train net output #0: loss = 6.32254 (* 1 = 6.32254 loss)
I0701 15:14:01.932173 37004 sgd_solver.cpp:106] Iteration 13500, lr = 5e-06
I0701 15:23:37.722364 37004 solver.cpp:228] Iteration 13600, loss = 6.3652
I0701 15:23:37.722725 37004 solver.cpp:244] Train net output #0: loss = 6.29158 (* 1 = 6.29158 loss)
I0701 15:23:37.722765 37004 sgd_solver.cpp:106] Iteration 13600, lr = 5e-06
I0701 15:33:13.793253 37004 solver.cpp:228] Iteration 13700, loss = 6.3652
I0701 15:33:13.793450 37004 solver.cpp:244] Train net output #0: loss = 6.3315 (* 1 = 6.3315 loss)
I0701 15:33:13.793464 37004 sgd_solver.cpp:106] Iteration 13700, lr = 5e-06
I0701 15:42:49.779914 37004 solver.cpp:228] Iteration 13800, loss = 6.36529
I0701 15:42:49.780091 37004 solver.cpp:244] Train net output #0: loss = 6.32884 (* 1 = 6.32884 loss)
I0701 15:42:49.780103 37004 sgd_solver.cpp:106] Iteration 13800, lr = 5e-06
I0701 15:52:26.196202 37004 solver.cpp:228] Iteration 13900, loss = 6.36544
I0701 15:52:26.196405 37004 solver.cpp:244] Train net output #0: loss = 6.29431 (* 1 = 6.29431 loss)
I0701 15:52:26.196420 37004 sgd_solver.cpp:106] Iteration 13900, lr = 5e-06
I0701 16:02:02.627634 37004 solver.cpp:228] Iteration 14000, loss = 6.36515
I0701 16:02:02.627825 37004 solver.cpp:244] Train net output #0: loss = 6.32216 (* 1 = 6.32216 loss)
I0701 16:02:02.627838 37004 sgd_solver.cpp:106] Iteration 14000, lr = 5e-06
I0701 16:11:38.249892 37004 solver.cpp:228] Iteration 14100, loss = 6.36523
I0701 16:11:38.250103 37004 solver.cpp:244] Train net output #0: loss = 6.27742 (* 1 = 6.27742 loss)
I0701 16:11:38.250119 37004 sgd_solver.cpp:106] Iteration 14100, lr = 5e-06
I0701 16:21:13.691813 37004 solver.cpp:228] Iteration 14200, loss = 6.36555
I0701 16:21:13.691992 37004 solver.cpp:244] Train net output #0: loss = 6.37257 (* 1 = 6.37257 loss)
I0701 16:21:13.692008 37004 sgd_solver.cpp:106] Iteration 14200, lr = 5e-06
I0701 16:30:49.557400 37004 solver.cpp:228] Iteration 14300, loss = 6.36498
I0701 16:30:49.557606 37004 solver.cpp:244] Train net output #0: loss = 6.35166 (* 1 = 6.35166 loss)
I0701 16:30:49.557621 37004 sgd_solver.cpp:106] Iteration 14300, lr = 5e-06
I0701 16:40:25.440153 37004 solver.cpp:228] Iteration 14400, loss = 6.36507
I0701 16:40:25.440317 37004 solver.cpp:244] Train net output #0: loss = 6.34774 (* 1 = 6.34774 loss)
I0701 16:40:25.440331 37004 sgd_solver.cpp:106] Iteration 14400, lr = 5e-06
I0701 16:50:02.165655 37004 solver.cpp:228] Iteration 14500, loss = 6.36521
I0701 16:50:02.165823 37004 solver.cpp:244] Train net output #0: loss = 6.38507 (* 1 = 6.38507 loss)
I0701 16:50:02.165838 37004 sgd_solver.cpp:106] Iteration 14500, lr = 5e-06
I0701 16:59:38.122241 37004 solver.cpp:228] Iteration 14600, loss = 6.36536
I0701 16:59:38.122443 37004 solver.cpp:244] Train net output #0: loss = 6.38732 (* 1 = 6.38732 loss)
I0701 16:59:38.122459 37004 sgd_solver.cpp:106] Iteration 14600, lr = 5e-06

Train with inception_v4_train_test.prototxt: Check failed: top_shape[j] == bottom[i]->shape(j)

This is the error message:

I0613 10:06:28.511732 16295 net.cpp:84] Creating Layer inception_stem3
I0613 10:06:28.511740 16295 net.cpp:406] inception_stem3 <- inception_stem3_3x3_s2
I0613 10:06:28.511749 16295 net.cpp:406] inception_stem3 <- inception_stem3_pool
I0613 10:06:28.511759 16295 net.cpp:380] inception_stem3 -> inception_stem3
F0613 10:06:28.511782 16295 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (25 vs. 26) All inputs must have the same shape, except at concat_axis.
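For context on how a mismatch like this arises: Caffe computes convolution output sizes with floor but pooling output sizes (by default) with ceil, so the stride-2 conv branch and the stride-2 pool branch feeding inception_stem3 can disagree by one pixel for certain input sizes. A minimal sketch of the arithmetic (the spatial size 52 is a hypothetical example, not taken from the log above):

```python
import math

def caffe_conv_out(h, k, s, p=0):
    # Caffe convolution: floor((h + 2*p - k) / s) + 1
    return (h + 2 * p - k) // s + 1

def caffe_pool_out(h, k, s, p=0):
    # Caffe pooling (default): ceil((h + 2*p - k) / s) + 1
    return math.ceil((h + 2 * p - k) / s) + 1

h = 52  # hypothetical spatial size entering the two stem3 branches
print(caffe_conv_out(h, k=3, s=2))  # 25 (3x3/2 conv branch)
print(caffe_pool_out(h, k=3, s=2))  # 26 (3x3/2 pool branch) -> concat fails: 25 vs. 26
```

Picking an input resolution for which such branch pairs agree (or disabling ceil mode in the pooling layers, where the Caffe build supports it) avoids the mismatch.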

xception 224x224 cannot run

I get the following error:

xception2_elewise <- xception2_match_conv
xception2_elewise <- xception2_pool
xception2_elewise -> xception2_elewise
Check failed: bottom[0]->shape() == bottom[i]->shape() bottom[0]: 1 256 28 28 (200704), bottom[1]: 1 256 27 27 (186624)

Training ILSVRC12 on inception v4 / inception_resnet

@soeaver have you by any chance done some training of ILSVRC12 on your inception v4 / inception_resnet2? I would be interested in ILSVRC12 pre-trained models for both architectures, so I started training inception_resnet2 a couple of days ago. But since this is taking quite some time, I thought maybe you have already taken some steps in that direction yourself and we could save some effort / share?

How to change network?

I'm sorry to bother you. I'm new to deep learning, and I want to use these models in object detection networks like SSD or YOLO, but I don't know how to do it. Is it enough to just change train.prototxt and deploy.prototxt, or do I need to add something else?

Message type "caffe.PoolingParameter" has no field named "ceil_mode"

Hi!
There is an error when I fine-tune on my task using your ResNeXt-50 prototxt:
Protobuf: Error parsing text-format caffe.NetParameter: 104:14: Message type "caffe.PoolingParameter" has no field named "ceil_mode". at ..\src\google\protobuf\text_format.cc line 274

Could you help me solve this problem?
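If rebuilding Caffe with an updated caffe.proto (or using the py-RFCN-priv fork) is not an option, a stopgap is to strip the unsupported field from the prototxt before loading it. This is only a sketch, and note the caveat: deleting a `ceil_mode: false` line makes pooling fall back to Caffe's default ceil behavior, which can change output shapes downstream.

```python
def strip_ceil_mode(src_path, dst_path):
    # Remove every line containing the `ceil_mode` field (unknown to stock
    # caffe.proto). Assumes the field sits on its own line, as in these prototxts.
    with open(src_path) as f:
        kept = [line for line in f if "ceil_mode" not in line]
    with open(dst_path, "w") as f:
        f.writelines(kept)
```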

What does `priv` stand for?

The ResNet-18 proto filename deploy_resnet18-priv.prototxt is different from the others; does priv have any special meaning?

target_blobs.size() == source_layer.blobs_size() (2 vs. 1)

I use resnext101-64x4d (R-FCN) for detection, with resnext101-64x4d-merge.caffemodel as the pre-trained model,
but it reports this error:
Check failed: target_blobs.size() == source_layer.blobs_size() (2 vs. 1) Incompatible number of blobs for layer conv1
Is my pre-trained model not suitable for resnext101-64x4d?
Could you help me?

Error when fine-tuning the classification models

It reports the following error:
Check failed: error == cudaSuccess (74 vs. 0) misaligned address
Training only works when the batch size is set to 1; both 32 and 64 fail. The 24 GB of GPU memory should be more than enough, so I don't know where the problem is.
Could the pre-trained model be the issue? Do you have other versions of the ResNeXt and Inception pre-trained models?

Multi-scale training and testing for detection

How can I implement multi-scale training and testing?
I changed the train.scales parameter in the config file from the single value 600 to (200, 300, 400, 500, 600, 700, 800), and training runs fine, but testing fails after changing test.scales to multiple scales. Does the original code not support multi-scale testing?
Another question: besides changing the image scales, should the anchor sizes be adjusted? Were the original 9 anchors designed only for the 600 scale, and should smaller and larger anchors be introduced for multi-scale training?

Merge scale layer

Hello.
Great work!
In all models you merge the BatchNorm layer, but keep the Scale layer with lr_mult: 0. Why can't we merge the Scale layer too? (I tried this, but got poor results; why?)
Maybe a BatchNorm layer after roi_pooling would be useful.
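Not the author, but for reference: folding a frozen BatchNorm + Scale pair into the preceding convolution is arithmetically straightforward. A hedged numpy sketch (function and argument names are illustrative, not this repo's tooling):

```python
import numpy as np

def fold_bn_scale_into_conv(W, b, mean, var, gamma, beta, eps=1e-5):
    """Fold BatchNorm (use_global_stats: true) + Scale into the previous conv.
    W: (out_c, in_c, kh, kw); b, mean, var, gamma, beta: (out_c,).
    Only valid at inference time, when the BN statistics are frozen."""
    factor = gamma / np.sqrt(var + eps)        # per-output-channel multiplier
    W_folded = W * factor[:, None, None, None]
    b_folded = (b - mean) * factor + beta
    return W_folded, b_folded
```

If folding gives poor results, a common culprit is using an eps that does not match the BatchNorm layer's eps, or folding a BN whose statistics were not actually frozen (use_global_stats: false).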

The dropout of WRN

Thanks for sharing.
I recently read the WRN paper, and the authors note that they apply dropout in WRN. Have you tried using dropout, and if so, what was the performance?

Caffe version for training inception-resnet-v2

Hi, I am trying to train Faster R-CNN with inception-resnet-v2 on my own dataset.
I was wondering whether I have to install py-RFCN-priv, or whether I can just use the original Caffe?

Size of images in validation set

I see that the validation images are 500x375. In the evaluation Python script you can set one value for the base_size variable, but since the val images are not square, how can I use evaluation_cls.py? It throws this error:

Check failed: top_shape[j] == bottom[i]->shape(j) (12 vs. 13) All inputs must have the same shape, except at concat_axis.

Thanks.
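A common way to evaluate non-square validation images is to resize the shorter side to base_size and then take a center crop of the network's input size; the sketch below shows that convention (I'm assuming this is what evaluation_cls.py's base_size is meant for):

```python
def short_side_resize_dims(h, w, base_size):
    # Scale so the shorter side equals base_size, preserving aspect ratio.
    if h <= w:
        return base_size, int(round(w * base_size / h))
    return int(round(h * base_size / w)), base_size

def center_crop_box(h, w, crop):
    # (top, left, bottom, right) of a centered crop x crop window.
    top = (h - crop) // 2
    left = (w - crop) // 2
    return top, left, top + crop, left + crop

print(short_side_resize_dims(375, 500, 320))   # (320, 427)
print(center_crop_box(320, 427, 299))          # (10, 64, 309, 363)
```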

A little problem when running the trained inception_v4.caffemodel

I trained inception-v4 for detection. When I use the trained caffemodel for testing, it reports:

File "./tools/demo_inception.py", line 378, in <module>
    _, _ = im_detect(net, im)
File "/home/hytz/Downloads/DB-R-FCN_scale_4/tools/../lib/fast_rcnn/test.py", line 173, in im_detect
    pred_boxes = bbox_transform_inv(boxes, box_deltas)
File "/home/hytz/Downloads/DB-R-FCN_scale_4/tools/../lib/fast_rcnn/bbox_transform.py", line 53, in bbox_transform_inv
    pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w
ValueError: could not broadcast input array from shape (300,2,300,1) into shape (300,2,1,1)

I carefully checked deploy.prototxt and train.prototxt, and they don't seem to have any problem.
In test.py the box_deltas should be two-dimensional, but it is actually four-dimensional. Could you help me?

inception_v4 test error

Hello, could you please take a look at what is wrong with my training?
Training data: 10,000 classes of face images.
Training prototxt: the inception_v4 downloaded from the link you provided.
3train_val_inception_v4.txt
deploy_v4_1.txt

Training went fine; although my validation set has only 8 images, the reported accuracy rose to 0.91.

But when I test with the downloaded
evaluation_cls.txt
no matter which face image I feed in, the index of the maximum output score is always 7320. I don't know what went wrong. I tried setting use_global_stats: false in the BatchNorm layers of deploy_v4_1.txt, and then the index of the maximum score is always 248.

Thank you very much for your answer!
非常感谢您的回答!

In DenseNet, there should not have pad=1 in pool1 in caffe version

Hi,
Your implementation of DenseNet uses pad=1 in pool1, as in the original Torch version.

But pooling is implemented differently in Caffe and Torch: the former uses ceil((imsize + 2*pad - kernel) / stride) + 1 while the latter uses floor((imsize + 2*pad - kernel) / stride) + 1.

So your feature maps differ from those in the paper.
Just removing the pad=1 in pool1 will fix it.
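Plugging numbers into the two formulas above makes the off-by-one concrete (assuming a 224x224 input, so the map entering pool1 is 112x112 after the 7x7/2, pad 3 conv):

```python
import math

def caffe_pool(h, k, s, p):
    return math.ceil((h + 2 * p - k) / s) + 1   # Caffe: ceil

def torch_pool(h, k, s, p):
    return math.floor((h + 2 * p - k) / s) + 1  # Torch: floor

h = 112  # feature map entering pool1 for a 224x224 input (assumed)
print(torch_pool(h, 3, 2, 1))  # 56, as in the paper
print(caffe_pool(h, 3, 2, 1))  # 57, one pixel too large
print(caffe_pool(h, 3, 2, 0))  # 56, matches once pad=1 is removed
```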

The wrn prototxt file in cls

deploy_wrn50-2.prototxt doesn't seem to be the real WRN network; I think it's just the normal ResNet-50.

fail in finetuning xception with caffe

It throws the error "Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR".
My CUDA version is 8.0 and my cuDNN is v5.
Can someone help me?

range of input values for the models

Hi soeaver,

What is the range of input values for inception-resnet and resnet? Could you please tell me which one is right: [0,255], [0,1] or [-1,1]? Thanks.
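Not the author, but here are the three conventions as they are usually implemented; which one a given caffemodel expects has to be checked against its training prototxt (the BGR mean values below are the common ILSVRC ones, an assumption on my part, not values documented by this repo):

```python
import numpy as np

def preprocess(img_uint8, mode):
    # img_uint8: HxWx3 uint8 BGR image; returns float32.
    x = img_uint8.astype(np.float32)
    if mode == "mean_sub":   # [0,255] minus per-channel mean (ResNet-style)
        return x - np.array([104.0, 117.0, 123.0], dtype=np.float32)
    if mode == "unit":       # [0,1]
        return x / 255.0
    if mode == "inception":  # [-1,1] (Inception-style)
        return x / 127.5 - 1.0
    raise ValueError(mode)
```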

FineTune Resnext50

Hi

Are the train_val and solver prototxt files available for ResNeXt-50, or any other ResNeXt? I've been trying to fine-tune the provided model, but the loss seems to be stuck at about 5.3.

Thank you

Can't reproduce the reported accuracies in Inception-V4 and resnext101-64x4d.

Hi,
I'm having trouble with the Inception-V4 and resnext101-64x4d caffemodels: I can't reproduce the reported accuracies and only get around 0.1%, which is just a random guess. I've tried both my own Python script (derived from the Caffe examples) and the provided one (I'm aware of crop_size and base_size and change them accordingly).
I downloaded the validation images from ImageNet and used their own val.txt, which is sorted, unlike yours.
Do you have any idea what the problem could be?
Thanks

inception-resnet-v2 224/299

I want to reproduce the accuracy of inception-resnet-v2. However, when I resize the short side to 299 and take a single 224 crop, I get this error in a concat layer:

expected (1,0,12,12), got (1,320,13,13)

With 331/331 it works.

Would you mind share resnet34?

I trained ResNet-34 myself, but the accuracy is only 62.2 (top-1) with 224x224 images.
Would you mind sharing your ResNet-34 model?
