I am using tf-nightly-gpu (dev20190205) and Keras 2.2.4, with TensorFlow as the Keras backend.
The result for the first predictor seems fine. Here is the training log with 1/29 predictors:
[0000] test = 400.717, test_dummy = 35.730, train = 250.169, val = 361.254.
[0001] test = 229.488, test_dummy = 35.730, train = 196.438, val = 218.495.
[0002] test = 95.848, test_dummy = 35.730, train = 128.429, val = 67.883.
[0003] test = 131.620, test_dummy = 35.730, train = 86.672, val = 87.068.
[0004] test = 127.860, test_dummy = 35.730, train = 83.004, val = 86.102.
[0005] test = 121.149, test_dummy = 35.730, train = 79.777, val = 83.238.
[0006] test = 117.143, test_dummy = 35.730, train = 76.550, val = 82.130.
[0007] test = 116.458, test_dummy = 35.730, train = 73.160, val = 83.352.
[0008] test = 109.620, test_dummy = 35.730, train = 69.687, val = 80.677.
[0009] test = 96.495, test_dummy = 35.730, train = 65.771, val = 74.193.
[0010] test = 89.862, test_dummy = 35.730, train = 61.930, val = 72.068.
[0011] test = 87.194, test_dummy = 35.730, train = 57.443, val = 72.924.
[0012] test = 82.010, test_dummy = 35.730, train = 53.082, val = 72.291.
[0013] test = 70.244, test_dummy = 35.730, train = 50.106, val = 67.062.
[0014] test = 75.681, test_dummy = 35.730, train = 48.663, val = 72.526.
[0015] test = 64.220, test_dummy = 35.730, train = 47.879, val = 66.390.
[0016] test = 64.904, test_dummy = 35.730, train = 47.895, val = 67.356.
[0017] test = 70.042, test_dummy = 35.730, train = 47.376, val = 70.178.
[0018] test = 72.951, test_dummy = 35.730, train = 47.125, val = 72.526.
[0019] test = 62.617, test_dummy = 35.730, train = 47.048, val = 66.888.
[0020] test = 62.307, test_dummy = 35.730, train = 47.006, val = 66.946.
[0021] test = 63.614, test_dummy = 35.730, train = 46.580, val = 67.927.
[0022] test = 64.639, test_dummy = 35.730, train = 46.480, val = 68.664.
[0023] test = 63.164, test_dummy = 35.730, train = 46.315, val = 68.060.
[0024] test = 62.632, test_dummy = 35.730, train = 46.085, val = 67.720.
[0025] test = 63.270, test_dummy = 35.730, train = 46.297, val = 68.020.
[0026] test = 57.431, test_dummy = 35.730, train = 45.896, val = 64.943.
[0027] test = 62.448, test_dummy = 35.730, train = 45.619, val = 67.574.
[0028] test = 62.062, test_dummy = 35.730, train = 45.559, val = 67.545.
[0029] test = 66.334, test_dummy = 35.730, train = 45.404, val = 69.701.
[0030] test = 61.042, test_dummy = 35.730, train = 45.344, val = 66.581.
[0031] test = 66.224, test_dummy = 35.730, train = 45.410, val = 69.639.
[0032] test = 62.496, test_dummy = 35.730, train = 45.056, val = 67.591.
[0033] test = 56.331, test_dummy = 35.730, train = 44.912, val = 64.265.
[0034] test = 66.461, test_dummy = 35.730, train = 44.732, val = 69.361.
[0035] test = 63.552, test_dummy = 35.730, train = 44.615, val = 67.774.
[0036] test = 61.630, test_dummy = 35.730, train = 44.543, val = 66.960.
[0037] test = 57.977, test_dummy = 35.730, train = 44.369, val = 64.616.
[0038] test = 59.852, test_dummy = 35.730, train = 44.320, val = 65.562.
[0039] test = 60.299, test_dummy = 35.730, train = 44.227, val = 65.858.
[0040] test = 64.031, test_dummy = 35.730, train = 44.120, val = 67.900.
[0041] test = 60.272, test_dummy = 35.730, train = 43.909, val = 65.952.
[0042] test = 62.131, test_dummy = 35.730, train = 43.859, val = 66.975.
[0043] test = 57.825, test_dummy = 35.730, train = 43.709, val = 64.676.
[0044] test = 57.408, test_dummy = 35.730, train = 43.567, val = 64.322.
[0045] test = 64.868, test_dummy = 35.730, train = 43.395, val = 68.591.
[0046] test = 59.758, test_dummy = 35.730, train = 43.315, val = 65.871.
[0047] test = 63.039, test_dummy = 35.730, train = 43.768, val = 67.353.
[0048] test = 62.418, test_dummy = 35.730, train = 43.128, val = 67.107.
[0049] test = 60.137, test_dummy = 35.730, train = 43.138, val = 65.651.
[0050] test = 59.793, test_dummy = 35.730, train = 42.966, val = 65.658.
[0051] test = 62.317, test_dummy = 35.730, train = 42.779, val = 67.082.
[0052] test = 54.273, test_dummy = 35.730, train = 42.689, val = 62.452.
[0053] test = 58.119, test_dummy = 35.730, train = 42.697, val = 64.138.
[0054] test = 56.406, test_dummy = 35.730, train = 42.457, val = 63.394.
[0055] test = 58.466, test_dummy = 35.730, train = 42.403, val = 64.424.
[0056] test = 55.233, test_dummy = 35.730, train = 42.242, val = 62.729.
[0057] test = 56.947, test_dummy = 35.730, train = 42.130, val = 63.660.
[0058] test = 59.724, test_dummy = 35.730, train = 42.152, val = 64.817.
[0059] test = 53.021, test_dummy = 35.730, train = 41.852, val = 61.323.
[0060] test = 51.008, test_dummy = 35.730, train = 41.977, val = 60.155.
[0061] test = 54.600, test_dummy = 35.730, train = 41.845, val = 61.994.
[0062] test = 49.409, test_dummy = 35.730, train = 41.783, val = 59.586.
...
And here is the log with 27/29 predictors:
[0000] test = 353.166, test_dummy = 35.730, train = 29.346, val = 144.498.
[0001] test = 350.791, test_dummy = 35.730, train = 19.965, val = 145.339.
[0002] test = 351.112, test_dummy = 35.730, train = 16.450, val = 151.287.
[0003] test = 349.421, test_dummy = 35.730, train = 14.289, val = 152.010.
[0004] test = 353.675, test_dummy = 35.730, train = 13.170, val = 152.310.
[0005] test = 352.262, test_dummy = 35.730, train = 12.588, val = 149.141.
[0006] test = 357.054, test_dummy = 35.730, train = 10.941, val = 153.442.
[0007] test = 354.914, test_dummy = 35.730, train = 10.309, val = 157.844.
[0008] test = 342.204, test_dummy = 35.730, train = 12.243, val = 154.254.
[0009] test = 344.954, test_dummy = 35.730, train = 9.946, val = 152.557.
[0010] test = 348.354, test_dummy = 35.730, train = 9.138, val = 154.026.
[0011] test = 347.432, test_dummy = 35.730, train = 9.539, val = 154.585.
[0012] test = 353.018, test_dummy = 35.730, train = 9.318, val = 154.395.
[0013] test = 357.550, test_dummy = 35.730, train = 8.292, val = 155.190.
[0014] test = 353.914, test_dummy = 35.730, train = 7.926, val = 154.913.
[0015] test = 355.490, test_dummy = 35.730, train = 7.585, val = 156.675.
[0016] test = 354.821, test_dummy = 35.730, train = 8.850, val = 155.362.
[0017] test = 354.489, test_dummy = 35.730, train = 7.336, val = 152.145.
[0018] test = 356.071, test_dummy = 35.730, train = 8.081, val = 154.395.
[0019] test = 352.757, test_dummy = 35.730, train = 7.413, val = 154.405.
[0020] test = 354.936, test_dummy = 35.730, train = 9.924, val = 156.035.
[0021] test = 348.511, test_dummy = 35.730, train = 7.375, val = 151.506.
[0022] test = 348.432, test_dummy = 35.730, train = 7.027, val = 154.944.
[0023] test = 347.811, test_dummy = 35.730, train = 6.719, val = 150.221.
[0024] test = 351.702, test_dummy = 35.730, train = 7.312, val = 155.571.
[0025] test = 350.940, test_dummy = 35.730, train = 6.994, val = 152.205.
[0026] test = 352.935, test_dummy = 35.730, train = 6.384, val = 153.850.
[0027] test = 356.619, test_dummy = 35.730, train = 6.487, val = 156.018.
The training loss keeps decreasing, but the test MAPE stays large. Is this normal? And why does test_dummy never change?
I am a complete beginner in deep learning and look forward to your reply.
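For context, this is how I understand the metrics: MAPE compares predictions to targets as a percentage, and a dummy baseline that always predicts the same constant will score the same on a fixed test set every epoch. A minimal sketch with made-up numbers (the `mape` helper and the toy arrays are illustrative, not my actual data or code):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Toy data for illustration only.
y_train = np.array([10.0, 12.0, 8.0, 11.0])
y_test = np.array([9.0, 13.0, 10.0])

# A dummy predictor that always outputs the training-set mean:
# its predictions on a fixed test set never change between epochs,
# so its MAPE is a constant.
dummy_pred = np.full_like(y_test, y_train.mean())
print(mape(y_test, dummy_pred))
```

If my test_dummy is a constant baseline like this, that would explain why it stays at 35.730 throughout training; please correct me if that reasoning is wrong.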