All notebooks use Keras 2.2.4 and TensorFlow 1.11. The Batch Normalization layer in this version of Keras is implemented in a way that: during training, the network always uses the mini-batch statistics, whether the BN layer is frozen or not; during inference, it uses the previously learned statistics of the frozen BN layers. As a result, if you fine-tune the top layers, their weights are adjusted to the mean/variance of the new dataset; during inference, however, they receive data scaled with the mean/variance of the original dataset. Consequently, if you use Keras's example code for fine-tuning Inception V3, or any network with batch-norm layers, the results will be very bad. Please refer to issues #9965 and #9214. One temporary solution is:
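One workaround commonly cited in those issue threads is to build the frozen base model with the backend learning phase forced to inference mode, so its BN layers use their stored moving statistics, and then switch back to training mode before adding the new trainable head. A minimal sketch, assuming Keras 2.2.4 with the TensorFlow backend; the head architecture and class count below are illustrative, not the repo's exact setup:

```python
# Sketch of the learning-phase workaround discussed in keras-team/keras#9965.
# Assumes Keras 2.2.4 with the TensorFlow backend; layer sizes are illustrative.
import keras.backend as K
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Force inference mode so the frozen BN layers use their stored
# moving mean/variance instead of mini-batch statistics.
K.set_learning_phase(0)
base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(299, 299, 3))
for layer in base_model.layers:
    layer.trainable = False

# Switch back to training mode before building the new trainable head,
# so its layers behave normally during fine-tuning.
K.set_learning_phase(1)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 classes, illustrative

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

With this setup the frozen BN layers normalize with the original ImageNet statistics both during fine-tuning and at inference time, so the top layers see consistently scaled inputs; the trade-off is that every layer created before the phase switch is permanently locked to inference behavior.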
Hi,
Just to say thank you, because you've done some great work. I mean, your results seem honest compared to some studies that claim something like 99% without providing any details. And you provide your sources, the dataset, and the issues you faced with batch norm (I ran into the same trouble).

I planned to reuse your DenseNet approach on my dermoscopy dataset, and that's my point: do you have any published work on these results? I would like to cite your work in a paper I'm currently writing.
Have a nice day