I found that during model training, RAM usage increases continuously.
A deep dive into the sources shows that in the function model_tensor_input_ff,
memory is first allocated for post_activation,
and then post_activation
is reassigned to the input pointer passed to the function, so the allocated memory is never freed. Here is the excerpt from model.c:
void model_tensor_input_ff(model* m, int tensor_depth, int tensor_i, int tensor_j, float* input){
    if(m == NULL)
        return;
    int i,j,z,w,count,count2,z2,k1 = 0, k2 = 0, k3 = 0;
    /* Setting the input inside a convolutional structure*/
    cl* temp = (cl*)malloc(sizeof(cl));
    temp->post_activation = (float*)malloc(sizeof(float)*tensor_depth*tensor_i*tensor_j);
    temp->normalization_flag = NO_NORMALIZATION;
    temp->pooling_flag = NO_POOLING;
    temp->activation_flag = SIGMOID;
    temp->n_kernels = tensor_depth;
    temp->rows1 = tensor_i;
    temp->cols1 = tensor_j;
    temp->post_activation = input;
    temp->layer = -1;