Readme.md can be converted to Contents.m, which Matlab uses in its internal help system.
Code itself should be made "pretty", with good comments -- i.e., something we'd be happy for someone else to see.
We should look at its computational complexity. If it really takes 5 days to run, that's OK, but maybe there's something that can be done to take parts of it that are currently interpreted and make them calls to built-in Matlab functions, which would be faster. Let's do a code review when everything is up to see about this.
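As one illustration of the kind of speedup available, here is a hedged sketch of replacing an interpreted loop with a built-in MATLAB call. The variable names (`spikeTimes`, `tMax`) are illustrative, not taken from the actual code; this uses 10 ms bins to match the binned spike counts described below.

```matlab
% Sketch: binning spike times into 10 ms bins.
binWidth = 10;                          % ms (assumed units)
edges = 0:binWidth:tMax;

% Interpreted loop (slow):
counts = zeros(1, numel(edges) - 1);
for k = 1:numel(spikeTimes)
    b = floor(spikeTimes(k) / binWidth) + 1;
    counts(b) = counts(b) + 1;
end

% Built-in equivalent (fast, R2014b and later):
counts2 = histcounts(spikeTimes, edges);
```

The built-in version runs in compiled code, so hot spots like this are good candidates for the code review.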
The lines of code in question are below:
```matlab
outfile = [h5dir '/Binned/burst_', num2str(i), '.csv'];
csvwrite(outfile, frames);
```
Making this change will also require changes in the functions that read from these burst_n.csv files. It will improve those functions' runtime, since they will no longer need to call csvread on thousands of CSV files. The known readers are getBurstOriginXYN and getBurstSpeed; there may be more.
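A minimal sketch of the replacement, assuming each burst is stored as a dataset inside the existing HDF5 file (the `h5file` variable and the `/Binned/burst_n` dataset naming are assumptions, not confirmed against the surrounding code):

```matlab
% Sketch: write each burst into the HDF5 file instead of a separate CSV.
dset = sprintf('/Binned/burst_%d', i);
h5create(h5file, dset, size(frames));   % create dataset sized to the data
h5write(h5file, dset, frames);

% Readers such as getBurstOriginXYN would then replace csvread with:
% frames = h5read(h5file, dset);
```

This keeps all burst data in one file and avoids the per-file open/parse overhead of csvread.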
So, the code below is basically all we save when we run a simulation and produce XML output: the binned spike counts (for 10 ms bins). Of course, this only works for neural simulations; we want to move to saving whatever the recorder is told to save. It might be good to preserve this computation temporarily as a double-check for regression testing, but it should go. It also does the computation in a poor way: we should never compute the time of an event from the beginning of the simulation, only the interval between events, because simulations can be very long (too long for an absolute event time to be stored accurately in a double).
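A hedged sketch of the interval-based approach (the `eventTimes` name is illustrative, not from the recorder code):

```matlab
% Sketch: store inter-event intervals rather than absolute times.
% Intervals stay small, where double spacing is tiny; absolute times
% late in a long simulation lose resolution (eps(t) grows with t).
intervals = diff(eventTimes);

% Times relative to a local window start (not the simulation start)
% can be recovered without accumulating the full simulation duration:
relTimes = cumsum([0, intervals]);
```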
Currently, it is in Matlab and takes 2-4 days to run for a full neuronal development simulation. This could certainly be sped up in C++. The main algorithms aren't complicated, but a rewrite will require reading data from an HDF5 file in C++.
Matlab now has the ability to load XML files and supports traversing the DOM to extract contents. So, there is no need to get the old C-based mex files working; just rewrite them in Matlab. This supersedes issue #3.
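A minimal sketch of DOM traversal with MATLAB's built-in xmlread. The file name and the element/attribute names (`Matrix`, `name`) are assumptions about the XML output format, not confirmed:

```matlab
% Sketch: extract contents from an XML results file via the DOM.
doc  = xmlread('results.xml');                 % returns a Java DOM Document
mats = doc.getElementsByTagName('Matrix');
for k = 0:mats.getLength() - 1                 % DOM node lists are 0-indexed
    m    = mats.item(k);
    name = char(m.getAttribute('name'));
    vals = sscanf(char(m.getTextContent()), '%f');  % numeric payload
    fprintf('%s: %d values\n', name, numel(vals));
end
```

Since the DOM objects are Java, remember to convert Java strings with `char` before using them in MATLAB string operations.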