
supereeg's Issues

refactor Model to use correlation matrices directly

replace the numerator/denominator approach with one that uses (z-transformed) correlation matrices directly

  • when new data come in, simply compute the correlation matrix and (if needed) blur it out to the indicated locations
  • update: multiply each object by n_subs, then divide by the sum of n_subs. if the locations differ, first blur both matrices out to the union of the locations
  • get_model: compute the inverse of the fisher z-transform (_z2r)
  • sub: for a - b, multiply b's correlation matrix by -1 and run a.update(b)
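The update/sub bullets above can be sketched as follows. This is a minimal sketch, not the package's actual API: the function names and the (z_matrix, n_subs) representation are assumptions.

```python
import numpy as np

def _z2r(z):
    # inverse Fisher z-transform: z -> correlation (for get_model)
    return np.tanh(z)

def update(a_z, a_n, b_z, b_n):
    """Combine two models: weight each z-matrix by its n_subs,
    then divide by the sum of n_subs."""
    total_n = a_n + b_n
    return (a_z * a_n + b_z * b_n) / total_n, total_n

def sub(a_z, a_n, b_z, b_n):
    """a - b: negate b's weighted contribution and update."""
    total_n = a_n - b_n
    return (a_z * a_n - b_z * b_n) / total_n, total_n
```

With this representation, get_model just returns `_z2r(z_matrix)`, and subtracting a previously-added model exactly recovers the original weighted average.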

test reconstructions on synthetic data

if the numerator/denominator approach reconstructs the synthetic data well, we can stick with it; otherwise we should switch to the "pyFR" approach (just average the correlation matrices across subjects)
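The "pyFR" fallback is a straight per-subject average; a sketch, assuming the averaging is done in Fisher z-space (which keeps the average a valid correlation):

```python
import numpy as np

def average_corrmats(corrmats):
    """Average a list of subjects' correlation matrices by converting
    each to Fisher z, taking the mean across subjects, and converting
    back to correlation space."""
    zs = np.arctanh(np.stack(corrmats))   # r -> z, shape (n_subs, k, k)
    return np.tanh(zs.mean(axis=0))       # mean z -> r
```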

use logs instead of raw values to prevent small number errors

  • change rbf --> log_rbf
  • save log of numerator and denominator
  • full model = np.exp(numerator - denominator); we also need to change everywhere there's an np.divide (I think this has now been consolidated into one function)
  • add and update can be supported via logsumexp of the numerators and denominators (scipy.special.logsumexp; numpy itself has no logsumexp)
  • subtract can be supported by logsumexp as well, by multiplying the numerator and denominator of the to-be-subtracted model by -1
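The add/subtract bullets above can be sketched with scipy.special.logsumexp, whose `b` weights and `return_sign` arguments handle the -1 needed for subtraction (the log of a negative value is otherwise undefined). Function and variable names here are illustrative:

```python
import numpy as np
from scipy.special import logsumexp

def combine(log_a, log_b, sign_b=1.0):
    """Elementwise log(a + sign_b * b) given log_a and log_b.
    sign_b=-1.0 implements the subtract case: logsumexp's `b`
    weights carry the -1, and `return_sign` reports the sign of
    the (possibly negative) result."""
    stacked = np.stack([log_a, log_b])
    weights = np.stack([np.ones_like(log_a),
                        np.full_like(log_b, sign_b)])
    return logsumexp(stacked, axis=0, b=weights, return_sign=True)
```

The same call works for both the log-numerators and the log-denominators; the full model is then recovered with np.exp(numerator - denominator).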

fix Model.__init__

the handling of locations isn't correct; we need to work through the logic and update it. i added some comments directly to the code.

expand_corrmat

Questions:

  • Why is there a separate _fit and _predict function?
  • We can see where the old and new locations match by looking at where the weights == 0
  • Need clearer variable names to make it clear which are the old vs. new locations

Given m by m "old" correlation matrix and n by n "new" correlation matrix:

When the old and new locations match, assign the corresponding entries of new to those values in old. This can be done efficiently by creating subsets of matching (old_match_inds, new_match_inds) and non-matching (new_inds) locations, and then:

  • setting new[np.ix_(new_match_inds, new_match_inds)] = old[np.ix_(old_match_inds, old_match_inds)] (np.ix_ selects the full sub-block in one step; chained indexing like new[new_match_inds, ][new_match_inds] would select rows twice). Log numerator: values from logZ. Log denominator: all zeros.
  • for new[np.ix_(new_inds, new_inds)] we need to blur out old (considering all locations) to the corresponding locations in new. The weights are given by the non-matching rows/columns of log_rbf_weights (note: the ~old_match_inds / ~new_match_inds form only works if these are boolean masks, not integer index arrays). Log numerator: values + weights. Log denominator: weights.
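The index bookkeeping above can be sketched as follows. This is a sketch, assuming exact coordinate matches between old and new locations; the function and variable names are illustrative, not the actual expand_corrmat implementation:

```python
import numpy as np

def match_locations(old_locs, new_locs):
    """Indices of exactly-matching coordinate rows in old_locs and
    new_locs, plus the new locations that have no match."""
    eq = (new_locs[:, None, :] == old_locs[None, :, :]).all(axis=-1)
    new_match_inds, old_match_inds = np.nonzero(eq)
    new_inds = np.setdiff1d(np.arange(len(new_locs)), new_match_inds)
    return old_match_inds, new_match_inds, new_inds

def copy_matching_block(old_z, new_z, old_match_inds, new_match_inds):
    """Assign the matched sub-block of old into new in one step,
    using np.ix_ to index rows and columns simultaneously."""
    out = new_z.copy()
    out[np.ix_(new_match_inds, new_match_inds)] = \
        old_z[np.ix_(old_match_inds, old_match_inds)]
    return out
```

The remaining new[np.ix_(new_inds, new_inds)] block would then be filled by the log-RBF-weighted blur described above.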
