using previous versions of hypertools, the demo "gif" can be reproduced as follows:

```python
import hypertools as hyp
import timecorr as tc
from scipy.interpolate import pchip
import numpy as np

def resample(a, n):
    # upsample each column of a to n timepoints via monotonic cubic (PCHIP) interpolation
    b = np.zeros([n, a.shape[1]])
    x = np.linspace(1, a.shape[0], num=a.shape[0], endpoint=True)
    xx = np.linspace(1, a.shape[0], num=n, endpoint=True)
    for i in range(a.shape[1]):
        interp = pchip(x, a[:, i])
        b[:, i] = interp(xx)
    return b

pieman_data = hyp.load('weights').data
smoothed = tc.smooth(pieman_data, kernel_fun=tc.helpers.gaussian_weights, kernel_params={'var': 50})
resampled = [resample(x, 1000) for x in smoothed]
aligned = hyp.align(resampled, align='hyper')
hyp.plot(aligned, align=True, reduce='UMAP', animate=True, duration=30, tail_duration=4.0, zoom=0.5, save_path='pieman.mov')
```
A similar approach should work with the revamped version, but it doesn't seem to work well in practice:

```python
import hypertools as hyp

data = hyp.load('weights')
manip = [{'model': 'Smooth', 'args': [], 'kwargs': {'kernel_width': 25}},
         {'model': 'Resample', 'args': [], 'kwargs': {'n_samples': 1000}},
         'ZScore']
hyperalign = {'model': 'HyperAlign', 'args': [], 'kwargs': {'n_iter': 2}}
hyp.plot(data, manip=manip, align=hyperalign, animate='window', reduce='UMAP', duration=30, focused=4, zoom=0.5)
```
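for reference, here's a minimal numpy sketch (not hypertools code) of what the `'ZScore'` step is intended to do: standardize each feature (column) to zero mean and unit variance.

```python
import numpy as np

# minimal illustration (not hypertools code) of z-scoring "within feature":
# each column is standardized to mean 0 and standard deviation 1
rng = np.random.default_rng(0)
X = rng.normal(loc=[10.0, -5.0], scale=[2.0, 0.5], size=(500, 2))

zscored = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(zscored.mean(axis=0), 0.0))  # True: columns centered
print(np.allclose(zscored.std(axis=0), 1.0))   # True: columns have unit variance
```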
there are at least a few differences that i notice:
- prior versions of hypertools normalized data by default; this no longer happens. in the above example, i've specified that the data should be z-scored (within feature) prior to passing to UMAP, but i need to double check that this preprocessing is analogous to the prior version's. e.g., should `'ZScore'` be replaced with `'Normalize'` and/or some other preprocessing step?
- the first demo uses a Gaussian kernel (variance = 50) to smooth the data, whereas the second example uses a boxcar kernel (width=25). i can't imagine that this would substantially change the results...but you never know...
- i don't think the normalization step gets applied in the old `hyp.align` function, but it's possible the previous demo normalized the data twice (once prior to the first alignment step, and again prior to the second alignment/projection-into-3D step). if so, the second demo could be updated to normalize multiple times.
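on the kernel question in the second point above, the two kernels actually have nearly the same spread: a Gaussian with variance 50 has standard deviation sqrt(50) ≈ 7.07 timepoints, and a 25-sample boxcar has standard deviation sqrt((25² - 1)/12) ≈ 7.21. a back-of-the-envelope sketch (not hypertools/timecorr code):

```python
import numpy as np

# compare the spread of a Gaussian kernel (variance 50) to a boxcar (width 25),
# both normalized to sum to 1, over lags t around a central timepoint
t = np.arange(-60, 61).astype(float)

gauss = np.exp(-t**2 / (2 * 50.0))     # Gaussian, var = 50
gauss /= gauss.sum()

box = (np.abs(t) <= 12).astype(float)  # boxcar, 25 consecutive timepoints
box /= box.sum()

sd_gauss = np.sqrt((gauss * t**2).sum())  # ~ sqrt(50) ≈ 7.07
sd_box = np.sqrt((box * t**2).sum())      # ~ sqrt((25**2 - 1) / 12) ≈ 7.21
```

so the two smoothers have comparable effective widths, which supports the guess that this difference shouldn't substantially change the results.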
it's also possible that something is off with the revised hyperalignment implementation, even though the alignment tests are passing...
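one way to sanity-check the core alignment math independently of the hypertools implementation is to verify that the Procrustes step hyperalignment iterates can recover a known rotation (a hypothetical standalone check using scipy, not hypertools code):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# if one dataset is an orthogonally rotated copy of another, the Procrustes
# step underlying hyperalignment should recover the original almost exactly
rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 10))

# construct a random orthogonal rotation and apply it
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
rotated = ref @ Q

# solve for the orthogonal transform mapping rotated back onto ref
R, _ = orthogonal_procrustes(rotated, ref)
recovered = rotated @ R

print(np.allclose(recovered, ref, atol=1e-6))  # True: rotation undone
```

if a check like this passes but the full pipeline still looks wrong, the problem is more likely in the iteration/averaging logic or in how the manipulations interact with alignment than in the Procrustes step itself.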