
pyddm's People

Contributors

bpesquet, cherepaha, covertg, mwshinn, normanlam1217, ntardiff


pyddm's Issues

set disp=False or disp=verbose when using `differential_evolution` method

First, thanks a lot for this awesome package! 🏆

I'm trying to use it in a Jupyter notebook, but the textual outputs make it hard to quickly diagnose the final results and plots.

I tried verbose=False, but scipy still prints output at every fitting step. I think the reason is that scipy.optimize.differential_evolution uses its disp parameter to print the evaluation at every step, and this disp is hardcoded, so it cannot be overridden by verbose or by passing custom fitparams:

x_fit = differential_evolution(_fit_model, constraints, disp=True, **fitparams)

Since disp is already used in that function call, it cannot be overridden by fitparams and the following code raises an error:

model_fit = fit_adjust_model(sample=_SAMPLE, model=model, verbose=False, fitparams={'disp':False})

The text it prints is like this:

differential_evolution step 1: f(x)= -63.8486
differential_evolution step 2: f(x)= -91.0174
differential_evolution step 3: f(x)= -150.359
differential_evolution step 4: f(x)= -152.205
differential_evolution step 5: f(x)= -212.682
differential_evolution step 6: f(x)= -212.682
differential_evolution step 7: f(x)= -221.318
...

It only occurs when differential_evolution is used as the fitting method.

A quick solution would be to remove that hardcoded disp or to set it equal to verbose.
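A minimal sketch of what that could look like at the call site in functions.py (a hypothetical change, not the current code):

# Tie scipy's per-step printing to the existing verbose flag instead of hardcoding disp=True,
# so it no longer conflicts with user-supplied fitparams.
x_fit = differential_evolution(_fit_model, constraints, disp=verbose, **fitparams)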

Error with accuracy coding for biased starting point

Hi! I'm having difficulty with implementing the biased starting point. I'm using the same code as provided in the docs:

import numpy as np
from pyddm import InitialCondition
class ICPointSideBias(InitialCondition):
    name = "A starting point with a left or right bias."
    required_parameters = ["x0"]
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions):
        start = np.round(self.x0/dx)
        # Positive bias for left choices, negative for right choices
        if not conditions['left_is_correct']:
            start = -start
        shift_i = int(start + (len(x)-1)/2)
        assert shift_i >= 0 and shift_i < len(x), "Invalid initial conditions"
        pdf = np.zeros(len(x))
        pdf[shift_i] = 1. # Initial condition at x=self.x0.
        return pdf

But the assertion for invalid initial conditions inevitably fails after a few iterations. Additionally, the docs explain that x0 can be positive or negative and that the sign indicates the side of the bias. However, the example code in the cookbook is written so that x0 is always positive:

model = Model(IC=ICPointSideBias(x0=Fittable(minval=0, maxval=1)),
              dx=.01, dt=.01)

I'm just a bit confused as to how to approach this problem. The model works fine without the bias.
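My best guess at a workaround (an assumption on my part: the assertion fails whenever a sampled x0 ends up outside the bound, e.g. when the bound height is also being fitted) would be to express the starting point as a fraction of the bound height rather than as an absolute position, something like this sketch:

import numpy as np
from pyddm import InitialCondition

class ICPointSideBiasRatio(InitialCondition):
    name = "A starting point expressed as a fraction of the bound height."
    required_parameters = ["x0"]  # x0 in (-1, 1), relative to the bound
    required_conditions = ["left_is_correct"]
    def get_IC(self, x, dx, conditions):
        x0 = self.x0 * np.max(x)  # scale by the bound height at t=0
        # Positive bias for left choices, negative for right choices
        if not conditions['left_is_correct']:
            x0 = -x0
        shift_i = int(round(x0/dx)) + (len(x)-1)//2
        pdf = np.zeros(len(x))
        pdf[shift_i] = 1.  # Initial condition at x = x0 * bound height
        return pdf

That way, something like x0=Fittable(minval=-.99, maxval=.99) can never place the starting point outside the domain.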

GUI doesn't display histograms of different conditions and Realtime appears to have no effect

Python 3.8.8
PyDDM 0.5.1
Matplotlib 3.3.4

When the GUI is launched with ddm.plot.model_gui I get this:

[Screenshot of the model GUI window omitted.]

I assume that clicking any radio button on the left-hand side besides All and then clicking Update is supposed to show the histogram of the data belonging to that condition, rather than the histogram of the entire dataset, but the histogram never updates, irrespective of the radio button.

Moreover, the Real-time checkbox doesn't appear to be functional: I still have to click Update to see differences in the curve when moving the sliders on the right-hand side. The sliders themselves, Update, and Reset work as expected.

Seed option in evolution strategy for reporting stability.

In functions.py line 403, random.normal() and random.random() are called during the evolution_strategy.

I am using pyDDM for a paper and am wrestling with a few unstable parameters. It would really help if these two steps could be optionally seeded for solution stability. Would you be comfortable if I proposed a PR for this?
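In the meantime, a rough workaround I'm using (assuming those calls go through numpy's global random state) is to seed it once before fitting:

import numpy as np
np.random.seed(12345)  # makes the random.normal()/random.random() draws reproducible
# ...then run the fit as usual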

Noise in documentation

Thank you for an amazing piece of software for DDM modeling.

Should the get_noise method of the Noise class return the standard deviation or the variance of the noise? In places the documentation makes it sound like it should return the variance, but the methods section of your preprint makes it sound like the standard deviation.

Thanks for any clarification, and for your time.

solve_all_conditions condition parameter not used

LossFunction.cache_by_conditions calls solve_all_conditions to compute a Solution for each condition. However, the conditions argument is not being used; instead, the function uses model.required_conditions to determine the required conditions. Is this a bug or is this expected behavior? If it's expected, then removing the conditions argument would be good so that it's not misleading.

source: model.required_conditions

Solving fails using Gamma non-decision time distribution

Solving for the pdf fails with a model that uses a Gamma distribution for the non-decision time. Here is the error message, with minimal code to reproduce it below:

ExitConditionsError: Ensures statement 'np.sum(return.corr) <= np.sum(solution.corr)' failed in OverlayNonDecisionGamma.apply
params: {'self': OverlayNonDecisionGamma(nondectime=0, shape=5, scale=0.1), 'solution': <ddm.solution.Solution object at 0x00000225BCB39048>, 'return': <ddm.solution.Solution object at 0x00000225BC032710>}

from ddm import Model
from ddm.models import NoiseConstant, BoundConstant, OverlayChain, OverlayNonDecisionGamma, OverlayPoissonMixture
import ddm.models

class DriftCoherence(ddm.models.Drift):
    name = "Drift depends linearly on coherence"
    required_parameters = ["driftcoh"]  # <-- Parameters we want to include in the model
    required_conditions = ["coh"]  # <-- Task parameters ("conditions"). Should be the same name as in the sample.

    # We must always define the get_drift function, which is used to compute the instantaneous value of drift.
    def get_drift(self, conditions, **kwargs):
        return self.driftcoh * conditions['coh']

drift = 9.949529598148764
B = 1  # 0.9219102973749433
shape = 5
scale = .1
T_dur = 4

NonDecisionModel = OverlayNonDecisionGamma(nondectime=0, shape=shape, scale=scale)

model = Model(name='drift varies with coherence and gamma nondecisiontime',
              drift=DriftCoherence(driftcoh=drift),
              noise=NoiseConstant(noise=1),
              bound=BoundConstant(B=B),
              overlay=OverlayChain(overlays=[NonDecisionModel,
                                             OverlayPoissonMixture(pmixturecoef=.02,
                                                                   rate=.5)]),
              dx=.01, dt=.01, T_dur=T_dur)

model.T_dur = 20  # to ensure we have no (or very few) undecided trials

dd = {"coh": 0.24628543028830302}
sol = model.solve(conditions=dd)

Can we get Hessian matrix?

Is it possible to get the Hessian matrix from fitted pyddm objects?

# how to get Hessian matrix from a model like this one (adapted from docs)?
fit_model_rs = fit_adjust_model(sample=roitman_sample, model=model_rs, fitting_method="basin")

Appreciate any help anyone can provide. Thanks!
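For context, the kind of thing I have in mind is a finite-difference approximation of the Hessian of the fitting loss around the fitted parameters, something like this sketch (my own workaround, assuming your pyddm version provides pyddm.functions.get_model_loss() as well as Model.get_model_parameters()/set_model_parameters()):

import numpy as np
from pyddm.functions import get_model_loss
from pyddm.models.loss import LossLikelihood

def model_loss(params, model, sample):
    # Evaluate the loss (negative log-likelihood for LossLikelihood) at a parameter vector.
    model.set_model_parameters(params)
    return get_model_loss(model=model, sample=sample, lossfunction=LossLikelihood)

def numerical_hessian(model, sample, eps=1e-4):
    # Central finite differences of the loss with respect to each parameter pair.
    p0 = np.asarray([float(p) for p in model.get_model_parameters()])
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pij = [p0.copy() for _ in range(4)]
            pij[0][i] += eps; pij[0][j] += eps
            pij[1][i] += eps; pij[1][j] -= eps
            pij[2][i] -= eps; pij[2][j] += eps
            pij[3][i] -= eps; pij[3][j] -= eps
            fpp, fpm, fmp, fmm = (model_loss(p, model, sample) for p in pij)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps**2)
    model.set_model_parameters(p0)  # restore the fitted values
    return H

# H = numerical_hessian(fit_model_rs, roitman_sample)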

error with analytical solver with collapsing bound and noise > 1

When I try to run a DDM with noise > 1 and a linear collapsing bound using the analytical solver, I get errors such as the following (see below).

I think it has something to do with the collapsing bound, since it doesn't happen with the same model when I set the collapse slope to 0. It happens at the outset of running the model. It does not happen with the numerical solver.

I am probably just going to work around this at the moment, but I wanted to note the issue.

Traceback (most recent call last):
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/multiprocess/pool.py", line 48, in mapstar
    return list(map(*args))
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/paranoid_scientist-0.2.2-py3.9.egg/paranoid/decorators.py", line 114, in _decorated
    return func(*args, **kwargs)
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/model.py", line 510, in solve
    return self.solve_analytical(conditions=conditions)
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/paranoid_scientist-0.2.2-py3.9.egg/paranoid/decorators.py", line 114, in _decorated
    return func(*args, **kwargs)
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/model.py", line 544, in solve_analytical
    anal_pdf_corr, anal_pdf_err = analytic_ddm(self.get_dependence("drift").get_drift(t=0, conditions=conditions),
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/analytic.py", line 93, in analytic_ddm
    dist_cor = analytic_ddm_linbound(b_upper, -drift+b_slope, -b_lower, -drift-b_slope, teval_valid)
  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/analytic.py", line 41, in analytic_ddm_linbound
    np.exp(tmp*(n+1)*(n*a1-(n+1)*a2))*((2*n+1)*a1-2*(n+1)*a2)
FloatingPointError: overflow encountered in exp
"""


The above exception was the direct cause of the following exception:

Traceback (most recent call last):

  File "/Users/nathan/Dropbox/Goldlab/correlated/doFit_realsubj.py", line 187, in <module>
    main()

  File "/Users/nathan/Dropbox/Goldlab/correlated/doFit_realsubj.py", line 169, in main
    model_fit = gdw.run_model(sample,model=model,subj=subj,sess=sess,it=it,

  File "/Users/nathan/Dropbox/Goldlab/correlated/gddmwrapper/base.py", line 110, in run_model
    model_fit = fit_adjust_model(sample=sample, model=model,**kwargs)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/functions.py", line 347, in fit_adjust_model
    x_fit = differential_evolution(_fit_model, constraints, **fitparams)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/scipy/optimize/_differentialevolution.py", line 392, in differential_evolution
    ret = solver.solve()

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/scipy/optimize/_differentialevolution.py", line 984, in solve
    self._calculate_population_energies(

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/scipy/optimize/_differentialevolution.py", line 1116, in _calculate_population_energies
    calc_energies = list(

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/scipy/_lib/_util.py", line 407, in __call__
    return self.f(x, *self.args)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/functions.py", line 329, in _fit_model
    lossf = lf.loss(m)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/paranoid_scientist-0.2.2-py3.9.egg/paranoid/decorators.py", line 114, in _decorated
    return func(*args, **kwargs)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/models/loss.py", line 170, in loss
    sols = self.cache_by_conditions(model)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/models/loss.py", line 96, in cache_by_conditions
    return solve_all_conditions(model, self.sample, conditions=self.required_conditions, method=self.method)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/pyddm-0.5.2-py3.9.egg/ddm/functions.py", line 484, in solve_all_conditions
    sols = _parallel_pool.map(meth, conds, chunksize=1)

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/multiprocess/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()

  File "/Users/nathan/opt/miniconda3-x86/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
    raise self._value

FloatingPointError: overflow encountered in exp

Where can I declare x?

Can you let me know how I can declare the x domain? There are only parameters for dt, dx, and T, but not for x.

Sample.__eq__ errors

I recently discovered that equality comparison of two Samples via == could raise a ValueError when the Samples are of different sizes (in particular, if the sample_corrs and/or sample_errs are of a different size). E.g.:

import numpy as np
import pyddm
s1 = pyddm.Sample(sample_corr=np.array([0, 1]), sample_err=np.array([]))
s2 = pyddm.Sample(sample_corr=np.array([0, 1, 2]), sample_err=np.array([]))
s1 == s2
ValueError: operands could not be broadcast together with shapes (2,) (3,)

This appears to be due to Sample.__eq__'s use of np.allclose() on the input arrays. If the arrays can be broadcast together, no error is raised; otherwise we get a ValueError.
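A minimal sketch of a shape-safe comparison (my own suggestion, not pyddm's current code):

import numpy as np

def arrays_equal(a, b):
    # Treat arrays of different shapes as unequal instead of letting np.allclose raise.
    a, b = np.asarray(a), np.asarray(b)
    return a.shape == b.shape and np.allclose(a, b)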

Thinking more on this also made me wonder: is the equality operator for Samples invariant to array order? Currently it is not; i.e. we have:

s3 = pyddm.Sample(sample_corr=np.array([0, 1]), sample_err=np.array([]))
s4 = pyddm.Sample(sample_corr=np.array([1, 0]), sample_err=np.array([]))
s3 == s4
False

My prior assumption was that it should be, although perhaps I could be mistaken.

Undecided trial support

Support for undecided trials is currently not fully baked into model fitting and other parts of the codebase. E.g.: model loss functions (can't combine discrete and continuous likelihood); Sample conversion methods (to and from numpy/pandas don't have a set representation for undecided trials, cf. https://pyddm.readthedocs.io/en/stable/cookbook/loss.html#loss-undecided); and so on.

As discussed offline — opening this issue as a reminder and to consolidate further discussion of undecided trials.

access trial-by-trial drift

I am trying to fit trial-by-trial drifts, but I cannot access the estimated drifts.

I followed the steps suggested on the documentation page, but maybe I'm doing something wrong(?):

Create a sample object to be used as input to the model.

    df_sample = Sample.from_pandas_dataframe(df_to_fit, 
                                            rt_column_name="RT", 
                                            choice_column_name="Choice", 
                                            choice_names=("Yes", "No"))

Create a function to prepare the sample for the trial-by-trial drift subclass defined next.

    RESOLUTION = n_tot 
    def prepare_sample_for_variable_drift(orig_sample, resolution=RESOLUTION):
        new_samples = []
        for i in range(0, resolution):
            choice_upper = orig_sample.choice_upper.copy()
            choice_lower = orig_sample.choice_lower.copy()
            undecided = orig_sample.undecided
            conditions = copy.deepcopy(orig_sample.conditions)
            conditions['driftnum'] = (np.asarray([i]*len(choice_upper)),
                                    np.asarray([i]*len(choice_lower)),
                                    np.asarray([i]*undecided))
            new_samples.append(Sample(choice_upper, choice_lower, undecided, choice_names=orig_sample.choice_names, **conditions))
        new_sample = new_samples.pop()
        for s in new_samples:
            new_sample += s
        return new_sample

Create new sample

      new_sample_tbt = prepare_sample_for_variable_drift(df_sample, RESOLUTION)

Define model

    our_drift = DriftUniform(drift=Fittable(minval=-20, maxval=20), width=Fittable(minval=1, maxval=2))
    my_model = Model(name=condition_to_fit,
                    drift=our_drift,
                    noise=NoiseConstant(noise=noise_val),
                    bound=BoundConstant(B=boundaries_val),
                    IC=ICPointRatio(x0=Fittable(minval=-.9, maxval=.9)),
                    overlay=OverlayNonDecision(nondectime=ndt_val),
                    dx=.009,
                    dt=.009,
                    T_dur=rt_hi_abs_thresh,
                    choice_names=("Yes", "No"))

Fitting

    my_model_fit = fit_adjust_model(sample=new_sample_tbt, 
                                    model=my_model, 
                                    fitting_method="differential_evolution",
                                    lossfunction=LossRobustBIC,
                                    verbose=False)

If I try accessing the parameters with

    fitted_params = my_model_fit.get_model_parameters()

I get back a single drift estimate.

Which command should I use? Am I missing something?

Thank you in advance for your help!

IndexError when iterating over Sample.items()

First, thanks for putting together this great package!

Second, I'm not sure if this is a bug or if I'm doing something wrong, but I'm getting an IndexError when iterating over Sample.items().

This code

import pandas as pd
import ddm
import paranoid as pns
pns.settings.Settings.set(enabled=False)

exp_data = pd.read_csv('measures.csv', usecols=['subj_id', 'RT', 'is_turn_decision'])
exp_sample = ddm.Sample.from_pandas_dataframe(df=exp_data, rt_column_name='RT', correct_column_name='is_turn_decision')
rts = [item[0] for item in exp_sample.items(correct=True)]

results in

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-28-b68ea2d791d2> in <module>
----> 1 rts = [item[0] for item in exp_sample.items(correct=True)]

<ipython-input-28-b68ea2d791d2> in <listcomp>(.0)
----> 1 rts = [item[0] for item in exp_sample.items(correct=True)]

C:\ProgramData\Anaconda3\lib\site-packages\ddm\sample.py in __next__(self)
    396             rt = self.sample.err
    397             ind = 1
--> 398         return (rt[self.i-1], {k : self.sample.conditions[k][ind][self.i-1] for k in self.sample.conditions.keys()})
    399 

IndexError: index 350 is out of bounds for axis 0 with size 350

where 350 is the length of my correct RTs, and the data is measures.csv.

Taking a quick look at _Sample_Iter_Wraper.__next__(), it seems like the iterator would only stop at len(self.sample):

if self.i == len(self.sample):
    raise StopIteration

whereas I'd guess it should stop earlier, at len(self.sample.corr) or len(self.sample.err). If not, I am confused and would appreciate a quick hint on how to iterate over correct/error RTs...
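For now I am reading the arrays directly, which I assume is equivalent (treating Sample.corr / Sample.err as the arrays of correct and error RTs):

# Assumed workaround: bypass Sample.items() and read the RT arrays directly.
correct_rts = list(exp_sample.corr)
error_rts = list(exp_sample.err)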

Also, if I don't disable paranoid warnings, I cannot even create a sample from dataframe:

EntryConditionsError: Invalid function requirement 'not np.any(np.isnan(df))' in Sample.from_pandas_dataframe

but that's probably another issue...

Error when using different fitting algorithms

I'm running the example on monkey data here.

If we change the fitting method from default to 'simple', 'simplex', or 'basin' like this:

fit_model_rs = fit_adjust_model(sample=roitman_sample, model=model_rs, verbose=False, method='simple')

It gives the following error:

RuntimeError: The map-like callable must be of the form f(func, iterable), returning a sequence of numbers the same length as 'iterable'

ICGaussian - possibly wrong symmetric x verification

In ic.py line 170, there is a @requires("np.all(np.isclose(x-x[::-1], 0))") # Symmetric across x=y. Since the x grid is symmetric around 0 (so x equals -x[::-1], not x[::-1]), this check fails when attempting to use ICGaussian as the IC parameter in model initialization. It is possible to fix this by changing the verification to @requires("np.all(np.isclose(x+x[::-1], 0))").
We use the following code that produces the error:
import ddm

m = ddm.Model(name='Simple model',
              IC=ddm.ICGaussian(stdev=0.3),
              drift=ddm.DriftConstant(drift=2.2),
              noise=ddm.NoiseConstant(noise=1.5),
              bound=ddm.BoundConstant(B=1.1),
              overlay=ddm.OverlayNonDecision(nondectime=.1),
              dx=.001, dt=.01, T_dur=2)
sol: ddm.Solution = m.solve()

Error running ddm.plot.model_gui()

Hi Max,

Thanks for the very cool looking package!

I am running through the roitman_shadlen.py tutorial now and am running into an error when I run this function:

Line 104: ddm.plot.model_gui(model=fit_model_rs, sample=roitman_sample)


Returns this error:

ddm.plot.model_gui(model=fit_model_rs, sample=roitman_sample)
cond var vals [{'0.0': 0.0, '0.032': 0.032000000000000001, '0.064': 0.064000000000000001, '0.128': 0.128, '0.256': 0.25600000000000001, '0.512': 0.51200000000000001}]
cond vars [<tkinter.StringVar object at 0x000002715063D2B0>]
D:\Anaconda3\lib\site-packages\matplotlib\figure.py:1742: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.
warnings.warn("This figure includes Axes that are not "
Traceback (most recent call last):

File "", line 1, in
ddm.plot.model_gui(model=fit_model_rs, sample=roitman_sample)

File "D:\Anaconda3\lib\site-packages\ddm\plot.py", line 501, in model_gui
set_defaults()

File "D:\Anaconda3\lib\site-packages\ddm\plot.py", line 489, in set_defaults
update()

File "D:\Anaconda3\lib\site-packages\ddm\plot.py", line 402, in update
plot(model=m, fig=fig, sample=sample, conditions=current_conditions, data_dt=data_dt)

File "D:\Anaconda3\lib\site-packages\ddm\plot.py", line 232, in plot_fit_diagnostics
fig.tight_layout()

File "D:\Anaconda3\lib\site-packages\matplotlib\figure.py", line 1752, in tight_layout
rect=rect)

File "D:\Anaconda3\lib\site-packages\matplotlib\tight_layout.py", line 322, in get_tight_layout_figure
max_nrows = max(nrows_list)

ValueError: max() arg is an empty sequence


Initially it was producing a GUI with an error message but this no longer happens.

Kind regards,

Angus

CI failing due to travis changes

Builds are failing due to configuration changes in how Travis deals with scipy/numpy versions, not because of issues with PyDDM itself. Considering Travis is no longer free for open-source projects, it makes more sense to switch to GitHub Actions (or another solution) for CI.

Issue with defining bounds varying linearly with a column value

I am trying to define a bound that varies linearly with a column 'b' in the dataset. The column 'b' contains a value that is a function of the z-scores of the reaction-time data (included in the data frame as column 'z'). However, the code raises an error when run. Link to the dataframe: https://drive.google.com/file/d/1O4Ea-oaqbPR626KvMASk2m5hV09fhoHl/view?usp=sharing
I'm new to Python and PyDDM, and any help is appreciated.
Below is how I defined the drift-rate and bound:

class DriftCoherence(ddm.models.Drift):
    name = "Drift depends linearly on coherence"
    required_parameters = ["driftcoh"] 
    required_conditions = ["coh_list"] 
    
    def get_drift(self, conditions, **kwargs):
        return self.driftcoh * conditions['coh_list']

class adaptBound(ddm.Bound):
    name = "adaptive bounds depends linearly on the bound calculated as a function of z scores of RT"
    required_parameters = ["mbound"]
    required_conditions = ["b"]
    
    def get_bound(self, conditions, **kwargs):
        return self.metabound * conditions['b']

Model construction and fitting go like this:

from ddm import Model, Fittable
from ddm.functions import fit_adjust_model, display_model
from ddm.models import NoiseConstant, OverlayChain, OverlayNonDecision, OverlayPoissonMixture, LossRobustBIC
model_adapt = Model(name='All participants data, drift varies with coherence',
                 drift=DriftCoherence(driftcoh=Fittable(minval=0, maxval=20)),
                 noise=NoiseConstant(noise=1),
                 bound=adaptBound(mbound=Fittable(minval=-4, maxval=4.5)),
                 overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fittable(minval=0, maxval=.4)),
                                                OverlayPoissonMixture(pmixturecoef=.02,
                                                                      rate=1)]),
                 dx=.001, dt=.01, T_dur=2)


fit_model_adapt = fit_adjust_model(sample=all_sample, model=model_adapt, fitting_method="differential_evolution", lossfunction=LossRobustBIC, verbose=False)
display_model(fit_model_adapt)

Can't use string columns in pandas dataset

Unsure whether this is intended, a bug, or too much of an edge case, but a pandas DataFrame cannot be turned into a Sample if it contains string columns.

import pandas as pd
from ddm import Sample


df = pd.DataFrame([
    {"cond": -0.5, "rt": 1.1, "response": 0},
    {"cond": 0., "rt": 2.1, "response": 1},
    {"cond": 0.1, "rt": 1.2, "response": 1},
])
df["extra_col"] = 1
Sample.from_pandas_dataframe(
    df, rt_column_name="rt", correct_column_name="response"
)  # works
df["extra_col"] = "hello"
Sample.from_pandas_dataframe(
    df, rt_column_name="rt", correct_column_name="response"
)  # doesn't work

Produces the error:

AttributeError: 'str' object has no attribute 'rint'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc
    return bound(*args, **kwds)
TypeError: loop of ufunc does not support argument 0 of type str which has no callable rint method

During handling of the above exception, another exception occurred:

AttributeError: 'str' object has no attribute 'rint'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/samuelrobertmathias/PycharmProjects/FrequencyRT/_explore_rts.py", line 17, in <module>
    Sample.from_pandas_dataframe(
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/paranoid/decorators.py", line 126, in _decorated
    returnvalue = func(*args, **kwargs)
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/ddm/sample.py", line 212, in from_pandas_dataframe
    conditions = {k: (pt(df[c][k]), pt(df[nc][k]), np.asarray([])) for k in column_names}
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/ddm/sample.py", line 212, in <dictcomp>
    conditions = {k: (pt(df[c][k]), pt(df[nc][k]), np.asarray([])) for k in column_names}
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/ddm/sample.py", line 207, in pt
    if np.all(arr == np.round(arr)):
  File "<__array_function__ internals>", line 5, in round_
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3637, in round_
    return around(a, decimals=decimals, out=out)
  File "<__array_function__ internals>", line 5, in around
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3262, in around
    return _wrapfunc(a, 'round', decimals=decimals, out=out)
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 67, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/Users/samuelrobertmathias/miniconda3/envs/psychoacoustics/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 44, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: loop of ufunc does not support argument 0 of type str which has no callable rint method
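A possible workaround in the meantime, assuming the string column isn't needed as a model condition, is to drop it (or encode it numerically) before building the Sample:

# Drop the offending string column before constructing the Sample.
df_numeric = df.drop(columns=["extra_col"])
Sample.from_pandas_dataframe(
    df_numeric, rt_column_name="rt", correct_column_name="response"
)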

Documentation inconsistency

In the documentation, it says that, to recover the evolution of the pdf after the Fokker-Planck integration, we should use:

" Model.solve() argument “return_pdf”, which should be set to True".

But solve() is not stated to have "return_pdf" as an argument:

solve(conditions={}, return_evolution=False, force_python=False)

renormalization warnings

Following up on the previous discussion from #21:

Regarding the second issue with lots of renormalization warnings in the analytic solver, this seems a bit more serious. Could you please describe these in more detail in a bug report? I will close this PR, so if you could open a new bug report that would be best instead of reusing this thread.

After digging a little more, I don't think this is specifically about the analytic solver. I found one instance in which the numeric solver also gave frequent renormalization warnings during fitting; this was a case in which I had made the initial bound height of an exponentially collapsing bound a linear function of a parametric condition. A number of very similar models without this feature did not raise any warnings.

When I switched to the analytic solver, I switched from an exponentially-collapsing bound to a linearly-collapsing bound, and it seems that the linearly collapsing bound frequently produces these warnings.

So I am wondering if the issue is less about the solver and more about extreme conditions that become possible with certain types of bounds but not others? Sorry I don't have more insight than this at the moment...

cannot import name 'LossRobustBIC' from 'ddm.models'

Hi, there,

Thanks for sharing your package!
I am trying to run the code in the quickstart guide (https://pyddm.readthedocs.io/en/latest/quickstart.html). When running from ddm.models import LossRobustBIC, I encountered an error:

ImportError: cannot import name 'LossRobustBIC' from 'ddm.models' (/home/user/miniconda3/envs/pyddm/lib/python3.8/site-packages/ddm/models/__init__.py)

Here is my system info:

{'commit_hash': '2922831ac',
 'commit_source': 'installation',
 'default_encoding': 'utf-8',
 'ipython_path': '/home/user/miniconda3/envs/pyddm/lib/python3.8/site-packages/IPython',
 'ipython_version': '7.15.0',
 'os_name': 'posix',
 'platform': 'Linux-5.4.0-37-generic-x86_64-with-glibc2.10',
 'sys_executable': '/home/hcp4715/miniconda3/envs/pyddm/bin/python',
 'sys_platform': 'linux',
 'sys_version': '3.8.3 (default, May 19 2020, 18:47:26) \n[GCC 7.3.0]'}

The installed packages:

backcall==0.2.0
certifi==2020.4.5.2
cycler==0.10.0
decorator==4.4.2
entrypoints==0.3
ipykernel==5.3.0
ipython @ file:///tmp/build/80754af9/ipython_1592348389743/work
ipython-genutils==0.2.0
jedi==0.17.0
jupyter-client==6.1.3
jupyter-core==4.6.3
kiwisolver==1.2.0
matplotlib @ file:///tmp/build/80754af9/matplotlib-base_1592406092505/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy==1.18.1
pandas==1.0.4
paranoid-scientist==0.2.2
parso==0.7.0
pexpect==4.8.0
pickleshare==0.7.5
prompt-toolkit==3.0.5
ptyprocess==0.6.0
pyddm==0.3.0
Pygments==2.6.1
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
pyzmq==18.1.1
scipy==1.4.1
seaborn==0.10.1
sip==4.19.13
six==1.15.0
tornado==6.0.4
traitlets==4.3.3
wcwidth==0.2.4

Invalid pdf in solve_numerical_cn()?

Estimating a model sometimes results in ReturnTypeErrors that appear to arise in solve_numerical_cn() from the pdf having negative entries.
Running a model (BoundConstant) with ICPointSourceCenter works fine, while running it with a custom ICPointSource (which literally copies the code for ICPointSourceCenter from the repo) leads to

ReturnTypeError: Invalid return type of [  4.84549201e-22   4.37823329e-20   1.94597413e-18 ...,   0.00000000e+00
   0.00000000e+00   0.00000000e+00] in Solution.pdf_corr

for some (but not all!) parameters during the fitting procedure.

The reason is that has_analytic_solution() requires ic=ICPointSourceCenter, and solve_numerical_cn() seems to run into issues with the pdf having negative entries (the reason for which does not seem apparent from the code in the repo...).

  1. What is the cause for these ReturnTypeErrors and how can they be avoided?
  2. Is ICPointSourceCenter strictly necessary for an analytical solution, or could the condition be relaxed to a Dirac initial condition that allows for a bias to be estimated?

exception when bound collapses to 0

When using collapsing bound models, exceptions can occur when the bound collapses to 0. I first observed this behavior in a hyperbolic collapsing bound model I was trying to fit, which would raise the following exception:

File "/home/ntardiff/data/miniconda2/envs/py36/lib/python3.6/site-packages/pyddm-0.3.0-py3.6.egg/ddm/model.py", line 662, in solve_numerical_implicit
File "/home/ntardiff/data/miniconda2/envs/py36/lib/python3.6/site-packages/paranoid/decorators.py", line 114, in _decorated
return func(*args, **kwargs)
File "/home/ntardiff/data/miniconda2/envs/py36/lib/python3.6/site-packages/pyddm-0.3.0-py3.6.egg/ddm/model.py", line 576, in solve_numerical
File "/home/ntardiff/data/miniconda2/envs/py36/lib/python3.6/site-packages/paranoid/decorators.py", line 114, in _decorated
return func(*args, **kwargs)
File "/home/ntardiff/data/miniconda2/envs/py36/lib/python3.6/site-packages/pyddm-0.3.0-py3.6.egg/ddm/tridiag.py", line 135, in spsolve
ValueError: unexpected array size: new_size=1, got array with arr_size=0

When trying to solve a model rather than fitting, the exception raised is:

s4 = model4.solve()
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 126, in _decorated
returnvalue = func(*args, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\model.py", line 420, in solve
return self.solve_numerical_implicit(conditions=conditions, return_evolution=return_evolution)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 126, in _decorated
returnvalue = func(*args, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\model.py", line 662, in solve_numerical_implicit
return self.solve_numerical(method="implicit", conditions=conditions, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 126, in _decorated
returnvalue = func(*args, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\model.py", line 576, in solve_numerical
pdf_inner = diffusion_matrix.splice(1,-1).spsolve(pdf_prev[x_index_inner:len(x_list)-x_index_inner])
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\tridiag.py", line 94, in splice
return TriDiagMatrix(diag=self.diag[lower:upper], up=self.up[lower:upper-1], down=self.down[lower:upper-1])
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 125, in _decorated
_check_requires(func, argvals)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 49, in _check_requires
raise E.EntryConditionsError("Function requirement '%s' failed in %s\nparams: %s" % (requirementtext, func.qualname, str(argvals)))
EntryConditionsError: Function requirement 'len(up) > 0' failed in TriDiagMatrix.init
params: {'self': <ddm.tridiag.TriDiagMatrix object at 0x0000026F36DFC6D8>, 'diag': array([10001.]), 'up': array([], dtype=float64), 'down': array([], dtype=float64)}

I also confirmed that this same error can occur with BoundCollapsingStep when solving. I had no issues when fitting with BoundCollapsingExponential, and I also couldn't reproduce the error when solving with BoundCollapsingLinear, though I only made a cursory attempt on that one.

It appears that the problem is averted as long as the bound is >= dx. E.g., with dx=.001, allowing the bound to collapse to .0009 fails, but .001 is OK [i.e., returning max(boundfunc, .001) from get_bound()].
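Concretely, the clamp looks something like this inside my bound class (a sketch only; the hyperbolic form and the B/tau parameters are just illustrative):

def get_bound(self, t, conditions, **kwargs):
    boundfunc = self.B / (1 + self.tau * t)  # hyperbolic collapse (illustrative)
    return max(boundfunc, 0.001)             # keep the bound >= dx (dx = .001 here)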

model.solve() error for model variant with DriftCoherence

Hi Max and colleagues:
Thank you so much for this package! I was able to fit a few model variants, all in which drift varies linearly with coherence (class DriftCoherence), combined with either a starting-point bias (ICPointSideBiasRatio) and/or a drift bias (DriftCoherenceRewBias). Getting the outputs, log likelihood, and parameter estimates has been great! I would also like to generate plots of the model predictions, i.e. the psychometric and chronometric fits. However, I run into trouble when trying to get the model predictions with sol = model.solve(). The sample you provide works for me (simple.py from your quickstart link https://pyddm.readthedocs.io/en/stable/quickstart.html), but when I replace the model.txt file with my own output files (for instance Mae_s_1_c_1.csv._base_model_output.txt), I get the error below. Any help would be appreciated; I have very little experience in Python.

runfile('/Users/horgalab/Documents/Python/simpleE.py', wdir='/Users/horgalab/Documents/Python')
Traceback (most recent call last):

File /Applications/Spyder.app/Contents/Resources/lib/python3.9/spyder_kernels/py3compat.py:356 in compat_exec
exec(code, globals, locals)

File ~/Documents/Python/simpleE.py:75
sol = model_loaded.solve(conditions=cond_all)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/paranoid/decorators.py:114 in _decorated
return func(*args, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyddm/model.py:512 in solve
return self.solve_analytical(conditions=conditions)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/paranoid/decorators.py:114 in _decorated
return func(*args, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyddm/model.py:552 in solve_analytical
anal_pdf_corr, anal_pdf_err = analytic_ddm(self.get_dependence("drift").get_drift(t=0, x=0, conditions=conditions),

File ~/Documents/Python/simpleE.py:32 in get_drift
return self.driftcoh * conditions['coh']

TypeError: can't multiply sequence by non-int of type 'Fitted'

File /Applications/Spyder.app/Contents/Resources/lib/python3.9/spyder_kernels/py3compat.py:356 in compat_exec
exec(code, globals, locals)

File ~/Documents/Python/simpleE_ours.py:81
sol = model_loaded.solve_numerical_implicit(conditions=conditions)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/paranoid/decorators.py:114 in _decorated
return func(*args, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyddm/model.py:861 in solve_numerical_implicit
return self.solve_numerical(method="implicit", conditions=conditions, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/paranoid/decorators.py:114 in _decorated
return func(*args, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyddm/model.py:744 in solve_numerical
drift_matrix = self.get_dependence('drift').get_matrix(x=x_list_inbounds, t=t, dt=self.dt, dx=self.dx, conditions=conditions)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/paranoid/decorators.py:114 in _decorated
return func(*args, **kwargs)

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pyddm/models/drift.py:46 in get_matrix
drift = self.get_drift(x=x, t=t, dx=dx, dt=dt, conditions=conditions, **kwargs)

File ~/Documents/Python/simpleE_ours.py:26 in get_drift
return float(self.driftcoh) * conditions['coh']

TypeError: can't multiply sequence by non-int of type 'float'

Please find below our script simpleE.py:

# Simple demonstration of PyDDM.
from pyddm import Sample
import pandas

# Create a simple model with constant drift, noise, and bounds.
from pyddm import Model
from pyddm.models import DriftConstant, NoiseConstant, BoundConstant, OverlayNonDecision, OverlayChain, ICPointSourceCenter, OverlayPoissonMixture, LossRobustLikelihood
from pyddm.functions import fit_adjust_model, display_model
import pyddm as ddm

# Solve the model, i.e. simulate the differential equations to
# generate a probability distribution solution.
#display_model(model)
#sol = model.solve()

# Now, sample from the model solution to create a new generated
# sample.
#samp = sol.resample(1000)

# Fit a model identical to the one described above on the newly
# generated data so show that parameters can be recovered.
from pyddm import Fittable, Fitted
from pyddm.models import LossRobustBIC
from pyddm.functions import fit_adjust_model

class DriftCoherence(ddm.models.Drift):
    name = "Drift depends linearly on coherence"
    required_parameters = ["driftcoh"]  # <-- Parameters we want to include in the model
    required_conditions = ["coh"]  # <-- Task parameters ("conditions"). Should be the same name as in the sample.

    # We must always define the get_drift function, which is used to compute the instantaneous value of drift.
    def get_drift(self, conditions, **kwargs):
        return self.driftcoh * conditions['coh']

model_base = Model(name='Roitman data, drift varies with coherence',
                   drift=DriftCoherence(driftcoh=Fittable(minval=0, maxval=20)),
                   noise=NoiseConstant(noise=1),
                   bound=BoundConstant(B=Fittable(minval=.1, maxval=3)),
                   overlay=OverlayChain(overlays=[OverlayNonDecision(nondectime=Fittable(minval=0, maxval=2)),
                                                  OverlayPoissonMixture(pmixturecoef=.02,
                                                                        rate=1)]),
                   dx=.01, dt=.0005, T_dur=2.999)

with open("roitman_rts.csv", "r") as f:
    df_rt = pandas.read_csv(f)

#df_rt = df_rt[df_rt["monkey"] == 1] # Only monkey 1

# Remove short and long RTs, as in 10.1523/JNEUROSCI.4684-04.2005.
# This is not strictly necessary, but is performed here for
# compatibility with this study.
df_rt = df_rt[df_rt["rt"] > .1] # Remove trials less than 100ms
df_rt = df_rt[df_rt["rt"] < 1.65] # Remove trials greater than 1650ms

# Create a sample object from our data. This is the standard input
# format for fitting procedures. Since RT and correct/error are
# both mandatory columns, their names are specified by command line
# arguments.
roitman_sample = Sample.from_pandas_dataframe(df_rt, rt_column_name="rt", correct_column_name="correct")

cond_all = {"coh": [0.0, 0.005, 0.010, 0.015, 0.020, 0.025, 0.030, 0.035, 0.040, 0.045, 0.050, 0.055, 0.060, 0.065, 0.070,
                    0.075, 0.080, 0.085, 0.090, 0.095, 0.100, 0.105, 0.110, 0.115, 0.120, 0.125, 0.130, 0.135, 0.140,
                    0.145, 0.150, 0.155, 0.160, 0.165, 0.170, 0.175, 0.180, 0.185, 0.190, 0.195, 0.200, 0.205, 0.210,
                    0.215, 0.220, 0.225, 0.230, 0.235, 0.240, 0.245, 0.250, 0.255, 0.260, 0.265, 0.270, 0.275, 0.280,
                    0.285, 0.290, 0.295, 0.300],
            "left_is_correct": [0, 1]}

# Load the model
from pyddm import FitResult
with open("mae_s_1_c_1.csv_base_model_output.txt", "r") as f:
    model_loaded = eval(f.read())
#with open("model.txt", "r") as f:
#    model_loaded = eval(f.read())

#sol = model_loaded.solve()
sol = model_loaded.solve(conditions=cond_all)

print(sol.prob_correct())
print(sol.pdf_err())

# Plot the model fit to the PDFs and save the file.
import pyddm.plot
import matplotlib.pyplot as plt
pyddm.plot.plot_fit_diagnostics(model=model_loaded, sample=roitman_sample)
plt.savefig("simple-fit.png")
plt.show()

mae_s_1_c_1.csv_base_model_output.txt

error in Sample.subset()

Hi again,

Sorry to be bugging you again so soon. I just upgraded to the most recent version from GitHub and am now encountering the following error:

model_fit = fit_adjust_model(sample=sample, model=model)

File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\functions.py", line 310, in fit_adjust_model
nparams=len(params), samplesize=len(sample))
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\models\loss.py", line 56, in init
self.setup(**kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\models\loss.py", line 153, in setup
for comb in self.sample.condition_combinations(required_conditions=self.required_conditions):
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 114, in _decorated
return func(*args, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\sample.py", line 320, in condition_combinations
if len(self.subset(**dict(zip(names, p)))) != 0:
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\paranoid\decorators.py", line 114, in _decorated
return func(*args, **kwargs)
File "C:\Users\ntard\miniconda3\envs\py36\lib\site-packages\pyddm-0.3.0-py3.6.egg\ddm\sample.py", line 269, in subset
v[2][mask_undec] if len(v) == 3 else np.asarray([])

TypeError: only integer scalar arrays can be converted to a scalar index

Large number of warnings generated from Quickstart examples

Running the second example in the Quick start guide (copying exactly, but adding verbose=False):

from ddm import Model
from ddm.models import DriftConstant, NoiseConstant, BoundConstant, OverlayNonDecision
from ddm.functions import fit_adjust_model, display_model

model = Model(name='Simple model',
              drift=DriftConstant(drift=2.2),
              noise=NoiseConstant(noise=1.5),
              bound=BoundConstant(B=1.1),
              overlay=OverlayNonDecision(nondectime=.1),
              dx=.001, dt=.01, T_dur=2)
display_model(model)
sol = model.solve()

samp = sol.resample(1000)

from ddm import Fittable
from ddm.models import LossRobustBIC
from ddm.functions import fit_adjust_model
model_fit = Model(name='Simple model (fitted)',
                  drift=DriftConstant(drift=Fittable(minval=0, maxval=4)),
                  noise=NoiseConstant(noise=Fittable(minval=.5, maxval=4)),
                  bound=BoundConstant(B=1.1),
                  overlay=OverlayNonDecision(nondectime=Fittable(minval=0, maxval=1)),
                  dx=.001, dt=.01, T_dur=2)

fit_adjust_model(samp, model_fit,
                 fitting_method="differential_evolution",
                 lossfunction=LossRobustBIC, verbose=False)

Generates a large number (> 200) of warnings with the following text:

Warning: infinite likelihood encountered. Please either use a Robust likelihood method (e.g. LossRobustLikelihood or LossRobustBIC) or even better use a mixture model (via an Overlay) which covers the full range of simulated times to avoid infinite negative log likelihood. See the FAQs in the documentation for more information.

Two things about this are surprising:

  1. It's unexpected to get so much negative feedback from running one of the provided examples
  2. The warning instructs me to use LossRobustBIC, but I believe that is what the code is already doing.

Additionally, instead of using a simple print() here, would it make sense to use warnings.warn or logging with a WARNING level so it's easier to filter?
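For example, something along these lines (a sketch of the suggestion, not pyddm's current code):

import warnings
warnings.warn("Infinite likelihood encountered. Please either use a robust "
              "likelihood method (e.g. LossRobustLikelihood or LossRobustBIC) "
              "or, even better, a mixture model (via an Overlay) which covers "
              "the full range of simulated times.", RuntimeWarning)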

AttributeError: 'Solution' object has no attribute 'pdf'

I'm trying to run the simple example

import matplotlib.pyplot as plt
from pyddm import Model
m = Model()
s = m.solve()
plt.plot(s.model.t_domain(), s.pdf("correct"))
plt.savefig("helloworld.png")
plt.show()

but I keep getting:
AttributeError: 'Solution' object has no attribute 'pdf'
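My guess (an assumption on my part) is a version mismatch: older PyDDM releases expose the per-choice densities as pdf_corr()/pdf_err() rather than pdf("correct"). On those versions the equivalent call would be:

plt.plot(s.model.t_domain(), s.pdf_corr())  # older-API equivalent of s.pdf("correct")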

Discrepancy between pyDDM solution and simulated trials.

Hi,

I am using pyDDM as a backend for some other project.
When I was generating trial data (with model.simulated_solution) it appeared that the data distribution is quite different from the density of the solved model.
I was wondering if this is just a precision issue of the solver, if I am doing something wrong or if this is a more severe problem.

I show a working example in the code below, just for the correct trials, but the same holds true for the error trials. Also for other parameters, it seems that there is a systematic shift of the solution compared to the simulations.

Happy to get some comments on this!
Thanks!

[Plot omitted: the simulated RT density ("pdf data") vs. the solved model's density ("pyddm solution"), showing a systematic shift.]

model_true = Model(name='model true',
                  drift= DriftLinear(drift=1.2,t=0, x=-10),
                  noise=NoiseConstant(noise=1),
                  bound=BoundCollapsingExponential(B=0.8, tau=0.8),
                  overlay=OverlayNonDecision(nondectime=0.2),
                  IC = ICPointSourceCenter(),
                  dx=.001, dt=.005, T_dur=4)


# generate data
samples = model_true.simulated_solution(seed=0,size=1_000) 
# size = 50_000 was used for the plot above

# solve the model
sol_true = model_true.solve()

## plotting
fig = plt.figure()

plt.title("reaction time distribution")
plt.plot(samples.t_domain(), samples.pdf_corr(), label="pdf data")
plt.plot(sol_true.model.t_domain(),sol_true.pdf_corr(), label="pyddm solution")


plt.xlim(0,2)
plt.xlabel("sec")
plt.ylabel("density")

plt.legend()

fig.patch.set_facecolor('white')

ICPoint.get_IC()

ICPoint.get_IC() is missing (*args, **kwargs), which appears to cause problems with scipy's _differentialevolution.py.

Estimate parameters for several within-subject conditions

Thank you for putting together this great code!

My question is related to #75, but I did not find an answer there.

I want to compare the model parameters under two conditions: drug and saline, coded as 0 and 1. I see in the tutorial that you can add conditions[] in get_drift(). However, I'm not sure how you retrieve the fitted parameters for both conditions in such a case.
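For concreteness, the kind of pattern I mean is one fittable parameter per condition, selected inside get_drift() (my own sketch, not something from the docs):

import pyddm

class DriftPerCondition(pyddm.models.Drift):
    name = "Separate drift for saline vs. drug"
    required_parameters = ["drift_saline", "drift_drug"]
    required_conditions = ["drug"]  # 0 = saline, 1 = drug
    def get_drift(self, conditions, **kwargs):
        return self.drift_drug if conditions["drug"] == 1 else self.drift_saline

After fitting, get_model_parameters() would then return both estimates, but I'm not sure whether this is the recommended approach.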

And the second, more general question: To retrieve parameters for different conditions, would you use the method above or rather fit the model separately per subject and condition?

Best,
Tadeusz

Discussed in #75

Originally posted by sadegh1985 May 8, 2023
When we estimate parameters for several within-subject conditions (e.g., coherence levels in the Roitman data), the output is an average (across conditions) for all parameters. Is it possible to retrieve parameters for separate conditions in PyDDM?

Bests,

Sadegh

Can `Overlay`s depend on conditions?

I have a model where the drift rate, noise, and non-decision time depend on conditions. The drift rate is equal to the condition value delta. This is easy to implement:

class Drift(ddm.models.Drift):
    name = "drift rate depends on delta"
    required_parameters = []
    required_conditions = ["delta"]

    def get_drift(self, conditions, **kwargs):
        return conditions["delta"]

There are two sources of noise: one is a free parameter, and the other is proportional to the condition value isi. This again is not difficult:

class Noise(ddm.models.Noise):
    name = "noise contains two variance components, one depends on the isi"
    required_parameters = ["s", "d"]
    required_conditions = ["isi"]

    def get_noise(self, conditions, **kwargs):
        return np.sqrt(2 * np.power(self.s, 2) + self.d * (conditions["isi"] + 0.1))

However, I can't allow the non-decision time to depend on conditions, because there is no get_nondectime method to override. The following doesn't work:

class NonDecision(ddm.models.OverlayNonDecision):
    name = "NonDecision is a flat amount plus isi"
    required_parameters = ["ndt"]
    required_conditions = ["isi"]

    def get_nondectime(self, conditions, **kwargs):
        return self.ndt + conditions["isi"]

Digging into the source code, ddm.models.Drift and ddm.models.Noise are subclasses of Dependence whereas ddm.models.OverlayNonDecision is a subclass of Overlay (a subclass of Dependence itself).

Is there a reason why Overlays don't have get_{x} methods? Is there some way to make them dependent on conditions?
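A rough sketch of the workaround I'm considering, assuming Overlay subclasses implement apply(solution) and that the Solution object carries the conditions it was solved under:

import ddm

class OverlayNonDecisionISI(ddm.models.OverlayNonDecision):
    name = "Non-decision time is a flat amount plus the isi condition"
    required_parameters = ["ndt"]
    required_conditions = ["isi"]
    def apply(self, solution):
        # Compute a condition-dependent shift and delegate to the stock overlay.
        ndtime = self.ndt + solution.conditions["isi"]
        return ddm.models.OverlayNonDecision(nondectime=ndtime).apply(solution)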

ddm.plot.plot_decision_variable_distribution() missing?

In version 0.1.3 I'm getting an AttributeError (module 'ddm.plot' has no attribute 'plot_decision_variable_distribution') when calling ddm.plot.plot_decision_variable_distribution(). Has it been removed from PyDDM?

shinn2020 example colab notebook broken

Hi,

It seems that the code for visualizing the Shinn 2020 model is broken.

The following error is given when trying to run the visualize cell:

ArgumentTypeError: Invalid argument type: solution=<ddm.solution.Solution object at 0x7f4ed67f5810> is not of type Generic(<class 'ddm.solution.Solution'>) in OverlayChain.apply

corrupted condition numbers

The nested function pt() in Sample.from_pandas_dataframe() and Sample.from_numpy_array() casts condition numbers to int: arr = arr.astype(int). This casts them to np.int32, not to python built-in int, causing overflow for very long condition numbers. I'm guessing most people wouldn't encounter this problem, but I have a case where I needed to use very long condition numbers and stumbled upon the issue. I'm not sure if you could instead cast to np.int64 and still preserve functionality elsewhere? I also understand if this is too idiosyncratic a problem to address. In my case there is a 1-1 correspondence between the original condition number and the overflowed number, so I am able to recover my labels and can work around it.

Improved documentation for Model.fitresult

Currently documentation is lacking for Model.fitresult. This should be documented in the Model object definition, as well as in the quickstart guide, and in the section of the cookbook devoted to loss functions.
