
expfactory-experiments's Introduction

The Experiment Factory



See our documentation for getting started. If you are new to containers, read our background or paper first. If you want a more guided entry, see the detailed start.

The Experiment Factory is software to create a reproducible container that you can easily customize to deploy a set of web-based experiments.

Citation

If the Experiment Factory is useful to you, please cite the paper to support the software and open source development.

Sochat, V. (2018). The Experiment Factory: Reproducible Experiment Containers. Journal of Open Source Software, 3(22), 521. https://doi.org/10.21105/joss.00521

Contributing

We have many ways to contribute, and will briefly provide resources here to get you started.

How to Contribute

If you are a developer interested in working on the Experiment Factory software, you should read our contributing guidelines for details. For contributing containers and experiments, see our user documentation. If you have any questions, please don't hesitate to ask. You'll need to lint your code using black:

$ pip install black
$ black expfactory --exclude template.py

Code of Conduct

It's important to treat one another with respect and to maintain a fun and respectful environment for the open source community. Toward this aim, we ask that you review our code of conduct.

Background

Its predecessor at Expfactory.org was never able to open up to the public, which went against the original goal of the software. Further, the badly needed functionality to serve a local battery was poorly met by expfactory-python as time progressed and dependencies changed.

This version is agnostic to the underlying driver of the experiments, and provides reproducible, instantly deployable "container" experiments. What does that mean?

  • You obtain (or build) one container, a battery of experiments.
  • You (optionally) customize it:
    • custom variables (e.g., a study identifier) and configurations go into the build recipe
    • you can choose to use your own database (default output is flat files)
    • other options are available at runtime
  • The container can be easily shared.
  • You run the container, optionally specifying a subset and ordering (see the sketch below), and collect your results.
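
As a loose illustration of that last step, here is one way to launch a built container and pass runtime options from Python. This is a sketch, not the documented interface: the image name, port mapping, and the --experiments flag are placeholder assumptions, so check the expfactory documentation for the real invocation.

import subprocess

# Hypothetical invocation (sketch): run a battery container and select a
# subset and ordering of experiments at runtime. "my-user/my-battery" and
# "--experiments" are placeholders, not the documented CLI.
subprocess.run(
    [
        "docker", "run", "-d", "-p", "80:80",
        "my-user/my-battery",
        "start",
        "--experiments", "tower-of-london,test-task",
    ],
    check=True,
)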

If you build on Docker Hub, anyone else can pull and use your exact container to collect their own results; it is exact down to the file hash. Note that base images for expfactory were initially provided on Docker Hub and have moved to Quay.io. Dockerfiles in the repository that use the expfactory-builder have also been updated. If you need a previous version, please see the tags on the original Docker Hub.

Experiment Library

The experiments themselves are now maintained under expfactory-experiments. Official submissions to be found by expfactory can be added to the library (under development), where they are tested to ensure that they meet minimum requirements.

expfactory-experiments's People

Contributors

henrymj, ianeisenberg, jkl071, kywch, matildevaghi, mckenziephagen, rios-jaime, rwblair, sjshim, vsoch, waltersjonathon, yarikoptic, zenkavi


expfactory-experiments's Issues

mpq_control has exp_id coded with capital letters

The MPQ questionnaire has one or more exp_id variables that are coded with capital letters:


It would be ideal to generate the tag dynamically for all data blocks from somewhere in the init; that way we can have a test to ensure it is accurate.
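
A minimal test along those lines might look like the following sketch, assuming the tag lives in config.json under "exp_id" (or the older "tag") and that data blocks are dicts; both are assumptions about the schema rather than the current code.

import json
import os

# Sketch: the exp_id recorded in each data block should match the
# (lowercase) tag in config.json. Field names are schema assumptions.
def check_exp_id(experiment_folder, data_blocks):
    with open(os.path.join(experiment_folder, "config.json")) as fh:
        config = json.load(fh)
    tag = config.get("exp_id") or config.get("tag")
    assert tag == tag.lower(), "exp_id should be lowercase: %s" % tag
    for block in data_blocks:
        recorded = block.get("exp_id")
        if recorded is not None:
            assert recorded == tag, "%s != %s" % (recorded, tag)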

General issue on redundancy at beginning of experiments

In several tasks, including probabilistic selection, discount titrate, and motor selective stop, there are two displays: one welcoming the participant to the experiment and another welcoming them to the instructions. This should probably be streamlined into a single display.

Test task variables have wrong name

The result of the experiment returns "rt" for reaction time, but the config.json specifies "avg_rt".

For now I'm going to check first for the entire variable name, then try splitting off the prefix and looking for common stat names (avg, mean, med, sum); if any of those are found, the summary statistic will be returned. We will want to talk about whether there is a larger set, and once we have a decided set, we need docs.
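
For concreteness, the lookup could work like this sketch (the function name and trial format are illustrative, not the actual implementation):

import statistics

# Try the exact variable name first; if absent, split off a prefix and,
# when it names a known summary statistic, compute it over the base variable.
STATS = {
    "avg": statistics.mean,
    "mean": statistics.mean,
    "med": statistics.median,
    "sum": sum,
}

def lookup_variable(name, trials):
    # trials: list of dicts, e.g. [{"rt": 911}, {"rt": 361}, ...]
    values = [t[name] for t in trials if name in t]
    if values:
        return values
    prefix, _, base = name.partition("_")  # "avg_rt" -> ("avg", "rt")
    if prefix in STATS and base:
        values = [t[base] for t in trials if base in t]
        if values:
            return STATS[prefix](values)
    return None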

experiments not valid:

bickel_titrator: config.json must be defined for field cognitive_atlas_task_id
letter_memory: config.json must be defined for field cognitive_atlas_task_id
tower_of_london: tag parameter tol does not match folder name.
dospert_rp: config.json must be defined for field cognitive_atlas_task_id
digit_span: config.json must be defined for field cognitive_atlas_task_id
WARNING: config.json is missing value for field reference: flanker
WARNING: config.json is missing value for field reference: multiplication
multiplication: config.json must be defined for field cognitive_atlas_task_id
number_letter: config.json must be defined for field cognitive_atlas_task_id
antisaccade: config.json must be defined for field cognitive_atlas_task_id
tone_monitoring: config.json must be defined for field cognitive_atlas_task_id
ax_cpt: config.json must be defined for field cognitive_atlas_task_id
cognitive_reflection: style.css is specified in config.json but missing.
spatial_span: config.json must be defined for field cognitive_atlas_task_id
WARNING: config.json is missing value for field name: volatile_bandit
volatile_bandit: config.json must be defined for field cognitive_atlas_task_id
plus_minus: config.json must be defined for field cognitive_atlas_task_id
dospert_rt: config.json must be defined for field cognitive_atlas_task_id
WARNING: config.json is missing value for field reference: multisource
multisource: config.json must be defined for field cognitive_atlas_task_id
hierarchical_rule: config.json is missing field notes
directed_forgetting: config.json must be defined for field cognitive_atlas_task_id
dietary_decision: config.json must be defined for field cognitive_atlas_task_id
prp: config.json must be defined for field cognitive_atlas_task_id
ant: config.json must be defined for field cognitive_atlas_task_id
recent_probes: config.json must be defined for field cognitive_atlas_task_id
dospert_eb: config.json must be defined for field cognitive_atlas_task_id
rng: config.json must be defined for field cognitive_atlas_task_id
shift_task: config.json must be defined for field cognitive_atlas_task_id
threebytwo: config.json must be defined for field cognitive_atlas_task_id
n_back: config.json must be defined for field cognitive_atlas_task_id
willingness-to-wait: tag parameter wtw does not match folder name.
tower_of_london_imagine: tag parameter tol does not match folder name.
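
A minimal sketch of checks that would produce messages like the ones above follows; the field names come from the messages themselves, while the config schema and the fatal-versus-warning split are assumptions.

import json
import os

def validate_experiment(folder):
    name = os.path.basename(folder.rstrip("/"))
    with open(os.path.join(folder, "config.json")) as fh:
        config = json.load(fh)

    # required fields that must be present
    for field in ("cognitive_atlas_task_id", "notes"):
        if field not in config:
            print("%s: config.json must be defined for field %s" % (name, field))

    # fields that may be present but empty
    for field in ("name", "reference"):
        if field in config and not config[field]:
            print("WARNING: config.json is missing value for field %s: %s" % (field, name))

    # the tag must match the folder name
    tag = config.get("tag")
    if tag and tag != name:
        print("%s: tag parameter %s does not match folder name." % (name, tag))

    # files listed in the config (e.g., style.css) must actually exist
    for path in config.get("run", []):
        if not os.path.exists(os.path.join(folder, path)):
            print("%s: %s is specified in config.json but missing." % (name, path))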

Custom Plugins README

We should have a detailed README.md in the custom_plugins folder explaining how the custom plugins differ from the stock ones, as this will eventually get confusing (e.g., if someone is implementing a task and is not familiar with our changes).

Custom Battery does not load in psiTurk

After creating a custom battery (adaptive n-back only for now) with expfactory, the exp.html does not load in psiTurk.

One problem I ran into was that the exp.html JS source paths reference jspsych/custom_plugins; however, the folder is renamed to jspsych/poldrack_plugins when the custom battery is generated. I changed the source paths in exp.html to jspsych/poldrack_plugins, but still ran into the following errors when loading exp.html.

  1. - No plugin loaded for trials of type "text" (anonymous function) @ jspsych.js:273
  2. - No plugin loaded for trials of type "consent" (anonymous function) @ jspsych.js:273
  3. - Uncaught TypeError: Cannot read property 'trial' of undefined jspsych.js:581

Add performance variables to config.json

To start, this should be a single dict describing a variable in the browser that indicates level of performance, to be used by the user to decide on allocation of bonuses. It might look like:

  {"name":"reaction_time",
   "description":"the reaction time of the subject, in seconds",
   "type":"int".
   "range":[0,100]}

Add attention checks as experiment variable

The user should be able to select whether to turn attention checks on or off, and this is possible by adding it as an experiment_variable in the config.json. Since we want to streamline getting experiments finished and merged, it has been decided not to add this right now, but just to have them on by default in some experiments.
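
If it were added later, the entry might be shaped like the performance variable above; none of these field names are final, this is just a sketch.

# Hypothetical experiment_variable entry for toggling attention checks.
attention_check_variable = {
    "name": "attention_checks",
    "type": "configuration",
    "datatype": "boolean",
    "description": "If true, insert attention check trials",
    "default": True,
}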

Continuous Integration Web Preview

Currently, the generated web preview is pulling experiments from the repo, and this argument needs to be updated to take in the current experiment folder (in Circle CI).

General issue for questionnaires

What is the purpose of the red stars? This is sometimes used in surveys when certain questions must be answered, but here all questions must be answered, so the stars seem unnecessary.

Test task rejection variable not found in data

The test task rejection variable should coincide with a variable that can be found in the data. For example, here is the current variable definition:

  {
    "name": "reject",
    "type": "credit",
    "datatype": "boolean",
    "description": "True if avg_rt < 100"
  }

There is no variable "reject" to be found in the data, so we have two choices. The task can be altered to include the variable "reject" in the taskdata (and I am including the jspsych output here to show what we currently capture):

  '[{"current_trial": 0, "trialdata": {"rt": 911, "trial_type": "text", "internal_chunk_id": "0-0", "time_elapsed": 912, "key_press": 13, "trial_index": 0, "trial_index_global": 0}, "uniqueid": "1", "dateTime": 1451066377482}, {"current_trial": 1, "trialdata": {"rt": 915, "trial_type": "single-stim", "stimulus": "<div class = shapebox><div id = cross></div></div>", "trial_id": "test", "internal_chunk_id": "0-0", "time_elapsed": 2832, "key_press": 32, "exp_id": "test_task", "trial_index": 0, "trial_index_global": 1}, "uniqueid": "1", "dateTime": 1451066379401}, {"current_trial": 2, "trialdata": {"rt": 361, "trial_type": "single-stim", "stimulus": "<div class = shapebox><div id = cross></div></div>", "trial_id": "test", "internal_chunk_id": "0-0", "time_elapsed": 3298, "key_press": 32, "exp_id": "test_task", "trial_index": 0, "trial_index_global": 2}, "uniqueid": "1", "dateTime": 1451066379867}, {"current_trial": 3, "trialdata": {"rt": 214, "trial_type": "single-stim", "stimulus": "<div class = shapebox><div id = cross></div></div>", "trial_id": "test", "internal_chunk_id": "0-0", "time_elapsed": 3615, "key_press": 32, "exp_id": "test_task", "trial_index": 0, "trial_index_global": 3}, "uniqueid": "1", "dateTime": 1451066380184}, {"current_trial": 4, "trialdata": {"rt": 871, "trial_type": "text", "internal_chunk_id": "0-0", "time_elapsed": 4591, "key_press": 13, "trial_index": 0, "trial_index_global": 4}, "uniqueid": "1", "dateTime": 1451066381160}]'

Or we can model the variable as numeric and suggest that the user reject if avg_rt < 100. As currently modeled, the variable is not found and no rejection criteria are applied.
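
The second option might look like this sketch, computing avg_rt from the captured jspsych records shown above; the cutoff and the trial filter are assumptions for illustration.

# Reject a result when the average reaction time over test trials falls
# below the suggested cutoff (avg_rt < 100).
def should_reject(records, cutoff=100):
    rts = [
        r["trialdata"]["rt"]
        for r in records
        if r["trialdata"].get("trial_id") == "test"
    ]
    if not rts:
        return True  # no test trials captured at all
    avg_rt = sum(rts) / float(len(rts))
    return avg_rt < cutoff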

Choice RT

In the instructions, perhaps change "get" to "receive".

Global Local 2

It seemed strange and counterintuitive that the next button was on the left and the previous button was on the right. These should probably be switched.

experiment tag validation

  • cannot include characters that are not valid in javascript variables (e.g., "-")
  • must be equivalent to the folder name
  • no spaces
  • must be specified as the tag variable
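
A sketch of these four rules as a check (the "valid javascript variable" pattern is an approximation: letters, digits, _ and $, not starting with a digit):

import os
import re

VALID_TAG = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$]*$")

def validate_tag(folder, config):
    tag = config.get("tag")
    assert tag, "must be specified as tag variable"
    assert " " not in tag, "no spaces allowed"
    assert VALID_TAG.match(tag), "invalid characters for a javascript variable"
    assert tag == os.path.basename(folder.rstrip("/")), "must match folder name"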

Robot to run tasks

Given that we are just using jspsych right now, I see no reason I can't make some kind of robot that can read the task structure and push the required buttons to go through the tasks. This will be step 1; step 2 will be to integrate tests into that process so all experiments can be automatically tested for errors.
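
A very rough sketch of step 1, assuming selenium as the driver and a locally served battery at a placeholder URL; it blindly presses the keys jspsych tasks commonly bind, and step 2 would replace this with checks driven by the actual task structure.

import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("http://localhost/experiments/test_task")  # hypothetical URL
body = driver.find_element(By.TAG_NAME, "body")
for _ in range(100):  # crude upper bound on trials to step through
    body.send_keys(Keys.ENTER)  # advance text/instruction trials
    body.send_keys(Keys.SPACE)  # respond to stimulus trials
    time.sleep(0.5)
driver.quit()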

Initial battery Instruction feedback:1

It says something like subjects should pay attention "Because we have hidden tasks in these blocks". Though that's true, that's not why subjects should pay attention. They should pay attention because this is what we are asking them (and they are agreeing) to do. Perhaps delete.

General issue about trial length and response deadline

This first came up in simple RT but came up several times later. I think that, in the absence of a compelling reason to do otherwise, we should use trial lengths that are not determined by subject performance (i.e., either fixed at one duration or jittered, as in the simple RT task, but not becoming shorter or longer based upon faster or slower responses).

I think for some tasks (for example, stop-signal or go/no-go, but also many others) it can be difficult to get a subject to balance the demands of speed and accuracy. By letting them complete trials faster by responding faster (which is how I believe at least the simple and choice RT tasks work), I think this may shift the balance too far towards speed. If the trial lengths are all fixed, subjects cannot speed up completion of the experiment by trading accuracy (or inhibition) for speed, so they will instead just try to follow our instructions (to balance the competing demands of speed/accuracy/control).

Favicon.ico

As part of #57, some script is trying to load a "favicon.ico" at the server base (/favicon.ico). I can't seem to find this in any of the direct CSS files, so it is likely hiding with the other bug. This also needs to be resolved, as it will fail tests.

angling risk task freezes

If I collect immediately with 0, the experiment seems to freeze, and clicking "collect" or "go fish" doesn't seem to do anything.


Strange 404 errors for adaptive_n_back

The strange 404 errors seem to be related to references to divs as files that the script is asking to retrieve.

It appears that all the text has classes defined without parens, so they are being called as functions. I am testing this now.

Discount Titrate 4

In this task, only 2 trials occurred. I believe more should have occurred.

Change global local Navon Stim to letters

I would recommend replacing this more complicated version of the global local task with the traditional version (I believe Navon, 1977), in which subjects see a large letter stimulus (e.g., an S) made up of smaller letter stimuli (e.g., H's). Our current version appears to be unnecessarily complicated to instruct and to understand.

Experiments missing times

holt_laury
directed_forgetting
recent_probes

These will be specified as 999 in the config, and should be changed when the real time is known.
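
For reference, the placeholder might look like this (a sketch; assuming the config's "time" field is in minutes, as for other experiments):

# Placeholder entry for an experiment whose duration is unknown;
# replace 999 once the real time is measured.
placeholder = {"exp_id": "holt_laury", "time": 999}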

General Issue: Redoing tasks

Why are we letting them redo tasks? In the initial battery instructions, it's unclear under what circumstances subjects should be redoing a task. I don't think we want them to redo a task (for example) if they think they have caught onto it and would like to do it again now that they're better. I think we should consider either (1) not letting subjects redo tasks, or (2) being more explicit that we only want them to redo tasks under a specific set of circumstances (e.g., they were forced away from the computer by some pressing real-world demand).

Simple RT

Related to (but somewhat distinct from) the general issue of performance-independent trial lengths, we should probably have something in the simple RT task that dissuades subjects from just pressing the spacebar constantly and not even attending to the task. Perhaps the best way would be to give them some form of negative feedback (e.g., "Please wait to respond until the stimulus appears") if they respond before the stimulus.

Willpower 1

Why tell subjects what willpower is? This seems unnecessary and could be a demand characteristics concern. For questionnaires, I prefer minimalist instructions like the ones presented for the brief self-control questionnaire.
