
ILS Labs' Projects

zep-hybride-visual-fixation

The hybrid visual fixation or habituation (`hvf`) paradigm can be used to assess perception in young children. It involves measuring looking times by scoring a video feed of the child.

zep-identification-yes-no-experiment

The purpose of this experiment is to measure a participant's ability to identify (speech) sounds. On each trial a sound is presented; the participant's task is to judge whether a prespecified property is present in the stimulus or not. Self-paced. Output: the chosen value.

zep-magnitude-estimation-of-linguistic-acceptability

The purpose of this experiment is to measure the acceptability of a stimulus (a sentence) relative to a standard stimulus (also known as the modulus). In every block the modulus is displayed first, and the participant enters a numerical value stating the acceptability of that sentence. Then, on each test trial, the participant is presented with a sentence to be judged; their task is to assign it a score relative to the score entered for the modulus.
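The modulus-relative scoring described above amounts to a simple ratio. The following is an illustrative Python sketch of that analysis step (the function name and signature are assumptions for illustration, not part of the Zep implementation):

```python
def relative_acceptability(trial_score: float, modulus_score: float) -> float:
    """Express a test-trial score relative to the modulus score.

    E.g. with a modulus scored 50, a trial scored 100 is judged
    twice as acceptable as the modulus (ratio 2.0).
    """
    if modulus_score <= 0:
        raise ValueError("modulus score must be positive")
    return trial_score / modulus_score
```

For example, a test sentence scored 75 against a modulus scored 50 yields a relative acceptability of 1.5.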

zep-non-adjacent-dependency-learning--hpp-

The purpose of this headturn preference experiment is to see whether an infant participant can detect a difference between two types of auditory stimuli. The infant sits on the caregiver's lap facing a wall on which a green light, an invisible speaker and a camera are mounted. On each side wall a red light and an invisible speaker are mounted. In this implementation there is a familiarization phase and a test phase. On each trial in the test phase the infant's attention is drawn to one of the side lights (blinking). When the infant looks at the blinking light a sequence of sound stimuli starts, and a timer runs as long as the infant keeps looking at the light. The trial ends when the infant looks away too long (or when a particular number of stimuli have been presented). The familiarization phase that precedes the test phase uses a similar contingency procedure, but only for the lights; the sound stimuli, once started, continue until all have been presented. Output test phase: looking time. Output familiarization phase: total looking time.
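The contingency logic of a test-phase trial can be sketched as a loop over gaze samples. This is a minimal Python illustration of the timing logic only, not the Zep implementation; the sampling interval, look-away threshold, and the idea of feeding it pre-coded boolean gaze samples are all assumptions:

```python
def contingent_looking_time(gaze_samples, poll=0.05, max_lookaway=2.0):
    """Total looking time for one contingent trial.

    `gaze_samples` is a sequence of booleans, one per `poll` seconds,
    True while the infant looks at the blinking side light (in the
    real setup these come from live gaze coding of the camera feed).
    The trial ends once the infant has looked away longer than
    `max_lookaway` seconds.
    """
    looking_time = 0.0
    lookaway = 0.0
    for looking in gaze_samples:
        if looking:
            looking_time += poll
            lookaway = 0.0          # looking resets the look-away clock
        else:
            lookaway += poll
            if lookaway >= max_lookaway:
                break               # looked away too long: trial ends
    return looking_time
```

A real trial would additionally cap the number of stimuli presented, as the description notes.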

zep-non-adjacent-dependency-learning--hpp---alt1-

The purpose of this headturn preference experiment is to see whether an infant participant can detect a difference between two types of auditory stimuli. The infant sits on the caregiver's lap facing a wall on which a green light, an invisible speaker and a camera are mounted. On each side wall a red light and an invisible speaker are mounted. In this implementation there is a familiarization phase and a test phase. On each trial in the test phase the infant's attention is drawn to one of the side lights (blinking). When the infant looks at the blinking light a sequence of sound stimuli starts, and a timer runs as long as the infant keeps looking at the light. The trial ends when the infant looks away too long (or when a particular number of stimuli have been presented). The familiarization phase that precedes the test phase uses a similar contingency procedure, but only for the lights; the sound stimuli, once started, continue until all have been presented.

zep-read-stationary-window-incorrectness-response

The purpose of this experiment is to measure the speed of detecting a grammatical error in sentences, using an auto-paced word/segment revealing mechanism. The participant's task is to read sentences that are presented segment by segment, hitting a button when a grammatical error is shown. RT is measured from the presentation of a segment to the button press. This particular SPR implementation uses a stationary window: each segment's text is displayed at a fixed position on the screen, so the segment window stays in place while the sentence moves underneath it segment by segment.

zep-redirect

A small script that makes it easy to use use-zep-x.xx.txt files in script files for selecting the correct Zep version.

zep-self-paced-reading-with-cumulative-window-ying-liu

The purpose of this experiment is to measure word or segment reading times in sentences, using a self-paced word/segment revealing mechanism. The participant's task is to read sentences that are presented segment by segment; the participant reveals the next segment by hitting a button. RT is measured from the presentation of a segment to the button press.

zep-self-paced-reading-with-moving-window-boilerplate

The purpose of this experiment is to measure word or segment reading times in sentences, using a self-paced word/segment revealing mechanism. The participant's task is to read sentences that are presented segment by segment; the participant reveals the next segment by hitting a button. RT is measured from the presentation of a segment to the button press. For each sentence there is a 50% chance that a comprehension question is asked.
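The per-segment RT measurement and the 50% comprehension-question roll described above can be sketched as follows. This is an illustrative Python sketch, not the Zep implementation; `get_keypress` and `ask_question` are hypothetical stand-ins for the real stimulus/response machinery:

```python
import random
import time

def run_spr_trial(segments, get_keypress, ask_question, rng=random):
    """Present a sentence segment by segment; return per-segment RTs.

    `get_keypress()` is assumed to block until the participant hits the
    button; `ask_question()` is assumed to present a comprehension
    question. Both are hypothetical callbacks for illustration.
    """
    rts = []
    for segment in segments:
        onset = time.perf_counter()   # segment is revealed here
        print(segment)                # placeholder for on-screen display
        get_keypress()                # participant reveals next segment
        rts.append(time.perf_counter() - onset)
    if rng.random() < 0.5:            # 50% chance of a comprehension question
        ask_question()
    return rts
```

Measuring from segment onset rather than from the previous keypress keeps each RT tied to the interval in which that segment was actually visible.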

zep-single-image-visual-world-boilerplate

The purpose of this experiment is to record a participant's eye movements while they listen to a spoken utterance and look at a screen displaying a semi-realistic scene. On each trial a scene is displayed and an utterance relating to the scene is played. The participant's task is to look and listen carefully. Self-paced. Output: eye-tracking data as collected by the eye-tracker.
