uil-ots-labs
Name: ILS Labs
Type: Organization
Bio: A wide range of programs and experiment scripts written by the technical-lab-support staff of the Institute for Language Science Labs (ILS Labs)
Location: Utrecht University
HTP (headturn preference) for four Dutch labs
The hybrid visual fixation or habituation (`hvf`) paradigm can be used to assess perception in young children. It involves measuring looking times by scoring a video feed of the child.
The purpose of this experiment is to measure a participant's ability to identify (speech) sounds. On each trial a sound is presented; the participant's task is to judge whether a prespecified property is present in the stimulus or not. Self-paced. Output: chosen value.
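A minimal sketch of that trial loop in plain Python (not ZEP); `play_sound()` is a hypothetical stand-in for the experiment's actual audio call, and `input()` stands in for the response buttons:

```python
# A minimal sketch (plain Python, not ZEP) of the identification trial
# loop described above. play_sound() is a hypothetical stand-in for the
# real audio call; input() stands in for the response buttons.

def play_sound(path: str) -> None:
    print(f"[playing {path}]")  # placeholder for real audio playback

def run_identification_block(stimuli: list[str]) -> list[tuple[str, str]]:
    results = []
    for path in stimuli:
        play_sound(path)
        # Self-paced: wait until the participant judges whether the
        # prespecified property is present (y) or absent (n).
        choice = ""
        while choice not in ("y", "n"):
            choice = input("Property present? (y/n): ").strip().lower()
        results.append((path, choice))  # output: the chosen value
    return results
```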
Five images are shown, with the one in the middle carrying an instruction. The participant is to select an image. Can be used with eye tracking.
Tutorial for the magic variables of ZEP
The purpose of this experiment is to measure the acceptability of a stimulus (a sentence) relative to a standard stimulus, also known as the modulus. In every block the modulus is displayed first, and the participant enters a numerical value stating the acceptability of that sentence. Then, for each test trial, the participant is presented with a sentence to be judged; the task is to assign the sentence a score relative to the score entered for the modulus.
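A minimal sketch of that block structure in plain Python (not ZEP); dividing each test score by the modulus score is an assumption about how the relative judgements are meant to be interpreted:

```python
# A minimal sketch (plain Python, not ZEP) of one magnitude-estimation
# block as described above. The normalisation by the modulus score is
# an assumption, not taken from the repository itself.

def run_block(modulus: str, test_sentences: list[str]) -> list[tuple[str, float]]:
    print(f"MODULUS: {modulus}")
    modulus_score = float(input("Acceptability score for the modulus: "))
    judgements = []
    for sentence in test_sentences:
        print(sentence)
        score = float(input("Score relative to the modulus: "))
        # Normalising by the modulus score makes ratings comparable
        # across participants who use different numeric ranges.
        judgements.append((sentence, score / modulus_score))
    return judgements
```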
A Zep module to send (EEG) markers using a parallel port.
New hvf experiment for Maartje to be run in 2017/2018
Zep-based experiment designed for Josje Verhagen; investigates certain grammatical distinctions in children.
Namepick adaptation for Majron Jessurun
Auditory discrimination task with additional stimulus options, visually appealing instructions, and a test page.
The purpose of this headturn preference experiment is to see whether an infant participant can detect a difference between two types of auditory stimuli. The infant sits on the caregiver's lap facing a wall on which a green light, an invisible speaker and a camera are mounted; on each side wall a red light and an invisible speaker are mounted. In this implementation there is a familiarization phase and a test phase. For each trial in the test phase the infant's attention is drawn to one of the side lights (blinking). When the infant looks at the blinking light a sequence of sound stimuli starts, and a timer runs as long as the infant keeps looking at the light. The trial ends when the infant looks away too long (or when a particular number of stimuli have been presented). In the familiarization phase that precedes the test phase a similar contingency procedure is used, but only for the lights; the sound stimuli, once started, continue until all have been presented. Output test phase: looking time. Output familiarization phase: total looking time.
A second implementation of the headturn preference procedure described above, with the same familiarization phase and look-contingent test phase.
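A minimal sketch of the look-contingent timing in the test phase, in plain Python rather than ZEP; the 2-second look-away criterion and the 30-second cap are assumptions, and `look_state()` stands in for the coder's key press indicating whether the infant is looking at the blinking light:

```python
# A minimal sketch (plain Python, not ZEP) of the look-contingent timer
# described above. The 2 s look-away criterion and the 30 s cap are
# assumptions; look_state() is a hypothetical polling callback.

import time

LOOKAWAY_LIMIT = 2.0   # assumed: trial ends after 2 s of looking away
MAX_TRIAL_TIME = 30.0  # assumed: roughly when all stimuli have played

def run_test_trial(look_state) -> float:
    """Return the accumulated looking time for one test trial."""
    looking_time = 0.0
    away_since = None
    t_prev = time.monotonic()
    while True:
        t_now = time.monotonic()
        if look_state():
            looking_time += t_now - t_prev  # timer runs only while looking
            away_since = None
        elif away_since is None:
            away_since = t_now              # infant just looked away
        elif t_now - away_since >= LOOKAWAY_LIMIT:
            break                           # looked away too long
        if looking_time >= MAX_TRIAL_TIME:
            break                           # all stimuli presented
        t_prev = t_now
        time.sleep(0.01)                    # poll at roughly 100 Hz
    return looking_time
```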
This project is a small modification to a digit-span task.
Zep experiment that shows (Dutch) sentences and records whether the participant presses yes or no.
The participant sees two sentences and matches them to the correct picture.
This boilerplate experiment presents a target picture and two possible answer pictures for the participant to respond to.
A picture selection task for Shuangshuang
A preferential looking paradigm which includes animations.
The purpose of this experiment is to measure the speed of detecting a grammatical error in sentences, using an auto-paced word/segment revealing mechanism. The participant's task is to read sentences which are presented in a segment-by-segment fashion, and to hit a button when a grammatical error is shown. RT is measured from the presentation of a segment to the button press. This particular SPR implementation uses a stationary window: for each segment the text is displayed at a fixed position on the screen, so the segment window is stationary and the sentence moves underneath it segment by segment.
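A minimal sketch of that auto-paced stationary-window presentation in plain Python (not ZEP); the 600 ms segment duration and the `button_pressed()` poll are assumptions:

```python
# A minimal sketch (plain Python, not ZEP) of the auto-paced stationary
# window described above. 600 ms per segment and the button_pressed()
# callback are assumptions; '\r' keeps every segment at one position.

import time

SEGMENT_DURATION = 0.6  # assumed auto-pacing rate, in seconds

def present_sentence(segments: list[str], button_pressed) -> dict:
    """Show segments one by one at a fixed position; return the index of
    the segment during which the button was pressed and the RT measured
    from that segment's onset."""
    for i, segment in enumerate(segments):
        onset = time.monotonic()
        print(f"\r{segment:<24}", end="", flush=True)  # stationary window
        while time.monotonic() - onset < SEGMENT_DURATION:
            if button_pressed():
                return {"segment": i, "rt": time.monotonic() - onset}
            time.sleep(0.005)
    return {"segment": None, "rt": None}  # no error detected
```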
Self-Paced Reading (SPR) Zep experiment with Cumulative Window, followed by a two-picture choice task.
Small script that allows easy use of use-zep-x.xx.txt files in script files, for selecting the correct zep version.
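A minimal Python sketch of the version lookup that description implies; the use-zep-x.xx.txt naming scheme comes from the description itself, while the directory scan and the derived binary name are assumptions:

```python
# A minimal Python sketch of the version lookup implied above. Only the
# use-zep-x.xx.txt naming scheme is taken from the description; the
# rest is an assumption about how such a marker file would be used.

import re
from pathlib import Path

def find_zep_version(script_dir: str) -> str | None:
    """Return the zep version named by a use-zep-x.xx.txt marker file,
    e.g. "1.14" for use-zep-1.14.txt, or None if no marker is found."""
    for entry in Path(script_dir).iterdir():
        match = re.fullmatch(r"use-zep-(\d+\.\d+)\.txt", entry.name)
        if match:
            return match.group(1)  # run the matching zep-x.xx binary
    return None
```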
The purpose of this experiment is to measure word or segment reading times in sentences, using a self-paced word/segment revealing mechanism. The participant's task is to read sentences which are presented in a segment-by-segment fashion; the participant reveals the next segment by hitting a button. RT is measured from the presentation of a segment to the button press.
The purpose of this experiment is to measure word or segment reading times in sentences, using a self-paced word/segment revealing mechanism. The participant's task is to read sentences which are presented in a segment-by-segment fashion; the participant reveals the next segment by hitting a button. RT is measured from the presentation of a segment to the button press. For each sentence there is a 50% chance of a comprehension question being asked.
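A minimal sketch of the self-paced reveal and the 50% comprehension-question coin flip from the two entries above, in plain Python (not ZEP), with `input()` standing in for the response button:

```python
# A minimal sketch (plain Python, not ZEP) of self-paced reading with an
# optional comprehension question; input() stands in for the button.

import random
import time

def read_sentence(segments: list[str]) -> list[float]:
    """Present segments one at a time; return one RT per segment,
    measured from segment onset to the participant's button press."""
    rts = []
    for segment in segments:
        onset = time.monotonic()
        input(segment + "  [press Enter for the next segment]")
        rts.append(time.monotonic() - onset)
    return rts

def maybe_ask_question(question: str) -> str | None:
    if random.random() < 0.5:  # 50% chance per sentence
        return input(question + " (y/n): ").strip().lower()
    return None
```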
The purpose of this experiment is to record the participant's eye movements while they listen to a spoken utterance and look at a screen displaying a semi-realistic scene. For each trial a scene is displayed and an utterance relating to the scene is played. The participant's task is simply to look and listen carefully. Self-paced. Output: eye-tracking data as collected by the eye tracker.
A visual world experiment with an added fixation cross to avoid an initial looking bias.
The purpose of this experiment is to detect whether an infant participant has a preference for one particular type of sound (word, intonation, accent, language, etc.) over another.
ZEP task for statistical and categorical learning with implicit or explicit instruction. Contains an auditory identification part and an auditory AX discrimination part.
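A minimal sketch of one auditory AX discrimination trial in plain Python (not ZEP); `play_sound()` and the 500 ms inter-stimulus interval are illustrative assumptions:

```python
# A minimal sketch (plain Python, not ZEP) of an AX discrimination
# trial: two sounds are played and the participant judges whether they
# are the same or different. play_sound() and the ISI are assumptions.

import time

def play_sound(path: str) -> None:
    print(f"[playing {path}]")  # placeholder for real audio playback

def ax_trial(a_path: str, x_path: str, isi: float = 0.5) -> str:
    """Play sounds A and X, then collect a same/different judgement."""
    play_sound(a_path)
    time.sleep(isi)
    play_sound(x_path)
    response = ""
    while response not in ("s", "d"):
        response = input("Same or different? (s/d): ").strip().lower()
    return response
```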