
matthewcso / nrgyorku-brain-computer-music-interface


A project intended to generate music fitted to a person's emotional state as determined through real-time EEG recordings.

Languages: Jupyter Notebook 97.37%, Python 2.13%, MATLAB 0.39%, HTML 0.07%, JavaScript 0.03%, CSS 0.01%
Topics: brain-computer-interface, electroencephalography, emotiv-epoc, classification, music

nrgyorku-brain-computer-music-interface's Introduction

NRGYorkU Brain Computer Music Interface

Goal:

To develop a Brain-Computer Music Interface (BCMI) to generate music based on a person's emotions, as determined by real-time EEG classification using an Emotiv kit.

Potential Steps:

There are three main parts of this project that can be developed in parallel. Connecting multiple real-time data feeds to one another may involve a number of technical difficulties.

  • EEG data feed: We need to develop a pipeline for acquiring a real-time EEG data feed that can be attached to a classifier. This may be the most difficult step conceptually, as there are few good libraries available for acquiring this type of data from the Emotiv EPOC.
  • Emotion Classification: We need to extract robust EEG features and train a classifier or regressor to determine the emotional content of EEG data. We will use publicly available EEG datasets for this task, and may want to incorporate some elements of semi-supervised learning, given the abundance of unlabelled EEG data (a minimal classifier sketch follows this list).
  • Music Generation: We need to generate music based on the identified emotions. This can be done using Erlich's algorithm, which has already been translated to Python. In the future it may be better to replace Erlich's algorithm entirely with a generative adversarial music-generation algorithm, but that would be a technically difficult change.
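
As a rough illustration of the Emotion Classification step, here is a minimal, hedged sketch of training a Random Forest (one of the classifiers mentioned in the issues below) on extracted EEG features. The feature array and valence labels here are random stand-ins; in practice they would come from a public dataset such as DEAP.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: in the real pipeline, final_features would be extracted from
# EEG epochs and valence_labels derived from the dataset's self-assessment ratings.
rng = np.random.default_rng(0)
final_features = rng.normal(size=(200, 64))    # (n_training_samples, n_features)
valence_labels = rng.integers(0, 2, size=200)  # binary high/low valence

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, final_features, valence_labels, cv=5)
print("Cross-validated valence accuracy:", scores.mean())
```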

Requirements:

  • Python 3.x. Anaconda installation highly recommended. Ability to run Jupyter Notebooks highly recommended.
  • All libraries listed in requirements.txt. Run pip install -r requirements.txt from command line to install dependencies.
  • VirtualMIDISynth. You can download this software from here.
  • Sound fonts for VirtualMIDISynth. I used these soundfonts.

Citations

nrgyorku-brain-computer-music-interface's People

Contributors

matthewcso, hglassman, dannybcmi

Watchers

Denis Laesker

nrgyorku-brain-computer-music-interface's Issues

Create MIDI output from Generative Algorithm

Try to get the Audio Toolbox working in composer_algorithm in MATLAB, and send its output to a MIDI device to see whether we can execute MIDI commands independently of EEG input.
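
The issue targets MATLAB's Audio Toolbox, but the same sanity check (sending MIDI commands with no EEG input) can be sketched in Python with the mido library; mido is not listed in the repo's requirements, so treat this purely as an illustrative assumption.

```python
import time
import mido  # assumes a MIDI backend such as python-rtmidi is installed

# Open the default MIDI output (on Windows this can be VirtualMIDISynth).
with mido.open_output() as port:
    for note in (60, 64, 67):  # a C major arpeggio, entirely independent of EEG
        port.send(mido.Message('note_on', note=note, velocity=64))
        time.sleep(0.5)
        port.send(mido.Message('note_off', note=note))
```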

Find Workflow Images for Presentation

  1. Create a background slide with an image explaining BCMI/music theory (Harley or Nicia)
  2. Expand on the circumplex model (Nicia)
  3. Explain the process of the overall system (Harley)
  4. EEG Signal Processing - how will the Emotiv stream to the classifier in the testing phase, and how will DEAP stream to the classifier in the training phase? (Harley)
  5. Feature selection - find a Python image to graphically represent the features, and expand on it
  6. Classifiers - use a schematic to represent Random Forest and KNN, and explain how they may classify valence/arousal. Elaborate on Ramzan and Dawn's model (Harley or Matthew)
  7. Generative Algorithm - explain the parameters, how the generative algorithm works, and how MIDI signals will be sent to a DAW (Danny)
  8. Future direction - explain the neurofeedback component and how to formalize it methodologically, and additional applications for the BCMI (e.g. entertainment, music composition, coma interventions) (Danny)

Implement FFT descriptive features

For each electrode, take the fast Fourier transform and separate it into frequency bands. Then, for each frequency band, compute the mean, median, variance, standard deviation, minimum, maximum, range, skewness, and similar descriptive statistics of the FFT coefficients, and concatenate these with the existing final_features array (note that final_features should have the shape (number of training samples, number of features)).
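
A minimal sketch of these FFT band features, assuming each training sample is an (electrodes × samples) array and assuming standard EEG band edges; the actual band definitions used by the project may differ.

```python
import numpy as np
from scipy.stats import skew

# Assumed band edges in Hz; adjust to the project's own band definitions.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def fft_band_features(epoch, fs):
    """epoch: (n_electrodes, n_samples) array for one training sample."""
    n_samples = epoch.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    coeffs = np.abs(np.fft.rfft(epoch, axis=1))  # magnitude of the FFT coefficients
    feats = []
    for lo, hi in BANDS.values():
        band = coeffs[:, (freqs >= lo) & (freqs < hi)]
        # Descriptive statistics per electrode for this band
        feats.append(np.column_stack([
            band.mean(axis=1), np.median(band, axis=1), band.var(axis=1),
            band.std(axis=1), band.min(axis=1), band.max(axis=1),
            band.max(axis=1) - band.min(axis=1), skew(band, axis=1),
        ]))
    return np.concatenate(feats, axis=1).ravel()  # one flat feature vector per sample

# Concatenating with the existing final_features array (n_samples x n_features):
# new_block = np.stack([fft_band_features(e, fs=128) for e in epochs])
# final_features = np.concatenate([final_features, new_block], axis=1)
```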

Realtime framework

We need to combine the real-time EEG data feed with preprocessing and classification/regression, then feed the model outputs to the generative model. This might require multiple threads and/or asyncio.
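
One possible shape for this, as a hedged asyncio sketch in which the Emotiv feed, preprocessing, classifier, and generative model are all replaced by placeholders:

```python
import asyncio
import random

async def eeg_producer(queue):
    # Placeholder for the real-time Emotiv feed: emits one fake 14-channel sample
    # at roughly 128 Hz (the EPOC's nominal sampling rate, assumed here).
    while True:
        sample = [random.random() for _ in range(14)]
        await queue.put(sample)
        await asyncio.sleep(1 / 128)

async def consumer(queue, window_size=1280):
    # Keeps a sliding window (~10 s at 128 Hz); preprocessing, classification,
    # and the hand-off to the generative algorithm are stubbed out.
    window = []
    while True:
        window.append(await queue.get())
        window = window[-window_size:]
        if len(window) == window_size:
            emotion = "placeholder"  # classifier output would go here
            # send `emotion` to the generative model / MIDI layer

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(eeg_producer(queue), consumer(queue))

# asyncio.run(main())  # runs indefinitely; Ctrl+C to stop
```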

Program real-time EEG data into classifier

Find a way to capture a robust resampling window (e.g. the last 10 seconds) of real-time EEG data that is continuously sent to the classifier. If possible, stream this through a preprocessing pipeline either in Python (preferably) or through an EEGLab intermediary before sending it to the classifier.
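
A simple way to keep only the last 10 seconds of data is a fixed-length ring buffer. The sketch below assumes a 128 Hz sampling rate and leaves the preprocessing and classifier hooks as commented placeholders.

```python
from collections import deque
import numpy as np

FS = 128            # Emotiv EPOC nominal sampling rate in Hz (assumption)
WINDOW_SEC = 10     # the "last 10 sec" window from this issue

# Ring buffer: appending beyond maxlen silently drops the oldest samples,
# so the buffer always holds the most recent WINDOW_SEC of data.
buffer = deque(maxlen=FS * WINDOW_SEC)

def on_new_sample(sample):
    """Call from the real-time feed for every incoming sample (one value per electrode)."""
    buffer.append(sample)
    if len(buffer) == buffer.maxlen:
        window = np.asarray(buffer)  # shape: (FS * WINDOW_SEC, n_electrodes)
        # Hypothetical hooks into the rest of the pipeline:
        # features = extract_features(window)
        # emotion = classifier.predict([features])
        return window
```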

Implement complexity measures

Use pyeeg to compute a metric of timeseries complexity for each electrode, and concatenate this with the existing feature matrix.
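
A possible sketch using pyeeg, taking its Petrosian (pfd) and Higuchi (hfd) fractal-dimension functions as the complexity metrics; both the choice of functions and the epoch layout are assumptions, not the project's settled approach.

```python
import numpy as np
import pyeeg  # the library named in this issue

def complexity_features(epoch):
    """epoch: (n_electrodes, n_samples) array for one training sample."""
    feats = []
    for channel in epoch:
        # Two time-series complexity metrics per electrode
        # (pyeeg function names assumed; Kmax is an arbitrary choice here).
        feats.append(pyeeg.pfd(channel))
        feats.append(pyeeg.hfd(channel, 8))
    return np.asarray(feats)

# Concatenating with the existing feature matrix (n_samples x n_features):
# complexity_block = np.stack([complexity_features(e) for e in epochs])
# final_features = np.concatenate([final_features, complexity_block], axis=1)
```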
