
hyperscanning2-redesign's Issues

Notes on saved list (*.pkl file) that contains inter-brain synchrony scores & indices of deleted epochs

Inter-brain synchrony scores

Save the scores into a pkl file:

import pickle

with open('list_circular_correlation_scores_all_pairs_direct_pre_no_filter.pkl', 'wb') as handle:
    pickle.dump(list_circular_correlation_direct_pre_no_filter_all, handle,
                protocol=pickle.HIGHEST_PROTOCOL)

NOTE: In the saved list (pkl file), each pair has 4 lists, in the following order:
* Theta, Alpha, Beta, and Gamma. So, for example, the first 4 lists are inter-brain synchrony scores calculated from subjects 1 and 2,
* and the 2nd set of four lists is calculated from subjects 3 and 4.

So, if there are 15 pairs, there will be 15 x 4 = 60 lists in each pkl file.
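As an illustration, here is a minimal sketch of reading the file back and picking out one pair's scores for one band; the helper get_pair_band_scores and the band list are hypothetical, not part of the repository:

import pickle

FREQ_BANDS = ["theta", "alpha", "beta", "gamma"]

with open('list_circular_correlation_scores_all_pairs_direct_pre_no_filter.pkl', 'rb') as handle:
    all_scores = pickle.load(handle)

def get_pair_band_scores(all_scores, pair_idx, band):
    # each pair occupies 4 consecutive lists, ordered theta/alpha/beta/gamma
    return all_scores[pair_idx * 4 + FREQ_BANDS.index(band)]

theta_pair_0 = get_pair_band_scores(all_scores, 0, "theta")  # subjects 1 & 2, theta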

Mixed results

NOTE:
Before we run the analysis,

  • we take the average questionnaire score for pre and post
  • then subtract pre from post (a sketch follows the commit link below)

6ef7b5b
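A hedged sketch of those two steps; the toy DataFrames and column names are assumptions, the actual code is in the commit above:

import pandas as pd

# toy pre/post questionnaire scores, one row per subject (columns = items)
df_pre = pd.DataFrame({"q1": [3, 4], "q2": [5, 2]})
df_post = pd.DataFrame({"q1": [4, 4], "q2": [6, 3]})

pre_avg = df_pre.mean(axis=1)    # average score per subject, pre
post_avg = df_post.mean(axis=1)  # average score per subject, post
diff = post_avg - pre_avg        # post minus pre, used for the analysis
print(diff)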

Correlation between SPGQ total & Direct

  • Significant correlations in:
  • coherence - theta: r = 0.6189193814333278, p-val = 0.03189176226829951
  • coherence - beta: r = 0.564254914717613, p-val = 0.05599109700636175 (marginal; p > 0.05)
  • coherence - gamma: r = 0.6105006072150732, p-val = 0.034995765473106906
  • PLV - gamma: r = 0.6538593554639102, p-val = 0.021091591502908406

A good reason why we may want to choose PLV: it does not require stationarity of the EEG signal, i.e., the stability of the core characteristics of the time series.
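For reference, a minimal sketch of how PLV is conventionally computed from two signals; this is the textbook definition, not necessarily the exact implementation used here:

import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    # PLV = |mean over time of exp(i * (phase_x(t) - phase_y(t)))|
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))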


https://academic.oup.com/scan/article/16/1-2/72/5919711?login=false

Correlational analysis of EEG and questionnaire

Analysis:

  1. Correlational analysis with EEG
    • The number of connections and the SPGQ total score
    • The number of connections and the scores of the SPGQ subscales
    • The number of connections and the Co-Presence score

NOTE: Remember that the index starts from 0 (see the indexing example after the list).

  1. total_sig_ccorr_theta_connections,
  2. total_sig_ccorr_alpha_connections,
  3. total_sig_ccorr_beta_connections,
  4. total_sig_ccorr_gamma_connections,
  5. total_sig_coh_theta_connections,
  6. total_sig_coh_alpha_connections,
  7. total_sig_coh_beta_connections,
  8. total_sig_coh_gamma_connections,
  9. total_sig_plv_theta_connections,
  10. total_sig_plv_alpha_connections,
  11. total_sig_plv_beta_connections,
  12. total_sig_plv_gamma_connections
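A minimal illustration of the zero-based indexing; the container holding the 12 lists is assumed:

bands = ["theta", "alpha", "beta", "gamma"]
measures = ["ccorr", "coh", "plv"]

# index into the 12 lists above = measures.index(m) * 4 + bands.index(b)
idx_coh_theta = measures.index("coh") * 4 + bands.index("theta")  # -> 4
idx_plv_gamma = measures.index("plv") * 4 + bands.index("gamma")  # -> 11
print(idx_coh_theta, idx_plv_gamma)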

Count significant connections and give labels

Start from commit 60bff86. It is a new file for counting significant connections and giving them labels.
Count significant connections

  • Label which electrodes are significantly correlated
  • List the electrodes from the one with the most connections to the one with the least

ToDo:

  • Get the actual score of PLV, ccorr, or coh, if it is significant
  • Get the index of that actual score, if it is significant
  • Average the actual scores for PLV, ccorr, or coh, over all connections from all pairs (see the sketch below)
  • Adjust the variable names under "Get labels of electrode pairs"; make them more meaningful and organized so that the code is easy to read in the future (IMPORTANT: git add EEG/analysis/statistical_analysis_inter-brain_connections.py)
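A hedged sketch of the first three ToDo items; the containers, toy numbers, and the 0.05 threshold are assumptions:

import numpy as np

def significant_scores(scores, pvals, alpha=0.05):
    # indices and actual scores of the significant connections
    sig_idx = [i for i, p in enumerate(pvals) if p < alpha]
    return sig_idx, [scores[i] for i in sig_idx]

# toy example: two pairs, each with three connections (score, p-value)
scores_per_pair = [[0.62, 0.41, 0.55], [0.48, 0.70, 0.33]]
pvals_per_pair = [[0.03, 0.20, 0.04], [0.06, 0.01, 0.30]]

all_sig = []
for scores, pvals in zip(scores_per_pair, pvals_per_pair):
    _, sig = significant_scores(scores, pvals)
    all_sig.extend(sig)

print(np.mean(all_sig))  # average actual score over all significant connections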

Function to calculate the percentage of looking and not looking

Do the following steps:

  • - [ ] Call this function for every pair instead of for all pairs combined
  • The result is a list that indicates whether the pair looked at each other or not throughout the experiment
  • Calculate the percentage of looking and not looking, i.e. (number of looking / len(total_looking_not_looking)) * 100, for each pair (the return value of the function; see the sketch below)
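A minimal sketch of such a function, assuming the list holds 1 for "looking" and 0 for "not looking":

def looking_percentage(total_looking_not_looking):
    # percentage of samples labelled 1 ('looking') for one pair
    number_of_looking = sum(1 for v in total_looking_not_looking if v == 1)
    return (number_of_looking / len(total_looking_not_looking)) * 100

print(looking_percentage([1, 0, 1, 1]))  # 75.0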

Change end of for loop in combine_fif_files.py

Change the end of the for loop in combine_fif_files.py here: 9aa1e7a. An example is shown in the screenshot in the original issue.

Do for

Experimental data

  • Even subjects (10 and onwards, e.g. 10, 12, 14, etc.)
  • Odd subjects (from 10 onwards, e.g. 11, 13, 15, etc.)

Baseline data

  • Even subjects (10 and onwards, e.g. 10, 12, 14, etc.)
  • Odd subjects (from 10 onwards, e.g. 11, 13, 15, etc.)

Analyze pre-processed data of eye tracker

  • 1. Populate odd subjects and even subjects into two separate DataFrames for each condition, i.e. averted_pre*, averted_post*, direct_pre*, direct_post*, natural_pre*, and natural_post*. That means for each condition, e.g. averted_pre*, there will be two DataFrames: one for odd subjects and one for even subjects.

      - [x]   This has been turned into a function

  • 2. Convert GazeDirections, which are in cartesian coordinates (x, y, z), to degrees (see "Functions to convert cartesian to polar degree") for both the right and left eyes. Put the converted values (degrees) into a new column (e.g. eye right-left), using this code. Inside it, the function gaze_direction_in_x_axis_degree(x, y) returns a degree that indicates eye movement along the x-axis (right or left), and gaze_direction_in_y_axis_degree(y, z) returns a degree that indicates eye movement along the y-axis (up or down). A hedged sketch of these helpers appears after this list.

    • There is an issue when we run it with **averted_pre_even** subjects. See the issue below.
  • 3. Check whether the values (degrees) in that column (e.g. eye right-left) are within the range of the fovea (see "Function to check whether a degree is within the fovea (30 degrees in total)"), using the function check_degree_within_fovea(gaze_direction) inside this script. If a value is within the fovea, the function returns 1, otherwise 0. Save those new values (1 or 0) into a new column, e.g. in_fovea.

  • 4. Compare the in_fovea values for the x-axis and the y-axis and check whether both of them are 1. If both are 1, put a label 1 in a new column, e.g. look_each_other, otherwise 0. Do this for both the right and left eyes, and for both the odd- and even-subject DataFrames.

  • 5. Since the sampling rate is 125 Hz (125 rows per second), we need to count how many "1"s and "0"s appear in the column look_each_other within each second (125 rows). We need a per-second threshold to determine "looking" (conscious) vs. "not looking" (unconscious), see #10. For example, if the threshold is 13 and we count 13 "1"s in look_each_other within a second (125 rows), then we create a new label 1, which indicates "really" or "scientifically" looking, otherwise 0. Put those new values (0 and 1) into a new column, e.g. sig_looking.

  • 6. Do steps 2-4 for both the odd and even subjects' DataFrames for each eye condition.

  • 7. Check whether each pair looked at each other or not. Take the value from the column in_fovea of the odd and even subject, then check whether both are 1 (it must be 1 for both of them), which indicates that they looked at each other within the same range of the fovea. If so, put a label 1 into a new column again, e.g. look_each_other, otherwise put 0, which indicates that their eyes did not meet within that second.

  • 8. Count the proportion of 1s and 0s in the column sig_looking. For example, if there are 600 "1"s in the column sig_looking, the proportion is (600 / 1800) x 100 ≈ 33%, which means that out of 120 seconds (2 minutes) their eyes met only about 33% of the time (~40 seconds per pair). Note: 1800 = 120 (seconds) x 15 pairs (odd or even).
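A minimal, hedged sketch of the helpers named in steps 2-3. The arctangent convention and the +/-15 degree half-angle (30 degrees in total) are assumptions; the actual implementations live in the linked code:

import numpy as np

def gaze_direction_in_x_axis_degree(x, y):
    # horizontal gaze angle (right/left) from a cartesian direction vector
    return np.degrees(np.arctan2(x, y))

def gaze_direction_in_y_axis_degree(y, z):
    # vertical gaze angle (up/down) from a cartesian direction vector
    return np.degrees(np.arctan2(y, z))

def check_degree_within_fovea(gaze_direction):
    # 1 if the angle falls inside the fovea (30 degrees total, i.e. +/-15), else 0
    return 1 if abs(gaze_direction) <= 15 else 0

print(check_degree_within_fovea(gaze_direction_in_x_axis_degree(0.1, 0.9)))  # 1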

Process demographic data

Use commit 56802ef.

Init code for demographic data

  • Put the data into a DataFrame
  • Plot the values of each column (see the sketch after this list)
  • Only age is numerical (float) data; the other columns are of object type
  • The data has been preprocessed from the original raw data (*.csv) so that it is easy to process
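A hedged sketch of these steps; the toy columns stand in for the real preprocessed file, whose name is not given here:

import pandas as pd
import matplotlib.pyplot as plt

# toy stand-in for the preprocessed demographic data
df = pd.DataFrame({"age": [23.0, 25.0, 31.0],
                   "gender": ["F", "M", "F"],
                   "handedness": ["R", "R", "L"]})

df["age"].plot(kind="hist", title="age")  # the only numerical (float) column
plt.show()

for col in df.select_dtypes(include="object").columns:
    df[col].value_counts().plot(kind="bar", title=col)  # object-type columns
    plt.show()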

Module of Questionnaire

Under Questionnaire

Questionnaire class

def scoring_questionnaire

(The relevant code spans two screenshots in the original issue.)

def diff_score_questionnaire_pre_post
For this, we need to provide a parameter for the column name. Tweak the code a little from
"Combine SPGQ Total score of subject 1 & 2", etc.
NOTE: the difference is computed as post minus pre.

def corr_eeg_questionnaire

from scipy.stats import pearsonr

# Pearson correlation between each set of EEG connection counts and the
# post-pre questionnaire difference scores
print("Averted")
for i in range(len(diff_averted)):
    print(f"{i}, {pearsonr(diff_averted[i], substracted_averted_non_SPGQ_total)}")

Notes: Code to compute the average significant actual score of specific connections across all pairs (stored as a list of dictionaries that share keys)

# Python3 code to demonstrate
# converting a list of dictionaries to a dictionary of value lists
# using a loop
from collections import defaultdict
import numpy as np

# initializing lists
# list_temp = [{"Gfg": 6},
#              {"Gfg": 8},
#              {"Gfg": 2},
#              {"Gfg": 12},
#              {"Gfg": 22}]

list_temp = []
a = {"a": 1}
aa = {"a": 2}
b = {"b": 1}
bb = {"b": 3}

for i in range(4):
    list_temp.append(a)
    list_temp.append(aa)
    list_temp.append(b)
    list_temp.append(bb)

# printing the original list
print("The original list : " + str(list_temp))

# loop over the dictionaries;
# defaultdict provides a default empty list for each key
res = defaultdict(list)
for sub in list_temp:
    for key in sub:
        res[key].append(sub[key])

# printing the result
print("The extracted dictionary : " + str(dict(res)))

# average of all values collected under key "a"
average_a = np.mean(res["a"])

print(f"Average score of a {average_a}")

Count average score of significant connections for each pair (combine all electrodes)

total_sig_ccorr_theta_connections, total_sig_ccorr_alpha_connections, total_sig_ccorr_beta_connections, total_sig_ccorr_gamma_connections,

total_sig_coh_theta_connections, total_sig_coh_alpha_connections, total_sig_coh_beta_connections, total_sig_coh_gamma_connections,

total_sig_plv_theta_connections, total_sig_plv_alpha_connections, total_sig_plv_beta_connections, total_sig_plv_gamma_connections
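A hedged sketch of this per-pair averaging, assuming each of the twelve containers above is a list with one dictionary per pair, mapping an electrode pair to its significant actual score:

import numpy as np

def average_score_per_pair(connection_dicts):
    # one average per pair, combining all of that pair's electrode pairs
    return [np.mean(list(d.values())) if d else np.nan for d in connection_dicts]

# toy data in the assumed structure
total_sig_ccorr_theta_connections = [{"FP1-FP2": 0.62, "C3-C4": 0.55},
                                     {"FP1-FP2": 0.48}]
print(average_score_per_pair(total_sig_ccorr_theta_connections))  # [0.585, 0.48]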

Order of using code for questionnaire data

ANCOVA for the SPGQ & Co-Presence questionnaires

  1. Calculate the total score of each SPGQ sub-scale
  2. Calculate the total score of SPGQ
  3. Calculate the total score of Co-Presence
  4. Run ANCOVA on the total score of SPGQ
  5. Run ANCOVA on the total score of Co-Presence

All of the above can be done via this code: b0c996d

The next step is #33, which is not implemented yet.

  1. Do step #33 once all actual EEG data have been calculated (see point no. 1)

Notes: To see deleted indices of EEG data

import pandas as pd

# list of index lists: the epochs deleted during pre-processing, per recording
del_list = pd.read_pickle(r"/hpc/igum002/codes/Hyperscanning2-redesign/data/EEG/pre-processed_eeg_data/list_deleted_epoch_indices_direct_pre.pkl")

# print the number of deleted epochs for each entry
for i in range(len(del_list)):
    print(len(del_list[i]))

Notes: Running permutations in several Linux screen sessions

# Activate the environment
source /hpc/igum002/environments/hyperscanning2_redesign_new/bin/activate
# Go to the folder that contains the files to run
cd /hpc/igum002/codes/Hyperscanning2-redesign/EEG/analysis/permutations/

Run time python permut_file_name.py, for example: time python permut_averted_pre.py

Screens (one per condition):

  • natural post
  • natural pre
  • direct post
  • direct pre
  • averted pre
  • averted post

Order of using code for EEG data

Pre-processing

  1. Separate EEG into baseline & experimental data using this code: 2e2b86e
  2. Combine pre and post data (for baseline and experimental separately), for all eye conditions, using this code: 432da5e
  • ToDo: Change the loop from a hard-coded 16 to however many files are available (see the sketch below)
  3. Clean the EEG data, for both baseline and experimental data, using this code: a619b0b
  • ToDo: Update the bad channels in case the data has increased / been updated
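A hedged sketch for the first ToDo; the directory and file pattern are assumptions:

import glob

# loop over however many combined .fif files exist instead of a hard-coded 16
fif_files = sorted(glob.glob("data/EEG/*.fif"))  # assumed location/pattern
for fname in fif_files:
    print(f"processing {fname}")
    # ... existing per-file processing goes here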
