ihgumilar / hyperscanning2-redesign

The hyperscanning2-redesign experiment adds a questionnaire inside VR and improves the Unity project used for the experiment.
Save the list into a pkl file:

```python
import pickle

with open('list_circular_correlation_scores_all_pairs_direct_pre_no_filter.pkl', 'wb') as handle:
    pickle.dump(list_circular_correlation_direct_pre_no_filter_all, handle,
                protocol=pickle.HIGHEST_PROTOCOL)
```
NOTE : In the saved pkl file, each pair has 4 lists, in this order: Theta, Alpha, Beta, and Gamma. For example, the first 4 lists are the inter-brain synchrony scores calculated from subjects 1 and 2, the next 4 lists are calculated from subjects 3 and 4, and so on. So if there are 15 pairs, each pkl file contains 15 x 4 = 60 lists.
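To read the scores back, a small helper can recover the per-pair, per-band layout described above. This is a sketch: `scores_for_pair` is a hypothetical name, not a function in the repo.

```python
FREQ_BANDS = ["theta", "alpha", "beta", "gamma"]

def scores_for_pair(all_scores, pair_idx):
    """Return {band: score_list} for one pair.

    The pkl file stores 4 consecutive lists per pair (theta, alpha,
    beta, gamma): pair 0 (subjects 1 & 2) covers indices 0-3,
    pair 1 (subjects 3 & 4) covers indices 4-7, and so on.
    """
    start = pair_idx * 4
    return dict(zip(FREQ_BANDS, all_scores[start:start + 4]))

# Dummy structure with the documented shape: 15 pairs x 4 bands = 60 lists
dummy = [[pair, band] for pair in range(15) for band in range(4)]
assert len(dummy) == 60
print(scores_for_pair(dummy, 1)["theta"])  # → [1, 0], the 5th list, belonging to pair 1
```

The same indexing works on the list loaded with `pickle.load` from the file saved above.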
Please refer to #32 to get started or develop from there
Start working here 0375c4d
There is a cell that already contains some ideas
ToDo
EEG-analysis
Done
05016c7 has been adjusted to the current data, but there is still an error
NOTE : Before we run the analysis:
Correlation between SPGQ total & Direct
A good reason why we may want to choose PLV: it does not require stationarity of the EEG, i.e. the stability of the core characteristics of the time series
https://academic.oup.com/scan/article/16/1-2/72/5919711?login=false
Analysis :
NOTE : Remember that the index starts from 0
60bff86 start from this commit. It is a new file for counting significant connections and giving them labels
Count significant connections
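As a sketch of the counting step (the function name and the p-value matrix input are assumptions for illustration, not the repo's actual API):

```python
import numpy as np

def count_significant_connections(p_values, alpha=0.05):
    """Count inter-brain channel pairs whose p-value is below alpha.

    p_values: 2-D array (channels of subject 1 x channels of subject 2)
    of permutation p-values for one frequency band. Returns the count
    and the channel-pair indices, which can then be given labels.
    """
    p_values = np.asarray(p_values)
    sig_mask = p_values < alpha
    # Indices of the significant connections, useful for labelling
    sig_pairs = [(int(i), int(j)) for i, j in zip(*np.where(sig_mask))]
    return int(sig_mask.sum()), sig_pairs

p = np.array([[0.01, 0.20],
              [0.04, 0.80]])
n_sig, pairs = count_significant_connections(p)
# n_sig == 2; the significant pairs are (0, 0) and (1, 0)
```

Repeating this per band and per metric (ccorr, coh, plv) yields the `total_sig_*_connections` lists mentioned further below.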
ToDo :
Do the following steps
(number_of_looking / len(total_looking_not_looking)) * 100

for each pair (the return value of the function). Has been integrated into HyPyP.
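The formula above can be wrapped in a small helper. `looking_percentage` is a hypothetical name; `total_looking_not_looking` is assumed to be the list of per-second 0/1 labels produced by the thresholding step.

```python
def looking_percentage(total_looking_not_looking):
    """Percentage of one-second windows labelled as 'looking' (value 1)."""
    number_of_looking = total_looking_not_looking.count(1)
    return (number_of_looking / len(total_looking_not_looking)) * 100

print(looking_percentage([1, 0, 1, 1]))  # → 75.0
```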
Change end for loop in combine_fif_files.py
here 9aa1e7a
This is the example
Do for
Experimental data
Baseline data
1. Populate odd subjects and even subjects into separate dataFrames for all conditions, i.e. averted_pre*, averted_post*, direct_pre*, direct_post*, natural_pre*, and natural_post*. This means that for each condition, e.g. averted_pre*, there will be two dataFrames: one for odd subjects and one for even subjects.
- [x] This has been turned into a function
2. Convert GazeDirections, which are in cartesian coordinates (x, y, z), to degrees, see Functions to convert cartesian to polar degree, for both Right and Left eyes. Put the converted values (degrees) into a new column (e.g. eye right-left), by using this code. Inside it, the function gaze_direction_in_x_axis_degree(x, y) returns a degree indicating eye movement along the x-axis (right or left). The function gaze_direction_in_y_axis_degree(y, z) returns a degree indicating eye movement along the y-axis (up or down).
**averted_pre_even** subject. See the issue below.

3. Check whether the values (degrees) under the new column (e.g. eye right-left) are within the range of the fovea, see Function to check whether a degree is within the fovea (30 degrees in total), and use the function check_degree_within_fovea(gaze_direction) inside this script. If a value is within the fovea, the result is 1, otherwise 0. Save these new values (1 or 0) into a new column, e.g. in_fovea.
4. Compare the values of in_fovea for the x-axis and y-axis and check whether both of them are 1. If both are 1, give the label 1 in a new column, e.g. look_each_other, otherwise 0. Do this for both right and left eyes, and for both the odd- and even-subject dataFrames.
5. Since the sampling rate is 125 per second (125 rows for each second), we need to count how many "1"s and "0"s appear in the column look_each_other within each second (125 rows). We need a threshold within a second to determine "looking" (conscious) vs "not looking" (unconscious), see #10. For example, if the threshold is 13 and we count 13 "1"s in the column look_each_other within a second (125 rows), then we create a new label 1 which indicates "really" or "scientifically" looking, otherwise 0. Put these new values (0 and 1) into a new column, e.g. sig_looking.
6. Do steps 2-4 for both odd- and even-subject dataFrames, for each eye condition.
7. Check whether each pair looked at each other or not. Take the values from the column in_fovea of the odd and the even subject, then check whether both are 1 (it must be 1 for both of them), which indicates that they looked at each other within the same foveal range. If so, give the label 1 and put it into a new column again, e.g. look_each_other; otherwise give the label 0, which indicates that their eyes did not meet within that second.
8. Count the proportion of values 1 and 0 in the column sig_looking. For example, if there are 600 "1"s in the column sig_looking, then the proportion is (600 / 1800) x 100 ≈ 33%. This means that out of 120 seconds (2 minutes), their eyes met only about 33% of the time (~40 seconds). Note: 1800 = 120 (seconds) x 15 subjects (odd or even).
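Steps 2-5 above can be sketched as follows. The angle formula and the constants are assumptions for illustration; the canonical implementations are the repo's functions of the same names.

```python
import math

FOVEA_HALF_ANGLE = 15.0   # the fovea spans 30 degrees in total
SAMPLING_RATE = 125       # rows per second
LOOKING_THRESHOLD = 13    # min "1"s per second to count as looking (see #10)

def gaze_direction_in_x_axis_degree(x, y):
    # Assumed mapping from two gaze-vector components to a horizontal
    # angle; the real conversion lives in the repo's helper of this name.
    return math.degrees(math.atan2(x, y))

def check_degree_within_fovea(gaze_direction):
    # Step 3: 1 if the angle falls inside the 30-degree fovea, else 0
    return 1 if abs(gaze_direction) <= FOVEA_HALF_ANGLE else 0

def label_sig_looking(look_each_other, threshold=LOOKING_THRESHOLD):
    """Step 5: collapse per-sample 0/1 labels into per-second labels."""
    labels = []
    for start in range(0, len(look_each_other), SAMPLING_RATE):
        second = look_each_other[start:start + SAMPLING_RATE]
        labels.append(1 if sum(second) >= threshold else 0)
    return labels

# Two seconds of data: 20 matched samples in the first second, none after
samples = [1] * 20 + [0] * 105 + [0] * 125
print(label_sig_looking(samples))  # → [1, 0]
```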
56802ef Use this commit
Init code for demographic data
339f7fe start from here. It is able to visualize
Under Questionnaire
Questionnaire class
def scoring_questionnnaire
Keep going down below until
def diff_score_questionnaire_pre_post
For this, we need to provide a parameter for the column name. Tweak the code a little bit from
Combine SPGQ Total score of subject1 & 2, etc..
NOTE : With subtraction of post and pre
def corr_eeg_questionnaire
```python
from scipy.stats import pearsonr

print("Averted")
for i in range(len(diff_averted)):
    print(f"{i}, {pearsonr(diff_averted[i], substracted_averted_non_SPGQ_total)}")
```
428592f In progress. Now it is still in Direct pre (experimental data)
EEG-pre-processing
Done
Pre-processing is here d95cf3e
```python
# Python3 code to demonstrate converting a list of dictionaries
# to a dictionary of value lists, using a loop
from collections import defaultdict

import numpy as np

# initializing list
# list_temp = [{"Gfg" : 6},
#              {"Gfg" : 8},
#              {"Gfg" : 2},
#              {"Gfg" : 12},
#              {"Gfg" : 22}]
list_temp = []
a = {"a": 1}
aa = {"a": 2}
b = {"b": 1}
bb = {"b": 3}
for i in range(4):
    list_temp.append(a)
    list_temp.append(aa)
    list_temp.append(b)
    list_temp.append(bb)

# printing original list
print("The original list : " + str(list_temp))

# using a loop to regroup values by key;
# defaultdict provides a default empty list for each key
res = defaultdict(list)
for sub in list_temp:
    for key in sub:
        res[key].append(sub[key])

# printing result
print("The extracted dictionary : " + str(dict(res)))

average_a = np.mean(res["a"])
print(f"Average score of a {average_a}")
```
eye-tracker-pre-processing
Done
fbb72dd it is done here
Done
```python
total_sig_ccorr_theta_connections, total_sig_ccorr_alpha_connections, total_sig_ccorr_beta_connections, total_sig_ccorr_gamma_connections,
total_sig_coh_theta_connections, total_sig_coh_alpha_connections, total_sig_coh_beta_connections, total_sig_coh_gamma_connections,
total_sig_plv_theta_connections, total_sig_plv_alpha_connections, total_sig_plv_beta_connections, total_sig_plv_gamma_connections
```

All of the above can be done via this code: b0c996d
```python
import pandas as pd

del_list = pd.read_pickle(r"/hpc/igum002/codes/Hyperscanning2-redesign/data/EEG/pre-processed_eeg_data/list_deleted_epoch_indices_direct_pre.pkl")
for i in range(len(del_list)):
    print(len(del_list[i]))
```
This is needed as an input for separating the baseline and experimental data from the raw csv files (26 csv files):
[1,2,1,2,1,2,3,4,3,4,3,4,5,6,5,6,5,6,7,8,7,8,7,8,9,10]
Loop
** Activate the environment:

```shell
source /hpc/igum002/environments/hyperscanning2_redesign_new/bin/activate
```

** Go to the folder that contains the file to run, then run it:

```shell
cd /hpc/igum002/codes/Hyperscanning2-redesign/EEG/analysis/permutations/
time python permut_file_name.py
```

For example: `time python permut_averted_pre.py` (the output appears on screen).
Analysis :
1.1. Correlational analysis with EEG
b. Plot for total scores
c. Plot for each questionnaire or subsection
Actually we need a repeated-measures ANOVA, NOT a t-test
856b6c6
Compare total score of SPGQ pre vs post
Need to add standard deviation score for each t-test
T-test between subscale of SPGQ (pre vs post)
Correlate CoPresence and SPGQ
Correlate CoPresence and subscales of SPGQ
It is currently separated
Up to here 2ffae82
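Since the comparisons above call for a repeated-measures ANOVA rather than a t-test, a minimal self-contained version (assuming balanced data with no missing cells; the function name and the example scores are hypothetical) could look like:

```python
import numpy as np
from scipy import stats

def rm_anova(data):
    """One-way repeated-measures ANOVA.

    data: array of shape (n_subjects, n_conditions), e.g. SPGQ total
    scores with one column for pre and one for post. Returns (F, p).
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    # Partition the total sum of squares into condition, subject, and error
    ss_conditions = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
    ss_subjects = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_conditions - ss_subjects
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f_stat = (ss_conditions / df_cond) / (ss_error / df_error)
    p_value = stats.f.sf(f_stat, df_cond, df_error)
    return f_stat, p_value

# Hypothetical pre/post SPGQ totals for 5 subjects
scores = np.array([[10, 14], [12, 15], [9, 13], [11, 12], [13, 18]])
f_stat, p_value = rm_anova(scores)
print(f_stat, p_value)
```

With only two conditions (pre vs post), this F equals the square of the paired t-statistic, so the two tests agree; the ANOVA generalises to three or more conditions.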
See this code for permutation
It is better to save the data after it has been pre-processed and epoched, then run the code above or integrate it.
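The permutation idea can be sketched as follows; `permutation_pvalue` is a hypothetical helper showing the general scheme, not the logic of the repo's permut_*.py scripts.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=5000, seed=0):
    """Two-sided permutation test for a correlation between two series.

    Shuffles y relative to x to build a null distribution of |r| and
    compares the observed |r| against it.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    observed = abs(np.corrcoef(x, y)[0, 1])
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = abs(np.corrcoef(x, rng.permutation(y))[0, 1])
    # +1 in numerator and denominator so p is never exactly 0
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# Example: a strong linear relationship yields a small p-value
x = np.arange(20.0)
p = permutation_pvalue(x, 2 * x + 1, n_perm=500, seed=42)
print(p)
```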
eye-tracker-analysis
Done
Right now it is in separate files
339f7fe start from here
Progress
Assign bad channels for each eye condition