
data-management's People

Contributors: alexfoias, jcohenadad, kousu, mathieuboudreau, taowa, valosekj

data-management's Issues

PR: `af/add_karo_lesion`

Here is my Terminal activity: http://showterm.io/520cb08eeccf3bae8b24a

A few comments:
I followed the procedure listed here https://github.com/neuropoly/data-management/blob/master/internal-server.md#reviewing-pull-requests, but the file contents were not present (only the symlinks):

bash-3.2$ ls -la
total 16
drwxr-xr-x  6 julien  staff  192 25 Mar 15:44 .
drwxr-xr-x  3 julien  staff   96 25 Mar 15:44 ..
-rw-r--r--  1 julien  staff    0 25 Mar 15:44 sub-karo2123_T2star.json
-rwxr-xr-x  1 julien  staff  105 25 Mar 15:44 sub-karo2123_T2star.nii.gz
-rw-r--r--  1 julien  staff    0 25 Mar 15:44 sub-karo2123_acq-sagcerv_T2star.json
-rwxr-xr-x  1 julien  staff  105 25 Mar 15:44 sub-karo2123_acq-sagcerv_T2star.nii.gz
bash-3.2$ git-annex whereis sub-karo2123_T2star.nii.gz
whereis sub-karo2123_T2star.nii.gz (2 copies) 
  	36f66d50-a1cd-4c21-89ad-0d227fe4c757 -- [email protected]:~/data/sct-testing-large
   	6c8420e2-ee60-4383-96ba-cb43ef3c5611 -- [email protected]:~/repositories/datasets/sct-testing-large.git [origin]
ok
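
Note: at this point the working tree only has the annex pointers; the contents still have to be fetched before they can be reviewed. A possible way to materialize them (a sketch):

# fetch the annexed content for one file under review
git annex get sub-karo2123_T2star.nii.gz
# or everything under the current directory:
git annex get .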

'(merging origin/git-annex into git-annex...)'

There's a performance issue: git-annex tracks metadata about itself in a special `git-annex` branch, so when different users make edits, that branch needs to be merged.

nguenther@data:~/datasets/sct-testing-large$ git annex sync --content
commit 
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean
ok
pull origin 
Enter passphrase for key '/home/nguenther/.ssh/id_ed25519': 
remote: Enumerating objects: 42045, done.
remote: Counting objects: 100% (42045/42045), done.
remote: Compressing objects: 100% (5215/5215), done.
remote: Total 29096 (delta 22794), reused 29096 (delta 22794), pack-reused 0
Receiving objects: 100% (29096/29096), 2.24 MiB | 17.35 MiB/s, done.
Resolving deltas: 100% (22794/22794), completed with 5651 local objects.
From data.neuro.polymtl.ca:datasets/sct-testing-large
   30927e2ad..4f4d0d73b  master           -> origin/master
   686d50741..168320867  git-annex        -> origin/git-annex
   686d50741..168320867  synced/git-annex -> origin/synced/git-annex
   30927e2ad..4f4d0d73b  synced/master    -> origin/synced/master

Updating 30927e2ad..4f4d0d73b
Fast-forward
 sub-user0014/anat/sub-user0014_T2w.json | 7 +++++++
 1 file changed, 7 insertions(+)
 create mode 100755 sub-user0014/anat/sub-user0014_T2w.json

Already up to date.
ok
(merging origin/git-annex into git-annex...) ## this takes about 30s

Why is this so slow!
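
To see how much metadata churn that merge step is chewing through, something like this might help (a sketch, run inside the clone):

git log --oneline git-annex | head -20   # recent commits on the metadata branch
git annex merge                          # redoes just the branch merge, useful for timing it in isolation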

datalad status slow to run on a remote workstation

On my Mac, via VPN, it took about 1 min to run this command on the dummy dataset:

julien-macbook:/Volumes/sct_testing/test/Datalad-dummy_dataset $ datalad status

Given this dataset is only 5MB, how long would it take with a 5GB dataset?
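
It would be worth isolating where the time goes; a sketch, assuming the commands are run from inside the dataset (sub-01 is a hypothetical subdirectory):

time git status              # baseline: plain git over the same mount
time datalad status          # the slow case
time datalad status sub-01   # restricting the path may help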

Set up and document workflow for contributing to the private datasets

This is a WIP -- suggestions welcome

  • Set up two roles: CREATORS | WRITERS
  • WRITERS:
    • can create new branches (e.g. to submit manual segmentations, add datasets)
    • cannot write to master

Typical workflows:

  • Upload manual segmentations
    • User in the WRITERS group would make the modification locally
    • Would create a new branch and commit/push to that branch
    • Someone in CREATORS reviews the proposal and eventually accepts it (a possible gitolite rule set is sketched below)
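
A minimal sketch of how the roles could look in gitolite's config (group membership and the dev/ branch convention here are assumptions, not the real server setup):

@creators = jcohenadad kousu
@writers  = alexfoias taowa valosekj

repo datasets/..*
    C           = @creators            # only CREATORS can create repos
    RW+ master  = @creators            # only CREATORS can write master
    RW  dev/    = @creators @writers   # WRITERS push review branches under dev/
    R           = @creators @writers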

Backup strategies:

  • In case a PR is wrongly merged, we would need a mirror of the sensitive repository somewhere else, e.g. UNF.

"warning: There are too many unreachable loose objects; run 'git prune' to remove them."

Currently on git+ssh://data.neuro.polymtl.ca:datasets/sct-testing-large.git:

nguenther@data:~/datasets/sct-testing-large$ git annex sync
commit 
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

nothing to commit, working tree clean
ok
pull origin 
Enter passphrase for key '/home/nguenther/.ssh/id_ed25519': 
ok
push origin 
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 305 bytes | 305.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0), pack-reused 0
remote: warning: The last gc run reported the following. Please correct the root cause
remote: and remove gc.log.
remote: Automatic cleanup will not be performed until the file is removed.
remote: 
remote: warning: There are too many unreachable loose objects; run 'git prune' to remove them.
remote: 
To data.neuro.polymtl.ca:datasets/sct-testing-large.git
   f8ce8cf22..30927e2ad  master -> synced/master
ok

I don't know what this is about. I've never seen git complain about this before. I assume this is somehow git-annex's fault.
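
If it recurs, the fix the warning itself suggests would be run server-side, inside the bare repo (a sketch; the path is assumed from the fstab notes elsewhere in this tracker):

cd /srv/git/repositories/datasets/sct-testing-large.git
rm gc.log     # clear the stale gc report so automatic gc can run again
git prune     # remove the unreachable loose objects, as the warning suggests
git gc        # repack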

data.neuro.polymtl.ca november 2020 outage

2020-11-14

On November 14th the server data.neuro.polymtl.ca went down and did not come back up until December 2nd. I believe it was specifically 2am November 14th, the scheduled time for unattended-upgrades, but I haven't totally confirmed that.

Here are the last messages I received from the server; I'm not sure why there are two of them, they look like they're both part of the same upgrade:

Return-Path: [email protected]
Delivered-To: [email protected]
Received: from data.neuro.polymtl.ca (donnees.neuro.polymtl.ca [132.207.65.204])
	by comms.kousu.ca (OpenSMTPD) with ESMTPS id a5d0bd6b (TLSv1.2:ECDHE-RSA-AES256-GCM-SHA384:256:NO)
	for <[email protected]>;
	Fri, 13 Nov 2020 11:36:36 +0000 (UTC)
Received: from localhost (data.neuro.polymtl.ca [local])
	by data.neuro.polymtl.ca (OpenSMTPD) with ESMTPA id bc0570e4
	for <root@localhost>;
	Fri, 13 Nov 2020 11:36:35 +0000 (UTC)
Date: Fri, 13 Nov 2020 06:36:35 -0500 (EST)
Subject: unattended-upgrades result for data.neuro.polymtl.ca: SUCCESS
From: [email protected]
To: root@localhost
Auto-Submitted: auto-generated
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Message-ID: <[email protected]>
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4

Unattended upgrade result: All upgrades installed

Packages that were upgraded:
 apport intel-microcode libmaxminddb0 python3-apport
 python3-problem-report

Package installation log:
Log started: 2020-11-13  06:35:41
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../python3-problem-report_2.20.11-0ubuntu50.1_all.deb ...
Unpacking python3-problem-report (2.20.11-0ubuntu50.1) over (2.20.11-0ubuntu50) ...
Setting up python3-problem-report (2.20.11-0ubuntu50.1) ...
Log ended: 2020-11-13  06:36:03

Log started: 2020-11-13  06:36:03
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../intel-microcode_3.20201110.0ubuntu0.20.10.2_amd64.deb ...
Unpacking intel-microcode (3.20201110.0ubuntu0.20.10.2) over (3.20201110.0ubuntu0.20.10.1) ...
Setting up intel-microcode (3.20201110.0ubuntu0.20.10.2) ...
update-initramfs: deferring update (trigger activated)
intel-microcode: microcode will be updated at next boot
Processing triggers for initramfs-tools (0.137ubuntu12) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-1012-azure
Log ended: 2020-11-13  06:36:18

Log started: 2020-11-13  06:36:18
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../libmaxminddb0_1.4.2-0ubuntu1.20.10.1_amd64.deb ...
Unpacking libmaxminddb0:amd64 (1.4.2-0ubuntu1.20.10.1) over (1.4.2-0ubuntu1) ...
Setting up libmaxminddb0:amd64 (1.4.2-0ubuntu1.20.10.1) ...
Processing triggers for man-db (2.9.3-2) ...
Processing triggers for libc-bin (2.32-0ubuntu3) ...
Log ended: 2020-11-13  06:36:21

Log started: 2020-11-13  06:36:22
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../python3-apport_2.20.11-0ubuntu50.1_all.deb ...
Unpacking python3-apport (2.20.11-0ubuntu50.1) over (2.20.11-0ubuntu50) ...
Preparing to unpack .../apport_2.20.11-0ubuntu50.1_all.deb ...
Unpacking apport (2.20.11-0ubuntu50.1) over (2.20.11-0ubuntu50) ...
Setting up python3-apport (2.20.11-0ubuntu50.1) ...
Setting up apport (2.20.11-0ubuntu50.1) ...
apport-autoreport.service is a disabled or a static unit, not starting it.
Processing triggers for systemd (246.6-1ubuntu1) ...
Processing triggers for man-db (2.9.3-2) ...
Processing triggers for ureadahead (0.100.0-21) ...
Log ended: 2020-11-13  06:36:33



Unattended-upgrades log:
Starting unattended upgrades script
Allowed origins are: o=Ubuntu,a=groovy, o=Ubuntu,a=groovy-security, o=UbuntuESMApps,a=groovy-apps-security, o=UbuntuESM,a=groovy-infra-security, o=Ubuntu,a=groovy-updates, o=Ubuntu,a=groovy-backports
Initial blacklist:
Initial whitelist (not strict):
Packages that will be upgraded: apport intel-microcode libmaxminddb0 python3-apport python3-problem-report
Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
All upgrades installed
Return-Path: [email protected]
Delivered-To: [email protected]
Received: from data.neuro.polymtl.ca (data.neuro.polymtl.ca [132.207.65.204])
	by comms.kousu.ca (OpenSMTPD) with ESMTPS id 6e6ee5f1 (TLSv1.2:ECDHE-RSA-AES256-GCM-SHA384:256:NO)
	for <[email protected]>;
	Thu, 12 Nov 2020 11:40:24 +0000 (UTC)
Received: from localhost (data.neuro.polymtl.ca [local])
	by data.neuro.polymtl.ca (OpenSMTPD) with ESMTPA id a3c085a3
	for <root@localhost>;
	Thu, 12 Nov 2020 11:40:23 +0000 (UTC)
Date: Thu, 12 Nov 2020 06:40:23 -0500 (EST)
Subject: [reboot required] unattended-upgrades result for data.neuro.polymtl.ca: SUCCESS
From: [email protected]
To: root@localhost
Auto-Submitted: auto-generated
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Message-ID: <[email protected]>
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4

Unattended upgrade result: All upgrades installed

Warning: A reboot is required to complete this upgrade, or a previous one.

Packages that were upgraded:
 intel-microcode linux-azure linux-cloud-tools-azure
 linux-cloud-tools-common linux-headers-azure linux-image-azure
 linux-tools-azure linux-tools-common

Package installation log:
Log started: 2020-11-12  06:38:44
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Selecting previously unselected package linux-modules-5.8.0-1012-azure.
Preparing to unpack .../00-linux-modules-5.8.0-1012-azure_5.8.0-1012.13_amd64.deb ...
Unpacking linux-modules-5.8.0-1012-azure (5.8.0-1012.13) ...
Selecting previously unselected package linux-image-5.8.0-1012-azure.
Preparing to unpack .../01-linux-image-5.8.0-1012-azure_5.8.0-1012.13_amd64.deb ...
Unpacking linux-image-5.8.0-1012-azure (5.8.0-1012.13) ...
Preparing to unpack .../02-linux-azure_5.8.0.1012.12_amd64.deb ...
Unpacking linux-azure (5.8.0.1012.12) over (5.8.0.1011.11) ...
Preparing to unpack .../03-linux-image-azure_5.8.0.1012.12_amd64.deb ...
Unpacking linux-image-azure (5.8.0.1012.12) over (5.8.0.1011.11) ...
Selecting previously unselected package linux-azure-headers-5.8.0-1012.
Preparing to unpack .../04-linux-azure-headers-5.8.0-1012_5.8.0-1012.13_all.deb ...
Unpacking linux-azure-headers-5.8.0-1012 (5.8.0-1012.13) ...
Selecting previously unselected package linux-headers-5.8.0-1012-azure.
Preparing to unpack .../05-linux-headers-5.8.0-1012-azure_5.8.0-1012.13_amd64.deb ...
Unpacking linux-headers-5.8.0-1012-azure (5.8.0-1012.13) ...
Preparing to unpack .../06-linux-headers-azure_5.8.0.1012.12_amd64.deb ...
Unpacking linux-headers-azure (5.8.0.1012.12) over (5.8.0.1011.11) ...
Selecting previously unselected package linux-azure-tools-5.8.0-1012.
Preparing to unpack .../07-linux-azure-tools-5.8.0-1012_5.8.0-1012.13_amd64.deb ...
Unpacking linux-azure-tools-5.8.0-1012 (5.8.0-1012.13) ...
Selecting previously unselected package linux-tools-5.8.0-1012-azure.
Preparing to unpack .../08-linux-tools-5.8.0-1012-azure_5.8.0-1012.13_amd64.deb ...
Unpacking linux-tools-5.8.0-1012-azure (5.8.0-1012.13) ...
Preparing to unpack .../09-linux-tools-azure_5.8.0.1012.12_amd64.deb ...
Unpacking linux-tools-azure (5.8.0.1012.12) over (5.8.0.1011.11) ...
Selecting previously unselected package linux-azure-cloud-tools-5.8.0-1012.
Preparing to unpack .../10-linux-azure-cloud-tools-5.8.0-1012_5.8.0-1012.13_amd64.deb ...
Unpacking linux-azure-cloud-tools-5.8.0-1012 (5.8.0-1012.13) ...
Selecting previously unselected package linux-cloud-tools-5.8.0-1012-azure.
Preparing to unpack .../11-linux-cloud-tools-5.8.0-1012-azure_5.8.0-1012.13_amd64.deb ...
Unpacking linux-cloud-tools-5.8.0-1012-azure (5.8.0-1012.13) ...
Preparing to unpack .../12-linux-cloud-tools-azure_5.8.0.1012.12_amd64.deb ...
Unpacking linux-cloud-tools-azure (5.8.0.1012.12) over (5.8.0.1011.11) ...
Setting up linux-modules-5.8.0-1012-azure (5.8.0-1012.13) ...
Setting up linux-azure-cloud-tools-5.8.0-1012 (5.8.0-1012.13) ...
Setting up linux-azure-headers-5.8.0-1012 (5.8.0-1012.13) ...
Setting up linux-azure-tools-5.8.0-1012 (5.8.0-1012.13) ...
Setting up linux-image-5.8.0-1012-azure (5.8.0-1012.13) ...
I: /boot/vmlinuz is now a symlink to vmlinuz-5.8.0-1012-azure
I: /boot/initrd.img is now a symlink to initrd.img-5.8.0-1012-azure
Setting up linux-cloud-tools-5.8.0-1012-azure (5.8.0-1012.13) ...
Setting up linux-headers-5.8.0-1012-azure (5.8.0-1012.13) ...
Setting up linux-tools-5.8.0-1012-azure (5.8.0-1012.13) ...
Setting up linux-headers-azure (5.8.0.1012.12) ...
Setting up linux-image-azure (5.8.0.1012.12) ...
Setting up linux-tools-azure (5.8.0.1012.12) ...
Setting up linux-cloud-tools-azure (5.8.0.1012.12) ...
Setting up linux-azure (5.8.0.1012.12) ...
Processing triggers for linux-image-5.8.0-1012-azure (5.8.0-1012.13) ...
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-5.8.0-1012-azure
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.8.0-1012-azure
Found initrd image: /boot/initrd.img-5.8.0-1012-azure
Found linux image: /boot/vmlinuz-5.8.0-1011-azure
Found initrd image: /boot/initrd.img-5.8.0-1011-azure
Adding boot menu entry for UEFI Firmware Settings
done
[master 3c027c9] committing changes in /etc made by "/usr/bin/python3 /usr/bin/unattended-upgrade"
 1 file changed, 37 insertions(+), 55 deletions(-)
 rewrite apt/apt.conf.d/01autoremove-kernels (83%)
Log ended: 2020-11-12  06:39:44

Log started: 2020-11-12  06:39:45
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../linux-cloud-tools-common_5.8.0-28.30_all.deb ...
Unpacking linux-cloud-tools-common (5.8.0-28.30) over (5.8.0-26.27) ...
Setting up linux-cloud-tools-common (5.8.0-28.30) ...
Processing triggers for ureadahead (0.100.0-21) ...
Processing triggers for man-db (2.9.3-2) ...
Log ended: 2020-11-12  06:39:56

Log started: 2020-11-12  06:39:57
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../intel-microcode_3.20201110.0ubuntu0.20.10.1_amd64.deb ...
Unpacking intel-microcode (3.20201110.0ubuntu0.20.10.1) over (3.20200609.0ubuntu0.20.04.2) ...
Setting up intel-microcode (3.20201110.0ubuntu0.20.10.1) ...
update-initramfs: deferring update (trigger activated)
intel-microcode: microcode will be updated at next boot
Processing triggers for initramfs-tools (0.137ubuntu12) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-1012-azure
Log ended: 2020-11-12  06:40:07

Log started: 2020-11-12  06:40:08
apt-listchanges: Reading changelogs...
apt-listchanges: Reading changelogs...
Preparing to unpack .../linux-tools-common_5.8.0-28.30_all.deb ...
Unpacking linux-tools-common (5.8.0-28.30) over (5.8.0-26.27) ...
Setting up linux-tools-common (5.8.0-28.30) ...
Processing triggers for man-db (2.9.3-2) ...
Log ended: 2020-11-12  06:40:22



Unattended-upgrades log:
Starting unattended upgrades script
Allowed origins are: o=Ubuntu,a=groovy, o=Ubuntu,a=groovy-security, o=UbuntuESMApps,a=groovy-apps-security, o=UbuntuESM,a=groovy-infra-security, o=Ubuntu,a=groovy-updates, o=Ubuntu,a=groovy-backports
Initial blacklist:
Initial whitelist (not strict):
Packages that will be upgraded: intel-microcode linux-azure linux-cloud-tools-azure linux-cloud-tools-common linux-headers-azure linux-image-azure linux-tools-azure linux-tools-common
Writing dpkg log to /var/log/unattended-upgrades/unattended-upgrades-dpkg.log
All upgrades installed

After the reboot it was inaccessible.

2020-11-17

Here's a screenshot of the boot console (sorry for not transcribing it for accessibility):

[boot console screenshot]

Basically, it seems that /dev/sdb, the terabyte storage disk that was recently added to the system, has become corrupted or inaccessible.

Since I put this disk directly into /etc/fstab, the boot is now broken.
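
(In hindsight, a nofail mount option would have let the machine boot even with the disk dead. A sketch of the fstab line, with the filesystem type assumed from the e2fsck runs below:)

/dev/sdb1  /srv/git/repositories  ext4  defaults,nofail  0  2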

2020-12-02

Finally we were able to get together yesterday with Jean-Sébastien Décarie to investigate the server.

Recovering access

An immediate stumbling block was that no one knew the root password. The Ubuntu installer set up an account with sudo rights, and the root password was never recorded. In normal operation that's fine, maybe even desirable, but the systemd rescue shell insists on asking for the root password.

  1. Attempted to follow https://linuxconfig.org/recover-reset-forgotten-linux-root-password (which recommends booting with init=/bin/bash instead of init=/sbin/init), but it (and variations on it) just led to a hung server. Jean-Sébastien found an Ubuntu-specific guide, but I don't know the link, and anyway it wasn't any more informative.
  2. Boot with Ubuntu installer .iso that was used to install the system in the first place.
  3. Open a Terminal
  4. sudo mkdir -p /mnt/root && sudo mount /dev/sda2 /mnt/root
  5. sudo vi /mnt/root/etc/fstab # -> comment out the line for /srv/git/repositories
  6. Reboot

Ensuring future access:

  1. Locally: xkcdpass | pass insert [email protected] (or equivalent password manager)
  2. Remotely: sudo passwd root and input the new password [email protected]
  3. Give the root password to @jcohenadad
  4. Give the root password to @alexfoias

Debugging 1TB storage disk

Before moving on, I wanted to investigate what's wrong with the storage disk, to see if we can recover it and maybe understand what went wrong so we can avoid it.

One thing to note: the VM is running on Microsoft Hyper-V, and the attached disk is a physical 1TB drive in passthrough mode; it's not a virtual disk.

  1. Basic reconnaissance
root@data:/home/nguenther# mount /dev/sdb1 /srv/git/repositories
mount: /srv/git/repositories: can't read superblock on /dev/sdb1.

That's not good :/

root@data:/home/nguenther# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.36).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdb: 1 TiB, 1099511627776 bytes, 2147483648 sectors
Disk model: 2145            
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: F39B8299-4E4E-8C4B-96BF-758F07539380

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 2147483614 2147481567 1024G Linux filesystem

The partition table looks okay.

root@data:/home/nguenther# e2fsck /dev/sdb1
e2fsck 1.45.6 (20-Mar-2020)
neuropoly-data: recovering journal
e2fsck: Input/output error while recovering journal of neuropoly-data
e2fsck: unable to set superblock flags on neuropoly-data


neuropoly-data: ********** WARNING: Filesystem still has errors **********
  2. Digging into errors

During that fsck attempt:

root@data:/home/nguenther# dmesg
[...]
[  798.481918] blk_update_request: I/O error, dev sdb, sector 2888 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[  798.482003] buffer_io_error: 2697 callbacks suppressed
[  798.482006] Buffer I/O error on dev sdb1, logical block 840, lost async page write
[  798.482064] Buffer I/O error on dev sdb1, logical block 841, lost async page write
[  798.482120] Buffer I/O error on dev sdb1, logical block 842, lost async page write
[  798.482196] Buffer I/O error on dev sdb1, logical block 843, lost async page write
[  798.482252] Buffer I/O error on dev sdb1, logical block 844, lost async page write
[  798.482318] Buffer I/O error on dev sdb1, logical block 845, lost async page write
[  798.482375] Buffer I/O error on dev sdb1, logical block 846, lost async page write
[  798.482430] Buffer I/O error on dev sdb1, logical block 847, lost async page write
[  798.482505] sd 0:0:0:2: [sdb] tag#311 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.482509] sd 0:0:0:2: [sdb] tag#311 CDB: Write(10) 2a 00 0d 80 09 00 00 00 08 00
[  798.482512] blk_update_request: I/O error, dev sdb, sector 226494720 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[  798.482589] Buffer I/O error on dev sdb1, logical block 226492672, lost async page write
[  798.482647] Buffer I/O error on dev sdb1, logical block 226492673, lost async page write
[  798.482716] sd 0:0:0:2: [sdb] tag#310 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.482719] sd 0:0:0:2: [sdb] tag#310 CDB: Write(10) 2a 00 00 01 2d 08 00 00 08 00
[  798.482722] blk_update_request: I/O error, dev sdb, sector 77064 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[  798.482806] sd 0:0:0:2: [sdb] tag#309 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.482809] sd 0:0:0:2: [sdb] tag#309 CDB: Write(10) 2a 00 00 00 09 08 00 00 08 00
[  798.482811] blk_update_request: I/O error, dev sdb, sector 2312 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[  798.482956] sd 0:0:0:2: [sdb] tag#308 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.482959] sd 0:0:0:2: [sdb] tag#308 CDB: Write(10) 2a 00 0d c0 1e a0 00 04 00 00
[  798.482962] blk_update_request: I/O error, dev sdb, sector 230694560 op 0x1:(WRITE) flags 0x4800 phys_seg 1024 prio class 0
[  798.483564] sd 0:0:0:2: [sdb] tag#307 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.483567] sd 0:0:0:2: [sdb] tag#307 CDB: Write(10) 2a 00 0d c0 0e a0 00 04 00 00
[  798.483569] blk_update_request: I/O error, dev sdb, sector 230690464 op 0x1:(WRITE) flags 0x4800 phys_seg 1024 prio class 0
[  798.484169] sd 0:0:0:2: [sdb] tag#305 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.484171] sd 0:0:0:2: [sdb] tag#305 CDB: Write(10) 2a 00 0d c0 12 a0 00 04 00 00
[  798.484174] blk_update_request: I/O error, dev sdb, sector 230691488 op 0x1:(WRITE) flags 0x4800 phys_seg 1024 prio class 0
[  798.484787] sd 0:0:0:2: [sdb] tag#306 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.484789] sd 0:0:0:2: [sdb] tag#306 CDB: Write(10) 2a 00 0d c0 16 a0 00 04 00 00
[  798.484792] blk_update_request: I/O error, dev sdb, sector 230692512 op 0x1:(WRITE) flags 0x4800 phys_seg 1024 prio class 0
[  798.485384] sd 0:0:0:2: [sdb] tag#302 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.485387] sd 0:0:0:2: [sdb] tag#302 CDB: Write(10) 2a 00 0d c0 1a a0 00 04 00 00
[  798.485389] blk_update_request: I/O error, dev sdb, sector 230693536 op 0x1:(WRITE) flags 0x4800 phys_seg 1024 prio class 0
[  798.485962] sd 0:0:0:2: [sdb] tag#303 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[  798.485965] sd 0:0:0:2: [sdb] tag#303 CDB: Write(10) 2a 00 00 00 08 c8 00 00 08 00
[  798.485968] blk_update_request: I/O error, dev sdb, sector 2248 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
[...]
  3. Scanning with badblocks

Read-only scan:

root@data:~# time badblocks -sv -b 32768 /dev/sdb
Checking blocks 0 to 33554431
Checking for bad blocks (read-only test): done                                                 
Pass completed, 0 bad blocks found. (0/0/0 errors)

real	18m0.506s
user	0m2.672s
sys	1m25.659s

So reads are working, or at least doing something?

And yet:

root@data:~# e2fsck /dev/sdb1
e2fsck 1.45.6 (20-Mar-2020)
neuropoly-data: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Run journal anyway<y>? yes
e2fsck: Input/output error while recovering journal of neuropoly-data
e2fsck: unable to set superblock flags on neuropoly-data


neuropoly-data: ********** WARNING: Filesystem still has errors **********
root@data:~# dmesg
[...]
[ 2233.819080] buffer_io_error: 21662 callbacks suppressed
[ 2233.819091] Buffer I/O error on dev sdb1, logical block 1069809664, lost async page write
[ 2233.819103] Buffer I/O error on dev sdb1, logical block 1069809665, lost async page write
[ 2233.819116] Buffer I/O error on dev sdb1, logical block 1069809666, lost async page write
[ 2233.819129] Buffer I/O error on dev sdb1, logical block 1069809667, lost async page write
[ 2233.819142] Buffer I/O error on dev sdb1, logical block 1069809668, lost async page write
[ 2233.819154] Buffer I/O error on dev sdb1, logical block 1069809669, lost async page write
[ 2233.819167] Buffer I/O error on dev sdb1, logical block 1069809670, lost async page write
[ 2233.819179] Buffer I/O error on dev sdb1, logical block 1069809671, lost async page write
[ 2233.819354] sd 0:0:0:2: [sdb] tag#117 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[ 2233.819369] sd 0:0:0:2: [sdb] tag#117 CDB: Write(10) 2a 00 00 00 08 08 00 04 00 00
[ 2233.819385] blk_update_request: I/O error, dev sdb, sector 2056 op 0x1:(WRITE) flags 0x800 phys_seg 1024 prio class 0
[ 2233.819398] Buffer I/O error on dev sdb1, logical block 8, lost async page write
[ 2233.819412] Buffer I/O error on dev sdb1, logical block 9, lost async page write
[ 2234.070920] sd 0:0:0:2: [sdb] tag#89 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK cmd_age=0s
[ 2234.070958] sd 0:0:0:2: [sdb] tag#89 CDB: Write(10) 2a 00 3f c4 08 00 00 00 08 00
[ 2234.070972] blk_update_request: I/O error, dev sdb, sector 1069811712 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
root@data:~# time badblocks -svn -b 32768 /dev/sdb 2>&1 | tee ~/dev-sdb2-badblocks-n.log
Checking for bad blocks in non-destructive read-write mode
From block 0 to 33554431
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: badblocks: Input/output error during test data write, block 0
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
[...]
badblocks: Input/output error during test data write, block 64
64
65
66
67
68
69
70
71
72
73
74
[...]
119
120
121
122
123
124
125
126
127
badblocks: Input/output error during test data write, block 128
128
129
130
131
132
[...]
33554430
33554431
done                                                 
Pass completed, 33554432 bad blocks found. (0/0/33554432 errors)

real	1612m28.915s
user	1m29.985s
sys	10m56.125s

The log is pretty noisy because, it seems, every single block is bad, i.e. the disk is not storing the data requested.
In addition, some blocks return I/O errors during write; we can focus on those like this:

root@data:~# cat dev-sdb2-badblocks-n.log | egrep 'Input/output error .* block [[:digit:]]+'
Testing with random pattern: badblocks: Input/output error during test data write, block 0
badblocks: Input/output error during test data write, block 64
badblocks: Input/output error during test data write, block 128
badblocks: Input/output error during test data write, block 192
badblocks: Input/output error during test data write, block 256
badblocks: Input/output error during test data write, block 320
badblocks: Input/output error during test data write, block 384
badblocks: Input/output error during test data write, block 448
badblocks: Input/output error during test data write, block 512
badblocks: Input/output error during test data write, block 576
badblocks: Input/output error during test data write, block 640
badblocks: Input/output error during test data write, block 704
badblocks: Input/output error during test data write, block 768
badblocks: Input/output error during test data write, block 832
badblocks: Input/output error during test data write, block 896
badblocks: Input/output error during test data write, block 960
[...]

I am suspicious. It seems like it's every 64th block that's giving an exception. I'll confirm that with this:

root@data:~# cat dev-sdb2-badblocks-n.log | egrep 'Input/output error .* block [[:digit:]]+' | egrep -o '[[:digit:]]+$' | while read block; do echo $(($block % 64)); done | sort | uniq -c
 524268 0

So, indeed, every single "Input/output error" line falls on the same boundary. Now, these aren't the usual-sized blocks: I followed what fdisk reported and used -b 32768, i.e. each badblocks block is 64x the usual 512-byte block, so these errors are actually happening every 32768 B * 64 = 2 MiB.

So every 2 MiB the disk I/O stack freaks out, and in between, writes are silently failing.

I have to think this has something to do with combining Microsoft's Hyper-V hypervisor, the pass-through driver, and Linux. Something in that stack is angry at the other parts. It is possible that the upgrade (maybe linux-image-azure?) is buggy with regard to the version of Hyper-V deployed at Polytechnique.

I think the best solution is to not push Hyper-V that hard. Let's just switch to using a fully virtual storage disk and migrate to that, and make a backup server (#20).

"warning: unable to convert submodule to form that will work with git-annex"

With git-annex linux-64 v8.20201127 from https://anaconda.org/conda-forge/git-annex, trying to use our dataset as a submodule (the way datalad recommends) gives a bizarre warning while downloading the annexed files.

warning: unable to convert submodule to form that will work with git-annex
warning: unable to convert submodule to form that will work with git-annex
warning: unable to convert submodule to form that will work with git-annex
warning: unable to convert submodule to form that will work with git-annex
...

It only triggers late in the download, so maybe it does some sort of post-processing step once it has downloaded everything, or maybe only certain files are causing it?

This reproduces it (XXX I think; I'm going to double-check):

mkdir test
cd test
git init
git submodule add https://github.com/spine-generic/data-multi-subject   # the dataset as a submodule
cd data-multi-subject
git annex init
git annex sync --content   # the warnings appear late in this download

Is data-single-subject an 'out-of-sync' duplicate of that in AWS?

There is already a data-single-subject hosted on AWS, so where is the datasets/data-single-subject coming from? If the one on AWS is updated, is this one updated too? If not, we should probably do something about it, e.g. remove it or make sure it is synced.

Otherwise, what will happen is that someone will use this dataset, another person will use the AWS dataset, the two results won't match, and it will take us three days to figure out why (been there, done that).
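
One way to check for divergence would be to fetch both and compare (a sketch; the remote name and URL are hypothetical):

git remote add aws https://github.com/spine-generic/data-single-subject
git fetch aws
git log --oneline master..aws/master   # commits on AWS that we are missing
git log --oneline aws/master..master   # commits here that AWS is missing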

@kousu

Migrate uk-biobank dataset to internal server

alexfoias has made a minimal viable dataset out of our copy of the UK Biobank data, for running some experiments on. It is currently sitting on our internal server at smb://duke/tmp/uk_biobank_BIDS. Julien wants it to be named uk-biobank; I can do that along the way.

Internal server (git+ssh://data.neuro.polymtl.ca)

I am deploying a git server on Poly's internal infrastructure. This is cheaper and faster (bandwidth-wise) than paying Amazon or GitHub to host the large datasets we experiment on, and safer for our datasets with privileged medical data.

Cloning a repository effectively downloads the data under the .git/ tree

So far, when I was using datalad, a git clone would not download the data; the data were only downloaded when fetched on demand.

Now, when cloning a large dataset, e.g. datasets/uk-biobank, it effectively downloads ~20GB of data, even before asking to fetch the data with git-annex get. Is that expected behaviour? Is there something we can do about it @kousu?
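
A first diagnostic would be to see where the clone's space actually went (a sketch, run inside the fresh clone):

git count-objects -vH                       # size of the plain git object store
du -sh .git/objects .git/annex 2>/dev/null  # annexed content, if any, lives under .git/annex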

Possibly related to #34 and #23

Investigate annex.thin and annex.hardlink

In git-annex v8, the default config makes checked-out files the full files, git-lfs style, instead of symlinks as in v7 and earlier. But if .git/annex also stores a copy of all the data, then users are doubling their storage for nothing. (Note: this does not double the storage on a server; servers only keep 'bare' git repos, without a checked-out copy.)

There are two options, 'annex.hardlink' and 'annex.thin', but I can't tell what they do. It sounds like they should avoid the duplication, but if so, why are there two of them? Are they mutually exclusive? What happens on Windows?
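
Our current reading, which may well be wrong: annex.thin makes unlocked work-tree files hard links into .git/annex/objects, while annex.hardlink applies when content is copied in from another local repository. A sketch of trying the work-tree one:

git config annex.thin true   # per-repo; unlocked files become hard links instead of copies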

I found this thread where some of the people from datalad seem to be equally confused: https://git-annex.branchable.com/bugs/annex.hardlink_is_not___34__in_effect__34___in_thin_mode_/.

In git-annex v7 there was the "adjusted" branch which I think was meant to accomplish the same goal?

gitolite: `trunk`

Git is moving away from master. I figured it was harmless, so I set

$ git config --global init.defaultBranch trunk

on my machine.

If I try to upload a new repo to gitolite, though, it says:

$ git push --set-upstream origin trunk
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint: 
hint: 	git config --global init.defaultBranch <name>
hint: 
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint: 
hint: 	git branch -m <name>
Initialized empty Git repository in /home/gitt/repositories/d/t1.git/
Enumerating objects: 38, done.
Counting objects: 100% (38/38), done.
Delta compression using up to 4 threads
Compressing objects: 100% (36/36), done.
Writing objects: 100% (38/38), 4.97 KiB | 2.49 MiB/s, done.
Total 38 (delta 0), reused 0 (delta 0), pack-reused 0
To localhost:d/t1
 * [new branch]      trunk -> trunk
Branch 'trunk' set up to track remote branch 'trunk' from 'origin'.

But on the actual repo on the server, master is nowhere to be found:

[gitt@requiem t1.git]$ git branch -a
  trunk
[gitt@requiem t1.git]$ ls -l refs/heads/
total 4
-rw------- 1 gitt gitt 41 Feb  9 11:42 trunk

I don't know if this will cause problems.
I also notice that git-annex has master strewn throughout its docs; I suspect it might have some problems working with this.

For now I haven't run into anything but I will document problems as they arise in this issue.

Dataset health monitoring

We need scripts that regularly report on the following (a first-pass sketch follows the list):

  • dataset sizes on the server (ideally plotting growth over time)
  • unreachable versions of files (e.g. caused by using annex.thin without eagerly uploading data #23, or other kinds of corruption, like a bad disk)
  • integrity checking the regular git files (.git/objects)
  • integrity checking of the annexed files (.git/annex/objects)
    • git-annex does integrity checking at git annex get and git add; I am unclear if it checks at git annex copy --to as well, and even if it does, it's definitely not running a daemon in the background looking for problems. We need to check that the checksums of the files on the server match their contents.
    • git-lfs has this same problem
  • check that there are no chmod +x files in a dataset; it doesn't make any sense for data files to be executable
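
A first-pass sketch of such a check, meant to be run from cron on the server (the paths and the master branch name are assumptions):

#!/bin/sh
for repo in /srv/git/repositories/datasets/*.git; do
    echo "== $repo"
    du -sh "$repo"                        # size, for growth tracking over time
    git -C "$repo" fsck --no-dangling     # integrity of the regular git objects
    git -C "$repo" annex fsck             # checksum annexed objects against their keys
    # data files should not be executable (mode 100755)
    git -C "$repo" ls-tree -r master | awk '$1 == "100755"'
done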

Maybe this stuff can be integrated into netdata by writing a collector? But netdata is meant more for in-the-moment monitoring, and is configured to store metrics for about a week, whereas we would want to keep them over years.

Part of #20

uk-biobank: invalid .tsv

The participants index is incorrectly formatted. It has blanks where the BIDS standard wants the string "n/a":

nguenther@data:~/datasets/uk-biobank$ /usr/local/bin/bids-validator  .
[email protected]

	1: [ERR] All rows must have the same number of columns as there are headers. (code: 22 - TSV_EQUAL_ROWS)
		./participants.tsv
			@ line: 2
			Evidence: row 1: sub-10000xx	X	99	99999		99999

	Please visit https://neurostars.org/search?q=TSV_EQUAL_ROWS for existing conversations about this issue.

	2: [ERR] Empty cell in TSV file detected: The proper way of labeling missing values is "n/a". (code: 23 - TSV_EMPTY_CELL)
		./participants.tsv
			@ line: 2
			Evidence: row 1: sub-10000xx	X	99	99999		99999

	Please visit https://neurostars.org/search?q=TSV_EMPTY_CELL for existing conversations about this issue.

	1: [WARN] The Authors field of dataset_description.json should contain an array of fields - with one author per field. This was triggered based on the presence of only one author field. Please ignore if all contributors are already properly listed. (code: 102 - TOO_FEW_AUTHORS)

	Please visit https://neurostars.org/search?q=TOO_FEW_AUTHORS for existing conversations about this issue.


        Summary:                  Available Tasks:        Available Modalities: 
        1404 Files, 9.75GB                                T1w                   
        350 - Subjects                                    T2w                   
        1 - Session                                                             


	If you have any questions, please post on https://neurostars.org/tags/bids.
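
A possible quick repair for the empty cells and short rows (a sketch; it assumes plain tab-separated values with no quoted fields):

# pad every row to the header's column count and replace empty cells with "n/a"
awk 'BEGIN{FS=OFS="\t"} NR==1{n=NF} {for(i=1;i<=n;i++) if($i=="") $i="n/a"; print}' \
    participants.tsv > participants.fixed.tsv
mv participants.fixed.tsv participants.tsv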

large number of files changed after adding new data

I added 2 new subjects (user008 & user009) to the datalad dataset.

When checking the status, I get a very long list of files that have been changed.

I think it is due to the file system of the Windows SMB mount.

@kousu @jcohenadad I'll write a protocol this afternoon for moving the datalad dataset to the NVMe drive on joplin.

clarify doc

It took me 5 min to realize that in order to get the proper doc, I needed to click on "internal server". If it happened to me, it will happen to other users in the lab, so we need to change that.

git annex sync --content failed on uk-biobank-processed

Description

I am trying to add processed images to datasets/uk-biobank-processed. I followed these instructions. Everything went smoothly until git annex sync --content.
This is the error that I got when running git annex sync --content:
(this is the end only; I attached my complete terminal history below)

  transfer failed
failed
copy sub-1141841/anat/sub-1141841_T2w.nii.gz
  Lost connection (fd:14: hGetChar: end of file)
(unable to check origin) failed
push origin
FATAL: W any datasets/uk-biobank-processed sandrine DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

FATAL: W any datasets/uk-biobank-processed sandrine DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
  Pushing to origin failed.
failed
git-annex: sync: 1301 failed

Here is my complete terminal history from git add to git annex sync --content:

terminal_joplin_git_annex.txt

Many empty .json files copied after pulling annex

There are a lot of json files that are not properly copied.
Way more within the derivatives.

For reference:

Within sct-testing-large, the empty ones are mostly DWI sequences:

find sub* -empty >empty_files.txt

sub-calCadotte001/dwi/sub-calCadotte001_acq-dwiMean_dwi.json
sub-calCadotte001/dwi/sub-calCadotte001_acq-b0Mean_dwi.json
sub-ivdm3seg001/anat/sub-ivdm3seg001_acq-inn_T1w.json
sub-ivdm3seg001/anat/sub-ivdm3seg001_acq-fat_T1w.json
sub-koreajisun001/dwi/sub-koreajisun001_acq-dwiMean_dwi.json
sub-koreajisun001/dwi/sub-koreajisun001_acq-b0Mean.json
sub-koreajisun001/dwi/sub-koreajisun001_acq-b0_dwi.json
sub-koreajisun002/dwi/sub-koreajisun002_acq-b0_dwi.json
sub-koreajisun002/dwi/sub-koreajisun002_acq-b0Mean.json
sub-koreajisun002/dwi/sub-koreajisun002_acq-dwiMean_dwi.json
sub-koreajisun003/dwi/sub-koreajisun003_acq-b0Mean.json
sub-koreajisun003/dwi/sub-koreajisun003_acq-dwiMean_dwi.json
sub-koreajisun003/dwi/sub-koreajisun003_acq-b0_dwi.json
sub-koreajisun004/dwi/sub-koreajisun004_acq-dwiMean_dwi.json
sub-koreajisun004/dwi/sub-koreajisun004_acq-b0_dwi.json
sub-koreajisun004/dwi/sub-koreajisun004_acq-b0Mean.json
sub-koreajisun005/dwi/sub-koreajisun005_acq-b0_dwi.json
sub-koreajisun005/dwi/sub-koreajisun005_acq-b0Mean.json
sub-koreajisun005/dwi/sub-koreajisun005_acq-dwiMean_dwi.json
sub-koreajisun006/dwi/sub-koreajisun006_acq-b0_dwi.json
sub-koreajisun006/dwi/sub-koreajisun006_acq-dwiMean_dwi.json
sub-koreajisun006/dwi/sub-koreajisun006_acq-b0Mean.json
sub-koreajisun007/dwi/sub-koreajisun007_acq-b0Mean.json
sub-koreajisun007/dwi/sub-koreajisun007_acq-dwiMean_dwi.json
sub-koreajisun007/dwi/sub-koreajisun007_acq-b0_dwi.json
sub-koreajisun008/dwi/sub-koreajisun008_acq-b0_dwi.json
sub-koreajisun008/dwi/sub-koreajisun008_acq-dwiMean_dwi.json
sub-koreajisun008/dwi/sub-koreajisun008_acq-b0Mean.json
sub-koreajisun009/dwi/sub-koreajisun009_acq-b0_dwi.json
sub-koreajisun009/dwi/sub-koreajisun009_acq-b0Mean.json
sub-koreajisun009/dwi/sub-koreajisun009_acq-dwiMean_dwi.json
sub-koreajisun010/dwi/sub-koreajisun010_acq-dwiMean_dwi.json
sub-koreajisun010/dwi/sub-koreajisun010_acq-b0_dwi.json
sub-koreajisun010/dwi/sub-koreajisun010_acq-b0Mean.json
sub-milanMarcella001/dwi/sub-milanMarcella001_acq-b0_dwi.json
sub-milanMarcella001/dwi/sub-milanMarcella001_acq-dwiMean_dwi.json
sub-milanMarcella001/dwi/sub-milanMarcella001_acq-b0Mean.json
sub-nwuHaleh004/dwi/sub-nwuHaleh004_acq-dwiMocoMean_dwi.json
sub-nwuHaleh004/dwi/sub-nwuHaleh004_acq-b0_dwi.json
sub-nwuHaleh004/dwi/sub-nwuHaleh004_acq-b0Mean.json
sub-nwuHaleh004/dwi/sub-nwuHaleh004_acq-dwiMean_dwi.json
sub-nwuHaleh004/anat/sub-nwuHaleh004_acq-MTon_MTR.json
sub-nwuHaleh004/anat/sub-nwuHaleh004_acq-MToff_MTR.json
sub-nwuHaleh005/dwi/sub-nwuHaleh005_acq-b0Mean.json
sub-nwuHaleh005/dwi/sub-nwuHaleh005_acq-dwiMean_dwi.json
sub-nwuHaleh005/dwi/sub-nwuHaleh005_acq-dwiMocoMean_dwi.json
sub-nwuHaleh005/dwi/sub-nwuHaleh005_acq-b0_dwi.json
sub-nwuHaleh005/anat/sub-nwuHaleh005_acq-MTon_MTR.json
sub-nwuHaleh005/anat/sub-nwuHaleh005_acq-MToff_MTR.json
sub-nwuHaleh006/dwi/sub-nwuHaleh006_acq-dwiMean_dwi.json
sub-nwuHaleh006/dwi/sub-nwuHaleh006_acq-dwiMocoMean_dwi.json
sub-nwuHaleh006/dwi/sub-nwuHaleh006_acq-b0Mean.json
sub-nwuHaleh006/dwi/sub-nwuHaleh006_acq-b0_dwi.json
sub-nwuHaleh006/anat/sub-nwuHaleh006_acq-MTon_MTR.json
sub-nwuHaleh006/anat/sub-nwuHaleh006_acq-MToff_MTR.json
sub-nwuHaleh007/dwi/sub-nwuHaleh007_acq-dwiMean_dwi.json
sub-nwuHaleh007/dwi/sub-nwuHaleh007_acq-b0Mean.json
sub-nwuHaleh007/dwi/sub-nwuHaleh007_acq-b0_dwi.json
sub-nwuHaleh007/dwi/sub-nwuHaleh007_acq-dwiMocoMean_dwi.json
sub-nwuHaleh007/anat/sub-nwuHaleh007_acq-MToff_MTR.json
sub-nwuHaleh007/anat/sub-nwuHaleh007_acq-MTon_MTR.json
sub-nwuHaleh008/dwi/sub-nwuHaleh008_acq-dwiMocoMean_dwi.json
sub-nwuHaleh008/dwi/sub-nwuHaleh008_acq-b0_dwi.json
sub-nwuHaleh008/dwi/sub-nwuHaleh008_acq-dwiMean_dwi.json
sub-nwuHaleh008/dwi/sub-nwuHaleh008_acq-b0Mean.json
sub-nwuHaleh008/anat/sub-nwuHaleh008_acq-MToff_MTR.json
sub-nwuHaleh008/anat/sub-nwuHaleh008_acq-MTon_MTR.json
sub-nwuHaleh009/dwi/sub-nwuHaleh009_acq-b0_dwi.json
sub-nwuHaleh009/dwi/sub-nwuHaleh009_acq-dwiMocoMean_dwi.json
sub-nwuHaleh009/dwi/sub-nwuHaleh009_acq-b0Mean.json
sub-nwuHaleh009/dwi/sub-nwuHaleh009_acq-dwiMean_dwi.json
sub-nwuHaleh009/anat/sub-nwuHaleh009_acq-MToff_MTR.json
sub-nwuHaleh009/anat/sub-nwuHaleh009_acq-MTon_MTR.json
sub-nwuHaleh010/dwi/sub-nwuHaleh010_acq-dwiMocoMean_dwi.json
sub-nwuHaleh010/dwi/sub-nwuHaleh010_acq-dwiMean_dwi.json
sub-nwuHaleh010/dwi/sub-nwuHaleh010_acq-b0Mean.json
sub-nwuHaleh010/dwi/sub-nwuHaleh010_acq-b0_dwi.json
sub-nwuHaleh010/anat/sub-nwuHaleh010_acq-MToff_MTR.json
sub-nwuHaleh010/anat/sub-nwuHaleh010_acq-MTon_MTR.json
sub-nwuHaleh011/dwi/sub-nwuHaleh011_acq-b0_dwi.json
sub-nwuHaleh011/dwi/sub-nwuHaleh011_acq-dwiMean_dwi.json
sub-nwuHaleh011/dwi/sub-nwuHaleh011_acq-dwiMocoMean_dwi.json
sub-nwuHaleh011/dwi/sub-nwuHaleh011_acq-b0Mean.json
sub-nwuHaleh011/anat/sub-nwuHaleh011_acq-MTon_MTR.json
sub-nwuHaleh011/anat/sub-nwuHaleh011_acq-MToff_MTR.json
sub-nwuHaleh012/dwi/sub-nwuHaleh012_acq-b0_dwi.json
sub-nwuHaleh012/dwi/sub-nwuHaleh012_acq-b0Mean.json
sub-nwuHaleh012/dwi/sub-nwuHaleh012_acq-dwiMean_dwi.json
sub-nwuHaleh012/dwi/sub-nwuHaleh012_acq-dwiMocoMean_dwi.json
sub-nwuHaleh012/anat/sub-nwuHaleh012_acq-MTon_MTR.json
sub-nwuHaleh012/anat/sub-nwuHaleh012_acq-MToff_MTR.json
sub-nwuHaleh013/dwi/sub-nwuHaleh013_acq-b0_dwi.json
sub-nwuHaleh013/dwi/sub-nwuHaleh013_acq-dwiMean_dwi.json
sub-nwuHaleh013/dwi/sub-nwuHaleh013_acq-dwiMocoMean_dwi.json
sub-nwuHaleh013/dwi/sub-nwuHaleh013_acq-b0Mean.json
sub-nwuHaleh013/anat/sub-nwuHaleh013_acq-MTon_MTR.json
sub-nwuHaleh013/anat/sub-nwuHaleh013_acq-MToff_MTR.json
sub-nwuHaleh014/dwi/sub-nwuHaleh014_acq-dwiMean_dwi.json
sub-nwuHaleh014/dwi/sub-nwuHaleh014_acq-b0Mean.json
sub-nwuHaleh014/dwi/sub-nwuHaleh014_acq-dwiMocoMean_dwi.json
sub-nwuHaleh014/dwi/sub-nwuHaleh014_acq-b0_dwi.json
sub-nwuHaleh014/anat/sub-nwuHaleh014_acq-MTon_MTR.json
sub-nwuHaleh014/anat/sub-nwuHaleh014_acq-MToff_MTR.json
sub-nwuHaleh015/dwi/sub-nwuHaleh015_acq-b0_dwi.json
sub-nwuHaleh015/dwi/sub-nwuHaleh015_acq-dwiMocoMean_dwi.json
sub-nwuHaleh015/dwi/sub-nwuHaleh015_acq-dwiMean_dwi.json
sub-nwuHaleh015/dwi/sub-nwuHaleh015_acq-b0Mean.json
sub-nwuHaleh015/anat/sub-nwuHaleh015_acq-MTon_MTR.json
sub-nwuHaleh015/anat/sub-nwuHaleh015_acq-MToff_MTR.json
sub-nwuHaleh016/dwi/sub-nwuHaleh016_acq-b0_dwi.json
sub-nwuHaleh016/dwi/sub-nwuHaleh016_acq-b0Mean.json
sub-nwuHaleh016/dwi/sub-nwuHaleh016_acq-dwiMean_dwi.json
sub-nwuHaleh016/dwi/sub-nwuHaleh016_acq-dwiMocoMean_dwi.json
sub-nwuHaleh016/anat/sub-nwuHaleh016_acq-MTon_MTR.json
sub-nwuHaleh016/anat/sub-nwuHaleh016_acq-MToff_MTR.json
sub-nwuHaleh016/anat/sub-nwuHaleh016_acq-T1w_MTR.json
sub-nwuHaleh017/dwi/sub-nwuHaleh017_acq-b0_dwi.json
sub-nwuHaleh017/dwi/sub-nwuHaleh017_acq-dwiMocoMean_dwi.json
sub-nwuHaleh017/dwi/sub-nwuHaleh017_acq-dwiMean_dwi.json
sub-nwuHaleh017/dwi/sub-nwuHaleh017_acq-b0Mean.json
sub-nwuHaleh017/anat/sub-nwuHaleh017_acq-MTon_MTR.json
sub-nwuHaleh017/anat/sub-nwuHaleh017_acq-MToff_MTR.json
sub-nwuHaleh018/dwi/sub-nwuHaleh018_acq-dwiMean_dwi.json
sub-nwuHaleh018/dwi/sub-nwuHaleh018_acq-dwiMocoMean_dwi.json
sub-nwuHaleh018/dwi/sub-nwuHaleh018_acq-b0Mean.json
sub-nwuHaleh018/dwi/sub-nwuHaleh018_acq-b0_dwi.json
sub-nwuHaleh018/anat/sub-nwuHaleh018_acq-MTon_MTR.json
sub-nwuHaleh018/anat/sub-nwuHaleh018_acq-MToff_MTR.json
sub-sherbrookeBiospective001/dwi/sub-sherbrookeBiospective001_acq-dwiMean_dwi.json
sub-sherbrookeBiospective001/dwi/sub-sherbrookeBiospective001_acq-b0Mean.json
sub-sherbrookeBiospective001/dwi/sub-sherbrookeBiospective001_acq-b0_dwi.json
sub-sherbrookeBiospective002/dwi/sub-sherbrookeBiospective002_acq-b0Mean.json
sub-sherbrookeBiospective002/dwi/sub-sherbrookeBiospective002_acq-b0_dwi.json
sub-sherbrookeBiospective002/dwi/sub-sherbrookeBiospective002_acq-dwiMean_dwi.json
sub-sherbrookeBiospective003/dwi/sub-sherbrookeBiospective003_acq-b0Mean.json
sub-sherbrookeBiospective003/dwi/sub-sherbrookeBiospective003_acq-b0_dwi.json
sub-sherbrookeBiospective003/dwi/sub-sherbrookeBiospective003_acq-dwiMean_dwi.json
sub-sherbrookeBiospective004/dwi/sub-sherbrookeBiospective004_acq-b0Mean.json
sub-sherbrookeBiospective004/dwi/sub-sherbrookeBiospective004_acq-b0_dwi.json
sub-sherbrookeBiospective004/dwi/sub-sherbrookeBiospective004_acq-dwiMean_dwi.json
sub-sherbrookeBiospective005/dwi/sub-sherbrookeBiospective005_acq-b0Mean.json
sub-sherbrookeBiospective005/dwi/sub-sherbrookeBiospective005_acq-dwiMean_dwi.json
sub-sherbrookeBiospective005/dwi/sub-sherbrookeBiospective005_acq-b0_dwi.json
sub-sherbrookeBiospective006/dwi/sub-sherbrookeBiospective006_acq-dwiMean_dwi.json
sub-sherbrookeBiospective006/dwi/sub-sherbrookeBiospective006_acq-b0Mean.json
sub-sherbrookeBiospective006/dwi/sub-sherbrookeBiospective006_acq-b0_dwi.json
sub-sherbrookeBiospective007/dwi/sub-sherbrookeBiospective007_acq-b0_dwi.json
sub-sherbrookeBiospective007/dwi/sub-sherbrookeBiospective007_acq-b0Mean.json
sub-sherbrookeBiospective007/dwi/sub-sherbrookeBiospective007_acq-dwiMean_dwi.json
sub-sherbrookeBiospective008/dwi/sub-sherbrookeBiospective008_acq-dwiMean_dwi.json
sub-sherbrookeBiospective008/dwi/sub-sherbrookeBiospective008_acq-b0_dwi.json
sub-sherbrookeBiospective008/dwi/sub-sherbrookeBiospective008_acq-b0Mean.json
sub-sherbrookeBiospective009/dwi/sub-sherbrookeBiospective009_acq-b0_dwi.json
sub-sherbrookeBiospective009/dwi/sub-sherbrookeBiospective009_acq-dwiMean_dwi.json
sub-sherbrookeBiospective009/dwi/sub-sherbrookeBiospective009_acq-b0Mean.json
sub-spineGeneric009/dwi/sub-spineGeneric009_acq-dwiMocoMean_dwi.json
sub-spineGeneric009/anat/sub-spineGeneric009_T2star.json
sub-spineGeneric009/anat/sub-spineGeneric009_acq-T1w_MTR.json
sub-spineGeneric009/anat/sub-spineGeneric009_acq-MTon_MTR.json
sub-spineGeneric009/anat/sub-spineGeneric009_T2w.json
sub-spineGeneric009/anat/sub-spineGeneric009_T1w.json
sub-spineGeneric011/anat/sub-spineGeneric011_T2star.json
sub-unfbiospective001/dwi/sub-unfbiospective001_acq-b0Mean.json
sub-unfbiospective001/dwi/sub-unfbiospective001_acq-b0_dwi.json
sub-unfbiospective001/dwi/sub-unfbiospective001_acq-dwiMean_dwi.json
sub-unfbiospective002/dwi/sub-unfbiospective002_acq-dwiMean_dwi.json
sub-unfbiospective002/dwi/sub-unfbiospective002_acq-b0Mean.json
sub-unfbiospective002/dwi/sub-unfbiospective002_acq-b0_dwi.json
sub-unfbiospective004/dwi/sub-unfbiospective004_acq-b0_dwi.json
sub-unfbiospective004/dwi/sub-unfbiospective004_acq-dwiMean_dwi.json
sub-unfbiospective004/dwi/sub-unfbiospective004_acq-b0Mean.json
sub-unfbiospective005/dwi/sub-unfbiospective005_acq-b0Mean.json
sub-unfbiospective005/dwi/sub-unfbiospective005_acq-dwiMean_dwi.json
sub-unfbiospective005/dwi/sub-unfbiospective005_acq-b0_dwi.json
sub-unfbiospective006/dwi/sub-unfbiospective006_acq-b0Mean.json
sub-unfbiospective006/dwi/sub-unfbiospective006_acq-dwiMean_dwi.json
sub-unfbiospective006/dwi/sub-unfbiospective006_acq-b0_dwi.json
sub-unfbiospective007/dwi/sub-unfbiospective007_acq-b0_dwi.json
sub-unfbiospective007/dwi/sub-unfbiospective007_acq-dwiMean_dwi.json
sub-unfbiospective007/dwi/sub-unfbiospective007_acq-b0Mean.json
sub-unfbiospective008/dwi/sub-unfbiospective008_acq-b0_dwi.json
sub-unfbiospective008/dwi/sub-unfbiospective008_acq-b0Mean.json
sub-unfbiospective008/dwi/sub-unfbiospective008_acq-dwiMean_dwi.json
sub-unfbiospective009/dwi/sub-unfbiospective009_acq-dwiMean_dwi.json
sub-unfbiospective009/dwi/sub-unfbiospective009_acq-b0Mean.json
sub-unfbiospective009/dwi/sub-unfbiospective009_acq-b0_dwi.json
sub-unfbiospective010/dwi/sub-unfbiospective010_acq-b0_dwi.json
sub-unfbiospective010/dwi/sub-unfbiospective010_acq-b0Mean.json
sub-unfbiospective010/dwi/sub-unfbiospective010_acq-dwiMean_dwi.json
sub-unfbiospective011/dwi/sub-unfbiospective011_acq-dwiMean_dwi.json
sub-unfbiospective011/dwi/sub-unfbiospective011_acq-b0_dwi.json
sub-unfbiospective011/dwi/sub-unfbiospective011_acq-b0Mean.json
sub-unfbiospective012/dwi/sub-unfbiospective012_acq-b0Mean.json
sub-unfbiospective012/dwi/sub-unfbiospective012_acq-b0_dwi.json
sub-unfbiospective012/dwi/sub-unfbiospective012_acq-dwiMean_dwi.json
sub-unfbiospective013/dwi/sub-unfbiospective013_acq-dwiMean_dwi.json
sub-unfbiospective013/dwi/sub-unfbiospective013_acq-b0Mean.json
sub-unfErssm001/dwi/sub-unfErssm001_acq-b0_dwi.json
sub-unfErssm001/dwi/sub-unfErssm001_acq-b0Mean.json
sub-unfErssm001/dwi/sub-unfErssm001_acq-dwiMean_dwi.json
sub-unfErssm002/dwi/sub-unfErssm002_acq-b0Mean.json
sub-unfErssm002/dwi/sub-unfErssm002_acq-b0_dwi.json
sub-unfErssm002/dwi/sub-unfErssm002_acq-dwiMean_dwi.json
sub-unfErssm003/dwi/sub-unfErssm003_acq-b0_dwi.json
sub-unfErssm003/dwi/sub-unfErssm003_acq-dwiMean_dwi.json
sub-unfErssm003/dwi/sub-unfErssm003_acq-b0Mean.json
sub-unfErssm004/dwi/sub-unfErssm004_acq-dwiMean_dwi.json
sub-unfErssm004/dwi/sub-unfErssm004_acq-b0_dwi.json
sub-unfErssm004/dwi/sub-unfErssm004_acq-b0Mean.json
sub-unfErssm005/dwi/sub-unfErssm005_acq-dwiMean_dwi.json
sub-unfErssm005/dwi/sub-unfErssm005_acq-b0Mean.json
sub-unfErssm005/dwi/sub-unfErssm005_acq-b0_dwi.json
sub-unfErssm006/dwi/sub-unfErssm006_acq-b0_dwi.json
sub-unfErssm006/dwi/sub-unfErssm006_acq-b0Mean.json
sub-unfErssm006/dwi/sub-unfErssm006_acq-dwiMean_dwi.json
sub-unfErssm007/dwi/sub-unfErssm007_acq-b0_dwi.json
sub-unfErssm007/dwi/sub-unfErssm007_acq-b0Mean.json
sub-unfErssm007/dwi/sub-unfErssm007_acq-dwiMean_dwi.json
sub-unfErssm008/dwi/sub-unfErssm008_acq-b0_dwi.json
sub-unfErssm008/dwi/sub-unfErssm008_acq-b0Mean.json
sub-unfErssm008/dwi/sub-unfErssm008_acq-dwiMean_dwi.json
sub-unfErssm009/dwi/sub-unfErssm009_acq-b0_dwi.json
sub-unfErssm009/dwi/sub-unfErssm009_acq-b0Mean.json
sub-unfErssm009/dwi/sub-unfErssm009_acq-dwiMean_dwi.json
sub-unfErssm013/dwi/sub-unfErssm013_acq-dwiMean_dwi.json
sub-unfErssm013/dwi/sub-unfErssm013_acq-b0Mean.json
sub-unfErssm013/dwi/sub-unfErssm013_acq-b0_dwi.json
sub-unfErssm014/dwi/sub-unfErssm014_acq-dwiMean_dwi.json
sub-unfErssm014/dwi/sub-unfErssm014_acq-b0_dwi.json
sub-unfErssm014/dwi/sub-unfErssm014_acq-b0Mean.json
sub-unfErssm016/dwi/sub-unfErssm016_acq-b0Mean.json
sub-unfErssm016/dwi/sub-unfErssm016_acq-b0_dwi.json
sub-unfErssm016/dwi/sub-unfErssm016_acq-dwiMean_dwi.json
sub-unfErssm018/dwi/sub-unfErssm018_acq-b0_dwi.json
sub-unfErssm018/dwi/sub-unfErssm018_acq-b0Mean.json
sub-unfErssm018/dwi/sub-unfErssm018_acq-dwiMean_dwi.json
sub-unfErssm019/dwi/sub-unfErssm019_acq-b0_dwi.json
sub-unfErssm019/dwi/sub-unfErssm019_acq-dwiMean_dwi.json
sub-unfErssm019/dwi/sub-unfErssm019_acq-b0Mean.json
sub-unfErssm020/dwi/sub-unfErssm020_acq-b0Mean.json
sub-unfErssm020/dwi/sub-unfErssm020_acq-dwiMean_dwi.json
sub-unfErssm020/dwi/sub-unfErssm020_acq-b0_dwi.json
sub-unfErssm021/dwi/sub-unfErssm021_acq-b0_dwi.json
sub-unfErssm021/dwi/sub-unfErssm021_acq-b0Mean.json
sub-unfErssm021/dwi/sub-unfErssm021_acq-dwiMean_dwi.json
sub-unfErssm026/dwi/sub-unfErssm026_acq-b0_dwi.json
sub-unfErssm026/dwi/sub-unfErssm026_acq-b0Mean.json
sub-unfErssm026/dwi/sub-unfErssm026_acq-dwiMean_dwi.json
sub-unfErssm028/dwi/sub-unfErssm028_acq-dwiMean_dwi.json
sub-unfErssm028/dwi/sub-unfErssm028_acq-b0_dwi.json
sub-unfErssm028/dwi/sub-unfErssm028_acq-b0Mean.json
sub-unfErssm029/dwi/sub-unfErssm029_acq-b0_dwi.json
sub-unfErssm029/dwi/sub-unfErssm029_acq-b0Mean.json
sub-unfErssm029/dwi/sub-unfErssm029_acq-dwiMean_dwi.json
sub-unfPain001/dwi/sub-unfPain001_acq-dwiMean_dwi.json
sub-unfPain001/dwi/sub-unfPain001_acq-b0Mean.json
sub-unfPain001/dwi/sub-unfPain001_acq-b0_dwi.json
sub-unfPain002/dwi/sub-unfPain002_acq-b0_dwi.json
sub-unfPain002/dwi/sub-unfPain002_acq-b0Mean.json
sub-unfPain002/dwi/sub-unfPain002_acq-dwiMean_dwi.json
sub-unfPain003/dwi/sub-unfPain003_acq-dwiMean_dwi.json
sub-unfPain003/dwi/sub-unfPain003_acq-b0Mean.json
sub-unfPain003/dwi/sub-unfPain003_acq-b0_dwi.json
sub-unfPain004/dwi/sub-unfPain004_acq-b0Mean.json
sub-unfPain004/dwi/sub-unfPain004_acq-dwiMean_dwi.json
sub-unfPain004/dwi/sub-unfPain004_acq-b0_dwi.json
sub-unfPain005/dwi/sub-unfPain005_acq-b0Mean.json
sub-unfPain005/dwi/sub-unfPain005_acq-b0_dwi.json
sub-unfPain005/dwi/sub-unfPain005_acq-dwiMean_dwi.json
sub-unfPain006/dwi/sub-unfPain006_acq-dwiMean_dwi.json
sub-unfPain006/dwi/sub-unfPain006_acq-b0_dwi.json
sub-unfPain006/dwi/sub-unfPain006_acq-b0Mean.json
sub-unfPain007/dwi/sub-unfPain007_acq-b0Mean.json
sub-unfPain007/dwi/sub-unfPain007_acq-b0_dwi.json
sub-unfPain007/dwi/sub-unfPain007_acq-dwiMean_dwi.json
sub-unfPain008/dwi/sub-unfPain008_acq-b0Mean.json
sub-unfPain008/dwi/sub-unfPain008_acq-dwiMean_dwi.json
sub-unfPain008/dwi/sub-unfPain008_acq-b0_dwi.json
sub-unfPain009/dwi/sub-unfPain009_acq-b0_dwi.json
sub-unfPain009/dwi/sub-unfPain009_acq-b0Mean.json
sub-unfPain009/dwi/sub-unfPain009_acq-dwiMean_dwi.json
sub-unfPain010/dwi/sub-unfPain010_acq-dwiMean_dwi.json
sub-unfPain010/dwi/sub-unfPain010_acq-b0Mean.json
sub-unfPain010/dwi/sub-unfPain010_acq-b0_dwi.json
sub-unfPain011/dwi/sub-unfPain011_acq-b0Mean.json
sub-unfPain011/dwi/sub-unfPain011_acq-b0_dwi.json
sub-unfPain011/dwi/sub-unfPain011_acq-dwiMean_dwi.json
sub-unfPain012/dwi/sub-unfPain012_acq-b0Mean.json
sub-unfPain012/dwi/sub-unfPain012_acq-dwiMean_dwi.json
sub-unfPain012/dwi/sub-unfPain012_acq-b0_dwi.json
sub-unfPain013/dwi/sub-unfPain013_acq-dwiMean_dwi.json
sub-unfPain013/dwi/sub-unfPain013_acq-b0Mean.json
sub-unfPain013/dwi/sub-unfPain013_acq-b0_dwi.json
sub-unfPain014/dwi/sub-unfPain014_acq-b0Mean.json
sub-unfPain014/dwi/sub-unfPain014_acq-b0_dwi.json
sub-unfPain014/dwi/sub-unfPain014_acq-dwiMean_dwi.json
sub-unfPain015/dwi/sub-unfPain015_acq-b0Mean.json
sub-unfPain015/dwi/sub-unfPain015_acq-dwiMean_dwi.json
sub-unfPain015/dwi/sub-unfPain015_acq-b0_dwi.json
sub-unfPain016/dwi/sub-unfPain016_acq-b0Mean.json
sub-unfPain016/dwi/sub-unfPain016_acq-dwiMean_dwi.json
sub-unfPain016/dwi/sub-unfPain016_acq-b0_dwi.json
sub-unfPain017/dwi/sub-unfPain017_acq-b0_dwi.json
sub-unfPain017/dwi/sub-unfPain017_acq-b0Mean.json
sub-unfPain017/dwi/sub-unfPain017_acq-dwiMean_dwi.json
sub-unfPain018/dwi/sub-unfPain018_acq-b0Mean.json
sub-unfPain018/dwi/sub-unfPain018_acq-b0_dwi.json
sub-unfPain018/dwi/sub-unfPain018_acq-dwiMean_dwi.json
sub-unfPain019/dwi/sub-unfPain019_acq-dwiMean_dwi.json
sub-unfPain019/dwi/sub-unfPain019_acq-b0_dwi.json
sub-unfPain019/dwi/sub-unfPain019_acq-b0Mean.json
sub-unfPain020/dwi/sub-unfPain020_acq-b0_dwi.json
sub-unfPain020/dwi/sub-unfPain020_acq-dwiMean_dwi.json
sub-unfPain020/dwi/sub-unfPain020_acq-b0Mean.json
sub-unfPain021/dwi/sub-unfPain021_acq-b0Mean.json
sub-unfPain021/dwi/sub-unfPain021_acq-dwiMean_dwi.json
sub-unfPain021/dwi/sub-unfPain021_acq-b0_dwi.json
sub-unfPain022/dwi/sub-unfPain022_acq-dwiMean_dwi.json
sub-unfPain022/dwi/sub-unfPain022_acq-b0Mean.json
sub-unfPain022/dwi/sub-unfPain022_acq-b0_dwi.json
sub-unfPain023/dwi/sub-unfPain023_acq-b0Mean.json
sub-unfPain023/dwi/sub-unfPain023_acq-b0_dwi.json
sub-unfPain023/dwi/sub-unfPain023_acq-dwiMean_dwi.json
sub-unfPain024/dwi/sub-unfPain024_acq-dwiMean_dwi.json
sub-unfPain024/dwi/sub-unfPain024_acq-b0Mean.json
sub-unfPain024/dwi/sub-unfPain024_acq-b0_dwi.json
sub-unfPain025/dwi/sub-unfPain025_acq-b0Mean.json
sub-unfPain025/dwi/sub-unfPain025_acq-dwiMean_dwi.json
sub-unfPain025/dwi/sub-unfPain025_acq-b0_dwi.json
sub-unfPain026/dwi/sub-unfPain026_acq-b0_dwi.json
sub-unfPain026/dwi/sub-unfPain026_acq-dwiMean_dwi.json
sub-unfPain026/dwi/sub-unfPain026_acq-b0Mean.json
sub-unfPain027/dwi/sub-unfPain027_acq-b0Mean.json
sub-unfPain027/dwi/sub-unfPain027_acq-b0_dwi.json
sub-unfPain027/dwi/sub-unfPain027_acq-dwiMean_dwi.json
sub-unfPain028/dwi/sub-unfPain028_acq-b0Mean.json
sub-unfPain028/dwi/sub-unfPain028_acq-b0_dwi.json
sub-unfPain028/dwi/sub-unfPain028_acq-dwiMean_dwi.json
sub-unfPain029/dwi/sub-unfPain029_acq-dwiMean_dwi.json
sub-unfPain029/dwi/sub-unfPain029_acq-b0_dwi.json
sub-unfPain029/dwi/sub-unfPain029_acq-b0Mean.json
sub-unfPain030/dwi/sub-unfPain030_acq-b0_dwi.json
sub-unfPain030/dwi/sub-unfPain030_acq-dwiMean_dwi.json
sub-unfPain030/dwi/sub-unfPain030_acq-b0Mean.json
sub-unfPain031/dwi/sub-unfPain031_acq-b0Mean.json
sub-unfPain031/dwi/sub-unfPain031_acq-b0_dwi.json
sub-unfPain031/dwi/sub-unfPain031_acq-dwiMean_dwi.json
sub-unfPain034/dwi/sub-unfPain034_acq-b0_dwi.json
sub-unfPain034/dwi/sub-unfPain034_acq-dwiMean_dwi.json
sub-unfPain034/dwi/sub-unfPain034_acq-b0Mean.json
sub-unfSCT003/dwi/sub-unfSCT003_acq-b0_dwi.json
sub-unfSCT003/dwi/sub-unfSCT003_acq-b0Mean.json
sub-unfSCT003/dwi/sub-unfSCT003_acq-dwiMean_dwi.json
sub-unfSCT004/dwi/sub-unfSCT004_acq-dwiMean_dwi.json
sub-unfSCT004/dwi/sub-unfSCT004_acq-b0_dwi.json
sub-unfSCT004/dwi/sub-unfSCT004_acq-b0Mean.json
sub-unfSCT005/dwi/sub-unfSCT005_acq-b0Mean.json
sub-unfSCT005/dwi/sub-unfSCT005_acq-dwiMean_dwi.json
sub-unfSCT005/dwi/sub-unfSCT005_acq-b0_dwi.json
sub-unfSCT006/dwi/sub-unfSCT006_acq-b0Mean.json
sub-unfSCT006/dwi/sub-unfSCT006_acq-dwiMean_dwi.json
sub-unfSCT006/dwi/sub-unfSCT006_acq-b0_dwi.json
sub-unfSCT007/dwi/sub-unfSCT007_acq-b0Mean.json
sub-unfSCT007/dwi/sub-unfSCT007_acq-dwiMean_dwi.json
sub-unfSCT007/dwi/sub-unfSCT007_acq-b0_dwi.json
sub-unfSCT011/dwi/sub-unfSCT011_acq-b0_dwi.json
sub-unfSCT011/dwi/sub-unfSCT011_acq-dwiMean_dwi.json
sub-unfSCT011/dwi/sub-unfSCT011_acq-b0Mean.json
sub-unfSCT012/dwi/sub-unfSCT012_acq-b0Mean.json
sub-unfSCT012/dwi/sub-unfSCT012_acq-dwiMean_dwi.json
sub-unfSCT012/dwi/sub-unfSCT012_acq-b0_dwi.json
sub-unfSCT013/dwi/sub-unfSCT013_acq-dwiMean_dwi.json
sub-unfSCT013/dwi/sub-unfSCT013_acq-b0Mean.json
sub-unfSCT013/dwi/sub-unfSCT013_acq-b0_dwi.json
sub-unfSCT017/dwi/sub-unfSCT017_acq-b0Mean.json
sub-unfSCT017/dwi/sub-unfSCT017_acq-dwiMean_dwi.json
sub-unfSCT017/dwi/sub-unfSCT017_acq-b0_dwi.json
sub-xuanwuChenxi001/dwi/sub-xuanwuChenxi001_dwi.json
sub-xuanwuChenxi001/dwi/sub-xuanwuChenxi001_acq-b0_dwi.json
sub-xuanwuChenxi001/dwi/sub-xuanwuChenxi001_acq-b0Mean.json
sub-xuanwuChenxi002/dwi/sub-xuanwuChenxi002_dwi.json
sub-xuanwuChenxi002/dwi/sub-xuanwuChenxi002_acq-b0Mean.json
sub-xuanwuChenxi002/dwi/sub-xuanwuChenxi002_acq-b0_dwi.json

HCI Stress Testing

We are unsure about the performance of our storage system. We want to measure I/O performance to duke.neuro.polymtl.ca.

There are several factors:

  • which disk we're writing to (represented by different mountpoints)
  • whether we're writing from
    • locally on duke
    • remotely on the neuropoly LAN
      • over smb://
        • from Windows
        • from linux's mount -t cifs
      • over afp://
    • remotely over the VPN
      • over smb://
        • from Windows
        • from linux's mount -t cifs
      • over afp://
  • the I/O pattern (see the fio sketch after this list):
    • large bulk writes
    • large bulk reads
    • many small writes
    • many small reads
    • interleaved bulk/small writes
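
A minimal sketch of how one cell of this test matrix could be measured with fio (the mountpoint, sizes, and job counts here are assumptions to adapt per target):

# large bulk write, with a final fsync so the number reflects the disk/network:
fio --name=bulk-write --directory=/mnt/duke --rw=write --bs=1M --size=4G --end_fsync=1
# many small random reads, four concurrent jobs:
fio --name=small-read --directory=/mnt/duke --rw=randread --bs=4k --size=1G --numjobs=4 --group_reporting

Running the same pair of commands from each client/protocol combination above would fill in the matrix.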

internal server: permissions

We need to figure out how to do protected branches and protected repos on gitolite. This is a prerequisite for #19, and for me feeling good enough about #22 to close it. I can do this with gitolite's permission system: https://gitolite.com/gitolite/conf.html#access-rules but I need to read it closely and test it out.

Probably we want people to share repos and just work in private branches -- the way we often do here, e.g. in @neuropoly/spinal-cord-toolbox -- to avoid exploding the storage on the server. That means we need a way to grant access to personal branches while locking down the trunk. (Though there's gitolite fork, run as ssh [email protected] fork, which behaves like GitHub forks; does that mean it shares the same bug as GitHub, where forks share each other's commits and trees?)
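
A rough sketch of what this could look like in conf/gitolite.conf (group names and refexes here are placeholders; gitolite evaluates rules in order, so the deny on master has to come before any broader allow):

@creators = alice
@writers  = bob carol

repo datasets/..*
    RW+ dev/    = @writers    # personal/topic branches
    -   master  = @writers    # no direct pushes to the trunk
    RW+         = @creators   # maintainers can rewrite anything
    R           = @all        # everyone can clone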

#19 will involve documenting how these restrictions fit into a cli-and-chat-based pull request system.

uk-biobank: 7 missing subjects

HEADS UP: this issue is about a private dataset covered by PII protections. Make doubly-sure not to post the content of any files on this thread.

These subjects are missing from datasets/uk-biobank:

sub-1007019
sub-1016641
sub-1020747
sub-1022968
sub-1025700
sub-1039546
sub-1040420

I have the folders, imported from our SMB share, but they are empty, and since git cannot record empty directories they were never committed; as of the current master they are simply missing when people clone the dataset.
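
For reference, a quick way to list empty subject folders before an import (a sketch, run from the dataset root):

find . -maxdepth 1 -type d -name 'sub-*' -empty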

What happened here?

Discovered in collaboration with @alexfoias.

Double size of dataset after cloning

The size of the folder is double what is expected.

After my latest cloning:

u111358@rosenberg:~/data_nvme_u111358/ivado-project/Datasets$ du -sh sct-testing-large/
37G     sct-testing-large/

In the past, we achieved a smaller size: [screenshot omitted]

We had this issue before, but the workaround is not documented as a standard approach at: https://github.com/neuropoly/data-management/blob/ng/gitolite/internal-server.md

I think we went around it with something like:
git clone --recurse-submodules [email protected]:konstantinos/ivado-project

Can you confirm?
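
One plausible explanation: locked git-annex files are symlinks into .git/annex/objects, so they cost nothing extra, but unlocked files are full copies of the annexed content, which doubles the on-disk size unless annex.thin is set. A quick check (paths assumed):

du -sh sct-testing-large/.git/annex/objects   # size of the annex content store
git -C sct-testing-large annex info --fast    # what git-annex thinks is present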

implement CI for uk-biobank

@kousu Is it possible to implement CI for the uk-biobank repo?

Aspects to check (a shell sketch follows the list):

  • check if all subjects listed in participants.tsv have data folders
  • check if all subjects have T1w, T2w images and json sidecar
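
A minimal sketch of such a check (assuming participant_id is the first column of participants.tsv and that images live under sub-*/anat/; the -L test is there so a not-yet-downloaded annex symlink still counts as present):

#!/bin/sh
status=0
for sub in $(awk -F'\t' 'NR>1 {print $1}' participants.tsv); do
    # every subject listed in participants.tsv must have a folder...
    [ -d "$sub" ] || { echo "missing folder: $sub"; status=1; }
    # ...containing T1w/T2w images and their JSON sidecars
    for f in "$sub/anat/${sub}_T1w.nii.gz" "$sub/anat/${sub}_T1w.json" \
             "$sub/anat/${sub}_T2w.nii.gz" "$sub/anat/${sub}_T2w.json"; do
        [ -e "$f" ] || [ -L "$f" ] || { echo "missing file: $f"; status=1; }
    done
done
exit $status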

rolling clean of gitolite trash

Docs for gitolite D say:

The 'trash', 'list-trash', and 'restore' subcommands:
You can 'trash' a repo, which moves it to a special place:
ssh git@host D trash repo
You can then 'list-trash'
ssh git@host D list-trash
which prints something like
repo/2012-04-11_05:58:51
allowing you to restore by saying
ssh git@host D restore repo/2012-04-11_05:58:51

This is probably better than rm, but it's unhelpful if the trash never gets erased. So I think we should set up a cronjob that runs once a day and deletes all repos trashed more than $n days ago.

Also maybe send an email (i.e. print to stdout) about repos in the trash at the same time -- the repos left over after the daily cleanup, that are on the chopping block for the coming week.
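
A sketch of the git user's crontab for this (the trash location is an assumption; check TRASH_CAN in ~/.gitolite.rc. Trashed repos live at repo/<timestamp>, hence the depth-2 match):

# daily: delete anything trashed more than 7 days ago
0 3 * * * find /home/git/trash -mindepth 2 -maxdepth 2 -mtime +7 -exec rm -rf {} +
# then print (cron mails stdout) whatever is still on the chopping block
5 3 * * * find /home/git/trash -mindepth 2 -maxdepth 2 -print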

Document `bids-validator`

BIDS and bids-validator are tightly related to how we manage our data, but we neither explain nor link to them anywhere in these docs.

I don't know where they belong but someone probably has an idea.

Data reported as modified even though I did not modify them

I've mounted duke:sct_testing, and then ran:

julien-macbook:/Volumes/sct_testing/test/Datalad-dummy_dataset $ datalad status
 modified: sub-amuAMU15001/anat/sub-amuAMU15001_T2star.nii.gz (file)
 modified: sub-amuAMU15002/anat/sub-amuAMU15002_T2star.nii.gz (file)
 modified: sub-amuAMU15003/anat/sub-amuAMU15003_T2star.nii.gz (file)

As you can see, the three files are reported as modified, even though I did not change them. What could be the cause? A file system issue?
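
If these are false positives from stale stat information, which is plausible over an SMB mount where inodes and timestamps may not match what git recorded, refreshing the index should clear them:

git update-index -q --refresh
datalad status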

ssh: Could not resolve hostname data.neuro.polymtl.ca: Temporary failure in name resolution

While trying to connect to data.neuro.polymtl.ca, following the setup steps for the internal server, the following error occurred:

(base) sabeda@DESKTOP-JQ8A4UV:/mnt/c/Users/sb199/Projet3$ ssh [email protected] help
ssh: Could not resolve hostname data.neuro.polymtl.ca: Temporary failure in name resolution

I am using WSL with Ubuntu 18.04.
ping data.neuro.polymtl.ca worked, so the VPN connection was OK.

UPDATE: rebooting the computer solved this issue.

Cancelling sync --content leaves tree in laggy state

It seems that if you do:

git annex sync --content
[...]
^C

i.e. cancel a large download, then you end up with files in the same state as

https://github.com/spine-generic/spine-generic/wiki/git-annex%3A-Troubleshooting#a-cosmetic-problem-affecting-git-status

The symptoms are:

  • git status shows a huge number of modified files
  • a second git annex sync hangs

The fix is the same, refreshing the stat information cached in the index so git re-checks file contents instead of trusting stale timestamps:

git status | sed -n 's/modified://p' | xargs git update-index -q --refresh

uk-biobank: invalid authors

$ /usr/local/bin/bids-validator .; echo $?
[email protected]

	1: [WARN] The Authors field of dataset_description.json should contain an array of fields - with one author per field. This was triggered based on the presence of only one author field. Please ignore if all contributors are already properly listed. (code: 102 - TOO_FEW_AUTHORS)

	Please visit https://neurostars.org/search?q=TOO_FEW_AUTHORS for existing conversations about this issue.


        Summary:                  Available Tasks:        Available Modalities: 
        1404 Files, 9.75GB                                T1w                   
        350 - Subjects                                    T2w                   
        1 - Session                                                             


	If you have any questions, please post on https://neurostars.org/tags/bids.

This is because our dataset says

$ cat dataset_description.json 
{
    "Name": "Minimalistic UKBioBank Dataset",
    "BIDSVersion": "1.4.1",
    "Authors": [
        "NeuroPoly"]
}

This is both wrong and it annoys bids-validator. Can we write "UK Biobank" in there? Maybe a URL? A DOI? We can also add our own names at the end, since we're definitely part of the provenance of this dataset now that we're bidsifying it.
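
Something along these lines would satisfy the validator (the exact author strings are placeholders to be agreed on):

{
    "Name": "Minimalistic UKBioBank Dataset",
    "BIDSVersion": "1.4.1",
    "Authors": [
        "UK Biobank",
        "NeuroPoly Lab, Polytechnique Montreal"
    ]
}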

Bad header info for nifti files in uk-biobank dataset

Description

I git cloned the uk-biobank dataset on joplin to test preprocessing steps, but it couldn't work because it was unable to open the NIfTI files.
When I try to read the header of a NIfTI file from the dataset I get this:

(base) sebeda@joplin:~/uk-biobank/sub-1000032/anat$ fslhd sub-1000032_T1w.nii.gz
** ERROR (nifti_image_read): bad binary header read for file 'sub-1000032_T1w.nii.gz'
  - read 106 of 348 bytes
** ERROR: nifti_image_open(sub-1000032_T1w): bad header info
ERROR: failed to open file sub-1000032_T1w
ERROR: Could not open file

Can somebody check if they can reproduce this error? Maybe I downloaded the data the wrong way; I followed https://github.com/neuropoly/data-management/blob/master/internal-server.md#download
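
One plausible cause worth ruling out: the clone contains git-annex placeholders rather than the actual image data. A pointer file of roughly 106 bytes of text would explain fslhd reading only 106 of 348 header bytes. A quick check (sketch):

file sub-1000032_T1w.nii.gz               # "symbolic link" or "ASCII text" means placeholder
git annex whereis sub-1000032_T1w.nii.gz  # where the real content lives
git annex get sub-1000032_T1w.nii.gz      # fetch it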

git-annex: Repository version 8 is not supported. Upgrade git-annex.

On joplin:

p101317@joplin:~/duke/sct_testing/test/Datalad-dummy_dataset$ datalad save
CommandError: command '['git-annex', 'add', '-c', 'annex.dotfiles=true', '--json', '--json-error-messages', '--include-dotfiles', '--', 'participants.tsv']' failed with exitcode 1
Failed to run ['git-annex', 'add', '-c', 'annex.dotfiles=true', '--json', '--json-error-messages', '--include-dotfiles', '--', 'participants.tsv'] under '/home/GRAMES.POLYMTL.CA/p101317/duke/sct_testing/test/Datalad-dummy_dataset'. Exit code=1.
git-annex: Repository version 8 is not supported. Upgrade git-annex.
p101317@joplin:~/duke/sct_testing/test/Datalad-dummy_dataset$ datalad --version
datalad 0.12.4
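
In case it helps, one way to get a recent git-annex (and a matching datalad) without root, assuming conda is available on joplin:

conda install -c conda-forge git-annex datalad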

drop user namespaces

I enabled CREATOR/..* so that everyone would have a local namespace of data, like on GitHub.

But Julien explained we don't want to keep projects on this server, just data, and in light of that I don't think there's really a use-case for these anymore. Drop them from the config and the docs.

Migration

We need to migrate datasets off smb://duke.neuro.polymtl.ca and onto git+ssh://data.neuro.polymtl.ca.

I imagine both will live on for a while, but we want to prefer the git server to:

a. save space by using branching instead of duplicating entire datasets
b. have provenance tracking

To do this we need to (I think):

  1. set up permissions (#27) to replace ActiveDirectory permissions
    • this will mean we can self-manage permissions, which will be nice; but also it's an extra responsibility, so we should probably have some auditing scripts too
  2. De-duplicate the duplicated datasets
    • this is the hardest and slowest part
  3. Make each (deduplicated) dataset into BIDS format (just to be sure)
  4. Migrate each dataset to git-annex (see the sketch after this list)
  5. Upload each dataset to the server
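
A rough shape of steps 4 and 5 for one dataset (the repo name and remote URL are placeholders):

cd my-dataset
git init
git annex init
git annex add .                  # annex the large files
git commit -m "Initial BIDS import"
git remote add origin git@data.neuro.polymtl.ca:datasets/my-dataset
git annex sync --content         # push both the git history and the annexed content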
