humanbrainproject / neuroglancer-scripts
Conversion of neuroimaging data for display in Neuroglancer
License: MIT License
When I produce precomputed files from PNG files, the program takes up a lot of memory. Is there any way to reduce memory consumption?
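Not an official answer, but one common way to cap memory in this kind of conversion is to stack only a block of slices at a time (ideally matching the chunk depth) instead of loading every PNG up front. A minimal sketch; `read_slice` is a placeholder for whatever loader is used (e.g. `skimage.io.imread`), not part of neuroglancer-scripts:

```python
import numpy as np

def iter_slice_blocks(slice_filenames, read_slice, block_size=64):
    """Yield (z_offset, block) pairs, stacking at most block_size slices
    in memory at a time instead of loading every file up front."""
    for z in range(0, len(slice_filenames), block_size):
        names = slice_filenames[z:z + block_size]
        # Only this block is resident; it can be written out as chunks
        # and garbage-collected before the next block is read.
        block = np.stack([read_slice(n) for n in names])
        yield z, block
```

Each block can be converted to chunks and freed before the next block is read, so peak memory is bounded by `block_size` slices rather than the whole stack.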
Since Windows path syntax is not a subset of URL syntax, urllib.parse.urlsplit mishandles such paths, leading to exceptions further on.
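The failure mode can be seen without the scripts at all: `urlsplit` takes the Windows drive letter for a URL scheme, so any downstream code that dispatches on the scheme (`file://`, `http://`, ...) mis-handles the path. A minimal illustration:

```python
from urllib.parse import urlsplit

# A Windows path is not valid URL syntax: urlsplit interprets the
# drive letter as a (lowercased) URL scheme.
parts = urlsplit(r"C:\data\mychunks")
print(parts.scheme)  # "c" -- the drive letter, not a real scheme
print(parts.path)    # "\data\mychunks"
```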
The version on PyPI (0.1.0) has a bug in which options are not passed correctly between functions (it appears to have been fixed 5 months ago, in f0e1e79).
It would be easier to use if an up-to-date version were on PyPI. Thanks!
Reproduce the error:
git clone https://github.com/HumanBrainProject/neuroglancer-scripts
cd neuroglancer-scripts
python3 -m venv venv/
. venv/bin/activate
pip install neuroglancer-scripts
mv ~/mygii.gii ./
mesh-to-precomputed \
--coord-transform=-1,0,0,0,1,0,0,0,1 \
mygii.gii \
./mymesh
Digging into the source code, in the GitHub repo:
mesh_to_precomputed.py
def mesh_file_to_precomputed(args):
    # truncated
    triangles = triangles_list[0].data
    if (coord_transform is not None
            and np.linalg.det(coord_transform[:3, :3]) < 0):
        # Flip the triangles to fix inside/outside
        triangles = np.flip(triangles, axis=1)
    # truncated
whereas in the pip-installed package:
mesh_to_precomputed.py
def mesh_file_to_precomputed(input_filename, output_filename,
                             coord_transform=None):
    # truncated
    if coord_transform is not None:
        if coord_transform.shape[0] == 4:
            assert np.all(coord_transform[3, :] == [0, 0, 0, 1])
        points = points.T
        points = np.dot(coord_transform[:3, :3], points)
        points += coord_transform[:3, 3, np.newaxis]
        points = points.T
        if np.linalg.det(coord_transform[:3, :3]) < 0:
            # Flip the triangles to fix inside/outside
            triangles = np.flip(triangles, axis=1)
    triangles_list = mesh.get_arrays_from_intent("NIFTI_INTENT_TRIANGLE")
    assert len(triangles_list) == 1
    triangles = triangles_list[0].data
    # truncated
Is there interest in adding profiling functionality to neuroglancer-scripts (similar to scale-stats)?
Use case: we would like to be able to
1/ monitor the "liveliness" of a VM serving precomputed volumes
2/ simulate average use cases (e.g. 10 concurrent users) and test the performance (average/median/min/max response time) of image services
I would imagine the usage would be something like:
# N.B. not yet implemented
profile-precomputed [-c/--concurrent int=1] \
[-n int=1] \
[-t/--threshold_ms int=-1] \
PRECOMPUTED_URL1 [PRECOMPUTED_URL2] ...
I would imagine that for each iteration the script would select a random x, y, z and level, then fetch volumes at ±3 levels, with an appropriate bounding box (bounded by the limits of the volume).
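The core of such a profiler could be quite small. A sketch of the timing/statistics part; `profile_fetches` and its `fetch` callback are hypothetical names, and a real implementation would additionally build random chunk URLs from the dataset's info file:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def profile_fetches(fetch, urls, concurrent=1):
    """Run fetch(url) for each URL with `concurrent` workers and
    report simple latency statistics in milliseconds."""
    def timed(url):
        t0 = time.perf_counter()
        fetch(url)  # caller-supplied, e.g. an HTTP GET of a chunk
        return (time.perf_counter() - t0) * 1000.0

    with ThreadPoolExecutor(max_workers=concurrent) as pool:
        latencies = list(pool.map(timed, urls))
    return {
        "min": min(latencies),
        "median": statistics.median(latencies),
        "mean": statistics.mean(latencies),
        "max": max(latencies),
    }
```

The `-t/--threshold_ms` option would then be a simple comparison against these statistics (e.g. fail if the median exceeds the threshold).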
We would be happy to put in the work for this utility, as we foresee needing to create it in any case.
The question is: is neuroglancer-scripts a good/correct place for such a utility?
Would love to hear your thoughts @ylep
Despite #15, it seems that some NIfTI software (e.g. the Fiji NIfTI plugin) stores the RGB channel on the 3rd index rather than the 4th. The related PR #23 is supposed to fix this, but it would be more suitable to clarify how RGB is encoded.
Unfortunately, this feature is outside of my capacity at the moment. Perhaps I can pick it up in a few months.
Whilst writing a new feature (#35), I noticed that this package still supports Python versions as old as 3.5.
Whilst I was able to dance around a few restrictions, the lack of f-string support would mean rewriting a large chunk of code. Whilst f-strings are mostly cosmetic, I do wonder if it is possible to remove py3.5 from the supported list, since it is well over 3 years past its EOL.
ping @ylep
When using the slices-to-precomputed command (example below), I get a nested directory structure of files instead of a flat file structure.
slices-to-precomputed slices output --flat
where slices and output are both directories.
Running compute-scales outputs the files in a flat structure without any problems.
Directory tree of output:
tree.txt
Hi,
I am trying to set up Apache (Apache/2.4.37, CentOS) for file serving, but somehow I cannot get the .gz rewriting to work for the precomputed dataset. Chrome DevTools shows 404s for the URLs without the .gz extension. Could I be missing something here? I am following this example: google/neuroglancer#357
Thank you,
-m
https://neuroglancer-scripts.readthedocs.io/en/latest/serving-data.html
# If you get a 403 Forbidden error, try to comment out the Options directives
# below (they may be disallowed by your server's AllowOverride setting).
<IfModule headers_module>
# Needed to use the data from a Neuroglancer instance served from a
# different server (see http://enable-cors.org/server_apache.html).
Header set Access-Control-Allow-Origin "*"
</IfModule>
# Data chunks are stored in sub-directories, in order to avoid having
# directories with millions of entries. Therefore we need to rewrite URLs
# because Neuroglancer expects a flat layout.
Options FollowSymLinks
RewriteEngine On
RewriteRule "^(.*)/([0-9]+-[0-9]+)_([0-9]+-[0-9]+)_([0-9]+-[0-9]+)$" "$1/$2/$3/$4"
# Microsoft filesystems do not support colons in file names, but pre-computed
# meshes use a colon in the URI (e.g. 100:0). As :0 is the most common (only?)
# suffix in use, we will serve a file that has this suffix stripped.
RewriteCond "%{REQUEST_FILENAME}" !-f
RewriteRule "^(.*):0$" "$1"
<IfModule mime_module>
# Allow serving pre-compressed files, which can save a lot of space for raw
# chunks, compressed segmentation chunks, and mesh chunks.
#
# The AddType directive should in theory be replaced by a "RemoveType .gz"
# directive, but with that configuration Apache fails to serve the
# pre-compressed chunks (confirmed with Debian version 2.2.22-13+deb7u6).
# Fixes welcome.
Options Multiviews
AddEncoding x-gzip .gz
AddType application/octet-stream .gz
</IfModule>
scikit-image's io.imread function expects a string for the filename input. It uses that string to determine whether the file is a TIFF, and if it is, it uses the tifffile package instead of Pillow. Pillow does not support uint64 TIFFs.
When loading images, a Path object is used instead of a string, which causes an exception for uint64 TIFF files: https://github.com/HumanBrainProject/neuroglancer-scripts/blob/master/src/neuroglancer_scripts/scripts/slices_to_precomputed.py#L142-L143
If you replace the above lines with the following, uint64 images load correctly in scikit-image:
block = skimage.io.concatenate_images(
    map(skimage.io.imread, [str(f) for f in slice_filenames[slice_slicing]]))
When following the guide on converting a .nii file to Neuroglancer precomputed segmentations, I ran into a case where the .nii file is encoded in float. While Neuroglancer can display the converted chunks (hovering shows the right label), the segments do not have the colour map applied.
The .nii had to be converted to one of the uint encodings (in my case, uint8).
Would it be wise to log a warning if the user has the --type=segmentation flag on and the .nii file is encoded in float?
Related: it would be cool if the encoding conversion could be done by the script (or by a separate library, if there is a need for one).
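As a sketch of what the warning could look like (the function name and message are illustrative, not the scripts' actual API), the check only needs the volume's NumPy dtype:

```python
import logging
import numpy as np

logger = logging.getLogger(__name__)

def check_segmentation_dtype(dtype):
    """Return True if dtype is usable for a segmentation layer;
    log a warning for floating-point label volumes."""
    dtype = np.dtype(dtype)
    if np.issubdtype(dtype, np.floating):
        logger.warning(
            "--type=segmentation with floating-point data (%s): "
            "Neuroglancer will not colour-map the labels; convert "
            "the volume to an unsigned integer type (e.g. uint8)",
            dtype,
        )
        return False
    return True
```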
Hi, great package :) I have a large .nii volume which will not fit in 260 GB of RAM. Using mmap is not an option, since that would take more than a month per volume on my workstation. I am wondering whether something in between mmap and loading the full volume into memory is possible, e.g. loading and chunking portions of the volume (or manually specifying the maximum available memory). Thank you in advance :)
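Not an answer from the maintainers, but one possible middle ground is to read the volume slab by slab: nibabel's `img.dataobj` array proxy supports sliced reads, so only the requested slab has to fit in RAM. The slab bookkeeping itself is simple; `iter_slabs` below is a hypothetical helper, not part of neuroglancer-scripts:

```python
def iter_slabs(shape, slab_depth=64):
    """Yield slice tuples selecting consecutive z-slabs of a 3D volume,
    so each slab can be read, chunked, and freed independently."""
    nz = shape[2]
    for z0 in range(0, nz, slab_depth):
        yield (slice(None), slice(None), slice(z0, min(z0 + slab_depth, nz)))

# Hypothetical usage with nibabel's lazy array proxy:
# import nibabel as nib
# img = nib.load("big_volume.nii")
# for slab_slices in iter_slabs(img.shape, slab_depth=64):
#     slab = img.dataobj[slab_slices]  # reads only this slab from disk
#     ...  # write precomputed chunks for the slab, then let it go
```

Choosing `slab_depth` as a multiple of the chunk depth would let each slab be converted to whole chunks without re-reading.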
Currently, NIfTI files with data type RGB24, and TIFFs with RGB, are not converted properly.
Ideally, they would be converted into type: image, channel: 3.
Edit: it seems RGB TIFF is not the issue, but rather LZW compression.
as of 1.0.0
min repro
volume-to-precomputed-pyramid rgb.nii.gz ./out/
expected behaviour:
converts rgb nifti to 3 channel volume
actual behaviour:
exits, confused about why void24 is the data_type of the input NIfTI
comment:
apologies, this one escaped me. I will provide a PR & add tests.
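For context (my own observation, not taken from the scripts' code): "void24" is how NumPy presents NIfTI's 3-byte RGB24 samples once the field names are lost. nibabel typically exposes such data as a structured array with R/G/B fields, which can be re-viewed as a plain uint8 channel axis; a sketch with synthetic data standing in for the nibabel output:

```python
import numpy as np

# NIfTI RGB24 data as nibabel typically exposes it: one 3-byte
# structured element per voxel (itemsize 3, hence "void24" when the
# field names are stripped).
rgb_dtype = np.dtype([("R", "u1"), ("G", "u1"), ("B", "u1")])
vol = np.zeros((4, 4, 2), dtype=rgb_dtype)
vol["R"][0, 0, 0] = 255

# Re-view the 3-byte elements as a trailing uint8 channel axis,
# which is what a `type: image`, 3-channel precomputed volume needs.
channels = vol.view(np.uint8).reshape(vol.shape + (3,))
print(channels.shape)  # (4, 4, 2, 3)
```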
I haven't yet figured out exactly why this is happening, but the progress bar in slices-to-precomputed is causing the script to fail prematurely for me.
When called with a relatively simple input:
slices-to-precomputed.exe data\ output\ --flat
Here's the stack trace:
Exception ignored in: <bound method tqdm.__del__ of writing chunks: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 90/90 [00:18<00:00, 3.59chunks/s]>
Traceback (most recent call last):
File "/mnt/c/Users/ben/Documents/AB_MouseRegistrationTests_MSGSW171registrationtest_0_2160_0_2560_0_16_Ch0_1/env/lib/python3.5/site-packages/tqdm/_tqdm.py", line 889, in __del__
self.close()
File "/mnt/c/Users/ben/Documents/AB_MouseRegistrationTests_MSGSW171registrationtest_0_2160_0_2560_0_16_Ch0_1/env/lib/python3.5/site-packages/tqdm/_tqdm.py", line 1095, in close
self._decr_instances(self)
File "/mnt/c/Users/ben/Documents/AB_MouseRegistrationTests_MSGSW171registrationtest_0_2160_0_2560_0_16_Ch0_1/env/lib/python3.5/site-packages/tqdm/_tqdm.py", line 454, in _decr_instances
cls.monitor.exit()
File "/mnt/c/Users/ben/Documents/AB_MouseRegistrationTests_MSGSW171registrationtest_0_2160_0_2560_0_16_Ch0_1/env/lib/python3.5/site-packages/tqdm/_monitor.py", line 52, in exit
self.join()
File "/usr/lib/python3.5/threading.py", line 1051, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
When it fails in this way, the run is nearly complete, but it never gets to creating the individual .gz files for the base resolution (it has all the folders for each "block" of data).
If I comment out the progress-bar lines in the loop in slices_to_raw_chunks, the script completes without error.