
nd2reader

About

nd2reader is a pure-Python package that reads images produced by NIS Elements 4.0+. It has only been definitively tested on NIS Elements 4.30.02 Build 1053. Support for older versions is being actively worked on. The reader is built on the pims framework, enabling easy access to multidimensional files, lazy slicing, and nice display in IPython.

Documentation

The documentation is available here.

Installation

The package is available on PyPI. Install it using:

pip install nd2reader

If you don't already have the packages numpy, pims, six and xmltodict, they will be installed automatically if you use the setup.py script. Python >= 3.5 is supported.

Installation via Conda Forge

Installing nd2reader from the conda-forge channel can be achieved by adding conda-forge to your channels with:

conda config --add channels conda-forge

Once the conda-forge channel has been enabled, nd2reader can be installed with:

conda install nd2reader

It is possible to list all of the versions of nd2reader available on your platform with:

conda search nd2reader --channel conda-forge

ND2s

nd2reader follows the pims framework. To open a file and show the first frame:

from nd2reader import ND2Reader
import matplotlib.pyplot as plt

with ND2Reader('my_directory/example.nd2') as images:
    plt.imshow(images[0])

After opening the file, all pims features are supported. Please refer to the pims documentation.

Backwards compatibility

Older versions of nd2reader do not use the pims framework. To provide backwards compatibility, a legacy Nd2 class is provided.

Contributing

If you'd like to help with the development of nd2reader or just have an idea for improvement, please see the contributing page for more information.

Bug Reports and Features

If this fails to work exactly as expected, please open an issue. If you get an unhandled exception, please paste the entire stack trace into the issue as well.

Acknowledgments

PIMS-based modified version by Ruben Verweij.

Original version by Jim Rybarski. Support for the development of this package was partially provided by the Finkelstein Laboratory.

nd2reader's Issues

Z resolution (sampling over Z)

Hello there and thanks for this awesome package! I went through the documentation and the code, and the only thing I could not find is how to extract the resolution in the third dimension from the metadata. Did I miss something? Can it be easily added as a new feature? Thanks! :)
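One possible workaround, assuming the parsed metadata exposes a z_coordinates list of stage positions in micron (recent nd2reader versions do): estimate the Z sampling interval from the differences between successive positions. A stdlib-only sketch with made-up coordinates:

```python
from statistics import mean

def z_step_microns(z_coordinates):
    """Estimate the Z sampling interval (micron) from successive stage positions."""
    diffs = [b - a for a, b in zip(z_coordinates, z_coordinates[1:])]
    return mean(diffs)

# Hypothetical stage positions with a 0.6 micron spacing:
z = [2144.4, 2145.0, 2145.6, 2146.2]
print(round(z_step_microns(z), 3))  # 0.6
```

This only makes sense for regularly sampled stacks; for irregular sampling the per-slice differences themselves are the more useful quantity.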

nd2reader with NIS 3 InvalidVersionError

Dear all,

I am new to Python. I have the NIS software on my microscope computer, but it is version 3.
I am trying to use the nd2reader package to open my nd2 files.
However, I get this error:

File "C:\Users\jammes\Anaconda3\envs\PhD_FJ_Python\lib\site-packages\nd2reader\parser.py", line 129, in _check_version_supported
raise InvalidVersionError("No parser is available for that version.")

InvalidVersionError: No parser is available for that version.

I guess it is because it is an nd2 file from NIS 3 instead of NIS 4.

Is there any fix for that?

Thanks in advance :)

Bug in attempt to remove unwanted bytes

Hi everyone,

first, thanks for this great library!

I think the recent PR #40 by @ggirelli introduced a bug in the library, as I'm no longer able to read some multi-channel files. Specifically, when I open a file with nd2file = ND2Reader(fh) and try to get a given frame, the first channel works fine with nd2file.get_frame_2D(c=0), but other channels like nd2file.get_frame_2D(c=1) fail with an error stating:

AssertionError: An unexpected number of extra bytes was encountered based on the expected frame size, therefore the file could not be parsed.

I had a look in the code and I think the offending line is here: https://github.com/rbnvrw/nd2reader/blob/df6d70e7d46dc4c97b99ab9e6f856d9333648650/nd2reader/stitched.py#L13

This line measures whether there are any "remaining" pixels given the expected width and height of the image. The problem is that the first image_data_start elements are skipped when measuring the image size, and image_data_start depends on the chosen channel, as defined here:
https://github.com/rbnvrw/nd2reader/blob/df6d70e7d46dc4c97b99ab9e6f856d9333648650/nd2reader/parser.py#L269

where 4 accounts for the standard 4 elements to skip (they correspond to the time stamp) and channel_offset is just the channel index.

Let's imagine the case of a 10x10 image with 2 channels and with no pixels to remove. Here:
len(image_group_data)=4+2*10*10 = 204.
If we consider the case c=0 then:
image_data_start = 4 + 0 = 4 and therefore (len(image_group_data[4:])) % (height * width) = 0.
However if we now consider c=1 then:
image_data_start = 4 + 1 = 5 and therefore (len(image_group_data[5:])) % (height * width) = 99.

This is a long-winded explanation to say that I believe the offending line should simply be changed to:
n_unwanted_bytes = (len(image_group_data[4:])) % (height * width)
just like in the preceding one. I don't think that figuring out whether there are pixels to remove should depend on the selected channel. When I modified the code in this way, I was again able to read all channels.

I can open a PR if needed but it's probably easier if one of the maintainers checks if that's correct and directly fixes this tiny error.
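The arithmetic in the example above can be checked directly; a standalone sketch (not the library code) with the 10x10, two-channel case:

```python
# 10x10 image, 2 channels, 4 header elements (the time stamp), no unwanted bytes.
height, width, channels = 10, 10, 2
image_group_data = [0] * (4 + channels * height * width)  # len == 204

for c in (0, 1):
    image_data_start = 4 + c  # channel_offset == c
    buggy = len(image_group_data[image_data_start:]) % (height * width)
    print(c, buggy)  # c=0 -> 0, but c=1 -> 99: a spurious "unwanted bytes" count

# Skipping only the fixed 4-element header is channel-independent:
fixed = len(image_group_data[4:]) % (height * width)
print(fixed)  # 0, regardless of the selected channel
```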

Error while parsing partially saved nd2 files

The reader works well for most nd2 files, however some of my nd2 files are not parsed correctly and I get an error:
EmptyFileError: No axes were found for this .nd2 file.

I think the error occurs during parsing from the end of the file (in _build_label_map):

# go 8 bytes back from file end
self._fh.seek(-8, 2) 
chunk_map_start_location = struct.unpack("Q", self._fh.read(8))[0]

where for the files that fail: chunk_map_start_location is assigned to 0.

I suspect that this happens with nd2 files that had an error during acquisition, so the end of the file might be corrupt. However, since nd2 files hold multiple images it means that there are many valuable images in the same file.

The Bio-Formats reader in ImageJ parses and opens those (maybe corrupt) files correctly.
I will be happy to send you an example file, the smallest I have that fails is ~8GB.
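Not part of the original report, but the failure mode is easy to reproduce with the stdlib: _build_label_map trusts the last 8 bytes of the file, and a truncated file can leave that offset reading as 0. A sketch with in-memory stand-ins for the file handle:

```python
import io
import struct

def read_chunk_map_start(fh):
    """Read the uint64 stored in the last 8 bytes, as _build_label_map does."""
    fh.seek(-8, 2)  # go 8 bytes back from the end of the file
    return struct.unpack("Q", fh.read(8))[0]

healthy = io.BytesIO(b"\x00" * 100 + struct.pack("Q", 4242))
truncated = io.BytesIO(b"\x00" * 108)  # corrupt tail: the offset reads as 0

print(read_chunk_map_start(healthy))    # 4242
print(read_chunk_map_start(truncated))  # 0 -> likely a partially saved file
```

A reader that wants to salvage such files would have to fall back to scanning for chunk headers from the start instead of trusting this offset, which is presumably what Bio-Formats does.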

Issue with time metadata

I am unable to extract the acquisition time (in ms) for each frame. With pims_nd2, I can extract this info with:

frames[0].metadata['t_ms'] # time of frame in milliseconds

As I do a lot of time-based imaging, this info is essential. Is there a way to get this metadata with nd2reader?

Thanks!

Issue to open stitched RGB nd2 files (related to issue #17)

Hi @rbnvrw ,

I recently tried to open an nd2 image with nd2reader and I have an issue very similar to issue #17: when I try to open it with Bioformats or the PIMS nd2 reader, the image is distorted. In nd2reader, it seems that the image is skipping lines and the format is off.
It's an 8-bit RGB image in Elements (screenshot attached).
It becomes a float64 image in nd2reader (screenshot attached).
pims and Bioformats do open it as an 8-bit image, but it is distorted (screenshot attached).

Here is the file if by any chance you can take a look:

https://github.com/bioimage-analysis/Files/blob/master/error.nd2

File with large number of ROIs is not parsed correctly

Hello, I am not sure how to use GitHub very well, so I will just describe the problem and my solution.
In raw_metadata.py, line 245 reads:
number_of_timepoints = raw_roi_dict[six.b('m_vectAnimParams_Size')]
This causes the function to break down, so we replace it with:
number_of_timepoints = 0
and the function will run fine after this.

Plotting ROI's from nd2 files

Hi!

I'm trying to extract the ROI metadata from nd2 files, for ROIs I drew like this (screenshot attached).

After extracting the ROI metadata, the output is in a format I'm not familiar with (screenshot attached).

Is there a way to plot the ROIs so that the overall shape and position are preserved? Thanks for the help!

wrong number of z plane / time points

Hello,
I might be missing something, but a file with 1419 image planes (a 2D time sequence) is decoded as an image with 1419 z planes and 1419 time points.

from nd2reader import ND2Reader

with ND2Reader('../data/example.nd2') as images:
  print('Image size:', images.sizes)
  print("Number of images: ", len(images))

gives:

Image size: {'x': 512, 'y': 512, 't': 1419, 'z': 1419}
Number of images: 1419

The iter axis is also misidentified as z instead of t:

from nd2reader import ND2Reader

with ND2Reader('../data/example.nd2') as images:
    images.iter_axes = 'z' # instead  of 't'
    plt.imshow(images[3])

[ERROR] Library not working in Windows

I have tried to import your library in Windows and I am getting following error:

Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from nd2reader import ND2Reader
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\nd2reader\__init__.py", line 2, in <module>
    from nd2reader.reader import ND2Reader
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\nd2reader\reader.py", line 1, in <module>
    from pims import Frame
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\pims\__init__.py", line 1, in <module>
    from pims.api import *
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\pims\api.py", line 113, in <module>
    from pims_nd2 import ND2_Reader as ND2Reader_SDK
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\pims_nd2\__init__.py", line 1, in <module>
    from .nd2reader import ND2_Reader
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\pims_nd2\nd2reader.py", line 8, in <module>
    from . import ND2SDK as h
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\site-packages\pims_nd2\ND2SDK.py", line 22, in <module>
    nd2 = cdll.LoadLibrary(os.path.join(dlldir, 'v6_w32_nd2ReadSDK.dll'))
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 442, in LoadLibrary
    return self._dlltype(name)
  File "C:\Users\david\AppData\Local\Programs\Python\Python37\lib\ctypes\__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found

Channel handling?

I think the way channels are handled is a bit funny, at least when it comes to the get_frame_2D function. You ask the user to pass an index for the metadata["channels"] list. Within the function you pull the string name for the channel then feed that string into a function that turns the original list into a dictionary that is just channel_name: number. It is that number that is used to generate the eventual PIMS Frame.

I don't understand why there is this number->list->string->dictionary->number organization when you can just use the number in the first place.

I deleted this middle process from my source and it now works exactly like I would expect it to.

I ran into this problem because I had two channels with the same name so the dictionary was always returning the same number even though there were two different channels. Of course, I shouldn't have channels with the same name, but I honestly thought they had different names when I ran my experiment.

Long sequence of frames in metadata

In the metadata, there is a variable frames which is a list of all frame numbers.
This is not very useful. Converting it to a generator instead would save memory for files without gaps, while staying flexible enough to accommodate more special cases.
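For a gap-free file, the frame numbers are just 0..n-1, so a generator can stand in for the list. A minimal sketch (illustrative only, not the library's actual code; the gaps parameter is a hypothetical hook for the special cases):

```python
def iter_frames(num_frames, gaps=()):
    """Yield frame numbers lazily instead of materialising a full list.

    gaps: optional collection of missing frame numbers (the special case
    the current list representation accommodates).
    """
    missing = set(gaps)
    return (f for f in range(num_frames) if f not in missing)

frames = iter_frames(1_000_000)           # no million-element list in memory
print(next(frames), next(frames))         # 0 1
print(list(iter_frames(6, gaps=(2, 3))))  # [0, 1, 4, 5]
```

The trade-off is that a generator cannot be indexed or iterated twice, so any metadata consumer that does random access would need to rebuild it or fall back to a range object.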

in 3.2.0, get_frame_2D fails

In nd2reader/reader.py, the function get_frame_2D on line 55 raises an error: NameError: name 'get_frame_vczyx' is not defined

It seems to be a typo. The line
return get_frame_vczyx(v=v, c=c, t=t, z=z, x=x, y=y)
should be
return self.get_frame_vczyx(v=v, c=c, t=t, z=z, x=x, y=y)

Improving metadata: Add X and Y stage coordinates in Metadata.

Adding this to the code seems to do the trick (same as z_coordinates):

    def get_parsed_metadata(self):
        """Returns the parsed metadata in dictionary form.

        Returns:
            dict: the parsed metadata
        """

        if self._metadata_parsed is not None:
            return self._metadata_parsed

        frames_per_channel = self._parse_total_images_per_channel()
        self._metadata_parsed = {
            "height": parse_if_not_none(self.image_attributes, self._parse_height),
            "width": parse_if_not_none(self.image_attributes, self._parse_width),
            "date": parse_if_not_none(self.image_text_info, self._parse_date),
            "fields_of_view": self._parse_fields_of_view(),
            "frames": self._parse_frames(),
            "z_levels": self._parse_z_levels(),
            "z_coordinates": parse_if_not_none(self.z_data, self._parse_z_coordinates),
            "x_coordinates":  parse_if_not_none(self.x_data, self._parse_x_coordinates),
            "y_coordinates":  parse_if_not_none(self.y_data, self._parse_y_coordinates),
            "total_images_per_channel": frames_per_channel,
            "channels": self._parse_channels(),
            "pixel_microns": parse_if_not_none(self.image_calibration, self._parse_calibration)
        }
        return self._metadata_parsed

    def _parse_x_coordinates(self):
        """The coordinate in micron for all x.

        Returns:
            list: the x coordinates in micron
        """
        return self.x_data.tolist()

    def _parse_y_coordinates(self):
        """The coordinate in micron for all y.

        Returns:
            list: the y coordinates in micron
        """
        return self.y_data.tolist()
        

Any chance we can add this to the main branch code?

More metadata?

Hi, when I print the metadata from ND2Reader, a lot of things are missing (even compared to the metadata when opening with Bioformats).

Is there any way for us to get more metadata with nd2reader?

Example of metadata in the .nd2 files that are not available (by order of preference and need):
EmissionWavelength
The setup PhysicalSizeZ (we can estimate it from the Z positions, but in my experiments the accuracy varies a bit).
RefractiveIndex
Detector ID /Model etc.

DeltaT (when there's only one T the acquisition time)
ExposureTime,
Position X, Y and Z
Color Set for each Channel in NIS-elements.

Thank you for the module!

Unable to: from nd2reader import ND2Reader

Hi there,

I can't seem to import the ND2Reader.
When following your example code python gives an ImportError on the first line.

(screenshots attached)

Sorry if this is just a dumb error on my part but I can't seem to figure out what I'm doing wrong

Thanks in advance

Edit: Added pip freeze image

Z-axis of large ND2 image not properly parsed.

Hi there.

I have a large ND2 image with one field of view and the following XYZ dimensions: 14633x14634x17
Axes are XYCZT with 2 channels and 1 time frame only.
These details are according to the Nikon reader and FIJI metadata.

When I try to read it with ND2Reader I do not get a Z axis:

>>> i = nd.ND2Reader(path)
>>> print(i.axes)
['x', 'y', 'c', 't']
>>> print(i.sizes)
{'x': 14633, 'y': 14634, 'c': 2, 't': 1}

I dug a bit and used the RawMetadata class to retrieve the ImageTextInfo and parsed metadata below:

{b'SLxImageTextInfo': {b'TextInfoItem_0': b'', b'TextInfoItem_1': b'', b'TextInfoItem_2': b'', b'TextInfoItem_3': b'', b'TextInfoItem_4': b'', b'TextInfoItem_5': b'Camera Name: Andor Zyla VSC-05544
Numerical Aperture: 1,4
Refractive Index: 1,515
Number of Picture Planes: 2
Plane #1:
 Name: dapi
 Component Count: 1
 Modality: Widefield Fluorescence
 Camera Settings:   
  Camera Type: Andor Zyla
  Binning: 1x1
  Exposure: 20 ms
  Readout Mode: Rolling shutter at 12-bit
  Readout Rate: 540 MHz 
  Conversion Gain: Gain 4
  Spurious Noise Filter: on
  Sensor Mode: Overlap
  Trigger Mode: Internal
  Temperature: -0.4\xc2\xb0C
 Microscope Settings:   Microscope: Ti2 Microscope
  DIC Prism, position: Out
  Bertrand Lens, position: Out
  Nikon Ti2, FilterChanger(Turret-Lo): 5 (DA/FI/TR/Cy5/Cy7-5X-A (DAPI / FITC / TRITC / Cy5 / Cy7 - Pinkel Penta))
  Nikon Ti2, FilterChanger(EM Wheel1): 7
  Nikon Ti2, Shutter(FL-Lo): Opened
  Nikon Ti2, Shutter(DIA LED): Closed
  Nikon Ti2, Illuminator(DIA): Off
  Nikon Ti2, Illuminator(DIA) Iris intensity: 36.4
  LightPath: L100
  Analyzer Slider: Extracted
  Zoom: 1.00x
  SpectraIII/Celesta, Shutter(SpectraIII): Active
  SpectraIII/Celesta, MultiLaser(SpectraIII):
     Line:1; ExW:390; Power:  2.0; On
     Line:2; ExW:635; Power: 10.0; Off
     Line:3; ExW:690; Power:  5.8; Off
     Line:4; ExW:760; Power:  0.0; Off
     Line:5; ExW:780; Power: 20.0; Off
     Line:6; ExW:488; Power: 10.0; Off
     Line:7; ExW:561; Power: 20.0; Off
     Line:8; ExW:594; Power:  4.0; Off

Plane #2:
 Name: A647
 Component Count: 1
 Modality: Widefield Fluorescence
 Camera Settings:   
  Camera Type: Andor Zyla
  Binning: 1x1
  Exposure: 50 ms
  Readout Mode: Rolling shutter at 12-bit
  Readout Rate: 540 MHz 
  Conversion Gain: Gain 4
  Spurious Noise Filter: on
  Sensor Mode: Overlap
  Trigger Mode: Internal
  Temperature: -0.4\xc2\xb0C
 Microscope Settings:   Microscope: Ti2 Microscope
  DIC Prism, position: Out
  Bertrand Lens, position: Out
  Nikon Ti2, FilterChanger(Turret-Lo): 1 (41023 - Cy5.5)
  Nikon Ti2, FilterChanger(EM Wheel1): 7
  Nikon Ti2, Shutter(FL-Lo): Opened
  Nikon Ti2, Shutter(DIA LED): Closed
  Nikon Ti2, Illuminator(DIA): Off
  Nikon Ti2, Illuminator(DIA) Iris intensity: 36.4
  LightPath: L100
  Analyzer Slider: Extracted
  Zoom: 1.00x
  SpectraIII/Celesta, Shutter(SpectraIII): Active
  SpectraIII/Celesta, MultiLaser(SpectraIII):
     Line:1; ExW:390; Power:  2.0; Off
     Line:2; ExW:635; Power:  8.0; On
     Line:3; ExW:690; Power:  5.8; Off
     Line:4; ExW:760; Power: 22.1; Off
     Line:5; ExW:780; Power: 20.0; Off
     Line:6; ExW:488; Power: 10.0; Off
     Line:7; ExW:561; Power: 20.0; Off
     Line:8; ExW:594; Power:  4.0; Off

', b'TextInfoItem_6': b'Andor Zyla VSC-05544
Sample 1:
  
  Camera Type: Andor Zyla
  Binning: 1x1
  Exposure: 20 ms
  Readout Mode: Rolling shutter at 12-bit
  Readout Rate: 540 MHz 
  Conversion Gain: Gain 4
  Spurious Noise Filter: on
  Sensor Mode: Overlap
  Trigger Mode: Internal
  Temperature: -0.4\xc2\xb0C
Sample 2:
  
  Camera Type: Andor Zyla
  Binning: 1x1
  Exposure: 50 ms
  Readout Mode: Rolling shutter at 12-bit
  Readout Rate: 540 MHz 
  Conversion Gain: Gain 4
  Spurious Noise Filter: on
  Sensor Mode: Overlap
  Trigger Mode: Internal
  Temperature: -0.4\xc2\xb0C', b'TextInfoItem_7': b'', b'TextInfoItem_8': b'', b'TextInfoItem_9': b'2019-10-16  17:29:37', b'TextInfoItem_10': b'', b'TextInfoItem_11': b'', b'TextInfoItem_12': b'', b'TextInfoItem_13': b'Plan Apo \xce\xbb 60x Oil'}}

{'height': 14634, 'width': 14633, 'date': None, 'fields_of_view': [0], 'frames': [0], 'z_levels': [], 'z_coordinates': [2144.443472808363, 2145.0434728083633, 2145.643472808363, 2146.243472808363, 2146.843472808363, 2147.443472808363, 2148.0434728083633, 2148.643472808363, 2149.243472808363, 2149.843472808363, 2150.443472808363, 2151.0434728083633, 2151.643472808363, 2152.243472808363, 2152.843472808363, 2153.443472808363, 2154.0434728083633], 'total_images_per_channel': 17, 'channels': ['dapi', 'A647'], 'pixel_microns': 0.108333333333333, 'num_frames': 1, 'experiment': {'description': '', 'loops': [{'start': 0, 'duration': 0, 'stimulation': False, 'sampling_interval': 0.0}]}, 'events': []}

I think the issue here is in how the Z-levels are parsed. The parsing gives 'z_levels': [], 'z_coordinates': [2144.443472808363, 2145.0434728083633, 2145.643472808363, 2146.243472808363, 2146.843472808363, 2147.443472808363, 2148.0434728083633, 2148.643472808363, 2149.243472808363, 2149.843472808363, 2150.443472808363, 2151.0434728083633, 2151.643472808363, 2152.243472808363, 2152.843472808363, 2153.443472808363, 2154.0434728083633] which I guess should not happen. The z_levels key is empty because the regular expression here does not match anything in the image textinfo. Still, FIJI and Nikon read this properly.

Do you think it would make sense to change how the z_levels are parsed or at least to check if z_levels count and z_coordinates length match?

nd2reader outputs empty arrays for certain ND2 files.

Hello,

I am having some trouble reading certain ND2 files using nd2reader. It successfully reads the metadata, but when I index into the image itself, an empty numpy array is returned. Here is an example of an image that doesn't work. Here is a very similar example of an image that does work (a non-empty array is returned).

Does anyone know why this is happening?

Thanks!

Array shape

Hi,
I have problems reading frames from:

imReader

Axes: 3
Axis 'x' size: 973
Axis 'y' size: 973
Axis 't' size: 443
Pixel Datatype: <class 'numpy.float64'>

while asking for the first frame:
imReader[0]
I get an error ending with:
ValueError: cannot reshape array of size 947702 into shape (973,973)

Thanks!
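The mismatch in the error above is plain arithmetic: 947702 is not 973*973 = 946729. A quick check of the numbers from the report (a standalone sketch, not the reader's code) shows the buffer actually divides evenly into 974 rows of 973, i.e. exactly one surplus row's worth of values:

```python
size, height, width = 947_702, 973, 973

print(height * width)         # 946729: what a (973, 973) reshape needs
print(size - height * width)  # 973: one extra row's worth of values
print(size % width)           # 0: the buffer still divides evenly by the width
print(size // width)          # 974: it would reshape as (974, 973) instead
```

Whether those surplus values are a timestamp block or interleaved channel data left in the buffer would need the file to confirm; the point is only that the reported size is consistent with one extra row.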

KeyError when opening nd2 file

When I open an nd2 file I get a KeyError and a return of 0.
Well before that there is a warning about Z-levels: UserWarning: Z-levels details missing in metadata. Using Z-coordinates instead.
Unfortunately I can't share the file, but it contains a 500-frame microscope video. The file is about 1 GB.

Here is a part of the traceback:

nd2reader\raw_metadata.py:171: UserWarning: Z-levels details missing in metadata. Using Z-coordinates instead.
warnings.warn("Z-levels details missing in metadata. Using Z-coordinates instead.")
... my call to nd2
nd2reader\parser.py", line 36, in init
self._parse_metadata()
nd2reader\parser.py", line 143, in _parse_metadata
self.metadata = self._raw_metadata.dict
nd2reader\raw_metadata.py", line 27, in dict
return self.get_parsed_metadata()
nd2reader\raw_metadata.py", line 58, in get_parsed_metadata
self._parse_experiment_metadata()
nd2reader\raw_metadata.py", line 332, in _parse_experiment_metadata
self._metadata_parsed['experiment']['loops'] = self._parse_loop_data(raw_data[six.b('uLoopPars')])
nd2reader\raw_metadata.py", line 344, in _parse_loop_data
loops = get_loops_from_data(loop_data)
nd2reader\common_raw_metadata.py", line 54, in get_loops_from_data
loops = [loops[i] for i in range(len(loops)) if loop_data[six.b('pPeriodValid')][i] == 1]
nd2reader\common_raw_metadata.py", line 54, in
loops = [loops[i] for i in range(len(loops)) if loop_data[six.b('pPeriodValid')][i] == 1]
KeyError: 0

float64 .nd2 files don't recognize intensities above 2**16-1

When opening a deconvolved .nd2 that has intensity values higher than 65535.0, those values don't get recognized and the image doesn't render well.

from nd2reader import ND2Reader
a = ND2Reader('./WellC05_ChannelFITC (Em),mRuby3,DIC_Seq0007.nd2')
a.bundle_axes = 'zyx'
a.iter_axes = 'c'
a[0].dtype
a[0].max()

In this example, only the green signal is over 2**16-1 in intensity. The green signals that are higher show up as nan or 0 (screenshot attached).

Cannot open nd2 files obtained with version 5.21.03 of the acquisition software

Hi!
I'm trying to use nd2reader with some nd2 files obtained from a Nikon NIS Elements but I get a "ZeroDivisionError".
The version of the acquisition software is 5.21.03.
Since I've read that nd2reader has only been tested with version 4.13, is it possible to use it with files obtained with a later version of the acquisition software?

Thanks for your help

Issues loading large size images

Hi,

I'm trying to load big 3 channel ND2 microscopy images, some of the images it has no problem loading but there are some images that give this error:

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 668, in runfile
    execfile(filename, namespace)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/Desktop/untitled1.py", line 9, in <module>
    redio = images[0]

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\slicerator.py", line 187, in __getitem__
    return self._get(indices)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pims\base_frames.py", line 148, in __getitem__
    return self.get_frame(key)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pims\base_frames.py", line 642, in get_frame
    result = self._get_frame_wrapped(**coords)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\nd2reader\reader.py", line 77, in get_frame_2D
    return self._parser.get_image_by_attributes(t, v, c_name, z, y, x)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\nd2reader\parser.py", line 98, in get_image_by_attributes
    height, width)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\nd2reader\parser.py", line 272, in _get_raw_image_data
    image_data = np.reshape(image_group_data[image_data_start::number_of_true_channels], (height, int(round(len(image_group_data[image_data_start::number_of_true_channels])/height))))

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\numpy\core\fromnumeric.py", line 257, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\numpy\core\fromnumeric.py", line 62, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)

  File "V:\WPy-3670\python-3.6.7.amd64\lib\site-packages\numpy\core\fromnumeric.py", line 42, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)

ValueError: cannot reshape array of size 1767457564 into shape (41995,42087)

It seems there is an error in the shape calculation.

I ran bundle axes, iter axes and metadata with this result:


['y', 'x']
['c']
{'height': 41995, 'width': 42087, 'date': datetime.datetime(2018, 8, 12, 19, 15, 17), 'fields_of_view': [0], 'frames': [0], 'z_levels': [], 'total_images_per_channel': 1, 'channels': ['x10 Axel DAPI', 'x10 Axel 470', 'x10 Axel 647'], 'pixel_microns': 0.651332724829376, 'num_frames': 1, 'experiment': {'description': '', 'loops': [{'start': 0, 'duration': 0, 'stimulation': False, 'sampling_interval': 0.0}]}}

I can share the raw image file, but it's 13.3gig so I'm not sure how to do this.

Thanks in advance,
Axel

ND2Reader gets incorrect video length for cut movies with NISElements

We're dealing with some large movies that were cut into several parts to be treated on other computers, original movie is of dimensions (c, t, x, y, z) = (2, 1715, 512, 512, 35) and was cut into segments of approximately (2, 300, 512, 512, 35) with NISElements.

The issue is that in the nd2 metadata the Dimensions section still shows T(1715) x Z(35).
Therefore your code says that there are more frames in the video than there actually are.

By exporting the metadata in Fiji (which gets the correct video length) I can see that there's a property called sizeT which has the correct length.

I was trying to get to this value from image_text_info in the RawMetadata, but to no avail. Is there any way for me to retrieve this data?

Here's the metadata from FIJI:
metadata.log

default_coords fails for t (always stuck at time 0)

default_coords['t'] does not change when assigned.

c and v work fine.

frames =  ND2Reader(fname)
# fames2 = nd2_two(fname)
# frames = pims.bioformats.BioformatsReader(fname)
# frames.reader=0
# frames.iter_axes = 't'  # 't' is the default already
frames.bundle_axes = 'zyx'  # when 'z' is available, this will be default
frames

print(frames.sizes['v'])
print(frames.sizes['t'])

frames.default_coords['t'] = 0
frames.default_coords['v'] = 2
print(frames[0].mean())


frames.default_coords['c'] = 0
frames.default_coords['t'] = 20
frames.default_coords['v'] = 0
print(frames[0].mean())


frames.default_coords['c'] = 1
frames.default_coords['t'] = 0
print(frames[0].mean())


frames.default_coords['c'] = 0
frames.default_coords['t'] = 50
print(frames[0].mean())


frames.default_coords['c'] = 1
frames.default_coords['t'] = 50
print(frames[0].mean())


> 10
> 120
> 160.967576548358
> 165.96041711258562
> 124.08112726080907
> 165.96041711258562
> 124.08112726080907
> 

git-lfs in the repo pointing to missing objects

Trying to clone/fork this project to an internal remote (based on GitLab) I found that any attempt to push the repo was rejected (by gitlab prereceive hooks) because of missing LFS objects.

Actually, just cloning the repo from GitHub I can see that there are references to missing LFS objects

$ /tmp> git clone https://github.com/rbnvrw/nd2reader                                      (base)
Cloning into 'nd2reader'...
remote: Enumerating objects: 3195, done.
remote: Counting objects: 100% (101/101), done.
remote: Compressing objects: 100% (82/82), done.
remote: Total 3195 (delta 39), reused 42 (delta 12), pack-reused 3094
Receiving objects: 100% (3195/3195), 27.46 MiB | 4.81 MiB/s, done.
Resolving deltas: 100% (2005/2005), done.
$ /tmp/nd2reader> git lfs install
Updated git hooks.
Git LFS initialized.
$ /tmp/nd2reader> git lfs fetch --all
fetch: 4 object(s) found, done.
fetch: Fetching all references...
[0bcac3b4c1daa4d9e084ea748763e3607590a79ccf8ede3f60f9f95cc7ad132c] Object does not exist on the server: [404] Object does not exist on the server
[6cd9e93509616e4326a63d48cc8b8024bb9eff370059616585cab6e40d20d371] Object does not exist on the server: [404] Object does not exist on the server
[d8d8bcccea69a15171220b3665986e2eecea1b9012c055d5ca4a86e3de14cec4] Object does not exist on the server: [404] Object does not exist on the server
error: failed to fetch some objects from 'https://github.com/rbnvrw/nd2reader.git/info/lfs'

The concerned objects are the following (the first one still present in the repo, the others deleted in previous commits):

$ /tmp/nd2reader> git lfs ls-files --all
e3b0c44298 * tests/__init__.py
6cd9e93509 - tests/test_data/data001.nd2
d8d8bcccea - tests/test_data/data002.nd2
0bcac3b4c1 - tests/test_data/data003.nd2

If someone of the developers still have access to these file, would it be possible to upload them to GitHub (git lfs push origin --all)?

In case these files are completely lost, I think the only way to get an intact repository is to rewrite history from the first commit that referenced one of these files (there are automated tools for that). Do you believe that is a viable approach?

Problem with indexing

I'm developing an image processing program - flika - that relies on this library. Thank you for maintaining it (and thanks to @jimrybarski for writing it).

I recently upgraded to a newer version of this reader and I'm having some trouble opening files. I've pinpointed the problem to the _calculate_image_group_number() method in the parser class in parser.py. I can send you a sample file; it's 30 MB, bigger than I can attach here. The image is a z stack with 28 frames in total. When trying to access the second frame (index = 1), this method returns an image group number of 28. Here is the function.

def _calculate_image_group_number(self, frame_number, fov, z_level):
    """
    Images are grouped together if they share the same time index, field of view, and z-level.

    Args:
        frame_number: the time index
        fov: the field of view number
        z_level: the z level number

    Returns:
        int: the image group number

    """
    return frame_number * len(self.metadata["fields_of_view"]) * len(self.metadata["z_levels"]) + (
        fov * len(self.metadata["z_levels"]) + z_level)
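Plugging in my file's numbers makes the issue concrete (standalone sketch; in my case fields_of_view has one entry and z_levels has 28):

```python
def image_group_number(frame_number, num_fov, num_z, fov=0, z_level=0):
    # Same formula as _calculate_image_group_number, with the lengths inlined
    return frame_number * num_fov * num_z + (fov * num_z + z_level)

# Accessing the second frame (index = 1) of my single-FOV, 28-level stack:
print(image_group_number(frame_number=1, num_fov=1, num_z=28))  # 28, not 1
```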

I believe this can be fixed if len(self.metadata["z_levels"]) is removed. I would create a pull request but I wanted to check if that simple solution might cause problems somewhere else.

If you would like the .nd2 file, let me know where I can send it.

Z coord of Timepoint seen as Z steps

Hi there,
First, thank you for the great reader!

I found that when I open nd2 files with different time points but no Z stack, nd2reader interprets the stage's Z coordinate at every time point as a Z step.
So I get this:
{'x': 512, 'y': 512, 't': 10, 'z': 10}
While if I look into the metadata:

with ND2Reader(file) as images:
    for x,y in images.parser._raw_metadata.image_text_info.items():
        for i,j in y.items():
            print(f"{i.decode()} => {j.decode()}")

I have that for the Dimensions:

TextInfoItem_5 => Metadata:
Dimensions: T(10) x λ(1)

My understanding is that _parse_z_levels is the reason for the error:

def _parse_z_levels(self):
    """The different levels in the Z-plane.

    Returns:
        list: the z levels, just a sequence from 0 to n.
    """
    z_levels = self._parse_dimension(r""".*?Z\((\d+)\).*?""")
    if 0 == len(z_levels):
        z_levels = parse_if_not_none(self.z_data, self._parse_z_coordinates)
        if z_levels is None:
            z_levels = []
        else:
            z_levels = range(len(z_levels))
            warnings.warn("Z-levels details missing in metadata. Using Z-coordinates instead.")
    return z_levels

It uses the Z-coordinates when it doesn't find Z in the dimensions part of the metadata. I was wondering what the reason for that was? It seems it would make more sense (at least in my examples) to report a single z level when no Z appears in the dimensions.
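The behavior I had in mind, as a rough sketch (simplified: a regex over the dimensions text standing in for _parse_dimension, not the actual parser code):

```python
import re

def parse_z_levels(dimensions_text):
    """Only report z levels if Z actually appears as a dimension."""
    match = re.search(r"Z\((\d+)\)", dimensions_text)
    if match is None:
        # No Z dimension recorded: treat this as a single z plane rather
        # than inventing one z level per timepoint from stage coordinates.
        return []
    return list(range(int(match.group(1))))

print(parse_z_levels("Dimensions: T(10) x λ(1)"))  # []
print(parse_z_levels("Dimensions: T(10) x Z(5)"))  # [0, 1, 2, 3, 4]
```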
Thank you!

Loading images from multiple fields of view

Hi
My output files have ~200 fields of view and the data are zyx with no time points. Is it possible to use pims to directly load the images from each field of view?

When I load the file:

with ND2Reader(fname) as images:
    print(images.sizes)

{'x': 2048, 'y': 2048, 't': 1, 'z': 45}

I get the correct information for one of the fields of view. Even after I set images.bundle_axes = 'zyx', I am still not able to access all the images in the file.
I have the same issue if I load the file with the legacy.Nd2 module that I was using in the previous version of nd2reader.

I looked in the docs and also in pims and pims_nd2, but I couldn't find the relevant information.

I would really appreciate any help.
Thanks a lot!

Access z-stack metadata

As reported via email:

(...) the Z step varies. NIS keeps this information in the experiment part of the metadata (together
with the top and bottom positions). When I import those files using ND2Reader
it shows all pictures, but has no information about the Z step or the Z
limits.

here are metadata for example file:

{'height': 1920,
'width': 2560,
'date': None,
'fields_of_view': [0],
'frames': [0],
'z_levels': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
34],
'total_images_per_channel': 35,
'channels': ['Brightfield'],
'pixel_microns': 0.056962261336516,
'num_frames': 1,
'experiment': {'description': 'ND Acquisition', 'loops': [{'start': 0,
'duration': 0, 'stimulation': False, 'sampling_interval': 0.0}]}}

NIS version is 4.30

Is there a way to retrieve this kind of information using your package?
Or do I have to find some workaround? I would prefer to keep using
your package, as it is much faster than other methods I've tried, and I
usually have lots of data coming to me from everyone in my lab.
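For what it's worth, if the reader exposed the per-plane Z positions (some versions report a 'z_coordinates' metadata entry), the step and limits could be derived from them. A toy sketch with made-up values:

```python
# Hypothetical z positions in microns, one per plane
z_coordinates = [5246.4, 5246.9, 5247.4, 5247.9]

# Step between consecutive planes (rounded to absorb float noise)
steps = [round(b - a, 6) for a, b in zip(z_coordinates, z_coordinates[1:])]
print(steps)                                   # [0.5, 0.5, 0.5]
print(min(z_coordinates), max(z_coordinates))  # bottom and top limits
```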

Parsing point name in nd2

Hi, great tool you have created here!
I'm a Python novice, but I'm currently looking to automate my experiment process.

We are conducting high-throughput experiments with Nikon microscopes.
One way to annotate experiments during an ND acquisition is to rename the point name (see the attached screenshot).

Is there a function in nd2reader to parse the point names in a multipoint loop?


Thanks!
Best,

Missing metadata

Hi,

I'm probably a bit dense here, but I am wondering why some metadata is missing with this reader as opposed to https://github.com/soft-matter/pims_nd2

The other reader provides a 'metadata_text' entry with info related to the camera model and microscope settings. It uses the Nikon SDK. I am trying to figure out the raw_metadata module and whether a label_map is required to get the extra metadata.

Issues opening stitched ND2 files

Hi Ruben,

I cannot open stitched ND2 files from our slide scanner in Python.
I am using Python 3.6 and nd2reader 3.1.0.

This is the metadata that is read:

<Deprecated ND2 C:\Data\slidescanner\20190514_000911_813\Slide1-1-1_ChannelBrightfield_Seq0000.nd2>
Created: Unknown
Image size: 3302x5146 (HxW)
Frames: 1
Channels: Brightfield
Fields of View: 1
Z-Levels: 0

When I open the file, the numpy array has these dimensions:
(3302, 7720)

I have strange lines in the image.
The image is no longer RGB.

Images were acquired with NIS Elements 5.02.00.

Do you have any advice (other than using the dedicated Fiji plugin…)?

Thanks a lot &
Kind regards

Tobias

Timesteps in ms unable to be located

Good morning. Using an older version of NIS Elements, I am attempting to obtain the timesteps of the video we use. Below is the traceback I get when it fails to read the timestep information.

You can access the video file by following this link


import os
import nd2reader

os.chdir("Y:/Sabrina Kozel/190323.m2.w1 ContRO1")
vid = nd2reader.ND2Reader("video.nd2")
vid.timesteps
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Microscope_1\Anaconda3\lib\site-packages\nd2reader\reader.py", line 107, in timesteps
    return self.get_timesteps()
  File "C:\Users\Microscope_1\Anaconda3\lib\site-packages\nd2reader\reader.py", line 183, in get_timesteps
    self._timesteps = np.array(list(self._parser._raw_metadata.acquisition_times), dtype=np.float) * 1000.0
  File "C:\Users\Microscope_1\Anaconda3\lib\site-packages\nd2reader\raw_metadata.py", line 542, in acquisition_times
    acquisition_times = read_array(self._fh, 'double', self._label_map.acquisition_times)
  File "C:\Users\Microscope_1\Anaconda3\lib\site-packages\nd2reader\common.py", line 89, in read_array
    raw_data = read_chunk(fh, chunk_location)
  File "C:\Users\Microscope_1\Anaconda3\lib\site-packages\nd2reader\common.py", line 65, in read_chunk
    raise ValueError("The ND2 file seems to be corrupted.")
ValueError: The ND2 file seems to be corrupted.

In 3.2.0, get_frame_vczyx() implementation breaks functionality

Hi everyone,

First of all, thanks to @rbnvrw for all your work 👍 I ran into some issues in the 3.2.0 build and would love to be able to help.

Unfortunately, the current build of nd2reader (3.2.0) breaks functionality that was previously provided. This is due to the current implementation of reader.get_frame_vczyx(), which replaces reader.get_frame_2D() by providing specific methods via _register_get_frame().

Problem (1)
For example, for this ND2 file, a multi-FOV, multichannel z-stack with shape (v=11, z=7, c=5, y=1806, x=1278), setting bundle_axes = 'zcyx' worked fine in 3.1.0. Now:

with ND2Reader(path + '2019-8-14_24W-S3a-Sec1-A1_012.nd2') as images:
    images.bundle_axes = 'zcyx'
    images.iter_axes = 'v'
    
    print(f"Fields-of-view in ND2 file: {len(images)}")
    im = images[0]
    images.close()
    
print(f"Shape of single field-of-view: {im.shape}")

fails in base_frames._transpose() due to axes mismatch:

C:\Anaconda3\envs\ND2tiff_dev\lib\site-packages\pims\base_frames.py in get_frame_T(**ind)
    300         transposition = [expected_axes.index(a) for a in desired_axes]
    301         def get_frame_T(**ind):
--> 302             return get_frame(**ind).transpose(transposition)
    303         return get_frame_T
    304 

ValueError: axes don't match array

Running with images.bundle_axes = 'czyx' instead avoids this error, but yields an object of incorrect shape:

with ND2Reader(path + '2019-8-14_24W-S3a-Sec1-A1_012.nd2') as images:
    images.bundle_axes = 'czyx'
    images.iter_axes = 'v'
    
    print(f"Fields-of-view in ND2 file: {len(images)}")
    im = images[0]
    images.close()
    
print(f"Shape of single field-of-view: {im.shape}")

Output:

Fields-of-view in ND2 file: 11
Shape of single field-of-view: (35, 1806, 1278)

This should be (5,7,1806,1278).

Bad fix: Manually calling np.reshape(im, (5,7,1806,1278)) corrects this, and a possible fix could work by changing get_frame_vczyx():
https://github.com/rbnvrw/nd2reader/blob/b17c9d1c8996cf87bee1f935d9512a371387c640/nd2reader/reader.py#L70
to:

res = np.squeeze(np.array(result, dtype=self._dtype))
return np.reshape(res, ([self.sizes[i] for i in self.bundle_axes]))

With images.bundle_axes = 'czyx' this returns the correct shape, with all frames in the right place. However, with images.bundle_axes = 'zcyx', np.reshape works incorrectly, since the order of frames in res will always follow the order of the loop in get_frame_vczyx(), i.e. v,c,z. I am not sure how to write a fix without changing the core loop (more about that at the end of this post).
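A toy numpy example (independent of nd2reader) of why a plain reshape can only ever work for 'czyx': the frames are collected in c-then-z order, so reshaping directly to (z, c, y, x) interleaves planes, whereas reshaping to (c, z, y, x) and then transposing does not:

```python
import numpy as np

c, z, y, x = 2, 3, 4, 5
stack = np.arange(c * z * y * x).reshape(c, z, y, x)  # ground truth
flat = stack.reshape(-1, y, x)  # frames in the order the v,c,z loop yields them

zcyx_naive = flat.reshape(z, c, y, x)                        # interleaved planes
zcyx_fixed = flat.reshape(c, z, y, x).transpose(1, 0, 2, 3)  # reshape, then swap axes

print(np.array_equal(zcyx_naive, stack.transpose(1, 0, 2, 3)))  # False
print(np.array_equal(zcyx_fixed, stack.transpose(1, 0, 2, 3)))  # True
```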

Problem (2)
When avoiding the axes mismatch error in base_frames._transpose(), by either using images.bundle_axes = 'czyx' or with the "bad fix", metadata is not propagated to the individual images, as it would be if the "first option" in base_frames._make_get_frame() failed (no appropriate register_get_frame method defined for get_frame_dict) and _bundle() were called. As such, calling:

with ND2Reader(path + '2019-8-14_24W-S3a-Sec1-A1_012.nd2') as images:
    images.bundle_axes = 'czyx'
    images.iter_axes = 'v'
    im = images[0]
    images.close()
print(im.shape)
print(im.metadata)

yields:

(35, 1806, 1278)
{'axes': ['c', 'z', 'y', 'x'], 'coords': {'v': 0, 't': 0}}

When assembled through base_frames._bundle(), metadata is propagated and print(im.metadata) yields:

{'height': 1806, 'width': 1278, 'date': datetime.datetime(2019, 8, 15, 10, 55, 24), 'fields_of_view': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'frames': [0], 'z_levels': [0, 1, 2, 3, 4, 5, 6], 'total_images_per_channel': 77, 'channels': ['647', '488', '561', '405', 'dic'], 'pixel_microns': 0.108333333333333, 'num_frames': 1, 'experiment': {'description': 'W3D', 'loops': [{'start': 0, 'duration': 0, 'stimulation': False, 'sampling_interval': 0.0}]}, 'events': [{'index': 1, 'time': 3265.7298961281776, 'type': 7, 'name': 'Command Executed'}, {'index': 2, 'time': 4311.860809832811, 'type': 7, 'name': 'Command Executed'}, {'index': 3, 'time': 25922.38347223401, 'type': 7, 'name': 'Command Executed'}, {'index': 4, 'time': 26966.55641874671, 'type': 7, 'name': 'Command Executed'}, {'index': 5, 'time': 48852.96897947788, 'type': 7, 'name': 'Command Executed'}, {'index': 6, 'time': 49896.39902755618, 'type': 7, 'name': 'Command Executed'}, {'index': 7, 'time': 71575.21477815509, 'type': 7, 'name': 'Command Executed'}, {'index': 8, 'time': 72618.643543154, 'type': 7, 'name': 'Command Executed'}, {'index': 9, 'time': 95025.13088130951, 'type': 7, 'name': 'Command Executed'}, {'index': 10, 'time': 96067.59092727304, 'type': 7, 'name': 'Command Executed'}, {'index': 11, 'time': 117582.08049416542, 'type': 7, 'name': 'Command Executed'}, {'index': 12, 'time': 118625.93363505602, 'type': 7, 'name': 'Command Executed'}, {'index': 13, 'time': 140330.72805649042, 'type': 7, 'name': 'Command Executed'}, {'index': 14, 'time': 141398.76003962755, 'type': 7, 'name': 'Command Executed'}, {'index': 15, 'time': 163148.8205845952, 'type': 7, 'name': 'Command Executed'}, {'index': 16, 'time': 164190.54478898644, 'type': 7, 'name': 'Command Executed'}, {'index': 17, 'time': 187320.49074921012, 'type': 7, 'name': 'Command Executed'}, {'index': 18, 'time': 188363.45247617364, 'type': 7, 'name': 'Command Executed'}, {'index': 19, 'time': 211586.02401459217, 'type': 7, 'name': 'Command Executed'}, 
{'index': 20, 'time': 212629.77386826277, 'type': 7, 'name': 'Command Executed'}, {'index': 21, 'time': 235066.1590194106, 'type': 7, 'name': 'Command Executed'}, {'index': 22, 'time': 236108.4846636057, 'type': 7, 'name': 'Command Executed'}], 'axes': ['c', 'z', 'y', 'x'], 'coords': {'v': 0, 't': 0}}

Problem (3):
Currently, get_frame_2D() simply forwards to get_frame_vczyx(); however, the only time get_frame_2D() would actually be called (as far as I can see) is from within base_frames._bundle() or _drop(). Both functions call get_frame_2D() iteratively (as they expect its original behavior): they internally loop over the desired axes. Since this would now invoke the loop inside get_frame_vczyx(), it would iterate over all axes twice and probably cause issues. I haven't tested this, since due to:
https://github.com/rbnvrw/nd2reader/blob/b17c9d1c8996cf87bee1f935d9512a371387c640/nd2reader/reader.py#L167-L176
base_frames._drop() and base_frames._bundle() will pretty much never be called.

Finally, a general comment: basically, in order to fix problems 1 & 2, get_frame_vczyx() must re-implement most (all?) of the functionality of base_frames._bundle(). I am probably missing something, but right now it is not clear to me what the benefit of the new get_frame_vczyx() function and the associated commits since 1d43617 really is. @rbnvrw, is there some added functionality you are hoping to achieve? Again, I'd be happy to help 😄

Single time point exported from time series appears to have as many frames as original

I have an nd2 from a collaborator that contains a single frame out of a longer time series. When I open it with nd2reader, it appears to have 193 time points, as the original time series did. However, when I try to access any frame other than [0], I (understandably) get a KeyError. Is there any way for me to introspect this file to know that there is only one time frame within it?

example file

Script to reproduce the issue with the provided file:

from nd2reader import ND2Reader

f = ND2Reader('train_TR67_Inj7_fr50.nd2')
f.bundle_axes = 'zyx'
print(f'{len(f)=}')  # 193
print(f'{f.iter_axes=}')  # 't'
print(f'{f.sizes=}')  # 193 time points
print(f'{f[0].shape=}')  # 33 x 512 x 512
print(f'{f.frame_shape=}')  # 33 x 512 x 512
print(f'{f[1].shape=}')  # KeyError

Output on current master:

len(f)=193
f.iter_axes=['t']
f.sizes={'x': 512, 'y': 512, 'c': 4, 't': 193, 'z': 33}
f[0].shape=(33, 512, 512)
f.frame_shape=(33, 512, 512)
Traceback (most recent call last):
  File "/Users/jni/projects/useful-histories/nd2-single-frame.py", line 10, in <module>
    print(f'{f[1].shape=}')  # KeyError
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/slicerator/__init__.py", line 188, in __getitem__
    return self._get(indices)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/pims/base_frames.py", line 98, in __getitem__
    return self.get_frame(key)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/pims/base_frames.py", line 592, in get_frame
    result = self._get_frame_wrapped(**coords)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/pims/base_frames.py", line 265, in get_frame_bundled
    frame = get_frame(**ind)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/pims/base_frames.py", line 303, in get_frame_dropped
    result = get_frame(**ind)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/nd2reader/reader.py", line 88, in get_frame_2D
    return self._parser.get_image_by_attributes(t, v, c, z, y, x)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/nd2reader/parser.py", line 103, in get_image_by_attributes
    timestamp, raw_image_data = self._get_raw_image_data(image_group_number, channel,
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/nd2reader/parser.py", line 261, in _get_raw_image_data
    chunk = self._label_map.get_image_data_location(image_group_number)
  File "/Users/jni/conda/envs/napari-proofreading/lib/python3.9/site-packages/nd2reader/label_map.py", line 80, in get_image_data_location
    return self._image_data[index]
KeyError: 33

Thank you!

Cannot open certain stitched .nd2 files from Elements

Hello Ruben,

I am having an issue that I noticed came up some time in the past but was never fully resolved (in #6). When trying to open certain .nd2 files using nd2reader I get the error:

ValueError: cannot reshape array of size 45686334 into shape (6759,6759)

This happens only on certain images from this run, and I can't find any way to predict which images will open fine and which will not. We recently changed the stitching in Elements to overlap slightly more than it used to; maybe that has some connection to this problem cropping up again?

Here is the object info:
<FramesSequenceND> Axes: 4 Axis 'x' size: 6759 Axis 'y' size: 6759 Axis 'c' size: 3 Axis 't' size: 1 Pixel Datatype: <class 'numpy.float64'>

bundle_axes:
['y', 'x']

iter_axes:
['c']

metadata:
{'height': 6759, 'width': 6759, 'date': datetime.datetime(2019, 10, 29, 13, 0, 1), 'fields_of_view': [0], 'frames': [0], 'z_levels': [], 'z_coordinates': [5246.4125], 'total_images_per_channel': 1, 'channels': ['Hoechst', 'AF488', 'AF555'], 'pixel_microns': 0.648678997775168, 'num_frames': 1, 'experiment': {'description': 'unknown', 'loops': []}, 'events': []}

Here is a traceback:

ValueError                                Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\nd2reader\parser.py in _get_raw_image_data(self, image_group_number, channel_offset, height, width)
    276         try:
--> 277             image_data = np.reshape(image_group_data[image_data_start::number_of_true_channels], (height, width))
    278         except ValueError:

<__array_function__ internals> in reshape(*args, **kwargs)

C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in reshape(a, newshape, order)
    300     """
--> 301     return _wrapfunc(a, 'reshape', newshape, order=order)
    302 

C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     57     if bound is None:
---> 58         return _wrapit(obj, method, *args, **kwds)
     59 

C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in _wrapit(obj, method, *args, **kwds)
     46         wrap = None
---> 47     result = getattr(asarray(obj), method)(*args, **kwds)
     48     if wrap:

ValueError: cannot reshape array of size 45686334 into shape (6759,6759)
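A quick sanity check of the numbers in the error (plain arithmetic, no nd2reader involved) shows the buffer is exactly one third of a row larger than height × width, which might point to padding introduced by the stitching:

```python
size, height, width = 45686334, 6759, 6759

extra = size - height * width
print(extra)               # 2253 leftover values
print(extra * 3 == width)  # True: exactly one third of a row
```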

VERSION handling in setup.py breaks some automatic tools

Hi there! While waiting for the new release (my ability to use the package really depends on #33), I tried to use my dependency manager (poetry) to add the current master branch of nd2reader directly from git.

As most dependency managers do not actually execute the setup.py file, having the version flag as a constant in the __init__.py file breaks them.

I understand that keeping the version flag in a single place simplifies the routine of releasing a new package version, but do you think this could be dropped in the future, to increase compatibility with these kinds of tools?

I made a temporary fork with version flag set to 3.2.3.post1 that works nicely here: ggirelli/nd2reader.

Colors wrong when saving ND2 to png

Hi,
I am using nd2reader to take each color channel (3 in my case), combine them, and export to a TIFF. This is an example of what it should look like:
31 5 21 alk ascending concentration_croppedt01xy01 (1)

but instead I get this image, in which the colors seem completely off.
ImageFromND2_hour0 frame0

Doing an image analysis and comparison using ImageMagick, it seems my script is just not combining or saving as 16-bit at all. Any thoughts?

Image from script -> ImageFromND2_hour0.frame0.png PNG 512x512 512x512+0+0 8-bit sRGB 400303B 0.000u 0:00.002
Image From NisElements -> 31.5.21 alk ascending concentration_croppedt01xy01.tif TIFF 512x512 512x512+0+0 16-bit sRGB 1.51925MiB 0.031u 0:00.035

My script:

import time
start_time = time.time()

from nd2reader import ND2Reader
import numpy as np
import matplotlib.pyplot as plt

filelocation = 'Nd2 Conversion\\31.5.21 ALK ascending concentration_Cropped.nd2'
with ND2Reader(filelocation) as images:
    l = len(images) * images[0].metadata['fields_of_view'][-1]
    addition = 100 / l
    np.set_printoptions(precision=3)
    numberoffiles = 0
    for x in range(len(images)):
        for fov in images[x].metadata['fields_of_view']:
            c0 = images.parser.get_image_by_attributes(frame_number=x,field_of_view=fov,channel=0,z_level=0,height=512,width=512)
            c1 = images.parser.get_image_by_attributes(frame_number=x,field_of_view=fov,channel=1,z_level=0,height=512,width=512)
            c2 = images.parser.get_image_by_attributes(frame_number=x,field_of_view=fov,channel=2,z_level=0,height=512,width=512)
            no_img = 3
            imagefinal = c0/no_img + c1/no_img + c2/no_img
            plt.imsave("Nd2 Conversion\\ALK Cropped Python\\ImageFromND2_hour" + str(x) + ".frame" + str(fov) + ".png", imagefinal)
            numberoffiles += 1

print("Complete!")
print("--- %s seconds ---" % (time.time() - start_time))
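A minimal sketch of the dtype issue I suspect (stand-in arrays, no ND2 file needed): dividing uint16 channels promotes them to float64, which matplotlib then rescales and writes as 8-bit, whereas averaging in a wider integer type keeps the 16-bit range:

```python
import numpy as np

# Stand-ins for three uint16 channels from the ND2 file
c0 = np.full((4, 4), 30000, dtype=np.uint16)
c1 = np.full((4, 4), 20000, dtype=np.uint16)
c2 = np.full((4, 4), 10000, dtype=np.uint16)

mixed = c0 / 3 + c1 / 3 + c2 / 3
print(mixed.dtype)  # float64 -> plt.imsave rescales this and writes 8-bit

# Averaging in a wider integer type preserves the 16-bit values
mean16 = ((c0.astype(np.uint32) + c1 + c2) // 3).astype(np.uint16)
print(mean16.dtype, int(mean16[0, 0]))  # uint16 20000
```

Actually writing a 16-bit PNG would then need a writer that supports it (cv2.imwrite and imageio do, as far as I know); plt.imsave always goes through an 8-bit colormap.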

Incorrect sizes are parsed by the reader.

Hi,
nd2reader is having trouble reading a file as it finds "Zs" that are not there.

The image was acquired with triggered imaging:
1 Z, 120 T, 2044 Y, 2048 X, 24 visit points.

pims' ND2_Reader (SDK) and pims.bioformats read the file fine.
I can share, but the file is 20 GB.
It did this to 2 files from two different experiments.
I'll try to crop the data to a small XY region; if it replicates, I'll share.

%matplotlib inline
import pims
from nd2reader import ND2Reader
from pims import ND2_Reader as nd2_sdk


# fname = 'igfp1_caruby5_continue001trigger003.nd2'
fname = '../Data/igfp1_caruby5_VOG002.nd2'
# fname = 'igfp3_caruby3_rutin_bvd_f2_2days_xy02.nd2'
# fname = '/tmp/test.ome.tiff'

frames =  ND2Reader(fname)
frames_sdk = nd2_sdk(fname)
frames_bioformats = pims.bioformats.BioformatsReader(fname)
# frames.iter_axes = 't'  # 't' is the default already.
# frames.bundle_axes = 'zyx'  # when 'z' is available, this will be default

print(frames.sizes)
print(frames_sdk.sizes)
print(frames_bioformats.sizes)

{'x': 2048, 'y': 2044, 'c': 2, 't': 51, 'z': 1224, 'v': 24}
{'x': 2048, 'y': 2044, 'c': 2, 't': 51, 'm': 24}
{'x': 2048, 'y': 2044, 'c': 2, 't': 51}

Reading 'date' in metadata fails within PyQt5.QWidgets.QApplication

The 'date' entry from images.metadata is None when ND2Reader is used after QApplication has been instantiated. All other fields in images.metadata are read normally, and the image array is also correctly read, as far as I can judge.

Here is a code snippet that shows the problem:

import sys

from PyQt5.QtWidgets import QApplication, QWidget
from nd2reader import ND2Reader
from pathlib import Path


def load(path):
    with ND2Reader(path) as images:
        date = images.metadata['date']
        weird = date is None
        print(date, ', weird: {}'.format(weird))
        for index, arr in enumerate(images):
            print('{:02d}. detected image shape: {}'.format(index, arr.shape))


if __name__ == '__main__':
    path = Path('~/temp/microscopy.nd2').expanduser()

    print('outside loop')
    load(path)  # first: outside pyqt5 main loop
    print()

    print('inside loop')
    app = QApplication(sys.argv)
    load(path)  # second: inside pyqt5 main loop
    widget = QWidget()
    widget.show()  # produces an empty widget for the sake of demonstration
    sys.exit(app.exec_())

Here is the output when I execute the code from a terminal:

mycomputer~/temp$ python nd2pyqt5bug.py 
outside loop
2018-03-23 10:36:34 , weird: False
00. detected image shape: (2044, 2048)
01. detected image shape: (2044, 2048)

inside loop
None , weird: True
00. detected image shape: (2044, 2048)
01. detected image shape: (2044, 2048)

and the images are correctly plotted (from what I can judge), so I suppose there is no mistake other than in reading the bytes associated with the date.

Perhaps the QApplication main loop interferes with the datetime parsing? (Though no error pops up.)

Context:

  • OS : macOS 10.13.5
  • python -V returns Python 3.6.4 :: Anaconda, Inc. (installed from miniconda)
  • installed packages:
    • pyqt 5.9.2 py36h11d3b92_0
    • nd2reader 3.0.9 py_0 conda-forge

(edit) PS: apart from that tiny bug, which looks to be PyQt's fault, good work!

reading from files being written

Do you think it's possible to read data from files of experiments that are currently being acquired (extremely long time lapses)?

The idea would be to manually shape the reader.

We want to analyze the data while it's being acquired to do conditional imaging.

Thanks.
