
opm's People

Contributors

alexcoul, dpshepherd, fdjutant, ptbrown1729


opm's Issues

BDV XML not created

@AlexCoul - did the bright-field and dark-field images get written? If not, my guess is the program got stuck trying to load data somewhere and failed.

issue with orthogonal interpolation code

Hi @dpshepherd

We've been using the orthogonal interpolation code you referred to as an OpenCL kernel in pyclesperanto_prototype and it's been working quite well. Recently we came across some issues and would like your help troubleshooting this: clEsperanto/pyclesperanto_prototype#278

@oleksiievetsno has a custom setup where they acquire images of a GUV (giant unilamellar vesicle), which has a spherical shape, so the final image should be roughly spherical after deskewing. But after deskewing it doesn't appear spherical.

[image: orthogonal views of the deskewed GUV, which does not appear spherical as expected]

I figured out that the scan direction was bottom to top, so we had to flip the image before deskewing. However, the issue still persists.

Raw data: timepoint zero (link to download)

I think it's a nice dataset for benchmarking the code.

I've included the code that I've been troubleshooting:

from numba import njit, prange
from skimage.io import imread
import numpy as np 


@njit(parallel=True)
def opm_deskew(data,theta,distance,pixel_size):
    """
    Perform parallelized orthogonal interpolation into a uniform pixel size grid.
    
    :param data: ndarray
        image stack of uniformly spaced OPM planes
    :param theta: float 
        angle relative to coverslip
    :param distance: float 
        step between image planes along coverslip
    :param pixel_size: float
        in-plane camera pixel size in OPM coordinates
    :return output: ndarray
        image stack of deskewed OPM planes on uniform grid
    """

    # unwrap parameters 
    num_images, ny, nx = data.shape   # (pixels)

    # change step size from physical space (microns) to camera space (pixels)
    pixel_step = distance/pixel_size    # (pixels)

    # calculate the number of pixels scanned during stage scan 
    scan_end = num_images * pixel_step  # (pixels)

    # calculate properties for final image
    final_ny = np.int64(np.ceil(scan_end+ny*np.cos(theta*np.pi/180))) # (pixels)
    final_nz = np.int64(np.ceil(ny*np.sin(theta*np.pi/180)))          # (pixels)
    final_nx = np.int64(nx)                                           # (pixels)

    # create final image
    output = np.zeros((final_nz, final_ny, final_nx),dtype=np.float32)  # (pixels, pixels, pixels - data is float32)

    # precalculate trig functions for scan angle
    tantheta = np.float32(np.tan(theta * np.pi/180)) # (float32)
    sintheta = np.float32(np.sin(theta * np.pi/180)) # (float32)
    costheta = np.float32(np.cos(theta * np.pi/180)) # (float32)

    # perform orthogonal interpolation

    # loop through output z planes
    # defined as parallel loop in numba
    # http://numba.pydata.org/numba-doc/latest/user/parallel.html#numba-parallel
    for z in prange(0,final_nz):
        # calculate range of output y pixels to populate,
        # clamped to the bounds of the output array
        y_range_min=np.maximum(0,np.int64(np.floor(np.float32(z)/tantheta)))
        y_range_max=np.minimum(final_ny,np.int64(np.ceil(scan_end+np.float32(z)/tantheta+1)))

        # loop through final y pixels
        # defined as parallel loop in numba
        # http://numba.pydata.org/numba-doc/latest/user/parallel.html#numba-parallel
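        # note: numba parallelizes only the outermost prange; this inner
        # prange executes as an ordinary sequential loop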
        for y in prange(y_range_min,y_range_max):

            # find the virtual tilted plane that intersects the interpolated plane 
            virtual_plane = y - z/tantheta

            # find raw data planes that surround the virtual plane
            plane_before = np.int64(np.floor(virtual_plane/pixel_step))
            plane_after = np.int64(plane_before+1)

            # continue if raw data planes are within the data range
            if ((plane_before>=0) and (plane_after<num_images)):
                
                # find distance of a point on the interpolated plane to plane_before and plane_after
                l_before = virtual_plane - plane_before * pixel_step
                l_after = pixel_step - l_before
                
                # determine location of a point along the interpolated plane
                za = z/sintheta
                virtual_pos_before = za + l_before*costheta
                virtual_pos_after = za - l_after*costheta

                # determine nearest data points to interpolated point in raw data
                pos_before = np.int64(np.floor(virtual_pos_before))
                pos_after = np.int64(np.floor(virtual_pos_after))

                # continue if within data bounds
                if ((pos_before>=0) and (pos_after >= 0) and (pos_before<ny-1) and (pos_after<ny-1)):
                    
                    # determine points surrounding interpolated point on the virtual plane 
                    dz_before = virtual_pos_before - pos_before
                    dz_after = virtual_pos_after - pos_after

                    # compute final image plane using orthogonal interpolation
                    output[z,y,:] = (l_before * dz_after * data[plane_after,pos_after+1,:] +
                                    l_before * (1-dz_after) * data[plane_after,pos_after,:] +
                                    l_after * dz_before * data[plane_before,pos_before+1,:] +
                                    l_after * (1-dz_before) * data[plane_before,pos_before,:]) /pixel_step


    # return output
    return output


#timepoint zero
img_path = "../nazar_deskew_GUV/GUV_raw_t0.tif"
img_t0 = imread(img_path)

voxel_size_x_in_microns = 0.1773018
voxel_size_z_in_microns = 0.5
deskewing_angle_in_degrees = 49

#scan direction is bottom to top so flip image in y
rot_img = np.flip(img_t0,axis=1)

deskewed = opm_deskew(rot_img,
                          deskewing_angle_in_degrees,
                          voxel_size_z_in_microns,
                          voxel_size_x_in_microns
                          )

#plot central XY, XZ, and YZ slices of the 3D image
shp = deskewed.shape

import matplotlib.pyplot as plt
fig,axes = plt.subplots(1,3)

axes[0].imshow(deskewed[int(shp[0]/2)],cmap='gray')
axes[1].imshow(deskewed[:,int(shp[1]/2)],cmap='gray')
axes[2].imshow(deskewed[:,:,int(shp[2]/2)],cmap='gray')

axes[0].set_title("XY")
axes[1].set_title("XZ")
axes[2].set_title("YZ")
plt.show()
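The deskewed grid from this routine should be isotropic at the camera pixel size, so a quick sanity check is to compute the expected output shape directly from the acquisition parameters. A minimal sketch of that check (the input stack shape below is hypothetical - substitute the real one):

import numpy as np

num_images, ny, nx = 500, 256, 1024                 # hypothetical raw stack shape
theta, distance, pixel_size = 49.0, 0.5, 0.1773018  # parameters from above

pixel_step = distance / pixel_size                  # ~2.82 camera pixels per plane
scan_end = num_images * pixel_step                  # total scan length in pixels
final_nz = int(np.ceil(ny * np.sin(np.deg2rad(theta))))
final_ny = int(np.ceil(scan_end + ny * np.cos(np.deg2rad(theta))))
print((final_nz, final_ny, nx))                     # expected opm_deskew output shape

If the shape checks out but the GUV still looks elliptical in the orthogonal views, the problem is in the interpolation or the flip direction rather than in the display scaling.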

As a temporary fix, I've recommended this code: clEsperanto/pyclesperanto_prototype#278 (comment)

Thanks

Pradeep

Code to read affine transforms for registered data

@ptbrown1729

I added return_affine_xform to data_io.py to read back out the affine transforms that register all tiles to a global coordinate system and across iterative labeling rounds as determined by BigStitcher.

The order of the transforms in the list returned by return_affine_xform for each tile should be:

  • registration across rounds - anchored by round 0, y=0, z=0
  • registration within a round - anchored by y=0, z=0 in each round
  • flip x-axis and shift to the other side of the image
  • place into stage coordinates
  • pixel calibration (uniform matrix)

This demo code should give you a list of the affine transforms in reverse order, newest first. Right now the data has to be flipped between Python and Fiji, which is why some of the transforms look odd.

from data_io import return_affine_xform
from pathlib import Path

bdv_xml_path = Path('/mnt/opm2/20210908/deskew_flatfield_output/bdv/16plex_lung_bdv.xml')
# select labeling round (r_idx), tile (y_idx, z_idx), and the total number of z positions
affine_xforms = return_affine_xform(bdv_xml_path,r_idx=1,y_idx=1,z_idx=0,total_z_pos=2)
print(affine_xforms)

We'll still need to work out exactly how the (0,0,0) pixel that anchors everything maps between Java and Python, but this should be enough for you to register localizations across rounds in the coverslip coordinate system.
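If it helps, here is a minimal sketch (mine, not part of data_io.py) of collapsing that list into a single matrix and applying it to a localization. It assumes each entry is a 3x4 or 4x4 row-major affine in homogeneous coordinates - verify the layout against the BDV XML before relying on it:

import numpy as np

def compose_affines(xforms):
    """Collapse a newest-first list of affines into one 4x4 matrix."""
    total = np.eye(4)
    for xf in xforms:                        # newest (applied last) first
        m = np.eye(4)
        m[:3, :] = np.asarray(xf, dtype=float).reshape(-1, 4)[:3, :]
        total = total @ m                    # oldest ends up applied first
    return total

M = compose_affines(affine_xforms)
xyz_global = (M @ np.array([10.0, 20.0, 5.0, 1.0]))[:3]  # map one pixel-space point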

FYI - I'm working towards merging the iterative localization branch. My goal is to have a single reconstruction code path that auto-detects what kind of acquisition was run based on the metadata.

Consolidate modular code among various efforts

We should think about how best to consolidate metadata parsing, dataset loading, viewing, and other modular functions across the various ongoing efforts. There will be more changes on the acquisition side soon to allow users to set everything up in Micro-Manager and, hopefully, view a computational deskew preview of their FOV to aid in setting up the microscope.
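One way to frame the shared pieces (the names here are purely hypothetical, just to sketch the idea):

from typing import Protocol
import numpy as np

class OPMDataset(Protocol):
    """Hypothetical common interface the various efforts could share."""

    def metadata(self) -> dict:
        """Parsed acquisition metadata (scan type, stage positions, etc.)."""
        ...

    def read_tile(self, r_idx: int, y_idx: int, z_idx: int, ch_idx: int) -> np.ndarray:
        """One raw tile as a (planes, ny, nx) array."""
        ...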

Also - I noticed that a couple of people forked the repo recently. I will make an effort to keep the master branch more in sync with important changes from here on out.

Update napari-control to use new pymmcore-plus and napari-micromanager

Hi @AlexCoul -

There has been a lot of work over at pymmcore-plus and napari-micromanager on threading and cleaning up the shared memory model.

Adapting the OPM control code to take advantage of these changes might help clean up the intermittent acquisition errors in my custom Napari-based control code.

I suggest looking through @ptbrown1729's fork of napari-micromanager for the SIM-ODT as a place to start. We should lay out a plan to build around that fork and create new OPM-specific widgets instead of the custom Napari solution that I created.

We should also make the work general enough that we can control both the new low NA OPM and epi-fluorescence MERFISH scope with minimal effort.

I see this as (a rough sketch of one such widget follows the list):

  • OPM mirror-scanning widget that sets up the lasers, camera, and DAQ sequence, then executes it
  • OPM stage-scanning widget that sets up the stage, lasers, camera, and DAQ sequence, then executes it
  • OPM O2-O3 focus-maintenance widget that sets up the shutter, O3 stage, and algorithmic parameters for optimizing the O2-O3 spacing
  • standard microscope stage-tiling widget that sets up the stage, lasers, camera, and DAQ sequence, then executes it
  • iterative multiplexed chemistry widget that sets up the flow controller and runs the appropriate scan type using user settings
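As a concrete starting point, here is a rough sketch of what the first widget could look like on top of pymmcore-plus (magicgui used for brevity; the DAQ sequencing and parameter names are placeholders, not working acquisition code):

from magicgui import magicgui
from pymmcore_plus import CMMCorePlus

core = CMMCorePlus.instance()

@magicgui(call_button="Run mirror scan")
def opm_mirror_scan(exposure_ms: float = 10.0, n_positions: int = 200):
    """Sketch of an OPM mirror-scanning acquisition.

    A real version would program the DAQ waveform that sweeps the galvo
    and blanks the lasers, then run the camera in a hardware-triggered
    sequence instead of this software-timed snap loop."""
    core.setExposure(exposure_ms)
    stack = []
    for _ in range(n_positions):
        core.snapImage()                  # software-timed placeholder
        stack.append(core.getImage())
    return stack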
