
bioimageanalysisnotebooks's Introduction

Bio-Image Analysis Notebooks


This repository contains a collection of Python Jupyter notebooks explaining bio-image analysis to a broad audience, focusing on life scientists working with three-dimensional fluorescence microscopy to analyze cells and tissues. The content is available at

https://haesleinhuepf.github.io/BioImageAnalysisNotebooks

It is maintained using JupyterLab and built using Jupyter Book.

To edit this book, install dependencies like this:

pip install pyclesperanto-prototype
pip install jupyterlab
pip install jupyter-book
pip install jupyterlab-spellchecker

git clone https://github.com/haesleinhuepf/BioImageAnalysisNotebooks
cd BioImageAnalysisNotebooks
jupyter lab

To build the book, you can run this from the same folder (tested on macOS only):

chmod u+x ./build.sh
./build.sh

To clear the build, e.g. before committing using git, run this:

chmod u+x ./clean.sh
./clean.sh

Acknowledgements

R.H. acknowledges support by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy—EXC2068–Cluster of Excellence Physics of Life of TU Dresden. This project has been made possible in part by grant numbers 2021-240341, 2021-237734 and 2022-252520 from the Chan Zuckerberg Initiative DAF, an advised fund of the Silicon Valley Community Foundation.

bioimageanalysisnotebooks's People

Contributors

amgfernandes, elisabethkugler, guiwitz, haesleinhuepf, iimog, marabuuu, rayanirban, shannon-e-taylor, thawn, yitengdang, zoccoler


bioimageanalysisnotebooks's Issues

contributions

What an amazing resource @haesleinhuepf ! I had in mind to extend some of my courses in this way but of course never found the time. I'm planning a workshop on bioimage processing in the coming months and, instead of reinventing the wheel, I'm considering using your material as a basis and also taking the opportunity to contribute to your project. I read that you encourage people to contribute new notebooks and I have a few questions:

  • is there any specific topic that you think needs particular attention?
  • do you have any roadmap in mind (e.g. things you want to do next)?
  • are you also happy with people improving the existing notebooks?

I just want to avoid working on something only to then find out that you or someone else has worked on the same thing. In general, I'm also curious whether you want to keep tight control of the content (which would be perfectly fine!) or if you intend this as a collaborative project. In the latter case, maybe potential contributions or proposals for improvement of specific notebooks could be discussed in issues?

Issue on page layouts

Hey @haesleinhuepf

Low-priority issue, but when I try to browse the notebooks on a widescreen monitor that is rotated (i.e. tall and narrow), the table of contents panel on the left-hand side does not show up. Moving the browser to a monitor in widescreen orientation reveals the table of contents.

Cheers,
Kyle

P.S. very happy to see you get new awesome opportunities in Leipzig :)

Navbar not working on iPad in portrait mode

I'm mainly reading the notebooks on my iPad in portrait mode. A couple of weeks ago, the navbar stopped working. When I click on the icon at the top left, the screen turns dim but the navbar does not open. It works in landscape mode, in the browser and on my phone. But I'm able to reproduce the problem in Firefox Mobile view (Ctrl+Shift+M) when selecting iPad there.

I tracked the issue down to this commit: 2cd4c35
Indeed, when upgrading to sphinx-book-theme version 1.0.1, the sidebar works on my iPad. However, the search icon is then back in the middle rather than in the navbar.

General questions

I went a bit through the (great!) material and before trying to improve here and there, I have a few questions:

  • Is there anywhere a plan of the entire course (more detailed than what is explained here)? Maybe that could help structure the course a bit better. If I take the example of the segmentation chapter, there's a mix of basic notebooks, notebooks going through a series of functions, and notebooks showing complete workflows. Maybe there could be sub-chapters?
  • pyclesperanto: I noticed that in most notebooks introducing new concepts (e.g. thresholding) both scikit-image and pyclesperanto examples are given (see the short sketch after this list). I think that's great as it gives people a choice (it also guarantees that people can at least run the non-GPU code in any case). Some more advanced examples, e.g. on nuclei quantification, only exist with pyclesperanto. Would it be accepted (or desired) to have equivalent notebooks using just the classical packages (scikit-image, scipy etc.)?
  • the blobs picture is used in lots of places. Is that by design or was it just a first step? I think it generally makes things more interesting to show even simple operations like labelling on real images.
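
For reference, a minimal sketch of such a parallel scikit-image/pyclesperanto pair. This is an illustration, not the course's exact code; it assumes a local copy of the blobs image and an OpenCL-capable device for the pyclesperanto part:

from skimage.io import imread
from skimage.filters import threshold_otsu
import pyclesperanto_prototype as cle

image = imread("blobs.tif")  # hypothetical local copy of the blobs example image

# scikit-image: compute a scalar threshold, then compare
binary_skimage = image > threshold_otsu(image)

# pyclesperanto: push the image to the GPU and threshold in one call
binary_cle = cle.threshold_otsu(cle.push(image))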

A few things I mentioned in other issues, but as they are general questions I put them back here for completeness:

  • what's the plan regarding pyclesperanto's imshow? I just think it's a bit dangerous to depend on pyclesperanto just for image display (even in notebooks where pyclesperanto is not used). Again, my (biased) opinion is to use microfilm.
  • the numbering both of chapters and of notebooks inside chapters is often strange. Do we avoid fixing this (so that the _toc.yml doesn't have to change all the time) until things settle down, or should we fix things bit by bit as we encounter them?
  • should the data in general be accessed by URL or by path (see the sketch after this list)? URLs ensure that it always works, though only with an internet connection. Paths don't need a connection but only work when cloning the repo (tricky e.g. on Colab).
  • should we make the course work on Colab and/or on Binder?
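
As an illustration of the two options, scikit-image's imread can read directly from a link; the URL below assumes the blobs image lives at data/blobs.tif in this repository, so adjust it if the file is located elsewhere:

from skimage.io import imread

# via URL - works anywhere, but needs an internet connection
image = imread("https://github.com/haesleinhuepf/BioImageAnalysisNotebooks/raw/main/data/blobs.tif")

# via relative path - works offline, but only inside a clone of the repository
image = imread("data/blobs.tif")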

Immunofluorescence data

Hi,

Thank you for the wonderful tutorial. I am working on some immunofluorescence data and this is my first time looking at it; it would be great if you could provide some insights on my workflow. I am doing this to quantify immune cells (three channels corresponding to three stains) in biopsy samples.
I have three channels and DAPI. I am trying to run your notebook for cell segmentation on one of the channel images to look at the stained cells. One caveat: my files are RGB instead of the CMY shown in QuPath; I wonder what difference that would make?
Should I be using the overlaid image (which has all three colors) or the individual tiff files?

I basically loaded my tif files and ran the commands from https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20b_deep_learning/cellpose.html.

Would appreciate any suggestions.

Thank you!

Truncated content in "Computing with images" notebook

Towards the end of the "Computing with images" notebook is a code cell that seems to contain additional (truncated) content:

Finally we might want to have a look at the actual distribution of pixel values. For this


I'm not sure whether the rest of the content is missing or whether it was an active decision to remove that content (and this cell is just a leftover). In the latter case, I'm happy to send a pull request that just removes this cell.

{
"cell_type": "code",
"execution_count": null,
"id": "371c1e76-38f8-40d8-b983-3ce79e594a4a",
"metadata": {},
"outputs": [],
"source": [
"Finally we might want to have a look at the actual distribution of pixel values. For this "
]
},

PS: Thanks for this amazing learning and teaching material 💙

How do PRs work to notebooks?

Sorry for the stupid question, but I can't figure this out.
How does making a PR to a notebook work?
I can edit and change a code cell, but what about the output?

Slice shape issue on page /20_image_segmentation/Segmentation_3D.html

Hello,

Thank you for your work. There is always a notebook for what I am looking for!

In this notebook: https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/Segmentation_3D.html
in the section on intensity and background correction, I think the slice should be

a_slice = cle.create([resampled.shape[1], resampled.shape[2]])


With the example data the change is not obvious, but with less cubic data I think the original code will crop part of the image.
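
A minimal plain-numpy sketch of the reasoning, independent of the notebook's actual data (the shapes below are made up):

import numpy as np

# a non-cubic stack with axis order (z, y, x)
resampled = np.zeros((50, 120, 80))

# a single z-slice has shape (y, x) = (shape[1], shape[2]),
# so a 2D buffer matching one slice should use those two dimensions
a_slice = np.zeros((resampled.shape[1], resampled.shape[2]))

assert a_slice.shape == resampled[0].shape  # (120, 80)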

Questions on page /29_algorithm_validation/validate-spot-counting.html

Hello,

Thank you for your work.
I have two questions concerning the 29_algorithm_validation/validate-spot-counting

Maximum or sum projection?

def compute_true_positives(distance_matrix, thresh): 
    dist_mat = cle.push(distance_matrix)
    count_matrix = dist_mat < thresh

    ...

    detected = np.asarray(cle.maximum_y_projection(count_matrix))[0, 1:]
    # [0, 1:] is necessary to get rid of the first column which corresponds to background

    ...

    # ambiguous matches occur when one annotation corresponds to multiple detected spots 
    ambiguous_matches = len(detected[detected>1])

    ...

I think maximum_y_projection should be sum_y_projection, otherwise we cannot detect ambiguous matches (everything will be 0 or 1 using the maximum projection).
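
A small made-up numpy example of the difference (not the notebook's code, just the principle):

import numpy as np

# hypothetical boolean match matrix: rows = detections, columns = annotations,
# True where a detection lies within the distance threshold of an annotation
count_matrix = np.array([
    [True,  False, False],
    [True,  False, False],   # annotation 0 is matched by two detections (ambiguous)
    [False, True,  False],   # annotation 1 is matched by exactly one detection
])                           # annotation 2 is matched by none

max_projection = count_matrix.max(axis=0).astype(int)  # [1, 1, 0] - always 0 or 1
sum_projection = count_matrix.sum(axis=0)              # [2, 1, 0] - reveals the ambiguous match

print((max_projection > 1).sum())  # 0 - ambiguous matches can never be found this way
print((sum_projection > 1).sum())  # 1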

Who should be in the column position: detected or annotation?

Consider that annotations contains 16 points and detected_spots contains 31 points.

When distance_matrix is defined:

distance_matrix = cle.generate_distance_matrix(detected_spots, annotations.T)

It will put the detections in the columns of the matrix (its shape will be (16, 31)).

But for the F1 score quantification:

print(f'We are detecting {distance_matrix.shape[0]} cells when there are {distance_matrix.shape[1]}')

There, the detections are in the rows, not in the columns. Also, in def compute_true_positives(distance_matrix, thresh): the y_projection also suggests that the annotations should be in the columns.

If distance_matrix.T is used from the definition onwards, maybe the legend of this figure will need to change.

Issue on page /18_image_filtering/03_background_removal.html

Hi,

Thank you for these notes. I noticed a minor typo ("Furthwermore") in this sentence:

"There are other techniques for background subtraction such as the top-hat. Furthwermore, the Difference of Gaussians (DoG) is a technique for combined denoising and background removal."

More interestingly, would it be possible to add some benchmarks regarding how fast different algorithms are? This could be useful to devs interested in fast yet good-enough implementations. Also, perhaps a table with a few pros/cons of different algorithms might be useful.

Apologies if these are tackled elsewhere and I do appreciate this awesome work.

NelsonGon

Logic error when trying to open image with pyclesperanto_prototype

Getting the following error when trying to open a single channel tiff image with cle.imshow():

Command:

cle.imshow(single_channel_image, color_map='Greys_r')

Output:

---------------------------------------------------------------------------
LogicError                                Traceback (most recent call last)
Cell In[6], line 3
      1 single_channel_image = multichannel_image[:, :, 0]
----> 3 cle.imshow(single_channel_image, color_map='Greys_r')

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_plugin_function.py:65, in plugin_function.<locals>.worker_function(*args, **kwargs)
     63 for key, value in bound.arguments.items():
     64     if is_image(value) and key in sig.parameters and sig.parameters[key].annotation is Image:
---> 65         bound.arguments[key] = push(value)
     66     if key in sig.parameters and sig.parameters[key].annotation is Image and value is None:
     67         sig2 = inspect.signature(output_creator)

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_push.py:41, in push(any_array)
     38 if hasattr(any_array, 'shape') and hasattr(any_array, 'dtype') and hasattr(any_array, 'get'):
     39     any_array = np.asarray(any_array.get())
---> 41 return Backend.get_instance().get().from_array(np.float32(any_array))

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_opencl_backend.py:44, in OpenCLBackend.from_array(self, *args, **kwargs)
     43 def from_array(self, *args, **kwargs):
---> 44     return OCLArray.from_array(*args, **kwargs)

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_pycl.py:69, in OCLArray.from_array(cls, arr, *args, **kwargs)
     66 @classmethod
     67 def from_array(cls, arr, *args, **kwargs):
     68     assert_supported_ndarray_type(arr.dtype.type)
---> 69     queue = get_device().queue
     70     return OCLArray.to_device(queue, prepare(arr), *args, **kwargs)

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_device.py:43, in get_device()
     41 def get_device() -> Device:
     42     """Get the current device GPU class."""
---> 43     return _current_device._instance or select_device()

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_device.py:72, in select_device(name, dev_type, score_key)
     68 except:
     69     pass
---> 72 device = filter_devices(name, dev_type, score_key)[-1]
     73 if _current_device._instance and device == _current_device._instance.device:
     74     return _current_device._instance

File ~/mambaforge/envs/devbio-napari-env/lib/python3.9/site-packages/pyclesperanto_prototype/_tier0/_device.py:101, in filter_devices(name, dev_type, score_key)
     89 """Filter devices based on various options
     90 
     91 :param name: First device that contains ``name`` will be returned, defaults to None
   (...)
     98 :rtype: List[cl.Device]
     99 """
    100 devices = []
--> 101 for platform in cl.get_platforms():
    102     for device in platform.get_devices():
    103         if name and name.lower() in device.name.lower():

LogicError: clGetPlatformIDs failed: PLATFORM_NOT_FOUND_KHR

Matplotlib Image Figures

Hi Robert,

Super cool collection of notebooks!

One of my tasks is to teach users about bio-image analysis and I am in awe of the beauty of this collection. Thanks a lot for putting it together and making it publicly available. Instead of building my own collection I will totally use this and try to contribute where I can.

By coincidence, one of the first things I did was a notebook and a figure_utils.py that explain/help users plot their image data at a given print/display resolution: https://github.com/fmi-faim/py_course/blob/main/01-figures-in-python/Figures.ipynb

The target audience would be users who want to save results at a requested resolution, with a scalebar, for publications or posters.
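
(For illustration only, not Tim-Oliver's figure_utils.py: a rough matplotlib sketch of fixing the physical size and dpi so that the saved image comes out at a requested print resolution.)

import numpy as np
import matplotlib.pyplot as plt

image = np.random.random((512, 512))          # placeholder image data

width_inch, height_inch, dpi = 3.0, 3.0, 300  # e.g. a 3 x 3 inch panel at 300 dpi

fig = plt.figure(figsize=(width_inch, height_inch), dpi=dpi)
ax = fig.add_axes([0, 0, 1, 1])               # axes fill the whole figure, no margins
ax.imshow(image, cmap="gray")
ax.axis("off")
fig.savefig("panel.png", dpi=dpi)             # 900 x 900 pixels = 3 inches x 300 dpi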

I could find some plotting notebooks, but not one tackling this specific area. If you agree I would like to submit this work as a PR to the repo.

Currently, figure_utils.py is a standalone collection of utility functions not published via pip. Would you recommend putting it on pip?

Best,
Tim-Oliver

PS: The license can be changed, currently it is MIT.

napari and nD visualization

The chapter "Image visualization in 3D" actually presents nD visualization cases rather than just 3D. Should it be renamed? Also, the notebook on napari only uses the blobs image and the explanations are quite limited. Should this notebook be extended and improved? It is also a bit redundant with the one presented in the GUI chapter. Should just one be kept? And should there be one notebook that explains all the basic ideas of napari, or would that be redundant with other tutorials/resources? I sort of like the idea of having all the minimum material in one place, but that's of course up for debate.

Intro questions

Hi @haesleinhuepf,

I have been looking through the introductory part and have a few questions. I don't want to create a separate issue for each of them or make a PR for each before discussing, so I group them here. If you prefer separate issues let me know. And sorry in advance if some of the points are already explained somewhere else and I just missed them:

  • in the basic types notebooks you show how to combine strings and numbers. Should 1-2 examples be added with f-strings (see the sketch after this list)? I find them extremely useful and now favour them, even with beginners, over the classic my_text + str(a) approach.
  • the masking numpy arrays notebook seems oddly placed, as Numpy hasn't been introduced yet at that point. Maybe it should be moved?
  • there's never an introduction to packages. The first time one is used is when the math module appears, but without much comment. Should there be a short notebook introducing this, and in particular the different variants of import like from xx import yy, import xx etc.?
  • in the custom functions notebook, you explain how to document functions but don't show how to document inputs/outputs (e.g. numpy-style docstrings). I think that's actually quite useful. Similarly, in that notebook there's no explanation of how functions can take default parameter values and then be called in different ways, e.g. myfun(3, 4), myfun(a=3, b=4) etc. (see the sketch after this list). I often see people confused about this, so I think it would be worthwhile to add.
  • in the introduction to image processing, there's never a real intro to Numpy; things are added bit by bit and people are referred to your other course Bio-image_Analysis_with_Python. Is that intentional? I just fear that people might be a bit confused, and I think it would be nice to have one short notebook just about Numpy (without all the indexing, cropping etc., which you then nicely introduce using images). Just:
    • what is an array
    • why is it useful (simple operations on all pixels in one line of code without for loops)
    • how to create basic arrays (moving the intro to np.zeros etc. that is currently in the intro)
    • what other things one can do with numpy (functions, random generators etc.)
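
(A rough illustration of the f-string and default-parameter points above; the function and values are made up.)

import numpy as np

# f-strings versus classic concatenation
a = 42
print("The answer is " + str(a))       # classic approach
print(f"The answer is {a}")            # f-string: shorter and easier to read

# default parameter values, keyword arguments and a numpy-style docstring
def rescale(image, factor=2, offset=0):
    """Multiply an image by a factor and add an offset.

    Parameters
    ----------
    image : ndarray
        Input image.
    factor : float, optional
        Multiplicative factor, by default 2.
    offset : float, optional
        Additive offset, by default 0.

    Returns
    -------
    ndarray
        The rescaled image.
    """
    return image * factor + offset

image = np.ones((3, 3))
rescale(image)                      # uses the defaults
rescale(image, 3)                   # positional argument
rescale(image, factor=3, offset=1)  # keyword arguments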

Ok, I think that's long enough... Let me know what you think and I'll make PRs if you agree with any of those!

Cheers,
Guillaume

Various suggestions from VolkerH

Hi @haesleinhuepf ,

How could I miss this? I have been working with Jupyter Book myself a lot lately (albeit for proprietary commercial work) and love it.

I am starting this issue for various (hopefully helpful and constructive) suggestions. I will add to it as I read through the various notebooks, so please don't close it.

In the chapter on programming style and magic numbers you give this example:

# enter the expected radius of nuclei here, in pixel units
approximate_nuclei_radius = 3

This would be a good place to mention that the convention in Python for variables representing constant values is to USE ALL CAPS, e.g.

# enter the expected radius of nuclei here, in pixel units
APPROXIMATE_NUCLEI_RADIUS = 3

If you later assign to an ALL CAPS variable somewhere in your code, that should ring an alarm bell.
Also, while we're at it, it is a good idea to include units in variable names (if they are fixed and not configurable), e.g.

APPROXIMATE_NUCLEI_RADIUS_PX = 3
# vs
APPROXIMATE_NUCLEI_RADIUS_UM = 12
# vs 
APPROXIMATE_NUCLEI_RADIUS_MM = 0.012

This is of course also useful for variables that are not CONSTANT, such as angle_rad, mass_kg, etc.

Jupytext

Maybe consider converting the source of this book to jupytext notebooks, either in MyST Markdown or py:percent format. That way it would be easier to submit PRs by just doing a few edits in the source with a text editor. It is not quite as trivial to edit the .ipynb files, as they also contain the cell outputs, and potential collaborators may not have the same environment (e.g. lacking a GPU) to regenerate the .ipynb with outputs.
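
(For illustration, a sketch of what a notebook could look like in py:percent form; the cell contents are made up.)

# %% [markdown]
# # Example notebook
# Markdown cells become commented Markdown blocks in the .py file.

# %%
# Code cells are delimited by "# %%" markers and diff nicely in PRs.
import numpy as np
image = np.zeros((10, 10))
print(image.mean())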

Error in importing the package

When I use
import pyclesperanto_prototype as cle
I get this error:


TypeError Traceback (most recent call last)
/tmp/ipykernel_9974/4056458295.py in
1 from skimage.io import imread
----> 2 from pyclesperanto_prototype import imshow
3 import pyclesperanto_prototype as cle
4 import matplotlib.pyplot as plt

~/.local/lib/python3.8/site-packages/pyclesperanto_prototype/__init__.py in
----> 1 from ._tier0 import *
2 from ._tier1 import *
3 from ._tier2 import *
4 from ._tier3 import *
5 from ._tier4 import *

~/.local/lib/python3.8/site-packages/pyclesperanto_prototype/_tier0/__init__.py in
34 from ._push import push
35 from ._push import push as asarray
---> 36 from ._plugin_function import plugin_function
37 from ._types import Image
38 from ._cl_info import cl_info

~/.local/lib/python3.8/site-packages/pyclesperanto_prototype/_tier0/_plugin_function.py in
6
7 from ._create import create_like
----> 8 from ._types import Image, is_image
9 from ._push import push
10

~/.local/lib/python3.8/site-packages/pyclesperanto_prototype/_tier0/_types.py in
1 import numpy as np
----> 2 from ._pycl import OCLArray, _OCLImage
3 import pyopencl as cl
4 from typing import Union
5

~/.local/lib/python3.8/site-packages/pyclesperanto_prototype/_tier0/_pycl.py in
4
5 import numpy as np
----> 6 import pyopencl as cl
7 from pyopencl import characterize
8 from pyopencl import array

~/.local/lib/python3.8/site-packages/pyopencl/__init__.py in
28
29 # must import, otherwise dtype registry will not be fully populated
---> 30 import pyopencl.cltypes # noqa: F401
31
32 import logging

~/.local/lib/python3.8/site-packages/pyopencl/cltypes.py in
20
21 import numpy as np
---> 22 from pyopencl.tools import get_or_register_dtype
23 import warnings
24

~/.local/lib/python3.8/site-packages/pyopencl/tools.py in
134
135 import numpy as np
--> 136 from pytools import memoize, memoize_method
137 from pyopencl._cl import bitlog2, get_cl_header_version # noqa: F401
138 from pytools.persistent_dict import KeyBuilder as KeyBuilderBase

~/.local/lib/python3.8/site-packages/pytools/__init__.py in
811
812
--> 813 class keyed_memoize_on_first_arg(Generic[T, P, R]): # noqa: N801
814 """Like :func:memoize_method, but for functions that take the object
815 in which memoization information is stored as first argument.

/usr/local/lib/python3.8/typing.py in inner(*args, **kwds)
259 except TypeError:
260 pass # All real errors (not unhashable args) are raised below.
--> 261 return func(*args, **kwds)
262 return inner
263

/usr/local/lib/python3.8/typing.py in __class_getitem__(cls, params)
888 # Generic and Protocol can only be subscripted with unique type variables.
889 if not all(isinstance(p, TypeVar) for p in params):
--> 890 raise TypeError(
891 f"Parameters to {cls.__name__}[...] must all be type variables")
892 if len(set(params)) != len(params):

TypeError: Parameters to Generic[...] must all be type variables
