Bayesian Tracker (btrack) 🔬💻

btrack is a Python library for multi-object tracking, used to reconstruct trajectories in crowded fields. Here, we use a probabilistic network of information to perform the trajectory linking. This method uses both spatial and appearance information for track linking.

The tracking algorithm assembles reliable sections of track that do not contain splitting events (tracklets). Each new tracklet initiates a probabilistic model, and utilises this to predict future states (and error in states) of each of the objects in the field of view. We assign new observations to the growing tracklets (linking) by evaluating the posterior probability of each potential linkage from a Bayesian belief matrix for all possible linkages.
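
The assignment step can be illustrated with a toy example (this is not btrack's implementation; the Gaussian likelihood, the value of sigma, and all coordinates below are assumptions for illustration only):

```python
import numpy as np

# Toy illustration of the linking step: score each new observation against
# each tracklet's predicted position with a Gaussian likelihood, normalise
# rows into a belief matrix, and link each tracklet to the observation with
# the maximum posterior probability.
predicted = np.array([[10.0, 10.0], [50.0, 50.0]])   # tracklet state predictions
observed = np.array([[49.0, 51.0], [11.0, 9.0]])     # new observations

d2 = ((predicted[:, None, :] - observed[None, :, :]) ** 2).sum(axis=-1)
likelihood = np.exp(-d2 / (2 * 5.0 ** 2))            # assumed sigma = 5 px
belief = likelihood / likelihood.sum(axis=1, keepdims=True)

links = belief.argmax(axis=1)
print(links)  # [1 0]: tracklet 0 -> observation 1, tracklet 1 -> observation 0
```

In the real tracker the belief matrix is maintained over all active tracklets and updated as objects enter and leave the field of view; the sketch above only shows the shape of the computation.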

The tracklets are then assembled into tracks by using multiple hypothesis testing and integer programming to identify a globally optimal solution. The likelihood of each hypothesis is calculated for some or all of the tracklets based on heuristics. The global solution identifies a sequence of high-likelihood hypotheses that accounts for all observations.
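
The selection step can be sketched with a toy brute-force stand-in for the integer program (the hypotheses and their log-likelihood values are invented for illustration; btrack generates hypotheses such as termination, linking, and division from heuristics):

```python
from itertools import combinations

# Pick the set of hypotheses with the highest total log-likelihood such that
# every tracklet is explained exactly once. Each hypothesis is a pair of
# (log_likelihood, frozenset of tracklet ids it explains).
hypotheses = [
    (-1.0, frozenset({0})),     # tracklet 0 terminates
    (-1.2, frozenset({1})),     # tracklet 1 terminates
    (-0.1, frozenset({0, 1})),  # tracklet 0 links to tracklet 1
]
tracklets = {0, 1}

best = None
for r in range(1, len(hypotheses) + 1):
    for combo in combinations(hypotheses, r):
        covered = [t for _, ts in combo for t in ts]
        # each tracklet must be covered exactly once, and all must be covered
        if len(covered) == len(set(covered)) and set(covered) == tracklets:
            score = sum(ll for ll, _ in combo)
            if best is None or score > best[0]:
                best = (score, combo)

print(best[0])  # -0.1: the linking hypothesis beats the two terminations (-2.2)
```

Brute force is exponential in the number of hypotheses; the integer programming formulation is what makes the same selection tractable at scale.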

We developed btrack for cell tracking in time-lapse microscopy data.

Installation

btrack has been tested with Python on x86_64 macOS >= 11, Ubuntu >= 20.04 and Windows >= 10.0.17763. Note that btrack <= 0.5.0 was built against an earlier version of Eigen, which required C++11; as of btrack == 0.5.1 it is built against C++17.

Installing the latest stable version

pip install btrack

Usage examples

Visit btrack documentation to learn how to use it and see other examples.

Cell tracking in time-lapse imaging data

We provide integration with napari, including arboretum, a plugin for graph visualization.

[Video of tracking, showing automatic lineage determination]


Development

The tracker and hypothesis engine are mostly written in C++ with a Python wrapper. If you would like to contribute to btrack, you will need to install the latest version from GitHub. Follow the instructions on our developer guide.


Citation

More details of how this type of tracking approach can be applied to tracking cells in time-lapse microscopy data can be found in the following publications:

Automated deep lineage tree analysis using a Bayesian single cell tracking approach
Ulicna K, Vallardi G, Charras G and Lowe AR.
Front. Comput. Sci. (2021)
doi:10.3389/fcomp.2021.734559

Local cellular neighbourhood controls proliferation in cell competition
Bove A, Gradeci D, Fujita Y, Banerjee S, Charras G and Lowe AR.
Mol. Biol. Cell (2017)
doi:10.1091/mbc.E17-06-0368

@ARTICLE {10.3389/fcomp.2021.734559,
   AUTHOR = {Ulicna, Kristina and Vallardi, Giulia and Charras, Guillaume and Lowe, Alan R.},
   TITLE = {Automated Deep Lineage Tree Analysis Using a Bayesian Single Cell Tracking Approach},
   JOURNAL = {Frontiers in Computer Science},
   VOLUME = {3},
   PAGES = {92},
   YEAR = {2021},
   URL = {https://www.frontiersin.org/article/10.3389/fcomp.2021.734559},
   DOI = {10.3389/fcomp.2021.734559},
   ISSN = {2624-9898}
}
@ARTICLE {Bove07112017,
  author = {Bove, Anna and Gradeci, Daniel and Fujita, Yasuyuki and Banerjee,
    Shiladitya and Charras, Guillaume and Lowe, Alan R.},
  title = {Local cellular neighborhood controls proliferation in cell competition},
  volume = {28},
  number = {23},
  pages = {3215-3228},
  year = {2017},
  doi = {10.1091/mbc.E17-06-0368},
  URL = {http://www.molbiolcell.org/content/28/23/3215.abstract},
  eprint = {http://www.molbiolcell.org/content/28/23/3215.full.pdf+html},
  journal = {Molecular Biology of the Cell}
}

btrack's People

Contributors

alessandrofelder, dpshelio, dstansby, ianhi, jrussell25, kristinaulicna, nthndy, p-j-smith, paddyroddy, quantumjot, tlambert03, volkerh, zsdqui


btrack's Issues

More details on how to set the algorithm's hyperparameters in the configuration file?

Hello,

First of all, thank you for your work on this very useful library.

I would like to have more details about the algorithm's hyperparameters (especially those of the Hypothesis model) and how to set them in the config file for my specific data. I work on cell tracking, typically with several hundred images and around a hundred cells per image. So far, I've only used the algorithm with the default JSON config file given in the tutorial, and this works fine as long as the number of images remains fairly low (~100). When I go a bit beyond that (~300 images), the "Optimizing" step never ends. This must be due to a poor optimizer configuration, as underlined in Issue #13.

I understand that I should change the Hypothesis Model hyperparameters in the config file, but the problem is that I couldn't find any detailed information about what each hyperparameter does and how it should be set for a given dataset (there is not enough detail in the wiki or in the referenced papers). Did I miss something, or would it be possible to have more information about this?

Thank you!

This runs incredibly slow on my data. How to speed it up?

I'm trying to track about 5000 peaks (2D) per frame for 10 frames. Using trackpy it takes about 10-15 seconds to run, but btrack is still running after 20 minutes. I've tried reducing theta_time and theta_dist in the model ("cellmodel.json") and that doesn't seem to have an effect. What optimization parameters can I change to speed this up? Additionally, I'm running on a 20-core server. Can I take advantage of multiprocessing without writing my own implementation?

Issue decoding h5 files

Hello, I am building a tool to import cell tracking files, process them in Python and visualize them in Blender. I am having some issues decoding the h5 files exported by btrack.

Following the file structure from dataio.py, I understand that the tracks 'map' array is a (K x 2) array of indexes to the 'tracks' array, and that using these indices in the 'tracks' array allows us to extract the relevant indices for the 'coords' array.

Where I am a bit lost is why there are negative integers in the tracks array, and how they relate to the size of the dummy dataset. For every dataset I have queried:

abs(np.min(tracks)) == dummies.shape[0]

For example, a sample slice of 'tracks' using 'map' indices
looks like this:

[ 1545 2335 3635 4695 5727 6774 7787 8851 9906 10983 -1252 12986 14045 15066 16124 17145 18194 19229 20264 21266 22015 -1253 24251 25261 26018 27052 28443 29478 30546 31581 32653 33688 -1254 35627 36369 37463 38801 39846 40875 41946 42986 44003]

What is strange is the pattern of the negative values: they decrease by 1 each time, until they reach a minimum whose magnitude equals the number of rows in the 'dummies' array.

Using these indices to slice the 'coords' array leads to discontinuities of the otherwise temporally continuous slices of the dataset.

For example, here is the corresponding slice from 'coords' from the tracks indices above:

[[  1.       691.25354  305.169     56.560562   0.      ]
 [  2.       690.2381   306.49524   50.862858   0.      ]
 [  3.       679.69147  315.4468    53.95213    0.      ]
 [  4.       671.06665  317.68332   55.6025     0.      ]
 [  5.       658.7258   314.58566   60.326637   0.      ]
 [  6.       654.0179   307.42856   60.498215   0.      ]
 [  7.       657.6908   306.5987    59.42171    0.      ]
 [  8.       656.00934  306.84113   60.632942   0.      ]
 [  9.       656.31055  311.1738    60.31111    0.      ]
 [ 10.       656.6334   308.63986   60.369453   0.      ]
 [237.       613.4513   564.4992    57.711514   0.      ]
 [ 12.       658.06006  307.46738   58.93825    0.      ]
 [ 13.       663.02563  307.453     59.71154    0.      ]
 [ 14.       662.3555   308.1111    59.263332   0.      ]
 [ 15.       661.60986  308.29596   59.485428   0.      ]
 [ 16.       661.6381   307.41904   59.175713   0.      ]
 [ 17.       663.1128   309.82706   59.609776   0.      ]
 [ 18.       658.50543  306.27716   60.4125     0.      ]
 [ 19.       655.3111   304.77777   58.956665   0.      ]
 [ 20.       655.0373   306.6087    59.807144   0.      ]
 [ 21.       654.80743  306.40994   60.79286    0.      ]
 [237.       402.58563  561.1381    57.830387   0.      ]
 [ 23.       655.58093  307.2614    59.981327   0.      ]
 [ 24.       662.2381   307.25714   60.95       0.      ]
 [ 25.       654.6695   309.15967   56.97815    0.      ]
 [ 26.       655.3333   310.1839    59.918964   0.      ]
 [ 27.       661.25793  312.62442   59.680317   0.      ]
 [ 28.       659.13513  313.27927   60.483784   0.      ]
 [ 29.       657.9659   310.66476   60.061363   0.      ]
 [ 30.       657.5599   307.24295   59.415318   0.      ]
 [ 31.       658.72034  307.2523    59.174316   0.      ]
 [ 32.       658.81067  305.4852    60.181065   0.      ]
 [237.       107.055176 557.10345   60.258415   0.      ]
 [ 34.       657.86707  306.13925   60.942722   0.      ]
 [ 35.       657.526    305.88962   60.46461    0.      ]
 [ 36.       657.0847   307.0904    60.579662   0.      ]
 [ 37.       658.86957  305.21738   60.6        0.      ]
 [ 38.       658.6667   307.30362   58.786633   0.      ]
 [ 39.       658.77563  307.19232   59.037018   0.      ]
 [ 40.       659.3985   304.21054   59.661655   0.      ]
 [ 41.       658.49854  306.18552   58.82       0.      ]
 [ 42.       659.33496  306.68216   57.882397   0.      ]]
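
Putting the assumed convention in code (this is inferred from the data above, not confirmed by the btrack docs), negative entries would be resolved against the 'dummies' dataset rather than 'coords':

```python
import numpy as np

# Assumption: a negative entry -k marks a dummy observation and refers to
# row k-1 of the 'dummies' dataset; non-negative entries index 'coords'.
def resolve_track_rows(track_idx, coords, dummies):
    rows = [dummies[-i - 1] if i < 0 else coords[i] for i in track_idx]
    return np.stack(rows)

# toy stand-ins for the h5 datasets
coords = np.array([[0.0, 10.0], [1.0, 11.0], [2.0, 12.0]])
dummies = np.array([[0.5, -1.0], [1.5, -1.0]])
track = [0, -1, 2]  # -1 -> dummies[0]
print(resolve_track_rows(track, coords, dummies))
```

If this convention is right, it would also explain the discontinuities above: the rows pulled from 'coords' by the raw negative values are unrelated objects, not the interpolated dummy positions.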

Thanks in advance for any advice you can provide,

Overlap issue of instance segmentation masks

Hello! Thank you for this great tracker.

I have a question about the segmentation that this tracker accepts. According to the example you gave and what I have tried, it seems I have to put all the instance masks in one layer and pass it to the tracker. However, if there is overlap between two cell instances, the tracker will consider them the same cell. Does this mean I have to remove all overlaps before passing the masks to the tracker? Separation of cell instances is really important to me.

Thank you in advance!
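
A quick way to check whether two instances would merge when flattened into a single label image (a toy numpy check, not btrack code):

```python
import numpy as np

# Flattening overlapping instance masks into one label image silently merges
# the instances, so it is worth detecting overlaps beforehand.
mask_a = np.zeros((5, 5), dtype=int)
mask_b = np.zeros((5, 5), dtype=int)
mask_a[1:3, 1:3] = 1
mask_b[2:4, 2:4] = 1  # overlaps mask_a at pixel (2, 2)

overlap = (mask_a > 0) & (mask_b > 0)
print(overlap.sum())  # 1 -> these two instances would merge if flattened
```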

add citation

Update README to add the reference to the btrack publication

Request more info about setting up the model

More information about setting up this tracking model would be really useful. Specifically, I'd like to know how the system deals with data that doesn't come with equally spaced time steps (is dt = 1 always assumed?), and how one would define a time-varying transition matrix to model this. Is that possible with this code?

Regards,
BillyPeanut
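
For reference, this is how dt enters a constant-velocity motion model; btrack's MotionModel is configured via matrices in the JSON config, and whether the matrix can vary per frame is exactly the open question above. The 6-state ordering [x, y, z, dx, dy, dz] is an assumption for this sketch:

```python
import numpy as np

def constant_velocity_A(dt: float) -> np.ndarray:
    # Transition matrix for state [x, y, z, dx, dy, dz]: position advances
    # by velocity * dt each step, velocity stays constant.
    A = np.eye(6)
    A[0, 3] = A[1, 4] = A[2, 5] = dt
    return A

print(constant_velocity_A(2.0)[0])  # [1. 0. 0. 2. 0. 0.]
```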

good library to segment cells

Hello:-)

Thank you for the wonderful software!

Do you have a suggestion for a cell segmentation library that would produce CSV data compatible with btrack?

Thank you!

Maria

NameError: name 'csvfile' is not defined

It looks like there's a typo in dataio.py

    with open(filename, 'r') as csv_file:
        csvreader = csv.DictReader(csvfile, delimiter=' ', quotechar='|')

Should "csv_file" be "csvfile"?
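
For what it's worth, either spelling works as long as the two names agree. The corrected pattern, demonstrated here with an in-memory buffer standing in for the real `open(filename, 'r')`:

```python
import csv
import io

# io.StringIO stands in for the opened file so this runs standalone.
csv_file = io.StringIO("t x y\n0 10.0 20.0\n1 11.0 21.5\n")
csvreader = csv.DictReader(csv_file, delimiter=" ", quotechar="|")
rows = list(csvreader)
print(rows[0]["x"])  # 10.0
```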

No attribute "cleanup"

Hi, I'm trying to use btrack with spyder 5.0.0. When I call

with btrack as tracker: 
  <...>
  tracker.cleanup()
  <...>

I get

AttributeError: 'BayesianTracker' object has no attribute 'cleanup'

even though Kite is displaying it in the documentation right above. Could anyone please give me any idea of what I should check? Thanks.

Possible syntax issues.

This may, however, be due to a missing setup.py.

>>> import btrack
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\g\test\lib\btrack\__init__.py", line 3, in <module>
    from core import *
  File "C:\g\test\lib\btrack\core.py", line 602
    self.__motion_model = None
                             ^
TabError: inconsistent use of tabs and spaces in indentation

Add a convenience function to convert an array of localizations to btrack objects

It would be great to add a convenience function to the utils library that converts a numpy array (or pandas DataFrame) of localizations to a list of btrack objects:

from typing import List

import numpy as np

def convert_localizations_to_objects(data: np.ndarray) -> List[PyTrackObject]:
    # code to build objects using ObjectFactory
    return objects

It could also be used by the CSV importer:
https://github.com/quantumjot/BayesianTracker/blob/dee211aaa7bbb283046a6539bd2b033adf6c31ee/btrack/dataio.py#L94
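
A rough sketch of the conversion logic, using plain dicts in place of PyTrackObject so it runs standalone; the real helper would hand each row to the ObjectFactory, and the column ordering here is an assumption for the example:

```python
import numpy as np

def localizations_to_dicts(data: np.ndarray, fields=("t", "x", "y", "z")):
    # One dict per localization row; the real version would build
    # PyTrackObject instances instead.
    return [dict(zip(fields, row)) for row in data]

data = np.array([[0.0, 10.0, 20.0, 0.0],
                 [1.0, 11.0, 21.0, 0.0]])
objs = localizations_to_dicts(data)
print(objs[0]["x"])  # 10.0
```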

Request for a more thorough description of input format.

Are all of the entries for a JSON object required? If I'm only interested in spatial tracking (not lineages or other states), do I need to include "label", "states", "probability", or "time"?

{
  "Object_203622": {
    "x": 554.29737483861709,
    "y": 1199.362071438818,
    "z": 0.0,
    "t": 862,
    "label": "interphase",
    "states": 5,
    "probability": [
      0.996992826461792,
      0.0021888131741434336,
      0.0006106126820668578,
      0.000165432647918351,
      4.232166247675195e-05
    ],
    "dummy": false
  }
}

Custom Object detector for another domain

Hello Author,

Could you please advise whether this Bayesian tracker can be applied in another domain, together with any deep-learning-based object detector?

If yes, could you please let me know the input format the object detections have to follow for running this tracking algorithm?

Thank You !

last frame missing from tracks

Hi,
I just found out about btrack a few days ago and I would really like to use it in my project. So far it works pretty well and it is a lot of fun to work with. Thank you for that!
However, I noticed an odd behavior to which I have not yet found a work around. Every time I am tracking cells, somehow the last frame is missing from all tracks. Here is an example:

Let there be one object in 7 frames (t0 - 6):

x = np.array([200, 201, 202, 203, 204, 207, 208])
y = np.array([503, 507, 499, 500, 510, 515, 518])
t = np.array([0, 1, 2, 3, 4, 5, 6])
z = np.zeros(x.shape)

objects = {'x': x,
           'y': y,
           'z': z,
           't': t}

objects = btrack.dataio.objects_from_dict(objects)

I track this object:

with btrack.BayesianTracker() as tracker:

    tracker.configure_from_file('tracker_config.json')

    tracker.max_search_radius = 100

    tracker.append(objects)
    tracker.frame_range = (0, 48)

    # set the volume (Z axis volume is set very large for 2D data)
    tracker.volume=((0, 2048), (0, 2048), (-1e5, 1e5))

    # track them (in interactive mode)
    tracker.track() #(step_size=100)

    # generate hypotheses and run the global optimizer
    tracker.optimize()

    tracks = tracker.tracks

The output is:
tracks[0]

ID t x y z parent root state generation dummy
1 0 200.0 503.0 0.0 1 1 5 0 False
1 1 201.0 507.0 0.0 1 1 5 0 False
1 2 202.0 499.0 0.0 1 1 5 0 False
1 3 203.0 500.0 0.0 1 1 5 0 False
1 4 204.0 510.0 0.0 1 1 5 0 False
1 5 207.0 515.0 0.0 1 1 5 0 False

My expectation was that the last frame (t=6) would also be in the track. This behavior appears every time I use btrack.

Is this behavior intentional and if so, what is the reason?
Is there any possibility to include also the last frame in the tracks?

Thank you!

PS: I use the example configuration file.

Reorganize repo

  • Move ObjectModel, MotionModel and HypothesisModel to new subpackage models.py
  • Move Tracklet etc. to new subpackage tracks.py
  • update docstrings for models to allow easier user configuration (see #49, #34)

Add option to save out raw intensity images to the .h5 file

Would it be useful for general users to also be able to save the raw data, not only the segmentation masks?
e.g. This would allow the user to crop the raw intensity patches belonging to individual tracks and visualize them as a montage.

It would be identical code to the segmentation property and the write_segmentation method in the HDF5FileHandler, so it should be easy to implement.

Unable to configure ObjectModel - function "model" not found

Hi. First of all, thanks for writing and maintaining this tool. Disclaimer - I'm completely new to programming so please excuse any blatant errors, inaccuracies and misconceptions I might make writing this. I'll try to describe my issue as straightforwardly as possible:

I'm trying to use btrack for particle tracking and for assessing colocalization. For that, I have written a script that simulates random walks of an arbitrary number of particles of 2 types. When particles of different types are within a certain radius of each other, they may associate and travel along a common trajectory until they dissociate stochastically. I need to run btrack on the simulated trajectories and have it recognize the stretches where particles are colocalized (assign the points belonging to those stretches a different state). From what I read, to do that I need to configure an object model.

I tried doing so simply by filling in the OrderedDict as shown in btypes.py:

  "ObjectModel":
    {
      "name": "coloc_model",
      "states": 3,
      "emission": {
        "matrix": [1,0,0,
                   0,1,0,
                   0,0,1]
      },
      "transition": {
        "matrix": [1,0,0,
                   0,1,0,
                   0,0,1]
      },
      "start": {
        "matrix": [1,0,0,
                   0,1,0,
                   0,0,1]
      }
    },

The size of the matrices seems to be correct (no idea about the values), but then I got an error in core.py:

AttributeError: function 'model' not found

I found that this function is called from libtracker.dll (please excuse me if I'm using the wrong term or misunderstanding the situation), with the OrderedDict elements passed on to it. I opened the DLL with Dependency Walker to see what's inside and found that the name "model" was not exported, although there were some references to an "ObjectModel". By analogy, the function "motion", which seems to handle the motion model, was there, and the arguments from the MotionModel OrderedDict were therefore passed to it successfully. Am I correct in assuming that the function needs to be exported from the DLL for the ObjectModel parameters to be used?

Is there a way to enable the configuring of the object model and how would that work with particles? I'd be grateful for any answers.

Add properties to Tracklets, allow these to be exported

Allow the Tracklet to store properties (e.g. cell area) over time, and export these in the '.h5' file. Update the to_napari() convenience function to format these for the napari Tracks layer.

  • store properties in objects
  • return track properties by querying object properties
  • export object properties to h5 format
  • recover object properties when loading from h5 format
  • update to_napari() function to allow property visualization
  • update README

add a `replace_nan` option to the `to_napari` convenience function

Currently, any dummy objects inserted by the tracker do not have properties. This means that tracks containing dummies will have NaN for certain properties, which causes them to render incorrectly in napari.

We should provide a flag to replace these NaN values to aid visualization; one option would be to interpolate the missing values.

https://github.com/quantumjot/BayesianTracker/blob/7d80399db64af4723c1da2381a22d9910e1ba7c8/btrack/utils.py#L294

would become:

 def tracks_to_napari(tracks: list, ndim: int = 3, replace_nan: bool = True): 
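
As a sketch of what the flag might do for a single property column (a hypothetical helper, not part of btrack; linear interpolation is the option suggested above):

```python
import numpy as np

def interpolate_nan(values: np.ndarray) -> np.ndarray:
    # Linearly interpolate across NaN entries (e.g. those left by dummy
    # objects) so napari can render the property without gaps.
    out = values.astype(float).copy()
    nan = np.isnan(out)
    if nan.any() and (~nan).any():
        idx = np.arange(out.size)
        out[nan] = np.interp(idx[nan], idx[~nan], out[~nan])
    return out

print(interpolate_nan(np.array([1.0, np.nan, 3.0])))  # [1. 2. 3.]
```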

Btrack configuration

First of all, thank you for this great resource! It has been a great help in single-cell tracking of my cells.

As the cells in my videos behave quite differently from those this software was initially designed for (as shown in the videos), I have had to adjust values in the cell_config file. To do this, I have inferred the roles of parameters (such as 'lambda_dist') by tweaking them and observing changes in the tracks in napari. However, since I would like to use this analysis in a near-future publication, it would be great if I could be sure of what I have actually done (perhaps I have tweaked values that I should not have?).

It would be great to have a brief rundown of what each parameter does as well as what would be expected if each parameter was increased/decreased.

Again, thanks for the great resource!

CUDA implementation?

Hello!

I am attempting to run btrack on a medium-sized timelapse (~20-30 cells per image across 60 or so 512x512 images), and the run time seems to be longer than I would have expected. Approximately how long should each image take to track, and how long does the optimizer take on average? I have it set to EXACT mode at the moment, but I am at the 9-hour mark of the optimizer running with no real end in sight.

I'm curious if the CUDA version of the model is currently functional, and if so, do you have any recommendations on using it?

Thanks!
Peter
