talmolab / sleap
A deep learning framework for multi-animal pose tracking.
Home Page: https://sleap.ai
License: Other
Reproduce:
Saving the labels JSON doesn't add new nodes to the "skeletons" -> "nodes" list. The IDs of previously existing nodes are updated, but new nodes are not added to this list. The top-level "nodes" key is updated with the new nodes, though.
The implementation needs to more closely match the algorithm described in Xiao et al. (2018):
Namely:
Additional enhancements:
Reproduce:
Workarounds:
Adding/deleting a node or edge does trigger the unsaved-modifications flag.
Always replace "\" with "/" when doing file path resolution. Store all paths with forward slashes, since they work on all OSes.
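A minimal sketch of the normalization (the helper name is hypothetical, not SLEAP's actual code):

```python
def normalize_path(path: str) -> str:
    """Convert backslashes to forward slashes so stored paths work on all OSes."""
    return path.replace("\\", "/")

print(normalize_path("C:\\Users\\Sama\\Desktop\\video.mp4"))  # C:/Users/Sama/Desktop/video.mp4
```

Paths already using forward slashes pass through unchanged, so this is safe to apply unconditionally on load and save.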
Add a checkbox in the Expert Controls window, below "Use trained paf model", that says something like "Single instance mode"?
Easiest: display the video index within Labels.videos in the status bar.
There's no way to remove a model you've loaded and release the GPU memory. Currently it's a bad idea to run active learning after you've used "visualize model outputs", but the GUI doesn't tell you this.
A single HDF5 file analogous to the *.json.zip training package. It must include images and all other metadata so it can be unstructured exactly equivalently to the json.zip method.
import tensorflow_probability as tfp

tf.distributions.Normal -> tfp.distributions.Normal

or:

import numpy as np
import tensorflow as tf

def normal_probs(x, loc, scale):
    log_unnormalized = -0.5 * tf.math.squared_difference(
        x / scale, loc / scale)
    log_normalization = 0.5 * np.log(2. * np.pi) + tf.math.log(scale)
    return tf.exp(log_unnormalized - log_normalization)

This should work and avoids the need for a new dependency, but we'll need to add tests and/or make sure that ndarray <-> tensor casting is all done properly.
Workflow 1 (works as expected):
Workflow 2 (does not work):
The only difference here is the order of steps 2 and 3. Is node identity checking based on ID instead of string name?
(I know equality checking probably is, but is matching from a saved model also done this way?)
Do the IDs of previously existing nodes change when the skeleton is modified? If so, how does workflow 1 work after new nodes are added?
Would be a nice feature to be able to customize sleap/config/shortcuts.yaml from the GUI.
This could also double as a table displayed from the Help -> Keyboard Reference menu item.
error:
File "C:\Users\Sama\Anaconda\envs\sleap\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "c:\code\sleap\sleap\nn\inference.py", line 355, in predict
Labels.save_hdf5(labels, filename=self.output_path)
File "c:\code\sleap\sleap\io\dataset.py", line 1076, in save_hdf5
os.unlink(filename)
PermissionError: [WinError 5] Access is denied: 'C:/Users/Sama/Desktop/abdus-saboor/grant\\models\\190923_104919.inference.h5'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\code\sleap\sleap\gui\active.py", line 344, in run
with_tracking = with_tracking)
File "c:\code\sleap\sleap\gui\active.py", line 632, in run_active_learning_pipeline
run_active_inference(labels, trained_jobs, save_dir, frames_to_predict, with_tracking)
File "c:\code\sleap\sleap\gui\active.py", line 777, in run_active_inference
result.get()
File "C:\Users\Sama\Anaconda\envs\sleap\lib\multiprocessing\pool.py", line 644, in get
raise self._value
PermissionError: [Errno 13] Access is denied: 'C:/Users/Sama/Desktop/abdus-saboor/grant\\models\\190923_104919.inference.h5'
Interface:
Functionality:
Merge Video, Track, Instance, and PredictedInstance objects based on .matches() (value-based) identity, adding to the appropriate LabeledFrames as needed or importing new ones.
.matches()-like merging for Suggestions in Labels.
For conflicting PredictedInstances/Tracks: accept the higher-scoring predictions.
matches() tests.
Handle Instances/PredictedInstances with partially overlapping skeletons (e.g., 6-node vs 32-node flies): merge the skeleton graphs and update existing instances if the skeleton changed.
Test data:
/tigress/MMURTHY/talmo/wt_gold_labeling/091319.sleap_wt_gold.30pt_init.n=19.talmo.h5
+
/tigress/MMURTHY/talmo/wt_gold_labeling/091719.sleap_wt_gold.30pt_init.n=19.junyu.h5
These have the same suggestions, videos, and predicted instances, but different user-labeled instances with some duplicates.
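The conflict rule above (keep the higher-scoring prediction when two files contain the same instance by value-based identity) could be sketched like this; the plain dicts and key fields are illustrative stand-ins for the real PredictedInstance/Track objects:

```python
def merge_predictions(existing, incoming):
    """Merge predicted instances by a value-based identity key, keeping the
    higher-scoring prediction when both files contain the same instance."""
    # Identity here is (video, frame, track); the real .matches() logic is richer.
    by_key = {(i["video"], i["frame"], i["track"]): i for i in existing}
    for inst in incoming:
        key = (inst["video"], inst["frame"], inst["track"])
        if key not in by_key or inst["score"] > by_key[key]["score"]:
            by_key[key] = inst  # new instance, or higher-scoring duplicate
    return list(by_key.values())
```

Non-conflicting instances from both files survive the merge; duplicates collapse to one entry.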
There's no way to get a list of all your negative samples or to remove any once they've been added.
Using this profile results in two exceptions:
{
"model": {
"output_type": 1,
"backbone": {
"down_blocks": 3,
"up_blocks": 3,
"convs_per_depth": 0,
"num_filters": 16,
"kernel_size": 0,
"upsampling_layers": true,
"interp": "bilinear"
},
"skeletons": null,
"backbone_name": "UNet"
},
"trainer": {
"val_size": 0.1,
"optimizer": "adam",
"learning_rate": "5e-05",
"amsgrad": true,
"batch_size": 2,
"num_epochs": 150,
"steps_per_epoch": 200,
"shuffle_initially": true,
"shuffle_every_epoch": true,
"augment_rotation": 180,
"augment_scale_min": 1.0,
"augment_scale_max": 1.0,
"save_every_epoch": false,
"save_best_val": true,
"reduce_lr_min_delta": "1e-06",
"reduce_lr_factor": 0.5,
"reduce_lr_patience": 8,
"reduce_lr_cooldown": 3,
"reduce_lr_min_lr": "1e-10",
"early_stopping_min_delta": "1e-08",
"early_stopping_patience": 30.0,
"scale": 1.0,
"sigma": 5.0,
"instance_crop": true,
"bounding_box_size": 0,
"min_crop_size": 0,
"negative_samples": 0
},
"labels_filename": null,
"run_name": "",
"save_dir": null,
"best_model_filename": null,
"newest_model_filename": null,
"final_model_filename": null
}
Selecting the file from the GUI first prints this exception:
Traceback (most recent call last):
File "d:\sleap\sleap\gui\active.py", line 105, in <lambda>
self.form_widget.valueChanged.connect(lambda: self.update_gui())
File "d:\sleap\sleap\gui\active.py", line 218, in update_gui
paf_job, _ = self._get_current_job(ModelOutputType.PART_AFFINITY_FIELD)
File "d:\sleap\sleap\gui\active.py", line 246, in _get_current_job
job_filename, job = self.job_options[model_type][idx]
IndexError: list index out of range
This doesn't prevent the profile from being selected or subsequently used for training. After going through with it, this is what happens:
Use tf.cast instead.
INFO:sleap.nn.training:Closing the reporter controller/context.
INFO:sleap.nn.training:Closing the training controller socket/context.
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Anaconda3\envs\sleap\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "d:\sleap\sleap\nn\training.py", line 356, in train
workers=multiprocessing_workers,
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\engine\training_generator.py", line 40, in fit_generator
model._make_train_function()
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 509, in _make_train_function
loss=self.total_loss)
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\optimizers.py", line 501, in get_updates
self.updates.append(K.update(vhat, vhat_t))
File "C:\Anaconda3\envs\sleap\lib\site-packages\keras\backend\tensorflow_backend.py", line 973, in update
return tf.assign(x, new_x)
File "C:\Anaconda3\envs\sleap\lib\site-packages\tensorflow\python\ops\state_ops.py", line 224, in assign
return ref.assign(value, name=name)
AttributeError: 'Tensor' object has no attribute 'assign'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "d:\sleap\sleap\gui\active.py", line 337, in run
with_tracking = with_tracking)
File "d:\sleap\sleap\gui\active.py", line 617, in run_active_learning_pipeline
trained_jobs = run_active_training(labels, training_jobs, save_dir)
File "d:\sleap\sleap\gui\active.py", line 692, in run_active_training
result.get()
File "C:\Anaconda3\envs\sleap\lib\multiprocessing\pool.py", line 644, in get
raise self._value
AttributeError: 'Tensor' object has no attribute 'assign'
Oddly, this training profile for PAFs using UNet does work and trains successfully:
{
"model": {
"output_type": 1,
"backbone": {
"down_blocks": 3,
"up_blocks": 3,
"convs_per_depth": 0,
"num_filters": 16,
"kernel_size": 5,
"upsampling_layers": true,
"interp": "bilinear"
},
"skeletons": null,
"backbone_name": "UNet"
},
"trainer": {
"val_size": 0.15,
"optimizer": "adam",
"learning_rate": "5e-05",
"amsgrad": true,
"batch_size": 2,
"num_epochs": 150,
"steps_per_epoch": 200,
"shuffle_initially": true,
"shuffle_every_epoch": true,
"augment_rotation": 180,
"augment_scale_min": 1.0,
"augment_scale_max": 1.0,
"save_every_epoch": false,
"save_best_val": true,
"reduce_lr_min_delta": "1e-06",
"reduce_lr_factor": 0.5,
"reduce_lr_patience": 8,
"reduce_lr_cooldown": 3,
"reduce_lr_min_lr": "1e-10",
"early_stopping_min_delta": "1e-08",
"early_stopping_patience": 30.0,
"scale": 1.0,
"sigma": 5.0,
"instance_crop": true,
"bounding_box_size": 0,
"min_crop_size": 0,
"negative_samples": 0
},
"labels_filename": null,
"run_name": "",
"save_dir": null,
"best_model_filename": null,
"newest_model_filename": null,
"final_model_filename": null
}
So I guess it's an issue with the kernel_size parameter. I think this defaults to 0 when using the LEAP CNN, since it's not an attribute of that model. Maybe some model-specific parameter validation would be useful?
It's unclear why this results in a GUI error, though, or how the model manages to get constructed in the first place.
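A rough sketch of what model-specific parameter validation could look like; the parameter names mirror the training profiles above, but which parameters each backbone requires is an illustrative guess, not SLEAP's actual rules:

```python
# Hypothetical per-backbone lists of parameters that must be positive.
REQUIRED_POSITIVE = {
    "UNet": ["down_blocks", "up_blocks", "num_filters", "kernel_size"],
    "LeapCNN": ["down_blocks", "up_blocks", "num_filters"],  # no kernel_size
}

def validate_backbone(backbone_name: str, backbone: dict) -> list:
    """Return a list of error strings for invalid backbone parameters."""
    errors = []
    for param in REQUIRED_POSITIVE.get(backbone_name, []):
        if backbone.get(param, 0) <= 0:
            errors.append(f"{backbone_name}: {param} must be > 0 "
                          f"(got {backbone.get(param)})")
    return errors
```

Running this on a loaded profile before training would catch kernel_size = 0 for UNet up front instead of failing deep inside Keras.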
Use case: copying filename of video and/or going to the folder manually
Steps to reproduce:
M:\talmo\wt_gold_labeling\pilot_test_0910.json
M:/junyu/data/pair/wt/190612_110405_wt_16276625_rig2.1/000000.mp4
M:/junyu/data/pair/wt/190612_110405_wt_18159111_rig2.2/000000.mp4
And this skeleton:
M:\talmo\wt_gold_labeling\skeleton_30pts.json
The mount is: M: <-> \\tigress-cifs.princeton.edu\fileset-mmurthy <-> /tigress/MMURTHY
M:\Brandon\sLEAP\models\190911_100301.confmaps.UNet.n=848.json
M:\Brandon\sLEAP\models\190911_105618.pafs.LeapCNN.n=848.json
M:\Brandon\sLEAP\models\190812_144937.centroids.UNet.n=399.json
Generate some suggestions for Predict On (or just use random frames).
Inference should start running, but first bug:
If creating the subfolder manually, then:
Traceback (most recent call last):
File "d:\sleap\sleap\gui\app.py", line 1314, in saveProject
compress = compress)
File "d:\sleap\sleap\io\dataset.py", line 803, in save_json
d = labels.to_dict()
File "d:\sleap\sleap\io\dataset.py", line 720, in to_dict
dicts['labels'] = label_cattr.unstructure(self.labeled_frames)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 139, in unstructure
return self._unstructure_func.dispatch(obj.__class__)(obj)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in _unstructure_seq
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in <genexpr>
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 204, in unstructure_attrs_asdict
rv[name] = dispatch(v.__class__)(v)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in _unstructure_seq
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in <genexpr>
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "d:\sleap\sleap\instance.py", line 696, in unstructure_instance
for field in attr.fields(x.__class__)
File "d:\sleap\sleap\instance.py", line 697, in <dictcomp>
if field.name not in ['_points', 'frame']}
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 139, in unstructure
return self._unstructure_func.dispatch(obj.__class__)(obj)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in _unstructure_seq
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "C:\Anaconda3\envs\sleap\lib\site-packages\cattr\converters.py", line 225, in <genexpr>
return seq.__class__(dispatch(e.__class__)(e) for e in seq)
File "d:\sleap\sleap\io\dataset.py", line 700, in <lambda>
label_cattr.register_unstructure_hook(Node, lambda x: str(self.nodes.index(x)))
ValueError: Node(name='head', weight=1.0) is not in list
If you visualize model outputs for one video and then change the video you're looking at, the models are still run on frames from the original video.
On multiscale models that output tiny confmaps, if we blur the confmaps we don't find peaks. For now I've disabled blur when upsample_factor is set. If we're upsampling, it would probably be better to apply the blur after the cubic resize.
Code: sleap/nn/peakfinding_tf.py
Test case:
python sleap/nn/inference.py /tigress/MMURTHY/Brandon/FoxPdata/video/CS-males/190701_102018_18159211_20190701_102018/000000.mp4 -m /tigress/MMURTHY/Brandon/sLEAP/models/190812_144937.centroids.UNet.n=399.json -m /tigress/MMURTHY/Brandon/sLEAP/models/190812_135728.confmaps.UNet.n=794.json -m /tigress/MMURTHY/Brandon/sLEAP/models/190812_141241.pafs.LeapCNN.n=794.json --frame 123
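A rough sketch of the proposed ordering (upsample first, then blur, then take the peak), using nearest-neighbor upsampling and a box blur as cheap stand-ins for the cubic resize and Gaussian blur in peakfinding_tf.py:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with 'same' output size (stand-in for Gaussian)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def find_peak(confmap, upsample_factor=4):
    """Upsample, then blur, then argmax; returns peak in original coordinates."""
    # Nearest-neighbor upsampling via np.kron as a stand-in for cubic resize.
    up = np.kron(confmap, np.ones((upsample_factor, upsample_factor)))
    blurred = box_blur(up)
    peak = np.unravel_index(np.argmax(blurred), blurred.shape)
    return tuple(p / upsample_factor for p in peak)
```

Because the blur happens on the upsampled map, a sharp peak in a tiny confmap isn't smeared below the detection threshold before the resize.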
Alert the user in the GUI when a save fails. (This was happening when a user didn't have write permissions on a file that I had generated on the cluster.)
Currently the GUI uses track.spawned_on, which doesn't get updated (e.g., when the user removes instances from a given track). We should instead use Labels.get_track_occupany(video), since this is kept updated.
Explicit options when initializing (from scratch or when creating from predicted instance):
Reference: Developer Guide
Tasks:
black
https://github.com/murthylab/sleap/blob/ff6119056118bbf7d42c0e6d0d5714a3eb6dcef9/sleap/nn/datagen.py#L103-L105
https://github.com/murthylab/sleap/blob/0f5f6f6cbb2f0d57de8f998628ef46776db78324/sleap/gui/overlays/base.py#L31
https://github.com/murthylab/sleap/blob/ff6119056118bbf7d42c0e6d0d5714a3eb6dcef9/sleap/nn/inference.py#L242-L246
The divisibility factor should be 2 ** down_blocks.
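The padding implied by that factor could be computed like this (hypothetical helper, not the linked implementation):

```python
import math

def pad_to_divisible(size: int, down_blocks: int) -> int:
    """Smallest size >= `size` divisible by 2 ** down_blocks, so each
    pooling/downsampling step halves the feature map cleanly."""
    factor = 2 ** down_blocks
    return math.ceil(size / factor) * factor

# e.g., a 100 px dimension with 3 down blocks must be padded to 104.
```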
If the track names are changed so that the alphabetical order changes, then the track assignments get messed up during save/open. Maybe we're using the index from before the alphabetical re-order to access the track list after the re-order? It appears this bug only occurs when saving/loading labels in h5 format.
I0912 01:07:02.182041 47450918620800 inference.py:624] Inferred confmaps and found-peaks (gpu) [19.1s]
^ this line's runtime starts to increase, very noticeably for longer inference jobs
Relevant block:
https://github.com/murthylab/sleap/blob/ff6119056118bbf7d42c0e6d0d5714a3eb6dcef9/sleap/nn/inference.py#L607-L634
After turning a predicted instance into a regular instance, the instance selected in the video was different than the instance selected in the table.
Add suggestion sorting for suggested frames with predictions. This can just be the mean score of all predicted instances.
In the GUI: an extra, sortable column in the suggestions table.
This also enables a random --> predict --> sort by worst score type of workflow.
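The mean-score sort could be as simple as this sketch (the frame index -> score-list mapping is an illustrative stand-in for the suggestions data structure):

```python
def sort_suggestions_by_score(frame_predictions):
    """Sort suggested frames worst-first by mean predicted-instance score.

    frame_predictions: dict mapping frame index -> list of instance scores.
    Frames with no predictions sort first (mean treated as 0.0).
    """
    def mean_score(scores):
        return sum(scores) / len(scores) if scores else 0.0

    return sorted(frame_predictions, key=lambda f: mean_score(frame_predictions[f]))
```

Worst-first ordering is what makes the random -> predict -> fix-the-worst workflow convenient.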
When in "proofreading" mode (predictions showing in track colors?), show the list of tracks with numbers corresponding to the keyboard shortcut for setting the track. Maybe hover this over the video when control/command is held down?
sLEAP -> SLEAP
LEAP -> SLEAP
log:
https://ci.appveyor.com/project/talmo/sleap/builds/27087209
I am able to manually build and upload the conda package on my local machine.
Instance.points() and points_array() should be attribute getters or renamed with a verb ("get_*").
Since points_array takes an argument, rename it to the verb form and maybe add a convenience property.
points() should just be a property.
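A sketch of the proposed naming convention (this is a hypothetical toy class, not SLEAP's actual Instance; the `copy` argument is illustrative):

```python
import numpy as np

class Instance:
    """Toy sketch: no-arg accessors become properties, arg-taking ones get verbs."""

    def __init__(self, points):
        self._points = list(points)

    @property
    def points(self):
        # No arguments, so it reads naturally as an attribute.
        return tuple(self._points)

    def get_points_array(self, copy=True):
        # Verb form since it takes an argument.
        arr = np.asarray(self._points, dtype=float)
        return arr.copy() if copy else arr
```

Callers then write `inst.points` instead of `inst.points()`, and the verb name flags that `get_points_array` does conversion work.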
write_tracking_h5 currently includes all tracks, regardless of whether they have any instances.
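The fix could be a small occupancy filter before writing; tracks and instances are plain stand-ins here, not the real objects:

```python
def occupied_tracks(tracks, instances):
    """Keep only tracks referenced by at least one instance.

    `tracks` is a list of track names; each instance is a dict with a
    "track" key (hypothetical stand-ins for the real Track/Instance types).
    """
    used = {inst["track"] for inst in instances if inst.get("track") is not None}
    return [t for t in tracks if t in used]
```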
Enhancement:
In Expert controls menu
Similar to the expert controls menus that allow you to choose a model.
At a minimum, these should be available at the top-level:
import sleap as slp
slp.Labels
slp.Video # also subclasses? or maybe just through class/static methods?
slp.LabeledFrame
slp.Instance
slp.PredictedInstance
slp.Skeleton
Also inference, though Predictor might need some refactoring to be practical.
Use cases: the Video class and Labels for programmatic data import; running Predictor in a notebook.
For the current video and in total for the current Labels dataset.
These are intended to be displayed above the trackbar and used to guide manual proofreading.