
ncem's Introduction

ncem

[Badges: PyPI · Python Version · License · documentation at https://ncem.readthedocs.io/ · Build · Package Status · pre-commit · Black]

[Figure: ncem concept]

Features

ncem is a model repository in a single Python package for the manuscript Fischer, D. S., Schaar, A. C. and Theis, F. Learning cell communication from spatial graphs of cells. 2021. (preprint)

Installation

You can install ncem via pip from PyPI:

$ pip install ncem

Credits

This package was created with cookietemple using Cookiecutter based on Hypermodern_Python_Cookiecutter.

ncem's People

Contributors

annachristina, davidsebfischer, dependabot[bot], pre-commit-ci[bot], zethson



ncem's Issues

del celldata.uns["spatial"] - error (ncem/data.py:1877)

Hi,

I would really like to try this method on my Xenium spatial transcriptomics data that I've pre-processed with squidpy; however, I'm running into some issues.

Here is my Anndata object:
AnnData object with n_obs × n_vars = 53466 × 339
obs: 'cell_id', 'x_centroid', 'y_centroid', 'transcript_counts', 'control_probe_counts', 'control_codeword_counts', 'unassigned_codeword_counts', 'deprecated_codeword_counts', 'total_counts', 'cell_area', 'nucleus_area', 'n_genes_by_counts', 'log1p_n_genes_by_counts', 'log1p_total_counts', 'pct_counts_in_top_10_genes', 'pct_counts_in_top_20_genes', 'pct_counts_in_top_50_genes', 'n_counts', 'leiden_1.0', 'new_clusters', 'Cell_Type', 'Cluster', 'Condition', 'Sample_ID'
var: 'gene_ids', 'feature_types', 'genome', 'n_cells_by_counts', 'mean_counts', 'log1p_mean_counts', 'pct_dropout_by_counts', 'total_counts', 'log1p_total_counts', 'n_cells'
uns: 'dendrogram_leiden_1.0', 'leiden', 'leiden_1.0_colors', 'log1p', 'neighbors', 'new_clusters_colors', 'pca', 'rank_genes_groups', 'umap'
obsm: 'X_pca', 'X_umap', 'spatial'
varm: 'PCs'
layers: 'raw_count'
obsp: 'connectivities', 'distances'

This is the code I'm running:

ncem.data = customLoader(
    adata=adata, cluster='new_cluster', patient='Sample_ID', library_id='None', radius=52
)
get_data_custom(interpreter=ncem)

Here is the error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[7], line 1
----> 1 ncem.data = customLoader(
      2     adata=adata, cluster='new_cluster', patient='Sample_ID', library_id='None', radius=52
      3 )
      4 get_data_custom(interpreter=ncem)

File ~/opt/anaconda3/envs/ncem/lib/python3.8/site-packages/ncem/data.py:1852, in customLoader.__init__(self, adata, cluster, patient, library_id, radius, coord_type, n_rings, n_top_genes, label_selection)
   1849 self.library_id = library_id
   1851 print("Loading data from raw files")
-> 1852 self.register_celldata(n_top_genes=n_top_genes)
   1853 self.register_img_celldata()
   1854 self.register_graph_features(label_selection=label_selection)

File ~/opt/anaconda3/envs/ncem/lib/python3.8/site-packages/ncem/data.py:1772, in DataLoader.register_celldata(self, n_top_genes)
   1770 """Load AnnData object of complete dataset."""
   1771 print("registering celldata")
-> 1772 self._register_celldata(n_top_genes=n_top_genes)
   1773 assert self.celldata is not None, "celldata was not loaded"

File ~/opt/anaconda3/envs/ncem/lib/python3.8/site-packages/ncem/data.py:1877, in customLoader._register_celldata(self, n_top_genes)
   1875 celldata.X = celldata.X.toarray()
   1876 celldata.uns["metadata"] = metadata
-> 1877 del celldata.uns["spatial"]
   1879 # register node type names
   1880 node_type_names = list(np.unique(celldata.obs[self.cluster]))

KeyError: 'spatial'

Any suggestions on why I'm getting this error?

System:

  • OS: macOS 14.2.1
  • Language Version: Python 3.9
  • Virtual environment: Conda
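A hedged workaround sketch (not an official fix): the traceback shows customLoader unconditionally running del celldata.uns["spatial"], so it assumes a squidpy-style uns["spatial"] entry that this Xenium AnnData lacks. Adding a placeholder before loading should get past the KeyError, since the entry is deleted immediately anyway. Note also that obs contains 'new_clusters' while the call passes cluster='new_cluster', and that library_id='None' passes the string "None" rather than None:

adata.uns["spatial"] = {"library_1": {}}  # placeholder key; exact structure is an assumption

ncem.data = customLoader(
    adata=adata, cluster="new_clusters", patient="Sample_ID",
    library_id=None, radius=52,  # pass None (or a real obs column), not the string "None"
)
get_data_custom(interpreter=ncem)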

Getting errors when running the MELC tonsils Python notebooks

Hi, I am running your examples from the two notebooks data_exploration_melc_tonsils.ipynb and model_benchmarks_melc_tonsils.ipynb.
In the first one I get an error at the step "Supp. Fig. 7: Ligand–receptor permutation test":

TypeError: __init__() got an unexpected keyword argument 'allowed_methods'

And from the other notebook, at the step "Figure 1 (d): Modeling cell communication as spatial cell state dependencies":

FileNotFoundError: [Errno 2] No such file or directory: '.210419_INTERACTIONS_BASELINE_NONE_NODES_PATIENT_1_pascualreguanttonsil/results/'

Can you please let me know what I am missing here? Thanks.

Allow cylindrical shell as neighbourhood

Reduce the model's dependence on segmentation errors and background RNA by fitting models only to the neighbourhood within a radius x of a target cell.

A cylindrical shell would define the neighbourhood as min_radius < x < max_radius; see the sketch below.
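A minimal sketch of the proposed shell neighbourhood (an illustration, not ncem API; the function name and radii are hypothetical):

import numpy as np
from scipy.spatial import cKDTree

def shell_neighbours(coords, min_radius, max_radius):
    """Indices of neighbours j of each cell i with min_radius < d(i, j) <= max_radius."""
    tree = cKDTree(coords)
    inner = tree.query_ball_point(coords, r=min_radius)  # cells at d <= min_radius (incl. self)
    outer = tree.query_ball_point(coords, r=max_radius)  # cells at d <= max_radius
    return [np.setdiff1d(o, i) for o, i in zip(outer, inner)]

# e.g.: neighbours = shell_neighbours(adata.obsm["spatial"], min_radius=10.0, max_radius=50.0)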

InternalError: Graph execution error

Question

Hello,
I tried to apply ncem to DBiT-seq data. I followed the tutorial and prepared the expected input data successfully:

AnnData object with n_obs × n_vars = 20000 × 5000
    obs: 'index', 'target_cell'
    var: 'highly_variable', 'means', 'dispersions', 'dispersions_norm'
    uns: 'node_type_names', 'log1p', 'hvg'
    obsm: 'proportions', 'node_types', 'spatial'

I continued with the tutorial, but I cannot finish the last step, trainer.estimator.train(epochs=5).
It raises an error: InternalError: Graph execution error:.

I haven't found a feasible solution and am looking forward to one. Thanks a lot.

Whole error information:

InternalError Traceback (most recent call last)
Cell In[50], line 1
----> 1 trainer.estimator.train(epochs=5)

File ~/miniconda3/envs/ncem/lib/python3.8/site-packages/ncem/estimators/base_estimator.py:1262, in Estimator.train(self, epochs, epochs_warmup, max_steps_per_epoch, batch_size, validation_batch_size, max_validation_steps, shuffle_buffer_size, patience, lr_schedule_min_lr, lr_schedule_factor, lr_schedule_patience, initial_epoch, monitor_partition, monitor_metric, log_dir, callbacks, early_stopping, reduce_lr_plateau, pretrain_decoder, decoder_epochs, decoder_patience, decoder_callbacks, aggressive, aggressive_enc_patience, aggressive_epochs, seed, **kwargs)
1245 self.train_normal(
1246 epochs=epochs_warmup,
1247 patience=patience,
(...)
1258 **kwargs,
1259 )
1260 initial_epoch += epochs_warmup
-> 1262 self.train_normal(
1263 epochs=epochs,
1264 patience=patience,
1265 lr_schedule_min_lr=lr_schedule_min_lr,
1266 lr_schedule_factor=lr_schedule_factor,
1267 lr_schedule_patience=lr_schedule_patience,
1268 initial_epoch=initial_epoch,
1269 monitor_partition=monitor_partition,
1270 monitor_metric=monitor_metric,
1271 log_dir=log_dir,
1272 callbacks=callbacks,
1273 early_stopping=early_stopping,
1274 reduce_lr_plateau=reduce_lr_plateau,
1275 **kwargs,
1276 )

File ~/miniconda3/envs/ncem/lib/python3.8/site-packages/ncem/estimators/base_estimator.py:1366, in Estimator.train_normal(self, epochs, patience, lr_schedule_min_lr, lr_schedule_factor, lr_schedule_patience, initial_epoch, monitor_partition, monitor_metric, log_dir, callbacks, early_stopping, reduce_lr_plateau, **kwargs)
1362 if callbacks is not None:
1363 # callbacks needs to be a list
1364 cbs += callbacks
-> 1366 history = self.model.training_model.fit(
1367 x=self.train_dataset,
1368 epochs=epochs,
1369 initial_epoch=initial_epoch,
1370 steps_per_epoch=self.steps_per_epoch,
1371 callbacks=cbs,
1372 validation_data=self.eval_dataset,
1373 validation_steps=self.validation_steps,
1374 verbose=2,
1375 **kwargs,
1376 ).history
1377 for k, v in history.items(): # append to history if train() has been called before.
1378 if k in self.history.keys():

File ~/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb

File ~/miniconda3/envs/ncem/lib/python3.8/site-packages/tensorflow/python/eager/execute.py:52, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
50 try:
51 ctx.ensure_initialized()
---> 52 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
53 inputs, attrs, num_outputs)
54 except core._NotOkStatusException as e:
55 if name is not None:

InternalError: Graph execution error:

Detected at node 'StatefulPartitionedCall' defined at (most recent call last):
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel_launcher.py", line 17, in
app.launch_new_instance()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/traitlets/config/application.py", line 1043, in launch_instance
app.start()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/kernelapp.py", line 725, in start
self.io_loop.start()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/tornado/platform/asyncio.py", line 215, in start
self.asyncio_loop.run_forever()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
self._run_once()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/asyncio/base_events.py", line 1859, in _run_once
handle._run()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/asyncio/events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 513, in dispatch_queue
await self.process_one()
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 502, in process_one
await dispatch(*args)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 409, in dispatch_shell
await result
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/kernelbase.py", line 729, in execute_request
reply_content = await reply_content
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/ipkernel.py", line 422, in do_execute
res = shell.run_cell(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ipykernel/zmqshell.py", line 540, in run_cell
return super().run_cell(*args, **kwargs)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 2961, in run_cell
result = self._run_cell(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3016, in _run_cell
result = runner(coro)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/async_helpers.py", line 129, in pseudo_sync_runner
coro.send(None)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3221, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3400, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3460, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_3662462/1675290643.py", line 1, in
trainer.estimator.train(epochs=5)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ncem/estimators/base_estimator.py", line 1262, in train
self.train_normal(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/ncem/estimators/base_estimator.py", line 1366, in train_normal
history = self.model.training_model.fit(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/engine/training.py", line 1027, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
self.apply_gradients(grads_and_vars)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
return super().apply_gradients(grads_and_vars, name=name)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 634, in apply_gradients
iteration = self._internal_apply_gradients(grads_and_vars)
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1166, in _internal_apply_gradients
return tf.__internal__.distribute.interim.maybe_merge_call(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1216, in _distributed_apply_gradients_fn
distribution.extended.update(
File "/home/duan/miniconda3/envs/ncem/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1211, in apply_grad_to_update_var
return self._update_step_xla(grad, var, id(self._var_key(var)))
Node: 'StatefulPartitionedCall'
libdevice not found at ./libdevice.10.bc
[[{{node StatefulPartitionedCall}}]] [Op:__inference_train_function_52682]
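The last line, libdevice not found at ./libdevice.10.bc, is a known TensorFlow/XLA problem rather than an ncem bug: the XLA JIT compiler cannot locate CUDA's libdevice bitcode. A hedged workaround (the CUDA path is an assumption; adjust it to the local install) is to point XLA at the toolkit root before any training runs:

import os

# Assumed CUDA toolkit location; must be set before TensorFlow compiles an XLA kernel.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/usr/local/cuda"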

How to run NCEM on CODEX CSV or FCS output data?

Question

Hi,
I want to use NCEM for CODEX data, but I could not find any tutorial on how to prepare the data for this type of analysis.

Could you please direct me to any tutorial or notebook that you could share with us?

Error while loading custom data in NCEM

Hello

I am trying to run NCEM after deconvolution with the cell2location method. I followed the entire workflow given in https://github.com/theislab/ncem_benchmarks/blob/main/notebooks/data_preparation/deconvolution/cell2location_human_lymphnode.ipynb and saved the dataset as "cell2location_test.h5ad" based on the information in the "Collect ncem anndata object" section. Next, I tried to read the h5ad data into NCEM for further analysis with the custom loader function.

adata_vis # stores information of my cell2location_test.h5ad file

ncem = InterpreterInteraction()

ncem.data = customLoader(
    adata=adata_vis, cluster='cell_type', patient='A1', library_id='sam_A1', radius=52
)

But it gives me the error: AttributeError: 'numpy.ndarray' object has no attribute 'toarray'.

If I am doing something wrong, please correct me.

Thanks in advance
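A hedged workaround sketch: the data.py traceback in an earlier issue shows the loader calling celldata.X.toarray(), i.e. it expects a sparse expression matrix, while cell2location outputs are often dense ndarrays. Converting X to CSR before constructing the loader should avoid the AttributeError:

import scipy.sparse as sp

adata_vis.X = sp.csr_matrix(adata_vis.X)  # make X sparse so .toarray() inside the loader succeeds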

getting only zeros at enrichment step

Describe the bug
Hello,
I am working with 10X Visium spatial transcriptomics data.
Using cell2location, I estimated the cell-type-specific expression of every gene in the spatial data and did the clustering. Now I am running NCEM for cluster-specific interactions. At the enrichment step, I am getting zero values for every cluster. Here is the result for cluster 1:

adata_img, adata_vis, log_pval, fold_change = ncem.data.compute_cluster_enrichment(
    image_key=['Sample_A1'],
    target_cell_type='1',
    clip_pvalues=-5,
    n_neighbors=100,
    n_pcs=None,
)

I am getting the values below:

Output:
1 substates    1 0    1 1    1 2
new_index
0              0.0    0.0    0.0
1              0.0    0.0    0.0
10             0.0    0.0    0.0
11             0.0    0.0    0.0
2              0.0    0.0    0.0
3              0.0    0.0    0.0
4              0.0    0.0    0.0
5              0.0    0.0    0.0
6              0.0    0.0    0.0
7              0.0    0.0    0.0
8              0.0    0.0    0.0
9              0.0    0.0    0.0

Please let me know if I am doing something wrong.


Naive Question

Hello ncem team,

Thanks for developing NCEM.
I'm trying out NCEM for downstream analysis (the receiver/sender analysis) of 10X Visium with cell2location, following this tutorial:
https://github.com/theislab/ncem_tutorials/blob/main/tutorials/type_coupling_visium.ipynb

e.g.
adata

AnnData object with n_obs × n_vars = 79849 × 2000
    obs: 'index', 'target_cell'
    var: 'highly_variable', 'means', 'dispersions', 'dispersions_norm'
    uns: 'hvg', 'log1p', 'node_type_names'
    obsm: 'node_types', 'proportions', 'spatial'

get_data_custom(interpreter=ncem_ip, deconvolution=True)

Mean of mean node degree per images across images: 6.000000
Using split method: node. 
 Train-test-validation split is based on total number of nodes per patients over all images.

Excluded 0 cells with the following unannotated cell type: [None] 

Whole dataset: 79849 cells out of 2 images from 1 patients.
Test dataset: 7985 cells out of 2 images from 1 patients.
Training dataset: 65008 cells out of 2 images from 1 patients.
Validation dataset: 7187 cells out of 2 images from 1 patients.

1/ For now I can plot the figures of the type coupling analysis (e.g. Fig. 2 b, c, d, f in your paper).
What about the "Training the ncem model for deconvoluted Visium" part, what is its output?
Is that part independent?

2/ When I run the code below:
trainer.estimator.train(epochs=5)
I get the error messages:

Node: 'StatefulPartitionedCall'
RET_CHECK failure (tensorflow/compiler/xla/service/gpu/gpu_compiler.cc:618) dnn != nullptr 
	 [[{{node StatefulPartitionedCall}}]] [Op:__inference_train_function_1382]

I have checked, and GPU memory is mostly full: 23178MiB / 24564MiB.

Fri Mar 24 10:42:34 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01    Driver Version: 525.78.01    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A5000    Off  | 00000000:3B:00.0 Off |                  Off |
| 30%   38C    P2    59W / 230W |  23178MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Do you have any idea how much GPU memory is required to run NCEM with n_obs × n_vars = 79849 × 2000?

I have also followed the tutorial from https://github.com/theislab/spatial_scog_workshop_2022/blob/main/ncem/tutorial_ncem.ipynb

3/ The code below, for the "Cell heterogeneity attributed to niche composition" part, doesn't work for me:

adata_img, adata, log_pval, fold_change = ncem.data.compute_cluster_enrichment(
    image_key=['point16', 'point23', 'point8'],
    target_cell_type='Tcell_CD8',
    clip_pvalues=-5,
    n_neighbors=22,
    n_pcs=None,
)
    828 adata_list = list(self.img_celldata.values())
    829 adata = adata_list[0].concatenate(adata_list[1:], uns_merge="same")
--> 831 cluster_col = self.celldata.uns["metadata"]["cluster_col_preprocessed"]
    832 image_col = self.celldata.uns["metadata"]["image_col"]
    833 if undefined_type:

KeyError: 'cluster_col_preprocessed'

How do I build the cluster_col_preprocessed metadata? Is this part also for 10x Visium?

4/ In general, I may have misunderstood your tutorial. Could you tell me step by step how to analyze a 10x Visium dataset from an AnnData with metadata (obsm: 'node_types', 'proportions', 'spatial') using NCEM, especially the downstream analysis from "Training the ncem model for deconvoluted Visium"?

Best,
Chuang

Update required: extend_formula_ncem

Hi Anna,

Currently, the extend_formula_ncem function (referenced below) computes a list of coef_couplings:

coef_couplings = [f"{PREFIX_INDEX}{x}:{PREFIX_NEIGHBOR}{y}" for y in cell_types for x in cell_types]

For my cell types, this list looks like:

['index_AcinarCells:neighbor_AcinarCells', 'index_AcinarCells:neighbor_B', 'index_AcinarCells:neighbor_Basal', ...]

However, get_dmats_from_deconvoluted generates a different list of coefficients. When I inspect dmats[x].design_info.column_names, I instead see:

['index_AcinarCells[False]:neighbor_AcinarCells', 'index_AcinarCells[True]:neighbor_AcinarCells', 'index_AcinarCells[False]:neighbor_B', 'index_AcinarCells[True]:neighbor_B', ...]

Note the addition of [False] and [True]. Because these two lists differ, problems arise when we run test_deconvoluted.

I think this issue can be fixed by updating the extend_formula_ncem function. I'll take a look at it and let you know when I have a solution.

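A minimal reproduction of the naming mismatch (a sketch with hypothetical toy columns, not ncem code): patsy treats boolean index_* columns as categorical inside interactions, so each coupling term is expanded into [False] and [True] levels:

import pandas as pd
import patsy

df = pd.DataFrame({
    "index_A": [True, False, True],  # boolean column, treated as categorical by patsy
    "neighbor_A": [0.1, 0.5, 0.2],   # numeric neighbour abundance
})
dmat = patsy.dmatrix("index_A:neighbor_A - 1", df)
print(dmat.design_info.column_names)
# ['index_A[False]:neighbor_A', 'index_A[True]:neighbor_A']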

Question about r2 values

Question

Hi @AnnaChristina. In the evaluate_per_node_type function, are the $r^2$ values calculated as averages over all gene expression predictions for a given cell type?

Input to the "interaction linear model"

Thank you for making ncem's code public. My question is about the input to the linear model with interactions. In the paper, the input x_l is cell × unique cell type and real-valued. But in the tutorial data (MERFISH brain) I found that x_l is one-hot encoded, which is not real-valued. I am wondering why x_l is real-valued in the paper but not in the tutorial.

Error saving the output figures in png or pdf format

Empty figures
Hi,

I am running the NCEM code from scratch for the coupling and cell enrichment analyses. I want to save all the plot output in PNG (or PDF) format, but except for the sc.pl.umap output, I get empty PNG files for the remaining analyses. I have tried different ways to save the plots, but it is still not working.

import ncem as nc
import numpy as np
import matplotlib.pyplot as plt
import scanpy as sc
import squidpy as sq

sc.settings.set_figure_params(dpi=100)

import warnings
warnings.filterwarnings("ignore")

from ncem.interpretation import InterpreterInteraction
from ncem.data import get_data_custom, customLoader

results_folder = 'test'
run_name = f'{results_folder}/anndata/'
import scanpy as sc

import anndata as an

adata_file = f"{run_name}/sp.h5ad" # sp_cluster
ad = an.read_h5ad(adata_file)
ncem = InterpreterInteraction()

adata_vis = ad

ncem.data = customLoader(
    adata=adata_vis, cluster='region_cluster', patient='sampleid', library_id='sample', radius=150
)

get_data_custom(interpreter=ncem)

var_decomp = ncem.data.compute_variance_decomposition()

ncem.data.variance_decomposition(var_decomp, figsize=(5,3))

plt.savefig('test.png')

ncem.data = customLoader(
    adata=adata_vis, cluster='region_cluster', patient='sample', library_id='sample', radius=52
)
get_data_custom(interpreter=ncem)

adata_img, adata_vis, log_pval, fold_change = ncem.data.compute_cluster_enrichment(
    image_key=['S1_A1_Complete'],
    target_cell_type='2',
    clip_pvalues=-5,
    n_neighbors=50,
    n_pcs=None,
)

sc.pl.umap(adata_vis, color='2 substates', palette='tab10')

plt.savefig("subclust_test.png")
plt.show()

I would appreciate your help
Thanks
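A hedged workaround sketch for the empty files: if a plotting helper displays (and possibly closes) its figure internally, a plt.savefig() issued afterwards writes a fresh, blank canvas. Grabbing the current figure before anything is shown may help; whether variance_decomposition also accepts a save/suffix argument like the type-coupling plots in a later issue do is an assumption worth checking:

import matplotlib.pyplot as plt

plt.ioff()  # suppress immediate display so the figure stays current
ncem.data.variance_decomposition(var_decomp, figsize=(5, 3))
plt.gcf().savefig("test.png", dpi=300, bbox_inches="tight")  # save before the figure is discarded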

cannot get the grid search results

Hi,

I ran this block from the tutorial_mibitof notebook

gs = ncem.train.GridSearchContainer(
    '/home/mohammed/NCEM/Anna Schaar/grid_searches_gen/',
    gs_ids=[
        "210419_INTERACTIONS_BASELINE_NONE_NODES_IMAGE_1_HARTMANN",
        "210419_INTERACTIONS_MAX_NODES_IMAGE_1_HARTMANN",
    ],
    lateral_resolution=400/1024,
)
gs.load_gs()

gs.plot_best_model_by_hyperparam(
    graph_model_class='interactions',
    baseline_model_class='interactions_baseline',
    rename_levels=[
        ("model", {
            "INTERACTIONS_BASELINE_NONE_NODES_IMAGE_1": "baseline",
            "INTERACTIONS_MAX_NODES_IMAGE_1": "NCEM",
        })
    ],
    xticks=[0, 10, 50, 200, 1000],
)

and got this error


FileNotFoundError Traceback (most recent call last)
Cell In[12], line 9
1 gs = ncem.train.GridSearchContainer(
2 '/home/mohammed/NCEM/Anna Schaar/grid_searches_gen/',
3 gs_ids=[
(...)
7 lateral_resolution = 400/1024
8 )
----> 9 gs.load_gs()
11 gs.plot_best_model_by_hyperparam(
12 graph_model_class='interactions',
13 baseline_model_class='interactions_baseline',
(...)
20 xticks=[0, 10, 50, 200, 1000],
21 )

File ~/.local/lib/python3.8/site-packages/ncem/train/summaries.py:114, in GridSearchContainer.load_gs(self, expected_pickle, add_posterior_sampling_model, report_unsuccessful_runs)
106 indir = self.source_path + gs_id + "/results/"
107 # runs_ids are the unique hyper-parameter settings, which are again subsetted by cross-validation.
108 # These ids are present in all files names but are only collected from the *model.tf file names here.
110 run_ids = np.sort(
111     np.unique(
112         [
113             "_".join(".".join(x.split(".")[:-1]).split("_")[:-1])
--> 114         for x in os.listdir(indir)
115             if x.split("_")[-1].split(".")[0] == "time"
116         ]
117     )
118 )
119 cv_ids = np.sort(np.unique([x.split("_")[-1] for x in run_ids]))  # identifiers of cross-validation splits
120 run_ids = np.sort(
121     np.unique(["_".join(x.split("_")[:-1]) for x in run_ids])  # identifiers of hyper-parameters settings
122 )

FileNotFoundError: [Errno 2] No such file or directory: '/home/med/NCEM/Anna Schaar/grid_searches_gen/210419_INTERACTIONS_BASELINE_NONE_NODES_IMAGE_1_HARTMANN/results/'

Could you help?
Thanks

Supporting jinja2>=3.0.3

I have a problem with installing ncem in my virtual environment.

My environment is pre-installed with jupyterlab-server 2.11.2 requiring jinja2>=3.0.3. When I tried to install ncem, I received an error message stating that it requires jinja2<3.0.0,>=2.11.3.

Could you please update ncem to support jinja2>=3.0.3?

LinAlgError: SVD did not converge

Describe the bug

Hello Author,

I am following your tutorial

https://github.com/theislab/ncem_tutorials/blob/main/tutorials/type_coupling_visium.ipynb

In total I have 3 10x Visium slides with the same preprocessing. For 2 of the samples I got the error message below:

---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)
Cell In[26], line 1
----> 1 ncem_ip.get_sender_receiver_effects()

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/interpretation/interpreter.py:1957, in InterpreterDeconvolution.get_sender_receiver_effects(self, params_type, significance_threshold)
   1955 # get inverse fisher information matrix
   1956 print('calculating inv fim.')
-> 1957 fim_inv = get_fim_inv(x_design, y)
   1959 is_sign, pvalues, qvalues = wald_test(
   1960     params=params, fisher_inv=fim_inv, significance_threshold=significance_threshold
   1961 )
   1962 interaction_shape = np.int(self.n_features_0**2)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/utils/wald_test.py:10, in get_fim_inv(x, y)
      6 fim = np.expand_dims(np.matmul(x.T, x), axis=0) / np.expand_dims(var, axis=[1, 2])
      8 fim = np.nan_to_num(fim)
---> 10 fim_inv = np.array([
     11     np.linalg.pinv(fim[i, :, :])
     12     for i in range(fim.shape[0])
     13 ])
     14 return fim_inv

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/ncem/utils/wald_test.py:11, in <listcomp>(.0)
      6 fim = np.expand_dims(np.matmul(x.T, x), axis=0) / np.expand_dims(var, axis=[1, 2])
      8 fim = np.nan_to_num(fim)
     10 fim_inv = np.array([
---> 11     np.linalg.pinv(fim[i, :, :])
     12     for i in range(fim.shape[0])
     13 ])
     14 return fim_inv

File <__array_function__ internals>:180, in pinv(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1990, in pinv(a, rcond, hermitian)
   1988     return wrap(res)
   1989 a = a.conjugate()
-> 1990 u, s, vt = svd(a, full_matrices=False, hermitian=hermitian)
   1992 # discard small singular values
   1993 cutoff = rcond[..., newaxis] * amax(s, axis=-1, keepdims=True)

File <__array_function__ internals>:180, in svd(*args, **kwargs)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:1648, in svd(a, full_matrices, compute_uv, hermitian)
   1645         gufunc = _umath_linalg.svd_n_s
   1647 signature = 'D->DdD' if isComplexType(t) else 'd->ddd'
-> 1648 u, s, vh = gufunc(a, signature=signature, extobj=extobj)
   1649 u = u.astype(result_t, copy=False)
   1650 s = s.astype(_realType(result_t), copy=False)

File ~/miniconda3/envs/tf-gpu-cuda10/lib/python3.8/site-packages/numpy/linalg/linalg.py:97, in _raise_linalgerror_svd_nonconvergence(err, flag)
     96 def _raise_linalgerror_svd_nonconvergence(err, flag):
---> 97     raise LinAlgError("SVD did not converge")

LinAlgError: SVD did not converge

Any help on how to fix this issue would be appreciated.

Best,
Chuang

To Reproduce
ncem_ip.get_sender_receiver_effects()
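A possible mitigation (a hedged sketch, not a confirmed fix): get_fim_inv divides by a per-gene variance before nan_to_num (see the traceback above), so genes with near-zero variance yield degenerate Fisher information matrices that can make the SVD inside np.linalg.pinv fail. Dropping near-constant genes before building the loader may avoid this:

import numpy as np

X = adata.X.toarray() if hasattr(adata.X, "toarray") else np.asarray(adata.X)
adata = adata[:, X.var(axis=0) > 1e-8].copy()  # keep only genes with non-negligible variance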


ReadTheDocs improvements

  • Remove black mode default
  • Use class summaries that contain links to methods in the Usage documentation
  • Separate API and Tutorials as main chapters
  • Update README
  • Add ecosystem chapter

Trouble installing on python 3.10

Describe the bug
I believe this package should be installable on python 3.10, since the pyproject.toml says:

python = ">=3.8,<=3.10"

But pip in a fresh conda environment isn't letting me install:

mamba create -yn "ncem-env" python=3.10 
conda activate ncem-env
pip install ncem
ERROR: Ignored the following versions that require a different python version: 0.1.0 Requires-Python >=3.7,<3.10; 0.1.1 Requires-Python >=3.7,<3.9; 0.1.2 Requires-Python >=3.7,<3.9; 0.1.3 Requires-Python >=3.7,<3.9; 0.1.4 Requires-Python >=3.7,<3.9; 0.1.5 Requires-Python >=3.8,<=3.10
ERROR: Could not find a version that satisfies the requirement ncem (from versions: none)
ERROR: No matching distribution found for ncem

Maybe the versioning could be loosened up a bit?
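For reference, this looks like PEP 440 version ordering rather than a pip quirk: <=3.10 excludes every 3.10 patch release, because 3.10.12 > 3.10. A quick check with the packaging library:

from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=3.8,<=3.10")  # the pin reported for ncem 0.1.5
print(Version("3.10.0") in spec)     # True  (3.10.0 == 3.10 under PEP 440)
print(Version("3.10.12") in spec)    # False (3.10.12 > 3.10)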

System [please complete the following information]:

-----
session_info        1.0.0
-----
Python 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Linux-5.15.0-82-generic-x86_64-with-glibc2.35
-----
Session information updated at 2023-09-25 14:56

Work with 10X Visium Estimator

Question

Dear @AnnaChristina and ncem developers,
thank you for creating this impressive tool! I am studying immune signalling in Visium data. For this reason, I deconvoluted a single-cell reference atlas into the Visium AnnData and created AnnDatas containing the cell-type-specific expression as shown here: https://github.com/theislab/ncem_benchmarks/blob/main/notebooks/data_preparation/deconvolution/cell2location_human_lymphnode.ipynb

I did not find any further explanation in the documentation or the tutorials and could not figure it out from the available functions, but am I correct to analyze the Visium signalling like this, e.g.:

## Initialize ncem model for deconvoluted Visium
ncem_ip = InterpreterDeconvolution()

ncem_ip.data = customLoaderDeconvolution(
    adata=adata, patient=None, library_id='library_id', radius=None
)

print("Load the data for visium")
get_data_custom(interpreter=ncem_ip, deconvolution=True)

print("Get sender and receiver effects")
ncem_ip.get_sender_receiver_effects()

## Get type coupling
type_coupling = ncem_ip.type_coupling_analysis_circular(
    edge_width_scale=0.3, edge_attr='magnitude', figsize=(20, 20), text_space=1.28,
    de_genes_threshold=150, suffix=f"{niche}_type_coupling_analysis_circular.pdf",
)
type_coupling.to_csv(f"{sample}_T_ST_{niche}_NCEM_Object_type_coupling.csv")

## Get type coupling heatmap
ncem_ip.type_coupling_analysis(figsize=(15, 15), suffix=f"{niche}_type_coupling_analysis_Heatmap.pdf")

Or should I first train the estimator and then load the model into the interpreter?

Either way, how can I access or plot the features learned by the estimator for deconvoluted Visium?

Thanks in advance for your help!

Question

Hi,
Does ncem support differential analysis between two conditions (healthy and disease samples)?

Almost zero documentation for the training modules API

Hello,

I would like to try out this package, but I find there is close to zero documentation in the official readthedocs/API pages for the train classes. Is this actually the official documentation? None of the parameters are defined, the differences between functions are not explained, and almost nothing is written there. Please correct me if this is not the official documentation; it is unclear how to even use the train modules:

https://ncem.readthedocs.io/en/latest/readme.html

Use Tangram mapped visium for ncem

Question
Hello,
Thank you for creating ncem. I wanted to ask whether the custom data loader also accepts Tangram-mapped AnnData objects.
Specifically: what should obsm['node_types'] and obsm['proportions'] consist of?
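A hedged sketch of what the deconvolution loaders appear to expect (inferred from the cell2location preparation notebook linked in other issues, not from Tangram documentation): obsm["proportions"] holds per-spot cell-type fractions and obsm["node_types"] a one-hot encoding of the dominant type. Here tangram_density is a hypothetical spots × cell-types DataFrame of Tangram outputs, indexed like adata.obs_names:

import pandas as pd

props = tangram_density.div(tangram_density.sum(axis=1), axis=0)  # normalise rows to fractions
one_hot = pd.get_dummies(props.idxmax(axis=1)).reindex(columns=props.columns, fill_value=0)

adata.obsm["proportions"] = props
adata.obsm["node_types"] = one_hot.astype(float)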

How to run NCEM for own spatial data?

Hi Anna,

I want to use NCEM on other MERFISH data, to compare with other tools and see how the information differs. I followed your tutorial
https://github.com/theislab/spatial_scog_workshop_2022/blob/main/ncem/tutorial_ncem.ipynb
but I don't understand how to set up the customLoader for my own data, compared to your AnnData [ad = sq.datasets.mibitof()] structure:

AnnData object with n_obs × n_vars = 3309 × 36
    obs: 'row_num', 'point', 'cell_id', 'X1', 'center_rowcoord', 'center_colcoord', 'cell_size', 'category', 'donor', 'Cluster', 'batch', 'library_id'
    var: 'mean-0', 'std-0', 'mean-1', 'std-1', 'mean-2', 'std-2'
    uns: 'Cluster_colors', 'batch_colors', 'neighbors', 'spatial', 'umap'
    obsm: 'X_scanorama', 'X_umap', 'spatial'
    obsp: 'connectivities', 'distances'

My AnnData object has the expression matrix in adata.X, the cluster annotation in obs: 'clusters', and the spatial coordinates in obsm: 'spatial'. That's all I have in the AnnData. With this information, would I be able to repeat the tutorial for ncem.sender_similarity_analysis?
Thanks.
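A hedged sketch of how this minimal AnnData might be fed to the custom loader, mirroring the customLoader calls in other issues on this page (the 'donor'/'library_id' column names and the radius are assumptions):

from ncem.interpretation import InterpreterInteraction
from ncem.data import get_data_custom, customLoader

# The loader expects obs columns naming each cell's cluster, patient and
# image/library; for a single-sample dataset, constant placeholders work:
adata.obs["donor"] = "patient_1"
adata.obs["library_id"] = "image_1"

ncem = InterpreterInteraction()
ncem.data = customLoader(
    adata=adata, cluster="clusters", patient="donor",
    library_id="library_id", radius=50,  # radius in obsm['spatial'] units; tune per dataset
)
get_data_custom(interpreter=ncem)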

Problems using GPU

Good day,

I have tested building the model on a GPU but seem to be having some problems. On our HPC with V100 and RTX6000 GPUs I get the message below:

2022-07-08 17:43:48.418309: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
/hpc/pmc_stunnenberg/cruiz/miniconda3/envs/ncem/lib/python3.8/site-packages/anndata/_core/anndata.py:121: ImplicitModificationWarning: Transforming to str index.
  warnings.warn("Transforming to str index.", ImplicitModificationWarning)

  0%|          | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:22<00:00, 22.25s/it]
100%|██████████| 1/1 [00:22<00:00, 22.25s/it]2022-07-08 17:45:09.445032: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-07-08 17:45:09.864682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21322 MB memory:  -> device: 0, name: Quadro RTX 6000, pci bus id: 0000:5e:00.0, compute capability: 7.5
2022-07-08 20:15:32.071303: W tensorflow/core/framework/op_kernel.cc:1733] UNKNOWN: KeyError: 829
Traceback (most recent call last):

  File "/hpc/pmc_stunnenberg/cruiz/miniconda3/envs/ncem/lib/python3.8/site-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
    ret = func(*args)

  File "/hpc/pmc_stunnenberg/cruiz/miniconda3/envs/ncem/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 642, in wrapper
    return func(*args, **kwargs)

  File "/hpc/pmc_stunnenberg/cruiz/miniconda3/envs/ncem/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1132, in finalize_py_func
    generator_state.iterator_completed(iterator_id)

  File "/hpc/pmc_stunnenberg/cruiz/miniconda3/envs/ncem/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 853, in iterator_completed
    del self._iterators[self._normalize_id(iterator_id)]

KeyError: 829

For this I created a fresh conda environment and only pip-installed ncem. The process is still running on CPU, but it has already taken over four days and the training has still not finished.

I have also tried another HPC with an A100 GPU. However, there the problem is that TensorFlow does not recognize my GPU:

2022-07-11 10:23:36.877594: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
/home/cruiz2/miniconda3/envs/ncem/lib/python3.8/site-packages/anndata/_core/anndata.py:121: ImplicitModificationWarning: Transforming to str index.
  warnings.warn("Transforming to str index.", ImplicitModificationWarning)

  0%|          | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:18<00:00, 18.96s/it]
100%|██████████| 1/1 [00:18<00:00, 18.96s/it]2022-07-11 10:24:57.496311: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2022-07-11 10:24:57.496383: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2022-07-11 10:24:57.505115: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA

I added export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cruiz2/miniconda3/envs/ncem/lib/libcudnn.so.8 to my bashrc file but still get the same error. (Note that LD_LIBRARY_PATH entries should be directories, e.g. .../envs/ncem/lib, not the .so file itself.)

Do you have any recommendations on how to get the package to recognize the GPU properly, please?

Thanks in advance
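A quick sanity check (a generic TensorFlow snippet, not ncem-specific): if the list printed below is empty, the CUDA/cuDNN libraries are not visible to the process, e.g. because LD_LIBRARY_PATH points at a file instead of its directory or was not exported before Python started:

import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # empty list -> TensorFlow cannot see the GPU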

Differential interaction

Hi,
Does ncem support differential analysis between two conditions (healthy and disease samples)?

compatibility with 10x visium

Hi,
Does NCEM support 10x Visium datasets? This is a widely used platform; if NCEM supports 10x Visium, I think it will become more widely adopted, because so many labs work with it. We know there are some limitations to celltalker and CellPhoneDB, and NCEM would help a lot.

How to input CODEX data to NCEM

I appreciate your excellent work. I plan to use the NCEM package with my CODEX image data, but I failed to load my data following the tutorial code. I see there are many dataset-specific loader prefixes like 'zhang' and 'hartmann' for building a dataloader, and I don't know how to choose between them. May I ask for help?

Issue installing ncem on macOS

I am trying to install ncem on my machine:
macOS Monterey 12.6.1
pip 22.2.2 from /Users/sei/opt/miniconda3/envs/test-env/lib/python3.8/site-packages/pip (python 3.8)

pip install ncem

When I do this I get the following error:

(test-env) sei@192-168-1-136 ~ % pip install ncem

Collecting ncem
  Using cached ncem-0.1.4-py3-none-any.whl (119 kB)
Collecting click<8.0.0,>=7.1.2
  Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting patsy<0.6.0,>=0.5.1
  Using cached patsy-0.5.3-py2.py3-none-any.whl (233 kB)
Collecting Jinja2<4.0.0,>=2.11.3
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting docrep<0.4.0,>=0.3.2
  Using cached docrep-0.3.2.tar.gz (33 kB)
  Preparing metadata (setup.py) ... done
Collecting matplotlib<4.0.0,>=3.4.2
  Using cached matplotlib-3.6.1-cp38-cp38-macosx_11_0_arm64.whl (7.2 MB)
Collecting diffxpy<0.8.0,>=0.7.4
  Using cached diffxpy-0.7.4-py3-none-any.whl (85 kB)
Collecting scipy<2.0.0,>=1.7.0
  Using cached scipy-1.9.3-cp38-cp38-macosx_12_0_arm64.whl (28.5 MB)
Collecting scanpy<2.0.0,>=1.7.2
  Using cached scanpy-1.9.1-py3-none-any.whl (2.0 MB)
Collecting squidpy<2.0.0,>=1.0.0
  Using cached squidpy-1.2.3-py3-none-any.whl (180 kB)
Collecting PyYAML<6.0.0,>=5.4.1
  Using cached PyYAML-5.4.1.tar.gz (175 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting louvain<0.8.0,>=0.7.0
  Using cached louvain-0.7.2-cp38-cp38-macosx_11_0_arm64.whl (187 kB)
Collecting rich<11.0.0,>=10.1.0
  Using cached rich-10.16.2-py3-none-any.whl (214 kB)
Collecting ncem
  Using cached ncem-0.1.3-py3-none-any.whl (118 kB)
  Using cached ncem-0.1.2-py3-none-any.whl (117 kB)
  Using cached ncem-0.1.1-py3-none-any.whl (105 kB)
  Using cached ncem-0.1.0-py3-none-any.whl (82 kB)
Collecting Jinja2<3.0.0,>=2.11.3
  Using cached Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
ERROR: Cannot install ncem==0.1.0, ncem==0.1.1, ncem==0.1.2, ncem==0.1.3 and ncem==0.1.4 because these package versions have conflicting dependencies.

The conflict is caused by:
    ncem 0.1.4 depends on tensorflow<3.0.0 and >=2.5.0
    ncem 0.1.3 depends on tensorflow<3.0.0 and >=2.5.0
    ncem 0.1.2 depends on tensorflow<3.0.0 and >=2.5.0
    ncem 0.1.1 depends on tensorflow<3.0.0 and >=2.5.0
    ncem 0.1.0 depends on tensorflow<3.0.0 and >=2.5.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

Please let me know if there's anything else I should be doing beforehand.
Thank you kindly for your help.
