camaralab / stvea
Spatially-resolved Transcriptomics via Epitope Anchoring
License: GNU General Public License v3.0
Hi STvEA team,
Thanks for providing a wonderful tool! I'm interested in matching CODEX cells to CITE-seq cells using the MapCODEXtoCITE and GetTransferMatrix functions. My goal is to get the NN matrix indicating which CODEX cell matches which CITE-seq cells. I ran the pipeline with a subset of the BALB/c CODEX dataset and a murine spleen CITE-seq dataset:
(I did not run the UMAP/clustering steps, as I presume they are unrelated to the matching?) I got the following error
Error in `[<-`(`*tmp*`, i, , value = idx) : subscript out of bounds
during the GetTransferMatrix call.
Also, it seems that the corrected CODEX data from MapCODEXtoCITE was incomplete in my case?
(My input is 5,000 CODEX cells.)
Thank you again for the wonderful tool and let me know where the problem could potentially be.
Hi,
Thank you for developing this tool. I'm trying to use it for CODEX data, but I find the heatmaps quite hard to interpret at the moment. Is there a way to play around with the color scheme? I have difficulty distinguishing what is significant and what is not. Also, is there a way to add a color legend to these heatmaps?
Thank you,
Hi,
I love this tool and am trying to apply it to my data.
I get this error when running the CODEX only tutorial on my data:
stvea_object <- CleanCODEX(stvea_object)
Error in quantile.default(x, seq(from = 0, to = 1, length = n)) :
missing values and NaN's not allowed if 'na.rm' is FALSE
Could this error be because my codex_protein matrix is much smaller than my codex_spatial matrix, since the markers I used only bind to a subset of cells in my tissue sample?
Is it possible to run STvEA on such data?
Thanks
Andrew
Hi,
I had another question: are you planning to update your code to accept CITE-seq Seurat objects created with Seurat v3 or later?
Thanks,
Andrew
Hello,
This is an excellent tool!
I have tried the code provided in the "Analyzing CODEX data" section, and it appears to work well on my own CODEX dataset. However, I have hit an obstacle when mapping the UMAP result back onto the CODEX spatial figure using this code:
PlotExprCODEXspatial(stvea_object, c("marker"))
Is there any code that must be run before I execute that line?
I really appreciate your help in advance.
Best regards,
Bugie
Institute of Biomedical Science, Academia Sinica, Taiwan
According to the CODEX tutorial, you started with "segmented and spillover corrected data as FCS files". Are these the files that are generated by CODEX Processor or was there additional processing? If it's just the CODEX Processor output, can the CSV files be used instead or do the FCS files contain some additional required information?
The codex_balbc1 dataset contains 4 objects. How were those generated?
Which file do you pass to the data() function?
I tried with this file https://welikesharingdata.blob.core.windows.net/forshare/BALBc-1.fcs
like this
data("BALBc-1.fcs")
but I get this message
Warning message:
In data("BALBc-1.fcs") : data set ‘BALBc-1.fcs’ not found
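For anyone hitting the same warning: `data()` in R only loads datasets that ship inside an installed package, so it cannot open a downloaded .fcs file. A minimal sketch of the distinction; note that using Bioconductor's flowCore to read .fcs files is my own suggestion, not something the STvEA tutorial prescribes:

```r
# data() looks up datasets bundled with installed packages, so pointing it
# at a downloaded file produces exactly the warning above:
w <- tryCatch(data("BALBc-1.fcs"), warning = function(w) conditionMessage(w))
w  # message says the data set was not found

# Package datasets load by object name after attaching the package, e.g.
# library(STvEA); data("codex_balbc1")

# A standalone .fcs file instead needs an FCS reader, e.g. Bioconductor's
# flowCore (an assumption on my part, not part of STvEA):
# BiocManager::install("flowCore")
# library(flowCore)
# ff  <- read.FCS("BALBc-1.fcs", transformation = FALSE)
# mat <- exprs(ff)  # event-by-channel expression matrix
```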
Hi,
I have never had to load .fcs data into R before. The tutorial doesn't include information on how best to read data from .fcs files.
Any advice would be greatly appreciated.
Thank you for the pipeline, it's helping me a lot in analyzing CODEX data.
I am following the code on codex_tutorial.md for preprocessing the BALBc data from Goltsev et al. using the fcs provided by the authors of the study.
However, after filtering and cleaning each dataset separately, if I compare the distributions of a marker between datasets, sometimes I get very different results.
For example, comparing BALBc-1 and BALBc-2 the distribution of CD45 is quite similar:
However, for Ly6C they are extremely different:
This also leads to a clear separation in the UMAP between the datasets. This separation is not present in the original data, so it's not caused by a batch effect.
I tried setting the FilterCODEX parameters either to the values specified in the tutorial or leaving them blank, but the conclusions are always the same.
Have you encountered this problem before? How can I solve it?
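Not a fix, but a way to quantify the discrepancy described here: a two-sample Kolmogorov-Smirnov test per marker gives a single number for how far apart two post-filtering distributions are (0 = identical, 1 = fully separated). This is a generic base-R diagnostic, not part of STvEA; the marker values below are simulated stand-ins for Ly6C:

```r
set.seed(1)
# Simulated per-cell Ly6C levels in two CODEX datasets (illustrative only)
ly6c_balbc1 <- rnorm(1000, mean = 0.4, sd = 0.1)
ly6c_balbc2 <- rnorm(1000, mean = 0.7, sd = 0.1)

# Two-sample KS statistic: the maximum gap between the empirical CDFs
ks <- ks.test(ly6c_balbc1, ly6c_balbc2)
unname(ks$statistic)

# Overlaying the empirical CDFs shows where the distributions diverge
plot(ecdf(ly6c_balbc1), main = "Ly6C: BALBc-1 vs BALBc-2")
lines(ecdf(ly6c_balbc2), col = "red")
```

A large statistic on cleaned data but not on raw data would point at the filtering step rather than a true batch effect.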
Thanks very much for developing such a useful tool for CODEX analysis. I had no trouble running the package, and the spatial analyses were really helpful for my image analysis. However, I have recently encountered a problem when running the protein-expression adjacency score on adjacent cells. Below is the error message:
protein_adj <- AdjScoreProteins(stvea_object, k=3, num_cores=8)
Creating permutation matrices - 77.365 seconds
Computing adjacency score for each feature pair
Error: $ operator is invalid for atomic vectors
In addition: Warning messages:
1: In if (class(f) != "matrix") { :
the condition has length > 1 and only the first element will be used
2: In if (class(f_pairs) == "list") { :
the condition has length > 1 and only the first element will be used
3: In mclapply(1:nrow(f), function(i) as(t(sapply(1:nrow(permutations), :
all scheduled cores encountered errors in user code
4: In mclapply(work, worker, mc.cores = num_cores) :
all scheduled cores encountered errors in user code
I am uncertain whether I ran out of memory or cores. I have 24 cores on my machine and can increase that if needed. I adjusted the k and num_cores parameters, but I kept getting these error messages back. Any help will be deeply appreciated. Thanks very much.
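One observation that may be relevant to the `the condition has length > 1` warnings (an assumption about the cause, not a confirmed diagnosis): since R 4.0.0 a matrix's class vector is `c("matrix", "array")`, so checks written as `class(f) != "matrix"` return a length-2 logical that `if ()` cannot use, which matches the warnings above. A minimal illustration of the behavior and the robust idioms:

```r
m <- matrix(1:4, nrow = 2)

# Since R 4.0.0 a matrix carries a two-element class vector
class(m)                      # "matrix" "array"

# So equality checks against "matrix" yield a length-2 logical, which
# triggers the "condition has length > 1" warning (or error) inside if ()
length(class(m) != "matrix")  # 2

# Robust idioms that work on all R versions:
inherits(m, "matrix")         # TRUE
is.matrix(m)                  # TRUE
```

If that is the cause, running under an R version older than 4.0.0 would sidestep it until the package's checks are updated.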
Hello. I don't understand how you normalize the CODEX data in your article.
You mentioned the following:
"We normalized the processed CODEX data by the total levels in each cell,
where M_hi is the level of antigen h in cell i before normalization. After this process, antigen levels are well approximated by a two-component Gaussian mixture model, where the Gaussian with the highest median corresponds to the signal component and the mixing parameter a_h represents the probability of a measurement of antigen h actually coming from the background. Upon fitting the model to the data using the expectation-maximization algorithm for maximum likelihood estimation, we filtered out the background component of the data by considering the probabilities"
Could you please tell me where the code that performs this normalization is?
Thanks a lot
Leandro Balzano
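While only the authors can point to the exact code in the package, the quoted passage describes two generic steps that can be sketched in base R: dividing each cell's antigen levels by that cell's total (one reading of "normalized by the total levels in each cell", i.e. dividing M_hi by the sum of M_hi over all antigens h), then fitting a two-component Gaussian mixture per antigen by expectation-maximization. The sketch below is my own re-implementation of that idea under those assumptions, not the STvEA code; all names and data are illustrative:

```r
set.seed(42)
# Simulated raw CODEX matrix: 500 cells x 3 antigens (illustrative data)
raw <- matrix(rexp(1500), nrow = 500,
              dimnames = list(NULL, c("CD45", "CD4", "Ly6C")))

# Step 1: normalize each cell by its total antigen level (rows sum to 1)
norm <- raw / rowSums(raw)

# Step 2: fit a two-component Gaussian mixture to one antigen via EM
em_gmm2 <- function(x, iters = 100) {
  mu <- as.numeric(quantile(x, c(0.25, 0.75)))  # initial component means
  s  <- rep(sd(x), 2)                           # initial component sds
  lambda <- c(0.5, 0.5)                         # initial mixing weights
  for (it in seq_len(iters)) {
    # E-step: posterior probability that each value comes from component 1
    d1 <- lambda[1] * dnorm(x, mu[1], s[1])
    d2 <- lambda[2] * dnorm(x, mu[2], s[2])
    p1 <- d1 / (d1 + d2 + 1e-300)
    # M-step: update weights, means, and sds from the posteriors
    lambda <- c(mean(p1), 1 - mean(p1))
    mu <- c(weighted.mean(x, p1), weighted.mean(x, 1 - p1))
    s  <- pmax(c(sqrt(weighted.mean((x - mu[1])^2, p1)),
                 sqrt(weighted.mean((x - mu[2])^2, 1 - p1))), 1e-4)
  }
  list(mu = mu, sd = s, lambda = lambda, posterior_comp1 = p1)
}

fit <- em_gmm2(norm[, "CD45"])
# In the paper's terms, the component with the higher center plays the role
# of the signal; the other component's weight corresponds to the background
fit$mu
```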
Hi,
I'm interested in using STvEA to integrate CODEX data with CITE-seq data, but I got stuck at the MapCODEXtoCITE call with the following error:
stvea_object <- MapCODEXtoCITE(stvea_object, num_chunks=8, seed=30, num_cores=20)
Error in (nrow(cite_protein) + 1):nrow(corrected_data[[i]]) :
argument of length 0
In addition: Warning message:
In mclapply(1:length(chunk_ids), function(i) AnchorCorrection(ref_mat = cite_protein, :
all scheduled cores encountered errors in user code
I tried running it step by step as mentioned and found that it gets stuck at the following call:
scoredAnchors <- ScoreAnchors(neighbors, filteredAnchors, nrow(ref_mat), nrow(query_mat), k.score=k.score, verbose=verbose)
Scoring anchors
Error in sparseMatrix(i = i, j = j, x = 1, dims = dims) :
all(dims >= dims.min) is not TRUE
In addition: Warning message:
In ScoreAnchors(neighbors, filteredAnchors, nrow(ref_mat), nrow(query_mat), :
Requested k.score = 80, only 50 in dataset
I looked for NAs in the dataset and found none. Could anyone please advise on what could be causing this?
I was curious whether you are planning to share the source code for the Shiny app (https://camara-lab.shinyapps.io/stvea/) on your GitHub? That would be very helpful!
Thank you in advance.