Fast visualization tool for large-scale and high-dimensional single-cell data

License: GNU General Public License v3.0


Introduction


SCope v1.8.2: Visualization of large-scale and high-dimensional single-cell data

SCope is a fast visualization tool for large-scale and high-dimensional scRNA-seq datasets. Currently, the only data format supported by SCope is .loom. This file format for very large omics datasets is maintained by the Linnarsson Lab through the loompy Python package (https://github.com/linnarsson-lab/loompy).

View the change log here.

Demo

Visit https://scope.aertslab.org to test out SCope on several published datasets! Personal loom files can be uploaded, but they will only be kept for 5 days.

Loom File Generation

Currently there are two packages to generate extended loom files compatible with SCope.

  • R: SCopeLoomR - Dedicated R package
  • Python: pySCENIC - Single function for generation from SCENIC results

Eventually, the functionality from pySCENIC will be expanded and moved into its own Python package.

Run SCope

Standalone App

Standalone apps for macOS and Linux can be downloaded from the releases page.

โ— SCope standalone app requires Node.js (> v9). To install it, go to https://nodejs.org/en/download/.

A Windows app is under development, but currently has no ETA.

Command Line

You will need at least Python 3.7 to run this.

  1. Clone the GitHub repository and install,
# Define where you want to clone the SCope repository.
LOCAL_SCOPE_REPO="${HOME}/repos/SCope"
# Clone SCope git repository.
git clone https://github.com/aertslab/SCope "${LOCAL_SCOPE_REPO}"
# Go to your local cloned SCope repository.
cd "${LOCAL_SCOPE_REPO}"
# Install SCope.
npm install
  2. Run,
# Go to your local cloned SCope repository.
cd "${LOCAL_SCOPE_REPO}"
SCOPE_CONFIG=config.json npm run scope
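The two steps above can be wrapped in a small launcher that fails early when the config file is missing. This is a hedged sketch, not part of the SCope repository; `run_scope` is a hypothetical helper name:

```shell
#!/bin/sh
# Sketch: launch SCope from a cloned repository with an explicit config file,
# failing early if the config is missing.

run_scope() {
    repo="$1"
    config="$2"
    if [ ! -f "$repo/$config" ]; then
        echo "Config '$config' not found in $repo" >&2
        return 1
    fi
    cd "$repo" && SCOPE_CONFIG="$config" npm run scope
}

# Typical usage:
# run_scope "${HOME}/repos/SCope" config.json
```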

Deploy a Cloud-based Instance

Amazon Web Services

Public AMI

No ETA.

Source

To create a SCope AWS instance from scratch please read the tutorial aws-deployment-source.

Features

Enabling ORCID Functionality

To enable collaborative annotations and login via ORCID ID, API credentials (orcidAPIClientID, orcidAPIClientSecret and orcidAPIRedirectURI) must be added to the config file provided. These can be generated at the ORCID developer tools page.

The dataHashSecret entry in the config file should be filled in with a randomly generated string, for example from the Python secrets package. This string will be used to salt all annotation data, allowing validation of data generated on this instance of SCope. Any change to this string will invalidate all pre-existing annotations.
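One way to produce such a string is a one-liner around the secrets module mentioned above. This sketch assumes python3 is on your PATH; the `new_secret` function name is just for illustration:

```shell
#!/bin/sh
# Sketch: generate a random value suitable for the dataHashSecret config entry,
# using the Python secrets module. Assumes python3 is available on PATH.

new_secret() {
    # token_hex(32) yields 64 hexadecimal characters (32 random bytes).
    python3 -c 'import secrets; print(secrets.token_hex(32))'
}

new_secret
```

Paste the printed value into the dataHashSecret field of your config file.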

Development

  1. Clone the GitHub repository and install,
# Define where you want to clone the SCope repository.
LOCAL_SCOPE_REPO="${HOME}/repos/SCope"
# Clone SCope git repository.
git clone https://github.com/aertslab/SCope "${LOCAL_SCOPE_REPO}"
# Go to your local cloned SCope repository.
cd "${LOCAL_SCOPE_REPO}"
# Install SCope.
npm install
  2. Run,
# Go to your local cloned SCope repository.
cd "${LOCAL_SCOPE_REPO}"

# Start SCope Server (terminal 1).
cd opt
poetry run hypercorn main:scope_api --reload

# Start SCope Client (terminal 2).
cd ..
npm run dev
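The development setup above relies on several tools being installed. A small, hypothetical preflight check (not part of the repository) can report which ones are missing before you open the two terminals:

```shell
#!/bin/sh
# Sketch: report whether the development prerequisites mentioned above
# (npm for the client, poetry/python3 for the server) are on PATH.

need() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found $1"
    else
        echo "missing $1"
    fi
}

need npm
need poetry
need python3
```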

Configuration file (config.json)

Keys:

  • data: This is a directory containing data files (e.g. the motd.txt message of the day). Can be an absolute path or a relative path from where you start SCope. By default it is ./data/.
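A data directory matching that description can be prepared up front. This sketch uses the ./data/ default and the motd.txt name from the key description above; `init_data_dir` is a hypothetical helper and the message text is an example:

```shell
#!/bin/sh
# Sketch: prepare a SCope data directory containing a message-of-the-day file.

init_data_dir() {
    dir="${1:-./data}"                 # default matches the documented ./data/
    mkdir -p "$dir"
    printf 'Welcome to this SCope instance!\n' > "$dir/motd.txt"
}

# Typical usage:
# init_data_dir ./data
```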

Deploying SCope with Docker

docker-compose.yml is configured to spin up two containers: one running the SCope backend and another running an Apache reverse proxy server.

The SCope application will be available on port 80 by default. You can specify a port by setting the environment variable SCOPE_PORT before running the docker-compose command. Apache will proxy requests through to the appropriate port inside the container.

The docker-compose.yml will serve the assets from inside the scope container, and the docker-compose.host.yml will serve them from the host. This supports as many use cases as possible, because you can either build the assets on the host yourself using whatever configuration you need, or serve them from the container if your environment doesn't allow for that (e.g. you don't have npm installed on the host).

Before running the compose build, you can specify a SCOPE_PORT with: docker-compose build --build-arg SCOPE_PORT=8080

The SCope webpack assets will have to be built with "reverseProxyOn": true in the config. You can set the environment variable SCOPE_CONFIG=<path to your config> to specify a config file instead of changing the main one.

You can configure where the dockerised SCope data directories should be located on the host machine by setting the environment variable SCOPE_DATA_DIR before launching docker-compose. The default location is ./scope_data, which will be created if you do not specify one.

Note: in this config, you do not need to specify the port in publicHostAddress. The env var SCOPE_PORT gets appended for you.
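The fallback behaviour of those two environment variables can be sketched as below. The `scope_env` function is purely illustrative; the defaults (port 80, ./scope_data) come from the paragraphs above:

```shell
#!/bin/sh
# Sketch: resolve SCOPE_PORT and SCOPE_DATA_DIR to their documented defaults
# before invoking docker-compose.

scope_env() {
    SCOPE_PORT="${SCOPE_PORT:-80}"
    SCOPE_DATA_DIR="${SCOPE_DATA_DIR:-./scope_data}"
    echo "port=$SCOPE_PORT data=$SCOPE_DATA_DIR"
}

scope_env

# Typical launch with explicit values:
# SCOPE_DATA_DIR=$HOME/scope_data SCOPE_PORT=8080 docker-compose up -d
```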

If deploying the container on a specific port behind another external Apache reverse-proxy server, you may have to add the following to the external Apache site config to allow HTTP and WebSocket reverse-proxying. Here is an example:

    ProxyPass / http://0.0.0.0:8080/
    RewriteEngine on
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule ^/?(.*) "ws://0.0.0.0:8080/$1" [P,L]
Example: serve from container
  1. Copy config.json to a new file and modify it with "reverseProxyOn": true and publicHostAddress set to your domain
  2. docker-compose build --build-arg SCOPE_PORT=8080
  3. SCOPE_DATA_DIR=$HOME/scope_data SCOPE_PORT=8080 docker-compose up -d

Or: serve from host
  1. npm run build
  2. SCOPE_DATA_DIR=$HOME/scope_data SCOPE_PORT=8080 docker-compose -f docker-compose.host.yml up -d

You should be able to visit http://localhost:8080 and see the app!
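As a quick sanity check, the URL to visit follows directly from SCOPE_PORT. This sketch (the `scope_url` helper is hypothetical) builds the URL, omitting the port suffix for the default port 80; the commented curl line probes it once the stack is up:

```shell
#!/bin/sh
# Sketch: derive the local URL to visit from the SCOPE_PORT value (default 80).

scope_url() {
    port="${1:-80}"
    if [ "$port" = "80" ]; then
        echo "http://localhost"
    else
        echo "http://localhost:$port"
    fi
}

scope_url 8080   # -> http://localhost:8080

# Probe the running app (only once the containers are up):
# curl -fsS "$(scope_url 8080)" >/dev/null && echo "SCope is up"
```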


Contributors

bonchondev, bujesse, dependabot[bot], dweemx, ghuls, krisdavie, maybejustjames, rcannood, tverbeiren


Issues

Plot cells based on their expression level

A very nice feature that would enrich investigation of the data is for points to be plotted in order of their expression level, with the points on top (plotted last) being the highest-expressing cells.

The question is, whether this is easily implementable given that most of the frontend is working off of indices rather than the cellIDs?

Thoughts @kreftl ?

extend the getCellColorByFeatures function (1)

to support regulon view and comparison view, a nice feature would be that this function would take into account:

  • threshold values (per regulon) to filter out cells passing (coloured) or not (black)
  • intensity flag: true/false to take into account AUC value scaling (true) or just full RGB (false)

update plot when changing normalisations

The plot should be updated when the log or CPM normalisations are toggled on and off, to update the values shown. This will need to request features from the server again.

extend getCellColorByFeatures (2)

extend getCellColorByFeatures by new arguments:

  • coordinatesID
  • ageingAnnotationID
  • add possibility to put clusters next to genes and regulons

Memory error

Running the servers on Windows for a longer period of time results in a memory error.

rearrange jsx modules

accordingly: app, header, footer, leftsidebar, rightsidebar, content for each tab

getMarkerGenes()

Add function for retrieving markers from gene sets, clusters and regulons.

extend getMyLooms with metadata

We have to handle invalid/corrupted loom files, and warn the user when a loom file is valid but lacks the extended attributes (e.g. RegulonsAUC or other attributes not present). How should we do this?

getCellColorByFeatures does not allow selective colouring

Currently, the R, RG, or RGB colouring is supported. Alternatively, the user also needs access to the G, B, GB and RB colourings, so for a query such as:

{
  0: {type: "gene", value: ""},
  1: {type: "gene", value: "VGlut"},
  2: {type: "gene", value: ""}
}

getCellColorByFeatures should return a colors array instead of null.

Settings changes

CPM normalisation should be off by default.
Both CPM and Log2 normalisation settings should be shown on each tab, possibly under a 'Gene expression' subheading.
