mnaseseq's Introduction

nf-core/mnaseseq

Introduction

nf-core/mnaseseq is a bioinformatics analysis pipeline for DNA sequencing data obtained via micrococcal nuclease (MNase) digestion.

The pipeline is built using Nextflow, a workflow tool that runs tasks across multiple compute infrastructures in a very portable manner. It comes with Docker containers, making installation trivial and results highly reproducible.

Pipeline summary

  1. Raw read QC (FastQC)
  2. Adapter trimming (Trim Galore!)
  3. Alignment (BWA)
  4. Mark duplicates (picard)
  5. Merge alignments from multiple libraries of the same sample (picard)
    1. Re-mark duplicates (picard)
    2. Filtering to remove (a minimal command sketch follows this summary):
      • reads mapping to blacklisted regions (SAMtools, BEDTools)
      • reads that are marked as duplicates (SAMtools)
      • reads that aren't marked as primary alignments (SAMtools)
      • reads that are unmapped (SAMtools)
      • reads that map to multiple locations (SAMtools)
      • reads containing > 4 mismatches (BAMTools)
      • reads that are soft-clipped (BAMTools)
      • reads that have an insert size outside the specified range (BAMTools; paired-end only)
      • reads that map to different chromosomes (Pysam; paired-end only)
      • reads that aren't in FR orientation (Pysam; paired-end only)
      • reads where only one read of the pair fails the above criteria (Pysam; paired-end only)
    3. Alignment-level QC and estimation of library complexity (picard, Preseq)
    4. Create normalised bigWig files scaled to 1 million mapped reads (BEDTools, bedGraphToBigWig)
    5. Assess genome-wide coverage (deepTools)
    6. Call nucleosome positions and generate smoothed, normalised coverage bigWig files that can be used to generate occupancy profile plots between samples across features of interest (DANPOS2)
    7. Generate gene-body meta-profile from DANPOS2 smoothed bigWig files (deepTools)
  6. Merge filtered alignments across replicates (picard)
    1. Re-mark duplicates (picard)
    2. Remove duplicate reads (SAMtools)
    3. Create normalised bigWig files scaled to 1 million mapped reads (BEDTools, wigToBigWig)
    4. Call nucleosome positions and generate smoothed, normalised coverage bigWig files that can be used to generate occupancy profile plots between samples across features of interest (DANPOS2)
    5. Generate gene-body meta-profile from DANPOS2 smoothed bigWig files (deepTools)
  7. Create IGV session file containing bigWig tracks for data visualisation (IGV).
  8. Present QC for raw read and alignment results (MultiQC)
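
The commands below are a minimal, illustrative sketch of the SAMtools/BEDTools filtering (step 5.2), the scaling to 1 million mapped reads (step 5.4) and the deepTools gene-body meta-profile (step 5.7). File names such as sample.sorted.bam, blacklist.bed, chrom.sizes and genes.bed are placeholders, the thresholds are assumptions, and the pipeline's actual commands may differ.

# Drop unmapped, non-primary, duplicate and multi-mapping reads
# (SAM flag 1284 = 4 + 256 + 1024; -q 1 removes MAPQ-0 multi-mappers),
# then remove reads overlapping blacklisted regions
samtools view -b -F 1284 -q 1 sample.sorted.bam \
  | bedtools intersect -v -abam stdin -b blacklist.bed > sample.filtered.bam

# Scale coverage to 1 million mapped reads and convert to bigWig
SCALE=$(samtools view -c -F 4 sample.filtered.bam | awk '{print 1000000/$1}')
bedtools genomecov -ibam sample.filtered.bam -bg -scale "$SCALE" \
  | sort -k1,1 -k2,2n > sample.bedGraph
bedGraphToBigWig sample.bedGraph chrom.sizes sample.bigWig

# Gene-body meta-profile from a smoothed coverage bigWig (e.g. DANPOS2 output)
computeMatrix scale-regions -S sample.smooth.bigWig -R genes.bed \
  --beforeRegionStartLength 3000 --regionBodyLength 5000 --afterRegionStartLength 3000 \
  -o matrix.gz
plotProfile -m matrix.gz -out meta_profile.pdf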

Quick Start

i. Install Nextflow

ii. Install either Docker or Singularity for full pipeline reproducibility (please only use Conda as a last resort; see docs)

iii. Download the pipeline and test it on a minimal dataset with a single command

nextflow run nf-core/mnaseseq -profile test,<docker/singularity/conda/institute>

Please check nf-core/configs to see if a custom config file for running nf-core pipelines already exists for your institute. If so, you can simply use -profile <institute> in your command. This will enable either Docker or Singularity and set the appropriate execution settings for your local compute environment.
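
For example, if your site already had a shared config named mycluster (a hypothetical name used here purely for illustration), the test command above could be written as:

nextflow run nf-core/mnaseseq -profile test,mycluster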

iv. Start running your own analysis!

nextflow run nf-core/mnaseseq -profile <docker/singularity/conda/institute> --input design.csv --genome GRCh37

See usage docs for all of the available options when running the pipeline.
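
As a rough illustration, the --input design file used above might look like the sketch below; the column names are an assumption made for this example, so check the usage docs for the authoritative layout.

group,replicate,fastq_1,fastq_2
heart,1,heart_rep1_R1.fastq.gz,heart_rep1_R2.fastq.gz
heart,2,heart_rep2_R1.fastq.gz,heart_rep2_R2.fastq.gz
liver,1,liver_rep1_R1.fastq.gz,liver_rep1_R2.fastq.gz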

Documentation

The nf-core/mnaseseq pipeline comes with documentation about the pipeline, found in the docs/ directory:

  1. Installation
  2. Pipeline configuration
  3. Running the pipeline
  4. Output and how to interpret the results
  5. Troubleshooting

Credits

The pipeline was originally written by The Bioinformatics & Biostatistics Group for use at The Francis Crick Institute, London.

The pipeline was developed by Harshil Patel.

Many thanks to others who have helped out along the way too, including (but not limited to): @crickbabs.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on Slack (you can join with this invite).

Citation

If you use nf-core/mnaseseq for your analysis, please cite it using the following doi: 10.5281/zenodo.6581372.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
ReadCube: Full Access Link

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

mnaseseq's People

Contributors

drpatelh, maxulysse

mnaseseq's Issues

Hi, can I try this pipeline now with my own data?

Hi, I am trying to run this pipeline on my HPC, but it told me "you need to specify explicitly a revision with the option -r to use it". I understand this is because the pipeline has not been released yet.

I am just wondering if there is any way to run this pipeline now; it would be great to see how it works.

Can you keep maintaining it?

I encountered the same problem with Trim Galore as in Sarek, namely that the Python version is not compatible.

Thank you!

>>> Now performing quality (cutoff '-q 20') and adapter trimming in a single pass for the adapter sequence: 'AGATCGGAAGAGC' from file Heart_R1_T1_1.fastq.gz <<<

ERROR: Running in parallel is not supported on Python 2
