
datasets-sars-cov-2's Introduction

Datasets

Benchmark datasets for WGS analysis of SARS-CoV-2.

Purpose

The Technical Outreach and Assistance for States Team (TOAST) developed these benchmark datasets for SARS-CoV-2 sequencing to help users at varying stages of building sequencing capacity. The collection consists of six datasets, summarized in the table below, each chosen to represent a different use case.

Summary Table

| # | Dataset Name | Description | Intended Use | TSV Name | Primer Set | Reference |
|---|---|---|---|---|---|---|
| 1 | Boston Outbreak | A cohort of 63 samples from a real outbreak with three introductions; Illumina platform; metagenomic approach | To understand the features of virus transmission during a real outbreak setting; metagenomic sequencing | sars-cov-2-SNF-A.tsv | NA | Lemieux et al. |
| 2 | CoronaHiT rapid | A cohort of 39 samples prepared by different wet-lab approaches and sequenced on two platforms (Illumina vs MinION) with MinION running for 18 hrs; amplicon-based approach | To verify that a bioinformatics pipeline finds virtually no differences between platforms for the same genome; outbreak setting | sars-cov-2-coronahit-rapid.tsv | ARTIC_V3 | Baker et al. |
| 3 | CoronaHiT routine | A cohort of 69 samples prepared by different wet-lab approaches and sequenced on two platforms (Illumina vs MinION) with MinION running for 30 hrs; amplicon-based approach | To verify that a bioinformatics pipeline finds virtually no differences between platforms for the same genome; routine surveillance | sars-cov-2-coronahit-routine.tsv | ARTIC_V3 | Baker et al. |
| 4 | VOI/VOC lineages | A cohort of 16 samples from 10 representative CDC-defined VOI/VOC lineages as of 06/15/2021; Illumina platform; amplicon-based approach | To benchmark lineage-calling bioinformatics pipelines, especially for VOI/VOCs; bioinformatics pipeline validation | sars-cov-2-voivoc.tsv | ARTIC_V3 | This study |
| 5 | Non-VOI/VOC lineages | A cohort of 39 samples from representative non-VOI/VOC lineages as of 05/30/2021; Illumina platform; amplicon-based approach | To benchmark lineage-calling pipelines nonspecific to VOI/VOCs; bioinformatics pipeline validation | sars-cov-2-nonvoivoc.tsv | ARTIC_V3: 34, ARTIC_V1: 2, RandomPrimer-SSIV_NexteraXT: 2, NA: 1 | This study |
| 6 | Failed QC | A cohort of 24 samples that failed basic QC metrics, covering 8 possible failure scenarios; Illumina platform; amplicon-based approach | To serve as controls to test bioinformatics quality control cutoffs | sars-cov-2-failedQC.tsv | ARTIC_V3: 5, CDC in-house multiplex PCR primers (Paden et al.): 19 | This study |

Installation & Usage

Other installation methods

Some methods of installation are maintained by the community. Although we do not have direct control over them, we would like to list them for convenience.

Visit INSTALL.md for these methods.

From Source Code

Grab the latest stable release under the releases tab. If you are feeling adventurous, use git clone! Include the scripts directory in your path. For example, if you downloaded this project into your local bin directory:

$ export PATH=$PATH:$HOME/bin/datasets/scripts
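If you take the git clone route instead, a minimal sketch (assuming the repository's CDCgov GitHub URL) is:

$ git clone https://github.com/CDCgov/datasets-sars-cov-2.git $HOME/bin/datasets
$ export PATH=$PATH:$HOME/bin/datasets/scripts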

Additionally, ensure that you have an NCBI API key. This key associates your edirect requests with your username. Without it, edirect requests may be throttled or fail intermittently. After obtaining an NCBI API key, add it to your environment with

export NCBI_API_KEY=unique_api_key_goes_here

where unique_api_key_goes_here is a unique hexadecimal number with characters from 0-9 and a-f. You should also set your email address in the EMAIL environment variable; edirect otherwise tries to guess it, which is an error-prone process. Add this variable to your environment with

export EMAIL=name@example.com

using your own email address instead of name@example.com.
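To make both variables persist across sessions, you might append them to your shell startup file (a sketch assuming bash; substitute your real key and address):

$ echo 'export NCBI_API_KEY=unique_api_key_goes_here' >> $HOME/.bashrc
$ echo 'export EMAIL=name@example.com' >> $HOME/.bashrc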

Dependencies

In addition to the installation above, please install the following dependencies (a quick way to verify them all follows the list).

  1. edirect (see section on edirect below)
  2. sra-toolkit, built from source: https://github.com/ncbi/sra-tools/wiki/Building-and-Installing-from-Source
  3. Perl 5.12.0
  4. Make
  5. wget - Brew users: brew install wget
  6. sha256sum - Linux-based OSs should have this already; Other users should see the relevant installation section below.
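To check everything at once, a quick sanity check (a sketch; esearch and efetch ship with edirect, and fastq-dump ships with sra-toolkit) is:

$ for tool in esearch efetch fastq-dump perl make wget sha256sum; do command -v "$tool" >/dev/null || echo "MISSING: $tool"; done

Any tool reported as MISSING should be installed before proceeding.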

Installing edirect

Modified instructions from https://www.ncbi.nlm.nih.gov/books/NBK179288/

sh -c "$(curl -fsSL ftp://ftp.ncbi.nlm.nih.gov/entrez/entrezdirect/install-edirect.sh)"

NOTE: edirect needs an NCBI API key. Instructions can be found at https://ncbiinsights.ncbi.nlm.nih.gov/2017/11/02/new-api-keys-for-the-e-utilities
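Once the key is set, you can confirm that edirect works end to end with a trivial query (a sketch; NC_045512.2 is the SARS-CoV-2 reference genome accession):

$ esearch -db nucleotide -query "NC_045512.2" | efetch -format docsum | head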

Installing sha256sum

If you do not have sha256sum (e.g., if you are on macOS), define the following shell function and export it.

function sha256sum() { shasum -a 256 "$@" ; }
export -f sha256sum

This shell function will need to be defined in the current session. To make it permanent for future sessions, add it to $HOME/.bashrc.
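For example, you could append it with a heredoc (assuming bash is your login shell):

$ cat >> $HOME/.bashrc <<'EOF'
function sha256sum() { shasum -a 256 "$@" ; }
export -f sha256sum
EOF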

Downloading a dataset

To run, you need a dataset in tsv format. Here is the usage statement:

Usage: GenFSGopher.pl -o outdir spreadsheet.dataset.tsv
PARAM        DEFAULT  DESCRIPTION
--outdir     <req'd>  The output directory
--compressed          Compress files after finishing hashsum verification
--format     tsv      The input format. Default: tsv. No other format
                      is accepted at this time.
--layout     onedir   onedir   - Everything goes into one directory
                      byrun    - Each genome run gets its separate directory
                      byformat - Fastq files to one dir, assembly to another, etc
                      cfsan    - Reference and samples in separate directories with
                                 each sample in a separate subdirectory
--shuffled   <NONE>   Output the reads as interleaved instead of individual
                      forward and reverse files.
--norun      <NONE>   Do not run anything; just create a Makefile.
--numcpus    1        How many jobs to run at once. Be careful of disk I/O.
--citation            Print the recommended citation for this script and exit
--version             Print the version and exit
--help                Print the usage statement and die
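For example, to download the VOI/VOC dataset (dataset 4 above) into a directory named voivoc with four parallel jobs (a sketch; run it from wherever the .tsv spreadsheets live in your copy of the repository):

$ GenFSGopher.pl --outdir voivoc --numcpus 4 sars-cov-2-voivoc.tsv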

Using a dataset

There is a field intendedUse which suggests how a particular dataset might be used. For example, Epi-validated outbreak datasets might be used with a SNP-based or MLST-based workflow. As the number of different values for intendedUse increases, other use-cases will be available. Otherwise, how you use a dataset is up to you!

Creating your own dataset

To create your own dataset and to make it compatible with the existing script(s) here, please follow these instructions. These instructions are subject to change.

Start by creating a new Excel spreadsheet with only one tab. Please delete any extraneous tabs to avoid confusion. Then view the specification.

Citation

If this project has helped you, please cite both this website and the publication:

Xiaoli L, Hagey JV, et al. "Benchmark datasets for SARS-CoV-2 surveillance bioinformatics." PeerJ 10 (2022): e13821.
DOI: 10.7717/peerj.13821


Notices and Disclaimers

Public Domain

This repository constitutes a work of the United States Government and is not subject to domestic copyright protection under 17 USC § 105. This repository is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication. All contributions to this repository will be released under the CC0 dedication. By submitting a pull request you are agreeing to comply with this waiver of copyright interest.

License

Unless otherwise specified, the repository utilizes code licensed under the terms of the Apache Software License and therefore is licensed under ASL v2 or later.

The source code in this repository is free: you can redistribute it and/or modify it under the terms of the Apache Software License version 2, or (at your option) any later version.

The source code in this repository is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Apache Software License for more details.

You should have received a copy of the Apache Software License along with this program. If not, see http://www.apache.org/licenses/LICENSE-2.0.html

Any source code forked from other open-source projects inherits the license of the original project.

Privacy

This repository contains only non-sensitive, publicly available data and information. All material and community participation are covered by the Disclaimer and Code of Conduct. For more information about CDC's privacy policy, please visit http://www.cdc.gov/other/privacy.html.

Contributing

Anyone is encouraged to contribute to the repository by forking and submitting a pull request. (If you are new to GitHub, you might start with a basic tutorial.) By contributing to this project, you grant a world-wide, royalty-free, perpetual, irrevocable, non-exclusive, transferable license to all users under the terms of the Apache Software License v2 or later.

All comments, messages, pull requests, and other submissions received through CDC including this GitHub page may be subject to applicable federal law, including but not limited to the Federal Records Act, and may be archived. Learn more at http://www.cdc.gov/other/privacy.html.

More specific instructions can be found at CONTRIBUTING.md.

Records

This repository is not a source of government records, but is a copy to increase collaboration and collaborative potential. All government records will be published through the CDC web site.

datasets-sars-cov-2's People

Contributors

daisy0223, jvhagey, kapsakcj, lskatz, marielataretu, mikeyweigand, pvanheus


datasets-sars-cov-2's Issues

bioconda plans?

I'm curious if there are plans to add this tool to bioconda?

It would greatly benefit the community if one could easily install the dependencies via conda or use the docker image that is auto-generated via the biocontainers project.

I'd be happy to write up a Dockerfile, but it would probably reach more users if it were available on bioconda.
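In the meantime, the dependencies themselves are already packaged (a sketch; entrez-direct and sra-tools are the bioconda package names for edirect and sra-toolkit):

$ conda create -n datasets -c conda-forge -c bioconda entrez-direct sra-tools perl make wget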

CoronaHiT routine Illumina read files are truncated

Not sure what's causing this; I can manually fix the files, but I really shouldn't need to. It causes any paired-end analysis to fail immediately.

Upon further testing, the VOI/VOC dataset is affected as well; I am fairly certain the method being used to split the reads is flawed.

I've done a little digging, and I think the use of fastq-dump --gzip is to blame; it is known to be buggy, and the user's own gzip should probably be used instead.

It was actually fastq-dump --split-files; I have submitted a PR.
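For anyone hitting this before the fix is merged, a workaround sketch (SRR000001 stands in for a real run accession) is to have fastq-dump write split, uncompressed files and compress them with the system gzip:

$ fastq-dump --split-files SRR000001
$ gzip SRR000001_1.fastq SRR000001_2.fastq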

Issue with downloading datasets 2 and 3

Hello,

I was able to install and run the script outlined here to download all 6 datasets.

An odd thing is happening for datasets 'sars-cov-2-coronahit-rapid.tsv' and 'sars-cov-2-coronahit-routine.tsv' however.

All of the forward-read fastq.gz files are downloading (e.g., NORW-F0A6F_CoronaHiT-ONT_1.fastq.gz), but only some of the corresponding reverse-read files are; most of them are just empty files. Also, all of the .fna files are empty.

An example of output:

(screenshot of the output directory omitted)

I am wondering if this is normal behavior (if those empty “_2.fastq.gz” files really don’t exist and therefore can’t be downloaded for some samples), or if there is some error.

The forward and reverse files for the other 4 datasets all downloaded fine.

Note: I am on macOS and am running this while logged into one of our BCM-HGSC login nodes.

Any help would be appreciated!

Thanks,
Evette

dataset #6 lineage assignment validation

Hi,

I downloaded the GISAID/NCBI genomes based on the accession numbers provided in the dataset #6 table and ran pangolin v3.1.3 to get lineage assignments. All but one match the expected lineage results in dataset #6; the exception is the sample expected to be B.1.1.391.

Could you confirm the lineage of the hCoV-19_USA_CA-CZB-15265_2020 genome?

Here is the result I got with pangolin v3.1.3:

taxon,lineage,conflict,ambiguity_score,scorpio_call,scorpio_support,scorpio_conflict,version,pangolin_version,pangoLEARN_version,pango_version,status,note

MW564975.1,B.1.1.450,0.0,0.9413064438373775,,,,PLEARN-v1.2.13,3.1.3,2021-06-15,v1.2.13,passed_qc,

hCoV-19/USA/CA-CZB-15265/2020|EPI_ISL_738705|2020-11-24,B.1.1.450,0.0,0.9413064438373775,,,,PLEARN-v1.2.13,3.1.3,2021-06-15,v1.2.13,passed_qc,

Here are the screenshots from the NCBI and GISAID metadata.

(screenshots omitted)

Thank you.

Consensus query

Apologies if this is documented somewhere and I missed it. I just wanted to know what the process was for producing, QC-ing, and curating the truth consensuses.

Thanks so much for collecting these; it is a huge contribution.

Information about Supplementary_Table2_datasetsQC.xlsx

Good morning,

I'm a student in Computer Science at Università degli Studi di Milano, and for my thesis I am assessing some pipelines for the analysis of SARS-CoV-2 samples.
In order to select the best pipeline for our requirements, I'm using the benchmark datasets available here.
I found Supplementary_table2 in your paper (Xiaoli L, Hagey JV, Park DJ, Gulvik CA, Young EL, Alikhan N-F, Lawsin A, Hassell N, Knipe K, Oakeson KF, Retchless AC, Shakya M, Lo C-C, Chain P, Page AJ, Metcalf BJ, Su M, Rowell J, Vidyaprakash E, Paden CR, Huang AD, Roellig D, Patel K, Winglee K, Weigand MR, Katz LS. 2022. Benchmark datasets for SARS-CoV-2 surveillance bioinformatics. PeerJ 10:e13821 http://doi.org/10.7717/peerj.13821), and I would also like to use the data contained there for evaluations (not only the .tsv file available for every dataset).
I'm writing here because I can't understand how the column 'Total reads' is calculated. In particular, I used FastQC (the value of the 'Total Sequences' field) to compute this value, and I also counted the reads in the original .FASTQ files, but the numbers don't correspond to the ones published in Supplementary_table2.
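For reference, my manual count followed the standard four-lines-per-read rule (sample_1.fastq.gz stands in for each actual file):

$ echo $(( $(gzip -dc sample_1.fastq.gz | wc -l) / 4 ))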

Do you know why the numbers are different? Is it possible that Supplementary_table2 is outdated with respect to the current version of the dataset?
If this is the case, which version of the dataset is matched to Supplementary_table2 and used in your paper?

Thank you very much for your time :)

Best regards,
Sara Manfredi

Internally curated consensus sequences for lineages

Hi TOAST team,

I am interested in curating representative genomes for VOCs/VBMs. According to your recent publication (Xiaoli, Lingzi, et al. "Benchmark datasets for SARS-CoV-2 surveillance bioinformatics." PeerJ 10 (2022): e13821), your datasets 4 & 5 were prepared based on alignments to the 'internally curated consensus sequences'. May I ask for details about how you curated those internally? Thank you.

Best,
Gyuhyon

ref genome nomenclature

I heard one comment that especially on the VOIVOC dataset, the column header "reference" is misleading because each of these samples is a lineage representative. I do not know the exact solution yet but I did want to earmark this issue.

CONTRIBUTIONS.md

Need to make instructions on how to contribute new datasets.

  • Creating the spreadsheet
  • Need to do a quality check by themselves. Submit quality check metrics.
    • Need to have quality checks listed
    • Need to have thresholds listed
    • A checkbox for whether it passed
  • Making a pull request
  • A note about CI testing - full dataset needs to pass CI

VCF files for all data sets

It would be good to have complete VCF files available for all the sample sets. That way the truth set does not rely on a handful of sites of interest, but allows for complete evaluation across the genome.
