
dna-seq-gatk-variant-calling's Introduction

Snakemake workflow: dna-seq-gatk-variant-calling

[Badges: DOI · Snakemake · GitHub Actions status]

This Snakemake pipeline implements the GATK best-practices workflow for calling small germline variants.

Usage

The usage of this workflow is described in the Snakemake Workflow Catalog.

If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and its DOI (see the badges above).

dna-seq-gatk-variant-calling's People

Contributors

eqt, johanneskoester, micwessolly, wdecoster


dna-seq-gatk-variant-calling's Issues

Running without known-variants

Thank you for making these workflows!
In many cases I work with samples for which no previously known variants are available. Is it possible to configure this workflow to skip the steps that rely on known variants? I attempted this by providing an empty file; unsurprisingly, that failed.
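For what it's worth, one possible direction (an untested sketch, not something the workflow currently supports): make the input to the calling rules conditional on the config, so they consume the deduplicated BAMs directly when no known-variants file is configured and BQSR is skipped. The helper name and result paths below are hypothetical, not the workflow's actual layout.

def get_calling_input(wildcards):
    # hypothetical helper: skip BQSR when config has no known-variants entry
    if config["ref"].get("known-variants"):
        # known variants configured: use the recalibrated BAM
        return f"results/recal/{wildcards.sample}-{wildcards.unit}.bam"
    # no known variants: fall back to the deduplicated BAM
    return f"results/dedup/{wildcards.sample}-{wildcards.unit}.bam"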

Error in rule snpeff

I'm having trouble with the annotation rule snpeff and have attached snpeff.log. I'm confused as to why a connection to SourceForge is necessary; the link in the log file is valid, but I'm not sure why it isn't downloading.

Can't set java_opts for rule recalibrate_base_qualities

Hey everyone,

I'm trying to allocate more memory to rule recalibrate_base_qualities since it's taking ages. I tried to allocate more memory and threads through java_opts, but the rule fails. I can't provide an error message: I'm running on a cluster, and the top-level log says to look in the log files of the involved rules, but that log file is empty.

This is my code for the rule:

rule recalibrate_base_qualities:
    (unchanged)
    params:
        (unchanged)
        java_opts="-Xmx4G -XX:ParallelGCThreads=8", # also tried just: java_opts="-Xmx4G",
    threads: 8
    resources:
        mem_mb=32768,
        time="12:00:00", # just for cluster scheduling
    wrapper:
        "0.74.0/bio/gatk/baserecalibrator"

Can anyone help me spot the error?

Thanks in advance
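One hedged workaround (not an official fix): wrapper releases around 0.74.0 may simply not forward a java_opts param, so you can replace the wrapper with a direct shell call and pass the JVM options through GATK4's standard --java-options flag. All paths and the known-sites file below are placeholders:

rule recalibrate_base_qualities:
    input:
        bam="results/dedup/{sample}.bam",            # placeholder path
        ref="resources/genome.fasta",                # placeholder path
        known="resources/variation.vcf.gz",          # placeholder path
    output:
        recal_table="results/recal/{sample}.grp",
    log:
        "logs/gatk/bqsr/{sample}.log",
    threads: 8
    resources:
        mem_mb=32768,
    shell:
        "gatk --java-options '-Xmx4G -XX:ParallelGCThreads=8' BaseRecalibrator "
        "-R {input.ref} -I {input.bam} --known-sites {input.known} "
        "-O {output.recal_table} 2> {log}"

Note that GATK4's BaseRecalibrator is essentially single-threaded, so extra GC threads won't speed it up much; scattering over intervals is usually the more effective lever.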

snpEff with a custom genome database

I wish to apply this workflow to a non-model plant genome for which no reference is readily available in the snpEff database. Is there a way to point snpEff to a GenBank accession or, alternatively, to a custom snpEff annotation database?
Thanks for your advice
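snpEff does support custom databases built from a GenBank file; its documented build procedure is roughly the following (the genome name is a placeholder, and the workflow's snpeff rule would then need to point at this local database instead of downloading one):

# 1. register the genome in snpEff.config:
#        myplant.genome : My_nonmodel_plant
# 2. place the GenBank annotation at <snpeff_data_dir>/myplant/genes.gbk
# 3. build the database:
snpEff build -genbank -v myplant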

VQSR not fully implemented

It seems that the VQSR step (GATK VariantRecalibrator) is not fully implemented here. That rule only produces the model built by VariantRecalibrator; it does not run ApplyVQSR to actually perform the filtering.

That means the output files produced by that rule, all.{vartype}.recalibrated.vcf.gz, are in fact the recal files consumed by ApplyVQSR, not the filtered VCFs we are looking for. Hence, an additional rule that runs the gatk/applyvqsr wrapper needs to be implemented.

In case it helps, I've implemented this in my pipeline, which also allows all the resource files needed by VariantRecalibrator to be provided via the config file, so that users don't have to edit the Snakemake files.
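For reference, a minimal sketch of such a rule. The input keys, paths, vartype values, tranche threshold, and wrapper version are all assumptions patterned on the other GATK wrappers; check them against the wrapper release you use:

rule apply_vqsr:
    input:
        vcf="genotyped/all.vcf.gz",
        recal="filtered/all.{vartype}.recal.vcf.gz",
        tranches="filtered/all.{vartype}.tranches",
        ref="resources/genome.fasta",
    output:
        vcf="filtered/all.{vartype}.recalibrated.vcf.gz",
    log:
        "logs/gatk/applyvqsr/{vartype}.log",
    params:
        # assumed vartype wildcard values; adapt to the workflow's naming
        mode=lambda wc: "SNP" if wc.vartype == "snvs" else "INDEL",
        extra="--truth-sensitivity-filter-level 99.0",  # placeholder threshold
    wrapper:
        "0.74.0/bio/gatk/applyvqsr"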

rule plot_stats fails with "OverflowError: value too large to convert to npy_uint32"

Hey everyone,

I am trying to run this pipeline with 144 samples, so the resulting files are quite big. I managed to get it almost to the end, but the last rule (plot_stats) fails with OverflowError: value too large to convert to npy_uint32. I guess I just have too many rows in my calls.tsv.gz to be handled. The complete error log is:

Traceback (most recent call last):
  File "/[PATH]/workflow_var_calling/.snakemake/scripts/tmp10j_ba31.plot-depths.py", line 16, in <module>
    sample_info = calls.loc[:, samples].stack([0, 1]).unstack().reset_index(1, drop=False)
  File "/[PATH]/workflow_var_calling/.snakemake/conda/5e32b1f022a698680d2667be14f8a58a/lib/python3.6/site-packages/pandas/core/series.py", line 2899, in unstack
    return unstack(self, level, fill_value)
  File "/[PATH]/workflow_var_calling/.snakemake/conda/5e32b1f022a698680d2667be14f8a58a/lib/python3.6/site-packages/pandas/core/reshape/reshape.py", line 501, in unstack
    constructor=obj._constructor_expanddim)
  File "/[PATH]/workflow_var_calling/.snakemake/conda/5e32b1f022a698680d2667be14f8a58a/lib/python3.6/site-packages/pandas/core/reshape/reshape.py", line 116, in __init__
    self.index = index.remove_unused_levels()
  File "/[PATH]/workflow_var_calling/.snakemake/conda/5e32b1f022a698680d2667be14f8a58a/lib/python3.6/site-packages/pandas/core/indexes/multi.py", line 1494, in remove_unused_levels
    uniques = algos.unique(lab)
  File "/[PATH]/workflow_var_calling/.snakemake/conda/5e32b1f022a698680d2667be14f8a58a/lib/python3.6/site-packages/pandas/core/algorithms.py", line 367, in unique
    table = htable(len(values))
  File "pandas/_libs/hashtable_class_helper.pxi", line 937, in pandas._libs.hashtable.Int64HashTable.__cinit__
OverflowError: value too large to convert to npy_uint32

Any ideas?
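Not a real fix, but a hedged workaround sketch until the plotting script handles tables this large: thin the calls table before plotting so the stacked MultiIndex stays below the 32-bit limit. The path, the sampling fraction, and the two header rows are assumptions inferred from the failing script:

import pandas as pd

# read the calls table with the two-level column header the script expects
calls = pd.read_table("results/tables/calls.tsv.gz", header=[0, 1])
# keep a random 10% of rows to shrink the index below the uint32 limit
calls = calls.sample(frac=0.1, random_state=42)
calls.to_csv("results/tables/calls.downsampled.tsv.gz", sep="\t", index=False)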

Errors in two rules terminating run

Hello,

I'm getting the following errors that result in a downstream termination of the pipeline.

Error in rule get_genome:
    jobid: 4
    output: resources/genome.fasta
    log: logs/get-genome.log (check log file(s) for error message)
    conda-env: /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/conda/6b8a95bbd3f6e08f9691577edcf026d4

RuleException:
CalledProcessError in line 13 of /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/workflow/rules/ref.smk:
Command 'source /localhome/darwin/anaconda3/bin/activate '/localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/conda/6b8a95bbd3f6e08f9691577edcf026d4'; /localhome/darwin/anaconda3/envs/snakemake/bin/python3.9 /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/scripts/tmpcejz7p0d.wrapper.py' returned non-zero exit status 1.
  File "/localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/workflow/rules/ref.smk", line 13, in __rule_get_genome
  File "/localhome/darwin/anaconda3/envs/snakemake/lib/python3.9/concurrent/futures/thread.py", line 52, in run
Traceback (most recent call last):
  File "/localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/scripts/tmp72uissx8.wrapper.py", line 17, in <module>
    "vep_install --AUTO cf "
  File "/localhome/darwin/anaconda3/envs/snakemake/lib/python3.9/site-packages/snakemake/shell.py", line 263, in __new__
    raise sp.CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'vep_install --AUTO cf --SPECIES homo_sapiens --ASSEMBLY GRCh38 --VERSION 98 --CACHEDIR resources/vep/cache --CONVERT --NO_UPDATE   > logs/vep/cache.log 2>&1' returned non-zero exit status 22.
[Fri Aug 13 12:46:46 2021]
Error in rule get_vep_cache:
    jobid: 10
    output: resources/vep/cache
    log: logs/vep/cache.log (check log file(s) for error message)
    conda-env: /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/conda/5fe2b2f099fb33a1161e354e3631910c

RuleException:
CalledProcessError in line 112 of /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/workflow/rules/ref.smk:
Command 'source /localhome/darwin/anaconda3/bin/activate '/localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/conda/5fe2b2f099fb33a1161e354e3631910c'; python /localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/.snakemake/scripts/tmp72uissx8.wrapper.py' returned non-zero exit status 1.
  File "/localhome/darwin/cbuckner/dna-seq-gatk-variant-calling/workflow/rules/ref.smk", line 112, in __rule_get_vep_cache
  File "/localhome/darwin/anaconda3/envs/snakemake/lib/python3.9/concurrent/futures/thread.py", line 52, in run
Removing output files of failed job get_vep_cache since they might be corrupted:
resources/vep/cache

Error tokenizing data. C error: Expected 5 fields in line 4, saw 6

I tried to install the pipeline as recommended by the Snakemake workflow catalog procedure.
I created a "resources" directory containing the reads. The project directory contains the config, resources, and workflow directories.

I am using a Mac M1 Pro and miniconda3 with conda v22.11.1. I created an osx-64 environment (with the CONDA_SUBDIR=osx-64 setting) containing snakemake 7.22 and snakedeploy.

When I run the command snakemake --cores all --use-conda --conda-frontend conda, I get the following error:

ParserError in file https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/rules/common.smk, line 23:
Error tokenizing data. C error: Expected 5 fields in line 4, saw 6

File "/Users/xx/snakemake-models/shave3/workflow/Snakefile", line 19, in
File "https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/Snakefile", line 1, in
File "https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/rules/common.smk", line 23, in
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/util/_decorators.py", line 211, in wrapper
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/util/_decorators.py", line 331, in wrapper
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1289, in read_table
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 611, in _read
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/io/parsers/readers.py", line 1778, in read
File "/Users/xx/miniconda3/envs/smk/lib/python3.11/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 230, in read
File "pandas/_libs/parsers.pyx", line 808, in pandas._libs.parsers.TextReader.read_low_memory
File "pandas/_libs/parsers.pyx", line 866, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 852, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 1973, in pandas._libs.parsers.raise_parser_error

TypeError in calling.smk rule merge_variants; the bwa mapping stops after creating the index files

Hello,

I am running the workflow on a small test sample set (n=6) and get the following error:

InputFunctionException in rule merge_variants in file https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/rules/calling.smk, line 64:
Error:
TypeError: read_table() got an unexpected keyword argument 'squeeze'
Wildcards:

Traceback:
File "https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/rules/calling.smk", line 67, in
File "https://raw.githubusercontent.com/snakemake-workflows/dna-seq-gatk-variant-calling/v2.1.1/workflow/rules/common.smk", line 44, in get_contigs`

This appears while the index files are being generated (genome.dict, genome.fasta.amb, genome.fasta.ann, genome.fasta.fai, genome.fasta.pac); the run stops after the .bwt file is completed and does not proceed to the mapping step.

I cannot attach the config files because their formats are not supported, so I am attaching the .txt versions of them.

Could you please help me in solving this issue?

config.txt
samples.txt
units.txt
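This one is a pandas-version problem rather than a data problem: the squeeze= keyword of read_table was deprecated in pandas 1.4 and removed in pandas 2.0, and the v2.1.1 common.smk still passes it. Pinning pandas below 2.0 in the Snakemake environment, or upgrading to a workflow release that no longer uses the keyword, avoids it. The modern equivalent of the old call looks like this (the path is a placeholder):

import pandas as pd

path = "config/units.tsv"  # placeholder
# pandas < 2.0 allowed: pd.read_table(path, squeeze=True)
# pandas >= 2.0 removed the keyword; the equivalent is:
df = pd.read_table(path).squeeze("columns")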

Failed to open environment file

Hello,

I cloned the dna-seq-gatk-variant-calling workflow on a cluster and I am getting the following error when I try to launch snakemake:

Building DAG of jobs...
WorkflowError:
Failed to open environment file https://github.com/snakemake/snakemake-wrappers/raw/0.30.0/bio/trimmomatic/pe/environment.yaml:
URLError: <urlopen error [Errno 110] Connection timed out>

I took the following steps:

  1. Cloned the repo
  2. Filled in the *.tsv files and the config file with the correct info for the run
  3. Tested using snakemake -np; everything was OK at this step.
  4. Prepared the cluster.json file
  5. Launched snakemake:
snakemake --use-conda -k -p -j 999 --cluster-config cluster.json --cluster "sbatch -A {cluster.account} -t {cluster.time} --mem {cluster.mem} --cpus-per-task {cluster.cpus-per-task}"

The jobs are submitted correctly to the cluster, but they crash after a few minutes. In the logs, I find the error message I pasted at the beginning of this issue.

I was under the impression that the wrappers were downloaded before launching the jobs, but it does not appear to be the case.

I then tried to clone the snakemake-wrappers repo and point to it directly with the --wrapper-prefix parameter, using the git+file://path/to/your/local/clone@ form as mentioned in the docs. I get the following error:

ValueError in line 12 of /lustre04/scratch/jolybeau/LO/LO_genotypage/dna-seq-gatk-variant-calling/rules/mapping.smk:
too many values to unpack (expected 2)
  File "./dna-seq-gatk-variant-calling/Snakefile", line 16, in <module>
  File "./dna-seq-gatk-variant-calling/rules/mapping.smk", line 12, in <module>

I'm not sure what I am doing wrong. Could you please help me with this problem?

Charles.

fastqc not executed for specified inputs

Hi, I recently ran this pipeline for a job. After running it, I removed the fastqc output, and now I can't reproduce it (qc/fastqc/A-1.html, for example).

After some experimenting, I found that this is because only the first line is recognized as an input by Snakemake. So I changed the lines from

expand(["qc/samtools-stats/{u.sample}-{u.unit}.txt",
"qc/fastqc/{u.sample}-{u.unit}.zip",
"qc/dedup/{u.sample}-{u.unit}.metrics.txt"],
u=units.itertuples())

to

    expand("qc/samtools-stats/{u.sample}-{u.unit}.txt", u=units.itertuples())
    expand("qc/fastqc/{u.sample}-{u.unit}.zip", u=units.itertuples())
    expand("qc/dedup/{u.sample}-{u.unit}.metrics.txt", u=units.itertuples())

and now I get the expected results.
Is this a bug in the expand module? Please explain what happened.

Direct output to directory of interest

I think I'm missing something obvious here. Running this pipeline results in output files/directories being created in the same directory as the Snakefile. How do I direct them instead to a directory of my choice without refactoring the code?
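Two standard Snakemake knobs do this without touching the rules (paths are placeholders): the --directory/-d command-line flag, or a workdir directive in the Snakefile:

# option 1: choose the output location at invocation time
#     snakemake --directory /path/to/results --use-conda --cores all

# option 2: fix the working directory in the Snakefile itself
workdir: "/path/to/results"

All relative input and output paths in the rules are then resolved against that directory.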

Non-ASCII character in .py, no encoding declared

When running snakemake --use-conda from the root of the project directory, I get the following error (line 43 of the attached snakemake.log):

SyntaxError: Non-ASCII character '\xc3' in file /spin1/users/chambersmj/snakemake_test/.snakemake/scripts/tmpmfjaxtfh.wrapper.py on line 5, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details

I thought of adding # coding=utf-8 to the head of .snakemake/scripts/tmpmfjaxtfh.wrapper.py, but there are no scripts in the .snakemake/scripts directory.

Any suggestions to resolve the Python source file encoding would be greatly appreciated 😊

Note: I've also attached the default config.yaml; the project tree is below, and the data directory was cloned from snakemake-workflows/ngs-test-data.

snakemake_test
├── config.txt
├── config.yaml
├── data
├── envs
├── LICENSE
├── logs
├── mapped
├── README.md
├── report
├── rules
├── samples.tsv
├── schemas
├── scripts
├── Snakefile
├── snakemake.log
├── trimmed
└── units.tsv

HaplotypeCaller with intervals runs slowly

Hi,
I've recently started using Snakemake for my variant calling pipeline; we use Slurm at our facility.
I've observed that jobs launched via Snakemake run much slower than when I run them manually. Is there a reason for this? My Snakefile content is here:

SAMPLES=["K350_", "K443_", "K450_", "K452_", "K469_", "K471_", "K472_", "K473_", "K484_", "K485_"]
intervals=["0000", "0001", "0002", "0003", "0004", "0005", "0006", "0007", "0008", "0009", "0010", "0011", "0012", "0013", "0014", "0015", "0016", "0017", "0018", "0019", "0020", "0021", "0022", "0023"]
REF = '/data/references_deprecated/cat/felCat9/bwaIndex/felCat9.fa'

def get_intervals(wildcards):
    interval=wildcards.interval
    intervalFile="interval-files-folder/" + interval + "-scattered.interval_list"
    return intervalFile

rule all:
    input:
        expand('data/IntervalGvcf/{sample}.{interval}.g.vcf.gz',sample=SAMPLES,interval=intervals)

rule GVCF:
    input:
        ref = REF,
        bam = 'data/dedup_bams/{sample}.dedup.bam'
    output: 
        vcf='data/IntervalGvcf/{sample}.{interval}.g.vcf.gz',
        index='data/IntervalGvcf/{sample}.{interval}.g.vcf.gz.tbi'
    log:
        'logs/intervalGVCF/{sample}_{interval}.log'
    params:
         intr=get_intervals
    threads: 8
    shell:
        '''
        module load vital-it;
        module add UHTS/Analysis/GenomeAnalysisTK/4.1.3.0;
        java -Xmx4G -Djava.io.tmpdir=/scratch/tmp_vj/  -XX:+UseParallelGC -XX:ParallelGCThreads=1 -jar $GATK_PATH/bin/GenomeAnalysisTK.jar HaplotypeCaller  -R {REF}  -I {input.bam} -pairHMM AVX_LOGLESS_CACHING --native-pair-hmm-threads 8 -L {params.intr} -O {output.vcf} -ERC GVCF -stand-call-conf 10 2> {log}
        '''

REMOVE_DUPLICATES is false according to logs in rule mark_duplicates

Hey everyone,

I just had a look at the logs and realized that REMOVE_DUPLICATES is set to false there, even though it's set to true in the config. I changed neither the rule mark_duplicates nor this part of config.yaml:

picard:
  MarkDuplicates: "REMOVE_DUPLICATES=true"

This is part of the log file:

[...]
MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP=50000 MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=8000 SORTING_COLLECTION_SIZE_RATIO=0.25 TAG_DUPLICATE_SET_MEMBERS=false REMOVE_SEQUENCING_DUPLICATES=false TAGGING_POLICY=DontTag CLEAR_DT=true DUPLEX_UMI=false ADD_PG_TAG_TO_READS=true
REMOVE_DUPLICATES=false
ASSUME_SORTED=false DUPLICATE_SCORING_STRATEGY=SUM_OF_BASE_QUALITIES PROGRAM_RECORD_ID=MarkDuplicates PROGRAM_GROUP_NAME=MarkDuplicates READ_NAME_REGEX=<optimized capture of last three ':' separated fields as numeric values> OPTICAL_DUPLICATE_PIXEL_DISTANCE=100 MAX_OPTICAL_DUPLICATE_SET_SIZE=300000 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
[...]

Is this intended?
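If the config string never reaches the Picard command line, the usual cause is that the rule doesn't forward it: the wrapper only passes what it receives via params. A hedged sketch of the wiring, where the paths, wrapper version, and exact config key are assumptions to adapt to your copy:

rule mark_duplicates:
    input:
        "results/sorted/{sample}.bam",
    output:
        bam="results/dedup/{sample}.bam",
        metrics="results/qc/dedup/{sample}.metrics.txt",
    log:
        "logs/picard/dedup/{sample}.log",
    params:
        # forward the config string so REMOVE_DUPLICATES=true reaches Picard
        extra=config["params"]["picard"]["MarkDuplicates"],
    wrapper:
        "0.74.0/bio/picard/markduplicates"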

Missing Python dependency in rule plot_stats

I got the following error running this workflow:

[Thu Oct 21 10:30:39 2021]
rule plot_stats:
    input: results/tables/calls.tsv.gz
    output: results/plots/depths.svg, results/plots/allele-freqs.svg
    log: logs/plot-stats.log
    jobid: 24
    resources: tmpdir=/tmp

Activating conda environment: /home/pedro/projects/sandbox/snake-gatk/.snakemake/conda/c8bc479b3ecf77c3e954eb4b0008039b
Activating conda environment: /home/pedro/projects/sandbox/snake-gatk/.snakemake/conda/c8bc479b3ecf77c3e954eb4b0008039b
[Thu Oct 21 10:30:40 2021]
Error in rule plot_stats:
    jobid: 24
    output: results/plots/depths.svg, results/plots/allele-freqs.svg
    log: logs/plot-stats.log (check log file(s) for error message)
    conda-env: /home/pedro/projects/sandbox/snake-gatk/.snakemake/conda/c8bc479b3ecf77c3e954eb4b0008039b

The log shows the following message:

Traceback (most recent call last):
  File "/home/pedro/projects/sandbox/snake-gatk/.snakemake/scripts/tmpjsez2vr4.plot-depths.py", line 8, in <module>
    import common
ModuleNotFoundError: No module named 'common'

Help with dna-seq-gatk-variant-calling

Hi!

I have installed the pipeline on an HPC system and am trying to run it. I updated pip and am getting the error below. Any help will be appreciated!

Thank you
Mousumi

Error1:
-bash-4.2$ snakemake --cores all --use-conda
ImportError in file /home/sahum2/DNA-seq/workflow/Snakefile, line 19:
You are trying to use the http functionality of smart_open
but you do not have the correct http dependencies installed. Try:

pip install smart_open[http]

File "/home/sahum2/DNA-seq/workflow/Snakefile", line 19, in
File "/cm/shared/apps/python/3.7.2/lib/python3.7/site-packages/smart_open/smart_open_lib.py", line 81, in parse_uri
File "/cm/shared/apps/python/3.7.2/lib/python3.7/site-packages/smart_open/transport.py", line 92, in get_transport


-bash-4.2$ pip install smart_open[http]
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: smart_open[http] in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (6.4.0)
Requirement already satisfied: requests in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (from smart_open[http]) (2.31.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (from requests->smart_open[http]) (3.3.0)
Requirement already satisfied: idna<4,>=2.5 in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (from requests->smart_open[http]) (2.8)
Requirement already satisfied: urllib3<3,>=1.21.1 in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (from requests->smart_open[http]) (2.0.6)
Requirement already satisfied: certifi>=2017.4.17 in /cm/shared/apps/python/3.7.2/lib/python3.7/site-packages (from requests->smart_open[http]) (2019.3.9)
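For what it's worth, this looks like a partially installed smart_open in the system-wide Python 3.7 module rather than a workflow problem; the usual route is to run Snakemake from its own conda environment instead of the cluster's module Python:

# create and activate a dedicated environment with Snakemake from Bioconda
conda create -n snakemake -c conda-forge -c bioconda snakemake
conda activate snakemake
snakemake --cores all --use-conda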

README report link served through RawGit will stop working

https://rawgit.com/

RawGit has reached the end of its useful life
October 8, 2018
RawGit is now in a sunset phase and will soon shut down. It's been a fun five years, but all things must end.

GitHub repositories that served content through RawGit within the last month will continue to be served until at least October of 2019. URLs for other repositories are no longer being served.

If you're currently using RawGit, please stop using it as soon as you can.
