
vrs's Issues

reimplement flexible id assignment

reimplement id assignment to enable ids by 1) vmc digest, 2) uuid, 3) serial #.

Then, use this to regenerate examples that are easier on the eyes.

Question about CNVs with additional alterations

Conversation with @ratsch today described a requirement for consideration under our current model:

Given a Copy Number Gain (n=5), how to represent an alteration that occurs on only some of the copies (e.g. a SNV found on 2 of the copies)? One proposed way is to represent a CNV as a collection of haplotypes, where each haplotype represents either an altered or unaltered copy.

A second approach could be to optionally specify a genotype within a CNVState to represent this complexity.

RFC: Should VMC use Canonical JSON or custom serialization spec?

Background

Objects must be serialized in order to generate a computed identifier. At the time the VMC spec was originally written, there was no widely implemented canonical serialization method. Therefore, a custom serialization method is currently defined in the VMC spec.

In the last two years, the Canonical JSON quasi-spec has gained a foothold. Implementations exist in at least Python, Rust, Go, JavaScript, and Java. (I haven't tested them.) Canonical JSON was created for exactly our purposes: serializing objects in order to compute digests as computed identifiers.

The JSON Canonicalization Scheme (JCS) appears to be a more sophisticated standard, under jurisdiction of a standards organization (IETF), but apparently without existing implementations.

Question

Moving VMC to a canonical JSON serialization could provide significant benefits, and it may carry some risks. Please comment on whether and when we should do so.
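For illustration, one common canonicalization convention (sorted keys, no insignificant whitespace, UTF-8) can be sketched in Python. This is a sketch only; it is not guaranteed to be byte-for-byte identical to the Canonical JSON quasi-spec or JCS:

```python
import json

def canonicalize(obj) -> bytes:
    # Sort keys and strip whitespace so that equal objects always
    # serialize to the same bytes before digesting.
    return json.dumps(obj, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode("utf-8")

canonicalize({"b": 1, "a": 2})  # b'{"a":2,"b":1}'
```

The point is that two semantically equal objects produce identical bytes, which is the property a digest-based identifier scheme needs.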

write language independent tests

The goal of this issue is to make a set of tests that can be used to validate implementations of the VMC specification. After an initial set of ideas, this issue has been reformulated to provide the following tests:

Function tests

  • truncated_digest: Provide digests for short strings (e.g., '', 'vmc', 'VMC')
  • sequence id lookup: Provide tests for external accession → VMC digest (e.g., NC_000019.10 → VMC:GS_IIB53T8CNeJJdUqzn9V_JnRtQadwWCbl)
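A sketch of the truncated_digest function under test, assuming the VMC convention of a SHA-512 digest truncated to 24 bytes and base64url-encoded (24 bytes encodes to exactly 32 characters with no padding):

```python
import base64
import hashlib

def truncated_digest(blob: bytes, n_bytes: int = 24) -> str:
    # SHA-512 over the serialized blob, truncated, then base64url-encoded.
    digest = hashlib.sha512(blob).digest()
    return base64.urlsafe_b64encode(digest[:n_bytes]).decode("ascii")

truncated_digest(b"")  # 'z4PhNX7vuL3xVChQ1m2AB9Yg5AULVxXc'
```

Fixed digests for short strings such as '', 'vmc', and 'VMC' make good language-independent test vectors because they require no sequence data.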

Object generation, serialization, computed ids

Provide serialization tests (and digests for identifiable objects) for primitive objects generated from yaml/json data:

  • Interval
  • Location
  • Allele
  • Haplotype
  • Genotype

Conversion tests

  • Tests of VCF (ref, alt) to VMC
  • HGVS (strings) to VMC

consolidate VMC slides and rationale

Topics

  • all sequence types
  • progressively add support for types of variation
  • eliminate man-made ambiguity (e.g., different sequence names for the same sequence)
  • represent biological ambiguity (e.g.,
  • start with high precision in order to enable fuzzy relationships

Digest Generation example using HGVS.

@reece I have a question regarding your HGVS VMC implementation found here. I'm unable to replicate the HGVS IDs you generate, and I wanted to confirm/correct the implementation.

Based on the HGVS expression "NC_000019.10:g.44908684C>T" I used the following reference fasta: NC_000019.10

Sequence_ID:
VMC:GS_IIB53T8CNeJJdUqzn9V_JnRtQadwWCbl

And created the following Location_ID:
Digest(<Location|VMC:GS_IIB53T8CNeJJdUqzn9V_JnRtQadwWCbl|44908683>)
VMC:GL_0KNCxpnWXoM4fIokZexmx_sEjPBsCsTQ

And created the following Allele_ID:
Digest(<Allele|VMC:GL_0KNCxpnWXoM4fIokZexmx_sEjPBsCsTQ|T>)
VMC:GA_2FZTmPgPLK2z_sr5HAj8qRZEDQ2TlhA

If you could review this and let me know where we differ, it would be appreciated.

overlaps in Interval

The following code in models.py would return False for two zero-length intervals, e.g., Interval a(42,42) and Interval b(42,42).

Is this desired behavior?

def overlaps(self, other):
    assert isinstance(other, Interval)
    return self.end > other.start and other.end > self.start
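A standalone reproduction of the behavior (a sketch, not the actual models.py class):

```python
class Interval:
    def __init__(self, start, end):
        self.start, self.end = start, end

    def overlaps(self, other):
        assert isinstance(other, Interval)
        # Strict inequalities: an empty interval can never satisfy both.
        return self.end > other.start and other.end > self.start

a, b = Interval(42, 42), Interval(42, 42)
a.overlaps(b)  # False: a zero-length interval never overlaps, even with itself
```

If coincident zero-length intervals should count as overlapping, one possible fix is to special-case empty intervals (start == end) and test containment instead, though that changes the semantics for adjacent non-empty intervals.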

Add reference agreement example to docs

There is ambiguity about how reference agreement at a "small" variant might be represented in the current schema.

One possibility is an Allele with a SequenceState that contains the reference base.

Another is a Haplotype with reference location equal to the above Allele location and an empty set of member Alleles.

This ambiguity might be an unavoidable consequence of a specific edge case, in which case the spec needs to provide guidance.

Related example (from Javi): Given a VCF row with 12 100 A T 1/0, how would we write this Genotype in VR? What if from a gVCF with asserted ref agree in this region?

Should we recommend CURIEs for use as identifiers?

Background

VMC currently uses Identifier objects that consist of a namespace and accession. These can be (nearly) equivalently represented as JSON (e.g., {"accession": "GA_0123abcd", "namespace": "VMC"}) and as a string (e.g., "VMC:GA_0123abcd"). A VMC accession is constructed as prefix + "_" + digest, where prefix is obtained from a mapping from types to prefixes, such as "VMC:GA_0123abcd" for Alleles, "VMC:GL_0123abcd" for Locations, etc.

The VMC string format is inspired by long-standing conventions of colon-separated namespace:key pairs, and is consistent with the W3C CURIE syntax and semantics. It is anticipated that future VMC specs will adopt CURIEs as the Identifier format rather than define a custom syntax.

Briefly, a CURIE adds additional meaning to the current namespace:accession scheme (prefix:reference in CURIE parlance): prefix is an alias for a base IRI/URI, and reference is a (possibly) relative IRI/URI. A fully-qualified IRI/URI is constructed by concatenating the base IRI/URI denoted by the prefix with the reference. In other words, CURIEs serve to provide a convenient shorthand for an object.
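A minimal sketch of CURIE expansion. The prefix map and base IRI here are hypothetical; a real resolver would consult a shared registry:

```python
# Hypothetical prefix map; the base IRI is illustrative only.
PREFIX_MAP = {"VMC": "https://example.org/vmc/"}

def expand_curie(curie: str) -> str:
    # Split on the first colon only, since references may themselves contain colons.
    prefix, reference = curie.split(":", 1)
    return PREFIX_MAP[prefix] + reference

expand_curie("VMC:GA_0123abcd")  # 'https://example.org/vmc/GA_0123abcd'
```

Splitting on the first colon keeps the string form and the {namespace, accession} object form losslessly interconvertible.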

Question

What format should the VMC Identifier have?

Write fusion requirements document

The goal for this issue is to write a single document with fusion use cases and a proposed model.

This should also serve as an introductory exercise for handling ambiguous representations of fusions (e.g. only one fusion partner specified or only gene names specified) alongside particular representations of fusions (defined transcript regions present / absent).

Consider splitting 'Haplotype' into two subtypes

This is a proposal to consider splitting 'Haplotype' into two subtypes - one to represent the unlocated haplotype (which is a discontinuous set of alleles), and the other to represent the located version (which is a continuous extent of sequence that contains some variation as part).

This idea that a Haplotype can be these two very different things, depending on whether a Location is provided, seems to be a point of confusion for many. In my opinion, explicitly separating these into different classes would make this distinction easier to explain and understand, and would allow the schema to specify haplotype creation more precisely.
We already use polymorphism elsewhere, so this could be implemented with no technical barriers.

Another proposed benefit is that this would make it possible to map to classes in variation-related ontologies (e.g. SO, GENO, MSO). Each of these ontologies makes a high-level distinction between sequences that are continuous features and things that are discontinuous sets, and thus none holds a class that could map to the current VMC haplotype definition. But if Haplotype is split as proposed, then mappings could be made. For example, the unlocated VMC haplotype would map directly to a GENO:haplotype (which GENO defines as a set of discontinuous alterations in cis), and the located VMC haplotype would map to a GENO:complex allele, as GENO, like the ClinVar allele model, distinguishes 'simple' alleles that vary along their entire extent from 'complex' alleles that are defined extents of sequence containing variation(s) as a proper part.

The ability to map to ontology terms would let us point to an ontology-based, high-level, biologically-rooted conceptual model of the VMC domain of discourse. Perhaps more importantly, it would make it easy for people to understand how VMC concepts relate to variation concepts in other models that are also mapped in GENO, as GENO would act as a Rosetta Stone enabling this translation. This would be a big win w.r.t. interoperability with models in other communities (e.g. model organism/AGR) who are at this time also working to define variation models to support their data, and considering use of/alignment with GENO concepts.

@reece @larrybabb @ahwagner curious as to your thoughts here.

Representing repeating sequences

From Eric Moyer to Everyone: (12:44 PM)

A discussion I had last week made me realize that, rather than a "number of repeats", it would be better to give the repeating sequence (right/left shifted) and then the resulting length. This allows for partial repeats, e.g. CGTA:6 would represent CGTACG.

From Eric Moyer to Everyone: (12:55 PM)

This was with Tim Hoeffer, Minghong Ward, Lon Phan, and Brad Holmes. (We were discussing how to distinguish between variants that should be in dbSNP and dbVar.) But this was my take-away, not theirs. I saw that when the repeated region gets longer, integer repeats are harder to get by chance and the limitation to integers seems more arbitrary ... and it misses parts of the sequence. If I had CGTACGTACGT ... and I call it a repeat of 2, I lose 3 nt. This gets worse as repeated sequence increases in length. Using length is the most elegant way to represent those partial repeats.
Unfortunately I need to leave for my next meeting.
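The CGTA:6 idea above can be sketched as a unit-plus-length expansion. This is a hedged sketch; expand_repeat is a hypothetical helper, not part of any spec:

```python
def expand_repeat(unit: str, length: int) -> str:
    # Tile the repeat unit, then truncate to the total length,
    # which naturally allows partial repeats at the end.
    reps = -(-length // len(unit))  # ceiling division
    return (unit * reps)[:length]

expand_repeat("CGTA", 6)   # 'CGTACG'
expand_repeat("CGTA", 11)  # 'CGTACGTACGT' (a "repeat of 2" would lose 3 nt)
```

Representing the total length rather than an integer repeat count loses no sequence, which is exactly the concern raised for CGTACGTACGT above.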

Questions about Feature-Based Locations

Questions for Reece about the idea of Feature Based Locations (e.g. Gene Locations), as discussed on the 4-24-19 VA/VR call. I think this concept will be relevant for the Rule-Based Variation modeling work Larry, Alex, and I have been discussing. And more generally, I think getting on the same page here could be a catalyst for aligning our understanding and models more generally.

The questions below are based on this slide, which I think is now out-of-date, but raises what I think will be informative questions regarding the evolution from this to the current model discussed in the 4-24-19 call. Lots to unpack here, so we can discuss on a call/in person if that is easiest.

Questions/Comments:

  1. I think what are called Positions on this slide were formerly called Intervals, and you are now calling Regions. Is this correct?
  2. The Map Location type in the slide seems to include a specific assembly, but on the call today your model did not include this - to allow for a more generic model where tools can be provided to resolve a Map Location to a more precise Sequence Location based on a specific assembly.
  3. Is the Transcript Location type in the slide analogous to Sequence Location (but where the reference sequence is at the transcript level rather than genomic level)? Or is it more analogous to Gene Location (but where the type of feature used as a proxy for a precise SequenceLocation is a Transcript/CDS rather than a Gene)?
    • The fact that the elements in Transcript Location include a transcript id and TranscriptPosition suggest the former. But in this case the 'projection' back to SequenceLocation indicated by the arrows would be a different type of projection than for Gene Location (as for Transcripts you are starting with precise transcript-level coordinates and just mapping these to the genomic level).
  4. Do you imagine the need for other 'feature based location' subtypes based on feature types other than genes (e.g. CDS, Exon, Intron, Promoter, etc . . . ) - to allow reference to the generic location where any of these feature types exists? Seems like the list of such feature types could get long.
  5. Generally, I’d love to hear more about specific requirements/use cases around this level of modeling (Feature-Based Locations) from your perspective. I think you indicated on the call you had some informal models/proposals in this space.

. . . again, no need to respond initially here to all these points. We can review next time we talk and document outcomes here if that is easiest. Just wanted to record/share my thoughts while they are clear in my mind.

Interval of Ranges or Interval of Precise Endpoints or Option 3?

@tnavatar captured 3 approaches to modeling the Location.Interval concept to support both precise and imprecise interval representations.

Option 1: A polymorphic Interval that can be either an interval of 2 exact endpoints defined as integers, or 2 "range-like" endpoints defined by a special/new data type of 2 exact endpoints.

Option 2: Refactor/modify Interval to be a set of 2 "range-like" endpoints of some new data type that is itself 2 endpoints defined as integers. This option would mean that all "precise" representations of Intervals would potentially need to duplicate the 2 range attributes (start/end, from/to, min/max, etc). The new, explicitly represented method for determining endpoint precision is thus exact integer matching of the range-like data of a given Interval endpoint. The drawback is the increase in the size of the message representation of a variant (especially for large sets of variants). The benefit is a single structure and definition for both precise and imprecise variants/alleles/locations, on which functionality could be written to compare, contrast, and handle these "different" things when useful, as well as to separate and reduce them to more commonly considered or classical representations.

Option 3: Keep the precise Interval of 2 endpoint integers and use annotations to somehow express imprecision.
@tnavatar needs to expand on this notion as it is not clear to me at this time.

Which option are folks in favor of or against? Are there other options that can be raised?

CNV, STRs, somatic Var Rep Group concept needed?

CNVs, microsatellites, and a variety of somatic variant representations have given rise to the notion of defining a variant grouping that is a set of variant instances (not necessarily equivalent) which can be used for annotations, assertions, interpretations, evidence collection, etc.

In our modeling to date we have intentionally been focusing on the most atomic representations, rightfully so. However, with the advent of the copy number discussion, we have introduced the notion of providing a range for the quantity of copies for copy number gain variants.

All previous examples (afaik) have focused on defining a very specific instance of a variant (i.e. allele, haplotype, genotype). We sort of got into the realm of a "set" or "group" of instances when discussing PGx haplotypes as defined by CPIC/PharmGKB, but we never really resolved the concern.

Question...
To focus on CNVs and microsatellites for now, what does it mean to specify a range of copy numbers (e.g., from 5 to 20, or more than 47)?

Possible answer..
A CNV instance is a specific number of copies of a given region of a chromosome; each specific copy count of that region is an instance of the sequence. So, to specify a "range" of copies is essentially to say that any one of the instances in this range belongs to this group.

For example, if you wanted to specify that a given interpretation is valid for any copy count between 4 and 10 of region 1000 to 2000 on chromosome 1, then any specific instance with between 4 and 10 copies would be covered by that interpretation.
Interpretation 1...
Variant Group: NC_000001.10:1000..2000 (4 to 10 copies)
Pathogenicity: Uncertain Significance

Interpretation 2...
Variant Group: NC_000001.10:1000..2000 (>10 copies)
Condition: Condition X
Pathogenicity: Pathogenic

Case 1 specific finding...
Variant found: NC_000001.10:1000..2000 (6 copies)
Result: interp 1 above matches and the assertion may potentially be used to inform the patient's results.

Case 2 specific finding...
Variant found: NC_000001.10:1000..2000 (20 copies)
Result: interp 2 above matches and the assertion may potentially be used to inform the patient's results.
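The matching logic in the two cases above can be sketched with a hypothetical helper (the function name and signature are illustrative only):

```python
def in_copy_group(copies: int, low: int = None, high: int = None) -> bool:
    # A group's range may be closed (4 to 10) or open-ended (>10, i.e. low=11).
    if low is not None and copies < low:
        return False
    if high is not None and copies > high:
        return False
    return True

in_copy_group(6, low=4, high=10)  # True: case 1 matches interpretation 1
in_copy_group(20, low=11)         # True: case 2 matches interpretation 2 (>10 copies)
```

A specific finding is an instance; the group is the predicate it is tested against, which is the distinction this issue is driving at.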

Hopefully, this highlights the distinction between defining "variants" that are "sets" or "groups" versus "instances", and the need to be able to do both in order to collect knowledge and associate it with actual findings.

This can also be applied to microsatellites, which are short tandem repeats that often get expressed as a range, as in the HTT gene for Huntington's disease; see ClinVar NM_002111.6(HTT):c.52CAG(27_35).

Individual assay findings produce a specific count of the tandem repeats; one then determines whether they fall into the variant group defined by NM_002111.6(HTT):c.52CAG(27_35) or some other group that may have a different interpretation.

As we explore variant representations, let's determine if we need to be separating the notion of atomic, specific, instance representations from group or set representations and provide a clean separation, if so.

Update tests for all schema models

Tests currently lag behind schema. Update tests so that all models are covered with representative examples. Ideally, these will use examples that are consistent with documentation and doctests. The APOE examples are a good choice.

Models, Digest and ApoE example not in agreement

models.py doesn't appear to be the same as what was used for the ApoE Example.ipynb

For example, there is no models.Vmcbundle, and computed_identifier returns different results:

def computed_identifier(self):
    return "GL:" + self.digest()

in the notebook the computed identifier is like this:

"VMC:GL_9Jht-lguk_jnBvG-wLJbjmBw5v_v7rQo"

which is computed from digest.py

def computed_identifier(o):
    """return the VMC computed identifier for the object, as an Identifier"""
    pfx = vmc_model_prefixes[type(o)]
    dig = truncated_digest(o)
    accession = "{pfx}_{dig}".format(pfx=pfx, dig=dig)
    return models.Identifier(namespace=vmc_namespace, accession=accession)

Which is the correct version? I'm assuming the ApoE example is the correct representation?

number of variants supported in a single message/document

Depending on the use case we could think of using VMC to exchange just 1 variant (eg: beacon project), 10s/100s of variants (eg: prioritisation results), 1,000s of variants (eg: panel sequencing or even WES results) or 1,000,000s of variants (eg: WGS results).

There are two limiting factors: (1) size of the VMC payload and (2) computation time.

About (1): it would be great to have some estimates of payload size as a function of the number of variants. What would be a reasonable payload size limit for an exchange message in, for instance, a REST API?

About (2): the most time-consuming processes are likely right alignment and identifier computation. Could we also have some estimates in this respect?
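As a starting point for (1), a back-of-envelope estimate from a hypothetical VMC-style allele record (the field names and ids here are illustrative, not normative):

```python
import json

# Illustrative allele record, using ids from the digest example elsewhere in this thread.
allele = {
    "id": "VMC:GA_2FZTmPgPLK2z_sr5HAj8qRZEDQ2TlhA",
    "location_id": "VMC:GL_0KNCxpnWXoM4fIokZexmx_sEjPBsCsTQ",
    "state": "T",
}
per_variant = len(json.dumps(allele, separators=(",", ":")).encode("utf-8"))
print(f"~{per_variant} bytes/variant; ~{per_variant * 10**6 / 2**20:.0f} MiB per million variants")
```

Real payloads would be larger (locations, intervals, metadata), so this only bounds the id-bearing core of a record from below.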

Explain the problem caused by 'intersecting' expansion sets

Hi all. On the May 6 VR call (and in Hinxton the week before), concerns were raised about scenarios in which a single discrete variant instance (e.g. NP_478102.2:p.Ser73Arg) might appear in more than one variation expansion set in a given corpus of VA data (e.g. ClinVar).

Can someone explain why this creates a "conundrum" - perhaps with examples of specific tasks/use cases where this is problematic, and what the problems are? @larrybabb calling on you first here as you raised this concern most recently - I think from the perspective of mapping ClinVar variation ids to ClinGen allele registry ids.

Thanks, and apologies if this is clear to others and I am just being dense. But I suspect that I (and others) may be thinking about this from different angles/perspectives, or imagining a different workflow for creating, indexing, and querying over variation sets. From where I sit it is not clear how 'intersecting' expansion sets cause problems.

Proposal for limited scope

Added by lbabb on behalf of eric moyer...
...
From: Eric Moyer
Date: Monday, May 14, 2018 at 1:48 PM
Subject: Proposal for limited scope of GA4GH Variant Representation

There are many, many things that different groups want to say about variants. If we support all of them, we will have a standard that is so big that no one will implement it. The standard can grow. But that has to happen after people adopt it. HTML is currently a behemoth. But it started with a document with less than 20 elements and a few pages of descriptive text.

I propose that we serve one market to start with: variant representation in electronic health records.

Why:

  • Benefits a large number of people (people who have genetic information and EHR)
  • This market is highly regulated so the existence of a standard is a feature highly desired by the market.
  • This market touches a lot of areas. If something becomes standard practice there, others will use it too because they need to interact with EHR anyway. This will drive expansion of GA4GH Variant Representation (VR) into other areas.

This could have streamlined our discussion today greatly. For EHR, no one cares that the variant is a translocation. It is just two variants, a deletion from one chromosome and an insertion on another (or on the same one). It doesn’t matter where the other part came from. No one cares whether a variant is an inversion. It is just a change. My patient does not have the history of how the variant came about.

Researchers care about these things. People trying to prevent the mutation in the future care about them. Evolutionary biologists care about them. Population geneticists care about them. But a doctor trying to treat a patient only cares about what is, not the process by which it came to be. A pharmacist prescribing a drug cares what variants a person has, not what sequence they might have had beforehand if the dice of inheritance and environment had rolled differently.

Minimal additional primitives:
This focus is useful in itself as a good way of organizing our standardization. However, I conjecture that with this focus we don’t need to worry about joining structural and precise variants. It simplifies things enough that we can accommodate structural variants into VR by adding as few as three new primitives: imprecise endpoints, complex variants, and imprecise sequence. I don’t make this conjecture very strong because I haven’t been discussing things with the structural variant subcommittee.

And with this focus, the “translocation” or “inversion” or “copy number” could be added as an annotation later. It is an interpretation of the history.

Answering some objections to my minimal primitive idea above:
If someone is worried that large variants will be hard to represent: that is a matter of compression. It could be that we will mandate a compression algorithm that can easily use parts of the reference assembly, or equivalently we could tack on two additional primitives: "use sequence from this precise region" and "then do this".

If someone is worried that these will be hard to read, that is the job of the software presenting the variant, which should present long variants in a compressed and abstracted manner.

Variation, Variation Profile, and Genotypes

One of the ongoing discussions of the way we represent variation is the notion of the variation profile. Recently, this notion has become conflated with the idea of a complex/compound variation, though its original intent (still captured in the original model doc) is that it could represent one or multiple variants that collectively link to an annotation.

In our current (v2) model proposal (from this lucidchart document) this is instead treated as a subclass of Variation to capture compound variation, and is seen as a parallel type (as well as a container type for) rule-based and precise variation. Precise variation, however, can contain VMC genotypes, which ostensibly could be multiple variations bundled together but not captured as a compound variant "type". This issue is raised in point c here.

Before we can fit variation profile into the model, we need to decide if a variation in our model is a single unit of change (which in some instances, e.g. CNVs, could mean multiple genomic changes, but is commonly evaluated and described as a single variation) or if it can represent multiple changes together (i.e. genotypes)?

Importantly, there's room for the VMC's notion of haplotypes and genotypes in either model–a genotype could be a type of precise variation (as currently in the v2 model) or it could be a type of variation profile.

Equivalence of Transcript variants for Molecular consequence annotations

During the VA call today, @javild reviewed the Molecular Consequence VA type found here.

Steve Hart and others mentioned the importance of being able to use the variant representation (subject) to represent, or at least discover, all equivalent transcript representations of the variant, so that alternate forms do not get lost or go undiscovered when digesting/using variant annotations of this nature.

Examples verbalized,
"using refseq transcript version 3 versus 4 should be mapped"
"using refseq vs. ensembl transcripts should be mapped"

It was not clear to me if some subset of multiple "isoforms" for a given gene should be considered equivalent in this case. (I think not). But there was a related discussion about whether VA types like MolConseq should be able to be aggregated, when the statement was the same or similar for multiple isoforms of the same underlying genomic variant.
@mbrush indicated that we (VA) are starting with the notion that we would define atomic annotation records for now and then consider aggregate VA statements.

We (VR) also need to consider that the subjects of VA MolConseq annotations can be alleles, haplotypes, CNVs or SVs (see @javild for more examples of CNVs and SVs for this type of annotation).

add support for naming variation

Goal: Support names for alleles and haplotypes, such as "delta F508" (allele) or "ApoE E1" (haplotype) or "E3/E4" (genotype).

Could be implemented using a namespace, such as "display name".

Format of CNV "copies" attribute should be open for unknown, directional changes

In the playground, the CNV "copies" attribute is represented through an integer. However, with frequently unknown exact copy number (e.g. high level amplifications, underlying aneuploidy in cancer...), it is necessary to capture relative changes compared to the baseline.

Proposal for Discussion:
  • Use of an object with both a categorical representation (i.e., through an ontology) and a numerical one
{
  "cn_state": {
    "term_id": "SO:0001742",
    "term": "copy number gain"
  },
  "cn_count": 5,
  "cn_confidence": {},
  "cn_baseline": {}
}

(cn_baseline is the ploidy of the genome being analysed; it could e.g. be 1 for X in XY.)

Reorganize and reformat spec into sphinx docs @ readthedocs

Goal: The current spec is in Google Docs because that format enabled feedback from a broad audience. To ensure that the schema and documentation are aligned and version controlled, we will move the spec documentation to the vmc repo. At the same time, the document will be reorganized to make it easier to read.

Proposal

  • Write document in rst and format with sphinx. Rationale: sphinx, which uses rst markup, provides better support for multi-document linking. (Alternative: Consider recommonmark with sphinx to support markdown, but Reece's intuition is that there will be pitfalls with that approach.)
  • Convert Google Doc to rst with pandoc (see comment from @diekhans below).
  • Autogenerate versioned docs at readthedocs.

See also
GKS Deliverable Roadmap

Total Copy/Genomic Count concept and HGVS

This is a copy and paste from an email thread initiated by Peter Causey-Freeman...

==== From 9/26/2018 ====

My colleague, Raymond Dalgleish, and I have been asked to advise regarding HGVS descriptions in which the term dup (duplication) is being used to describe increases in the copy number of a specified range of genomic sequence. The problem is that dup is intended to describe a tandem duplication of sequence, so its use in these circumstances is inappropriate because the additional copies of a described genomic region would be alleles at different genomic loci. If I remember correctly, you have views on this issue relating to the VMC model.

Raymond is a full member of the HGVS SVN working group. He recently attended a meeting concerning the best practice for variant reporting in DMD. At the meeting he was asked whether there was an HGVS compliant way to describe the total count for individual exons, for example DMD gene exons identified by MLPA (https://en.wikipedia.org/wiki/Multiplex_ligation-dependent_probe_amplification ). When Raymond and I discussed the issue, I remembered the VarRep discussions about the potential use of a total genome count object within the VMC model. Raymond and I want to propose a similar object/variant description for adoption by the HGVS and it would make sense to use similar terminology. What we need is a 3 letter description for the variant type e.g. dup = duplication, ins = insertion. Have you chosen a term for use in VMC? If not, we were thinking about proposing tcc = total copy count or tgc = total genomic count.

Do you have any comments or thoughts on the issue?

Best wishes,

Peter Causey-Freeman

How would Confidence Interval (CI) from VCF be included in VMC?

Cristina Y. Gonzalez (EMBL-EBI) raised this question at the 7/16/2018 Var Rep meeting (see minutes), which left the group leaning towards the notion that CI data should be external (like an annotation) to the primitive structures still under development for the core attributes/concepts needed to define the CNV.

We will need to demonstrate how the CI would be applied going forward, to help all "see" how these types of fundamental and widely used qualifiers are applied to VMC data in practice.

Implement server demo

Server demo should include persistent storage. Real-world examples (likely APOE) should be committed with the repo. The server should also implement the new digest methods (#7).

Develop shared variation model

The first VMC release will consist of a human-readable technical document, a machine-readable specification, and a set of demonstration tools.
