
synergy-dataset's Introduction

SYNERGY dataset


SYNERGY is a free and open dataset on study selection in systematic reviews, comprising 169,288 academic works from 26 systematic reviews. Only 2,834 (1.67%) of the academic works in the binary-classified dataset are included in the systematic reviews, which makes SYNERGY a unique resource for developing information retrieval algorithms, especially for sparse labels. Thanks to the many variables available per record (e.g. titles, abstracts, authors, references, topics), the dataset is useful for researchers in NLP, machine learning, network analysis, and more. In total, the dataset contains 82,668,134 trainable data points.


Get the data

The easiest way to get the SYNERGY dataset is via the synergy-dataset Python package. Install the package with:

pip install synergy-dataset

To download and build the SYNERGY dataset, run the following command in the command line:

python -m synergy_dataset get

To get an overview of the datasets and their properties, use synergy_dataset list and synergy_dataset show <DATASET_NAME>.
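Once built, each dataset is available on disk, so the usual Python data tooling applies. Below is a minimal sketch for loading one dataset with pandas; the output folder and file name are assumptions, so check where the get command wrote its files on your machine:

# A sketch, not the package's documented API. It assumes the get command
# wrote per-dataset CSV files into a local "synergy_dataset" folder;
# adjust the path to match the build output on your machine.
import pandas as pd

df = pd.read_csv("synergy_dataset/van_de_Schoot_2018.csv")

# label_included is 1 for records included after full-text screening.
print(df["label_included"].value_counts())
print(df[["title", "label_included"]].head())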

Datasets and variables

The SYNERGY dataset comprises the study selection of 26 systematic reviews. It contains 169,288 records, of which 2,834 were manually labeled as inclusions by the authors of the systematic reviews. The list of systematic reviews and their basic properties:

| Nr | Dataset                  | Topic(s)                        | Records | Included | %    |
|----|--------------------------|---------------------------------|---------|----------|------|
| 1  | Appenzeller-Herzog_2019  | Medicine                        | 2873    | 26       | 0.9  |
| 2  | Bos_2018                 | Medicine                        | 4878    | 10       | 0.2  |
| 3  | Brouwer_2019             | Psychology, Medicine            | 38114   | 62       | 0.2  |
| 4  | Chou_2003                | Medicine                        | 1908    | 15       | 0.8  |
| 5  | Chou_2004                | Medicine                        | 1630    | 9        | 0.6  |
| 6  | Donners_2021             | Medicine                        | 258     | 15       | 5.8  |
| 7  | Hall_2012                | Computer science                | 8793    | 104      | 1.2  |
| 8  | Jeyaraman_2020           | Medicine                        | 1175    | 96       | 8.2  |
| 9  | Leenaars_2019            | Psychology, Chemistry, Medicine | 5812    | 17       | 0.3  |
| 10 | Leenaars_2020            | Medicine                        | 7216    | 583      | 8.1  |
| 11 | Meijboom_2021            | Medicine                        | 882     | 37       | 4.2  |
| 12 | Menon_2022               | Medicine                        | 975     | 74       | 7.6  |
| 13 | Moran_2021               | Biology, Medicine               | 5214    | 111      | 2.1  |
| 14 | Muthu_2021               | Medicine                        | 2719    | 336      | 12.4 |
| 15 | Nelson_2002              | Medicine                        | 366     | 80       | 21.9 |
| 16 | Oud_2018                 | Psychology, Medicine            | 952     | 20       | 2.1  |
| 17 | Radjenovic_2013          | Computer science                | 5935    | 48       | 0.8  |
| 18 | Sep_2021                 | Psychology                      | 271     | 40       | 14.8 |
| 19 | Smid_2020                | Computer science, Mathematics   | 2627    | 27       | 1.0  |
| 20 | van_de_Schoot_2018       | Psychology, Medicine            | 4544    | 38       | 0.8  |
| 21 | van_der_Valk_2021        | Medicine, Psychology            | 725     | 89       | 12.3 |
| 22 | van_der_Waal_2022        | Medicine                        | 1970    | 33       | 1.7  |
| 23 | van_Dis_2020             | Psychology, Medicine            | 9128    | 72       | 0.8  |
| 24 | Walker_2018              | Biology, Medicine               | 48375   | 762      | 1.6  |
| 25 | Wassenaar_2017           | Medicine, Biology, Chemistry    | 7668    | 111      | 1.4  |
| 26 | Wolters_2018             | Medicine                        | 4280    | 19       | 0.4  |

Each record in the dataset is an OpenAlex Work object (a copy of the documentation at web.archive.org, extracted on 2023-03-31).
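Because each record follows the OpenAlex schema, any OpenAlex tooling applies. As an illustration, a record can be compared against the live API; the sketch below uses the public OpenAlex REST endpoint, and the work ID is the arbitrary example from the OpenAlex documentation, not necessarily a SYNERGY record:

# Fetch a single Work object from the public OpenAlex API.
import requests

resp = requests.get("https://api.openalex.org/works/W2741809807")
resp.raise_for_status()
work = resp.json()

print(work["id"], work["publication_year"], work["title"])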

Some of the notable variables are:

| Variable         | Type    | Description |
|------------------|---------|-------------|
| id               | String  | The OpenAlex ID for this work. |
| doi              | String  | The DOI of this work, if available. |
| label_included   | Binary  | 1 for included records, 0 for excluded records (after full-text screening). |
| title            | String  | The title of this work. |
| abstract         | String  | The abstract of this work. Stored as abstract_inverted_index, but available as a plaintext abstract for machine learning purposes. |
| authorships      | List    | List of Authorship objects, each representing an author and their institution. |
| type             | String  | The type or genre of the work, as defined by https://api.crossref.org/types. |
| publication_year | Integer | The year this work was published. |
| referenced_works | List    | List of OpenAlex IDs for works that this work cites. |
| concepts         | List    | List of Wikidata concept objects (or topics). |
| best_oa_location | Object  | The best available open access location for this work. |
| cited_by_count   | Integer | The number of citations to this work as of April 1, 2023. |

For the full list of variables, see this persistent copy of the OpenAlex Work object documentation: https://web.archive.org/web/20230104092916/https://docs.openalex.org/api-entities/works/work-object
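Note that OpenAlex stores abstracts as an abstract_inverted_index, a mapping from each token to the word positions at which it occurs. A minimal sketch for rebuilding the plaintext abstract from that structure:

# Rebuild a plaintext abstract from OpenAlex's abstract_inverted_index,
# which maps each token to the word positions at which it occurs.
def invert_abstract(inverted_index):
    if inverted_index is None:
        return None
    positions = [
        (pos, token)
        for token, pos_list in inverted_index.items()
        for pos in pos_list
    ]
    return " ".join(token for _, token in sorted(positions))

# Toy example of the stored structure:
example = {"Despite": [0], "growing": [1], "interest": [2]}
print(invert_abstract(example))  # -> "Despite growing interest"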

Benchmark

Work in progress.

Attribution & License

We would like to thank the following authors for openly sharing the data corresponding to their systematic reviews:

Marlies L.S. Heeres, Marijn Vellinga, P Whaley, Mostafa Mohseni, P.M.J. Welsing, Marleen L.M. Hermens, Richard Torkar, Holger Schielzeth, Marjan Hericko, Arnoud Arntz, Lisanne A. H. Bevers, Christian Appenzeller-Herzog, Michael J. DeVito, Juliette Legler, Rosalie W. M. Kempkes, Daniel Bos, Sanne C. Smid, Robyn B. Blain, Carin M. A. Rademaker, David De Jong, Antoine C. G. Egberts, Tijmen Geurts, Sathish Muthu, Suzanne C. van Veen, Janet D. Allan, Pamela Hartman, Eline S van der Valk, Mitzy Kennis, Wilhelmus Drinkenburg, R. Angela Sarabdjitsingh, Nicola P. Klein, Helga Gardarsdottir, Anouk A. M. T. Donners, Sonja D. Winter, Muriel A. Hagenaars, Erica L T van den Akker, Amir Abdelmoumen, Derek W. R. Gray, Kim Peterson, Eswar Ramakrishnan, Trevor J. Hall, Maurice Dematteis, Merel Ritskes-Hoitinga, Andrew A. Shapiro, Meike W. Vernooij, Maria Brouwer, Katherine E. Pelch, Milica Miočević, Eva A.M. van Dis, Ozair Abawi, Dimitrije Radjenović, Daniel McNeish, Peggy Nygren, Maikel van Berlo, Alwin D. R. Huitema, Nicholas P. Moran, Chad R. Blystone, Alishia D. Williams, Ruud N. J. M. A. Joosten, Klaus Reinhold, Pim N.H. Wassenaar, Sanne E. Hoeks, Anand Krishnan V. Iyer, Sjoerd A.A. van den Berg, Tim Kendall, Lieke H. van Huis, Rens van de Schoot, Nancy E. E. Van Loey, Julia M.L. Menon, Cathalijn H. C. Leenaars, Rogier E. J. Verhoef, Sarah Depaoli, Frank de Wolf, M.E. Hamaker, Rinske M van den Heuvel, Leonardo Trasande, Miranda Olff, Alfredo Sánchez-Tójar, M.H. Emmelot-Vonk, Kristina A. Thayer, Steven M. Teutsch, Elisabeth F.C. van Rossum, Bibian van der Voorn, Stephanie Holmgren, André Bleich, M.S. van der Waal, Frank J. Wolters, Hannah Ewald, Marian Joëls, Franck L. B. Meijboom, Yolanda B. de Rijke, Tobias Stalder, M. Arfan Ikram, P.A.L. Seghers, Marit Sijbrandij, Vincent L. Wester, Behnam Sabayan, Tim Mathes, Parvez Ahmad Ganie, Matthijs G. P. Feenstra, Abee L. Boyles, Matthijs Oud, Andrew A. Rooney, Rosanne W. Meijboom, Karl Heinz Weiss, Jan-Bas Prins, F. Struijs, David Bowes, Neeltje M. Batelaan, Reffat A. Segufa, Serena J. Counsell, Milou S. C. Sep, Aleš Živkovič, Madhan Jeyaraman, Sirwan K.L. Darweesh, Tineke Coenen-de Roo, Heidi Nelson, Roger Chou, Vickie R. Walker, Albert Hofman, Roger E. G. Schutgens, Rob B. M. de Vries, Zhongfang Fu, Pim Cuijpers, Christ Nolten, Krista Fischer, Janneke Elzinga, Roderick H. J. Houwen, Iris M. Engelhard, Linda Humphrey, Frans A. Stafleu, Simon Beecham, Mark Helfand, Thijs J. Giezen, Retha R. Newbold, Claudi L H Bockting, Sanaz Sedaghat, Elizabeth A. Clark

Run synergy_dataset attribution or see ATTRIBUTION.md for a complete attribution including references.

The SYNERGY dataset is released under the CC0 1.0 license. SYNERGY consists of CC0 1.0-licensed metadata of works published by OpenAlex. The Lens was used for data quality checks and for imputing some missing variables.

Citing SYNERGY dataset

If you use SYNERGY in a scientific publication, we would appreciate a reference to:

De Bruin, Jonathan; Ma, Yongchao; Ferdinands, Gerbrich; Teijema, Jelle; Van de Schoot, Rens, 2023, "SYNERGY - Open machine learning dataset on study selection in systematic reviews", https://doi.org/10.34894/HE6NAQ, DataverseNL, V1

BibTeX reference:

@data{HE6NAQ_2023,
  author = {De Bruin, Jonathan and Ma, Yongchao and Ferdinands, Gerbrich and Teijema, Jelle and Van de Schoot, Rens},
  publisher = {DataverseNL},
  title = {{SYNERGY - Open machine learning dataset on study selection in systematic reviews}},
  year = {2023},
  version = {V1},
  doi = {10.34894/HE6NAQ},
  url = {https://doi.org/10.34894/HE6NAQ}
}

Contributing

We welcome contributions of all kinds. Some examples are:

  • Do you have an openly published systematic review dataset? Read about our ambition to develop SYNERGY+ (SYNERGY Plus), a much larger dataset with lots of new features.
  • Write an example or tutorial on how to use SYNERGY and all of its hidden capabilities.
  • Write an integration to load SYNERGY into existing software like spaCy, Gensim, TensorFlow, Docker, or Hugging Face.

Contact

Reach out on the Discussion forum.

synergy-dataset's People

Contributors

akashagarwal7, fqixiang, gerbrichferdinands, gimoai, j535d165, jteijema, peterlombaers, qubixes, rensvandeschoot, sagevdbrand, terrymyc, weiversa


synergy-dataset's Issues

wilson dataset - 1/3 of abstracts is missing

About 1/3 of the abstracts in the wilson dataset are missing (3 of which are inclusions).


I think it would be worthwhile to see if we could figure out what causes this missingness.
One of the authors (Dr. Hannah Ewald) named the following possible causes:

572/1090 missing abstracts are from the years 1912 to 1989.

The rest comes from both Embase and Medline, and I can't see any systematic error (i.e. there are mixed page numbers, all names from A to Z, all indexed as journal articles, different languages and journals). Although it's odd in such a high number, maybe they just don't have abstracts.

I am awaiting a response from the first author (Dr. Christian Appenzeller-Herzog), who is currently out of office.
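For reference, missingness like this is easy to quantify once the records are in a DataFrame. A sketch follows; the file path is hypothetical, and the column names follow the conventions used elsewhere in this repo:

import pandas as pd

# Hypothetical path; load the wilson records however they are stored locally.
df = pd.read_csv("wilson.csv")

# An abstract counts as missing if it is NaN or empty after stripping whitespace.
missing = df["abstract"].isna() | df["abstract"].str.strip().eq("")
print(f"{missing.mean():.1%} of abstracts are missing")
print(f"{(missing & df['label_included'].eq(1)).sum()} of those are inclusions")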

Contribution of SRs from EFSA

I started to have a look at our full SR database.

Let me start by describing what we have here; then we can iteratively think about what is worth including. We could also have another call.

We have 299 "projects" in Distiller in total.
Quite a few of them are "tests" or other garbage.
It is hard to say how many, but judging by the project names, at least 100 projects should not be looked at at all,
which leaves about 200 projects.

Each of them has at least one "level", where a level can mean different things:

"title screening"
"abstract screening"
"title + abstract screening"
"full text screening"
"data extraction"
"abstract screening 1" vs "abstract screening 2".
......
.....

There is no "clear nomenclature" or metadata on this, but often we use the word "abstract" in the name of the level to indicate "abstract screening"

The number of "levels" in total (including garbage projects) is:
1226

So in total we have 1226 times , that
"humans have decided to exclude x papers out of y"

(sometimes x or y or x and y are 0)

I filtered the levels for the ones that have "abstract" in the level name. These SHOULD all be about abstract screening, but there may be more. This leaves 126 rows.

I have pasted below, for your information, some of the statistics I get for these.

We can see that the first row:

  • is related to an EFSA question, EFSA-Q-2012-00234, about "Leishmaniosis". (From this you can find the EFSA output, https://efsa.onlinelibrary.wiley.com/doi/epdf/10.2903/sp.efsa.2014.EN-466, where page 19 gives a summary of the SR. Sometimes we also publish the concrete references included/excluded, but not always.)

  • covers a level/phase of the systematic review called "Title and abstract screening - Study eligibility form: Title and abstract screening" (so I think this is a "real" abstract screening, probably worth adding to your database or using in a simulation)

  • started with 961 references

  • excluded 877 references and included 84

|                                      project |                                                                                                    level | References Added | Unreviewed | Some Reviews | Included | Excluded | Conflict | Fully Reviewed | Saved, Unsubmitted |
|----------------------------------------------|----------------------------------------------------------------------------------------------------------|------------------|------------|--------------|----------|----------|----------|----------------|--------------------|
|         AHAW_EFSA-Q-2012-00234_Leishmaniosis |                      Title and abstract screening - Study eligibility form: Title and abstract screening |              961 |          0 |            0 |       84 |      877 |        0 |            961 |                  0 |
|         AHAW_EFSA-Q-2012-00234_Leishmaniosis | Full paper screening - Study eligibility form: Full paper screening of unclear title and abstract papers |                  |          0 |            0 |       23 |       61 |        0 |             84 |                  0 |
|                   AHAW_EFSA-Q-2013-00546_EBL |                                              Title Abstract screening - Title and abstract screening EBL |             5181 |          0 |            0 |      255 |     4926 |        0 |           5181 |                  0 |
|         AHAW_EFSA-Q-2013-00835_leishmaniasis |                                                   relevance - First stage screening (title and abstract) |              182 |          0 |            0 |       14 |      168 |        0 |            182 |                  0 |
|                   AHAW_EFSA-Q-2013-00918_pox |                                                           Screening 1 - POX Screening 1 (title&abstract) |               86 |          0 |            0 |       37 |       49 |        0 |             86 |                  0 |
|                   AHAW_EFSA-Q-2013-01034_PPR |                                                             Screening - PPR Screening 1 (title&abstract) |             1076 |          0 |            0 |      243 |      833 |        0 |           1076 |                  0 |
| AHAW_EFSA-Q-2014-00187- VBD-review-GEOG-DIST |                                             Title and abstract screening - Tittle and abstract screening |              816 |         15 |            0 |      255 |      521 |       12 |            801 |                  0 |
|                   AHAW_EFSA-Q-2015-00160_PED |                                      Title and abstract screening PED - Title and abstract screening PED |             1609 |          0 |            0 |      246 |     1363 |        0 |           1609 |                  0 |
|            AHAW_EFSA-Q-2016-00160_Bluetongue |                                                               Level 1 - Q3 screening title and abstracts |              287 |          0 |            0 |      103 |      184 |        0 |            287 |                  0 |
|                   AHAW_EFSA-Q-2018-00141_ASF |                                                             ASF screening - ASF Title abstract Screening |             1512 |          0 |            0 |       89 |     1422 |        1 |           1512 |                  0 |
|         AHAW_EFSA-Q-2018-00269_AI_Monitoring |                                                      Title abstract screening - Title abstract screening |               47 |         47 |            0 |        0 |        0 |        0 |              0 |                  0 |
|  AHAW_EFSAQ201400187_DACRAH2_GeoDistribution |                                             Title and abstract screening - Tittle and abstract screening |             5433 |          0 |            0 |      982 |     4451 |        0 |           5433 |                  0 |
|       AHAW_EFSA_Q_-2014-00187-VECTORNET-OBJ1 |                                                ti/abstract screening - MIR_Tittle and abstract screening |             1756 |          0 |            0 |      679 |     1077 |        0 |           1756 |                  0 |
|       AHAW_EFSA_Q_-2014-00187-VECTORNET-OBJ2 |                                                               Level 1 - R0_Tittle and abstract screening |              145 |          0 |            0 |      107 |       38 |        0 |            145 |                  0 |
|       AHAW_EFSA_Q_-2014-00187-VECTORNET-OBJ3 |                                                          Level 1 - VecComp_Tittle and abstract screening |              703 |         27 |            0 |      327 |      349 |        0 |            676 |                  0 |
|                  AMU_EFSA-Q-2015-00592_crowd |                                                                 screening - Title and abstract screening |              371 |          0 |            0 |       25 |      346 |        0 |            371 |                  0 |
|                AMU_EFSA-Q-2016-00294_MLT- SR |                                                           Level 1 - LEVEL1 screening title and abstracts |              953 |          0 |            0 |      257 |      696 |        0 |            953 |                  0 |
|      BIOCONTAM_EFSA-Q-2014-00189_QPS2014G+NS |  Title and abstract screening - STEP 1 (Title and/or abstract): GRAM-POSITIVE - NON-SPORULATING BACTERIA |              875 |        113 |          393 |       16 |      353 |        0 |            369 |                  0 |
|       BIOCONTAM_EFSA-Q-2014-00189_QPS2014G+S |       Screening Title and Abstract - STEP 1 (Title and/or abstract): GRAM-POSITIVE -SPORULATING BACTERIA |              447 |          0 |          421 |       17 |        9 |        0 |             26 |                  0 |
|         BIOCONTAM_EFSA-Q-2014-00189_QPS2014V |         Title and Abstract screening - STEP 1 (Title and/or abstract): Viruses used for plant protection |               77 |          0 |           77 |        0 |        0 |        0 |              0 |                  0 |
|         BIOCONTAM_EFSA-Q-2014-00189_QPS2014Y |                                   Title and Abstract screening - STEP 1 (Title and/or abstract):  YEASTS |              488 |          0 |          477 |       11 |        0 |        0 |             11 |                  0 |
|       BIOCONTAM_EFSA-Q-2014-00536_EAEC_Trial |                                                       Title and abstracts - Title and abstract screening |              240 |          0 |          100 |      106 |       34 |        0 |            140 |                  0 |
|        BIOCONTAM_EFSA-Q-2015-00028_DIOX_FARM |                               Level 1 _title and abstract - DIOXIN _ FARM / Title and abstract screening |             4202 |          0 |            0 |      503 |     3699 |        0 |           4202 |                  0 |
|       BIOCONTAM_EFSA-Q-2015-00028_DIOX_NP06C |                                                 Level 1 - RPA_IEH_updated / Title and abstract screening |             6101 |          0 |            0 |     2218 |     3883 |        0 |           6101 |                  0 |
|       BIOCONTAM_EFSA-Q-2015-00028_DIOX_NP07C |                                       Level 1 - DIOXIN _TOXICOLOGY MODELS / Title and abstract screening |             4906 |          0 |            0 |      633 |     4273 |        0 |           4906 |                  0 |

So one contribution to you could be the (at least 126) abstract screenings from our database, including their metadata:

  • title
  • authors
  • DOI (not always present)
  • year (not always present)
  • journal (not always present)
  • label (excluded / included)

Some might be "half done", but that you could see from the numbers of "total papers", "included", "excluded" , "conflict".
I would say "nearly all" are complete.

I have automated all extractions, so the "volume of SRs" does not make any difference for me.

SSL issue when running the get command on macOS

I received an ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129) error when trying to run synergy_dataset get on macOS Ventura 13.6.6.

It is a known issue on macOS, which I was able to solve by running open /Applications/Python\ 3.9/Install\ Certificates.command in a terminal.

This is a common issue on macOS where Python is unable to verify the SSL certificate provided by the server. To fix it, we use the Certificates.command script that python.org ships for macOS.
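An alternative workaround that avoids the installer script (a sketch, assuming the certifi package is installed) is to point Python's SSL machinery at certifi's CA bundle before running the download:

# Point Python at certifi's CA bundle before downloading (sketch,
# assuming certifi is installed in the active environment).
import os
import certifi

os.environ["SSL_CERT_FILE"] = certifi.where()

# Or, equivalently, in the shell before running the download:
#   export SSL_CERT_FILE="$(python -m certifi)"
#   python -m synergy_dataset get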

Information of "saving" rates ?

Did you use in some form all these datasets for simulation studies ?
So, do you have some numbers "how much effort" can be saved potentially using ASReview for a larger number of reviews ?

Kwok dataset doesn't have files at a persistent location

The files were shared directly with us. For this reason, no files are found at the OSF URL. I removed this entry as I'm cleaning the repo of source files.

{
  "dataset_id": "Kwok_2020",
  "url": "https://raw.githubusercontent.com/asreview/systematic-review-datasets/master/datasets/Kwok_2020/output/Kwok_2020.csv",
  "reference": "https://doi.org/10.3390/v12010107",
  "link": "https://doi.org/10.17605/OSF.IO/5S27M",
  "license": "CC-BY Attribution 4.0 International",
  "title": "Virus Metagenomics in Farm Animals: A Systematic Review",
  "authors": [
    "Kwok, K. T. T.", 
    "Nieuwenhuijse, D. F.", 
    "Phan, M. V. T.", 
    "Koopmans, M. P. G."
    ],
  "year": 2020,
  "topic": "Virus Metagenomics",
  "final_inclusions": true,
  "title_abstract_inclusions": false
}

original systematic review for the PTSD dataset

It turns out there is a discrepancy of 4 inclusions between the ptsd dataset in this repo (38 inclusions) and the systematic review we linked it to (34 inclusions).

For the ptsd dataset, up until now we have referred to https://doi.org/10.1080/00273171.2017.1412293, a systematic review (1) reporting 34 inclusions. It turns out that the .ris files on the corresponding OSF page contain 38 inclusions.

That number belongs to another systematic review (2) on the same dataset, http://dx.doi.org/10.1080/10705511.2016.1247646. On its corresponding OSF page, https://osf.io/6vdfk/, however, no .ris files have been uploaded (yet).

I think we have to refer to one paper or the other. For the 34-inclusions paper, we would need to update the dataset by recoding the 4 extra inclusions as exclusions (see the comment by @J535D165 below). For the 38-inclusions paper, the OSF page should be updated (@Rensvandeschoot), and all information in the documentation on systematic review 1 should be replaced with information on systematic review 2.

Any thoughts?


Thanks for your contribution, @terrymyc!

A couple of additions and remarks are listed below:

Exclude 4 more papers

~~Based on the paper, the team excluded 4 more papers: 34 of the 38 papers were described in the paper. The papers listed below have to be excluded (correct, @Rensvandeschoot @GerbrichFerdinands?):~~

  • Sterling, M., Hendrikz, J., & Kenardy, J. (2010). Compensation claim lodgement and health outcome developmental trajectories following whiplash injury: A prospective study. Pain, 150(1), 22-28.
  • Hou, W. K., Law, C. C., Yin, J., & Fu, Y. T. (2010). Resource Loss, Resource Gain, and Psychological Resilience and Dysfunction Following Cancer Diagnosis: A Growth Mixture Modeling Approach. Health Psychology, 29(5), 484-495. doi:10.1037/a0020809
  • Mason, S. T., Corry, N., Gould, N. F., Amoyal, N., Gabriel, V., Wiechman-Askay, S., . . . Fauerbach, J. A. (2010). Growth curve trajectories of distress in burn patients. Journal of Burn Care and Research, 31(1), 64-72. doi:10.1097/BCR.0b013e3181cb8ee6
  • Pérez, S., Conchado, A., Andreu, Y., Galdón, M. J., Cardeña, E., Ibáñez, E., & Durá, E. (2016). Acute stress trajectories 1 year after a breast cancer diagnosis. Supportive Care in Cancer, 24(4), 1671-1678. doi:10.1007/s00520-015-2960-x

Connect to RIS files on OSF

The RIS files are now available on OSF. Can you connect them to your code and remove the ones in the GitHub repository?

Count duplicates

Thanks for your statistics so far. Can you count the number of duplicate items as well? Please don't make things too complicated; a check for duplicate abstracts, for example, would do (see the sketch below).
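Something like the following would cover the simple abstract check. It is a sketch: the file path is hypothetical, and any per-dataset file with an abstract column would work:

import pandas as pd

# Hypothetical path to one of the dataset files.
df = pd.read_csv("some_dataset.csv")

# Count records whose normalized abstract also occurs earlier in the file.
abstracts = df["abstract"].dropna().str.lower().str.strip()
n_duplicates = abstracts.duplicated().sum()
print(f"{n_duplicates} records share an abstract with an earlier record")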

It turns out that @qubixes is also doing some work on dataset statistics, implemented in an extension for asreview: https://github.com/asreview/asreview-statistics. It might be interesting to have a look. It would be nice to integrate that functionality with this repo (not for now, though).

Originally posted by @J535D165 in #13 (comment)

Additional datapoints: domain and inclusion criteria

This may not be strictly necessary for active learning, but it makes the data more meaningful and accessible on its own. In a structured format, it can be read by scripts without needing to go to the source of the data.

I would very much prefer the inclusion criteria to be a list of criteria, each phrased as a boolean (yes/no) question; see the sketch below. This is important for a project I am working on.

And a domain field: the general field of the research, so researchers can be selective.
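Purely as an illustration, such metadata could look like the sketch below; the field names and criteria are hypothetical and not part of the current SYNERGY schema:

# Hypothetical structure for the proposed metadata; none of these
# field names or criteria exist in the current SYNERGY schema.
dataset_metadata = {
    "dataset_id": "Appenzeller-Herzog_2019",
    "domain": "Medicine",
    "inclusion_criteria": [
        "Is the study about the condition of interest?",
        "Does the study report outcomes in humans?",
        "Is the publication a primary study rather than a review?",
    ],
}

# Each criterion can be answered yes/no per record, e.g. as a checklist:
for question in dataset_metadata["inclusion_criteria"]:
    print(f"[ ] {question}")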

Inconsistent names label columns

The column with labels does not have the same name across all datasets.

  • In the Kwok data the label column is final_included (see cell 8 in this notebook).
  • Most datasets appear to have a label column called label_included, for example the Cohen datasets (see cell 4 in this notebook).

Maybe there are more variations, but I haven't found any so far.

Then, the README of this repo says the following:

To indicate labelling decisions, one can use "included" or "label_included". The latter label called "included" is needed to indicate the final included publications in the simulations.

It would be nice to make the name of the label column more consistent throughout all datasets.
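Until that happens, a workaround is to normalize the column name at load time. A sketch follows; the alias set covers the variants mentioned above and can be extended as more are found:

import pandas as pd

# Known variants of the label column across the legacy datasets.
LABEL_ALIASES = {"final_included", "included", "label_included"}

def load_with_label(path):
    """Load a dataset file and normalize its label column to label_included."""
    df = pd.read_csv(path)
    for col in df.columns:
        if col in LABEL_ALIASES:
            return df.rename(columns={col: "label_included"})
    raise KeyError(f"no known label column in {path}")

df = load_with_label("Kwok_2020.csv")  # hypothetical local path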
