desbordante / desbordante-core

Desbordante is a high-performance data profiler that is capable of discovering many different patterns in data using various algorithms. It also allows running data cleaning scenarios using these algorithms. Desbordante has a console version and an easy-to-use web application.

License: GNU Affero General Public License v3.0

CMake 0.46% Shell 0.20% C++ 95.93% Batchfile 0.02% C 0.25% Python 3.14%
data-analytics data-cleaning data-cleansing data-engineering data-exploration data-mining data-profiling data-science data-wrangling data-preprocessing feature-selection feature-engineering feature-extraction spreadsheets tabular-data anomaly-detection data-mining-algorithms exploratory-data-analysis knowledge-discovery correlations

desbordante-core's Introduction


General

Desbordante is a high-performance data profiler that is capable of discovering and validating many different patterns in data using various algorithms.

The Discovery task is designed to identify all instances of a specified pattern type in a given dataset.

The Validation task is different: it is designed to check whether a specified pattern instance is present in a given dataset. This task not only returns True or False, but it also explains why the instance does not hold (e.g. it can list table rows with conflicting values).
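
For illustration, here is a minimal sketch of both task types, written against the Python interface described below (the modules, methods, and sample datasets are exactly those used in the usage examples later in this README):

import desbordante

# Discovery: find every exact functional dependency that holds in the table.
discovery = desbordante.fd.algorithms.Default()
discovery.load_data(table=('examples/datasets/university_fd.csv', ',', True))
discovery.execute()
for fd in discovery.get_fds():
    print(fd)

# Validation: check whether one specific metric functional dependency holds.
validation = desbordante.mfd_verification.algorithms.Default()
validation.load_data(table=('examples/datasets/theatres_mfd.csv', ',', True))
validation.execute(lhs_indices=[0], rhs_indices=[2], metric='euclidean', parameter=5)
print('MFD holds' if validation.mfd_holds() else 'MFD does not hold')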

The currently supported data patterns are:

  • Functional dependency variants:
    • Exact functional dependencies (discovery and validation)
    • Approximate functional dependencies, with g1 metric (discovery and validation)
    • Probabilistic functional dependencies, with PerTuple and PerValue metrics (discovery)
  • Graph functional dependencies (validation)
  • Conditional functional dependencies (discovery)
  • Inclusion dependencies (discovery)
  • Order dependencies:
    • set-based axiomatization (discovery)
    • list-based axiomatization (discovery)
  • Metric functional dependencies (validation)
  • Fuzzy algebraic constraints (discovery)
  • Unique column combinations:
    • Exact unique column combination (discovery and validation)
    • Approximate unique column combination, with g1 metric (discovery and validation)
  • Association rules (discovery)

The discovered patterns can have many uses:

  • For scientific data, especially data obtained experimentally, an interesting pattern allows one to formulate a hypothesis that could lead to a scientific discovery. In some cases it even allows drawing conclusions immediately, if there is enough data. At the very least, the found pattern can provide a direction for further study.
  • For business data it is also possible to formulate a hypothesis based on the found patterns. However, there are more down-to-earth and more in-demand applications in this case: cleaning errors in data, finding and removing inexact duplicates, performing schema matching, and many more.
  • For training data used in machine learning applications, the found patterns can help with feature engineering and with choosing the direction of an ablation study.
  • For database data, found patterns can help with defining (recovering) primary and foreign keys and with setting up (checking) all kinds of integrity constraints.

Desbordante can be used via three interfaces:

  • Console application. This is a classic command-line interface that aims to provide basic profiling functionality, i.e. discovery and validation of patterns. A user can specify pattern type, task type, algorithm, input file(s) and output results to the screen or into a file.
  • Python bindings. Desbordante functionality can be accessed from within Python programs by employing the Desbordante Python library. This interface offers everything that is currently provided by the console version and allows advanced use, such as building interactive applications and designing scenarios for solving a particular real-life task. Relational data processing algorithms accept pandas DataFrames as input, allowing the user to conveniently preprocess the data before mining patterns.
  • Web application. There is a web application that provides discovery and validation tasks with a rich interactive interface where results can be conveniently visualized. However, currently it supports a limited number of patterns and should be considered more as an interactive demo.

A brief introduction to the tool and its use cases can be found here (in English) and here (in Russian). Next, a list of various articles and guides can be found here. Finally, an extensive list of tutorial examples that cover each supported pattern is available here.

Console

Usage examples:

  1. Discover all exact functional dependencies in a table stored in a comma-separated file with a header row. In this example the default FD discovery algorithm (HyFD) is used.
python3 cli.py --task=fd --table=../examples/datasets/university_fd.csv , True
[Course Classroom] -> Professor
[Classroom Semester] -> Professor
[Classroom Semester] -> Course
[Professor] -> Course
[Professor Semester] -> Classroom
[Course Semester] -> Classroom
[Course Semester] -> Professor
  2. Discover all approximate functional dependencies with error less than or equal to 0.1 in a table represented by a .csv file that uses a comma as the separator and has a header row. In this example the default AFD discovery algorithm (Pyro) is used.
python3 cli.py --task=afd --table=../examples/datasets/inventory_afd.csv , True --error=0.1
[Id] -> ProductName
[Id] -> Price
[ProductName] -> Price
  3. Check whether metric functional dependency “Title -> Duration” with radius 5 (using the Euclidean metric) holds in a table represented by a .csv file that uses a comma as the separator and has a header row. In this example the default MFD validation algorithm (BRUTE) is used.
python3 cli.py --task=mfd_verification --table=../examples/datasets/theatres_mfd.csv , True --lhs_indices=0 --rhs_indices=2 --metric=euclidean --parameter=5
True

For more information consult documentation and help files.

Python bindings

Desbordante features can be accessed from within Python programs by employing the Desbordante Python library. The library is implemented in the form of Python bindings to the interface of the Desbordante C++ core library, using pybind11. Apart from discovery and validation of patterns, this interface is capable of providing valuable additional information which can, for example, describe why a given pattern does not hold. All this allows end users to solve various data quality problems by constructing ad-hoc Python programs. To show the power of this interface, we have implemented several demo scenarios:

  1. Typo detection
  2. Data deduplication
  3. Anomaly detection

There is also an interactive demo for all of them, and all of these Python scripts are available here. The ideas behind them are briefly discussed in this preprint (Section 3).

Simple usage examples:

  1. Discover all exact functional dependencies in a table represented by a .csv file that uses a comma as the separator and has a header row. In this example the default FD discovery algorithm (HyFD) is used.
import desbordante

TABLE = 'examples/datasets/university_fd.csv'

algo = desbordante.fd.algorithms.Default()
algo.load_data(table=(TABLE, ',', True))
algo.execute()
result = algo.get_fds()
print('FDs:')
for fd in result:
    print(fd)
FDs:
[Course Classroom] -> Professor
[Classroom Semester] -> Professor
[Classroom Semester] -> Course
[Professor] -> Course
[Professor Semester] -> Classroom
[Course Semester] -> Classroom
[Course Semester] -> Professor
  2. Discover all approximate functional dependencies with error less than or equal to 0.1 in a table represented by a .csv file that uses a comma as the separator and has a header row. In this example the AFD discovery algorithm Pyro is used.
import desbordante

TABLE = 'examples/datasets/inventory_afd.csv'
ERROR = 0.1

algo = desbordante.afd.algorithms.Default()
algo.load_data(table=(TABLE, ',', True))
algo.execute(error=ERROR)
result = algo.get_fds()
print('AFDs:')
for fd in result:
    print(fd)
AFDs:
[Id] -> Price
[Id] -> ProductName
[ProductName] -> Price
  3. Check whether metric functional dependency “Title -> Duration” with radius 5 (using the Euclidean metric) holds in a table represented by a .csv file that uses a comma as the separator and has a header row. In this example the default MFD validation algorithm (BRUTE) is used.
import desbordante

TABLE = 'examples/datasets/theatres_mfd.csv'
METRIC = 'euclidean'
LHS_INDICES = [0]
RHS_INDICES = [2]
PARAMETER = 5

algo = desbordante.mfd_verification.algorithms.Default()
algo.load_data(table=(TABLE, ',', True))
algo.execute(lhs_indices=LHS_INDICES, metric=METRIC,
             parameter=PARAMETER, rhs_indices=RHS_INDICES)
if algo.mfd_holds():
    print('MFD holds')
else:
    print('MFD does not hold')
MFD holds
  4. Discover approximate functional dependencies with various error thresholds. Here, we are using a pandas DataFrame to load data from a CSV file.
>>> import desbordante
>>> import pandas as pd
>>> pyro = desbordante.afd.algorithms.Pyro()  # same as desbordante.afd.algorithms.Default()
>>> df = pd.read_csv('examples/datasets/iris.csv', sep=',', header=None)
>>> pyro.load_data(table=df)
>>> pyro.execute(error=0.0)
>>> print(f'[{", ".join(map(str, pyro.get_fds()))}]')
[[0 1 2] -> 4, [0 2 3] -> 4, [0 1 3] -> 4, [1 2 3] -> 4]
>>> pyro.execute(error=0.1)
>>> print(f'[{", ".join(map(str, pyro.get_fds()))}]')
[[2] -> 0, [2] -> 3, [2] -> 1, [0] -> 2, [3] -> 0, [0] -> 3, [0] -> 1, [1] -> 3, [1] -> 0, [3] -> 2, [3] -> 1, [1] -> 2, [2] -> 4, [3] -> 4, [0] -> 4, [1] -> 4]
>>> pyro.execute(error=0.2)
>>> print(f'[{", ".join(map(str, pyro.get_fds()))}]')
[[2] -> 0, [0] -> 2, [3] -> 2, [1] -> 2, [2] -> 4, [3] -> 4, [0] -> 4, [1] -> 4, [3] -> 0, [1] -> 0, [2] -> 3, [2] -> 1, [0] -> 3, [0] -> 1, [1] -> 3, [3] -> 1]
>>> pyro.execute(error=0.3)
>>> print(f'[{", ".join(map(str, pyro.get_fds()))}]')
[[2] -> 1, [0] -> 2, [2] -> 0, [2] -> 3, [0] -> 1, [3] -> 2, [3] -> 1, [1] -> 2, [3] -> 0, [0] -> 3, [4] -> 1, [1] -> 0, [1] -> 3, [4] -> 2, [4] -> 3, [2] -> 4, [3] -> 4, [0] -> 4, [1] -> 4]
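
Because relational algorithms accept pandas DataFrames, the data can be preprocessed before mining. The sketch below combines the calls from the example above with ordinary pandas preprocessing; the particular cleanup steps (dropping the class column and incomplete rows) are purely illustrative:

import desbordante
import pandas as pd

# Load the table and apply some illustrative preprocessing before mining.
df = pd.read_csv('examples/datasets/iris.csv', sep=',', header=None)
df = df.drop(columns=[4]).dropna()  # drop the class column and incomplete rows

pyro = desbordante.afd.algorithms.Pyro()
pyro.load_data(table=df)
pyro.execute(error=0.1)
print(f'[{", ".join(map(str, pyro.get_fds()))}]')

The exact set of reported AFDs depends on the preprocessing applied.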

Web interface

While the Python interface makes building interactive applications possible, Desbordante also offers a web interface which is aimed specifically at interactive tasks. Such tasks typically involve multiple steps and require substantial user input at each of them. Interactive tasks usually originate from Python scenarios, i.e. we select the most interesting ones and implement them in the web version. Currently, only the typo detection scenario is implemented. The web interface is also useful for pattern discovery and validation tasks: a user may specify parameters, browse results, and employ advanced visualizations and filters, all in a convenient way.

You can try the deployed web version here. You have to register in order to process your own datasets. Keep in mind that due to high demand various time and memory limits are enforced: processing is aborted if they are exceeded. The source code of the web interface is kept in a separate repo.

I still don't understand how to use Desbordante and patterns :(

No worries! Desbordante offers a novel type of data profiling, which may require that you first familiarize yourself with its concepts and usage. The most challenging part of Desbordante is the primitives: their definitions and applications in practice. To help you get started, here’s a step-by-step guide:

  1. First of all, explore the guides on our website. Since our team currently does not include technical writers, it's possible that some guides may be missing.
  2. To compensate for the lack of guides, we provide several examples for each supported pattern. These examples illustrate both the pattern itself and how to use it in Python. You can check them out here.
  3. Each of our patterns was introduced in a research paper. These papers typically provide a formal definition of the pattern, examples of use, and its application scope. We recommend at least skimming through them. Don't be discouraged by the complexity of the papers! To effectively use the patterns, you only need to read the more accessible parts, such as the introduction and the example sections.
  4. Finally, do not hesitate to ask questions in the mailing list (link below) or create an issue.

Papers about patterns

Here is a list of papers about patterns, organized in the recommended reading order in each item:

Installation (this is what you probably want if you are not a project maintainer)

Desbordante is available at the Python Package Index (PyPI). Dependencies:

  • Python >=3.7

To install Desbordante type:

$ pip install desbordante

However, as the Desbordante core is written in C++, additional requirements are imposed on the machine. Therefore, this installation option may not work for everyone. Currently, only manylinux2014 (Ubuntu 20.04+, or any other Linux distribution with gcc 10+) is supported. If the above does not work for you, consider building from sources.

CLI installation

NOTE: Only Python 3.11+ is supported for the CLI

Clone the repository, change the current directory to the project directory and run the following commands:

pip install -r cli/requirements.txt
python3 cli/cli.py --help

Build instructions

Ubuntu

The following instructions were tested on Ubuntu 20.04+ LTS.

Dependencies

Prior to cloning the repository and attempting to build the project, ensure that you have the following software:

  • GNU g++ compiler, version 10+
  • CMake, version 3.13+
  • Boost library, version 1.74.0+

To use test datasets you will need:

  • Git Large File Storage, version 3.0.2+

Building the project

Building the Python module using pip

Clone the repository, change the current directory to the project directory and run the following commands:

./build.sh
python3 -m venv venv
source venv/bin/activate
python3 -m pip install .

Now it is possible to import desbordante as a module from within the created virtual environment.
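
To quickly verify the result, the module can be imported and the default FD discovery algorithm constructed from within the virtual environment (a minimal smoke test, not part of the official build instructions):

import desbordante

# If the module was built and installed correctly, this constructs the default
# FD discovery algorithm without raising an exception.
algo = desbordante.fd.algorithms.Default()
print('desbordante imported successfully')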

Building tests & the Python module manually

In order to build tests, pull the test datasets using the following command:

./pull_datasets.sh

then build the tests themselves:

./build.sh -j$(nproc)

The Python module can be built by providing the --pybind switch:

./build.sh --pybind -j$(nproc)

See ./build.sh --help for more available options.

The ./build.sh script generates the following file structure in /path/to/Desbordante/build/target:

├───input_data
│   └───some-sample-csv's.csv
├───Desbordante_test
├───desbordante.cpython-*.so

The input_data directory contains several .csv files that are used by Desbordante_test. Run Desbordante_test to perform unit testing:

cd build/target
./Desbordante_test --gtest_filter='*:-*HeavyDatasets*'

desbordante.cpython-*.so is a Python module, packaging Python bindings for the Desbordante core library. In order to use it, simply import it:

cd build/target
python3
>>> import desbordante

We use easyloggingpp to log (mostly debug) information in the core library. The Python bindings search for a configuration file in the working directory, so to configure logging, create logging.conf in the directory from which desbordante will be imported. In particular, when running the CLI with python3 ./relative/path/to/cli.py, logging.conf should be located in . (the current working directory), not next to cli.py.
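
For illustration, here is a minimal sketch of creating such a configuration from Python before importing the module. The configuration keys follow easylogging++'s standard syntax and are an assumption here, not something prescribed by this README:

# Hypothetical sketch: write a minimal easylogging++ config that disables log
# output, then import desbordante so the bindings find it in the working directory.
with open('logging.conf', 'w') as f:
    f.write('* GLOBAL:\n    ENABLED = false\n')

import desbordante  # the bindings look for logging.conf in the current directory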

Troubleshooting

Git LFS

If, when cloning the repo with git lfs installed, git clone produces the following (or similar) error:

Cloning into 'Desbordante'...
remote: Enumerating objects: 13440, done.
remote: Counting objects: 100% (13439/13439), done.
remote: Compressing objects: 100% (3784/3784), done.
remote: Total 13440 (delta 9537), reused 13265 (delta 9472), pack-reused 1
Receiving objects: 100% (13440/13440), 125.78 MiB | 8.12 MiB/s, done.
Resolving deltas: 100% (9537/9537), done.
Updating files: 100% (478/478), done.
Downloading datasets/datasets.zip (102 MB)
Error downloading object: datasets/datasets.zip (2085458): Smudge error: Error downloading datasets/datasets.zip (2085458e26e55ea68d79bcd2b8e5808de731de6dfcda4407b06b30bce484f97b): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.

delete the already cloned version, set the GIT_LFS_SKIP_SMUDGE=1 environment variable, and clone the repo again:

GIT_LFS_SKIP_SMUDGE=1 git clone git@github.com:Mstrutov/Desbordante.git

No type hints in IDE

If type hints don't work for you in Visual Studio Code, for example, then install stubs using the command:

pip install desbordante-stubs

NOTE: Stubs may not fully support the current version of the desbordante package, as they are updated independently.

Cite

If you use this software for research, please cite one of our papers:

  1. George Chernishev, et al. Solving Data Quality Problems with Desbordante: a Demo. CoRR abs/2307.14935 (2023).
  2. George Chernishev, et al. "Desbordante: from benchmarking suite to high-performance science-intensive data profiler (preprint)". CoRR abs/2301.05965. (2023).
  3. M. Strutovskiy, N. Bobrov, K. Smirnov and G. Chernishev, "Desbordante: a Framework for Exploring Limits of Dependency Discovery Algorithms," 2021 29th Conference of Open Innovations Association (FRUCT), 2021, pp. 344-354, doi: 10.23919/FRUCT52173.2021.9435469.
  4. A. Smirnov, A. Chizhov, I. Shchuckin, N. Bobrov and G. Chernishev, "Fast Discovery of Inclusion Dependencies with Desbordante," 2023 33rd Conference of Open Innovations Association (FRUCT), Zilina, Slovakia, 2023, pp. 264-275, doi: 10.23919/FRUCT58615.2023.10143047.

Contacts and Q&A

If you have any questions regarding tool usage, you can ask them in our Google group. To contact the dev team, email George Chernishev, Maxim Strutovsky or Nikita Bobrov.


desbordante-core's Issues

Improve CLion developer experience

As a CMake project, Desbordante is automatically indexed by CLion. This configuration, however, leads to an unfriendly search experience, as a bunch of files, namely gtest sources and datasets, pop up in the search results.

Investigate ways to improve this.

Possible remedies: reconfigure CMake and #64.

Reduce repository size

The Desbordante repository is about 100 MB, while the code itself is about 1.2 MB. This is due to heavy datasets: they have been removed from the repository but are still kept in the git history. So, to reduce the size of the repository, we need to rewrite the git history and remove all diffs containing the heavy datasets.

Mold and publish a Desbordante release branch

Some parts of the main branch are useless for a potential user: the Google Test library, unit tests, CI instructions, etc.

Consider creating a separate, lightweight branch containing only parts essential for use.

Remove schema pointer from the vertical

It appears that the schema pointer may be removed from the Vertical class without loss of functionality. This would reduce memory cost, probably increase performance, and is overall a reasonable software-engineering fix.

Investigate and remove.

Change param data in AlgoFactory

Currently, to create any primitive object you need to pass the data parameter as a string:

c.data = std::filesystem::current_path() / "inputData" / ExtractParamFromMap<std::string>(params, "data");

In the web app, I need to use different paths (not only paths to the embedded datasets). So the data parameter should be of type std::filesystem::path, and the above line should look like this:

c.data = ExtractParamFromMap<std::filesystem::path>(params, "data");

Refactor CSV Parser

  • implement field iterator and refactor parsing in CreateFor of CLRD and CLTRD
  • implement dataset validation and dataset properties auto-detection (e.g. header presence, separator)
  • consider switching to third-party CSV parsing library

Implement phased progress bar

There is no obvious way to calculate progress for algorithms with separate phases, such as FastFDs. An empirical approach of estimating phase runtime proportions yields unstable results (let as_gen = 0.85 * work and graph_traverse = 0.15 * work; on some datasets this assumption holds, on others it does not). The solution is to calculate the progress of each phase and pass it to the frontend.

Two approaches come to mind:

  • The more intuitive:
std::mutex + std::vector<Phase>
struct Phase {
  std::string description;
  double progress;
};
  • The more efficient:
  std::atomic<int> phase_id;        // index of the currently running phase
  std::atomic<double> progress;     // progress of the current phase
  std::list<std::string> descriptions;
    // init in ctor, then read-only -- may be requested by the app server

By default, initialize as if there were only one phase (with the name "total progress" or similar), so authors of single-phase algorithms wouldn't need to work with phases at all.

For the frontend sketch check the google doc.

Relocate common operations into FDAlgorithm

Several operations, e.g. parsing and schema generation, are common to all FD mining algorithms, which results in duplicated code.

Detect such operations and move them into the base class.

Implement FDAlgorithm parameter getters

The frontend needs information about the available parameters, e.g. maxLHS, error, seed, etc.
Therefore, we need a method in the base class FDAlgorithm that would concurrently output this info, mapping each parameter name to its properties.

An example:

{
  "seed": {
    "min": 0,
    "type": "int"
  },
  "error": {
    "min": 0,
    "max": 1,
    "type": "double"
  }
  ...
}

Synchronize the API with frontend developers.

Fix TypoMiner object creating

Currently, to create a TypoMiner object you must pass the precise algorithm as the algo parameter. This looks strange: I have to pull preciseAlgorithm out of the config even though it is a parameter, not the name of the algorithm (for example, TANE mines FDs, Apriori mines ARs). The real name of the algorithm is TypoMiner (for mining typo FDs).

Secondly, I need to create tasks for mining clusters and specific clusters.
It can be useful to have different constructors:

  1. Create object for mining TypoFDs (need to provide info about the file, precise algorithm, approximate algorithm)
  2. Create object for mining clusters (need to provide info about the file, ratio, radius, typo FD)

Thirdly, I need to mine different representations of the same cluster. Is it possible to add FD creation without table parsing? If so, I won't create a TypoMiner object in this case and will use only TypoMiner static functions (these functions must be static: FindClustersWithTypos, SquashCluster, FindClustersAndLinesWithTypos). If it isn't possible to create an FD without table parsing, I need a third constructor, but that seems odd.

Fix WDCPlanets parsing error

WDCPlanets is a dataset with zero rows. It looks like CSVParser processes it incorrectly, leading to different results from Pyro and TANE.

Implement a docker image instruction

For CI and usability reasons it is crucial to have a step-by-step instruction, preferably in the form of a docker-compose YAML file.

Implement a docker instruction and consider publishing a docker image:

  1. for Desbordante, the console application
  2. for Desbordante, the web application

Investigate build time issues

Desbordante compiles rather slowly. Find out the cause and fix it.

Possible factors:

  • code bloat due to inefficient header inclusion
  • inclusion of boost headers

Restore the ability to build on Windows

Desbordante could be built on Windows at commit 68d9cdd91f7b5719416bb3c82d300a5e3ce8da73, but it has since lost this ability. It is time to investigate the issue and restore Windows support.

Fix warnings

Desbordante builds with a huge number of warnings. Some of them are quite harmless, such as unused variables, but there are, for example, declaration and initialization misordering warnings that could lead to bugs in the future. Maybe we should use the -Werror flag after fixing these?

Support algorithm progress in base class FDAlgorithm

An essential use case for the Desbordante frontend is viewing algorithm progress. The proposed solution is to add an atomic double progress_ (0 <= progress_ <= 1) to FDAlgorithm and periodically update it in each specific algorithm. progress_ should be non-decreasing.

Set up logging via easylogging

Replace cout usages with LOG(ERROR), LOG(INFO), etc. and set up the logging.conf file so that only the most high-level logs are visible to the user.
