
whatlies's Introduction

πŸ™‚ Vincent D. Warmerdam
┣━━ πŸ“¦ Open Source Packages
┃   ┣━━ bulk              - simple bulk labelling interface
┃   ┣━━ embetter          - embeddings ready for sklearn
┃   ┣━━ doubtlab          - suite of tools to help find bad labels
┃   ┣━━ drawdata          - draw datasets in jupyter
┃   ┣━━ scikit-lego       - lego bricks for sklearn
┃   ┣━━ scikit-partial    - partial_fit() pipelines for sklearn
┃   ┣━━ scikit-bloom      - bloom transformers for sklearn
┃   ┣━━ human-learn       - rule-based components for sklearn
┃   ┣━━ sentence-models   - a different take on textcat
┃   ┣━━ mktestdocs        - turn markdown files into pytest tests
┃   ┣━━ lazylines         - lightweight utils for .jsonl wrangling
┃   ┣━━ cluestar          - inspiration for your first text labels
┃   ┣━━ durations         - pytest duration insights
┃   ┣━━ tuilwindcss       - tailwindcss for textual tui apps
┃   ┣━━ memo              - saves a whole log of time
┃   ┣━━ skedulord         - makes cron a bit more fun
┃   ┣━━ icepickle         - cool and safe storage for linear models
┃   ┗━━ evol              - grammar for genetic heuristics
┣━━ πŸ‘ Project Contributions
┃   ┣━━ fairlearn         - contributed the CorrelationFilter
┃   ┣━━ polars            - contributed the .pipe() method
┃   ┗━━ BERTopic          - added lightweight sklearn pipeline support
┣━━ ⭐ Online Projects
┃   ┣━━ calmcode.io       - intermediate developer education
┃   ┣━━ koaning.io        - personal blog
┃   ┗━━ dearme.email      - reflection via a 30 day delay
┣━━ πŸŽ™οΈ Popular Talks
┃   ┣━━ Natural Intelligence is All You Need
┃   ┣━━ Group-by statements that save the day
┃   ┣━━ Tools to Improve Training Data
┃   ┣━━ Optimal on Paper, Broken in Reality
┃   ┣━━ Playing by the Rules-Based-Systems
┃   ┣━━ How to Constrain Artificial Stupidity
┃   ┣━━ The Profession of Solving the Wrong Problem
┃   ┣━━ Winning with Simple, even Linear, Models
┃   ┗━━ Untitled12.ipynb
┣━━ πŸ”¬ Random Experiments
┃   ┣━━ scikit-prune   - prune scikit learn pipelines
┃   ┣━━ gitlit         - tracking github action times across open source
┃   ┣━━ sentimany      - many sentiment models, one repo
┃   ┣━━ tokenwiser     - sklearn token tricks
┃   ┣━━ clumper        - functional API for lists of dicts
┃   ┗━━ whatlies       - exploration tools for word embeddings
┗━━ πŸ‘¨β€πŸ’» Employer
    ┣━━ 🎲 :probabl.   - scikit-learn and friends
    ┃   ┣━━ scikit-churn      - safety rails for churn work
    ┃   ┗━━ scikit-playtime   - rethinking pipelines
    ┣━━ πŸ’₯ Explosion   - developer tools for nlp
    ┃   ┣━━ prodigy-hf        - Prodigy integration for the HuggingFace stack
    ┃   ┣━━ prodigy-pdf       - Annotate PDFs via Prodigy
    ┃   ┣━━ prodigy-ann       - ANN techniques to find relevant subsets
    ┃   ┣━━ prodigy-segment   - Prodigy integration for Segment Anything
    ┃   ┣━━ prodigy-lunr      - Search techniques to find relevant subsets
    ┃   ┣━━ prodigy-whisper   - Transcribe audio with OpenAI's whisper models
    ┃   ┣━━ prodigy-tui       - Prodigy from the terminal
    ┃   ┗━━ cluestar          - inspiration for your first text labels
    ┗━━ πŸ€– Rasa        - conversational software provider
        ┣━━ nlu examples      - custom nlu components for Rasa
        ┣━━ taipo             - data augmentation tools
        ┗━━ algo whiteboard   - nlp education

Follow me on twitter @fishnets88

whatlies's People

Contributors

arsilla, ayushsunny, itsabdelrahman, jupyterjazz, koaning, louisguitton, mkaze, nth-attempt, raghibshams456, rensdimmendaal, timvink, tttthomasssss


whatlies's Issues

Addition of useful Verbs

It might make sense to think about the EmbeddingSet as if it were a DataFrame. Some useful operations are missing; a quick sketch of how a couple of them could look follows this list.

  • left_join, inner_join: take the embeddings of two sets and join them together. If a word is missing from one of the sets, should we fill it in with zeros, or maybe throw an error?
  • concat: currently we have a method called "merge" for this, but concatenate feels like a better term.
  • merge: might be a nice verb for merging in a pandas DataFrame. We could take a column containing names and a list of columns to add to the embeddings as properties.
  • sort: would be really nice. You can cluster embeddings and sort them before using plot_distances.
  • reverse: because why not.
  • drop: allows you to drop some names from the EmbeddingSet. We should allow a setting for using the method in place; my preference is to have inplace=False as the default because it is easier to reason about and it allows for chaining.
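
A minimal sketch of how two of these verbs might look, assuming the set stores a plain name -> Embedding dict (the attribute name is an assumption, not the library's confirmed internals):

class EmbeddingSetVerbs:
    def __init__(self, embeddings):
        self.embeddings = embeddings  # dict: name -> Embedding

    def drop(self, *names):
        """Return a new set without the given names (inplace=False behaviour)."""
        kept = {k: v for k, v in self.embeddings.items() if k not in names}
        return EmbeddingSetVerbs(kept)

    def sort(self, key, reverse=False):
        """Return a new set ordered by key(name, embedding)."""
        ordered = sorted(self.embeddings.items(), key=lambda kv: key(*kv), reverse=reverse)
        return EmbeddingSetVerbs(dict(ordered))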

How to plot a graph in non-interactive mode?

Hi guys, thanks for whatlies and your work.

I have a question: is it possible to use the plot function in a non-interactive mode?
And is it possible to save a plot directly as an image?

Thanks for everything.

A suggestion about managing dependencies

Currently, I am working on the tf-hub and huggingface integrations with whatlies. However, there is something that has been bothering me for a while which I'd like to discuss here, to see whether it's really an issue or I'm just overthinking it!

Problem

As this library grows, more and more language/framework backends are supported, which is really good; however, this in turn has the side effect of adding more and more dependency packages that form the backbone of each new language/framework. This might create two issues:

  1. The chance of dependency conflict increases.
  2. And more importantly, with the current structure, a user who just wants to quickly try the base features of this library on their own embeddings (produced from their own custom model), or who only works with one of the lighter languages (say fasttext), is forced to download and install all of these dependencies. Some of them, e.g. tensorflow and pytorch for hugging-face, are considerably large and have lots of additional dependencies themselves. This may (or may not!) be frustrating, especially for a newbie user (over the past five years, in which I have worked with various versions of keras as well as tensorflow from 0.x to 1.x and then 2.x, I have had the pleasure of downloading and installing increasingly large wheel files and doing additional configuration, especially for GPU support; although recently the extra configuration has become considerably easier).

Solution

One approach is to separate each of these language/framework backends and let the user decide what they really need. For example, the following guide could be provided as installation instructions in the readme file:


If you want to use the base features of this library (without the support for language backends), you can simply run:

pip install whatlies

However, if you want to install the full feature set, including support for all the language backends, you can use the following command:

pip install whatlies[full]

Alternatively, if you need support for only one or some of the language backends and want to be selective about it (probably to avoid downloading and installing a lot of unnecessary dependency packages), you can simply provide the name(s) of the backend(s) in square brackets, using the following table as a guide:

Language backend    Installation name
Fasttext            fasttext
Gensim              gensim
HuggingFace         huggingface
Spacy               spacy

For example, the installation commands in this case would look as follows:

pip install whatlies[fasttext]         # install base features + Fasttext language backend
pip install whatlies[fasttext,gensim]  # install base features + Fasttext and Gensim language backends
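
On the packaging side, this could be expressed with extras_require in setup.py. A sketch only; the dependency names and base requirements below are placeholders, not the project's actual pins:

# setup.py (sketch): optional language backends exposed as pip "extras".
from setuptools import setup, find_packages

backends = {
    "fasttext": ["fasttext"],
    "gensim": ["gensim"],
    "huggingface": ["transformers", "torch"],
    "spacy": ["spacy"],
}

setup(
    name="whatlies",
    packages=find_packages(),
    install_requires=["numpy", "scikit-learn", "altair"],  # base features only (placeholder list)
    extras_require={**backends, "full": sorted({d for deps in backends.values() for d in deps})},
)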

As for the modifications to the source code so that no import fails, two options come to mind: we can either defer the imports to methods/functions:

from whatlies import EmbeddingSet


class DummyLanguage:
    def __init__(self, model=None):
        import Dummy  # deferred import: only fails when this backend is actually used
        ...

or, alternatively, we can use a try/except block to prevent errors about missing dependencies:

from whatlies import EmbeddingSet

try:
    import Dummy
except ImportError:
    pass


class DummyLanguage:
    ...

As you may already know, one example of managing dependencies with this approach is Rasa (1, 2).

Finally, I want to emphasize again that this might not really be an issue (or at least not at this stage of the library's development) and it's likely that I am overthinking it. Anyway, I would really like to hear your thoughts on this, @koaning!

The problem with saving language files for tests before running the tests

The following warning from the automated tests is what I mentioned earlier about the problem of saving model files for the tests beforehand:

[screenshot: spaCy warning about a stale/incompatible saved model]

Although it's just a warning, and hopefully newer versions of any language library (not just spaCy) are backwards-compatible (though I don't think spaCy always is).

And also, since the saving format may change between different versions of the libraries, or is not deterministic across machines/environments, you would get these in your git status:

[screenshot: modified model files showing up in git status]

These are not serious issues yet, but I just wanted to bring them up so that we have them on our radar.

Integer Default Parameters for Plotting Methods

The goal is to allow for this syntax;

emb.transform(Pca(2)).plot(x_axis=0, y_axis=1)

The reasoning is that currently you have to write;

emb.transform(Pca(2)).plot(x_axis="pca_0", y_axis="pca_1")

This gets annoying when you want to replace the Pca with Umap.

emb.transform(Umap(2)).plot(x_axis="umap_0", y_axis="umap_1")

What's even nicer is that with proper default arguments in the future you can just run;

emb.transform(Umap(2)).plot()
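
A minimal sketch of how integer defaults could be resolved inside plot(); the helper name resolve_axis and the column-naming scheme are assumptions for illustration:

def resolve_axis(axis, columns):
    """Accept either a column name ("pca_0") or an integer index (0)."""
    if isinstance(axis, int):
        return columns[axis]
    return axis

columns = ["pca_0", "pca_1"]
assert resolve_axis(0, columns) == "pca_0"
assert resolve_axis("pca_1", columns) == "pca_1"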

Scikit-Learn for all Languages

We've added a bunch of new languages, but we should ensure that these also still allow for scikit-learn pipelines. Mainly we need to add tests and make sure that we attach the proper Mixin.
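
A sketch of the kind of pipeline the tests should cover, assuming the language backends expose the scikit-learn fit/transform API via the Mixin (and that the spaCy model is already downloaded):

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from whatlies.language import SpacyLanguage

pipe = Pipeline([
    ("embed", SpacyLanguage("en_core_web_md")),
    ("model", LogisticRegression()),
])
X = ["i am happy", "this is terrible", "what a great day", "awful service"]
y = [1, 0, 1, 0]
pipe.fit(X, y)
print(pipe.predict(["pretty decent day"]))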

ndim attribute

@koaning Do you think it's a good idea to add an ndim (or size) attribute to the Embedding class which gives the dimension of the embedding vector? And what about EmbeddingSet? For EmbeddingSet this only makes sense if all the embedding vectors in it have the same length (which I think is an implicit assumption so far, although one might argue that this is not a necessary condition for all applications).
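
A minimal sketch of the proposed attribute, assuming the vector is stored as a 1-D numpy array (the class below is a stand-in, not the library's actual implementation):

import numpy as np

class EmbeddingSketch:
    def __init__(self, name, vector):
        self.name = name
        self.vector = np.asarray(vector)

    @property
    def ndim(self):
        return self.vector.shape[0]

emb = EmbeddingSketch("king", [0.1, 0.2, 0.3])
assert emb.ndim == 3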

Feature to allow backend downloads

We might make it easy to help people download language backends. spaCy for example allows this;

python -m spacy download en_core_web_md

We could allow for something similar here.

python -m whatlies download spacy en_core_web_md
python -m whatlies download fasttext en
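
A rough sketch of how such a CLI entry point could dispatch; the whatlies/__main__.py module is hypothetical, while the spaCy and fasttext download mechanisms it calls are real:

# whatlies/__main__.py (hypothetical sketch)
import subprocess
import sys

def main(argv=None):
    command, backend, name = (argv or sys.argv[1:])[:3]
    if command != "download":
        raise SystemExit(f"unknown command: {command}")
    if backend == "spacy":
        subprocess.run([sys.executable, "-m", "spacy", "download", name], check=True)
    elif backend == "fasttext":
        import fasttext.util
        fasttext.util.download_model(name, if_exists="ignore")  # e.g. name="en"
    else:
        raise SystemExit(f"unknown backend: {backend}")

if __name__ == "__main__":
    main()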

Flair

Might be worth adding this as a backend as well.

"make develop" fails due to model mismatch

Currently, running make develop installs the small English model of spaCy (i.e. en_core_web_sm); however, tests/prepare_disk_for_tests.py uses the medium version (i.e. en_core_web_md). This mismatch causes the make develop command to fail in a clean development installation. Which one is intended to be used in the tests, the small or the medium version? It's also worth mentioning that in the notebooks only the small model has been used.

If you would like, I can push the fix as a PR.

Using "setup.py develop" in makefile

I was wondering why python setup.py develop is used in the develop target of the makefile, especially when pip's editable installation mode is used as well (i.e. pip install -e and pip install -e .[dev]). As far as I know, this is not only redundant, but invoking setup.py directly is also not recommended due to its potentially conflicting behavior. Maybe it's there for good reasons which I am not aware of and would like to know.

If this is indeed a mistake, I can push a fix as a PR if you would like.

Feature request: plot graph to visualise distance

I've always been a fan of using networkx to create graphs for my vector embeddings.
The networkx implementation of the kamada-kawai graph layout allows you to input a dictionary containing the distance of each node to the others (e.g. {0: {0: 0, 1: 0.1}, 1: {1: 0, 0: 0.1}}), which it then tries to visualise in a sensible way.
So what I often do is calculate, for example, the cosine distances to use as input. This can give a nice visual of the similarity based on distance:

[example plot: kamada-kawai graph laid out by cosine distance between embeddings]

My quick code for this on the embedding set (which doesn't work with the API you made, but I'll leave that to you :) ):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from sklearn.metrics.pairwise import cosine_distances


def plot_graph(self):
    plt.figure(figsize=(10, 10))
    vectors = [token.vector for k, token in self.embeddings.items()]
    label_dict = {i: w for i, (w, _) in enumerate(self.embeddings.items())}
    dist = cosine_distances(np.array(vectors), np.array(vectors))
    # Create graph
    G = nx.from_numpy_matrix(dist)
    distance = pd.DataFrame(dist).to_dict()
    # Change layout positions of the graph
    pos = nx.kamada_kawai_layout(G, dist=distance)
    # Draw nodes and labels
    nx.draw_networkx_nodes(G, pos, node_color='b', alpha=0.5)
    nx.draw_networkx_labels(G, pos, labels=label_dict, font_color='w')

Let me know what you think! I like it when dealing with high dimensional vectors.

spacy language backend: sentence vs. token embeddings

This is not a bug; rather it is more of a conceptual issue and a question to think about which may possibly improve the design and API of this library.

The embedding given by a spaCy model for a sentence is just the average of the token embeddings. For spacy-transformers, it is the sum of the token embeddings. So the question is: with spaCy models, when we feed them a query string, are we more interested in the sentence embedding or in the embeddings of the tokens in that sentence (i.e. as an EmbeddingSet object)? spaCy provides both (i.e. doc.vector and doc[i].vector); however, I personally prefer the latter because with spaCy models I can get the sentence embedding from the token embeddings but not vice versa (besides the fact that simply averaging or summing the token embeddings of a sentence usually does not give a good representation of that sentence). Further, one of the interesting aspects of spaCy models is that they run a pipeline of modules on the query which starts with tokenization (and many people use spaCy for exactly that); so why not use it?

Suggestion: If modifying the current behavior of __getitem__ is a no-no, we can instead add a new method called embed which takes a string as well as a boolean flag as input arguments, i.e. embed(self, query, tokens=True). In the default case, i.e. tokens=True, it would first tokenize the query and then provide the token embeddings. If tokens=False, the result would be the same as __getitem__. Note that this method is not specific to the SpacyLanguage backend; it could be used for any language backend that provides both token and sentence embeddings (e.g. fasttext, huggingface). Further, unlike __getitem__, this method could take additional arguments which may add support for backend-specific features.
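
A minimal sketch of the proposed behaviour, written as a free function for clarity (the .model attribute and the dict-based EmbeddingSet constructor are assumptions based on the snippets in this thread):

from whatlies import Embedding, EmbeddingSet

def embed(lang, query, tokens=True):
    doc = lang.model(query)  # assumes the backend exposes its spaCy pipeline as .model
    if tokens:
        # note: duplicate token texts would collide in this naive dict
        return EmbeddingSet({t.text: Embedding(t.text, t.vector) for t in doc})
    return lang[query]  # same behaviour as the current __getitem__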

@koaning What's your opinion on this?

Multi Lang Tutorial

Add a tutorial that demonstrates how to use other languages. Compare English to Portuguese and check whether the relationship between colors and animals is still the same.

spacy language backend sometimes produces wrong embeddings

The problem is that the embeddings given by the SpacyLanguage backend may not be correct in all cases. That's because currently a whitespace tokenizer is used for finding the position of tokens, whereas each language model in spaCy has its own tokenizer, which might be more sophisticated than simple whitespace splitting. To demonstrate this, consider the following examples (note that the BERT model from spacy-transformers has been used here because the first two examples concern contextualized embeddings; however, it makes no difference if a built-in spaCy model is used instead):

import numpy as np
from whatlies.language import SpacyLanguage

lang = SpacyLanguage("en_trf_bertbaseuncased_lg")

# =========================================
# Example 1
query_raw = "let's go [home]"
query_clean = "let's go home"
emb = lang[query_raw]
doc = lang.model(query_clean)
print(list(doc))
# [let, 's, go, home]

assert str(doc[2]) == 'go'
assert str(doc[3]) == 'home'
# In correct case, the following two statements should raise exception
assert np.array_equal(emb.vector, doc[2].vector)
assert not np.array_equal(emb.vector, doc[3].vector)

# =========================================
# Example 2
query_raw = "player #10 [won]"
query_clean = "player #10 won"
emb = lang[query_raw]
doc = lang.model(query_clean)
print(list(doc))
# [player, #, 10, won]

assert str(doc[2]) == '10'
assert str(doc[3]) == 'won'
# In correct case, the following two statements should raise exception
assert np.array_equal(emb.vector, doc[2].vector)
assert not np.array_equal(emb.vector, doc[3].vector)

# =========================================
# Example 3: even when there is no context emb
query_raw = "pre-order this product"
emb = lang[query_raw]
doc = lang.model(query_raw)
print(list(doc))
# [pre, -, order, this, product]

assert str(doc[:-2]) == "pre-order"
# In correct case, the following two statements should raise exception
assert np.array_equal(emb.vector, doc[:-2].vector)
assert not np.array_equal(emb.vector, doc.vector)

Note that we used an English model above, and spaCy supports multiple languages which might have complex tokenization rules beyond whitespace splitting that we may not be aware of (we could look them up in the source code, but that's not the point); so we cannot just blame the non-alphanumeric characters (e.g. ', #, -, ?, !) in English for this issue and treat these examples as special cases.

Solution: This issue happens because we want to provide contextualized embeddings (which, so far, only make sense for spacy-transformers models), but we are not using the model's tokenizer (i.e. lang.model.tokenizer) to find the position of the requested tokens. This could be solved by first tokenizing the raw query to find the position of the [ and ] tokens and the tokens they enclose, then cleaning the query string and using the model to find the embeddings of the enclosed tokens (of course, this works under the assumption that the user formats the sentence correctly; e.g. how[are] you would be tokenized as ['how[are', ']', 'you'], which I think is a fair assumption to make).
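
A rough sketch of this idea, under the assumptions that the model's tokenizer splits [ and ] into separate tokens and that rejoining the cleaned tokens with spaces is acceptable for illustration:

import spacy

def find_bracketed_span(tokenizer, query_raw):
    """Return the cleaned query plus the start/end indices of the bracketed tokens."""
    tokens = [t.text for t in tokenizer(query_raw)]
    start, end = tokens.index("["), tokens.index("]")
    clean_tokens = tokens[:start] + tokens[start + 1:end] + tokens[end + 1:]
    # after dropping "[", the enclosed tokens sit at clean indices start .. end - 2
    return " ".join(clean_tokens), start, end - 1

nlp = spacy.blank("en")
clean, i, j = find_bracketed_span(nlp.tokenizer, "player #10 [won]")
print(clean.split()[i:j])  # ['won']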

Another option is to drop support for spacy-transformers entirely, so there would be no need for contextualized embeddings at all, especially if huggingface will be supported in the near(!) future. However, I personally don't like this option, especially because spacy-transformers provides alignment of embeddings from wordpieces to tokens (although I think this is not so special that we couldn't implement it ourselves in the huggingface backend).

Anyways, I would be happy to work on a PR for the fix.

feedback on visual

Is this a useful visualisation?

[screenshot of the visualisation in question]

It comes across as cute when teaching, but it doesn't really help with understanding the embedding, does it?

No tokens left for this setting. Consider raising prob_limit=-15

I am following the instructions on YouTube to reproduce the word embeddings.

Here is the code:

from whatlies import Embedding, EmbeddingSet
from whatlies.language import SpacyLanguage
lang_spacy = SpacyLanguage("en_core_web_md")
lang_spacy.score_similar('university')

I'll get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-9705af8b0442> in <module>
----> 1 lang_spacy.score_similar('university')

~\Anaconda3\envs\nlpenv\lib\site-packages\whatlies\language\spacy_lang.py in score_similar(self, emb, n, prob_limit, lower, metric)
    222             emb = self[emb]
    223 
--> 224         queries = self._prepare_queries(prob_limit, lower)
    225         distances = self._calculate_distances(emb, queries, metric)
    226         by_similarity = sorted(zip(queries, distances), key=lambda z: z[1])

~\Anaconda3\envs\nlpenv\lib\site-packages\whatlies\language\spacy_lang.py in _prepare_queries(self, prob_limit, lower)
    134             queries = [w for w in queries if w.is_lower]
    135         if len(queries) == 0:
--> 136             raise ValueError(
    137                 f"No tokens left for this setting. Consider raising prob_limit={prob_limit}"
    138             )

ValueError: No tokens left for this setting. Consider raising prob_limit=-15

If I change the prob_limit to -15 I get the same error message.
If I modify the last line of code to lang_spacy.score_similar('university', prob_limit=None) the code works, but I get the following results:

[(Emb[university], 5.960464477539063e-08),
 (Emb[where], 0.6037042140960693),
 (Emb[who], 0.6461576223373413),
 (Emb[there], 0.6536556482315063),
 (Emb[he], 0.6619330644607544),
 (Emb[she], 0.6653806567192078),
 (Emb[must], 0.6659252047538757),
 (Emb[should], 0.6668630838394165),
 (Emb[how], 0.6717931032180786),
 (Emb[what], 0.6724168062210083)]

But this can't be right, can it? I don't want subwords.

Any suggestions as to what I am doing wrong?

spacy version 2.3.2
whatlies version 0.4.2

Cosine *similarity* in plot_correlation()

Change plot_correlation so that we can see cosine similarity instead of distance. Maybe also change the name in general; correlation is not a typical measure.

movement plot

After mapping, points might move around; it would be cool to allow this movement to be plotted.

something like

emb_proj = embset | (emb["man"] - emb["woman"])
embset.comparison_plot(emb_proj)

SpacyLanguage does not create embeddings for single token

When making embeddings using the SpacyLanguage class the vectors are not created correctly:

lang['red'].vector
>>> []

It's because in the __getitem__ function the end token is set to -1, but array[0:-1] returns [] for single entries, so the new_str ends up empty.

# with a multi-element array, slicing off the last entry still leaves something
a = np.array([0, 1])
print(a[0:-1])
>>> [0]

# with a single-element array, the same slice is empty
a = np.array([0])
print(a[0:-1])
>>> []
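
One way the slicing could be fixed, as a sketch only (I don't know the exact internals, so the variable names are illustrative): use None instead of -1 as the end index when the selection runs to the last token.

import numpy as np

a = np.array([7])
end = -1
print(a[0:end])                      # [] : the current behaviour, the single token is sliced away
end = None if end == -1 else end
print(a[0:end])                      # [7] : None as the end index keeps the last token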

BERT embeddings are not contextualized

Currently, a query of the form 'sentence input with a [token] marked like this' passed to the SpacyLanguage class does not result in a contextualized word embedding, since only the token itself is fed into the model:

np.array_equal(bert['Going to the [store]'].vector, 
               bert['[store] this in the drawer please.'].vector)
>>> True

spaCy with BERT embeddings does return the contextualized embedding when using the token index:

np.array_equal(bertje('Going to the store')[3],
               bertje('store this in the drawer please.')[0])
>>> False

"files" option in pre-commit config for black hook

A pattern, i.e. files: sklego, has been set for the filenames to be checked by the black hook. Essentially, this makes black skip all the files because nothing matches that pattern. I was wondering whether this is intentional.

If this is a mistake, I can work on a PR to fix it as well as run a check on all the files; however, let me know whether you want only the package files to be checked (i.e. set files: whatlies) or want to include the tests as well (in which case there is no need to set the files option at all).

Allow embeddings to be passed as *args

If foo is an Embedding instance, EmbeddingSet(foo) is not supported. Currently we need to write;

EmbeddingSet({'foo': foo})

It would be nicer if we also allowed the aforementioned approach; a sketch of a constructor that handles both forms is below.
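
A minimal sketch of a constructor that accepts both the current dict form and positional Embedding arguments (it assumes each Embedding carries a .name; this is a stand-in class, not the library's implementation):

class EmbeddingSetSketch:
    def __init__(self, *embeddings):
        if len(embeddings) == 1 and isinstance(embeddings[0], dict):
            self.embeddings = dict(embeddings[0])               # EmbeddingSet({'foo': foo})
        else:
            self.embeddings = {e.name: e for e in embeddings}   # EmbeddingSet(foo, bar)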

Caching for Github Actions

@koaning I was wondering if you have considered using caching in Github Actions (1, 2, 3).

Let me give you some stats: currently, downloading the dependency packages with pip takes a bit less than 1 minute for Ubuntu and 1.5 minutes for MacOS. Adding torch and its dependencies increases this to 1.5 minutes for Ubuntu (not much of an increase for MacOS, since its torch package is relatively small). This accounts for around a quarter of the total test time per workflow, so I am not sure whether it's worth using caching for the dependency downloads (or even for the test models!). Github gives 5 GB of total cache and an unlimited number of caches per repository. Though, I am not sure about the other pricing and usage factors of Github Actions for this repository (since they depend on the type of repository, which is not clear to me).

Anyways, I just wanted to bring this up for consideration as well, especially if you are not aware of it.

Communication

I am wondering if it makes sense to invest in a communication channel. Sometimes there are things I prefer handling in a GitHub issue, but other times something like instant messaging might be better.

For example, @mkaze, I pushed a new version live today with a nifty feature: https://rasahq.github.io/whatlies/tutorial/scikit-learn/

Our language backends now also support scikit-learn. That means you can easily hook up a logistic regression after pretrained word embeddings to check if they fit your use case.

Would something like Gitter make sense for this sort of thing? I can also set up a Slack channel for people who have invested a lot in this project.

The concept of "royal"/"royalty"

In the second notebook as well as in the embeddings tutorial, two concepts have been defined using word embeddings:

royalty = tokens['king'] - tokens['queen']
gender = tokens['man'] - tokens['woman']

Denoting "man" - "woman" as the concept/vector of "gender" looks more or less fine and valid to me, but denoting "king" - "queen" as the concept/vector of "royalty" seems intuitively confusing and misleading (instead, it would basically represent the "gender" concept as well).

My argument is that both "king" and "queen" contain the concept of "royalty", and therefore when we subtract one from the other this component cancels out (assuming a linear relationship). In other words, considering an intuitive understanding of word-embedding arithmetic and the relation "man is to king as woman is to queen", it would make more sense to denote the "royalty" concept using either "king" - "man" or "queen" - "woman" instead.
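
A toy calculation with made-up two-dimensional vectors (purely for illustration, not real embeddings) makes the cancellation argument concrete:

import numpy as np

gender_dim  = np.array([1.0, 0.0])   # "male-ness" direction
royalty_dim = np.array([0.0, 1.0])   # "royal-ness" direction

man, woman = gender_dim, -gender_dim
king, queen = gender_dim + royalty_dim, -gender_dim + royalty_dim

print(king - queen)   # [2. 0.] -> royalty cancels, what remains is gender
print(king - man)     # [0. 1.] -> the royalty component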

I would appreciate it if you could share your point of view and correct me if I am wrong.

Automatic saving

I'm currently visualizing with plot_interactive() and, due to the size of the embeddings and the amount of data, it takes quite a while.

Is there any way to save the plot automatically in a given format, as I would with plt.savefig('myplot.png'), when I come from something like this: both.transform(Tsne(2)).plot_interactive('tsne_0', 'tsne_1', color='set', annot=False)?

Because as far as I can tell, no option of this kind is documented.
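
Assuming plot_interactive() returns an Altair chart object (and reusing both and Tsne from the question above), a sketch of what should work: Altair charts can be saved with .save(); HTML/JSON export works out of the box, while PNG/SVG export needs an extra dependency such as altair_saver installed.

chart = both.transform(Tsne(2)).plot_interactive('tsne_0', 'tsne_1', color='set', annot=False)
chart.save("myplot.html")   # works with plain Altair
chart.save("myplot.png")    # requires an additional export backend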

Bokeh vs. Altair as visualization library

I was wondering what factors were considered when choosing Altair as the visualization library for this project. To me, Bokeh seems to have a much easier syntax, is equally feature-rich, and is also more popular than Altair. I would appreciate some feedback on this. Thanks!
