
langchain-chatbot's Issues

Thank you! Is chat interface available yet? I see there is a snapshot :-)


On Apr 30, 2023, Haste171 wrote:

Sorry, that's a bug with the way the console is cleared; it only affects Windows. On macOS it should clear correctly. The call is on line 35.

Originally posted by @text2sql in #3 (comment)

Yes, it is available in the code. An externally hosted version of the interface using a different library will be available soon.

sh: cls: command not found

Thanks a lot for this; it works great. But on macOS I am getting this error: sh: cls: command not found.
Still, it ingests and searches just fine.
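The error comes from calling cls, which only exists on Windows; on macOS and Linux the command is clear. A minimal cross-platform sketch, assuming the chatbot clears the screen via os.system:

```python
import os

def clear_command() -> str:
    """Pick the clear-screen command for the current OS: 'cls' on Windows, 'clear' elsewhere."""
    return "cls" if os.name == "nt" else "clear"

def clear_console() -> None:
    """Clear the terminal portably instead of hard-coding 'cls'."""
    os.system(clear_command())
```

Replacing the hard-coded `cls` call with something like `clear_console()` should silence the `sh: cls: command not found` message on macOS.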

Rate limit while ingesting

Hi there, we seem to be getting this error while ingesting a PDF document. We're on the Plus plan.

Ingesting 1 document(s)...
Params: chunk_size=1000, chunk_overlap=100, split_method=recursive
INFO:root:Ingesting 1 document(s)...
Params: chunk_size=1000, chunk_overlap=100, split_method=recursive
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /embeddings in 0.784918 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /embeddings in 1.514384 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
INFO: 127.0.0.1:48524 - "POST /ingest HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/Git/langchain-chatbot/endpoints/ingest.py", line 18, in ingest_documents
    handler.ingest_documents(documents)
  File "/home/user/Git/langchain-chatbot/handlers/base.py", line 156, in ingest_documents
    Pinecone.from_documents(
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 528, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_community/vectorstores/pinecone.py", line 435, in from_texts
    pinecone.add_texts(
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_community/vectorstores/pinecone.py", line 145, in add_texts
    embeddings = self._embed_documents(chunk_texts)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_community/vectorstores/pinecone.py", line 92, in _embed_documents
    return self._embedding.embed_documents(list(texts))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 508, in embed_documents
    return self._get_len_safe_embeddings(texts, engine=engine)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 324, in _get_len_safe_embeddings
    response = self.client.create(
               ^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/resources/embeddings.py", line 113, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
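Note that the 429 above carries code insufficient_quota, so retrying will not help: the account's quota or billing needs attention. For genuine per-minute rate limits, though, a generic exponential-backoff wrapper is a common mitigation. A stdlib-only sketch (the callable you wrap, e.g. an embedding call, is up to you):

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on any exception with exponential backoff plus jitter.

    Re-raises the last exception once max_retries attempts are exhausted.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt; jitter avoids synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Wrapping the embedding call, e.g. `with_backoff(lambda: embeddings.embed_documents(chunk))`, spreads retries out further than the OpenAI client's short built-in retry window.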

Sweep: Convert Langchain Chatbot to an API

Instead of using a Discord bot to allow connection to the langchain chatbot, rewrite the code to use FastAPI to do so.

Checklist
  • Create app/main.py (commit b7ea58b)
  • Run GitHub Actions for app/main.py
  • Modify README.md (commit 58e5d48)
  • Run GitHub Actions for README.md
  • Modify deprecated/chatbot.py (commit 241f9d3)
  • Run GitHub Actions for deprecated/chatbot.py

HTML

Do you think this could be adapted to ingest scraped HTML?

For example, scrape the documentation of a programming language and ask the bot to write code based on it?

pinecone key not recognized

Hi, when I paste the keys into the .env file it doesn't recognize the Pinecone one, so it falls back to Chroma. When it uses Chroma, it says that no index was found and cannot answer the query. Do you know what the problem could be? Maybe something to do with the index dimension? Thanks
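One common cause is that the Pinecone variables never actually load from .env (wrong variable name, stray quotes, or a blank value), so the code silently falls back to Chroma. A stdlib-only sketch that fails fast when a key is missing or empty; the variable names here are assumptions, so check them against the repo's .env example:

```python
import os

# Assumed variable names; adjust to match the project's .env template.
REQUIRED = ("OPENAI_API_KEY", "PINECONE_API_KEY", "PINECONE_ENV")

def missing_env(required=REQUIRED):
    """Return the names of required environment variables that are unset or blank."""
    return [name for name in required if not os.getenv(name, "").strip()]
```

Printing `missing_env()` at startup makes the silent Pinecone-to-Chroma fallback visible instead of surfacing later as a missing-index error.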

chromadb.errors.NoIndexException

I've defined the environment variables for OpenAI and Pinecone. When I run the following command, I receive an error:

python chatbot.py

Do you want to use Pinecone? (Y/N): y
Not using Pinecone or empty Pinecone API key provided. Using Chroma instead
Do you want to ingest? (Y/N): y
No method given, passing
Using embedded DuckDB with persistence: data will be stored in: ./vectorstore
Please enter your question (or type 'exit' to end): how is bitcoin used?
Traceback (most recent call last):
  File "C:\Users\antho\projects\langchain-chatbot\chatbot.py", line 135, in <module>
    chat_loop()
  File "C:\Users\antho\projects\langchain-chatbot\chatbot.py", line 89, in chat_loop
    result = process({"question": query, "chat_history": chat_history})
  File "C:\python\Lib\site-packages\langchain\chains\base.py", line 116, in __call__
    raise e
  File "C:\python\Lib\site-packages\langchain\chains\base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "C:\python\Lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 79, in _call
    docs = self._get_docs(new_question, inputs)
  File "C:\python\Lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 146, in _get_docs
    docs = self.retriever.get_relevant_documents(question)
  File "C:\python\Lib\site-packages\langchain\vectorstores\base.py", line 225, in get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "C:\python\Lib\site-packages\langchain\vectorstores\chroma.py", line 138, in similarity_search
    docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
  File "C:\python\Lib\site-packages\langchain\vectorstores\chroma.py", line 184, in similarity_search_with_score
    results = self._collection.query(
  File "C:\python\Lib\site-packages\chromadb\api\models\Collection.py", line 202, in query
    return self._client._query(
  File "C:\python\Lib\site-packages\chromadb\api\local.py", line 247, in _query
    uuids, distances = self._db.get_nearest_neighbors(
  File "C:\python\Lib\site-packages\chromadb\db\clickhouse.py", line 520, in get_nearest_neighbors
    uuids, distances = index.get_nearest_neighbors(embeddings, n_results, ids)
  File "C:\python\Lib\site-packages\chromadb\db\index\hnswlib.py", line 223, in get_nearest_neighbors
    raise NoIndexException("Index not found, please create an instance before querying")
chromadb.errors.NoIndexException: Index not found, please create an instance before querying
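NoIndexException usually means the query ran before any ingestion persisted a vectorstore. A small guard on the query path makes the failure mode obvious; this is only a sketch that checks the persistence directory from the log above (./vectorstore), and the on-disk layout it assumes is a heuristic, not Chroma's documented contract:

```python
import os

def index_exists(persist_dir: str = "./vectorstore") -> bool:
    """Heuristic check: a persisted Chroma store leaves files in its directory."""
    return os.path.isdir(persist_dir) and bool(os.listdir(persist_dir))
```

If this returns False, answer "Y" to the ingest prompt and supply documents first; querying an empty store is exactly what raises "Index not found, please create an instance before querying".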

Typo in the qa prompt

Given the current version of the prompt :

QA_PROMPT = """You are a helpful AI assistant. Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say you don't know. DO NOT try to make up an answer.
If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
Use as much detail when as possible when responding.

{context}

Question: {question}
Helpful answer in markdown format:"""

I think there is a typo in the last line: "Use as much detail when as possible when responding." It should be "Use as much detail as possible when responding."

Not sure if this affects the model behaviour.

Unable to use the chat function

We have no idea what the issue is here.

Error chatting
2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
ERROR:root:2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
Traceback (most recent call last):
  File "/home/user/Git/langchain-chatbot/handlers/base.py", line 187, in chat
    bot = ConversationalRetrievalChain.from_llm(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain/chains/conversational_retrieval/base.py", line 372, in from_llm
    doc_chain = load_qa_chain(
                ^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
    return loader_mapping[chain_type](
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain/chains/question_answering/__init__.py", line 73, in _load_stuff_chain
    llm_chain = LLMChain(
                ^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/user/.cache/pypoetry/virtualenvs/langchain-chatbot-mUNeNTez-py3.11/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
INFO: 127.0.0.1:49988 - "POST /chat HTTP/1.1" 500 Internal Server Error

Is there a tutorial for using it through docker?

Hi there, I'm not so familiar with coding, but I found this project extremely well suited to my needs. For some reason I cannot build the project directly on the laptop I mainly work with (it's a company asset). Is there a way I can pack the project into a Docker image, run it on my NAS at home, and access it through the web from my office?

I tried to deploy it via Railway; everything seemed fine, but after the deployment succeeded it showed "Server Error" when I tried to access it through the browser.
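There is no official Docker tutorial in this thread, but a minimal Dockerfile along these lines may work. This is a sketch under assumptions: the requirements file name, the port, and the `app.main:app` entrypoint are guesses about this repo's layout from the Sweep checklist above, so adjust them to match the actual code.

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000

# Assumes the FastAPI app lives at app/main.py as `app`; change if different.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build with `docker build -t langchain-chatbot .` and run with `docker run -p 8000:8000 --env-file .env langchain-chatbot`; the NAS then only needs Docker and the forwarded port, with no local build on the work laptop.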

Token Limitation

I got an error about token size. How can I limit it to avoid these errors?

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4599 tokens.
Please reduce the length of the messages
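One workaround until the chain limits this itself is to trim old chat history to an approximate token budget before each call. A stdlib-only sketch using the rough 4-characters-per-token heuristic (the accurate tokenizer is tiktoken; this is only an estimate, so leave headroom under the 4097-token limit):

```python
def trim_history(history, budget_tokens=3000):
    """Keep only the most recent (question, answer) pairs that fit in ~budget_tokens.

    history is a list of (question, answer) string pairs, oldest first.
    """
    kept, used = [], 0
    for q, a in reversed(history):  # walk newest-first
        cost = (len(q) + len(a)) // 4  # ~4 characters per token heuristic
        if used + cost > budget_tokens:
            break
        kept.append((q, a))
        used += cost
    return list(reversed(kept))  # restore oldest-first order
```

Calling `trim_history(chat_history)` before passing the history into the chain keeps the prompt under the model's context window at the cost of forgetting the oldest turns.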

It seems the template is not taken into account

What I did:

  • Removed the cached versions of the templates (\templates\__pycache__\)
  • Changed the copy in qa_prompt.py for my specific project (in this case the bot needs to answer in Dutch; providing this input in the template was useful in other experiments I did)
  • Ran the streamlit.py code

Expected result:

  • See a clear indication that the template is being processed and has an impact on the answer

Result:

  • The same answers, no matter how much I change the template. I tried changing the character, the answer type, ...

Any idea why this is the case?

I need to get the pinecone id

Hi everyone.
I need to get the Pinecone record id using LangChain, but it doesn't return the id.
How can I get it?
This is the code:

const embeddings = new OpenAIEmbeddings();
const index = pinecone.Index(process.env.PINECONE_INDEX ?? '');
const dbConfig = {
  pineconeIndex: index,
  namespace: process.env.PINECONE_NAMESPACE ?? '',
  textKey: PINECONE_TEXT_KEY,
};

const result = await PineconeStore.fromDocuments(output, embeddings, dbConfig);
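Pinecone stores generated ids when none are supplied, so one workaround is to derive deterministic ids from the chunk text yourself and pass them in; most LangChain `fromDocuments`/`from_texts` variants accept an `ids` option, but verify that against your library version. A Python sketch of the id derivation (the helper name and truncation length are my own choices):

```python
import hashlib

def content_id(text: str, namespace: str = "") -> str:
    """Stable id: the same text (and namespace) always hashes to the same id."""
    return hashlib.sha256(f"{namespace}:{text}".encode("utf-8")).hexdigest()[:32]
```

Because the ids are a pure function of the content, you can recompute them later to fetch or delete specific records, and re-ingesting the same document upserts rather than duplicating it.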

error while installing via pip install -r requirements.txt

Hi there,
I just wanted to try it out, but got an error.

steps I took first:

  1. git clone
  2. python -m venv .venv
  3. .venv\Scripts\activate
  4. pip install -r requirements.txt

error:

INFO: pip is looking at multiple versions of uvicorn[standard] to determine which version is compatible with other requirements. This could take a while.
Building wheels for collected packages: hnswlib, llama-index, sentence-transformers, validators
Building wheel for hnswlib (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for hnswlib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'hnswlib' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for hnswlib
Building wheel for llama-index (pyproject.toml) ... done
Created wheel for llama-index: filename=llama_index-0.5.12-py3-none-any.whl size=259557 sha256=93b2a10f6a813b94842c7472b4f6aa54892f8a7e17df707f5a1711038043ee39
Stored in directory: c:\users\mjpa\appdata\local\pip\cache\wheels\a3\75\79\8fae52aaa654990f1d1b7b5116732f9d417696216ba7971735
Building wheel for sentence-transformers (pyproject.toml) ... done
Created wheel for sentence-transformers: filename=sentence_transformers-2.2.2-py3-none-any.whl size=125961 sha256=30f766c1a198d01b460c26d1c86df1b07351d97058b40d037d38a9f987325e6e
Stored in directory: c:\users\mjpa\appdata\local\pip\cache\wheels\ff\27\bf\ffba8b318b02d7f691a57084ee154e26ed24d012b0c7805881
Building wheel for validators (pyproject.toml) ... done
Created wheel for validators: filename=validators-0.20.0-py3-none-any.whl size=19590 sha256=84259d6d9e1c1808bb4c4d67462a7c9d24e6d6ae1de57c0423421bf7d85b0164
Stored in directory: c:\users\mjpa\appdata\local\pip\cache\wheels\82\35\dc\f88ec71edf2a5596bd72a8fa1b697277e0fcd3cde83048b8bf
Successfully built llama-index sentence-transformers validators
Failed to build hnswlib
ERROR: Could not build wheels for hnswlib, which is required to install pyproject.toml-based projects

Which tools do I need to install [see screenshot]?

https://visualstudio.microsoft.com/visual-cpp-build-tools/

Please add compatibility with offline models

Rather than only using OpenAI, please add support for models such as Vicuna, Alpaca, and more.

Maybe something in Oobabooga's code could help with this, or perhaps Agent-LLM's code as well.
