
chatgpt-retrieval's People

Contributors

aliabbasi2021, techleadhd, tilenn


chatgpt-retrieval's Issues

trigram error

I'm getting this error when I try to run within Google Colab notebook:

Traceback (most recent call last):
File "/content/chatgpt-retrieval/chatgpt.py", line 35, in
index = VectorstoreIndexCreator().from_loaders([loader])
File "/usr/local/lib/python3.10/dist-packages/langchain/indexes/vectorstore.py", line 73, in from_loaders
return self.from_documents(docs)
File "/usr/local/lib/python3.10/dist-packages/langchain/indexes/vectorstore.py", line 78, in from_documents
vectorstore = self.vectorstore_cls.from_documents(
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 564, in from_documents
return cls.from_texts(
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 519, in from_texts
chroma_collection = cls(
File "/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py", line 104, in init
self._client = chromadb.Client(_client_settings)
File "/usr/local/lib/python3.10/dist-packages/chromadb/init.py", line 86, in Client
system.start()
File "/usr/local/lib/python3.10/dist-packages/chromadb/config.py", line 205, in start
component.start()
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/impl/sqlite.py", line 92, in start
self.initialize_migrations()
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/migrations.py", line 128, in initialize_migrations
self.apply_migrations()
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/migrations.py", line 156, in apply_migrations
self.apply_migration(cur, migration)
File "/usr/local/lib/python3.10/dist-packages/chromadb/db/impl/sqlite.py", line 209, in apply_migration
cur.executescript(migration["sql"])
sqlite3.OperationalError: no such tokenizer: trigram
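A common workaround for this (assuming the cause is the Colab runtime's bundled SQLite being too old for the trigram tokenizer that Chroma's migrations use) is to install pysqlite3-binary and alias it over the stdlib sqlite3 before chromadb is imported:

pip install pysqlite3-binary

# run this before anything imports chromadb, so Chroma sees the newer SQLite
import sys
__import__("pysqlite3")
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")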

Errors : VectorstoreIndexCreator().from_loaders([loader])

I'm getting these errors. Not sure what to do about these.

chatgpt.py", line 35, in
index = VectorstoreIndexCreator().from_loaders([loader])
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/indexes/vectorstore.py", line 72, in from_loaders
docs.extend(loader.load())
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 108, in load
self.load_file(i, p, docs, pbar)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 69, in load_file
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 63, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py", line 71, in load
elements = self._get_elements()

File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py", line 106, in _get_elements
from unstructured.partition.auto import partition
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/unstructured/partition/auto.py", line 21, in
from unstructured.partition.image import partition_image
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/unstructured/partition/image.py", line 5, in
from unstructured.partition.pdf import partition_pdf_or_image
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/unstructured/partition/pdf.py", line 7, in
import pdf2image
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pdf2image/init.py", line 5, in
from .pdf2image import (
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pdf2image/pdf2image.py", line 15, in
from PIL import Image
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PIL/Image.py", line 103, in
from . import _imaging as core
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PIL/_imaging.cpython-311-darwin.so, 0x0002): tried: '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PIL/_imaging.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PIL/_imaging.cpython-311-darwin.so' (no such file), '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PIL/_imaging.cpython-311-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
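The last line is the key: the installed Pillow extension is an x86_64 build while the interpreter wants arm64, which suggests the packages were installed under a Rosetta (Intel) Python. A plausible fix is to reinstall Pillow from a native arm64 Python, e.g. pip3 uninstall pillow followed by pip3 install --no-cache-dir pillow.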

required package

It uses NumPy 1.24. I had to create an environment and install an older version to get it to run.

RetrievalQA not importable

from langchain.chains import RetrievalQA
ImportError: cannot import name 'RetrievalQA' from 'langchain.chains'
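Two things to check (guesses, since no full traceback is shown): a stale or very old langchain install, which pip install --upgrade langchain would refresh, or a local file or folder named langchain in the working directory shadowing the real package.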

How does one combine the training data with OpenAI's LLM?

Noob here.

I have set the data file to our country's constitution, and the model seems to struggle with basic questions about it. For instance, it cannot find more than five of the stated human rights within the document, even though there are many chapters filled with explicitly stated human rights.

Any ideas on how to make this work better?
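One thing worth checking (a guess, based on the retriever settings that appear in other issues on this page): if the chain is built with search_kwargs={"k": 1}, only a single chunk of the document is retrieved per question, so the model never sees more than a small slice of the constitution. Raising k should help, e.g.:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

# k=8 is an arbitrary illustrative value; more chunks means more context
# (and more tokens) sent to the model per question
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=index.vectorstore.as_retriever(search_kwargs={"k": 8}),
)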

Problem with partition_pdf module

Hello, when I try to run the code the following error is displayed:

Traceback (most recent call last):
File "C:\Users\Diego Sousa\Desktop\botchatgpt\botchatgpt\chat02.py", line 35, in
index = VectorstoreIndexCreator().from_loaders([loader])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\indexes\vectorstore.py", line 72, in from_loaders
docs.extend(loader.load())
^^^^^^^^^^^^^
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 137, in load
self.load_file(i, p, docs, pbar)
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 94, in load_file
raise e
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 88, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\unstructured.py", line 86, in load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\unstructured.py", line 171, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Diego Sousa\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\partition\auto.py", line 221, in partition
elements = partition_pdf(
^^^^^^^^^^^^^
NameError: name 'partition_pdf' is not defined. Did you mean: 'partition_xml'?

has anyone had this same problem?
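This NameError usually means unstructured's PDF extras are not installed; as the ImportError quoted in a later issue on this page spells out, pip install "unstructured[pdf]" is the likely fix.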

chat history makes the system deadlock

I have to comment out the chat_history append for it to work. If the conversation is added to the history, only the first question I ask receives an answer.

With this line commented out:
#chat_history.append((query, result['answer']))
I can keep asking questions, but I lose the functionality where it knows what the last question was.
The langchain docs say we need to allocate a chat history memory object, but doing that does not help:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
    memory=memory,
)

With this, it still deadlocks.
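A possible cause (an assumption, not verified against this exact repo): once a ConversationBufferMemory is attached, the chain records history itself, and also passing a manual chat_history into the chain call can conflict with that. With memory set, try calling the chain with only the question and dropping the manual append entirely.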

I can't install Chromadb

When I run pip3 install chromadb, pip keeps backtracking through versions of Chromadb, trying (I assume) to find a compatible one. Here is the link to the pastebin: https://pastebin.com/bxDXwTTA
In the end I killed it on purpose because it was taking too long.

4 problems when running chatgpt.py in MS Visual Studio Code

I got these error messages when running your chatgpt.py file, what should I do?

Import "langchain.chat_models" could not be resolved
Import "langchain.document_loaders" could not be resolved
Import "langchain.indexes" could not be resolved
Import "langchain.indexes.vectorstore" could not be resolved

The .gitignore references constants.py, but it is already checked in.

Nice work and informative video.

The constants.py file is already checked in and the .gitignore file will not ignore it. Probably best to create a secrets.py for the APIKEY and update the readme.

Create a secrets.py to use your own OpenAI API key:

APIKEY = "YOUR_API_KEY"

Update the .gitignore with secrets.py instead of constants.py.
Update chatgpt.py with

os.environ["OPENAI_API_KEY"] = secrets.API_KEY

Changes are not ignored by git if the file is already checked in.
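One caveat with the suggested name: a local secrets.py shadows Python's standard-library secrets module, which other packages may try to import. Something like api_secrets.py avoids the clash.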

explicitly specify a vectorstore when using VectorstoreIndexCreator

I ran into this error when running chatgpt.py.
All dependencies seem to be installed properly; see the following error message:

/Users/wangzhi/anaconda3/envs/chat/lib/python3.12/site-packages/langchain/indexes/vectorstore.py:129: UserWarning: Using InMemoryVectorStore as the default vectorstore. This memory store won't persist data. You should explicitly specify a vectorstore when using VectorstoreIndexCreator
  warnings.warn(
Traceback (most recent call last):
  File "/Users/wangzhi/Desktop/chatgpt-retrieval/chatgpt.py", line 32, in <module>
    index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
  File "/Users/wangzhi/anaconda3/envs/chat/lib/python3.12/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for VectorstoreIndexCreator
embedding
  field required (type=value_error.missing)

How should I fix this part of the code?

else:
    #loader = TextLoader("data/data.txt")  # Use this line if you only need data.txt
    loader = DirectoryLoader("data/")
if PERSIST:
    index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
else:
    index = VectorstoreIndexCreator().from_loaders([loader])
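The validation error at the bottom names what is missing: newer langchain versions require an explicit embedding on VectorstoreIndexCreator. A minimal sketch (the import path depends on the installed version; older releases expose OpenAIEmbeddings under langchain.embeddings instead):

from langchain_openai import OpenAIEmbeddings

index = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(),
    vectorstore_kwargs={"persist_directory": "persist"},
).from_loaders([loader])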

Segmentation Fault

On running chat.py, it says “segmentation fault”

I installed the latest langchain and other dependencies.

Any help?

Module Not Found Error

Ok, I'm new - first time poster - first time with python. I keep getting this error. I have tried everything I know - which isn't much.

File "C:\Users\xxxx\Desktop\chatgpt-retrieval-main\chatgpt.py", line 14, in
import constants
ModuleNotFoundError: No module named 'constants'

I used a notebook to make constants.py with my API key added.
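Two things worth checking (guesses for this setup): the file must really be named constants.py rather than constants.py.txt (notebook and Notepad exports often append .txt), and it must sit in the same directory chatgpt.py is run from, since the script does a plain import constants.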

Issue with required package chromadb for osx-arm64

Hi, I have some issues running pip install chromadb. I am on osx-arm64 and get the following error:

Getting requirements to build wheel ... error
error: subprocess-exited-with-error
Getting requirements to build wheel did not run successfully.

I tried installing it for python 3.8, 3.10, and 3.11. Can you please help me out?

Is poppler installed and in PATH

I changed the code slightly to point it at a directory of PDF files:
loader = DirectoryLoader("Q:/", recursive=True)

And I keep getting the following errors.

I have tried:
pip3 install pdf2image pdfminer.six
and, as advised somewhere else,
pip install unstructured==0.7.12

However, then I got "pytesseract.pytesseract.TesseractNotFoundError: tesseract is not installed or it's not in your PATH. See README file for more information."
So I did a pip install tesseract.

And now I end up back at the poppler error:

File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\pdf2image\pdf2image.py", line 568, in pdfinfo_from_path
proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1026, in init
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 1538, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\User\Downloads\chatgpt-retrieval-main\chatgpt-retrieval-main\chatgpt.py", line 37, in
index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\indexes\vectorstore.py", line 81, in from_loaders
docs.extend(loader.load())
^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 156, in load
self.load_file(i, p, docs, pbar)
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 105, in load_file
raise e
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\directory.py", line 99, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\unstructured.py", line 86, in load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\document_loaders\unstructured.py", line 172, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\partition\auto.py", line 180, in partition
elements = partition_pdf(
^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\documents\elements.py", line 138, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\file_utils\filetype.py", line 519, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\partition\pdf.py", line 83, in partition_pdf
return partition_pdf_or_image(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\partition\pdf.py", line 141, in partition_pdf_or_image
return _partition_pdf_or_image_with_ocr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\utils.py", line 43, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\unstructured\partition\pdf.py", line 353, in _partition_pdf_or_image_with_ocr
document = pdf2image.convert_from_path(filename)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\pdf2image\pdf2image.py", line 127, in convert_from_path
page_count = pdfinfo_from_path(
^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\pdf2image\pdf2image.py", line 594, in pdfinfo_from_path
raise PDFInfoNotInstalledError(
pdf2image.exceptions.PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
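Poppler is a system binary that pip cannot install, and pip install tesseract installs an unrelated Python package rather than the Tesseract OCR engine. On Windows the usual route is to download a Poppler build, unzip it, and add its bin\ folder to PATH (or conda install -c conda-forge poppler); Tesseract likewise needs its own Windows installer.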

How do I fix this?

Traceback (most recent call last):
File "/workspaces/chatgpt-retrieval/chatgpt.py", line 35, in
index = VectorstoreIndexCreator().from_loaders([loader])
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/indexes/vectorstore.py", line 81, in from_loaders
docs.extend(loader.load())
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/document_loaders/directory.py", line 156, in load
self.load_file(i, p, docs, pbar)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/document_loaders/directory.py", line 105, in load_file
raise e
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/document_loaders/directory.py", line 99, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/document_loaders/unstructured.py", line 86, in load
elements = self._get_elements()
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/document_loaders/unstructured.py", line 172, in _get_elements
return partition(filename=self.file_path, **self.unstructured_kwargs)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/unstructured/partition/auto.py", line 292, in partition
_partition_pdf = _get_partition_with_extras("pdf")
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/unstructured/partition/auto.py", line 110, in _get_partition_with_extras
raise ImportError(
ImportError: partition_pdf is not available. Install the pdf dependencies with pip install "unstructured[pdf]"

Will this no longer work?

Traceback (most recent call last):
File "/Users/sooraj/Documents/chat/chatgpt-retrieval/chatgpt.py", line 6, in
from langchain.chat_models import ChatOpenAI
File "/opt/homebrew/lib/python3.12/site-packages/langchain/chat_models/init.py", line 27, in getattr
from langchain_community import chat_models
ModuleNotFoundError: No module named 'langchain_community'
iconv: iconv_open(, -t): Invalid argument
Error converting string from to UTF-8
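In langchain 0.1+ the integrations were split into a separate package, so this import path now goes through langchain_community; installing it with pip install langchain-community should clear the ModuleNotFoundError.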

The system lacks context between queries

I know it's not trivial to implement, but without such a feature this system is quite limited.

First query:

python3 chatgpt "I live in Tel Aviv since 1997"
Thank you for sharing that information about where you live. Do you have any further questions or is there anything else I can assist you with?

Second query:

python3 chatgpt.py "How long have I been living in Tel Aviv?"
I don't have that information.
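Each python3 chatgpt.py invocation is a fresh process, so the in-memory chat_history dies with it. A minimal sketch of one workaround (a hypothetical helper, not part of the repo): persist the history to a JSON file between runs and feed it back into the chain.

import json
import os

HISTORY_FILE = "chat_history.json"  # illustrative filename

def load_history():
    # return previously saved (question, answer) pairs, or [] on the first run
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            return [tuple(pair) for pair in json.load(f)]
    return []

def save_history(history):
    with open(HISTORY_FILE, "w") as f:
        json.dump(history, f)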

TypeError: issubclass() arg 1 must be a class

(myvenv)  ~/PROJECTS/chatgpt-retrieval   main ±  py chatgpt.py "what is my dog's name"
Traceback (most recent call last):
File "/home/ezri/PROJECTS/chatgpt-retrieval/chatgpt.py", line 5, in
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/init.py", line 6, in
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/agents/init.py", line 2, in
from langchain.agents.agent import (
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/agents/agent.py", line 16, in
from langchain.agents.tools import InvalidTool
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in
from langchain.tools.base import BaseTool, Tool, tool
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/tools/init.py", line 3, in
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 12, in
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/utilities/init.py", line 3, in
from langchain.utilities.apify import ApifyWrapper
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in
from langchain.document_loaders import ApifyDatasetLoader
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/document_loaders/init.py", line 44, in
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/home/ezri/myvenv/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in
class BaseEmbaasLoader(BaseModel):
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in new
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in init
self.prepare()
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.name, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, base, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in new
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in init
self.prepare()
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/home/ezri/myvenv/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 1551, in subclasscheck
return issubclass(cls, self.origin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
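This error was widely reported with pydantic 1.x on Python 3.11 in combination with certain typing_extensions releases; upgrading langchain and pydantic together (or pinning typing_extensions to a release the installed pydantic supports) has been the usual remedy, though the exact versions to pin should be verified against your environment.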

JSONDecodeError: Expecting value: line 1 column 2 (char 1)

I had the idea to make a dataset out of IBM's Project CodeNet code samples (a little under 14 million in total). I converted them all into text files (ending in .txt rather than .py, .c, etc.), and after a couple of earlier encoding issues, which I solved by removing about 1 million files with incorrect encodings (leaving about 12 million files), I tried to run it again. It then gave another, different error:

Traceback (most recent call last):
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/unstructured/partition/json.py", line 45, in partition_json
    dict = json.loads(file_text)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/impromise/ExternalDrive/chatgpt-retrieval-main.txt.txt/chatgpt.py", line 36, in <module>
    index = VectorstoreIndexCreator().from_loaders([loader])
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/indexes/vectorstore.py", line 81, in from_loaders
    docs.extend(loader.load())
                ^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 156, in load
    self.load_file(i, p, docs, pbar)
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 105, in load_file
    raise e
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/document_loaders/directory.py", line 99, in load_file
    sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py", line 86, in load
    elements = self._get_elements()
               ^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py", line 172, in _get_elements
    return partition(filename=self.file_path, **self.unstructured_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/unstructured/partition/auto.py", line 230, in partition
    elements = partition_json(filename=filename, file=file, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/unstructured/documents/elements.py", line 138, in wrapper
    elements = func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/unstructured/file_utils/filetype.py", line 519, in wrapper
    elements = func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/media/impromise/ExternalDrive/miniconda3/lib/python3.11/site-packages/unstructured/partition/json.py", line 48, in partition_json
    raise ValueError("Not a valid json")
ValueError: Not a valid json

This has had me stuck for a while. It had made this kind of error before, and I decided to move the affected folders aside to fix later, but there were 4053 folders overall and I couldn't move all the problematic ones. I also looked through my dataset to make sure there were no shell files, JSON files, or CSV files that could cause such an issue, but there were none: it's just .txt files, the cat.pdf file, and a Word doc. Without the dataset, I've been able to run the program with little difficulty, and certain (if not most) folders were usable as datasets on their own.

Why is this error happening, and how can I fix it? I am using Ubuntu 22.04 and Python 3.11.4. I've placed the program files on an external hard drive, where the program has been able to run. If the error can't be fixed, is there any way to work around it?
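One plausible explanation (an assumption based on the partition_json frames in the traceback): unstructured sniffs file type from content, not just the .txt extension, so a code file whose first bytes look like JSON (for example, one starting with [ or {) gets routed to partition_json and then rejected. Forcing plain-text loading sidesteps the sniffing:

from langchain.document_loaders import DirectoryLoader, TextLoader

# read every file as plain text instead of letting unstructured guess the type
loader = DirectoryLoader("data/", glob="**/*.txt", loader_cls=TextLoader)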

Vector/Element Issues

Hello, I keep having issues when trying to run this.

I am trying to train the model using many manuals (23 manuals that I have converted to txt files).

Traceback (most recent call last):
  File "c:\Users\rschmidt\Desktop\ChatGPT Retrieval\chatgpt.py", line 33, in <module>
    index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\indexes\vectorstore.py", line 73, in from_loaders
    return self.from_documents(docs)
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\indexes\vectorstore.py", line 78, in from_documents
    vectorstore = self.vectorstore_cls.from_documents(
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 462, in from_documents
    return cls.from_texts(
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 430, in from_texts
    chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\vectorstores\chroma.py", line 150, in add_texts
    self._collection.upsert(
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\chromadb\api\models\Collection.py", line 299, in upsert
    self._client._upsert(
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\chromadb\api\local.py", line 318, in _upsert
    self._add(
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\chromadb\api\local.py", line 260, in _add
    self._db.add_incremental(collection_id, added_uuids, embeddings)
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\chromadb\db\clickhouse.py", line 639, in add_incremental
    index.add(ids, embeddings)
  File "C:\Users\rschmidt\AppData\Local\Programs\Python\Python310\lib\site-packages\chromadb\db\index\hnswlib.py", line 177, in add
    self._index.add_items(embeddings, labels)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4634,) + inhomogeneous part.

BS4 required

Running Windows 11, Python 3.11, in a venv. Following the docs, I got an error on the first run:
"ModuleNotFoundError: No module named 'bs4'"
pip install bs4 in the virtual environment fixed it.

LangChain Deprecation Warning

Hi, it looks like this is not working anymore. Would you have time to update the project on GitHub? There is an error that sends me down a rabbit hole: LangChainDeprecationWarning: Importing vector stores from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead.

ImportError: cannot import name 'extract_pages' from 'pdfminer.high_level'

I'm getting this error and I don't know why:

Traceback (most recent call last):
File "C:\Users\l\streamlit-google-oauth\chatgpt-retrieval\chatgpt.py", line 38, in <module>
index = VectorstoreIndexCreator().from_loaders([loader])
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\indexes\vectorstore.py", line 72, in from_loaders
docs.extend(loader.load())
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 108, in load
self.load_file(i, p, docs, pbar)
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 69, in load_file
raise e
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 63, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\unstructured.py", line 71, in load
elements = self._get_elements()
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\unstructured.py", line 106, in _get_elements
from unstructured.partition.auto import partition
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\auto.py", line 21, in <module>
from unstructured.partition.image import partition_image
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\image.py", line 5, in <module>
from unstructured.partition.pdf import partition_pdf_or_image
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\pdf.py", line 9, in <module>
from pdfminer.high_level import extract_pages
ImportError: cannot import name 'extract_pages' from 'pdfminer.high_level' (C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\pdfminer\high_level.py)
PS C:\Users\l\streamlit-google-oauth\chatgpt-retrieval> ^C
PS C:\Users\l\streamlit-google-oauth\chatgpt-retrieval> pip install pdfminer.six
Requirement already satisfied: pdfminer.six in c:\users\l\appdata\local\programs\python\python39\lib\site-packages (20191110)
Requirement already satisfied: pycryptodome in c:\users\l\appdata\local\programs\python\python39\lib\site-packages (from pdfminer.six) (3.17)
Requirement already satisfied: sortedcontainers in c:\users\l\appdata\local\programs\python\python39\lib\site-packages (from pdfminer.six) (2.4.0)
Requirement already satisfied: chardet in c:\users\l\appdata\local\programs\python\python39\lib\site-packages (from pdfminer.six) (3.0.4)
Requirement already satisfied: six in c:\users\l\appdata\local\programs\python\python39\lib\site-packages (from pdfminer.six) (1.16.0)

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip
PS C:\Users\l\streamlit-google-oauth\chatgpt-retrieval> python chatgpt.py "what is my dog's name"
Traceback (most recent call last):
File "C:\Users\l\streamlit-google-oauth\chatgpt-retrieval\chatgpt.py", line 38, in <module>
index = VectorstoreIndexCreator().from_loaders([loader])
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\indexes\vectorstore.py", line 72, in from_loaders
docs.extend(loader.load())
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 108, in load
self.load_file(i, p, docs, pbar)
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 69, in load_file
raise e
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\directory.py", line 63, in load_file
sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\unstructured.py", line 71, in load
elements = self._get_elements()
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\document_loaders\unstructured.py", line 106, in _get_elements
from unstructured.partition.auto import partition
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\auto.py", line 21, in <module>
from unstructured.partition.image import partition_image
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\image.py", line 5, in <module>
from unstructured.partition.pdf import partition_pdf_or_image
File "C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\unstructured\partition\pdf.py", line 9, in <module>
from pdfminer.high_level import extract_pages
ImportError: cannot import name 'extract_pages' from 'pdfminer.high_level' (C:\Users\l\AppData\Local\Programs\Python\Python39\lib\site-packages\pdfminer\high_level.py)
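The pip output above shows pdfminer.six 20191110, which is old enough to predate extract_pages; upgrading it (python -m pip install --upgrade pdfminer.six) should make the import available, assuming nothing else pins the old version.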

ModuleNotFoundError: No module named 'langchain_community.chains'

Hi,

I'm keep getting this error...

pip3 install --upgrade langchain
Requirement already satisfied: langchain in /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages (0.1.1)
Requirement already satisfied: PyYAML>=5.3 in /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages (from langchain) (6.0.1)
Requirement already satisfied: SQLAlchemy<3,>=1.4 in /Library/Fra

and so on
but in PyCharm I'm getting errors like "Cannot find reference 'chains' in '__init__.py'" and some others...

I'm a bit new to Python, so maybe it is something obvious?

I'm using a Mac with the M1 processor, so maybe that is somehow the issue? For example, I can't install pip in a Linux VM that I run on this Mac.

Module not Found error

chatgpt-retrieval-main % python chatgpt.py "what is my dog name"
/Users/soyeb/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: urllib3/urllib3#3020
warnings.warn(
Traceback (most recent call last):
File "/Users/soyeb/Desktop/Work/chatgpt-retrieval-main/chatgpt.py", line 5, in
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
ModuleNotFoundError: No module named 'langchain'

seeing openai.error.RateLimitError

I'm using the free-trial API key from OpenAI but seeing the following error. Do I need to upgrade my plan to get it working?

openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

query = None
if len(sys.argv) > 1:
    query = sys.argv[1]

loader = TextLoader('data.txt')
print(query)
index = VectorstoreIndexCreator().from_loaders([loader])
print(index.query(query))
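This one is usually a billing/quota issue rather than a code issue: free-trial credits may be exhausted or expired, in which case adding a payment method (or using a key from an account with available quota) is what resolves it.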

facing issue in Installation of Chromadb

Hi Everyone, I got this error while trying this locally.

Error:
Building wheels for collected packages: chroma-hnswlib
Building wheel for chroma-hnswlib (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\RakeshRanjanKumar\chatgpt-retrieval\v_env\Scripts\python.exe' 'C:\Users\RakeshRanjanKumar\chatgpt-retrieval\v_env\lib\site-packages\pip_vendor\pep517\in_process_in_process.py' build_wheel 'C:\Users\RAKESH~1\AppData\Local\Temp\tmpz19tp3tl'
cwd: C:\Users\RakeshRanjanKumar\AppData\Local\Temp\pip-install-gqobnf5q\chroma-hnswlib_57412d3097c849e4bd231978707c330e
Complete output (12 lines):
running bdist_wheel
running build
running build_ext
building 'hnswlib' extension
creating build
creating build\temp.win-amd64-cpython-310
creating build\temp.win-amd64-cpython-310\Release
creating build\temp.win-amd64-cpython-310\Release\python_bindings
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\RakeshRanjanKumar\AppData\Local\Temp\pip-build-env-927jtnt0\overlay\Lib\site-packages\pybind11\include -IC:\Users\RakeshRanjanKumar\AppData\Local\Temp\pip-build-env-927jtnt0\overlay\Lib\site-packages\numpy\core\include -I./hnswlib/ -IC:\Users\RakeshRanjanKumar\chatgpt-retrieval\v_env\include -IC:\Users\RakeshRanjanKumar\AppData\Local\Programs\Python\Python310\include -IC:\Users\RakeshRanjanKumar\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" /EHsc /Tp./python_bindings/bindings.cpp /Fobuild\temp.win-amd64-cpython-310\Release./python_bindings/bindings.obj /EHsc /openmp /O2 /DVERSION_INFO=\"0.7.2\"
bindings.cpp
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include\yvals.h(21): fatal error C1083: Cannot open include file: 'crtdbg.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\bin\HostX86\x64\cl.exe' failed with exit code 2

ERROR: Failed building wheel for chroma-hnswlib
Failed to build chroma-hnswlib
ERROR: Could not build wheels for chroma-hnswlib which use PEP 517 and cannot be installed directly
WARNING: You are using pip version 21.2.3; however, version 23.2.1 is available.
You should consider upgrading via the 'C:\Users\RakeshRanjanKumar\chatgpt-retrieval\v_env\Scripts\python.exe -m pip install --upgrade pip' command.

I'm using Python 3.10.0 and still facing this issue. Please help me out.
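The fatal error C1083: Cannot open include file: 'crtdbg.h' points at an incomplete MSVC setup rather than at Python: the Windows SDK / CRT headers are missing. Re-running the Visual Studio Build Tools installer and selecting the "Desktop development with C++" workload, including a Windows 10/11 SDK, typically lets the chroma-hnswlib wheel build.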

Didn't work, help me please

I did all the steps in the readme, but when I start chatgpt.py it opens, text appears and scrolls, and then it immediately closes.
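Without the error text it is hard to say more; running the script from an already-open terminal (python chatgpt.py "your question") instead of double-clicking it keeps the window open so the traceback can be read and posted.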

System role?

Does anyone with OpenAI experience know whether it is possible to assign a system role to prevent the model from answering questions outside the information supplied via the data files?

For example, a simpler example found online follows this format (though I don't believe this is possible while using langchain as in the script in this repo):

import openai

openai.api_key = "YOUR_API_KEY"

prompt = """You're a nutritionist chatbot that creates customized meal plans.
Only answer questions related to nutrition.
Only ask questions related to nutrition, health and meal plans."""

messages = [
    {
        "role": "system",
        "content": prompt
    }
]

def get_completion(messages, model="gpt-3.5-turbo"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0
    )
    return response.choices[0].message["content"]

print(get_completion(messages))

Note: I'm specifically referring to the prompt and the messages = [{"role": "system", "content": prompt}] lines in the example.
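Something similar can be approximated with the chain used in this repo by overriding its question-answering prompt. A hedged sketch (assumes the installed langchain version supports combine_docs_chain_kwargs on ConversationalRetrievalChain.from_llm; {context} and {question} are filled in by the chain):

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate

# system-style instruction baked into the QA prompt
qa_prompt = PromptTemplate(
    template=(
        "You are a nutritionist chatbot. Answer ONLY from the context below;\n"
        "if the answer is not in the context, say you don't know.\n\n"
        "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
    input_variables=["context", "question"],
)

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=index.vectorstore.as_retriever(search_kwargs={"k": 1}),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)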

Running into TypeError

Using the code from the video, I run into:

TypeError: VectorstoreIndexCreator.from_loaders() missing 1 required positional argument: 'self'

My code looks more or less the same as in the video:

import os
import sys
import constants

from langchain.document_loaders import TextLoader
# from langchain.document_loaders import DirectoryLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI



print(constants.APIKEY)
os.environ["OPENAI_API_KEY"] = constants.APIKEY

query = sys.argv[1]
print(query)

# loader = DirectoryLoader(".", glob="*.txt")
loader = TextLoader('data.txt')
loaders = [loader]
index = VectorstoreIndexCreator.from_loaders(loaders=loaders)
# print(index.query(query))
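The cause is the missing parentheses on the last active line: from_loaders is called on the class itself, so Python binds loaders where self should go. Instantiating first fixes it:

index = VectorstoreIndexCreator().from_loaders(loaders)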

Errors when trying to run

C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\document_loaders\__init__.py:36: LangChainDeprecationWarning: Importing document loaders from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead.

Proposal

Add requirements.txt to simplify installation process of your library.
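A starting point (a guess assembled from the packages that come up across the issues on this page, not an official list):

# requirements.txt (unpinned, illustrative)
langchain
openai
chromadb
tiktoken
unstructured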

Platform Independent

Please turn this into a Docker container so that all the necessary tools and libraries can be installed together; running pip install currently returns an error while building a wheel.
The following is the error from the pip command mentioned in the readme:
`Building wheels for collected packages: chroma-hnswlib
Building wheel for chroma-hnswlib (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for chroma-hnswlib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'hnswlib' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for chroma-hnswlib
Failed to build chroma-hnswlib
ERROR: Could not build wheels for chroma-hnswlib, which is required to install pyproject.toml-based projects `
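A Linux base image (for example an official python:3.10 image) would sidestep this particular failure, since prebuilt manylinux wheels for chroma-hnswlib are available there and no MSVC toolchain is needed; on Windows itself, installing the Microsoft C++ Build Tools linked in the error message is the direct fix.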
