
gpt4all-ts's Introduction

This repository is no longer maintained. Please visit https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/typescript for progress on the TypeScript bindings.

gpt4all-ts 🌍🚀📚

⚠️ Does not yet support GPT4All-J

gpt4all-ts is a TypeScript library that provides an interface to interact with GPT4All, which was originally implemented in Python using the nomic SDK. This library aims to extend and bring the amazing capabilities of GPT4All to the TypeScript ecosystem.

gpt4all-ts is inspired by and built upon the GPT4All project, which offers code, data, and demos based on the LLaMA large language model with around 800k GPT-3.5-Turbo generations 😲. You can find the GPT4All Readme here to learn more about the project.

🙏 We would like to express our gratitude to the GPT4All team for their efforts and support in making it possible to bring this library to life.

Getting Started 🏁

To install and start using gpt4all-ts, follow the steps below:

1. Install the package

Use your preferred package manager to install gpt4all-ts as a dependency:

npm install gpt4all
# or
yarn add gpt4all

2. Import the GPT4All class

In your TypeScript (or JavaScript) project, import the GPT4All class from the gpt4all-ts package:

import { GPT4All } from 'gpt4all';

3. Instantiate and use the GPT4All class

Create an instance of the GPT4All class and follow the example in the Example Usage section to interact with the model.

Happy coding! 💻🎉

Example Usage 🌟

Below is an example of how to use the GPT4All class in TypeScript:

import { GPT4All } from 'gpt4all';

const main = async () => {
    // Instantiate GPT4All with default or custom settings
    const gpt4all = new GPT4All('gpt4all-lora-unfiltered-quantized', true); // Default is 'gpt4all-lora-quantized' model

    // Initialize and download missing files
    await gpt4all.init();

    // Open the connection with the model
    await gpt4all.open();

    // Generate a response using a prompt
    const prompt = 'Tell me about how Open Access to AI is going to help humanity.';
    const response = await gpt4all.prompt(prompt);
    console.log(`Prompt: ${prompt}`);
    console.log(`Response: ${response}`);

    const prompt2 = 'Explain to a five year old why AI is nothing to be afraid of.';
    const response2 = await gpt4all.prompt(prompt2);
    console.log(`Prompt: ${prompt2}`);
    console.log(`Response: ${response2}`);

    // Close the connection when you're done
    gpt4all.close();
};

main().catch(console.error);

To use the library, simply import the GPT4All class from the gpt4all-ts package. Create an instance of the GPT4All class and optionally provide the desired model and other settings.

After the gpt4all instance is created, you can open the connection using the open() method. To generate a response, pass your input prompt to the prompt() method. Finally, remember to close the connection using the close() method once you're done interacting with the model.
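Because a rejected prompt() would otherwise skip close() and leave the model's child process running, wrapping the calls in try/finally is a useful pattern. Below is a minimal sketch assuming the same GPT4All API shown above, with the constructor left at its defaults:

import { GPT4All } from 'gpt4all';

// Minimal sketch: try/finally guarantees close() runs even if a prompt
// rejects, so the spawned model process is always released.
const ask = async (question: string): Promise<string> => {
    const gpt4all = new GPT4All(); // assumes defaults: the 'gpt4all-lora-quantized' model
    await gpt4all.init();          // download any missing files on first run
    await gpt4all.open();          // spawn and connect to the model process
    try {
        return await gpt4all.prompt(question);
    } finally {
        gpt4all.close();           // always release the connection
    }
};

ask('What is Open Access to AI?').then(console.log).catch(console.error);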

Here's some output from the GPT4All model which you can look forward to:

Prompt: Tell me about how Open Access to AI is going to help humanity.

Response: Open access to AI has already helped in numerous ways, such as improving medical diagnosis and treatment options through machine learning algorithms that analyze patient data more efficiently than humans can alone. It's also helping with the development of autonomous vehicles by using deep neural networks for image recognition and object detection tasks. Open Access is expected to play a crucial role in solving complex problems like climate change, drug discovery or even creating new jobs through AI-enabled automation technologies such as robotics process automation (RPA).

Prompt: Explain to a five year old why AI is nothing to be afraid of.

Response: Artificial Intelligence, also known as AI or machine learning, are systems that can learn and improve themselves through data analysis without being explicitly programmed for each task they perform. They have the ability to understand complex patterns in large datasets which makes them useful tools across various industries such as healthcare, transportation, finance etc.

AI is not something we should be afraid of because it has been designed with our best interests at heart and can help us make better decisions based on data analysis rather than gut feelings or personal preferences. AI systems are also becoming more transparent to users so that they understand how the system works, which helps build trust between them and their machines.

AI is here to stay as it has already been adopted by many industries for its benefits in terms of cost savings, efficiency gains etc., but we need not be afraid or suspicious about this technology because AI can also benefit us if used properly with the right intentions behind it.

Citation 📝

If you utilize this repository, the original GPT4All project, or its data in a downstream project, please consider citing it with:

@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}

If you have any questions or need help, feel free to join the Discord server and ask for assistance in the #gpt4all-help channel.

About the Author 🧑‍💻

gpt4all-ts was created by Conner Swann, founder of Intuitive Systems. Conner is a passionate developer and advocate for democratizing AI models, believing that access to powerful machine learning tools should be available to everyone 🌍. In the words of the modern sage, "When the AI tide rises, all boats should float" 🚣.

You can find Conner on Twitter, sharing insights and occasional shenanigans 🎭 at @YourBuddyConner. While he definitely enjoys being on the bandwagon for advancing AI 🤖, he remains humbly committed to exploring and delivering state-of-the-art technology for everyone's benefit.

gpt4all-ts's People

Contributors

andriymulyar, lucasjohnston, riderx, yourbuddyconner


gpt4all-ts's Issues

Support for multiple requests simultaneously

I have created my own expressjs/socket.io web UI using this package. However, this package seems to only allow one connection/request for the model at a time. I could instruct my code to create a new instance of the model every time a user connects to my page, but that would be very inefficient memory-wise.

I had the idea of creating a queue system where the users wait for other requests to complete before serving them, but depending on the length of the answers and how many are waiting, users could be waiting around for a long time.

TL;DR: Would it be possible to allow the package or model to support more than one request/prompt simultaneously without dramatically increasing RAM consumption?
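One way to prototype the queue idea above without loading a second copy of the model is to serialize prompts through a single promise chain. A rough sketch (the PromptQueue class is hypothetical, not part of this package; it keeps RAM flat at the cost of the queueing latency described above):

import { GPT4All } from 'gpt4all';

// Sketch of a prompt queue: one shared model instance, with requests
// chained so only one prompt is in flight at a time.
class PromptQueue {
    private tail: Promise<unknown> = Promise.resolve();

    constructor(private model: GPT4All) {}

    enqueue(prompt: string): Promise<string> {
        const next = this.tail.then(() => this.model.prompt(prompt));
        this.tail = next.catch(() => undefined); // keep the chain alive on errors
        return next;
    }
}

// Usage: share one queue across all socket.io connections.
// const queue = new PromptQueue(gpt4all);
// socket.on('ask', async (q) => socket.emit('answer', await queue.enqueue(q)));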

gpt4all-j does not seem to be supported?

First of all: thank you very much for GPT4All and its bindings!

That said, I'd like to inform you about a problem I encountered: when trying to

const gpt4all = new GPT4All('ggml-gpt4all-j',true)

I got an exception telling me that only gpt4all-lora-[filtered-]quantized would be supported - how can I change that?

Support for nous-gpt4-vicuna-13b?

Is nous-gpt4-vicuna-13b not supported yet?

I have nous-gpt4-vicuna-13b downloaded into gpt4all and would like to access it programmatically.

�[1m�[32m�[0m with response every time.

I used this library in my Remix app. You can find the code here: https://github.com/harshil4076/ts-voice-text (live version won't work).

I am running it on my local machine, Ubuntu with 24 GB RAM and no extra GPU power. It takes some time to respond, but it's good for testing.

With every response I am seeing this: �[1m�[32m�[0m, either at the beginning or the end.

What could be causing this??

Great library btw. Shout out to the owners! Cheers!
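For what it's worth, [1m, [32m, and [0m preceded by an unprintable byte look like ANSI terminal escape codes (bold, green, reset) leaking from the model binary's stdout. A hedged sketch of a filter that strips such sequences from a response before display (stripAnsi is a hypothetical helper, not part of this package):

// Strip ANSI color/style escape sequences, e.g. ESC[1m, ESC[32m, ESC[0m.
const stripAnsi = (text: string): string =>
    text.replace(/\x1B\[[0-9;]*m/g, '');

console.log(stripAnsi('\x1B[1m\x1B[32mHello!\x1B[0m')); // prints "Hello!"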

[Binding/UI] Node-RED nodes and flows for GPT4All

Hi!

First of all: thank you very much for your marvellous contribution! Being able to run inferences from within JavaScript/TypeScript is awesome!

Using gpt4all-ts, I have built function nodes and complete flows for Node-RED, which can be used to run inferences based on both the filtered and the unfiltered GPT4All models.

Node-RED is a data flow processor which allows people ranging from non-programmers through casual users up to professional developers to build complex systems from within their browsers, just by wiring components (aka "nodes") together.

Having GPT4All models as such nodes allows these people to create their own user interfaces or even build their own autonomous agents, always having full control over everything they do!

Thanks again for your contribution!

With greetings from Germany,

Andreas Rozek

ENOENT when starting the app while having the model and everything downloaded

Hey, I wanted to try gpt4all-ts, so I just copied the starter code from the README and installed the npm package. It downloaded the model into C:\Users\P33tT\.nomic (gpt4all & gpt4all-lora-unfiltered-quantized.bin), but when I try to start my app it throws an error:

Error: spawn C:\Users\P33tT/.nomic/gpt4all ENOENT
    at Process.ChildProcess._handle.onexit (node:internal/child_process:285:19)
    at onErrorNT (node:internal/child_process:483:16)
    at processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'spawn C:\\Users\\P33tT/.nomic/gpt4all',
  path: 'C:\\Users\\P33tT/.nomic/gpt4all',
  spawnargs: [
    '--model',
    'C:\\Users\\P33tT/.nomic/gpt4all-lora-unfiltered-quantized.bin'
  ]
}

I don't know what to do. I tried deleting the gpt4all file, but it didn't work.
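A quick way to diagnose an ENOENT (file not found on spawn) is to check whether the executable the library tries to spawn actually exists at that path. A small sketch, with the path taken from the error message above:

import { existsSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';

// ENOENT on spawn means nothing exists at the exact path being spawned.
// Note the spawn path above has no .exe extension, which may be worth
// checking on Windows.
const exe = join(homedir(), '.nomic', 'gpt4all');
console.log(exe, existsSync(exe) ? 'exists' : 'is missing');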

Unable to open the connection with the model

// Open the connection with the model
await gpt4all.open();

The above await never resolves and the connection never opens; no errors are thrown. I have successfully downloaded the model and the executable (the model is a .bin file). Using Node.js v17.0.0.

OS: macOS Catalina, Version 10.15.6 (19G73)

[BUG] User reports MacOS architecture detection fails silently and defaults to intel architecture when Node is managed by nvm or fnm

User reports that model selection fails silently:
https://twitter.com/mattapperson/status/1642965761676156935

Hey! Heads up that this does not always detect M1 vs. Intel Mac correctly, depending on how Node was installed... in such cases it fails silently.

Not much info other than: when installed using fnm or nvm, Node will return the same result for arch as if it were Intel. You will need to run uname -m as a child process to know for sure.
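A rough sketch of that suggested workaround, asking the OS directly instead of trusting process.arch (macArch is a hypothetical helper, not part of this package):

import { execSync } from 'child_process';

// Per the report above, process.arch can claim Intel under an
// nvm/fnm-installed Node on Apple Silicon, so shell out to `uname -m`.
const macArch = (): string => {
    try {
        return execSync('uname -m').toString().trim(); // e.g. 'arm64' or 'x86_64'
    } catch {
        return process.arch; // fall back to Node's own report
    }
};

console.log(macArch());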

Printing gibberish on terminal when running

ra-quantized-linux-x86
main: seed = 1681019976
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'gpt4all-lora-quantized.bin'
llama_model_load: done
llama_model_load: model size = 78.13 MB / num tensors = 1

system_info: n_threads = 4 / 4 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000

== Running in chat mode. ==

  • Press Ctrl+C to interject at any time.
  • Press Return to return control to LLaMA.
  • If you want to submit another line, end your input in '\'.

hi
↑♠►↓#↔ ⇠↑▲☺ ‼$↨ ⇠§↓
♫→↑
♣$

Other installation steps?

Might be a silly question but do you have to have completed the setup steps in the main gpt4all repo in order to use this TS package? It doesn't state that anywhere in the docs, but when I add the package and run the example, the gpt4all.open(); call just hangs and never completes.

Any ideas?

it's not working.

[screenshot]
As you can see, the code crashes on .open(). Checkpoint 3 is not logged to my console and the application simply stops.

Axios error 404 when creating a new GPT4All instance -- npm version out of date?

Hello, while trying this out using the basic example:

import { GPT4All } from 'gpt4all';

const main = async () => {
    // Instantiate GPT4All with default or custom settings
    const gpt4all = new GPT4All('gpt4all-lora-unfiltered-quantized', true); // Default is 'gpt4all-lora-quantized' model
  
    // Initialize and download missing files
    await gpt4all.init();

    // Open the connection with the model
    await gpt4all.open();
    // Generate a response using a prompt
    const prompt = 'Tell me about how Open Access to AI is going to help humanity.';
    const response = await gpt4all.prompt(prompt);
    console.log(`Prompt: ${prompt}`);
    console.log(`Response: ${response}`);
  
    const prompt2 = 'Explain to a five year old why AI is nothing to be afraid of.';
    const response2 = await gpt4all.prompt(prompt2);
    console.log(`Prompt: ${prompt2}`);
    console.log(`Response: ${response2}`);
  
    // Close the connection when you're done
    gpt4all.close();
}
  
main().catch(console.error);

I encountered the following Axios error:

AxiosError: Request failed with status code 404
    at settle (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/core/settle.js:19:12)
    at RedirectableRequest.handleResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/axios/lib/adapters/http.js:518:9)
    at RedirectableRequest.emit (node:events:513:28)
    at RedirectableRequest.emit (node:domain:489:12)
    at RedirectableRequest._processResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:356:10)
    at ClientRequest.RedirectableRequest._onNativeResponse (/Users/chris/www/personal/learning/langchain/learn-langchain-1/node_modules/follow-redirects/index.js:62:10)
    at Object.onceWrapper (node:events:628:26)
    at ClientRequest.emit (node:events:513:28)
    at ClientRequest.emit (node:domain:489:12)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:693:27) {
  code: 'ERR_BAD_REQUEST',
  config: {
    transitional: {
      silentJSONParsing: true,
      forcedJSONParsing: true,
      clarifyTimeoutError: false
    },
    adapter: [ 'xhr', 'http' ],
    transformRequest: [ [Function: transformRequest] ],
    transformResponse: [ [Function: transformResponse] ],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: -1,
    maxBodyLength: -1,
    env: { FormData: [Function], Blob: [class Blob] },
    validateStatus: [Function: validateStatus],
    headers: AxiosHeaders {
      Accept: 'application/json, text/plain, */*',
      'User-Agent': 'axios/1.4.0',
      'Accept-Encoding': 'gzip, compress, deflate, br'
    },
    responseType: 'stream',
    method: 'get',
    url: 'https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true',
    data: undefined
  },
  // ...etc

I noticed that the URL of the model being downloaded is https://github.com/nomic-ai/gpt4all/blob/main/chat/gpt4all-lora-quantized-OSX-intel?raw=true which seems to be out of date compared to what is in the current code of the repo, namely: https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-training/chat/gpt4all-lora-quantized-OSX-intel?raw=true (see https://github.com/nomic-ai/gpt4all-ts/blob/main/src/gpt4all.ts#L92).

I changed my local version of the package to correct this error, and the model seems to download correctly. It seems to me that maybe the version of gpt4all on npm needs to be bumped to include this change?

Thanks!
