
chatgpt-at-home's People

Contributors

sentdex


chatgpt-at-home's Issues

Just some ideas

Hi guys,
First of all, great video and a fun project!
Here are three ideas that would be nice to implement:

  1. A model selector, to make it easy to download and switch between models.
  2. Bloom Petals integration. Petals is a distributed version of Bloom (and Bloomz), an open-source equivalent of GPT-3 (176 billion parameters, one billion more than GPT-3's 175 billion). It lets you download only a small part of the model and then act as a node, sharing and using the GPU power of the whole p2p network. Yes, that's amazing!
  3. Controls for the generation parameters: max length, temperature, top-k, top-p, repetition penalty, etc.

What do you think?
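For idea 3, a minimal sketch of mapping web-form fields onto the `transformers` generation kwargs. This assumes a Flask-style form mapping; the field names and the helper name are hypothetical, not part of the repo:

```python
def parse_generation_params(form):
    """Map form fields (hypothetical names) onto transformers
    generate()/pipeline keyword arguments, with sensible defaults."""
    return {
        "max_length": int(form.get("max_length", 1024)),
        "temperature": float(form.get("temperature", 1.0)),
        "top_k": int(form.get("top_k", 50)),
        "top_p": float(form.get("top_p", 1.0)),
        "repetition_penalty": float(form.get("repetition_penalty", 1.0)),
    }

# Usage inside the Flask route might look like:
# generator(input_text, do_sample=True, **parse_generation_params(request.form))
```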

Pressing 'Enter' To Submit Text

Again, not really an issue, but it would be nice if the site accepted input when the user presses the Enter key. Multi-line input doesn't make much sense here; this is more of an optimization.

Feature Request

Apologies for the constant issue-raising; feel free to correct or close my threads. I thought of adding an option to have the AI communicate with itself: using one model's output as the other's input, to get quick live feedback from both models. If I manage to get the version I'm working on optimized, I'll upload it in this thread.
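A rough sketch of that self-chat idea, assuming any text-in/text-out `generate` callable (the function and parameter names here are hypothetical, not from the repo):

```python
def self_chat(generate, opener, turns=4):
    """Feed the model's own replies back in as prompts for `turns` rounds.
    `generate` is any prompt -> reply callable, e.g. a thin wrapper around
    the transformers text-generation pipeline."""
    history = [opener]
    for _ in range(turns):
        # The full conversation so far becomes the next prompt.
        reply = generate("\n".join(history))
        history.append(reply)
    return history
```

In the app this could run in a loop server-side, streaming each turn to the page, with `turns` capped so the two models don't talk forever.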

"LayerNormKernelImpl" not implemented for 'Half'

Can you please tell us the module versions you used to make this app run?
Mine are:
2023-01-25 19:12:05 (rev 2)
ca-certificates {2022.10.11 (defaults/win-64) -> 2023.01.10 (defaults/win-64)}
certifi {2022.9.24 (defaults/win-64) -> 2022.12.7 (defaults/win-64)}
+brotlipy-0.7.0 (defaults/win-64)
+cffi-1.15.1 (defaults/win-64)
+charset-normalizer-2.0.4 (defaults/noarch)
+colorama-0.4.6 (defaults/win-64)
+cryptography-38.0.4 (defaults/win-64)
+filelock-3.9.0 (defaults/win-64)
+flit-core-3.6.0 (defaults/noarch)
+future-0.18.2 (defaults/win-64)
+huggingface_hub-0.10.1 (defaults/win-64)
+idna-3.4 (defaults/win-64)
+libuv-1.40.0 (defaults/win-64)
+ninja-1.10.2 (defaults/win-64)
+ninja-base-1.10.2 (defaults/win-64)
+packaging-22.0 (defaults/win-64)
+pycparser-2.21 (defaults/noarch)
+pyopenssl-22.0.0 (defaults/noarch)
+pysocks-1.7.1 (defaults/win-64)
+pytorch-1.12.1 (defaults/win-64)
+pyyaml-6.0 (defaults/win-64)
+regex-2022.7.9 (defaults/win-64)
+requests-2.28.1 (defaults/win-64)
+tokenizers-0.11.4 (defaults/win-64)
+tqdm-4.64.1 (defaults/win-64)
+transformers-4.24.0 (defaults/win-64)
+typing-extensions-4.4.0 (defaults/win-64)
+typing_extensions-4.4.0 (defaults/win-64)
+urllib3-1.26.14 (defaults/win-64)
+win_inet_pton-1.1.0 (defaults/win-64)
+yaml-0.2.5 (defaults/win-64)

2023-01-25 19:14:44 (rev 3)
+click-8.0.4 (defaults/win-64)
+flask-2.2.2 (defaults/win-64)
+itsdangerous-2.0.1 (defaults/noarch)
+jinja2-3.1.2 (defaults/win-64)
+markupsafe-2.1.1 (defaults/win-64)
+werkzeug-2.2.2 (defaults/win-64)

\Anaconda3\envs\aiml\lib\site-packages\torch\nn\functional.py", line 2503, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
127.0.0.1 - - [25/Jan/2023 19:16:13] "POST / HTTP/1.1" 500 -

I guess I am using pytorch and you are using torch, right?
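The error itself comes from running a half-precision (float16) model on the CPU: PyTorch's float16 LayerNorm kernel ("LayerNormKernelImpl") is only implemented for CUDA. A hedged sketch of a guard, assuming the pipeline setup from this repo (the helper name is mine, not from the code):

```python
import torch

def pick_dtype_and_device():
    """float16 LayerNorm is only implemented on CUDA, so fall back to
    float32 whenever no GPU is available."""
    if torch.cuda.is_available():
        return torch.float16, 0   # first GPU; half precision is fine here
    return torch.float32, -1     # CPU: the pipeline's device=-1, full precision

dtype, device = pick_dtype_and_device()
# generator = pipeline('text-generation', model=MODEL_NAME,
#                      torch_dtype=dtype, device=device)
```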

Faster GPU execution

Not necessarily an issue, but I didn't know where else to post (I'm still new to how GitHub works).

After prompting ChatGPT, I got this code, which reduced processing time considerably:

from transformers import pipeline, set_seed
from flask import Flask, request, render_template, session, redirect


app = Flask(__name__)

# Set the secret key for the session
app.secret_key = 'your-secret-key'

MODEL_NAME = "facebook/opt-125m" 

# Initialize the chat history
history = ["Human: Can you tell me the weather forecast for tomorrow?\nBot: Try checking a weather app like a normal person.\nHuman: Can you help me find a good restaurant in the area\nBot: Try asking someone with a functioning sense of taste.\n"]
generator = pipeline('text-generation', model=MODEL_NAME, do_sample=True, device=0)  # Use the first available GPU


# Define the chatbot logic
def chatbot_response(input_text, history):
    # Concatenate the input text and history list
    input_text = "\n".join(history) + "\nHuman: " + input_text + " Bot: "
    set_seed(32)
    response_text = generator(input_text, max_length=1024, num_beams=1, num_return_sequences=1)[0]['generated_text']
    # Extract the bot's response from the generated text
    response_text = response_text.split("Bot:")[-1]
    # Cut off any "Human:" or "human:" parts from the response
    response_text = response_text.split("Human:")[0]
    response_text = response_text.split("human:")[0]
    return response_text


@app.route('/', methods=['GET', 'POST'])
def index():
    global history  # Make the history variable global
    if request.method == 'POST':
        input_text = request.form['input_text']
        response_text = chatbot_response(input_text, history)
        # Append the input and response to the chat history
        history.append(f"Human: {input_text}")
        history.append(f"Bot: {response_text}")
    else:
        input_text = ''
        response_text = ''
    # Render the template with the updated chat history
    return render_template('index.html', input_text=input_text, response_text=response_text, history=history)


@app.route('/reset', methods=['POST'])
def reset():
    global history  # Make the history variable global
    history = ["Bot: Hello, how can I help you today? I am a chatbot designed to assist with a variety of tasks and answer questions. You can ask me about anything from general knowledge to specific topics, and I will do my best to provide a helpful and accurate response. Please go ahead and ask me your first question.\n"]
    # Redirect to the chat page
    return redirect('/')


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)

This way it makes better use of the GPU instead of system RAM.

The only difference I see is the `device=0` argument in the `pipeline(...)` call, which tells it to run on the first GPU.
